Is it, as Chris Kenny says:
indisputable that the warming trend has disappeared over the past 15 years or so.
No, it isn’t. First of all, note that the ‘past 15 years or so’ probably means ‘since the extremely hot El Niño year of 1998’. It’s pretty standard for climate agnotologists like Kenny to use 1998 as the break year, since comparing trends before and after 1998 will naturally give the kinds of numbers most favourable to the ‘global warming has stopped’ line of thought.
Nonetheless, let’s allow Kenny to pick his own time periods of comparison and see whether ‘the warming trend has disappeared’. So what question are we asking, statistically? Well, since Kenny is conceding that there was a warming trend before 1998 or thereabouts, the natural statistical hypothesis to test is, ‘is the current temperature trend statistically different from the historical trend that Kenny acknowledges exists’? In slightly more technical words, our null hypothesis is that the warming trend continued. Let’s see if there’s sufficient statistical evidence to conclude that that hypothesis is wrong.
I calculated the trend over our whole time period (I’ve chosen 1979-2013, but you can do this for whatever time period you like). Then I calculated the time trends over the subperiods (in this case, 1979-1998 and 1999-2013). These look like this—the trend since 1979 is in red, while the subperiod trends are each in blue:
It sure looks like the trends are different, doesn’t it? But are they different in a statistically significant way? Actually—no, they’re not. The p-value from a Chow test (which, in short, tells us if there’s statistical evidence for a structural break in the trend of a time series) is 0.28. Usually we don’t reject the null hypothesis (in this case, that there was no structural break 15 years ago) if the p-value is greater than 0.05. So we don’t reject the null hypothesis. No structural break in 1998. So, actually, there’s insufficient statistical evidence to conclude that the warming trend over the past 15 years has changed from the warming trend that preceded it.
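For the curious, the mechanics of a Chow test are simple enough to sketch in a few lines. Here’s an illustrative Python version—note the data are simulated rather than the actual HadCRUT series, and the test is hand-rolled for transparency rather than taken from a package:

```python
import numpy as np
from scipy import stats

def chow_test(y, x, break_idx):
    """Chow test for a structural break at position break_idx.

    Fits a linear trend to the full series and to each subperiod,
    then compares the pooled and split residual sums of squares.
    Returns (F statistic, p-value).
    """
    def rss(yy, xx):
        X = np.column_stack([np.ones_like(xx), xx])
        beta, _, _, _ = np.linalg.lstsq(X, yy, rcond=None)
        resid = yy - X @ beta
        return float(resid @ resid)

    k = 2  # parameters per regression: intercept and slope
    n = len(y)
    rss_pooled = rss(y, x)
    rss_split = rss(y[:break_idx], x[:break_idx]) + rss(y[break_idx:], x[break_idx:])
    F = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
    p = stats.f.sf(F, k, n - 2 * k)
    return F, p

# Simulated annual anomalies, 1979-2013: one continuous trend plus noise
rng = np.random.default_rng(0)
years = np.arange(1979, 2014, dtype=float)
temps = 0.017 * (years - 1979) + rng.normal(0, 0.1, len(years))
F, p = chow_test(temps, years, break_idx=20)  # candidate break after 1998
```

A large p-value here means the split-sample fit isn’t meaningfully better than the single-trend fit—i.e. no evidence of a break.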
Of course, as is now known by just about everyone, there’s also not enough statistical evidence to show that the trend over the past 15 years or so is different from zero. (Can Kenny seriously think that this so-called ‘pause’ has been underreported? My second cousin, a farmer in northwest Victoria, was telling me authoritatively about the ‘pause’ a few months ago, and she’s not exactly sitting on her computer every night reading Anthony Watts’s blog).
But that doesn’t mean there’s no trend. There’s a lot of noise in temperature series, and over short time periods of one or two decades, it can be difficult to separate the underlying trend from the noise. But just as we don’t have enough statistical evidence to say that the temperature trend is different from zero, we also don’t have enough evidence to confirm that it’s different from the warming trend that preceded it. We don’t have enough evidence to say a lot of things about temperatures over a short time period. In fact, just about the only thing we can say is that from 1999-2013, our best guess is that the trend was positive (I get a value of +0.007479 degrees Celsius a year) but that it could be quite a bit above or below this:
UPDATE: A good way of thinking about what this regression tells us about warming since 1998 is by looking at a 95% confidence interval, which gives us an idea of the range of warming trends that are consistent with the data. In this case, the interval is (-0.00166; 0.01662) degrees per year. In other words, we can be confident that the ‘true’ warming trend since 1998 has been somewhere between about -0.17 degrees a century and 1.66 degrees a century. In other other words, a warming trend is quite a bit more likely than a flat or cooling one.
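If you’d like to reproduce this kind of interval yourself, here’s a minimal sketch of an OLS trend with a 95% confidence interval for the slope. The data here are simulated; the numbers above come from the HadCRUT regression, not from this code:

```python
import numpy as np
from scipy import stats

def trend_with_ci(years, temps, alpha=0.05):
    """OLS time trend (degrees per year) with a (1 - alpha) confidence interval."""
    x = np.asarray(years, dtype=float)
    y = np.asarray(temps, dtype=float)
    n = len(x)
    X = np.column_stack([np.ones(n), x])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                     # residual variance
    se = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))   # standard error of the slope
    t_crit = stats.t.ppf(1 - alpha / 2, n - 2)
    slope = float(beta[1])
    return slope, (slope - t_crit * se, slope + t_crit * se)

# Simulated 1999-2013 anomalies with a small positive trend plus noise
rng = np.random.default_rng(0)
years = np.arange(1999, 2014, dtype=float)
temps = 0.0075 * (years - 1999) + rng.normal(0, 0.1, len(years))
slope, (lo, hi) = trend_with_ci(years, temps)
```

With only 15 annual observations, the interval is wide—which is exactly the point about short time periods made above.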
Of course, you might quibble with some of my regression choices. I’ve used annual temperature means because this helps to address the fact that the monthly data are autocorrelated. You could also fit a trend that explicitly models the autocorrelated error structure of the HadCRUT series. You might also calculate your trends from different years. But it is, I think, false to say that it is indisputable that the warming trend has disappeared.
Inequality of opportunity
Inequality being something of the chef’s special at the moment, this paper from the World Bank argues that the notion of ‘inequality of opportunity’ (which, some argue, is the only kind of inequality governments ought to eradicate, inequality deriving from different levels of effort being morally legitimate) rests on uneasy philosophical ground. For example, it takes issue with the notion that inequality that stems from people’s innate talents is ‘natural’ and hence acceptable:
Since endowed talents are by definition beyond an individual’s control, it is odd that de Barros et al. (and others) are so quick to accept as just inequalities stemming from inequalities in talents.
Calvin Trillin on corrections
As everyone knows, the New Yorker has opened its doors and is letting the air in: you can read through, as far as I understand it, all of their back catalogue. And while the 20s and 30s may have been the heyday of New Yorker casuals, this piece by Calvin Trillin spoofing NYT-style corrections is proof that the Americans are still as good at the short humour piece as they were in the days of Benchley and Parker.
Because of an editing error, an article in Friday’s theatre section transposed the identifications of two people involved in the production of “Waiting for Bruce,” a farce now in rehearsal at the Rivoli. Ralph W. Murtaugh, Jr., a New York attorney, is one of the play’s financial backers. Hilary Murtaugh plays the ingénue. The two Murtaughs are not related. At no time during the rehearsal visited by the reporter did Mr. Murtaugh “sashay across the stage.”
The academic urban legend of the decimal point and spinach
My friend Leah, in general a good source of vaguely horrifying stories about academic and research malpractice, highlighted this story for our attention. It’s about how one paper claimed that the urban myth of spinach being iron-rich resulted from a decimal point error–and how that unsubstantiated claim then itself became an urban legend. In the process, it reiterates a lot of things that most of us should have learnt in first year undergrad–but maybe didn’t. And some rather ominous warnings:
The digital revolution has certainly made it easier to expose and debunk myths, but it has also created opportunities for new and remarkably efficient academic shortcuts, highly attractive and tempting not just in milieus characterized by increasing publication pressure and more concerned with quantity than quality, but also for groups and individuals strongly involved in rhetorics of demarcation of science, but less concerned with following the scientific principles they claim to defend. Some academic urban legends may perish in the new digital academic environment, but others will thrive and have ideal conditions for explosive growth.
A little longer on the cross: interregional slave trading in the antebellum South
Most people who become interested in economic history end up having Fogel and Engerman’s book Time on the Cross thrust upon them, often by people who aren’t economic historians, as an example of how proper economic history and paying due attention to the data can clear away sentimental myths in historiography. Fogel and Engerman’s book is very old now, and the problems with it are well known. An interesting new paper (in draft) takes another look at one of the authors’ claims about the size of the export market for slaves in the South and concludes that it was much, much bigger than Fogel and Engerman allowed.
Perry Anderson on footnotes
In this gloriously peripatetic article of Perry Anderson’s on the history of the New Left Review, he has a swipe at the cultural barbarism that is the Harvard Referencing System:
A major change of the past epoch, often remarked upon, has been the widespread migration of intellectuals of the Left into institutions of higher learning. This development—a consequence not only of changes in occupational structure, but of the emptying-out of political organizations, the dumbing-down of publishing houses, the stunting of counter-cultures—is unlikely to be soon reversed. It has brought with it, notoriously, specific tares. Edward Said has recently drawn attention sharply to some of the worst of these—standards of writing that would have left Marx or Morris speechless. But academization has taken its toll in other ways too: needless apparatuses, more for credential than intellectual purposes, circular references to authorities, complaisant self-citations, and so on. Wherever appropriate, NLR aims to be a scholarly journal; but not an academic one. Unlike most academic—not to speak of other—journals today, it does not shove notes to the end of articles, or resort to sub-literate ‘Harvard’ references, but respects the classical courtesy of footnotes at the bottom of the page, as indicators of sources or tangents to the text, immediately available to the reader. Where they are necessary, authors can be as free with them as Moretti is in this issue. But mere proliferation for its own sake, a plague of too many submissions today, will not pass. It should be a matter of honour on the Left to write at least as well, without redundancy or clutter, as its adversaries.
Andrew Gelman and Guido Imbens say: don’t use higher order polynomials in regression discontinuity designs–use local regression
Regression discontinuity designs in causal impact studies (that is, investigating a variable of interest either side of a critical threshold) have become popular again recently: for example, see David Lee’s work on incumbency effects in Congressional races. In order to estimate the value of the variable at some small distance above and below the threshold in the forcing variable, you usually use either a higher-order polynomial or local regression (loess). On the few occasions I’ve used an RD design for something, I’ve always used a loess for essentially aesthetic reasons, and just because using an arbitrarily high-order polynomial felt kind of wrong. Andrew Gelman and Guido Imbens say my spidey senses were right. In particular:
Results based on high order polynomial regressions are sensitive to the order of the polynomial. Moreover, we do not have good methods for choosing that order in a way that is optimal for the objective of a good estimator for the causal effect of interest. Often researchers choose the order by optimizing some global goodness of fit measure, but that is not closely related to the research objective of causal inference.
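To make the contrast concrete, here’s a minimal sketch of the local approach Gelman and Imbens recommend: fit a separate line within a bandwidth on each side of the threshold and take the jump in predicted outcomes at the cutoff. The data are simulated and the bandwidth is chosen arbitrarily for illustration (in practice you’d use a data-driven bandwidth selector):

```python
import numpy as np

def rd_local_linear(x, y, cutoff, bandwidth):
    """Estimate the discontinuity at `cutoff` with local linear fits.

    Fits a separate line to observations within `bandwidth` on each
    side of the cutoff and returns the jump in predicted outcomes.
    """
    def fit_at_cutoff(mask):
        X = np.column_stack([np.ones(mask.sum()), x[mask] - cutoff])
        beta, _, _, _ = np.linalg.lstsq(X, y[mask], rcond=None)
        return float(beta[0])  # predicted outcome at the cutoff

    below = (x < cutoff) & (x >= cutoff - bandwidth)
    above = (x >= cutoff) & (x <= cutoff + bandwidth)
    return fit_at_cutoff(above) - fit_at_cutoff(below)

# Simulated data with a true jump of 2.0 at x = 0
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 500)
y = 1.5 * x + 2.0 * (x >= 0) + rng.normal(0, 0.3, 500)
jump = rd_local_linear(x, y, cutoff=0.0, bandwidth=0.5)
```

Because only observations near the threshold are used, the estimate isn’t hostage to how a global polynomial happens to wiggle far from the cutoff.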
A fairly flippant piece in Crikey today: I promise I will return to charting duties ASAP.
Australian belle-lettrists are no doubt familiar with the weekly publication Media Watch Dog, composed by a canine named ‘Nancy’ who belongs, we understand, to Gerard Henderson of the Sydney Institute, a man with an admirable fondness for the epistolary arts. A month or so ago I drew Mr Henderson’s attention to a minor error in an issue of Media Watch Dog, which was promptly, and very quietly, corrected.
I suggested, my tongue perhaps grazing the inside of my mouth, that perhaps the correction should be acknowledged, since Nancy had criticised David Marr in the same edition for not alerting readers to a correction. Mr Henderson responded, and the result was published in the next edition of Media Watch Dog. (Although, sadly, the response to my email came very shortly before the publication of the next edition, meaning I was unable to respond in time for publication.)
Mr Henderson has not published my reply, and so, with your permission, I reproduce it below the fold: Read the rest of this entry »
Do carbon taxes kill manufacturing?
The authors of a new paper on British climate change policy suggest: no.
They examine the scheme introduced by the Blair government that either taxed businesses on their energy consumption (the carbon tax element, known as the Climate Change Levy, or CCL) or, if the company was a plant operating in a sensitive industry, required it to make a ‘Climate Change Agreement’ (CCA), which set an emissions reduction target in exchange for a tax discount on the CCL. The levy was not a pure carbon tax: the rate of taxation varied by energy source (so, for example, coal was taxed at a rate of 0.15 pence per kilowatt hour: an implicit carbon tax of 31 pounds per ton of carbon dioxide emitted).
The existence of the agreements also makes it slightly hard to pinpoint what impact the levy may have had: after all, companies had a choice of either the levy or the agreement. This introduces two problems: firstly, we might be interested in the impact of the levy itself rather than its impact relative to a signed agreement on reducing energy use. Secondly, if the companies have a choice, then it’s possible that the things that make them choose to pay the tax rather than sign an agreement might be something that is also related to the things we’re interested in, like employment, output, or electricity usage. (For example, imagine that all biscuit tin manufacturers in Britain are likely to choose to sign an agreement because they think it will be more affordable than paying the full tax in the very short run, and the reason they’re looking for the more affordable option in the very short run is that they’re nearly bankrupt, because no one buys biscuit tins anymore. Then the fact that they go bankrupt and fire all their employees one year later may not have anything to do with the fact that they signed a Climate Change Agreement.)
The authors’ response to the first problem is to argue that the Climate Change Agreements were fairly toothless, for several reasons:
First, the government may have “double counted” carbon savings from the CCA scheme…On average, CCA targets were supposed to improve energy efficiency by 11% between 2000 and 2010. This figure is well above the 4.8% improvement the government expected to occur under a “business as usual” (BAU) scenario…However, alternative BAU scenarios were much closer to the CCA target, projecting energy efficiency of all UK industry to improve by 9.5%…or even 11.5% when taking into account the effect of the CCL
In other words, the agreements may have been aiming for a target that was going to be met anyway.
Second, there was massive over-compliance with CCA targets. Combined annual carbon savings in all CCA sectors were substantially larger than the 2010 target throughout the first three compliance periods. At the end of the first compliance period in 2002, CCA sectors reported savings of 4.5 MtC — almost twice the target amount of 2.5 MtC to be achieved by 2010…Facilities were re-certified for the reduced tax rate even if they had missed their target, provided that the sector as a whole met its target…Finally, a large degree of flexibility in both the target negotiations and the compliance review further limited the stringency of CCA targets. For instance, CCA sectors could choose their own baseline year for the target indicator. More than two thirds of all sectors chose a baseline year prior to 2000 (in some cases going as far back as 1990), allowing them to count carbon savings unrelated to the CCA towards target achievement
What about the second problem? Remember that firms got to choose whether they would sign an agreement or pay the tax, and the things that made them choose one way or another might muddy our estimates of the impact of the tax over the levy.
The way the authors get around this is by using an instrumental variable: eligibility for the program. (For those of you who are unfamiliar with the delights of instrumental variable analysis, suffice it to say that it’s a statistical technique that allows us to correct for a lack of randomness.) Some plants were eligible to opt for a CCA instead of paying the full carbon tax, while others weren’t.
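For a flavour of how instrumenting with eligibility works, here’s a toy two-stage least squares sketch. The setup—a confounder that drives both take-up and the outcome, with a randomly assigned eligibility flag—is entirely simulated and is not the authors’ actual specification:

```python
import numpy as np

def two_stage_least_squares(y, treat, instrument):
    """Minimal 2SLS with one endogenous regressor and one instrument.

    First stage: regress the (endogenous) treatment on the instrument.
    Second stage: regress the outcome on the fitted treatment values.
    """
    n = len(y)
    Z = np.column_stack([np.ones(n), instrument])
    gamma, _, _, _ = np.linalg.lstsq(Z, treat, rcond=None)
    treat_hat = Z @ gamma                     # predicted take-up
    X = np.column_stack([np.ones(n), treat_hat])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[1])                     # estimated causal effect

# Simulation: eligibility (the instrument) shifts take-up, while an
# unobserved confounder u affects both take-up and the outcome,
# which would bias a naive regression of y on treat.
rng = np.random.default_rng(2)
n = 5000
eligible = rng.integers(0, 2, n)
u = rng.normal(0, 1, n)                       # unobserved confounder
treat = (1.5 * eligible + 0.5 * u + rng.normal(0, 1, n) > 0.75).astype(float)
y = 1.0 * treat + 2.0 * u + rng.normal(0, 1, n)   # true effect = 1.0
effect = two_stage_least_squares(y, treat, eligible)
```

Because eligibility is (in this toy world) unrelated to the confounder, using only the eligibility-driven variation in take-up recovers the true effect, where a naive regression would not.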
The essence of the findings are summed up in the graph below. The grey solid lines represent firms that used a Climate Change Agreement; the black solid lines represent those that paid the full tax. A dotted grey line means that the firm was eligible for the CCAs; the dotted black line means that the firm wasn’t eligible.
Before 2001, you can’t see much of a systematic difference between the four types of plants (the authors confirm this statistically).
What about after 2001? As you’d expect if the higher implicit carbon tax imposed additional costs on energy usage, there are significant differences between the grey lines (3% carbon tax) and the black lines (15% tax) after the introduction of the levy in 2001. In energy expenditure, electricity use, and ‘energy share in gross output’ (how much energy went into the making of manufactured goods), the carbon tax had a downward impact. In employment, though, it’s hard to distinguish any particular impact of the carbon tax over the CCAs: it seems as though the carbon tax did not lead to lower employment in manufacturing than would otherwise have been the case (assuming the CCAs were toothless), although it is perhaps worth pointing out that manufacturing jobs did decline after the carbon tax was imposed, just as they were declining before it was imposed. Nor did the UK carbon tax seem to lead to manufacturing companies leaving the market, either:
In sum, we find no evidence that the CCL had an impact on plant exit decisions.
Overall, what’s the conclusion?
We do not find evidence of a detrimental effect of the CCL on employment, regardless of which way the data are cut
One of the authors has also cowritten a working paper arguing that there were no discernible effects of the European Union’s emissions trading scheme on manufacturing output or jobs in the
Do carbon taxes kill agriculture?
Over the Atlantic, the Canadian experience tends to suggest: no.
In 2008, the province of British Columbia introduced a carbon tax on fuels. Some agricultural producers complained that this was making agriculture ‘uncompetitive’, and they obtained exemptions. Nicholas Rivers and Brandon Schaufele from the University of Ottawa decided to check out the figures to see if this was true.
The authors are primarily interested in agricultural exports: did the carbon tax have any effect on them? They build a statistical model that incorporates BC’s comparative advantage in each good (reflecting factors like the climate, the quality of the soil and so on), factors that affect the Canadian economy as a whole (like tariffs and the exchange rate) and various weather-related variables. Finally, there’s a variable that indicates whether there was a carbon tax or not. When they squish the data through the machine, they find that the BC carbon tax either had a positive impact on the share of agricultural production in BC that was exported, or no impact. (If you exclude some commodities like wheat and barley that are marketed in monopsonistic ‘single-desk’ arrangements then there’s no impact.)
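As a very rough sketch of what such a regression looks like, here’s a toy version with commodity fixed effects, an exchange-rate control, and a carbon-tax dummy. All the variable names and the data are invented for illustration; the authors’ actual model is considerably richer:

```python
import numpy as np

def ols(y, X):
    """Plain OLS coefficients via least squares."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hypothetical panel: export share for 10 commodities over 20 years.
rng = np.random.default_rng(3)
n_commodities, n_years = 10, 20
commodity = np.repeat(np.arange(n_commodities), n_years)
year = np.tile(np.arange(n_years), n_commodities)
carbon_tax = (year >= 12).astype(float)        # tax begins in year 12
exch_rate = rng.normal(1.0, 0.1, n_commodities * n_years)
# True tax effect is set to zero; commodity intercepts differ.
export_share = (0.3 + 0.02 * commodity + 0.0 * carbon_tax
                - 0.1 * exch_rate + rng.normal(0, 0.05, len(year)))

# Commodity fixed effects as dummy columns (they absorb the intercept)
fe = (commodity[:, None] == np.arange(n_commodities)).astype(float)
X = np.column_stack([carbon_tax, exch_rate, fe])
beta = ols(export_share, X)
tax_effect = float(beta[0])
```

The fixed effects soak up each commodity’s stable comparative advantage, so the carbon-tax coefficient is identified from within-commodity changes over time.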
The authors give two possible reasons for a potentially positive impact on agricultural exports: firstly, standard economic theory tells us that when one factor of production (in this case, ‘carbon’, or, to be more precise, production processes that result in carbon dioxide emissions) becomes more expensive, then we produce fewer goods that use that factor of production more intensely and more goods that don’t use it intensely. If agriculture is more labour-intensive than carbon-intensive, then making it more expensive to emit greenhouse gases should lead to an expansion in the agricultural sector.
Another possibility they mention is that the price signal given by the carbon tax forced businesses to innovate in order to use less fossil fuel, and this overcompensated for the burden of the tax.
Sinclair Davidson, riffing off Justin Wolfers’ piece in the New York Times about the professional economic consensus that the US stimulus package of 2009 lowered unemployment, pastes a graph of the Obama administration’s unemployment forecasts with and without the stimulus and the actual unemployment time series. He then claims:
So our 36 economists [the ones who endorsed the notion that the stimulus lowered unemployment] have some explaining to do. Mainstream macro theory made a series of predictions and is summarised in that graph – reality unfolded in a very different way. So now how were they wrong in 2009 and what have they done to improve their approach since then?
This is logical nonsense, for many reasons.
The original graph comes from this report by Christina Romer and Jared Bernstein:
It was published on January 9, 2009. This was before the bill was even introduced into the House, only several days after the bill was introduced into the Senate, and well before it was passed into law. It was produced before Barack Obama became President. Why is this relevant? Well, the job estimates are not for the bill that was actually passed into law on February 17 — amended by both House and Senate — but for what the authors thought Barack Obama’s proposed law might look like.
As Romer and Bernstein themselves pointed out in the report:
Our analysis will surely evolve as we and other economists work further on this topic. The results will also change as the actual package parameters are determined in cooperation with the Congress.
There are other problems with Davidson’s argument. He implies that ‘mainstream macro theory’ made a series of predictions that are ‘summarised’ in the graph. This is false. Two economists—Bernstein and Romer—made the predictions ‘summarised in the graph’. ‘Mainstream macro’ is a pretty broad term, and incorporates hundreds of different models: to talk of a singular ‘prediction’ made by some disembodied entity like ‘mainstream macro’ is very, very deceptive. Neither Bernstein nor Romer was among the 36 economists who endorsed the stimulus-employment link. What if they thought back in 2009 that the Administration’s forecasts were too rosy? It’s not impossible; after all, Paul Krugman thought that the stimulus was going to be inadequate. Maybe some of the experts did too?
Even if the Bernstein/Romer forecasts were of the actual package passed into law, and even if every single one of the 36 economists were responsible for or endorsed those forecasts, the logic falls down. The test of whether stimulus worked is not whether the unemployment rate has followed the Bernstein/Romer forecasts, but whether it reduced unemployment relative to the counterfactual (if stimulus hadn’t been introduced). This is obviously a much harder question to answer, and will rely on either messy econometric analysis or modelling. (We’d also need to find a way of isolating the effect of the stimulus from other macroeconomic developments from 2009 onwards, including contractionary fiscal policy changes from the 2010 midterms onward: another thing Davidson conveniently ignores.) Dylan Matthews wrote a good piece a while ago on the empirical evidence on the impact of the ARRA with some links to studies. The majority conclude that the stimulus worked; some don’t.
Here’s one model’s estimate of the unemployment impact of the American Recovery and Reinvestment Act (it’s Mark Zandi from Moody’s Analytics, graph mine)
Here’s another estimate from the CBO, this time showing what impact the stimulus had on the unemployment rate (the lower the line, the higher the stimulus effects). They calculated both ‘low’ and ‘high’ estimates of the stimulatory impact.
So no, Sinclair Davidson, there’s no reason why one graph from a report that was released about a hypothetical law should make us question fiscal stimulus. There are at least respectable arguments to say that fiscal policy doesn’t work (or doesn’t work as much as is claimed), but this ain’t it.
UPDATE: But if we do want to ask everyone how they’ve updated their priors after a bad prediction, perhaps we might be entitled to know how Mr Davidson has changed his thinking since the time he predicted an imminent threat of stagflation for the Australian economy?