Black Magic 2: Rumble in New South Wales

Posted: April 6, 2015
About 90 per cent of an iceberg lies beneath its surface. So it is with political commentary. It’s easy enough to open a national broadsheet newspaper after an election and scan the pages filled with phrases like “hostage to entrenched interests” and “politically fabled Labor heartland” and assume that all political commentary is equally and uniformly silly.
But that would be lazy. Each columnist is unique, and by careful examination of the underlying data, using only the most up-to-date and high-tech statistical techniques, we can actually explore the heterogeneity of hacks. Whose commentary is based on nothing more than vague impression and anecdote, and whose is instead based on dodgy mathematics? Today I propose to you that we undertake such analysis. And who better to begin with than an old friend of ours?
Readers of my blog and my Twitter feed will no doubt be aware that, of the hucksters and charlatans who enliven the Australian scene, John Black is my favourite. Perhaps, just possibly, on a good day—maybe depending on which way the wind is blowing through the turbines?—he can be beaten by Graham Lloyd. But on average, in expectation, John Black is the man for me. We saw a little while ago how he uses a very questionable statistical ‘technique’, one that, among other problems, systematically overstates the explanatory power of the resulting model, overstates the magnitude of the effects of variables, and understates the uncertainty in the estimates of those effects, to try to determine who voted for Labor in Queensland’s recent election, just by looking at census data and the aggregate votes in Queensland seats.
(Most social scientists would be astounded if he actually could do that, since it would mean he has overcome the ecological fallacy, but he’s a smart guy, so let’s give him the benefit of the doubt: he probably has solved the most intractable methodological challenge in statistical inference, probably when he had nothing better to do during Senate Estimates.)
We also saw that it was possible for me, with exactly the same technique, to get models of the Queensland electorate with exactly the same level of explanatory power from entirely randomly generated numbers. But John Black thought his model of the Queensland electorate was nonetheless a pretty good guide to the New South Wales election. On March 1, about a month before the state went to the polls, he wrote:
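That earlier exercise is easy to reproduce. Here is a minimal sketch, not Black’s actual procedure (whose details are unpublished), of greedy forward stepwise selection run on pure noise: with 89 simulated ‘seats’ and a pool of 100 randomly generated ‘demographic’ columns, adding whichever column most improves the fit at each step produces a high in-sample R² even though, by construction, there is no signal at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seats, n_candidates, n_steps = 89, 100, 20

# Entirely random "demographics" and an entirely random "ALP vote share".
X = rng.normal(size=(n_seats, n_candidates))
y = rng.normal(size=n_seats)

def r_squared(cols):
    """In-sample R^2 of an OLS fit of y on an intercept plus the chosen columns."""
    A = np.column_stack([np.ones(n_seats)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Greedy forward stepwise selection: at each step, keep whichever
# remaining candidate column raises in-sample R^2 the most.
chosen = []
for _ in range(n_steps):
    best = max((c for c in range(n_candidates) if c not in chosen),
               key=lambda c: r_squared(chosen + [c]))
    chosen.append(best)

r2 = r_squared(chosen)
print(f"In-sample R^2 after {n_steps} stepwise steps on pure noise: {r2:.2f}")
```

With a pool of candidates several times larger than what ends up in the model, the selection step alone inflates R² substantially, which is exactly why in-sample fit statistics from a stepwise procedure are not evidence of predictive power.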
When we took the Queensland demographics underpinning the January 31 state result and applied them to identical demographics in current NSW seats, Labor won 46 out of 93 seats…We felt pretty comfortable carrying out this projection of the Queensland result as we’ve been doing it for 40 years and the model was statistically powerful, explaining 84 per cent of the variation in votes across the 89 seats in Queensland….With an error of estimate of 4 per cent, we don’t expect the projection to be perfect but it should be a better guide than the usual incorrect assumption that uniform swings will apply. Bearing in mind that Labor lost every second vote in some of its safest seats in the previous election, this assumption is even sillier than usual.
But for a guy who has come across a revolutionary technique for forecasting elections — hey, he’s been using it for 40 years! — John Black sure is modest after the election. Just look at his weekend column on the aftermath of the NSW election. He sure knows a lot about people who don’t vote for the Greens, as in this prolier-than-thou paragraph:
The state seats where the Greens failed to win many votes were dominated by mainstream Australian suburban families with children, who drive themselves to work daily or ride as a car passenger.
The parents tend to have certificate qualifications in engineering for dad and hospitality for mum, with dad employed as a machine operator in manufacturing or a transport driver, and mum finding it very difficult to get a hospitality job which pays enough to earn any realistic income and has flexible hours for her to look after three kids in the local government school system.
But you could have told us that before the election, right, John? Your projection told you that, yeah? Why aren’t you bragging about it?
Well, maybe we should have a look at how Black’s model did, especially compared with the ‘uniform swing’ model that he reckons it’s superior to.
So let’s put the two following models to the test: John Black’s Queensland demographic-stepwise-bullshit model (calculating, in a sense, the impact of demographic variables in the QLD election on the ALP vote and using the coefficients from that model to project NSW results based on NSW demographics, the results of which Black generously provided on his website) and a uniform swing model based on the final polls in which the pollsters were predicting a 10 percentage point swing to the Labor Party (that is, a model whose projection is just the 2011 results, adjusted for boundary changes and with 10 percentage points added to it). Comparing their projections to the actual result, what do we find?
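For concreteness, the uniform swing projection is trivial to compute. This sketch uses made-up 2011 two-party-preferred figures for hypothetical seats, not the real seat data, just to show the mechanics:

```python
# Hypothetical 2011 ALP two-party-preferred shares (per cent) for three
# made-up seats; the actual projection uses all boundary-adjusted 2011 results.
previous_alp_tpp = {"Seat A": 35.2, "Seat B": 48.9, "Seat C": 55.0}

swing = 10.0  # the roughly 10-point swing to Labor in the final polls

# Uniform swing: add the same swing to every seat, capped at 100 per cent.
projected = {seat: min(tpp + swing, 100.0)
             for seat, tpp in previous_alp_tpp.items()}

# Labor is projected to win any seat where its TPP exceeds 50 per cent.
alp_wins = sorted(seat for seat, tpp in projected.items() if tpp > 50.0)
```

Crude as it is, this is the baseline any fancier model has to beat.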
Perhaps unsurprisingly, John Black’s model—which had no input from polling at all, even at an overall state level—overpredicted the swing to Labor. It projected a 17 per cent swing to Labor on the two party preferred measure in the ‘traditional contest’ seats (where there was a TPP fight between Labor and the Coalition); in fact, the swing was about 10 per cent, almost exactly what the final polls were predicting.
This means that John Black’s model dramatically overpredicted how many of these ‘traditional contests’ the ALP would win: his model predicted 44 wins for the ALP in these contests, while the uniform swing model predicted 35 victories. In reality, there were only 32.

But what about the margin of error? The average ‘miss’ shown above understates how much each model missed the actual result by, since an underprediction will cancel out an overprediction. We can instead look at the average of the absolute value of the error in each ‘traditional contest’ seat prediction for both models:
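The cancellation point is easy to see with toy numbers (illustrative only, not the actual seat-level errors): a model that overshoots two seats by exactly as much as it undershoots two others has an average error of zero, while its average absolute error is what it actually missed by.

```python
# Illustrative prediction errors (predicted minus actual TPP) in
# percentage points; not the real seat-level errors.
errors = [6.0, -6.0, 3.0, -3.0]

# Over- and under-predictions cancel out in the plain average.
mean_error = sum(errors) / len(errors)

# The mean absolute error reflects the typical size of the miss.
mean_abs_error = sum(abs(e) for e in errors) / len(errors)
```

Here the mean error is 0.0 despite every single prediction being off, while the mean absolute error is 4.5 points, which is why the latter is the fairer yardstick for comparing the two models.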
On average, the uniform swing model missed the result by about 5 percentage points (either over- or under-shooting). John Black’s model, on the other hand, missed by nearly 12 points. For a complete view of both model results, consider this scatterplot with the seat-by-seat predictions. The dotted line is where each dot would be if it had been a ‘perfect’ prediction.
As you can see, the red dots — the John Black model dots — are a good deal more dispersed around the ‘perfect prediction’ line than the black ones, the ones generated by the ‘uniform swing’ model that Mr Black sneered at in his article above. Even if you make an ad-hoc adjustment for the fact that Mr Black’s model ran hot by about 7 percentage points on average compared to the uniform swing model, you still find that the errors are larger for Black’s model than for the uniform swing model, by about 3 percentage points on average.
On the side are the individual prediction errors by seat for John Black’s model (you may need to click on the image to zoom in), showing a whopping 33.27 point over-prediction for Labor’s vote in Parramatta.
So slice it as you will, John, the weird Queensland demographics-in-NSW model you’ve created actually performed much worse than the uniform swing model. Maybe you could write a column in the Australian about that. Let me get you started:
The pundits who got the NSW election wrong tend to be former Queensland Labor Senators with fishing hobbies and boutique ‘analytics’ firms who haven’t opened a statistics textbook since the 1960s, patronised by Boomer managers with no clue about data analysis. Often, they have obtained a vanity column in a vanity newspaper in which they cannot help but promote their business interests at every opportunity, regardless of nominal relevance. Their names are most often short monosyllables. And most characteristically, they tend to be enormously and, sadly, unjustifiably confident in their risibly flimsy analysis.