Decisions on what data to collect, and how it should be analysed, are not on the face of it the most exciting topics in the world: and yet, some of these decisions have seriously biased the referendum debate. Here we explain how, and suggest what should be done about it.
This note builds on a paper we published in the last issue of Significance, a journal of the Royal Statistical Society. In that paper we gave two examples.
One example was Government Expenditure and Revenue Scotland (GERS), the Scottish government’s annual report on public expenditure and revenues. Until very recently, the GERS analysis of flows associated with government was all that had been produced by way of official figures on flows into and out of Scotland – there has been no proper analysis of the flows associated with trade and capital movements. And yet this broader picture really matters: the lack of it has had a profound effect in channelling the independence debate into a concentration on Scotland’s government deficit or surplus. If a fuller set of figures had been available, the debate would naturally have turned to other questions which have been almost entirely neglected: what has Scotland’s current account balance been with the rest of the world; what has been the non-governmental financial outflow (in most years, it would have been a substantial outflow); what steps would be open to a Scottish government to maximise the benefit from such an outflow – and so on.
Our second example was the Office for Budget Responsibility’s approach to economic forecasting. The OBR has adopted the detailed Treasury macroeconomic model as one of the main tools it uses in producing its forecasts. However, it makes the key assumption that, by around the end of the five-year forecast period, inflation will be stable and the economy will be operating on its trend line of potential output. In a critical respect, therefore, the OBR assumes the success of economic policy. As a result, the OBR significantly understates the risks attaching to the UK economy. This matters for the referendum debate, because a whole swathe of analysts and commentators (like the Institute for Fiscal Studies, the National Institute of Economic and Social Research, and our own dear Treasury) who have produced forecasts for Scotland in relation to the referendum simply took the OBR’s UK forecast as a given, disaggregated it to Scotland, and then added in a variety of risks for Scotland. This approach had the effect of almost completely ignoring UK risk, hence fundamentally biasing the debate.
Those who want more detail on these examples can refer to our Significance article (or a pre-peer-review version at http://www.cuthbert1.pwp.blueyonder.co.uk under Theme 6). Here, we look in more detail than in Significance at how things went wrong and what can be done.
It is actually quite rare, though not unknown, for governments to lie with statistics: they don’t need to. Instead, they have powerful tools which they use to set the statistical agenda in their favour. The above examples illustrate two of these tools.
The obvious one is control of resources. In the case of GERS, Ian Lang knew exactly what he was doing when he allocated resources to the production of the first GERS in 1992. As he wrote to John Major, “I judge that it is just what is needed at present in our campaign to maintain the initiative and undermine the other parties. This initiative could score against all of them.” And likewise, successive unionist governments knew exactly what the effects would be when they failed to resource the extension of GERS into a proper set of accounts for Scotland.
The OBR example illustrates another mechanism of control. The OBR is independent: but its terms of reference were set by George Osborne. So as soon as the OBR accepted what is basically a remit to produce forecasts, a satisfactory outcome from George Osborne’s viewpoint was virtually guaranteed. This is because a rational forecaster in a policy-influenced environment will usually assume the success of policy: if it is obvious to the independent forecaster that the process is currently heading for, say, an undershoot, then this will be equally obvious to the controlling agent. So the forecaster has to assume that the agent will take corrective action.
In addition, there are other powerful factors which ensure that the agenda for statistics and analysis usually implicitly favours central government and the general status quo.
One factor is the power of patronage: you get the Governor of the Bank of England whom you appoint, and they will then be looking for re-appointment in the first instance, and for a suitable honour towards the end of their “successful” term. And there is also the powerful influence of what one might call Establishment bias: appointees may be strictly neutral in party political terms, but, having emerged from Establishment grooming, they are very unlikely to set in train the production of analyses which might raise awkward questions about the stability and justice of the status quo.
There is also the sheer effect of centralisation. GERS provides another very good illustration of this. In its early years, the production of GERS relied upon data already refined and packaged by central departments, namely the Office for National Statistics and the Treasury. Government secrecy meant that the detail of the analysis, and in particular the basic data set, was not released, either to individual government departments or to the public. This meant that GERS was effectively beyond scrutiny, and a number of major errors went undetected. The situation only changed when a freedom of information request (in fact, by us) secured general release of the relevant data set in 2005.
Given that control of resources, and the power of patronage, still rest overwhelmingly with London, it is not surprising that the analysis agenda has favoured the unionist side in the independence debate. Unfortunately, however, the situation has been even worse than it needed to be. Although the SNP has been in power in Scotland since 2007, and could have taken significant steps during that time to redress the information imbalance, it was actually very slow to learn this important lesson of power. For example, despite repeated requests, it was only in November 2013 that the Scottish government produced initial estimates of Gross National Income. Only at this stage did we have provisional estimates fleshing out GERS into a fuller set of international accounts for Scotland: too late to reset the agenda of the debate.
Another example is that the Scottish government could have used its control of land registration to open up information on who owns Scotland; and similarly, it could have published much more data on who receives payments under the Common Agricultural Policy. Better information in these areas could have focused intense debate on the need for reform of land tenure.
So the SNP needs to learn important lessons of power. But it is actually a counsel of imperfection just to argue that one side of the independence debate should descend to the level of the other. In a better world, much more would be done generally to loosen the grip of government on the statistical analysis agenda. For one thing, government should lose much of its power to control analytical resources. It should not be the government of the day, but Parliament, which sets the main priorities, and the budget, for data collection and analysis. And it should not be the Chancellor of the Exchequer who appoints the Governor of the Bank of England, or sets the terms of reference for the OBR. But in addition, the government’s overall powers of patronage should be much reduced: and a good starting point would be the complete abolition of the honours system.