I have to admit to being stunned by the level of misinformation currently accompanying the Health and Social Care bill as it is introduced into Parliament. On Jan 19th alone, the shadow health secretary John Healey stated that “the changes would make the health service profit centered rather than patient centered”. The health secretary Andrew Lansley said that “competition would be on quality and not cost” and that, as the health service is free at the point of delivery, patients obtain the best medical outcome rather than the cheapest option. Karen Jennings, head of health at Unison, stated that the only survivors will be the private health companies, which are “circling like sharks”, while MPs say “the reforms have taken the NHS by surprise”.
None of these statements has much basis in either fact or evidence. The reforms being introduced in the bill are essentially a continuation of the reforms started under the previous administration, albeit at an increase in pace and scale. The Labour reforms introduced competition between hospitals for patients, patient choice of hospital and a system of regulated prices. Lansley has changed the buyers of health care from local PCTs to General Practitioners, but under Labour the PCTs were supposed to act on behalf of their local GPs anyway. Why Healey believes that increasing the pace of reform and replacing the PCTs with GP consortia should mean the NHS switches from being patient centered to profit centered is completely unclear. GPs have not previously been seen by politicians as profit centered. In fact, perhaps because GPs see so many voters each week, most politicians studiously ignore the fact that GPs are private contractors and not NHS employees. In addition, the new GP consortia will probably employ a fair number of ex-PCT staff. So it seems unlikely there will be a radical shift in values on the purchaser side.
On the other hand, the current secretary of state is also being somewhat disingenuous in his statement that, because the NHS is free at the point of delivery, competition will be in terms of quality. He rather forgot to mention one key change he has made, which is to abolish the fixed price tariff introduced by the previous administration. The fixed price tariff was introduced on the basis of evidence from the UK (as well as elsewhere) that competition in health care with fixed prices avoids a potential race to the bottom, in which sellers of care compete to attract patients on the basis of lowest cost at the expense of quality. In his desire to push forward his competitive model, Lansley has thrown away the fixed price tariff and does indeed risk such a race to the bottom. The fact that care will be free to patients is irrelevant, as the gains from price savings will accrue not to individual patients but to GP consortia that face the pressures arising from fixed budgets and a tight financial settlement. Exactly the same model was employed in the NHS internal market of the 1990s, and in that market cash-constrained buyers focused on price and reducing waiting lists, at the expense of quality.
Karen Jennings’ statement appears to have even less basis in fact. Again, the plans to allow any willing provider to supply care to the NHS were actually introduced by the Labour administration. There has been relatively little entry of non-NHS providers not because these providers were not allowed to enter but because they did not find it profitable. There is no a priori reason why, in a more constrained financial era, supplying care to the NHS should make private providers large profits. It is true that GP consortia will probably seek the help of the private sector to carry out their commissioning function, but given the poor performance of some PCTs, this may simply allow an increase in talent on the purchasing side of the NHS. And given that the purchaser side has been weaker than the provider side for a long time, this is probably a good thing for patients.
Finally, the fact that MPs think that these reforms have taken the NHS by surprise seems to suggest that MPs don’t notice things until they come to the House. These reforms were trailed very soon after the Coalition government came to power, and again in a White Paper this autumn. All the major components of the bill were in that White Paper, and many NHS bodies have been preparing themselves for another period of rapid change. What many MPs may not like is the speed of change – and there are good reasons to question this. For example, it is not clear that replacing PCTs with GP consortia at this stage in the reform process will help develop the gains that competition between suppliers has been shown to deliver, and creating a new set of purchasers will undoubtedly take a lot of attention, resources and time. In my view these resources would have been better used in developing choice to a greater degree, getting the rules of the game right and then introducing, at a later date, a larger role for those GPs that want it.
Today the ONS published a statistical bulletin of labour market statistics, which was widely reported as evidence of an increase in unemployment. However, the executive summaries provided by the ONS did not include sufficient information about the precision of the statistics. This led newspapers to report ‘changes’ in important features of the economy, such as the unemployment rate, that may in fact be due to chance.
Labour market statistics, provided by the ONS, are estimates, not exact measures. The statistics are calculated from surveys of thousands of people, which means that they are subject to sampling variation: if the ONS were to repeat their survey and calculate all their statistics on a different group of people, we would not expect the results to be the same. The ONS provide estimates of the size of this variation in additional tables: they estimate the range in which their statistics would be expected to fall in 95% of samples. However, these ranges are not reported in the executive summaries, or in much of the discussion of what the statistics show. The findings of the ONS bulletins are faithfully reported to the public by newspapers, but unfortunately, without a measure of sampling variability, it is not possible to tell if the level of unemployment has changed, or if the reported change is consistent with chance.
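The idea can be made concrete with a minimal sketch. The function below computes an approximate 95% interval for a rate estimated from a survey, assuming (purely for illustration) a simple random sample and a hypothetical sample size; the Labour Force Survey actually has a complex clustered design, so the ONS’s published intervals are computed differently and are typically wider.

```python
import math

def ci_95(rate, n):
    """Approximate 95% confidence interval for a rate estimated from a
    simple random sample of n people. Illustrative only: the real LFS
    design is clustered, so actual ONS intervals differ."""
    se = math.sqrt(rate * (1 - rate) / n)  # standard error of a proportion
    return rate - 1.96 * se, rate + 1.96 * se

# A 7.9% unemployment rate estimated from 60,000 respondents
# (a hypothetical sample size) is a range, not a point:
lo, hi = ci_95(0.079, 60_000)
print(f"7.9% is consistent with anything from {lo:.1%} to {hi:.1%}")
```

The point is simply that two bulletins reporting 7.9% and then 8.1% may be describing the same underlying economy.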
The summary of the latest labour market statistical bulletin reports the following statistics, to which I have added confidence intervals using the measures of sampling variation provided by the ONS later in the document:
- The employment rate for those aged from 16 to 64 for the three months to November 2010 was 70.4 (70.0, 70.8) per cent, down 0.3 (0.0, 0.6) on the quarter. The number of people in employment aged 16 and over fell by 69,000 (-62,000, 200,000) on the quarter to reach 29.09 (27.53, 30.64) million.
- The unemployment rate for the three months to November 2010 was 7.9 (7.7, 8.1) per cent, up 0.2 (-0.1, 0.5) on the quarter. The total number of unemployed people increased by 49,000 (-37,000, 135,000) over the quarter to reach 2.50 (2.42, 2.58) million.
- There were 157,000 (137,000, 177,000) redundancies in the three months to November 2010, up 14,000 (-14,000, 42,000) on the quarter.
- The inactivity rate for those aged from 16 to 64 for the three months to November 2010 was 23.4 (23.1, 23.7) per cent, up 0.2 (-0.1, 0.5) on the quarter. The number of economically inactive people aged from 16 to 64 increased by 89,000 (-26,000, 204,000) over the quarter to reach 9.37 (9.23, 9.50) million.
There are seven estimates of change in levels. All seven of these changes are consistent with what might be expected from sampling variation, so the statistical bulletin provides no strong evidence of a quarter-on-quarter change in any of these statistics. The statistical bulletin contains further statistics that report the change in employment in sub-groups of the population, by age or by region. Each of these sub-populations is smaller, and hence less precisely measured, so estimated changes in unemployment in sub-populations, such as youth unemployment, are even more likely to be due to sampling variation. However, we cannot know, as the sampling variation for these sub-populations is not provided by the ONS, so it is not possible to tell if these differences are important, or if they are just due to chance.
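The check behind this claim is mechanical and can be sketched in a few lines. The figures below are transcribed from the bulleted summary above, with each change and its interval stated in the direction reported (fall or rise); a change is treated as consistent with chance if its 95% interval contains zero.

```python
# Quarterly changes and 95% intervals from the bulletin summary above.
# Rates are in percentage points; levels are in thousands.
changes = {
    "employment rate (pp fall)":      (0.3, 0.0, 0.6),
    "employment level (000s fall)":   (69, -62, 200),
    "unemployment rate (pp rise)":    (0.2, -0.1, 0.5),
    "unemployment level (000s rise)": (49, -37, 135),
    "redundancies (000s rise)":       (14, -14, 42),
    "inactivity rate (pp rise)":      (0.2, -0.1, 0.5),
    "inactivity level (000s rise)":   (89, -26, 204),
}

for name, (point, lo, hi) in changes.items():
    # If zero lies inside the interval, the data cannot distinguish
    # the reported change from no change at all.
    verdict = "consistent with chance" if lo <= 0 <= hi else "evidence of change"
    print(f"{name}: {point} ({lo}, {hi}) -> {verdict}")
```

Running this over all seven summary statistics labels every one of them consistent with chance, which is the basis of the statement above.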
Understandably, given the executive summaries of the statistical releases, newspapers report that unemployment is rising, when the data do not support this. This leads newspapers to miss the bigger story: unemployment is still very high after the recession, and there is little evidence of falls in employment. The ONS could help by stating confidence intervals in their statistical bulletins to enable journalists to easily interpret whether the information in a report is consistent with chance, or if there is evidence of a genuine change. We know long term measures of unemployment, for example annual measures, with much more precision, because we have larger samples, so if measures of precision were to be more widely reported then debates over the economy might be slightly less myopic.
Today is “school league tables” day. Performance tables are released for schools and colleges in England, reporting a number of different measures of the exam performance of their students. While much attention this year will focus on the reporting of the new “English Baccalaureate”, we ask a more fundamental question: are school league tables in general any use to parents? One of the major aims for school league tables is to support and inform parents in choosing a school for their child: but are they fit for this purpose? The answer is “yes” – we show that using school league tables does help parents to identify the school in which their own specific child will do best in her future exams.
Parents consistently rank academic standards as being one of the most important criteria for choosing a school. The performance tables provide outcome measures that are very widely reported and easy to get hold of. The idea is that parents can scrutinise the results and weigh up the merits of the local schools, considering the academic performance, travel distance, the child’s own wishes and other factors before deciding which schools to write down on their application form.
But this idea has been subject to a number of critiques, with three main lines of argument. First, it is argued that differences in raw exam performance largely reflect differences in school composition; they do not reflect teaching quality and so are not informative about how one particular child might do at a school. Second, schools might be differentially effective, so that even measures of average teaching quality or test score gains may be misleading for students at either end of the ability distribution: different school practices and resources might matter more for gifted students, and others more for low-ability students. Third, it is argued that the scores reported in performance tables are so variable over time that they cannot reliably be used to predict a student’s future performance. After all, today’s league tables reflect last year’s students’ exams, but a parent wants to know how her child will do in five years’ time.
It is an empirical question how quantitatively important these points are: are league tables helpful or not? The question on academic standards that parents want answered is: “In which feasible choice school will my child achieve the highest exam score?”. We argue that the best content for school performance tables is the statistic that best answers this question.
To answer this question, we use the long run of pupil data now available to researchers. We can follow students through their years at secondary school and see how they did in the exams at the end; that is standard. But we can also use statistical procedures (details) to estimate the counter-factuals of how each student would have done if s/he had gone to a different local school. We can then ask: if families had picked schools according to the league table information available at the time, would that have turned out to be a good choice in terms of subsequent exam performance for that specific child? Focussing on the simplest measure, the school’s %5A*-C score, the results show that while it certainly does not produce a good choice for everyone, it produces a good choice for twice as many students as it produces a poor choice for. So on average, a family using the schools’ %5A*-C scores from the league tables to help identify a school that would be good academically for their child will do much better than the same family ignoring the league table information.
So are the league tables useful for parents? Definitely. Can they be improved? Certainly. The measures included in the performance tables should be judged according to their functionality, relevance and comprehensibility. The test of functionality is the analysis just described. A measure is relevant if it informs parents about the performance of children very similar to their own in ability and social characteristics. It is comprehensible if it is given to them in a metric that they can meaningfully interpret. In fact, none of the current leading performance measures scores very well across our three criteria. We have proposed an alternative measure that performs better on these criteria. No measure can be perfect because there are important trade-offs between relevance, functionality and comprehensibility: the more disaggregated the form in which performance tables are provided (increased relevance), the less precision they will have (decreased functionality). The more factors are taken into account in describing school performance for one specific child (increased relevance), the more complex the reported measure will be (decreased comprehensibility). Any choice on the content of league table information has to make decisions on these trade-offs.
The Government has announced that it is to permit hospitals to compete on price. This dramatic shift towards a more commercial market in the NHS is announced in a single paragraph of the NHS Operating Framework for 2011-12, published last week. Paragraph 5.43 says: “One new flexibility being introduced in 2011-12 is the opportunity for providers to offer services to commissioners at less than the published mandatory tariff price, where both commissioner and provider agree.” It adds: “Commissioners will want to be sure that there is no detrimental impact on quality, choice or competition as a result of any such agreement.”
The problem with this innocuous-sounding paragraph is that we have been here before, and it doesn’t work. The reason is that in health care, prices are easy to observe, whilst quality is not. In these circumstances, health care commissioners and sellers, despite the hopes of the Operating Framework, end up focusing on price, raising the prospect of two-for-one deals on surgery and cut-rate consultations for certain specialties. At the same time, in order to provide services at these prices, quality suffers.
Research from CMPO showed this is precisely what happened in the internal market of the 1990s, where price competition was allowed. Hospitals facing more competition focused on bringing down costs and lowering waiting lists in order to provide what their commissioners wanted. But this was at the expense of quality, and the consequence was that patients in hospitals located in competitive markets were more likely to die after an admission following a heart attack. These kinds of unforeseen consequences are likely to happen again – especially now, when budgets are tight.
In making this move, Andrew Lansley is ignoring all the evidence on the impact of price competition in the hospital sector and is potentially endangering patients’ lives.
Last week’s Green Paper set out the government’s strategy for encouraging people to give time and money – part of its vision of a Big Society. There were few concrete ideas – beyond the suggestion that ATMs provide prompts to donate – but instead a set of guiding principles: Great opportunities, Information, Visibility, Exchange and reciprocity and Support (GIVES). In a nutshell – people need new and exciting ways to give (such as at ATMs) and they need to know about them; their giving needs to be visible and it needs to be valued. This is a potentially exciting time for the sector – even as many are worried about the effects of government spending cutbacks – it provides fertile ground for experimentation to see what works and what does not. In designing potential pilots, there is a growing body of evidence to build on.
A number of field experiments with individual charities have found successful triggers that can encourage people to give – these include announcing lead donations, providing a match, rewarding donors with small gifts, making donations public and telling people how much others have given. The findings have led people to generalize about what motivates people to give – signals of quality for individual charities, the desire for prestige, reciprocity etc – yet single-charity studies can only tell us about what motivates people to give to specific charities. None of these studies has looked at whether the triggers simply cause people to give more to charity A at the expense of charity B. There is a real risk that all the shiny new opportunities simply cause people to change the way that they give, so there is a need to show that new schemes increase total giving, not just shuffle it around. To achieve that, there needs to be more understanding of what the real barriers to giving are – and what can be done to eliminate them.
One of the big ideas in behavioural economics is that defaults can have a powerful effect in overcoming inertia in people’s behaviour; they have been shown to work in relation to employer pensions, with auto-enrolment leading to big increases in participation. Payroll giving is an obvious extension that we hope to test. But one important lesson from past research is that the detail of the default matters – a low default, while increasing participation, could lead to some people reducing the amount that they give. There also needs to be evidence that a scheme that helps someone to help others can work as well as a scheme that helps someone to help their future selves.
The emphasis on visibility accords with the findings from research on positive peer effects. For example, Frey and Meier (2004) found that when students were told that a higher proportion of past students had donated to a good cause (64 per cent compared to 46 per cent), this had a positive effect on the proportion who gave – but the increase was small, at around 2 percentage points. But, as with defaults, providing so-called “social information” can have a negative effect if the amounts that others have given are low. Alpizar et al (2008) found that informing people about a low modal donation increased participation but reduced the average donation (compared to no social information). More interestingly for the ATM proposal, suggestions to give particular amounts that are imposed from above have been found to have a negative effect. Alpizar & Martinsson (2010) show that, compared with a social reference that comes from peers, a suggestion from the charity reduced both the probability of giving and the conditional amount given.
Evidence from ultimatum games in the lab, and from elsewhere, suggests that individuals have a preference for fairness, and that this preference is a driving force behind their charitable donations. Given this, and the increasing prevalence of ideas such as a “Robin Hood tax” and “UK Uncut” (which suggest a belief that corporations, and in particular banks, are bearing too little of the burden of economic hardship), encouraging charitable giving through boxes emblazoned with the logos of banks may not have the desired effect. One response to the ATM suggestion on the BBC’s Have Your Say website was that “If bank cards started nagging me to donate I think I’d give LESS not more”.
Frey, B. & Meier, S. (2004) “Social Comparisons and Pro-social Behavior: Testing ‘Conditional Cooperation’ in a Field Experiment”, American Economic Review, Vol. 94, No. 5, pp. 1717–1722.
Alpizar, F., Carlsson, F. & Johansson-Stenman, O. (2008) “Anonymity, reciprocity, and conformity: Evidence from voluntary contributions to a national park in Costa Rica”, Journal of Public Economics, Vol. 92, Issues 5–6, pp. 1047–1060.
Alpizar, F. & Martinsson, P. (2010) “Don’t tell me what to do, tell me who to follow! – Field experiment evidence on voluntary donations”, Working Papers in Economics No. 452.
Barr, A. & Zeitlin, A. (2010) “Dictator Games in the lab and in nature: External validity tested and investigated in Ugandan Primary Schools”, CSAE WPS/2010-11.