Author: Simon Burgess
Should we have profit-making schools?
Profit-making schools have returned to the education debate in England. This is an emotive issue for many, but an economic analysis is useful in defining the real issues.
There are some simple claims that can be quickly dealt with.
- “Education is far too important to be left to the mercy of profit-making companies.” Education is undoubtedly very important, for long-run growth, for social mobility, and for personal well-being. But think about possibly the most elemental of human needs, the production and distribution of food. While this is regulated by government, we are happy to leave all the decisions to profit-making companies. No-one seriously advocates the nationalisation of food.
- “It just won’t work.” It clearly does at a general level. Countries around the world, including those with well-regarded education systems such as Sweden, allow profit-making schools.
- “No-one should make money out of education.” Obviously they do at the moment: schools buy things from profit-making companies. This has to be the case unless schools are going to start making their own books, desks and computers. So the real issues are (1) what kind of deal can schools get to minimise profiteering, and (2) what services are best bought in from outside as opposed to provided by the school itself.
The appeal of allowing profit is the view that it makes decisions matter more. It provides strong rewards to organisations to innovate, to raise quality, and to do things more efficiently. Crudely, on a per-unit basis, organisations are pushed to improve quality and therefore revenue, or to reduce cost.
What would be the effects of this in the current education system in England? To answer this, we need to think about the parameters of the market.
Start with revenue. Schools get revenue for having students on the books. It is more or less a per-capita fee, albeit with some extras and some adjustment by the local authority (for community schools). But to adopt the language of business, this money is for processing the students. The revenue that the school receives for each student does not depend at all on the progress that the student makes.
This is central to the issue. Given the current system, there is nothing that profit maximising schools could do to raise their revenue per student by raising quality. Immediately, a great deal of the appeal of profit-making is removed.
The only way that schools could make profits is by driving down costs. This may be fine; it may be that this doesn’t really affect the quality of education if done in a smart way. If not done in a smart way, the quality of education would suffer and attainment would fall. It is clear that even the optimistic scenario does not improve education systemically in any way, either statically or dynamically through encouraging entry. The quality of education is the same, and the overall cost to the taxpayer is necessarily the same.
The counter-argument is that the pressure for profit might reduce slack enough so that the fall in costs allowed for both profits and an increase in money spent wisely, so that attainment increased. For this to work, it has to be that school budgets are currently spent very unwisely, and that an outside organisation could identify and cut ‘bad’ spending, take some profit and raise ‘good’ spending. It is certainly true that there is a huge amount of idiosyncratic variation in school financial decisions, variation that is unlikely to all be the result of optimal decision-making. Schools either know how to better deploy their budgets but are not sufficiently incentivised to do so, or they do not know. If they do not know, it is unlikely that outsiders will know either (other schools may know, but that is another issue, and one only very clumsily mimicked by profit-making). Profit-making may answer the first point, but so do two other approaches, discussed below.
So profit-making is pointless at best: under the current market set-up, improvements in attainment would not make money (so would not happen) with profit-making schools, and cutting costs would make money but would either reduce attainment or leave it unchanged.
There are alternative strategies that might get some of the benefits of the innovative drive that profits might unleash, but in a more productive way: paying for attainment and incentivising cost reductions through resources for the school.
Paying for attainment. A positive step that keeps the current non-profit system intact but provides some of the same incentive is tying schools’ revenue to their pupils’ attainment. This would be straightforward to administer in principle, but there are some critical issues to resolve before it could be implemented. Chief among these is: should we pay for the simple ‘output’ of the school (GCSE points) or for pupil progress? There are good arguments both ways, to be visited in another post. Of course, schools do much more than produce attainment, but this is the focus of policy.
Incentivising greater efficiency in other ways. What if any surplus generated by this process had to be re-invested in the schools? Perhaps schools need some strong incentive to reduce costs. This might well be true, but this is not profit-making: profit-making by definition means the taking of monetary reward out of the school. An alternative scheme would be essentially equivalent to a team (school)-based incentive scheme in which the incentive is not money for the teachers, but resources for the school – resources saved are kept in the school. This is again potentially a good idea, worth looking at and some way short of profit-making.
Profit-making in schools would neither solve all schools’ problems nor signal the end of civilisation; the issue provokes strong feelings, but largely misses what should be the central policy concerns. Big gains in levels of attainment depend on raising average teacher effectiveness, and big gains in equity depend on weakening the importance of proximity as an admissions rule and on changing the allocation of effective teachers across schools. None of these would be strongly or directly affected by for-profit schools. However, there are certainly merits in piloting policies that link a school’s revenue per student to the progress of that student, and that incentivise cost reductions by keeping the surplus in the school.
Free to choose?
Greater choice and competition in healthcare is a popular reform model. This column discusses recent research suggesting that once restrictions on choice in the UK’s NHS were lifted, patients receiving cardiac surgery became more responsive to the quality of their care. This saved lives and gave hospitals a greater incentive to improve quality.
A central plank of the NHS reforms implemented by the UK Labour government of the 2000s was the introduction of patient choice. For the first time in the history of the NHS it was mandated that patients should have a say in the choice of hospital when being referred for an elective treatment. Rather than relying entirely on their general practitioner (GP), patients were now offered a set of five hospitals to choose from. At the same time, GPs were no longer tied to a particular hospital through selective contracting agreements and could refer patients more easily to any available hospital in the country. The intention behind the reform was to make referrals more responsive to hospital quality. The argument was that this in turn would increase hospitals’ incentives to improve quality. Although most economists subscribe to the idea that more choice always constitutes an improvement, things are slightly more difficult in the case of healthcare. Contrary to most consumer goods markets, evaluating hospital quality is not a trivial task, and patients might indeed find it hard to pick the best hospital for a particular treatment. This is presumably even more of an issue in the UK, where patients are not accustomed to making choices in the context of healthcare.
Do referrals respond to quality?
We set out to analyse whether referral patterns did indeed become more responsive to quality after the introduction of the reform (Gaynor, Propper and Seiler 2012). As a first test for whether this is the case, we analyse whether relatively better hospitals attracted a larger number of patients for one particular procedure: elective coronary artery bypass graft (i.e., heart bypass surgery). We look at the relationship between hospital quality, as measured by patient survival rates, and market shares separately for the time periods before and after the reform. Interestingly, we find that market shares are not correlated with hospital quality pre-reform; however, they show a significant correlation with patient survival in the post-reform period. This gives us a first piece of evidence suggesting that patients were indeed allocated to relatively higher-quality hospitals after the reform (but not before). The magnitude of the effect of quality on market shares is economically significant. Post-reform, a one percentage point lower mortality rate led to the hospital attracting 20 more patients every year. This corresponds to about a 5% increase in market share.
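The aggregate test described here amounts to regressing hospital market shares on survival rates, separately for the two periods. A minimal sketch with synthetic data can illustrate the pattern; every number below (hospital count, survival range, slope sizes) is an illustrative assumption, not an estimate from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def share_quality_slope(survival, shares):
    # OLS slope of market share on survival rate (with an intercept)
    X = np.column_stack([np.ones_like(survival), survival])
    beta, *_ = np.linalg.lstsq(X, shares, rcond=None)
    return beta[1]

n = 50                                        # hypothetical number of hospitals
survival = rng.uniform(0.95, 0.99, n)         # patient survival rates
noise = rng.normal(0, 0.5, n)                 # idiosyncratic share variation

# Pre-reform: shares unrelated to quality; post-reform: quality attracts patients
shares_pre = 2.0 + noise
shares_post = 2.0 + 5.0 * (survival - survival.mean()) * 100 + noise

print(round(share_quality_slope(survival, shares_pre), 1))
print(round(share_quality_slope(survival, shares_post), 1))
```

A flat (statistically indistinguishable from zero) slope pre-reform and a steep positive slope post-reform is the qualitative signature the authors report.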
In a second step we model patient behavior at a more micro level. Instead of analysing only aggregate shares of patients at each hospital, we model hospital choice individually for each patient. This allows us to incorporate the effect of the patient’s location relative to the hospital as well as to analyse how reactions vary across different patient groups.
We find, also at this level of analysis, that patients became more sensitive to the quality of service as measured by patient survival. However the effect differs substantially across patient groups. Our results show that more severely ill patients react more strongly to the reform, i.e. they are even more likely than the average patient to end up at a high-quality hospital post-reform. In other words, we see the reform having the strongest effect for the group of patients that is presumably most in need of high-quality treatment. Similarly, we find a stronger effect of the reform on patients who reported in a survey that they were informed about the choice reform at the point of referral.
Finally, we also analyse whether poorer patients reacted differently to the reform. A major concern of skeptics of the reform was that only affluent, well-educated patients would be able to process the necessary information and make an educated choice. According to this logic the reform would effectively increase inequality in the quality of healthcare. Our analysis, however, shows that these concerns were fortunately unfounded. We find that poorer patients reacted no differently from other income groups to the introduction of choice. If anything, they reacted slightly more strongly, but the difference is not statistically significant, i.e. we cannot confidently tell it apart from noise in our data.
We then employ our statistical estimates to quantitatively evaluate the impacts of the reform. We perform the following thought experiment. We compare the actual hospital choices patients made post-reform with the hypothetical choices the same set of patients would have made prior to the reforms, i.e., had referral patterns not changed in response to the freeing of choice. Because we know the sensitivity of referrals to quality pre- and post-reform, we can calculate the probability of visiting each hospital that is available to the patient under either level of responsiveness to quality. Artificially depriving patients of the benefits of the reform allows us to see to what extent patients would have ended up in lower quality hospitals in the absence of patient choice. Using the hospitals’ patient survival rates (adjusted for differences in case-mix, i.e. the severity of the cases treated at each hospital) we find that nine fewer patients (relative to slightly over 300 deaths a year, i.e. around 3%) would have survived every year had the reform not been implemented.
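This counterfactual can be sketched as a simple choice-model simulation: compute each patient's hospital choice probabilities under a low (pre-reform) and a high (post-reform) weight on quality, and compare expected deaths. The conditional-logit form and every parameter below are illustrative assumptions, not the estimated model from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 5 hospitals, 1000 patients choosing on distance and quality
n_hosp, n_pat = 5, 1000
mortality = np.array([0.02, 0.03, 0.04, 0.05, 0.06])  # case-mix-adjusted rates
distance = rng.uniform(1, 50, size=(n_pat, n_hosp))   # km to each hospital

def expected_deaths(quality_weight):
    # Conditional logit: utility falls with distance and with mortality
    u = -0.05 * distance - quality_weight * mortality[None, :] * 100
    p = np.exp(u) / np.exp(u).sum(axis=1, keepdims=True)
    return (p * mortality[None, :]).sum()

deaths_pre = expected_deaths(0.0)    # referrals ignore quality (pre-reform)
deaths_post = expected_deaths(0.5)   # referrals respond to quality (post-reform)
print(round(deaths_pre - deaths_post, 1))  # lives saved by reallocation alone
```

The gap between the two expected-death totals is the reallocation effect the paper quantifies: fewer deaths purely because referrals shift towards lower-mortality hospitals, even with hospital quality held fixed.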
Unexpected drop in mortality rates
The drop in mortality post-reform is an important effect of the introduction of patient choice that was not emphasised by policymakers. Even if the increased responsiveness of referrals to quality did not change hospitals’ performance in any way, the reallocation of patients still leads to better health outcomes. This is due to the fact that patients now visit on average a higher quality hospital from the existing distribution of quality across hospitals.
The hope of the reform was that the quality distribution itself would change due to hospitals’ increased incentives to improve quality in order to attract patients. We next analyse whether there is reason to believe that such a change in hospital quality might have happened. To get a sense of the effect of the change in referrals on hospitals, we compute the change in patient admissions to each hospital if patients had been choosing their hospitals pre-reform according to post-reform referral patterns. This thought experiment – what would have happened had the reform been adopted earlier – gives us a direct sense of how much competitive pressure hospitals found themselves facing when the choice reform kicked in. When calculating the change in market shares we find a very substantial impact for some hospitals, with one hospital losing almost 10% of its market share due to the effects of the reform. There is substantial heterogeneity in the impacts across hospitals however, with most hospitals experiencing more modest changes in market share of around 2% or 3%. There is therefore good reason to believe that at least a significant subset of hospitals had substantial incentives to improve quality in order to retain (or enhance) their market shares of patient admissions.
Do hospitals pay attention to the new system?
In a final step we directly analyse whether there is any evidence of hospitals reacting to the change in referrals by increasing quality, as measured by a fall in mortality rates. To this end we look at whether hospitals that faced the strongest pressure post-reform (as measured by the potential loss in admissions) saw a bigger decline in their mortality rates than other hospitals. We do indeed find that this is the case. Hospitals which had the biggest increases in the responsiveness of patient admissions to their mortality rates had the largest declines in mortality rates, and vice versa. This result closely mirrors related work by Cooper et al. (2011) and Gaynor et al. (2012) who show that areas with more competition experienced a larger increase in patient survival rates after the introduction of patient choice.
In summary, we assess that the reform reduced cardiac bypass surgery mortality by 3% by re-allocating patients to better hospitals. This is clearly a lower bound on the beneficial effect one might expect from allowing choice, as we look only at the effect for one particular procedure. Secondly, we find evidence suggesting that hospitals responded to increased choice by improving their quality. If this is mirrored as a hospital-wide effect, there may be substantial additional benefits for patients. Finally, our findings add support to earlier evidence indicating that the choice reforms led to falls in mortality in other treatments and to shorter lengths of stay without increasing total hospital costs (Gaynor et al. 2012; Cooper et al. 2011), and also to work by Bloom et al. (2010) indicating that more competitive environments have better management practices, which are in turn associated with better hospital performance.
Bloom, N, C Propper, S Seiler, J van Reenen (2010), “The Impact of Competition on Quality: Evidence from Public Hospitals”, NBER Working Paper, 1630.
Cooper, Z, S Gibbons, S Jones, and A McGuire (2011), “Does Hospital Competition Save Lives? Evidence from the English Patient Choice Reforms”, Economic Journal, 121 (554), F228-F260.
Gaynor, M, R Moreno-Serra and C Propper (2012), “Death by Market Power: Reform, Competition and Patient Outcomes in the British National Health Service”, forthcoming in American Economic Journal: Economic Policy. See also University of Bristol Centre for Market and Public Organisation Working Paper 10/242.
Gaynor, Martin, Carol Propper and Stephan Seiler (2012), “Free to Choose? Reform and Demand Response in the English National Health Service”, CEP Discussion Paper, 1179, November.
Author: Simon Burgess
Teacher performance pay without performance pay schemes
Amid the macroeconomic gloom, the Autumn Statement contained a line about teachers’ pay. The School Teachers’ Review Body recommends “much greater freedom for individual schools to set pay in line with performance”. Consultations and proposals are expected in the near future.
But simply giving schools the freedom to do this may be a rather forlorn hope of anything much happening. It is not clear that there is a substantial demand from schools for performance-related pay (PRP) schemes that has only been thwarted by bureaucratic restrictions. It is hard to see high-powered, tough-minded PRP schemes being introduced by more than a handful of schools, not least because we have not seen large scale deviations from national pay bargaining in academies in England despite their new freedoms to do so.
If that path seems unpromising, there are other ways of facilitating a greater reflection of performance in pay, discussed shortly. But first – is PRP for teachers a good idea in the first place? Does it raise pupil attainment? What are the ‘side effects’?
This is a question that economists have produced a good deal of research on. And to summarise a lot of diverse work briefly, the international evidence is mixed. Those on both sides of the argument can point to high quality studies by leading researchers that find substantial positive effects, or no effects. In both cases, interestingly, there appeared to be little evidence of gaming or other unwanted effects of the incentives.
There is little evidence specifically for England. Our own research found a substantial positive effect of the introduction of a PRP scheme, but given the varied results found elsewhere it would seem unwise to place too much weight on this one study. The underlying performance pay scheme was poorly designed but nevertheless had a positive effect on the progress of pupils taught by eligible teachers relative to ineligible ones.
And design is key. There are many reasons why a simple high-powered incentive pay scheme might be detrimental to pupil progress, which we have discussed here and here. These include the fact that teachers have multiple tasks to do, the problems of measuring the outcomes of some of those tasks, the complex mixture of team and individual contributions, and the potential impacts on intrinsic motivation. The overall message is that incentives work, but schemes have to be very carefully designed to achieve what the schemes’ proponents truly intend.
There is another way to facilitate a closer link between pay and performance that does not require any school to introduce a performance pay scheme.
Published performance information in a labour market can change the way that the market rewards that performance. The critical features are first that the organisation’s own output depends in an important way on this performance characteristic of an individual; second that the organisation has some discretion in the pay offers it can make to new hires; and thirdly that the performance information is public – is available and verifiable outside the current employer. In this case, the pay structure of the market will reflect the performance rankings: high-performing individuals will be paid more.
In teaching, the first two of these three conditions are met: teacher quality matters hugely for schools, and schools have some discretion over pay. Now, suppose we had a simple, useful and universal measure of each teacher’s performance in raising the attainment of her pupils (obviously we don’t at the moment; I come back to this below), and that this was published nationally, primarily for the attention of Headteachers. The idea is that Headteachers trying to improve the attainment of their pupils would be on the look-out for high performing teachers when they had a vacancy to fill. Armed with this performance information, they might try offering a higher wage (or something else – it doesn’t have to be money) to tempt them to join their own school. Equally, the teacher’s current school may respond by raising the offer there. Over time, this process will tend to raise the relative pay of high-performing teachers relative to low-performing ones, whom no-one is trying to bid for.
This idea should not be a strange one. A number of professions have open measures of performance. Just today it is reported that performance measures for more surgeons will be made public in the summer of 2013; this is already true for heart surgeons.
It is well-known that PRP does two things: it motivates and it attracts. The outcome for pay described here will tend to make teaching more attractive to people who are excellent teachers and less attractive to those who aren’t.
There are a number of problems with this idea, though perhaps fewer than might appear at first glance. First, it could be argued that a performance measure derived from teaching in one school is not relevant to teaching in another school. Obviously each child and each school is unique, but it seems very unlikely that there is no commonality of context between one school and the next. Observation suggests this: teachers moving from one school to another are not counted as having zero experience, and Headteachers are often appointed from outside a school.
Second, there might be a fear that the teacher labour market would become chaotic, with everyone churning around from school to school in search of a quick gain. We have to recognise that there is substantial turnover of teachers now (http://www.bristol.ac.uk/cmpo/publications/papers/2012/wp294.pdf). But the main point is that it does not require much actual movement to make the market work. Schools can make counter offers to try to retain their star teachers and the end result is the same – higher salaries for high-performing teachers.
Third, any measure would be noisy, partial and imperfect. Of course, all such measures are. Whether a measure is perfect is not really the question; the question is how noisy and imperfect it is, and whether it contains enough information to be useful. One advantage in this case is that the consumers of these performance indicators are the people best able to judge their usefulness and their shortcomings: Headteachers. If such metrics are not useful, Headteachers will simply ignore them; there would be no compulsion to use them. Even in labour markets with some of the most detailed and finely measured performance indicators (for example, football or baseball) there are many moves between employers that do not work out. It is worth re-emphasising that these performance measures are bound to be imperfect and incomplete, but broad measures of performance may nevertheless be very useful.
There are useful parallels to be drawn from another profession: academics. For academics, the combination of very detailed and public performance information and a context where research performance matters a great deal to universities seems to have had a substantial effect on academics’ pay.
The Research Assessment Exercise (RAE) and more recently the Research Excellence Framework (REF) have made a strong research performance very important to a university’s standing and its income. But the critical factor for academics is that an individual’s research performance is public knowledge, through very detailed recording of the impact of their research papers. Departments and universities aiming to improve their ranking seek out star researchers and attempt to bid them away with higher salaries (plus other things such as research facilities). These offers may well be matched by their current employer, but the end result is that salaries now seem to be much more closely correlated with research productivity than before the RAE/REF (I say “seem” as there does not appear to be any evidence on this, so this is casual empiricism). This is a lot of what drives many young researchers to put in very long work hours: having a paper published in a top scientific journal early in a career has a substantial lifetime payoff even in a world with few or low-powered incentive schemes. If you check out academics’ websites you will invariably see their academic output prominently displayed.
Again, an important feature is that these indices of research output are largely consumed by other academics who are aware of their strengths and weaknesses. So although they are far from perfect, they are used by precisely the people best placed to calibrate their usefulness appropriately.
If we are to go down a path of tying teacher pay more closely to performance, and yet respect the rights of increasingly autonomous schools to determine their own pay systems, then this might be an option to consider. The challenge is to devise a measure that is simple, useful and universal. It would measure the progress made by the pupils that teachers taught, it would have to deal with normal variations in performance by averaging over a number of classes and a few years, and be on a common metric. This is not straightforward, but if it gave rise to a robust broad measure of performance it could form a part of performance pay for teachers, and performance management more broadly. It could also have substantial effects on the pay of high-performing teachers.
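The case for averaging over several classes and a few years is just the statistics of noisy measures: averaging shrinks the noise and raises the correlation between the measured effect and a teacher's true effect. A small simulation makes the point; the teacher count, noise level and class counts are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

n_teachers = 500
true_effect = rng.normal(0, 1, n_teachers)  # teacher's true impact on pupil progress

def measured(n_classes, noise_sd=2.0):
    # Average of noisy per-class value-added estimates for each teacher
    noisy = true_effect[:, None] + rng.normal(0, noise_sd, (n_teachers, n_classes))
    return noisy.mean(axis=1)

def reliability(est):
    # Correlation between the measured and the true effect
    return np.corrcoef(true_effect, est)[0, 1]

one_class = reliability(measured(1))   # a single class: very noisy signal
many = reliability(measured(10))       # e.g. several classes over a few years
print(round(one_class, 2), round(many, 2))
```

With per-class noise twice the size of the true variation, a single class gives a weak signal, while averaging over ten classes makes the measure substantially more informative; this is the sense in which a "robust broad measure" requires pooling across classes and years.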
Author: Christiern Rose
War on drugs
Last week in the Times, Richard Branson, the billionaire founder of the Virgin Group, called for an end to the ‘war on drugs’, arguing that only the ‘naive’ have not realised that this policy has failed. In fact, it may be far worse than that – the evidence suggests that the current weapons used in this war are actively making the situation worse.
Each year, the United States government spends billions of dollars to wage the “war on drugs”. The phrase – first coined by Richard Nixon in 1971 – encompasses an arsenal of supply-side policies with the objective of restricting drug availability. Interventions include regulating precursors, dismantling production facilities, seizing shipments in transit and incarcerating suppliers.
The facts speak for themselves. Between 1985 and 2011, the Drug Enforcement Administration’s real annual budget rose from $756 million to just over $2 billion, whilst the number of special agents more than doubled. Over the same period, the street price of 0.1 grammes of crack cocaine fell from $130 to just $30, whilst purity fell from 90% to 58%. The upshot is that crack is more affordable than ever. A number of empirical papers show that seizures of narcotics have no impact on affordability (DiNardo 1993; Caulkins and Yuan 1998; Rose 2012).
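The affordability claim follows from a back-of-the-envelope calculation using the figures quoted above: even though purity fell, the price of a pure gramme fell much further.

```python
# Price per pure gramme of crack cocaine, from the figures quoted above
price_1985, purity_1985 = 130, 0.90   # $ per 0.1g unit at 90% purity
price_2011, purity_2011 = 30, 0.58    # $ per 0.1g unit at 58% purity

def price_per_pure_gramme(price_per_unit, purity):
    # each unit is 0.1 grammes, of which only `purity` is actual cocaine
    return price_per_unit / (0.1 * purity)

p85 = price_per_pure_gramme(price_1985, purity_1985)
p11 = price_per_pure_gramme(price_2011, purity_2011)
print(round(p85), round(p11))  # → 1444 517
```

A pure gramme that cost roughly $1,444 in 1985 cost roughly $517 in 2011, a fall of around two-thirds in real price despite the drop in purity.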
The drug war is justified by invoking elementary theories of supply and demand: by restricting supply, enforcement policies aspire to drive up street prices and reduce consumption. In this framework, disruptive policies are akin to the taxation of alcohol and tobacco products. I argue that the failure of the drug war has its roots in the fallacy of the elementary model in this context.
The elementary framework relies on buyers and sellers having perfect information. This is violated in at least two dimensions. First, the seller may “cut” their product with visually identical adulterants. For example, retail cocaine is frequently diluted with caffeine powder, available at a fraction of the cost of wholesale cocaine. Buyers do not observe purity prior to trade; hence the seller faces a strong incentive to rip them off. In such an environment, all that prevents the seller from always providing zero purity drugs is the knowledge that doing so will damage his reputation, reducing future demand for his product. Second, buyers do not perfectly observe supply disruption; whether or not his shipment was seized is known only by the seller.
These characteristics have substantial implications for the impact of seizures on retail market conditions. Scarcity of wholesale narcotics provides sellers with a strong incentive to dilute their product, in spite of the harm done to their reputation. If sellers respond by cutting, seizures act to reduce purity today and prices tomorrow. Consequentially, in the long-run, increased enforcement results both in lower prices and lower purity. The empirical evidence is consistent with this mechanism: estimates show that a standard deviation increase in the number of cocaine seizures causes both price and purity of crack cocaine to fall by around 4%. Moreover, since 1985, both price and purity have fallen steadily whilst DEA expenditure has risen. Finally, seizures have no effect on the amount of pure cocaine that may be purchased for a given price.
If the objective is to reduce the pure quantity of drugs consumed then supply disruption is successful by definition. However, if the objective is to curb initiation, price reductions are undoubtedly an undesirable outcome. Moreover, if cutting agents are harmful, purity reductions may pose health risks to users. The medical literature suggests that an increasingly popular cocaine adulterant is Levamisole, which causes long term harm to the immune system. Whether demand side policies perform any better is uncertain, though work by Rydell & Everingham (1994) suggests that this may be the case.
DiNardo, J (1993), “Law Enforcement, the Price of Cocaine and Cocaine Use”, Mathematical and Computer Modelling, 17(2), 53-64.
Yuan, Y and J P Caulkins (1998), “The Effect of Variation in High-Level Domestic Drug Enforcement on Variation in Drug Prices”, Socio-Economic Planning Sciences, 32(4), 265-276.
Rose, C (2012), “The Impact of Supply Disruption in the War on Drugs: Can Seizures Raise Prices?”, Working Paper.
Rydell, C P and S S Everingham (1994), Controlling Cocaine: Supply versus Demand Programs, Vol. 331, RAND Corporation.
Author: Sarah Smith
A fall in giving
CAF and NCVO have today published the latest UK Giving report showing a decline in donations to charity. The estimated total amount donated to charity by adults in 2011/12 was £9.3 billion, a decrease of £1.7 billion in cash terms, and a decrease of £2.3 billion in real terms, compared to 2010/11.
Much of this decline is likely to be attributable to the ongoing economic climate. Looking at historical data, we know that donations were fairly resilient in previous recessions in the early 80s and early 90s. But this recent recession has lasted much longer and now appears to be hitting giving hard. In the past, donations have also tended to rise strongly when the economy grows, so let’s hope this bodes better for giving bouncing back in the future.
However, analysis that I recently did for CAF points to clear generational patterns in giving that may be more worrying for the prospects for donations. The research highlights a divide between pre- and post-war generations in terms of trends in giving. Among pre-war generations, there was a clear tendency for subsequent generations to be more likely to give at each age than their predecessors, and to be more generous. Among post-war generations, these trends – particularly in the proportion giving – have been going in the other direction. As a consequence of these generational changes, the giving population is ageing. Thirty years ago, around one-third of donations came from the over-60s. Today it is more than half.
A number of commentators have questioned these findings. In the discussion that followed the report’s publication, a number of points were raised about the analysis, all of which were legitimate, but none of which invalidated the research findings.
First – it was argued that we would expect some ageing of the donor population since the general population has been ageing. This is true, but the donor population is ageing faster than the general population. As noted in the original report, the share of giving done by the over-60s has been rising much faster than their share of total spending.
Second – the analysis focused on giving at the household level since many couples make joint decisions about giving. The rise in single-person households means that the composition of households today is not the same as it was thirty years ago. But the same trends in giving are present if the analysis is done at the individual – not the household – level. The generational divide is not something that can be explained by the rise in single-person households.
Third – much of the media analysis focused on low levels of giving among young households (in their 20s and 30s). This led many to point out – quite rightly – that people in their 20s and 30s today face many new financial pressures that their predecessors did not – from student debt to high house prices. But the report is clear that the generational divide is one between the pre- and post-war generations, not something that is unique to today’s 20 and 30-somethings. People in their 50s today (the 1960s baby boomers) are less likely to give than today’s older households were at the same age.
A number of factors may explain the generational divide – including changing religiosity, wider trends in civic participation (interestingly, other people have found similar trends when they have looked at voting, for example) and even the growth of the welfare state, which for some people reduces the rationale for giving to charity. It is hard to say for sure why the post-war generations are less likely to give than their pre-war predecessors, but it is important to bear these long-term trends in mind when looking at the latest dip in giving.
Author: Paul Gregg
The UK Employment miracle and productivity catastrophe
There has been a great deal of discussion of Britain’s recent employment and productivity record, which is regularly asserted to have baffled economists and the Bank of England. In essence the conundrum is that employment has recovered to its pre-recession peak whilst the recovery in output has been very limited and stands well below (4%) that peak. This extended fall in productivity (making less with the same workforce) stands in massive contrast with previous recessions and recoveries, where productivity growth was strong in the recovery. The figure below, drawn from the recent ONS report (Peter Patterson, The Productivity Conundrum, Explanations and Preliminary Analysis, ONS, 2012), shows that the productivity gap compared with the 1980s and 90s recessions stands in excess of 15% – a massive underperformance in terms of productive potential. A similar underperformance is apparent if we compare where we are with pre-recession trends. In the past, here and abroad, a loss of output relative to trend as a result of a recession has subsequently been unwound through a catch-up period of above-trend growth. This hasn’t happened, and a similar story is true of other countries. Whether any of this lost potential is recovered in the near future thus rests on why we have underperformed recently. This is actually easier to explain than the media description of baffled economists implies.
Productivity Levels compared to Pre-recession peak in four UK recessions.
The first question concerns data reliability. Could some of the paradox be down to measurement problems? Certainly tomorrow’s GDP numbers for Q3 of 2012 will show a return to growth after last quarter’s numbers, which were suppressed by the Queen’s Jubilee bank holiday, and will suggest the economy has been just about growing over the past six months, but this won’t make a dent in the sustained underperformance described above.
As mentioned previously, employment has recovered to the pre-recession peak, but unemployment remains very high. This apparent paradox is easily explained. Right through the recession, employment among the over-65s has grown quite rapidly. Older workers are not retiring as they used to, pushed by changes to retirement rules which encourage longer working and penalise early retirement and, for women, by the rising state retirement age. Compared to early 2008, employment of the over-65s stands 250,000 higher, and a similar magnitude of extra employment has occurred among women between 60 and 65. Nearly all of this extra employment among older workers is either part-time or self-employed, and often both. When we add in a growing population, the proportion of the working-age population in work stands at 71.3%, still well below the pre-recession peak of 73% – a shortfall of nearly half a million jobs. Further, there has been a sharp trend of more people wanting to work, especially among those aged between 50 and 60. This has been going on for a decade now, but until recently it was offset by more people studying when young, so the share of the population wanting to work was constant at just under 77%. Over the last two years this increase in students has stopped – perhaps because of a surge in the immediate recession period, or a response to policy changes – but either way it is adding another quarter of a million to the workforce. So whilst employment has reached previous peaks, the need for work stands considerably higher, with a deficit of a million or so jobs. With more and more people wanting work, especially among older workers, we need to add at least 250,000 jobs a year just to stand still, and the employment recovery is thus not as good as it first appears. Allied to this is the rise in the numbers working part-time who want to work full-time, which is called underemployment.
The numbers who are underemployed stand at 1.4 million, a shortfall which, when combined with 2.5 million unemployed, suggests huge unmet need for work. However, although most of the jobs created since 2009 are part-time, fewer hours worked account for only about 1% of the productivity decline since 2008 (ONS, op. cit.).
It has been suggested that there may have been under-recording of employment before the recession, with a large number of migrants not being captured, and that these marginal workers have since lost work and left the country, again unrecorded. There are a number of major problems with this argument. First, our data on employment come from two very different sources, one based on households and the other on firms. Neither questions the legitimacy of an immigrant’s status, so there is no incentive to hide migrants; and while in terms of residence these migrants might be hard to find and perhaps reluctant to reply to a survey, employers have no incentive to hide these workers, and both surveys tell the same story about employment. Moreover, if the firms using this labour are not tracked by the ONS then they will be present in neither the output data nor the employment data, and hence cannot explain the paradox. In addition, it requires a huge number of missing migrants to explain the gap – at least 8% of the workforce – and that all these workers lost their jobs with the recession. If only one in five lost their jobs, compared to one in twenty in the rest of the population, the missing group would need to be equivalent to 40% of the workforce. This is just implausible.
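The back-of-envelope arithmetic behind that implausibility claim can be made explicit:

```python
# "Missing migrants" hypothesis: how many uncounted workers would it take?
# To explain a gap of ~8% of the workforce through uncounted migrants who
# ALL lost their jobs, the missing group must itself equal 8% of the
# workforce. If instead only 1 in 5 of them lost their jobs (against
# roughly 1 in 20 in the rest of the population), the group must be
# five times larger.
gap_share = 0.08       # employment gap to explain, as a share of the workforce
job_loss_rate = 1 / 5  # assumed job-loss rate among the missing migrants

required_group = gap_share / job_loss_rate
print(f"Missing migrants needed: {required_group:.0%} of the workforce")
```

Even a modest job-loss rate among the hypothesised group forces it up to 40% of the workforce, which is why the measurement story fails.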
So the employment story is clear: there has been an employment recovery, verified in a number of sources, but this recovery has not met the increased demand for work in the population, leaving 3.9 million unemployed or underemployed. The output side of the story also appears validated by tax receipt data. The ONS report shows how tax receipts – VAT, PAYE and in total – track nominal GDP, the total size of the economy measured at current prices, well. However, when we talk about output we take out the effects of inflation, so there could be a concern that the effects of inflation have been overstated and there is more real output out there than estimated, with smaller price increases. This argument is not supported by the other inflation measures we have: the Consumer Price Index and the Retail Price Index (which also includes housing costs) both show that inflation has been strong through this recession. So measurement can explain only a very small part of the story of economic underperformance alongside a good employment performance.
This leaves a serious paradox, so where can the explanations lie? The first argument is one I would have made two years ago: that firms were hoarding labour in the face of recession. In the first phase of a downturn, firms that are profitable will hold on to valuable labour, with its skills and experience, in the expectation that it will be needed again in a year or two as demand returns. Only if a firm is in acute financial distress, when its very survival is at stake, does it shed skilled labour. This was thus eminently plausible in the early part of the recession. But firms hoarding labour would not recruit new staff to replace those who leave through natural wastage – moving to a new job, retiring and so on. Hence such labour hoarding should start to unwind even if there is no economic recovery. Instead we have seen the reverse: increased employment without growth. Even though surveys show some firms hoarding labour, this should be a diminishing issue rather than a growing one. As such it cannot be the major explanation of current trends.
A variant on this is, if you like, firm hoarding, or what NIESR has described as ‘zombie firms’. The argument is that banks are not lending to new or expanding firms as they seek to rebalance their own finances. Equally, they are not forcing poorly functioning firms into bankruptcy, which would increase the bad debts held by the banks, and so these firms are being allowed to persist despite the implied misallocation of capital. The argument therefore runs that growing firms are expanding without capital, through increasing employment rather than investment, whilst poorly performing firms are employing workers but experiencing low productivity because of thin order books. This is attractive, and it is certainly the case that investment is low, but there is no evidence to date that this low-profitability zombie sector exists. Certainly overall firm profitability is high outside manufacturing, which continues to struggle. Profitability in the dominant service sector is only slightly lower than in the pre-recession period and has recovered two-thirds of the profitability lost during the recession, a loss which was itself quite muted compared to previous recessions. As such the evidence for zombie firms is not obvious; it requires an investigation of company-level data for further insight, but at first take it does not feel like a major factor.
Net Financial Balance of Private Non-financial Companies (from Peter Patterson, The Productivity Conundrum, Explanations and Preliminary Analysis, ONS, 2012)
The evidence of low investment is strong, however. Firms normally borrow money and invest in productivity-enhancing technology. Currently firms are saving rather than spending: they are net lenders, not borrowers. This is now evident on a very large scale, well above that seen in the last recession, at a massive 4% of GDP. The scale of this seems to point to healthy profits being saved rather than invested, as opposed to banks not lending to growing firms while trying to reduce the net debt position of struggling ones. So why would healthy firms choose not to invest?
The obvious answer is that in the current environment it is easier, cheaper and less risky to hire to meet demand than to invest. Real wages have fallen steadily since late 2008 and saw a large squeeze in the high-inflation burst of 2011. Overall, wages have fallen by around 8% since early 2008 in real terms (measured using the Retail Price Index), and this squeeze is more than enough to explain the fall in productivity since 2008: using a standard elasticity of demand for labour of around -0.5, an 8% fall in real wages would raise employment by about 4%. In a period of uncertainty about future demand, building cash reserves rather than investing for the long term is safer. Employing extra labour is easy to reverse if demand for the product turns out worse than expected: at complete variance with the rhetoric of parts of government and their advisors, and as is evident from firm behaviour, hiring workers is easy and low risk. By contrast, investments are largely irreversible and therefore inherently more risky. Hence, as labour becomes increasingly cheap and low risk, firms are choosing this route rather than replacing ageing infrastructure and machinery.
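The elasticity calculation in that paragraph is a one-liner, shown here as a sketch using the constant-elasticity approximation (percentage change in employment ≈ elasticity × percentage change in the real wage):

```python
# Employment response to the real-wage squeeze, constant-elasticity
# approximation: %change in employment ~= elasticity * %change in wage.
elasticity = -0.5        # standard own-wage elasticity of labour demand
real_wage_change = -0.08 # ~8% fall in real wages since early 2008 (RPI-deflated)

employment_change = elasticity * real_wage_change
print(f"Implied rise in employment: {employment_change:.0%}")  # about 4%
```

With output roughly flat, a 4% rise in employment translates directly into a 4% fall in output per worker, which is the first part of the productivity puzzle.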
The apparent divergence in productivity since 2008 compared to previous recessions is huge, but it can usefully be broken into two parts. The first is that employment has recovered to pre-recession levels despite output still being 4% below the peak. This is partly explained by employment composition moving toward part-time work, but mainly because labour has become increasingly cheap and low risk, and hence firms are substituting labour for capital. This is occurring because real wages have seen such a large cut over the last four years. Research I’ve undertaken with Steve Machin for the Resolution Foundation shows that this in turn is a combination of a slowdown in real wage growth that began well before the current recession and clear evidence that wages have become more sensitive to unemployment, such that the near-doubling of unemployment from 4.6% to 8.3% we’ve seen since 2008 results in real earnings being £750 lower now than would have been the case after a similar unemployment rise in the 1990s recession. These changes have their origins in the decline of trade unions, a reduction in the imbalance of unemployment across skill groups and regions, and welfare reforms which have meant that the competition for jobs is more intense. The larger part of the shortfall in productivity compared to past trends and recoveries, however, stems from low demand in the economy and the corresponding absence of investment to meet that demand.
The implications are threefold. First, employment will rise and unemployment will fall well before there is a recovery in wages and productivity. However, as unemployment falls, real wages will start to grow again as the heavy downward pressure on wages in the current labour market eases. This is the pattern already, and it will continue until unemployment is firmly on a sustained downward trajectory. When output starts to recover and real wages stop falling, firms will most likely start to invest the large cash surpluses they currently hold. This, of course, need not all be in this country: the location of this investment will in part depend on the quality of the available workforce and the geographical focus of world growth, with European economic recovery, including the UK’s, being most important for us. When this dam holding back investment breaks, a period of strong growth should follow, as investment and rising wages fuel growth, and hence perhaps two-thirds of the lost productivity will be recouped. However, in my view at least one-third of the 15-18% shortfall in productive potential is lost. The longer the period of high unemployment, falling real wages and low investment continues, the greater the damage that will be done.
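Putting numbers on that two-thirds/one-third split gives a sense of the permanent damage implied, as a quick sketch:

```python
# Rough split of the 15-18% productivity shortfall into a recoverable
# part (about two-thirds, recouped once investment resumes) and a
# permanently lost part (about one-third).
for shortfall in (0.15, 0.18):
    recouped = shortfall * 2 / 3
    lost = shortfall / 3
    print(f"shortfall {shortfall:.0%}: recouped ~{recouped:.1%}, lost ~{lost:.1%}")
```

On this view the permanent loss of productive potential is around 5-6%, growing with every further year of high unemployment and low investment.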
The policy prescription that follows is reasonably obvious. First, incentives for firms to invest now rather than in the future need to be enhanced. This need not cost the exchequer a lot – just a change in timing. Second, we need to boost other investment: in housing, in lending to small firms and in new low-carbon technologies. This could happen through the government borrowing more, through the Treasury acting as a guarantor for borrowing by housing associations to build more low-rent housing (so resources can be drawn from pension funds and the like), or through quantitative easing being invested through a national investment bank rather than used to try to manipulate bank finances. The government is feeling its way slowly to the second option whilst hoping that it doesn’t need to go further. Meanwhile Rome burns.
We recently saw the launch of the Bristol Pound. In this blog we thought we’d offer two perspectives: one from Michael Sanders outlining the challenges of community currencies, and one from Susan Steed, one of the co-founders of the Brixton Pound, outlining their potential benefits.
Local Currency Anti – Michael Sanders
Bolstering communities, saving the environment, boosting employment and making us all wealthier and society more equal, local currencies have a lot to live up to if their supporters are to be proven correct.
Bristol’s newly minted currency joins a raft of others across the UK, and it certainly appears that this is an idea whose time has come – but how do local currencies stack up against the claims made about them?
The good news is that areas with local currencies are more affluent than those without them. Unfortunately, this seems to be a result of local currencies failing in poor areas and surviving in affluent ones – places where, one might argue, they are less needed. In the little robust economic research that exists in this area, Krohn & Snyder (2007) find no evidence of any economic benefits.
By imposing restrictions on trade, even just psychological ones in the form of a weak commitment device, we limit the number of beneficial exchanges that can take place – although this may be good for the people with whom one is forced to trade, it is necessarily worse for those from further afield.
These two observations – that protectionism will tend to beggar one’s neighbour and that local currencies seem mainly to survive in affluent areas – suggest that the rich or middle class benefit at the expense of the poor.
As for the sociological benefits, which seem plausible, if likely to be endogenous, these too will be felt by those least in need of them: both social trust (vital to community cohesion) and well-being are highly socially graded.
These questions may be answered with a carefully designed evaluation, and the power of commitment devices and mental accounting has been shown to be strong in the past. More macro-level claims, such as the environmental benefits, are probably unverifiable. Nonetheless, the claims appear naive, ignoring, for all the talk of community and togetherness, the evidence found in history that humanity’s greatest achievements come when we engage with others, either collaboratively or competitively, but most often both – the market for personal computers could not have been sustained by a local hardware store in Seattle, and the people of Wapakoneta, Ohio did not put Neil Armstrong on the moon alone.
Local Currency Pro – Susan Steed
Complementary currencies exist to speed the flow of money around a local area, increasing the rate at which goods and services which exist locally are exchanged, and producing local economic well-being. Increasing the velocity of money locally does not represent a drain on the wider economy, or suggest that all goods and services should be relocalised; everybody could use local currencies and still have national and global systems of exchange. Bernard Lietaer suggests that local currencies enable trade which cannot always happen in national currencies because of their unique features.
For example, the Bristol Pound (£B) uses innovations in technology so you can pay quickly by text, something you can’t yet do with ‘normal’ pounds. This is an innovation that might become common in Sterling in time (“texting money” is big in Africa) but for now the closest the big banks can do is Barclays Pingit. But that needs a smart phone to send payments, where £B works with any basic mobile. Now traders don’t need to hire a credit card machine to make payments, or pay Visa’s cut of every transaction. For some purposes, the £B is better money than Sterling!
The £B is also about getting people to think about where they spend their money, where it goes and who benefits from it. Bristol is breaking new ground as the first local currency in the UK that you can bank with a credit union. Behavioural economics shows us that people need all the help they can get acting in their own long-term best interests, and a local currency is a constant reminder of other local efforts like credit unions. In Lambeth we’re part of a wider European project to evaluate whether local currencies do change people’s behaviour. A local economy is about more than a local currency, but a currency is a good connector.
The £B has a chain effect. Using the currency means businesses are committing to understanding where they respend their money. In Brixton we have found that the currency accelerates the growth of new enterprises by fostering tighter connections between businesses using it. Brixton pounds can now buy a share in locally produced energy or organic vegetables grown on an inner-city estate, or get your shopping delivered by a bicycle-powered delivery service.
We’re not anticipating a Brixton Mac to come off production lines soon, but Brixton has a thriving computer repair social enterprise that repairs systems in exchange for Brixton Pounds, and picks up businesses looking for a business-to-business use for their local money.
For me personally, the most interesting thing about local currencies is thinking about where the goods and resources that sustain my lifestyle come from, who produces them and what the real costs are. Modern technology means there are goods and services that we can consume and connect with globally, but other things that it makes more sense to produce closer to home. It’s not about one or the other but getting the right balance between the two.