Archive

Author Archive

It’s grim up north: but did it get grimmer with the recession?

March 18, 2014

Authors: Ron Johnston, David Manley and Nabil Khattab

At their party’s 2014 spring conference in York, eight northern Liberal Democrat MPs presented a report entitled Grim up North? Rebalancing the British Economy, using it as the basis for an argument that the forthcoming budget should pay greater attention to investment that would narrow the north:south divide. Using a range of data, they showed that ‘The South has, in every single category of economic affairs spending, been cut by less than the English average’ – with the obvious consequence that the North has suffered most, and yet it has the biggest problems.

Their argument was taken up by the media. The Guardian, for example, highlighted what the MPs identified as ‘fundamental unfairness’ in the coalition government’s treatment of the North – which was making it harder for the party to win seats there. The Independent backed up the story with a small amount of data indicating a north:south divide in both unemployment rates and house prices. A few weeks earlier, it had published an article by a leading economist and former member of the Bank of England Monetary Policy Committee, Danny Blanchflower, entitled The North is still not feeling this recovery, in which he said that ‘The South is seeing recovery and the rest of the country is being left behind’ – an argument based largely on house price changes.

These contributions – and many others like them – raise two important questions:

  • Is life grimmer ‘up North’ than elsewhere; and
  • Has it got even grimmer since the recession bit in 2008?

To address these questions, we use labour market data from the quarterly ONS Labour Force Survey; the data analysed are for the first quarter of each year between 2002 and 2013 inclusive. These are available for both the region in which the c.40,000 respondents each quarter live and (for some of the data) that in which they work. Where both are available, we use the latter. (In the data for place of work, Central London is separated from Inner London; it is not in the region of residence data.)

Rather than just use a single indicator of labour market health, we look at the following range:

  • The percentage of the workforce who are unemployed, according to ILO definitions – this is one of the commonest measures of local economic health;
  • The percentage of the unemployed who have been so for three months or more, often taken as a clear indicator of recession;
  • The percentage of those in employment who are working part-time – following the arguments of some of the coalition government’s political and other critics that many of the claimed ‘new jobs’ being created, especially in the private sector, are part-time, low-quality jobs;
  • Following that argument, the percentage of those working part-time who are doing so because they wanted, but could not find, a full-time job;
  • The percentage of people who are in jobs for which they are over-qualified, using conventional measures of that situation – as the labour market gets tighter, so more people (whether working full- or part-time) may find it necessary to take such jobs because nothing else is available;
  • The percentage of those aged 45-64 who have left the labour force, and are no longer either in work or seeking it (what some call discouraged workers) – given cuts since the coalition government took power in 2010, both in benefits and in who might be entitled to them, fewer people in those older adult age-groups may have found leaving the workforce a viable option post-2008; and
  • The median gross hourly income of those in work.

Data for all of these are presented in the tables below, for two groups of years: pre-recession (2002-2008) and recession (2009-2013). In all of the tables, we separate out: regions in the North of England; regions in the Midlands and East of England, plus the Southwest; London and the Southeast; and Wales, Scotland and Northern Ireland. For each indicator, we give the regional average for each of the two periods, plus the percentage change between the two. And in each column we highlight in bold the figures that are above the national average – given at the foot of each table.
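For readers who want to reproduce this kind of tabulation, a minimal sketch of the calculation is below. It assumes the LFS extract has already been reduced to one row per respondent, with illustrative column names (region, year, and a 0/1 indicator such as unemployed); the weighting and precise definitions behind the published tables are more involved.

```python
import pandas as pd

# Hypothetical respondent-level LFS extract: one row per respondent-quarter,
# with 'region' (of workplace where available), 'year' and 0/1 indicator
# columns such as 'unemployed'. File and column names are illustrative only.
lfs = pd.read_csv("lfs_q1_2002_2013.csv")

# Two groups of years: pre-recession (2002-2008) and recession (2009-2013).
lfs["period"] = pd.cut(lfs["year"], bins=[2001, 2008, 2013],
                       labels=["pre-recession", "recession"])

def regional_summary(df, indicator):
    """Regional average of a 0/1 indicator in each period (as a percentage),
    the percentage change between periods, and flags for values above the
    national average in each period."""
    regional = df.groupby(["region", "period"])[indicator].mean().unstack() * 100
    regional["% change"] = ((regional["recession"] - regional["pre-recession"])
                            / regional["pre-recession"] * 100)
    national = df.groupby("period")[indicator].mean() * 100
    for period in ["pre-recession", "recession"]:
        regional[f"{period} (above national)"] = regional[period] > national[period]
    return regional, national

unemployment_table, unemployment_national = regional_summary(lfs, "unemployed")
print(unemployment_table.round(1))
```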

If the conventional wisdom regarding a north:south divide is valid, then the figures in bold should be concentrated in two parts of the table – the first eight rows, covering regions in the North of England; and the last four rows, covering Wales, Scotland and Northern Ireland. Regions elsewhere – notably in London and the Southeast – should have few figures in bold, since they are supposed to have the more buoyant economies, and ‘conventional wisdom’ suggested that they were less affected by the recession.

Unemployment (Table 1)

The first set of three columns in Table 1 does not show a pattern of unemployment that conforms to expectations. Six of the eight northern regions had unemployment rates exceeding the national average of 4.9 per cent in the pre-recession period, with a peak of 6.5 per cent in Tyne and Wear; but both Inner and Outer London also had rates greater than the national average, whereas three of the non-English regions did not. Indeed, the unemployment rate in this period of relative prosperity was higher in Inner London than anywhere else in the country. Before 2009 the divide was between London and the North, on the one hand, and the rest of the UK on the other (with the West Midlands Metropolitan County and Strathclyde being the main outliers). In the subsequent period – 2009-2013 – the pattern was very similar: the unemployment blackspots were now more clearly in the northern metropolitan counties, but London was not far behind, still with rates above the national average.

But the third column suggests that it did get grimmer up north in relative terms. Ten of the twenty regions experienced a percentage growth in unemployment rates above the national figure of 57.1 per cent – and the two London regions were not among them. London and the Southeast suffered less from job losses in the recession than many other regions, it seemed – but so did Merseyside, the non-metropolitan northern regions and Scotland.

Long-term unemployment (Table 1)

London also experienced less growth than the average – 28 per cent – in the percentage of the unemployed who had been out-of-work for three months or more once the recession set in, although it was relatively high there in the pre-recession years. In many of the northern regions, plus Wales, Northern Ireland and Strathclyde, over 60 per cent of the unemployed post-2008 had been so for more than three months; in relative terms it didn’t get much worse there once the recession set in, but other parts of the country were catching up.

Part-time employment (Table 2)

London stands out (especially Central and Inner London) in these data on the geography of part-time working as having much lower percentages prior to the recession. The percentage working part-time nationally increased only slightly (by 6.6 per cent) during the subsequent recession years – and London had by far the highest rates of increase. If the recession forced more people into part-time work, therefore, this characterised London much more than areas further north; there, increased unemployment was the norm.

Working part-time out of necessity rather than choice (Table 2)

The LFS surveys ask those working part-time if they are doing so out of choice (perhaps because they are students or carers) or out of necessity: the latter wanted full-time work, but couldn’t find any. Just under 10 per cent gave that answer pre-recession but the percentage almost doubled after 2008. As with the geography of unemployment, the pattern shown by these data is not a simple north:south divide: both before the recession and after it set in there was a split between the South, on the one hand, and – on the other – the North, plus London, plus the four regions outside England. Exacerbation of the problem was not concentrated in the latter group of regions, however, but in those where relatively few part-timers were so employed out of necessity before the slump, notably in the Midlands and the Southeast (as well as Greater Manchester). What was a regional problem became more of a national one in recession conditions.

Over-qualification (Tables 3-4)

In a buyers’ labour market workers are more likely to feel constrained to take positions for which they are over-qualified than when the demand for labour – especially skilled labour – outstrips supply. This suggests that the percentage of those in work holding jobs for which they were over-qualified would: (a) be highest in those regions with higher levels of unemployment and under-employment; and (b) increase most in those where unemployment also increased. The data in Table 3 are consistent with that argument to a considerable extent.

For full-time workers, with a small number of exceptions (notably West Yorkshire Metropolitan County), holding a job for which you were over-qualified was a characteristic of over one-quarter of all employees in both periods in both the northern regions and those outwith England; the inter-regional differences were not too substantial, however, with only one percentage above 30 pre-2009 and none below 20. Although many of the northern regions with high percentages pre-2009 experienced an above-average increase in their percentages over-qualified, there was also some levelling-off of the regional differences, with large increases in London and the East of England. Among part-timers, on the other hand, most regions in the North of England experienced above-average growth in over-qualified workers – although the largest, from a relatively small base, was in Outer London. Among part-timers who wanted full-time work but couldn’t find it, there was an increase of nearly one-third between the two periods – with much of the increase being in London and the Southeast (Table 4).

Discouraged workers aged 45-64 (Table 4)

This is the one indicator where we anticipated a fall in the percentage involved – and it occurred: some 5 per cent fewer in that age group had opted out of the labour market post-2008 than was the case pre-2009 (Table 4). And more so than for many of the other indicators analysed here, there is a clear ‘traditional’ north:south divide. Older working-age adults were more likely to have opted out of the workforce in the northern regions (many possibly because of employment-related health conditions), but less so in the recession years than previously. Were they forced back into the labour market by benefit cuts, or…?

Income (Table 5)

Although a substantial number of LFS respondents decline to give information on their income, enough do for the geography to be clearly delineated. Overall, in both periods, the ‘grim up north’ argument is clearly sustained. Median incomes in Central London were almost twice those in the northern regions, and gross hourly incomes elsewhere in London and in the Southeast were also significantly higher, for both full- and part-time employees, than elsewhere. But the divide was not exacerbated in relative terms by the recession: median incomes grew by less than the average in most of London as well as in the Southeast for both groups of workers, and employees in the North and Midlands benefited slightly more.

In conclusion: grimness is not just a northern problem?

There is no simple story to be told about the geography of the recession that set in after the credit crunch began in 2008, therefore. On most labour market indicators it is grim up north: it was before the recession and it remained so afterwards. But not all parts of the ‘North’ experienced worse conditions than the ‘South’ on all indicators either before or during the recession; the geography was more nuanced than that. And it certainly didn’t universally get even grimmer up north. On some of our indicators, instead of the north:south divide getting wider it got narrower – with even London’s labour markets suffering badly relative to the national situation. And so, with no simple pattern, it is not sensible to go for simplistic policies that simply favour the North: parts of the North are indeed in trouble, but they are not necessarily alone, and some of the suffering is being shared across the country more widely than is sometimes appreciated.

Table 1. Percentages Unemployed (of those economically active) and Unemployed for three months or more (of those unemployed)


Table 2. Percentages Working Part-Time and Part-Time out of Necessity (of those working part-time)


Table 3. Percentage in Jobs for which they are Over-Qualified by Region of Workplace


Table 4. Percentages of the Unemployed who were Unemployed for Three Months or more and of Persons aged 45-64 who were Not Economically Active (Discouraged Workers)


Table 5. Gross Median Hourly Income (£) by Region of Workplace


The New School Accountability Regime in England: Fairness, Incentives and Aggregation

January 21, 2014

Author: Simon Burgess

The long-standing accountability system in England is in the throes of a major reform, with the complete strategy to be announced in the next few weeks. We already know the broad shape of this from the government’s response to the Spring 2013 consultation, and some work commissioned from us by the Department for Education, just published and discussed below. The proposals for dealing with pupil progress are an improvement on current practice and, within the parameters set by government, are satisfactory. But the way that individual pupil progress is aggregated to a school progress measure is more problematic. This blog does not often consider the merits of linear versus nonlinear aggregation, but here goes …

Schools in England now have a good deal of operational freedom in exactly how they go about educating the students in their care. The quid pro quo for this autonomy is a strong system of accountability: if there is not going to be tight control over day to day practice, then there needs to be scrutiny of the outcome. So schools are held to account in terms of the results that they help their students achieve.

The two central components are new measures of pupils’ attainment and progress. These data inform both market-based and government-initiated accountability mechanisms. The former is driven by parental choices about which schools to apply to. The latter is primarily focussed around the lower end of the performance spectrum and embodied in the floor targets – a school falling below these triggers some form of intervention.

Dave Thomson at FFT and I were asked by the Department for Education (DfE) to help develop the progress measure and the accompanying floor target, and our report is now published. Two requirements were set for the measure, along with an encouragement to explore a variety of statistical techniques to find the best fit. It turns out that the simplest method of all is barely any worse in prediction than much more sophisticated ones (see the Technical Annex) so that is what we proposed. The focus in this post is on the requirements and on the implications for setting the floor.

The primary requirement from the DfE for the national pupil progress line was that it be fair to all pupils. ‘Fair’ in the sense that each pupil, whatever their prior attainment, should have the same statistical chance of beating the average. This is obviously a good thing and indeed might sound like a fairly minimal characteristic, but it is not one satisfied by the current ‘expected progress’ measure. We achieved this: each pupil, on whatever level of prior attainment, has an expected progress measure equal to the national average. And so, by definition, each pupil has an expected deviation from that of zero.

The second requirement was that the expected progress measure be based only on prior attainment, meaning that there is no differentiation by gender for example, or special needs or poverty status. This is not because the DfE believe that these do not affect a pupil’s progress; indeed, it was explicitly agreed that they are important. Rather, the aim was for a simple and clear progress measure – starting from a KS2 mark of X you should expect to score Y GCSE points – and there is certainly a case to be made that this expectation should be the same for all, and there should not be lower expectations for certain groups of pupils. (Partly this is a failure of language: an expectation is both a mathematical construct and almost an aspiration, a belief that someone should achieve something).
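As a concrete illustration of these two requirements, here is a minimal sketch in Python using synthetic data. The group-mean construction below is one simple reading of the measure (expected GCSE points equal the national average among pupils with the same KS2 fine grade); the fitted progress line in the report may differ in detail, and all names and numbers here are invented.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the pupil-level data (the real analysis uses the
# National Pupil Database). 'ks2' is the prior-attainment fine grade, 'gcse'
# the achieved GCSE point score, 'school' an identifier.
rng = np.random.default_rng(0)
n = 50_000
ks2 = rng.integers(20, 61, n) / 10            # fine grades 2.0 - 6.0
pupils = pd.DataFrame({
    "school": rng.integers(0, 300, n),
    "ks2": ks2,
    # noisier outcomes at low prior attainment, as in Figure 1 of the report
    "gcse": 10 * ks2 + rng.normal(0, 18 - 2 * ks2, n),
})

# Expected score = national average GCSE score of pupils with the same KS2
# grade, so every pupil, whatever their starting point, has an expected
# deviation of zero ('fairness in means'), and only prior attainment is used.
pupils["expected"] = pupils.groupby("ks2")["gcse"].transform("mean")
pupils["progress"] = pupils["gcse"] - pupils["expected"]
print(pupils.groupby("ks2")["progress"].mean().abs().max())   # ~0 at every grade
```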

So while the proposed progress measure is ‘fair’ within the terms set, and is fair in that it sets the same aspirational target for everyone, it is not fair in that some groups will typically score on average below the expected level (boys, say) and others will typically score above (girls). This is discussed in the report and is very nicely illustrated in the accompanying FFT blog. There are plausible arguments on both sides here, and the case against going back to complex and unstable regression approaches to value added is strong. This unfairness carries over to schools, because schools with very different intakes of these groups will have different chances of reaching expected progress. (Another very important point emphasised in the report and in the FFT blog is that the number of exam entries matters a great deal for pupil performance).

Now we come to the question of how to aggregate up from an individual pupil’s progress to a measure for the school. In many ways, this is the crucial part. It is on schools not individual pupils that the scrutiny and possible interventions will impact. Here the current proposal is more problematic.

Each pupil in the school has an individual expected GCSE score and so an individual difference between that and her actual achievement. This is to be expressed in grades: “Jo Smith scored 3 grades above the expected level”. These are then simply averaged to the school level: “Sunny Vale School was 1.45 grades below the expected level”. Some slightly complicated statistical analysis then characterises this school level as either a significant cause for concern or just acceptable random variation.

It is very clear and straightforward, and that indeed is its chief merit: it is easily comprehensible by parents, Headteachers and Ministers.

But it has two significant drawbacks, both of which can be remedied by aggregating the pupil scores to school level in a slightly different way. First, the variation in achieved scores around expected progress is much greater at low levels of attainment than at high attainment. This can be seen clearly in Figure 1, showing that the variance in progress by KS2 sharply and continuously declines across the range where the bulk of pupils are. Schools have pupils of differing ability, so the effect is less pronounced at school level, but still evident.

The implication of this is that if the trigger for significant deviation from expected performance is set as a fixed number of grades, then low-performing students are much more likely to cross it simply due to random variation than high-performing students are. By extension, schools with substantial intakes of low ability pupils are much more likely to fall below the floor simply through random variation than schools with high ability intakes are. So while our measure achieves what might be called ‘fairness in means’, the current proposed school measure does not achieve ‘fairness in variance’. The DfE’s plan is to deal with this by adjusting the school-level variance (based on its intake) and thereby what counts as a significant difference. This helps, but is likely to be much more opaque than the method we proposed and is likely to be lost in public pronouncements relative to the noise about the school’s simple number of grades below expected.

Fig 1: Standard deviation in Value added scores and number of pupils by mean KS2 fine grade (for details – see the report)


The second problem with the proposal is inherent in simple averaging. Suppose a school is hovering close to the floor target, with a number of pupils projected to be significantly below their progress target. The school is considering action and how to deploy extra resources to lift it above the floor. The key point is this: it needs to boost the average, so raising the performance of any pupil will help. Acting sensibly, it will target the resources to the pupils whose grades it believes are easiest to raise. These may well be the high performers or the mid performers – there is nothing to say it will be the pupils whose performance is the source of the problem, and good reason to think it will not be.

While it is quite appropriate for an overall accountability metric to focus on the average, a floor target ought to be about the low-performing students. The linear aggregation allows a school to ‘mask’ under-performing students with high performing students. Furthermore, the incentive for the school may well be to ignore the low performers and to focus on raising the grades of the others, increasing the polarisation of attainment within the school.

The proposal we made in the report solves both of these problems, the non-constant variance and the potential perverse incentive inherent in the averaging.

We combine the individual pupil progress measures to form a school measure in a slightly different way. When we compare the pupil’s achievement in grades relative to their expected performance, we normalise that difference by the degree of variation specific to that KS2 score. This automatically removes the problem of the different degree of natural variation around low and high performers. We then highlight each pupil as causing concern if s/he falls significantly below the expected level, and now each pupil truly has the same statistical chance of doing this. The school measure is now simply the fraction of its pupils ‘causing concern’. Obviously simply through random chance, some pupils in each school will be in this category, so the floor target for each school will be some positive percentage, perhaps 50%. We set out further details and evaluate various parameter values in the report.
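Continuing the synthetic pupils data from the sketch above, the alternative aggregation might look something like this. The flagging threshold and the contrast with the simple average are purely illustrative; the exact parameter choices are the ones evaluated in the report.

```python
# The alternative aggregation sketched in outline:
# 1. normalise each pupil's deviation by the spread of progress at their KS2 grade;
# 2. flag pupils falling significantly below expectation (threshold illustrative);
# 3. report, for each school, the fraction of its pupils 'causing concern'.
sd_by_ks2 = pupils.groupby("ks2")["progress"].transform("std")
pupils["flagged"] = (pupils["progress"] / sd_by_ks2) < -1.5

proposed = pupils.groupby("school")["flagged"].mean()     # fraction causing concern
current = pupils.groupby("school")["progress"].mean()     # simple average in grades

# Under the simple average, schools with many low-KS2 pupils (whose outcomes are
# noisier) swing further by chance; the normalised fraction gives every pupil,
# and hence every intake mix, the same statistical chance of being flagged.
print(proposed.describe(), current.describe())
```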

The disadvantage of this approach for the DfE is that the result cannot be expressed in terms of grades, and it is slightly more complicated (again, discussed in the report). This is true, but it cannot be beyond the wit of some eloquent graduate in government to find a way of describing this that would resonate with parents and Headteachers.

At the moment, the difference between the two approaches in terms of which schools are highlighted is small, as we make clear in the report. Small, but largely one way: fewer schools with low ability intakes are highlighted under our proposal.

But there are two reasons to be cautious. First, this may not always be true. And second, the perverse incentives – raising inequality – associated with simple averaging may turn out to be important.

Can peer review be improved?

January 14, 2014

Author: Mike Peacey

Scientific publishing is under the spotlight at the moment. The long-standing model of scientists submitting their work to a learned journal, for consideration by their peers who ultimately decide whether it is rigorous enough to warrant publication (a process known as peer review), is now hundreds of years old – the first scientific journals date from the 17th Century. Is it time for change? Two major factors are driving the scrutiny which this traditional publication model is facing: the astonishing growth in scientific output in recent years, and technological innovations that make print publishing increasingly anachronistic. At the same time, there are concerns that many published research findings may be incorrect. The true extent of this problem is difficult to know with certainty, but pressure on academics to publish (the “publish or perish” culture) may incentivise the publication of novel, eye-catching findings. However, the very nature of these findings (in other words, that they are surprising or unexpected) means that they are more likely to be incorrect – extraordinary claims require extraordinary evidence. Peer review has been criticised because it sometimes fails by allowing such claims to be published, often when it is clear to many scientists that the claims are extremely unlikely to be true. Is peer review as unsuccessful as is sometimes claimed? And how might it be improved? We explored this question recently in a mathematical model of reviewer and author behaviour.

In our model we considered a number of scientists who each, sequentially, obtain private information (e.g., through conducting experiments) about a particular hypothesis. The result of each experiment will never be perfect, but will on average be correct (with more controversial topics providing noisier signals). Once they have completed their experiment, the scientists each write academic papers with the objective of advancing knowledge. These papers are then reviewed by a peer before a decision is made on whether or not to publish them. When a paper is published, the manuscript begins to partially influence the conclusions that later scientists reach. As a result, the amount of new information transmitted decreases. In other words, authors begin to “herd” on a specific topic. We found that the extent to which this herding occurs (and hence the confidence we can have in a hypothesis being correct) will depend on the particular way in which peer review is conducted. When reviewers are encouraged to be as objective as possible they do not use all the information available to them, and therefore their decision conveys nothing to other scientists about their own private information. When reviewers are allowed a degree of subjectivity when making a decision (i.e., to use their judgement about whether the results are likely to be correct, as well as the more objective characteristics of the paper they’re reviewing), the peer review process transmits more information and this allows science to be self-correcting.
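To make the herding intuition concrete, here is a toy sequential-learning simulation in Python. It is emphatically not the model in the paper – the signal accuracy, the weight placed on the literature and the acceptance probabilities are all invented – but it illustrates how an ‘objective’ reviewer who ignores their own private signal lets wrong claims seed a cascade more often than a ‘subjective’ one does.

```python
import math
import random

def run_literature(n_scientists=200, accuracy=0.6, subjective=True, seed=0):
    """Toy information-cascade simulation, loosely inspired by the description
    above; it is an illustration only, NOT the published model."""
    rng = random.Random(seed)
    logit = math.log(accuracy / (1 - accuracy))   # evidential weight of one private signal
    published = []                                # +1 = claims hypothesis true, -1 = false
    for _ in range(n_scientists):
        own = +1 if rng.random() < accuracy else -1           # private signal (truth = +1)
        # Herding: published claims count as (discounted) evidence alongside one's own signal.
        evidence = own * logit + 0.5 * logit * sum(published)
        claim = +1 if evidence >= 0 else -1
        reviewer = +1 if rng.random() < accuracy else -1      # reviewer's private signal
        if subjective:
            # Subjective review: more sceptical of claims contradicting the reviewer's signal.
            accept = (claim == reviewer) or (rng.random() < 0.3)
        else:
            accept = True      # purely 'objective' review ignores the reviewer's information
        if accept:
            published.append(claim)
    return sum(c == +1 for c in published) / max(len(published), 1)

for mode in (False, True):
    share = sum(run_literature(subjective=mode, seed=s) for s in range(200)) / 200
    print(f"subjective review = {mode}: {share:.2f} of published claims are correct")
```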

Models such as ours necessarily simplify reality, and typically focus on one aspect of a process to determine how important it is to that process. So the results of our model certainly shouldn’t be taken as definitive; rather, they can help to identify some interesting questions which can then be followed up empirically. For example, reviewers usually have to rate manuscripts on a number of dimensions, such as novelty and likely impact. One question might be whether asking reviewers to explicitly rate the extent to which they believe the results of the study provides useful information. Our results also suggest that opening up other channels through which scientists can make their private information known could also be valuable. These could include post-publication peer review, which is growing in popularity, and prediction markets to capture these signals at an aggregate level. The landscape of scientific publishing is likely to change dramatically over the next few years, as open access, self-archiving, altmetrics and other technology-driven innovations become increasingly common. This provides an opportunity to implement changes to a model of scientific publishing that has otherwise remained essentially unchanged for decades.

Mike Peacey

Park IU, Peacey MW, Munafò MR. Modelling the effects of subjective and objective decision making in scientific peer review. Nature. 2013. doi: 10.1038/nature12786.

The youngest children in each school cohort are over-represented in referrals to mental health services

January 13, 2014

Author: Erlend Berg

It is known that the children who are the youngest in their class tend to do worse, in several respects, than their classmates. On average, they do less well academically throughout their school careers and are less likely to attend university. They have also been found to be less confident in their academic ability and are more likely to report being bullied or unhappy at school, and they are less likely to participate in both youth and professional sports.

Given this, it is perhaps not surprising that these children are also more likely to have mental health problems: they are more likely to be diagnosed with attention disorders, learning disability and dyslexia.

Still, little is known about the consequences for health service provision, and in particular the extent to which these children are over-represented as users of specialist mental health services. In a paper forthcoming in the Journal of Clinical Psychiatry, Shipra Berg and Erlend Berg investigate whether August-born children, who are the youngest in their class in the English educational system, are over-represented in referrals to specialist Child and Adolescent Mental Health Services. The threshold for referral to these services is relatively high, since minor problems are often dealt with by school health workers or family doctors.

The research method is simple. The cut-off date for school entry in England is 1 September. So a child born in August will be among the youngest in his or her class, while a child born in September will be one of the oldest. The researchers obtained dates of birth for all children referred to mental health services in three boroughs of West London for a period of four years, and compared the frequency of birth months of the referred children to the birth-month frequencies in the population.

For example, children born in September represent 8.6% of the population but only 8.0% of referrals. Hence they are 7.3% less likely to be referred to mental health services than the average child.

For August-born children the situation is reversed. Of all children referred to mental health services, 9.4% were born in August. But only 8.6% of the population of children in the relevant age group are born in August. That means that August-born children are 9.1% more likely to be referred than the average child, and 17.8% more likely to be referred than their September-born classmates. These figures are statistically significant, meaning they are very unlikely to be caused by random fluctuations in the data.
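The relative-likelihood figures follow from simple ratios of the referral and population shares. The short sketch below redoes the calculation from the rounded percentages quoted here; the published 7.3%, 9.1% and 17.8% are based on unrounded counts, so the results differ slightly.

```python
# Shares quoted in the text (percentages), rounded to one decimal place.
population_share = {"September": 8.6, "August": 8.6}   # % of children born in that month
referral_share = {"September": 8.0, "August": 9.4}     # % of referrals born in that month

for month in ("September", "August"):
    relative = referral_share[month] / population_share[month] - 1
    print(f"{month}-born vs the average child: {relative:+.1%}")   # ~-7% and ~+9%

# Because the two months account for similar shares of births, the August vs
# September comparison reduces to the ratio of their referral shares.
august_vs_september = referral_share["August"] / referral_share["September"] - 1
print(f"August-born vs September-born: {august_vs_september:+.1%}")   # ~+17.5%
```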

When boys and girls are examined separately, the main findings are confirmed for both sexes.

Children in the UK start school at a particularly young age, so an age difference of one year is substantial. The September-born child, who starts school around her fifth birthday, has had a 25% longer life experience than the August-born child, who starts school around his fourth birthday. Clearly, a one-year age difference shrinks as a proportion of life experience as the children grow up. One might therefore expect that the negative effect of being the youngest wears off over time. However, the authors find that the main effect holds for children of both primary-school and secondary-school age. This could mean that being the youngest is detrimental even in secondary school, or alternatively that the disadvantage of being the youngest in primary school has lasting consequences.

It is, in principle, possible to defer school entry until the term (there are three terms per year) in which the child turns five. However, this is rarely practised, because the child would still join the same class they would have been in had entry not been deferred. Deferring entry can therefore mean falling behind in academic and social development even before starting school.

It is worth pointing out that a large majority of children born in August are not referred to mental health services. Other factors, including the children’s home environment, are likely to be more important determinants of mental health than month of birth. Still, August-born children, being the youngest – physically, emotionally and intellectually – in their class, may be more vulnerable than their older peers.

 

RCT + NPD = Progress

October 31, 2013

Author: Simon Burgess

A lot of research for education policy is focussed on evaluating the effects of a policy that has already been implemented. After all, we can only really learn from policies that have actually been tried. In the realm of UK education policy evaluation, the hot topic at the moment is the use of randomised controlled trials, or RCTs.

In this post I want to emphasise that in schools in England we are in a very strong position to run RCTs because of the existing highly developed data infrastructure. Running RCTs on top of the census data on pupils in the National Pupil Database dramatically improves their effectiveness and their cost-effectiveness.  This is both an encouragement to researchers (and funders) to consider this approach, and also another example of how useful the NPD is.

A major part of the impetus for using RCTs has come from the Education Endowment Foundation (EEF).  This independent charity was set up with grant money from the Department for Education, and has since raised further charitable funding. Its goal is to discover and promote “what works” in raising the educational attainment of children from disadvantaged backgrounds.  I doubt that anywhere else in the world is there a body with over £100m to spend on such a specific – and important – education objective.  Another driver has been the Department for Education’s recent Analytical Review, led by Ben Goldacre, which recommended that the Department engage more thoroughly with the use of RCTs in generating evidence for education policy.

It is probably worth briefly reviewing why RCTs are thought to be so helpful in this regard: it’s about estimating a causal effect. There are of course many very interesting research questions other than those involving the evaluation of causal effects. But for policy, causality is key: “when this policy was implemented, what happened as a result?” The problem is that isolating a causal effect is very difficult using observational data, principally because the people exposed to the policy are often selected in some way and it is hard to disentangle their special characteristics from the effect of the policy. The classic example to show this is a training policy: a new training programme is offered, and people sign up; later they are shown to do better than those who did not sign up; is this because of the content of the training programme … or because those signing up evidently had more ambition, drive or determination? If the former, the policy is a good one and should be widened; if the latter, it may have no effect at all, and should be abandoned.

RCTs get around this problem by randomly allocating exposure to the policy, so there can be no such ambiguity. There are other advantages too, but the principal attraction is the identification of causal effects. Of course, as with all techniques, there are problems too.
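A toy simulation makes the selection problem in the training example concrete: when an unobserved trait (‘drive’) raises both take-up and outcomes, the naive joiners-versus-non-joiners comparison overstates the effect, while random assignment recovers it. All the numbers below are invented.

```python
import math
import random

random.seed(1)
TRUE_EFFECT = 2.0
rows = []
for _ in range(100_000):
    drive = random.gauss(0, 1)
    signs_up = random.random() < 1 / (1 + math.exp(-drive))  # keener people enrol
    assigned = random.random() < 0.5                         # RCT: a coin flip instead
    base = 10 + 3 * drive + random.gauss(0, 1)               # drive also raises outcomes
    rows.append((signs_up, assigned, base))

def gap(chooser):
    """Mean outcome of the 'treated' group minus the untreated group, where
    treatment adds TRUE_EFFECT to the base outcome."""
    treated = [base + TRUE_EFFECT for s, a, base in rows if chooser(s, a)]
    control = [base for s, a, base in rows if not chooser(s, a)]
    return sum(treated) / len(treated) - sum(control) / len(control)

print("naive comparison of joiners vs non-joiners:", round(gap(lambda s, a: s), 2))  # ~6
print("randomised (RCT) estimate:", round(gap(lambda s, a: a), 2))                   # ~2
```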

The availability of the NPD makes RCTs much more viable and valuable. It provides a census of all pupils in all years in all state schools, including data on demographic characteristics, a complete test score history, and a complete history of schools attended and neighbourhoods lived in.

This helps in at least three important ways.

First, it improves the trade-off between cost and statistical power. Statistical power refers to the likelihood of being able to detect a causal effect if one is actually in operation. You want this to be high – undertaking a long-term and expensive trial and missing the key causal effect through bad luck is not a happy outcome. Researchers typically aim for 80% or 90% power. One of the initial decisions in an RCT is how many participants to recruit. The greater the sample size, the greater the statistical power to detect any causal effects. But of course, also, the greater is the cost, and sometimes this can be considerable. These trade-offs can be quite stark. For example, to detect an effect size of at least 0.2 standard deviations at standard significance levels with 80% power we would need a sample of 786 pupils, half of them treated. If for various reasons we were running the intervention at school level, we would need over 24,000 pupils.
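The 786 figure can be reproduced with the standard normal-approximation sample-size formula, sketched below. This is not necessarily the exact calculation behind the numbers in the text, but it gives the same answer for a 0.2 standard-deviation effect at 5% significance and 80% power.

```python
import math
from scipy import stats

def pupils_needed(effect_size, power=0.80, alpha=0.05):
    """Total pupils (half treated, half control) for a simple pupil-randomised
    trial, using the standard normal-approximation formula for a two-sample
    comparison of means."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_power = stats.norm.ppf(power)
    per_arm = 2 * (z_alpha + z_power) ** 2 / effect_size ** 2
    return 2 * math.ceil(per_arm)

print(pupils_needed(0.2))   # 786, matching the figure quoted above
```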

This is where the NPD comes in. In an ideal world, we would want to be able to clone every individual in our sample and try the policy out on one and compare progress to their clone. Absent that, we can improve our estimate of the causal effect by getting as close as we can to ‘alike’ subjects. We can use the wealth of background data in the NPD to reduce observable differences and improve the precision of estimate of intervention effect. Exploiting the demographic and attainment data allows us to create observationally equivalent pupils, one of whom is treated and one is a control.  This greatly reduces sampling variation and improves the precision of our estimation. This in turn means that the trade-off between cost and power improves. Returning to the previous numerical example, if we have a good set of predictors for (say) GCSE performance, we can reduce the required dataset for a pupil-level intervention from 786 pupils to just 284. Similarly for the school-cohort level intervention, we can cut back the sample from 24,600 pupils and 160 schools to 9,200 pupils and 62 schools.  The relevant correlation is between a ‘pre-test’ and the outcome (this might literally be a pre-test, or it can be a prediction from a set of variables).
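The gain from using NPD covariates can be sketched the same way: with a pre-test (or predicted score) correlated r with the outcome, the required sample shrinks by roughly a factor (1 − r²). A correlation of about 0.8 – an assumed value, not one taken from the report – takes 786 pupils down to roughly the 284 quoted above.

```python
import math

def adjusted_pupils(unadjusted_n, pretest_correlation):
    """Approximate covariate adjustment: with a baseline measure correlated r
    with the outcome, residual variance (and hence the required sample)
    shrinks by roughly (1 - r**2)."""
    return math.ceil(unadjusted_n * (1 - pretest_correlation ** 2))

# An assumed pre-test correlation of ~0.8 with GCSE points reproduces the
# quoted reduction approximately; the school-level design shrinks similarly.
print(adjusted_pupils(786, 0.8))   # ~283
```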

Second, the NPD is very useful for dealing with attrition. Researchers running RCTs typically face a big problem of participants dropping out of the study, both from the treatment arms and from the control group. Typically this is because the trial becomes too burdensome or inconvenient rather than a matter of principle; after all, they did sign up in the first instance. This attrition can cause severe statistical problems and can jeopardise the validity of the study.

The NPD is a census and is an administrative dataset, so data on all pupils in all (state) schools are necessarily collected. This obviously includes all national Key Stage test scores, GCSEs and A levels. If the target outcome of the RCT is improving test scores, then these data will be available to the researcher for all schools. Technically this means that an ‘intention to treat’ estimator can always be calculated. (Obviously, if the school or pupil drops out and forbids the use of linked data then this is ruled out, but as noted above, most dropout is simply due to the burden.)

Finally, the whole system of testing from which the NPD harvests data is also helpful. It embodies routine and expected tests so there is less chance of specific tests prompting specific answers. Although a lot about trials in schools cannot be ‘blind’ in the traditional way, these tests are blind. They are also nationally set and remotely marked, all of which adds to the validity of the study. These do not necessarily cover all the outcomes of interest such as wellbeing or health or very specific knowledge, but they do cover the key goal of raising attainment.

In summary, relative to other fields, education researchers have a major head start in running RCTs because of the strength, depth and coverage of the administrative data available. 

How should long-term unemployment be tackled?

October 3, 2013

Author: Paul Gregg

Earlier in the week, George Osborne announced new government plans for the very long-term unemployed. The government’s flagship welfare-to-work programme, the Work Programme, lasts for two years and so there has been a question about what happens to those not finding work through it. Currently only 20% of those starting the Work Programme find sustained employment, although many more cycle in and out of employment.

Very long-term unemployment (2+ years) is strongly cyclical, almost disappearing from 1998 to 2009, but has returned with the protracted period of poor economic performance. This cyclicality is a strong indicator that it is not driven by a large group of workshy claimants. Rather, the state of the economy leaves a few who, unable to get work quickly, face ever-increasing employer resistance to hiring them. Faced with an ample choice of the newly unemployed, employers see these people as unnecessary risks with outdated skills.

Very long-term unemployment is thus not a new phenomenon; a large range of policies have been tried before, so we have a very good idea of what does and does not work. The proposals had three elements. The first, which got the headlines, was that claimants would be made to ‘Work for the Dole’. The effects of requiring people to go into work placements depend a lot on the quality of the work experience offered. Such schemes have three main effects: first, some people leave benefits ahead of the required employment. This is called the deterrent effect and is stronger the more unpleasant and low paid (e.g. work for the dole) the placement is. Then, whilst on the placement, job search and job entry tend to dip as the person’s time is absorbed by working rather than applying for jobs. Finally, the gaining of work experience raises job search success on completion of the placement. This is stronger for high-quality placements in terms of the experience gained and being with a regular employer who can give a good reference if the person has worked well.

The net effect of many such programmes, including work for the dole, has often been little or even negative. Australia and New Zealand have both tried and abandoned Work for the Dole policies because they were so ineffectual in getting people into work. The best effects from work experience programmes come where job search is actively required and supported when on a work placement, where the placement is with a regular employer rather than a “make work” scheme, and where the placement provider is incentivised to care about the employment outcomes of the unemployed person after the work placement ends. The Future Jobs Fund under the previous Labour government, which placed young people into high quality placements and paid a wage, was clearly a success in terms of improving job entry, although the government cut it.

This element of the government’s plans has little chance of making a positive difference. However, the other elements may be more positive. Some of the very long-term unemployed (the mix across the elements is not yet clear) will be required to sign on daily. This probably means that the claimant will have to attend a Job Centre Plus office every day and look for and apply for jobs on the suite of computers. This is very similar to the Work Programme but more intense, and perhaps with less support for CV writing and presentation etc. This may enhance the frequency of job applications but perhaps not the quality, and may prove no more successful than the Work Programme. The third element is to attend a new, as yet unspecified, programme. As there are few details as yet it is hard to comment on this part.

The overall impression is that the announcement is of a rehashed version of previous rather unsuccessful programmes, founded on a belief that the long-term unemployed are workshy rather than unfortunates needing intensive help to overcome employer resistance and return to work.


University, Gambling, and the Greater Fool

October 2, 2013

Author: Michael Sanders

The betting company Ladbrokes have begun offering students (and their parents) the opportunity to bet on their eventual university degree classifications. This, as may have been predictable (and may have been the intention), has attracted a level of opprobrium from groups concerned about youngsters gambling away their student loans foolishly.

What does economics tell us about this? To begin with, this looks like a fairly standard asymmetric information problem, from which students can only benefit. In general, it is not sensible to make a bet with someone who has more information than you, or who has control over the outcome of that bet. For example, I bet you a million pounds that the next sentence will contain the word banana. Clearly, you won’t take the bet because I can control the outcome banana.

For students, the deal is a good one. They know how clever they are, and they know how hard they will work. Even if there is some noise associated with their outcome (bad days, sick pets, or grandmother fatalities), it is a fair bet that the people beginning their university lives this week have more control over the outcome than Ladbrokes do. So why are Ladbrokes taking these bets (and actively encouraging them)?

One possibility is that Ladbrokes are cash poor, and want to raise finance quickly. They take money in now from students placing their bets, but don’t need to pay out for three years. A perfectly sound theoretical argument, but it seems unlikely, either (a) that Ladbrokes can’t find better rates on what is essentially a loan on the open market, or (b) that students are so cash rich that they’re making long term investments.

A second possibility is that Ladbrokes is a ‘greater fool’ – a person who buys high and sells cheap, so that the rest of us can profit. Given their track record, I suspect not.

More likely, they are relying on students being greater fools. Where traditional economic theory tends to assume that agents observe their own quality with certainty (or, in English, that we know how good we are), behavioural economics suggests otherwise.

Overconfidence is an issue across many dimensions. It leads us to pay for expensive gym contracts we’ll never use and to drive less carefully than we should. Even among hyper-rational investors, it leads to over-investment in our own firms. So, even though we know that only 5% of students will get a first class degree, we rate our own chances at 10%. For some people this may be true, but for most it will not, and so firms like Ladbrokes can profit from our misconception.

Behavioural Economics offers useful tips on self control, and I’d encourage anyone at the beginning of their university career (or later on) to think about them seriously. There are times when it is good to be a greater fool; this is not one of them.

Post Script: A Rational Bet

On circulating this post internally, I’ve been asked under what circumstances you should take this bet. For almost everyone at Bristol, studying the social sciences, the odds you’ll get betting on a 2:1 are probably about 5/6 (Bristol isn’t one of those featured on the Ladbrokes site), so you’d lose money whatever you do. If you’re confident of getting a 2:1, however, you might be interested to know what happens if you work a bit less hard and bet on yourself getting a 2:2. Here the odds are better, probably about 12:5 – so you’ll get your initial investment back, plus an additional 140%. A recent working paper from the LSE finds that the return to a 2:1 is £2,040 a year. If we extrapolate this for a 45 year career, that’s an extra £91,800 over the course of your lifetime. Assuming a constant rate of inflation at 3% over that time, you’d need £181,516 now in order to maintain the same standard of living for your entire life. To win that, you’d need to bet £129,654 right now in order to be indifferent between getting a 2:2 and winning the bet, or getting a 2:1 and not betting. I’d still recommend against it, though.
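For anyone checking the arithmetic, the chain of figures in the post-script fits together as follows, taking the author’s inflation-adjusted target and the 140% winnings as given.

```python
# Every figure here comes from the text: the £2,040 annual premium, a 45-year
# career, the author's 3%-inflation-adjusted target of £181,516, and a winning
# bet paying back the stake plus 140%.
annual_premium = 2_040
lifetime_premium = annual_premium * 45        # £91,800 over a career
target_winnings = 181_516                     # the author's inflation-adjusted figure
stake = target_winnings / 1.40                # winnings are 140% of the stake
print(f"lifetime premium £{lifetime_premium:,}; stake needed £{stake:,.0f}")  # £129,654
```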

Threshold measures in school accountability: asking the right question

September 25, 2013

Author: Simon Burgess

We are in the midst of a significant upheaval in the setting and marking of exams, and the reporting of school exam results. One feature of the system has been the centre of a lot of criticism and highlighted for reform: the focus on the percentage of a school’s pupils that achieve at least 5 GCSEs at grades C to A*, including English and maths. This is typically the most-discussed metric for (secondary) school performance and is the headline figure in the school league tables.

The point is that this measure is based on a threshold, a ‘cliff-edge’. Get a grade C and you boost the school’s performance; missing a C by a lot or a little is the same, and just scraping a C is the same as getting an A*.

This has been described as distorting schools’ behaviour, forcing schools to focus on pupils around this borderline. The argument is seen as obviously right and strong grounds for change. In this post I want to make two counter-arguments, and to suggest we are asking the wrong question.

First a basic point. One central goal of any performance measure is to induce greater or better-targeted effort. This might just mean “working harder” or it might mean a stronger focus on the goals embodied in the measure at the expense of other outcomes. The key for the principal is to design the best scheme to achieve this. A very common scheme is a threshold one – this can be found for example in the Quality and Outcomes Framework for GPs, service organisations with a target number of clients to see, and of course schools trying to help pupils to achieve at least 5 grades of C or better. An organisation working under a threshold scheme faces very different marginal incentives for effort. Considering pupils, the most intense incentives relate to pupils just below the line: this is where schools get the greatest payoff from devoting the most resources.
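A tiny sketch shows the cliff-edge concretely; the grade-point ordering and the example pupil below are purely illustrative.

```python
GRADE_POINTS = {"A*": 8, "A": 7, "B": 6, "C": 5, "D": 4, "E": 3, "F": 2, "G": 1, "U": 0}

def meets_threshold(grades):
    """5+ GCSEs at grades A*-C including English and maths (illustrative only)."""
    good = {subject for subject, grade in grades.items()
            if GRADE_POINTS[grade] >= GRADE_POINTS["C"]}
    return len(good) >= 5 and {"English", "Maths"} <= good

# The cliff edge: scraping a C in maths counts exactly the same as an A*,
# while missing it by one grade counts the same as missing it by five.
pupil = {"English": "B", "Maths": "C", "Science": "C", "History": "C", "French": "C"}
print(meets_threshold(pupil))                        # True
print(meets_threshold({**pupil, "Maths": "A*"}))     # still True: no extra credit
print(meets_threshold({**pupil, "Maths": "D"}))      # False: same as a U
# A school's headline figure is then simply the percentage of its cohort
# for which meets_threshold() returns True.
```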

The first counter argument starts by noting that the asymmetry in the incentive is not a newly-discovered flaw, it is a design feature which can be very powerful. If there is a level of achievement that is extremely important for everyone to reach, then it makes sense to set up a scheme that offers very strong incentives to do that – that focusses the incentive around that minimum level. This is precisely what a threshold scheme does.

So rather than simply pointing out that threshold designs strongly focus attention (which is what they’re supposed to do), the questions to ask are: is there some level of attainment that has that characteristic of being a minimum level of competence? And if so, what is it? If society feels that 5 grade C’s is a fair approximation to a minimum level that we want everyone to achieve, then it is absolutely right to have a ‘cliff-edge’ there because inducing schools to work very hard to get pupils past that level is exactly what society wants.  It may be that we are equally happy to see grades increase for the very brightest children, those in the middle or those at the lower end of the ability distribution. Or not: all the main political parties express a desire to raise attainment at the lower end and narrow gaps.

The argument should be about where to put the threshold, not whether to have one or not. Perhaps we are starting to see a recognition of this in the recent policy announcement that all pupils will have to continue studying until they have passed English and Maths.

The second counter-argument is based on a scepticism of what is likely to happen without the 5A*-C(EM) threshold acting as a focal point.

The core strategic decision facing a headteacher is how best to deploy her main resource: the teachers. Specifically: how best to assign teachers of varying effectiveness to different classes. It has been said that schools will be free to focus equally on all pupils.

Well, maybe. Or perhaps we should think of the pressures on the headteacher, in this instance from teachers themselves. Effective teachers are very valuable to a school and any headteacher will be keen to keep her most effective teachers happy and loyal. It seems likely (I have no evidence on this, and would be keen to hear of any) that top teachers would typically prefer to teach top sets. If so, we might see a drift of the more effective teachers towards the more able classes in a school (and therefore on average, the more affluent pupils). The imperative of the C/D threshold gave headteachers an unanswerable argument to push against this.

So threshold metrics have an important role to play in communicating to schools where society wants them to focus their effort. The current threshold, at 5 C grades, may or may not be at the right level; but discussing what the right level is, is a more useful debate to have.

Is education policy a blunt instrument when it comes to ‘social mobility’?

September 20, 2013

Author: Matt Dickson

Earlier this week, Tony Blair’s former speech-writer Philip Collins told a fringe meeting at the Liberal Democrats conference that social mobility was a ‘terrible objective’ and that in any case, education policy could do little to affect it.

“I can’t think of a single education reform in the 20th Century that had a marked impact on relative social mobility at all. Not one,” he remarked.

This conclusion depends on who you think it is important to be “relative” to. On the one hand you might think it is important to be compared to your own parents, i.e. where you started; on the other hand, you could think it is important to be compared to your peers – where you sit in the distribution compared to peers from different backgrounds. Let’s think about the former comparison.

The 1972 raising of the minimum school leaving age (RoSLA) has been shown in numerous pieces of research to have increased the education, employment and earnings of the young people affected – relative to their school-mates in the years before the reform. Given that we know that the people who were made to remain in school an additional year were disproportionately from lower socio-economic backgrounds, this policy improved the economic position of young people at the lower end of the economic scale.

“The dull child of the middle class parent has to come down the rung in order for me to go up, otherwise you don’t have social mobility,” is another problem that Collins identified with the objective of social mobility.

However, nobody had to come down the earnings or education ladder in order for the young people affected by RoSLA to move up – so this policy improved the chances that young people with a low taste for education and/or lower ability, and from poorer backgrounds, would gain qualifications, employment and greater earnings. Technically this would be considered “absolute social mobility”, and Collins is right in making the assertion that for there to be upward “relative social mobility” there needs to be an offsetting downward move by some.

But Collins is taking a very strong line here – arguably, what we should care about as a society is the extent to which people from all backgrounds can maximize their potential and not have their opportunities curtailed purely because of their parents’ education, income or class. This encapsulates what ‘social mobility’ is all about – and why it remains an important objective.

Moreover, it is an objective that is amenable to policy, as demonstrated by the impact of RoSLA and other education policies of the last fifty years. Another major structural reform in the post-war era was the abolition of selective education in most of the country. Despite on-going controversies, we know that the grammar school system was detrimental to the majority of children from poor households and its ending reduced a major source of income-based differentiation in life chances.

Furthermore, the expansion of higher education in recent decades has seen increases in young people from poorer backgrounds accessing university and the opportunities for progression that this affords. A study by the Institute for Fiscal Studies for the Nuffield Foundation last year showed that while higher education participation has been rising in general over time, it has been rising quickest for young people from the poorest families. This represents genuine ‘social mobility’, driven by a reduction in the educational inequality that separates children from better off and poorer backgrounds.

Taking a longer perspective, one hundred years ago most pupils left school aged 12. People “knew their place” in society and the education system offered very little means of escape for children from poorer families. While the labour market has also changed dramatically since those days, it seems very unlikely that education policy, and the revolution in secondary education in particular, has had no effect on the chances for poorer pupils of getting on in life.

Arguing about funding obscures important issues of quality research

August 29, 2013

Author: Michael Sanders

Richard Thaler, the Chicago professor of economics and incoming president of the American Economic Association, has as one of his many mantras the truism that “we can’t do evidence based policy without evidence”. The government’s recent decision to establish a number of “What Works Centres” to collate, analyse and, in some cases, produce evidence on a number of policy areas seeks to address the very problem of a lack of evidence.

Evidence itself, however, is not in short supply. Newspapers fill their pages, day after day, with the results of studies into some facet of human behaviour, or statistics on the state of the world. So there need to be two further criteria for evidence beyond mere ‘existence’ – goodness, and usability. I should be clear at the outset that when I say “good”, I mean “capable of determining a causal relationship between an input and an output”. Sadly, not all evidence which is good is usable, and, often tragically, not all evidence that is usable is good.

As Ben Goldacre points out in his recent paper for the Department for Education, many researchers in that field like qualitative work, and use this as the basis for their findings. As an economist, I have a natural scepticism of such research, but I cannot dispute that it is eminently usable. The arguments constructed by such research are easily and well presented. They offer solutions which are simple, and neat. However, as H.L. Mencken said, these arguments are almost always also wrong. This research is usable, but very much of it is not good.

On the other side, much research which is good, and detailed, and thorough, presents complicated and nuanced answers which reflect reality but whose methods are impenetrable to anyone who might actually have the power to change policy accordingly.

Randomised Controlled Trials (RCTs) are usable, with the majority of results presentable in an easily understood way and the methodology simple enough to explain to a lay person in about five minutes. As the recognised ‘gold standard’ of evaluation, they are also indisputably good.

In a blog post for the LSE Impact blog, Neil Harris, a researcher at Bristol’s Centre for Causal Analysis in Translational Epidemiology, argues that education research is a public good and needs to be funded by the state: unlike in medicine, there is no money to be made by researchers through patent development. He is, of course, absolutely right. The structure of his argument implies, however, that in order to get good evidence, it will need to be paid for – i.e. that RCTs are expensive, while qualitative research is cheap. If the government wants better education research, they should give researchers more money. But, well, we would say that, wouldn’t we?

The argument that RCTs are expensive is a well-worn one, but is not helpful, and often dangerously distracting. Saying that an RCT is expensive is akin to saying “Vehicles are expensive”. If one chooses to put up Ferraris as an example, then of course they are. A scooter, however, is not. Both are better than walking.

A good quality, robust RCT need not be outlandishly expensive, and certainly not any more so than qualitative analysis. Unlike in medical trials, the marginal cost of interventions in policy is often not far above that of treatment as usual (the most logical control condition). Teaching phonics in 50 schools and not in 50 others should not require vast resources once allocation has taken place. Although the government does not spend as much money on policy research as it does on medicine, it spends a lot of money gathering data on the outcomes many researchers are interested in. At the end of a child’s GCSEs, finding out how well they did does not require specialist staff to draw their blood and perform expensive tests on them. The school knows the answer.

It is important not to downplay the risks or costs associated with RCTs, but nor is it possible to present these costs as a reason for conducting, or accepting, substandard research. As researchers, if our work is of low quality, there is only so far the buck can be passed.
