Thursday 7 September 2017

Whose money is it anyway?

It’s hard not to notice the current focus by some in government, parliament and the media on universities, and in particular on issues of value (levels of tuition fees) and accountability (how can VCs’ high salaries be justified?).

There’s lots to be said on this, but in this blog I want to focus on an underlying issue: whose money is it anyway? Put bluntly, if universities are spending private money, then it’s no business of the state what they spend it on, as long as it’s legal.

Universities get money from lots of sources, and they publish information annually – through their annual accounts and through statutory returns to the Higher Education Statistics Agency (HESA) – about what exactly they get and from whom. The information is in a standard format, with many categories. Bear with me while I list these; it’s worth seeing to give context to the argument I’ll be making later. There are:

  • Funding body grants


  • Tuition fees, comprising Full-time undergraduate, Full-time postgraduate, Part-time undergraduate, Part-time postgraduate, PGCE, Non-EU domicile, Non-credit-bearing course fees, FE course fees, and Research training support grants.


  • Research grants and contracts, comprising grants from: BEIS Research Councils, The Royal Society, British Academy and The Royal Society of Edinburgh; UK-based charities; UK central government bodies/local authorities, health and hospital authorities; UK central government tax credits for research and development expenditure; UK industry, commerce and public corporations; other UK sources; EU government bodies; EU-based charities; EU industry, commerce and public corporations; EU (excluding UK) other; Non-EU-based charities; Non-EU industry, commerce and public corporations; Non-EU other


  • Other services rendered, comprising income from BEIS Research Councils, UK central government/local authorities, health and hospital authorities, EU government bodies and other sources


  • Other income, comprising: Residences and catering operations (including conferences); Grants from local authorities; Income from health and hospital authorities (excluding teaching contracts for student provision); Other grant income; Capital grants recognised in the year; Income from intellectual property rights; and Other operating income


  • Donations and endowments, comprising New endowments; Donations with restrictions and Unrestricted donations

If you’ve made it through the list (well done!) you’ll see that some of these come from public sources (eg BEIS research grants), while some are private (eg UK industry grants). Add together all of the public income for a university, divide by the total income, and you can work out what percentage of the university’s income is from public sources. Which is surely relevant for understanding how accountable universities need to be with their spending choices.
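The arithmetic itself is trivial. As a sketch, with made-up income lines (real HESA returns have many more categories than this):

```python
# Toy income lines for one hypothetical university, in £m.
# Which lines count as "public" is the judgement call discussed below.
public = {
    "funding body grants": 55,
    "BEIS research council grants": 30,
}
private = {
    "non-EU tuition fees": 40,
    "UK industry research grants": 10,
}

total_income = sum(public.values()) + sum(private.values())
public_share = sum(public.values()) / total_income
print(f"{public_share:.0%}")  # 85/135, about 63%
```

The interesting part is not the division but the classification: deciding which categories go in the `public` dictionary.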

For some categories, though, it isn’t obvious if it’s public money. The big one here is tuition fee income.

Income from non-EU students is clearly private. Even if those students are supported by their own government, the UK government has no duty or obligation in relation to the money.

For postgraduate tuition fees paid by home and EU students, it will be a mixed bag: some will be paid by the students themselves or their employers; some will be funded via postgraduate grants; some will be paid via public PG loans schemes.

For home and EU undergraduate fees, we need to think about it. Where students have to pay tuition fees (remember that Scottish students in Scotland pay no fees) they are able to take out a loan, on less than commercial terms, from the Student Loans Company. And students do this. After graduation, students make repayments towards the loan from their salary; the amount they repay depends on how much they earn. And after 30 years the remaining debt is cancelled. The initial funds are provided to the Student Loans Company by the state; and an allowance for the ultimately unrepaid element – called the RAB charge – is also part of government spending. So is it public or private money? With hindsight, a proportion of it is private, and a proportion public. Up front, the cash is public.

On this basis it is possible to make the calculation about the proportion of universities’ income which comes from public funds. I’ve included home and EU undergraduate tuition fees; I’ve excluded postgraduate tuition fees; I’ve included research and other services rendered income from UK government and public bodies, and from EU government and public bodies (the income for this ultimately derives from UK government funds, as we’re a net contributor to the EU budget).

What this shows is that universities receive significant public funding. Across the UK as a whole, 58% of income in 2015-16 (the most recent year for which HESA data is available) comes from public sources. In actual money, that is £20.3 billion out of a total income of £34.7 billion. Yes, I did say billion. It is a lot of money!

Nation               % Publicly-funded
England              58%
Wales                65%
Scotland             59%
Northern Ireland     75%
Total UK             58%

Of course this varies between individual universities. Some have very little income (comparatively!) from non-public sources; a few have very little (again, comparatively!) from the public. 

The graph shows the data: each university is one of the bars; they’re rank ordered from the most dependent on the left (Plymouth College of Art, since you ask, with 96% dependency on public funding) through to Heythrop College on the right (with no public funding whatsoever.) Even the famously-private Buckingham University has a little public income - £95k in funding body grants and research income from UK public bodies. Which means that it is second from the right, with about 0.25% of its income from public sources.

Source: HESA data

What of other universities? The Russell Group members range from the mid 20s (LSE with 24%) to the high 60s (Queen’s Belfast with 69%). The big post-1992 civic universities range from the mid 50s (Sunderland with 56%) to the mid 80s (Liverpool John Moores with 86%). The smaller or specialist research intensives (the 1994 Group, as was) range from the high 30s (SOAS with 38%) to the mid 60s (Birkbeck College, with 66%).

So does the state have an interest in how universities spend their money? The data say yes: at least to the extent that the money derives from public sources.

This doesn’t mean that all of the criticisms made of universities are valid. And it doesn’t mean that university autonomy isn’t a good idea. History, and international comparisons, tell us that the best universities are those that have the most freedom to make their own academic choices.

But it does lend validity to arguments that universities need to be accountable for their spending choices. In my experience, universities don’t disagree with this need for accountability. 

What of current criticisms? The danger is that the huge good that universities do for individuals and for society as a whole is forgotten amongst the current hubbub, and damage is then done. To avoid this, those making the noise need to be careful that their criticisms are well-founded. There’s an anti-elitism in current public discourse which easily mutates into unthinking policy.

And universities themselves need to be aware that at least some of the criticisms have a real point. Are students always the first thought? Sometimes research seems like it is king. And is there real transparency? A few universities have a student on their remuneration committees, and their world has not fallen down. Why not more?

Tuesday 15 August 2017

Make the National Student Survey Great Again!

The NSS data was out last week. This year it’s a new set of questions – some are the same as in previous surveys, some are amended versions of previous questions, and some are entirely new. This means that year-on-year comparisons need to be treated with a little caution.

But one aspect of reporting continues to bother me. The survey measures final-year undergraduate student responses to a number of statements – for instance, “Overall, I am satisfied with the quality of the course” – on a Likert scale: that is, a 1-5 scale, where 1 = definitely disagree; 2 = mostly disagree; 3 = neither agree nor disagree; 4 = mostly agree; and 5 = definitely agree. The data is presented by simply summing the percentages who respond 4 or 5 to give a ‘% agree’ score for every question at every institution. Which in turn means universities can say “93% satisfaction” or whatever it might be.

This is simple and straightforward, but it loses important information which could be captured by using a GPA (Grade Point Average) approach – just like the HE sector commonly uses elsewhere, for instance in REF outcomes. Using a GPA, an overall score for a question reflects the proportions giving each of the five different responses.

To calculate a GPA, there’s a simple sum:

GPA = [(% saying ‘1’ x 1) + (% saying ‘2’ x 2) + (% saying ‘3’ x 3) + (% saying ‘4’ x 4) + (% saying ‘5’ x 5)] ÷ 100

This gives a number between 1 (if all respondents definitely disagreed) and 5 (if all respondents definitely agreed).
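As a sketch in code (dividing through by 100 so that percentages on a 0-100 scale give a result on the 1-5 scale):

```python
def gpa(pct_by_response):
    """GPA of Likert responses.

    pct_by_response maps each response (1-5) to the percentage of
    respondents who chose it; the percentages should sum to 100.
    """
    assert abs(sum(pct_by_response.values()) - 100) < 1e-6
    return sum(score * pct for score, pct in pct_by_response.items()) / 100

# All respondents 'definitely agree' -> the maximum score of 5.0
print(gpa({1: 0, 2: 0, 3: 0, 4: 0, 5: 100}))  # 5.0
# 61% 'mostly agree', 39% 'definitely agree' -> 4.39
print(gpa({1: 0, 2: 0, 3: 0, 4: 61, 5: 39}))  # 4.39
```

Note that both of these examples would score 100% on the ‘agreement score’ method, even though the second is clearly less emphatic than the first.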

If GPA was used for the reporting, there’d still be one number which users would see, but it would contain more nuance. GPA measures how extreme people’s agreement or disagreement is, not just the proportion who are positive. And this matters.

I looked at the raw data for all 457 teaching institutions in the 2017 NSS. (This is not just universities but also FE Colleges, which work with universities to provide foundation years, foundation degrees and top-up degrees, and alternative providers.)  I calculated the agreement score and the GPA for all teaching institutions for question 27: Overall, I am satisfied with the quality of the course. And then I rank-ordered the institutions using each method.

What this gives you is two ordered lists, each with 457 institutions in it. Obviously, in some cases institutions get the same score; where this happens, they all share the same rank. And an institution’s rank reflects the number of institutions above it in the rank order.

So, for example, on the ‘agreement score’ method, 27 institutions score 100%, the top score available in this method. So they are all joint first place. One institution scored 99%: so this is placed 28th.  Similarly, on the GPA ranking, one institution scored 5.00, the top score using the GPA method. The next highest score was 4.92, which two institutions got. So those two are both joint second.
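This tie-handling is what is sometimes called ‘standard competition ranking’; a minimal sketch:

```python
def competition_rank(scores):
    """Rank scores high-to-low; tied scores share a rank equal to
    one plus the number of scores strictly above them."""
    return [1 + sum(1 for other in scores if other > s) for s in scores]

# 27 institutions on 100%, one on 99%: the 99% scorer is placed 28th.
scores = [100] * 27 + [99]
print(competition_rank(scores))  # 27 joint firsts, then a 28th
```

So a tied group doesn’t ‘use up’ intermediate ranks: the next distinct score jumps straight to the position implied by how many institutions sit above it.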

What I did next was compare the rank orders, to see what difference it made. And it makes a big difference! Take, for example, the Anglo-European College of Chiropractic. Its 100% score on the ‘agreement score’ method puts it in joint first place. But its GPA of 4.39 places it in joint 79th place. In this instance, its responses were 61% ‘mostly agree’ and 39% ‘definitely agree’. Very creditable. But clearly not as overwhelmingly positive as Newbury College, which with 100% ‘definitely agree’ was joint 1st on the agreement score method and also in first place (all on its own) on the GPA measure.

The different measures can lead to very significant rank-order differences. The examples I’m going to give relate to institutions lower down the pecking order.  I’m not into name and shame so I won’t be saying which ones (top tip – the data is public so if you’re really curious you can find out for yourself with just a bit of Excel work), but take a look at these cases:

Institution A: With a score of 87% on the agreement score method, it is ranked 138/457 overall: just outside the top 30%. With a GPA of 3.95, it is ranked 349/457: in the bottom quarter.

Same institution, same data. 

Or try Institution B: with an agreement score of 73% it is ranked 382/457, putting it in the bottom one-sixth of institutions. But its GPA of 4.28 places it at 129/457, well within the top 30%.

Again, same institution, same data.

In the case of Institution A, 9% of respondents ‘definitely disagreed’ with the overall satisfaction statement. This means that the GPA was brought down. Nearly one in ten students were definitely not satisfied overall.

In the case of institution B, no students at all disagreed that they were satisfied overall (although a decent number, more than a quarter, were neutral on the subject.) This means that their GPA was higher, but the overall satisfaction reflected the non-committal quarter.

I’m not saying that institution A is better than B or vice versa. It would be easy to argue that the 9% definitely disagree was simply a bad experience for one class, and unlikely to be repeated. Or that the 27% non-committal indicated a lack of enthusiasm. Or that the 9% definitely disagree was a worrying rump who were ignored. But what I am saying is that we’re doing a disservice by not making it easier for applicants to access a more meaningful picture.

The whole point of the National Student Survey is to help prospective students make judgements about where they want to study. By using a simple ‘agreement’ measure, the HE sector is letting them down. Without any more complexity we can give a more nuanced picture, and help prospective students. It’ll also give a stronger incentive to universities to work on ensuring that nobody is unhappy. Can this be a bad thing?

GPA is just as simple as the ‘agreement score’. It communicates more information. It encourages universities to address real dissatisfaction.

So this is my call: let’s make 2017 the last year that we report student satisfaction in the crude ‘agreement score’ way. GPA now.

Tuesday 1 August 2017

Value for money

Universities seem to be having a torrid time, at least as far as their standing in the political firmament goes. As well as pension headaches for USS member institutions (mostly the pre-1992s), there are high-profile stories on VC salaries, Lord Adonis' campaign about a fee-setting cartel, and (low) teaching contact hours. So far, so not very good at all.

There's a feeling that this might be more than a quiet season set of grumbles: David Morris at Wonkhe writes interestingly on this. For what it's worth, I suspect that this is indeed politically driven rather than accidental. Maybe Lord Adonis is marking out ground for his re-emergence within a new model Labour Party; maybe Jo Johnson is preparing for tough discussions around future fees. But whatever the end point, it's worth looking at whether the concerns are real.

An underlying point is value for money. The charge is that (English) students don't get a lot for their money. One quick way to look at this is university spend on staff, the single biggest item in universities' accounts. HESA publish handy data on student numbers and staff numbers. It's straightforward to calculate the ratio of students to academic staff over the years.

Source: HESA, my calculations

The data show that from 2004-05 to 2011-12, for every member of academic staff there were about 14 students. In 2012-13 - the first year of the new fees regime in England - this ratio started to fall, and by 2015-16 there were just over 11 students for every member of academic staff.

Does this mean that the stories of low contact hours, and questionable value for money are wrong? Not necessarily - the data doesn't speak to the reality at individual universities or programmes, nor does it describe any individual student's experience. But it does show that universities have invested in the most important element of their provision: academic staff.

Monday 24 July 2017

Reflections on the UK 'Carnegie Classification'

I posted yesterday on how UK universities and higher education institutions would map onto the US Carnegie classifications. That post simply presented the data; it's worth a little reflection on what the data show.

Unsurprisingly, perhaps, the Doctoral classification is the biggest, but there is more differentiation than I expected. When I did a similar exercise 15-20 years ago there weren’t any UK HEIs in the Master's or Baccalaureate categories - every institution was R1 or R2 (in the old Carnegie scheme). This reflects the broadening of the pool in the UK (there are more universities now than there were); but also, perhaps, the more selective approach to funding of research studentships. Some of the institutions which are now Master's universities made the Doctoral cut, if I remember correctly, on the old classification.

Also not present in this list are HE and FE Colleges, which between them would occupy the Baccalaureate category, the Baccalaureate/Associate’s category and the Associate’s category. In the UK the equivalent of the Associate's category is Foundation Degrees and foundation years, but the principle remains the same. It's clear that if you want to understand the breadth of UK HE you need to look at HE delivered in FE Colleges. This is a challenge to most of the usual narratives about UK higher education; perhaps it reflects the university-sector 'ownership' of some sector-wide bodies such as UCAS, HESA etc. (I'm using 'ownership' loosely here.)

And of course, as the UK did most of its historical oppressing in countries which are now sovereign, there isn’t an equivalent of the Tribal Colleges category.

Is this a useful analytical framework? At the moment the categories used are often driven by mission groups. Whilst membership of these is in part driven by data, it isn't transparent. And as mission groups are clubs not leagues, there isn't often relegation, although promotion does happen. (Remember the expansion of the Russell Group in 2012.) Perhaps we need a UK equivalent, to allow for more transparent analysis of the sector and how it is developing?

Sunday 23 July 2017

Mapping UK universities against the US Carnegie Classifications

In 1970 the Carnegie Commission on Higher Education developed a framework for analysing universities and colleges, to facilitate research and policy development on higher education. The framework has continued to be developed by higher education researchers in the US. As England’s HE policy framework begins to approximate the US – with an increasing emphasis on diversity of institution, market entry and exit, and the student as consumer – in this blog I’m looking at the US classifications, what they tell us, and how the UK’s HEIs would map onto the Carnegie classification.

The current Carnegie Classifications – using a methodology last significantly updated in 2005 – divide universities and colleges into seven broad classes. In some classes there is further differentiation by scale of activity. These are:

Doctoral University (with subclasses R1, R2, R3 defined by scale)
Institutions that award at least 20 PhD/DPhil degrees per year
Master’s University (with subclasses M1, M2, M3 defined by scale)
Institutions that award at least 50 Master’s degrees per year
Baccalaureate Colleges
Institutions where bachelor’s degrees make up more than 50% of degrees awarded
Baccalaureate/Associates Colleges
Institutions with at least one Bachelor’s programme and with more than 50% of awards at the Associate degree level
Associate's Colleges
Institutions whose highest qualification awarded is an Associate degree
Special Focus Institutions
Institutions where more than 75% of degree awards relate to a single field or set of related fields
Tribal Colleges
Institutions which are members of the American Indian Higher Education Consortium

The first four categories are applied hierarchically: if you’re doctoral, you’re not counted in Masters; if you’re Master’s, you’re not counted in Baccalaureate, even though numerically you’d meet both sets of criteria.

An associate degree is a two-year undergraduate qualification. Typically it would equate to the first two years of the four-year baccalaureate degree. 

Using HESA data from 2015-16, it’s possible to match UK institutions to these categories. I haven’t done the detailed analysis required to categorise Doctoral universities as R1, R2 or R3; or Master’s universities similarly. 

All UK HEIs – or at least those which reported to HESA in 2015-16 – fall within one of the Doctoral, Master’s, Baccalaureate or Special Focus classes. Here’s the classification. 

Doctoral Universities

  • Aberystwyth University
  • Anglia Ruskin University
  • Aston University
  • Bangor University
  • Birkbeck College
  • Birmingham City University
  • Bournemouth University
  • Brunel University London
  • Canterbury Christ Church University
  • Cardiff Metropolitan University
  • Cardiff University
  • City, University of London
  • Coventry University
  • Cranfield University
  • De Montfort University
  • Edinburgh Napier University
  • Glasgow Caledonian University
  • Goldsmiths College
  • Heriot-Watt University
  • Imperial College of Science, Technology and Medicine
  • Keele University
  • King's College London
  • Kingston University
  • Leeds Beckett University
  • Liverpool John Moores University
  • London Metropolitan University
  • London School of Economics and Political Science
  • London South Bank University
  • Loughborough University
  • Middlesex University
  • Newcastle University
  • Oxford Brookes University
  • Queen Margaret University, Edinburgh
  • Queen Mary University of London
  • Roehampton University
  • Royal Holloway and Bedford New College
  • Sheffield Hallam University
  • St George's, University of London
  • Staffordshire University
  • Swansea University
  • Teesside University
  • The Institute of Cancer Research
  • The Manchester Metropolitan University
  • The Nottingham Trent University
  • The Open University
  • The Queen's University of Belfast
  • The Robert Gordon University
  • The School of Oriental and African Studies
  • The University of Aberdeen
  • The University of Bath
  • The University of Birmingham
  • The University of Bradford
  • The University of Brighton
  • The University of Bristol
  • The University of Buckingham
  • The University of Cambridge
  • The University of Central Lancashire
  • The University of Dundee
  • The University of East Anglia
  • The University of East London
  • The University of Edinburgh
  • The University of Essex
  • The University of Exeter
  • The University of Glasgow
  • The University of Greenwich
  • The University of Huddersfield
  • The University of Hull
  • The University of Kent
  • The University of Lancaster
  • The University of Leeds
  • The University of Leicester
  • The University of Lincoln
  • The University of Liverpool
  • The University of Manchester
  • The University of Northampton
  • The University of Oxford
  • The University of Portsmouth
  • The University of Reading
  • The University of Salford
  • The University of Sheffield
  • The University of Southampton
  • The University of St Andrews
  • The University of Stirling
  • The University of Strathclyde
  • The University of Sunderland
  • The University of Surrey
  • The University of Sussex
  • The University of Warwick
  • The University of Westminster
  • The University of Wolverhampton
  • The University of York
  • University College London
  • University of Abertay Dundee
  • University of Bedfordshire
  • University of Chester
  • University of Durham
  • University of Gloucestershire
  • University of Hertfordshire
  • University of Northumbria at Newcastle
  • University of Nottingham
  • University of Plymouth
  • University of South Wales
  • University of the Arts, London
  • University of the Highlands and Islands
  • University of the West of England, Bristol
  • University of Ulster
  • University of Wales Trinity Saint David
Master's Universities
  • Bath Spa University
  • Buckinghamshire New University
  • Edge Hill University
  • Falmouth University
  • Glyndŵr University
  • Liverpool Hope University
  • Newman University
  • Royal Agricultural University
  • Southampton Solent University
  • St Mary's University, Twickenham
  • The University of Bolton
  • The University of Chichester
  • The University of the West of Scotland
  • The University of West London
  • The University of Winchester
  • University College Birmingham
  • University of Cumbria
  • University of Derby
  • University of Suffolk
  • University of Worcester
  • York St John University
Baccalaureate Universities
  • Bishop Grosseteste University
  • Leeds Trinity University
  • SRUC
  • St Mary's University College
  • University of St Mark and St John
Special Focus Institutions
  • Conservatoire for Dance and Drama
  • Courtauld Institute of Art
  • Glasgow School of Art
  • Guildhall School of Music and Drama
  • Harper Adams University
  • Heythrop College
  • Leeds College of Art
  • Liverpool School of Tropical Medicine
  • London Business School
  • London School of Hygiene and Tropical Medicine
  • Norwich University of the Arts
  • Plymouth College of Art
  • Ravensbourne
  • Rose Bruford College
  • Royal Academy of Music
  • Royal College of Art
  • Royal College of Music
  • Royal Conservatoire of Scotland
  • Royal Northern College of Music
  • Stranmillis University College
  • The Arts University Bournemouth
  • The Liverpool Institute for Performing Arts
  • The National Film and Television School
  • The Royal Central School of Speech and Drama
  • The Royal Veterinary College
  • The University of Wales (central functions)
  • Trinity Laban Conservatoire of Music and Dance
  • University for the Creative Arts
  • University of London (Institutes and activities)
  • Writtle University College


Monday 17 July 2017

How many overseas students are there?

With more-than-usual amounts of chatter around UK HE policy at the moment, it’s useful to remind ourselves about evidence and actuality. With this in mind, I’m presenting two views of UK HE and its reliance upon students from other countries.

The chart below shows this – it’s drawn from HESA data, and shows the headcount of students – all levels, all modes – from outside the UK. This includes other EU and EEA countries, and students from all other countries.  It’s about domicile recorded for fees purposes, which practitioners will know is not a simple correlation with passport.


The data show that there’s a wild and fluctuating market, with sharp declines and even sharper rises. It must be a nightmare for universities in such a market – no wonder the emphasis given to recruitment activities.

Let’s look at a different chart. Again, students from outside the UK. The data here tell a different story – a stable market (you might even say strong and stable). Universities can concentrate on what that do best – teaching and supporting students learning; and conducting research. Our global partnerships look safe and secure.


It’s the same data of course. The trickery is in the Y axis (the vertical one) – in the first chart the scale is truncated to start at 415k students; in the second chart the scale starts at 0. The first gives the detail, the second the big picture. The chicanery is that the eye is tempted to focus on the line, not the numbers. The conclusions drawn from the two charts are quite different.
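The effect is easy to reproduce. A sketch (matplotlib assumed available; the headcounts here are made up, not the HESA figures):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

years = [2012, 2013, 2014, 2015, 2016]
students = [428_000, 425_000, 436_000, 442_000, 438_000]  # made-up numbers

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(years, students)
ax1.set_ylim(415_000, 445_000)  # truncated axis: the swings look wild
ax1.set_title("Truncated scale")
ax2.plot(years, students)
ax2.set_ylim(0, 445_000)        # zero baseline: the line looks flat
ax2.set_title("Zero baseline")
fig.savefig("overseas_students.png")
```

Same data, two impressions; the only difference between the panels is the call to `set_ylim`.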

There are two lessons in this.

Firstly, don’t be fooled by bad charts. Darrell Huff’s How to lie with statistics is essential reading for everyone, in my view.

Secondly, the detail does matter. Although total numbers are stable, the total is made up of the totals at all of the UK’s universities and HEIs – there are almost 200 components to this, and maintaining, or growing, your numbers makes the difference between adding jobs and giving a better student experience, or retrenchment, retraction and job losses.

So, look at the detail; remember the bigger picture might say something different; and try not to mistake the wood for the trees, and vice versa!


Monday 10 July 2017

Cartel schmartel

The question of university tuition fees in England is causing brouhaha again. Part of the issue seems to be some Conservative ministers wanting to catch up on an issue where the Labour Party had the edge on them at the recent General Election. Part of the issue appears to be a Damascene conversion by Andrew Adonis, the behind-the-scenes architect of the 2004-06 increases in tuition fees, and in particular his assertions that universities are operating a cartel about fees. A particular trigger appears to be recent revelations about the amount of debt that students will incur and not repay (http://www.bbc.co.uk/news/uk-politics-40547740).

At heart, a lot of the issues appear to relate to the consistent £9k fee charged by universities in England. This costs more than the government planned (an estimated cost of £7.5k per student per year underpinned the calculations in 2010 and 2011). So why do universities charge £9,000 per year? And why do students pay?

(Cautionary note: although I graduated from LSE, I am not an economist. But I don’t think that the arguments I make are bad arguments. Second cautionary note – the Higher Education and Research Act and the Office for Students change a lot of the nomenclature in this, but the fundamentals remain the same.)

(by the way, on this general topic here is a really good article from the BBC showing some data about this complex topic)

Firstly, let’s look at the question of why students pay.

The student loans scheme in England is not like a normal loan. Repayments are contingent upon income levels (ie you don’t pay until you earn £21,000 per year; and then you pay a flat rate of 9% of income over £21,000 per year). You keep paying until you’ve paid off the debt, or until 30 years post-graduation.

This means that many won’t fully repay their student loans, as their income levels over the 30-year repayment period aren’t high enough for long enough. It also means that, for any given level of income, the annual amount you repay is the same regardless of the amount of loan you took out. Let’s show how that works:

Student A borrowed £9,000 per year - £27,000 in total – to fund tuition fees at Poppleton University. They now earn £25,000 per year, and so every year they repay £360 – that is, 9% of £4,000, which is the difference between their income (£25,000) and the threshold (£21,000).

Student B went to Poppleton Metropolitan University, which charges £6,000 fees per year, so they borrowed £18,000 in total. They now earn £25,000 per year, and so every year they repay £360 – that is, 9% of £4,000, which is the difference between their income (£25,000) and the threshold (£21,000).

You can see that the annual amount of repayments is not related to the amount borrowed. The total amount repaid does relate – in principle – to the amount of borrowing, but you’ll only be expected to repay all of the debt if you earn enough.
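The repayment rule can be written in a couple of lines; a sketch of the scheme as described above:

```python
def annual_repayment(income, threshold=21_000, rate=0.09):
    """Income-contingent repayment: 9% of income above the threshold,
    regardless of how much was borrowed."""
    return max(0, income - threshold) * rate

# Students A and B both earn £25,000, so both repay £360 a year,
# despite borrowing £27,000 and £18,000 respectively.
print(round(annual_repayment(25_000), 2))  # 360.0
print(annual_repayment(18_000))            # 0.0 - below the threshold
```

The amount borrowed appears nowhere in the function: that is the whole point.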

This leads to my first proposition: the loans system does not encourage students to be price-sensitive.

The second question relates to why universities charge £9,000. It’s important to understand how this £9,000 is made up. The legislation provides for a standard amount (£6,000 per year) and a variable element (initially an additional £3,000 per year, now £3,250).

Any approved English HE provider can charge £6,000 per year. Institutions can charge an additional fee if they have an access agreement with the Director of the Office for Fair Access (OFFA). The Access Agreement sets out what additional fee they plan to charge, and what they’ll do to ensure that this does not militate against fair access. The assumption was that only in exceptional cases would £9,000 be payable; but there was no mechanism to enforce this. OFFA had to judge each application on its merits. And so if, say, University A had got an access agreement for £9,000 fees with fair access spend of £500 per student, then University B, which proposed a £9,000 fee with £500 fair access spend per student, would be able to argue that to deny it the right to charge £9k would be perverse. The quality of the university was not a consideration.

In that initial round, nearly every university had its access agreement approved without conditions; a few had to make revisions. But none were turned down. £9,000 proved not to be exceptional.

Why did universities even suggest such a fee level? In part because fees act as a marker for quality. If your rival charges £9,000, why would you charge less? To do so could be to indicate that you weren’t as confident as your rival in the value of your offer. It isn’t about greed or excess; simply about market position.

So this is my second proposition: there was no incentive for universities not to charge £9,000.

Also relevant was the actual behaviour of students. When setting higher fees for the first time, many university governing bodies recognised that they were entering the unknown. There was a recognition that student numbers might well fall. So a £9,000 fee, together with internal budgeting for fewer students, would make sense. We put on a brave face for the world, but plan for hard times. As it turned out, student numbers did not (after the first year) decline. And this is the third factor which I think is relevant.

Student recruitment has historically been controlled by the government, through a funding cap. Essentially, universities had a recruitment target set by HEFCE; there were penalties for exceeding this. When the cap was relaxed and then removed, from 2013-14 onwards, universities were free to recruit as many students as they liked. This meant that a university which was growing could afford to spend resources on other activities, as in most cases the £9,000 fee was greater than the full cost of teaching. This enabled development of new subjects; investment in new buildings for teaching and research; and investment to improve reputation in, for instance, the REF.

This creates my third proposition: universities had many incentives to grow; few to remain the same size.

Put these three factors together, and a lot of the features of the current system become clear. £9,000 fees were the natural desire for universities. There was no price competition between universities, because enough students (in a growing market) were not price sensitive. So £9,000 becomes the norm.

This isn’t a cartel. Universities are by habit compliant with law and regulation, and the injunction ‘do not discuss fees with your peers’ was, in my experience, very well observed.  But it was bad regulation. The design of the system did not, beyond pious words, prevent £9k becoming the norm.

Equally, universities had for decades been enjoined to behave more entrepreneurially. In relation to PGT fees, universities were encouraged to see these as a price, not a cost. Would students pay the fee? If so, charge it. If the fee wasn’t enough, stop teaching the programme. And this habit infused undergraduate fee decisions. It would be possible to regulate or legislate for a fee regime that reflected cost rather than market price. But this wasn’t done.

What would Nick do?

So what is to be done now? I’ve a simple plan. Ask Nick Barr. He’s Professor of Public Economics at LSE, knows more about higher education funding than almost anybody else in the world, and is wise and fair-minded. Ask Nick Barr how to fund students and universities on a fair and sustainable basis. And do what he says.


Monday 3 July 2017

Increased tuition fees do not cause increased participation

Tuition fees are back on the political agenda, big time. Arguably a significant component in the unexpected relative success of the Labour Party at the 2017 General Election, there are now calls by senior Ministers for a ‘national debate’ on the issue: see, for instance, Damian Green’s speech to the Bright Blue think tank at the weekend.

One aspect of this which is worth examining is the connection between fees and access: on the one hand there is the fear that debt will put people off university (and hinder their subsequent life-chances); on the other there is the evidence that participation by students from less advantaged backgrounds has grown since the introduction of higher tuition fees in 2012. I’ve seen it argued – by people including a Vice-Chancellor of a UK university – that fees have helped with this process.

But of course, correlation does not imply causality. And because higher education is a devolved matter, there’s a simple experiment which can shed light on the question about whether fees encourage participation.

Scottish domiciled students pay no tuition fees if they attend Scottish universities. If they attend universities in England they pay the normal home rate – that is, £9,000. HESA data lets us see whether more Scottish students attended universities in England after fees were increased in 2012, which they should if tuition fees encouraged greater participation.

Here’s the data. It shows the number of undergraduate students attending universities in England from each of the four UK nations: England, Scotland, Wales and Northern Ireland (NI). The data covers the years 2008-09 to 2015-16: that is, four years under the £3k tuition fee regime and four years under the £9k regime. The number of English students is far higher than the numbers from Scotland, Wales and Northern Ireland, so I’ve used two vertical axes. The left-hand axis shows student numbers from Wales, Scotland and NI; the right-hand axis shows students from England.

Data from HESA Student data, table N
The chart shows that the number of English students at English universities grew over the eight years, from about 780,000 to 925,000. The number of Welsh and Northern Irish students at English universities also grew, from 15,000 to 22,000 in the case of Wales, and from just shy of 8,000 to just over 9,000 in the case of Northern Ireland.

The number of Scottish students at English universities did not grow. 4,840 Scottish students attended English universities in 2008-09; 4,255 attended in 2015-16. And just to be clear, there isn’t a peculiar effect of a declining number of eligible Scottish students: the number of Scottish students attending Scottish universities grew from about 84,000 to about 94,000 over the same period.
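The comparison can be made concrete with a quick calculation of the percentage change for each nation (a rough sketch using the approximate figures quoted above, as rounded in the text, not the underlying HESA dataset):

```python
# Undergraduate numbers at English universities, 2008-09 vs 2015-16
# (approximate figures as quoted in the text)
numbers = {
    "England":          (780_000, 925_000),
    "Wales":            (15_000, 22_000),
    "Northern Ireland": (8_000, 9_000),
    "Scotland":         (4_840, 4_255),
}

for nation, (before, after) in numbers.items():
    change = 100 * (after - before) / before
    print(f"{nation}: {change:+.1f}%")
```

Scotland is the only nation showing a fall – which is exactly the point at issue, since Scottish-domiciled students are the only group who could study at home for free.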

So it seems that tuition fees do not cause increased participation. The growth in English students can be explained by the availability of places: the cap on recruitment was removed in the couple of years following 2012, giving universities an incentive to recruit as many students as they wished. But if students could study in their home country for free, as in the case of Scotland, they were immune to the charms and marketing persuasions of English universities.

So when the debate plays out, be careful to spot when correlation (growth in student numbers in England) elides into claims of causation. The evidence is that tuition fees do not cause a growth in student numbers.

Monday 26 June 2017

Lessons for Leaders from the election

The UK’s general election and its fallout were certainly interesting. The outcome means that certainty about funding and policymaking is likely to be harder to come by in the coming years (or months, maybe?). This will have impacts on many areas of professional life.

The fates of the leaders of the two main parties – Theresa May and Jeremy Corbyn – are also instructive. I’m not seeking to make any political points, here, but it seems to me that leaders in any organisation can learn two important lessons from the campaign.

Firstly, let’s look at Theresa May. Before calling the election her party had a commanding lead in the polls, and her standing as Prime Minister was high. She had attained her party leadership, and with it the premiership, by acclamation, as all other candidates withdrew. Her confident messaging on Brexit, perhaps the defining issue for our political times, seemed to resonate with her party and with voters at large. And so she called an election, clearly believing that a larger majority was hers for the taking. And the campaign was for Theresa May’s Conservatives – the Prime Minister’s personal identity and appeal to the voters.

But as we all know, it was not to be. The campaign exposed the lack of depth in her manifesto: cabinet ministers were not involved in drafting key policies, and only became aware of key and controversial policy commitments shortly before publication. This led to the removal of a manifesto commitment about social care costs within a couple of days of its being announced. Another issue was the repetition of one key message – Strong and Stable Leadership – to levels approaching parody. When a Question Time audience openly laughs at the Prime Minister, it is clear that something isn’t right. And a theme emerged of a person who was unable to respond to real issues; who appeared to have taken the election for granted.

Even so, the election outcome was a shock for many. The exit poll overseen by the impressive Professor John Curtice – which turned out to be pretty damn accurate – was ignored and talked down by all pundits and all parties for the first two-to-three hours of results. But by the next morning it was clear that Theresa May had lost her overall majority, and had lost the ability to command without consensus in her party.

What had gone wrong? To my mind, she had committed the sin of believing her own propaganda. Her message of strength and stability had not been tested by an internal party challenge. She had had no feedback at that point. She ran a tight circle within Number 10: her policy staff were said to exclude many of her own MPs and ministers from effective discussion. And so no voice was there to say that things may not have been as she saw them. Theresa May 0, Hubris 1.

And what of the other side? Jeremy Corbyn went into the election with many convinced that his party was going to be humiliated. His own allies were setting a low bar for success – anything above 200 seats, according to Len McCluskey of Unite, would be good. And note that this 200-seat mark would mean losing 30 existing seats. His enemies within his own party had been seeking to have him removed as leader ever since his election. And opinion polls and spread betting companies were suggesting that the party might finish with as few as 130-150 seats, plumbing depths not seen since the 1930s.

Although the Labour Party was uncharacteristically united during the election campaign, the mood amongst its candidates was not good. Few reportedly featured Jeremy Corbyn on their literature; the campaign was a defensive one, targeting marginal seats held by Labour. And the campaign had some poor performances, with a shadow cabinet member hidden from view to avoid media focus on her apparent inability to work with numbers; and Jeremy Corbyn himself in difficulty over the gap between his personal view of nuclear weapons and that of his party.

When the results were in, Labour had come a distant second, with fewer seats than Gordon Brown won in 2010. It was clear that there was no realistic prospect of a Labour Government: the parliamentary arithmetic just did not allow it. But the narrative was all about the fantastic performance of the Labour Party.

  • 40% of the vote! (Never mind that the Conservatives had got 42.4%). 
  • A +9.5% swing to the party. (Never mind that the Conservatives had a +5.5% swing). 
  • Thirty extra seats! (Never mind that the party wasn’t in power.)

What had Jeremy done right? He clearly had authenticity, and communicated better than his adversary. That drove the election result. And he managed expectations very, very well. Everyone expected humiliation; he ended up only defeated. This becomes a triumph in itself – defying the odds, the underdog emerges unbowed. And we do all love an underdog, don’t we?

So, leaders, take note.

Firstly, you’re probably not as brilliant as you may think. You need to listen to others. Don’t be like Theresa.

Secondly, manage expectations. Under promise and over deliver. Everybody loves a positive surprise; nobody likes to be disappointed.

Both Theresa May and Jeremy Corbyn live on to fight another day. Theresa is still PM, although clearly her tenure is time limited. And Jeremy will fight the next election as leader of the Labour Party, if that is what he chooses to do. But Theresa now needs to bring strength and stability where she cannot. And Jeremy can no longer be the underdog: his party expects him to deliver government. Tough times for them both.

Tuesday 21 February 2017

Copy that, Minister

Jo Johnson, Universities minister, has taken time out from piloting the Higher Education and Research Bill through Parliament to take strong action – or at the very least to issue a strongly worded press release – enjoining universities and the QAA to do more to prevent plagiarism. Specifically, to take action against essay mills.

Properly cited lyrics, so it isn't plagiarism
This is an issue which has been bubbling around for a while. The government brought together stakeholders last summer to consider the problems; in August 2016 the QAA published a pretty thorough analysis of the issues and made recommendations about what could be done.

The problem is simple to describe. There is a concern that some students are buying essays written by others and passing them off as their own. This is plagiarism, and is cheating. It isn’t clear how many students are doing this, but a few businesses which supply such essays are obviously doing decent business, and by implication the number of students doing this is non-negligible.  The Telegraph cites 20,000.

The QAA recommend a three-pronged approach:

  • Partnership working by the sector to tackle the problem
  • The possibility of legislation, perhaps like that in New Zealand
  • Using regulators to prevent essay mills from advertising.

Jo Johnson’s statement focuses on the first and third approaches: the sector should sort it out, by means of guidelines and tougher penalties for students; the QAA should also go after the essay mills’ advertising. Legislation isn’t ruled out as a possibility, but is definitely kicked into the long grass. And, gratifyingly for the minister, some proper outraged press coverage about those naughty students.

I’m torn as to the best approach on this.

Early in my career I handled cases of student cheating, and plagiarism was by far the most common ‘irregularity’. This was in the very early days of the internet – pre-Turnitin – and spotting plagiarism was down to eagle-eyed markers. It was often pretty obvious: passages copied from books without editing, so that a phrase like ‘as I argued on page 34’ showed up as out of place in a six-page essay. Often the source material was cited in the bibliography. Whilst there was no doubt that the work had been copied, and without proper attribution, it was difficult with examples like this to see it as a deliberate attempt to deceive, as opposed to a complete lack of understanding about what the essay was trying to test. This was often coupled with poor self-organisation, meaning that at the last minute the student panicked.

In such cases throwing the book at the student would have been wrong, and that was often the academic judgement. A clear fail for the essay; a requirement to resubmit; and work to help the student understand what the problem was: this was a good remedy.

In the case of a student submitting as their own an essay which they’ve bought online, it is harder to be so forgiving. There seems to be more agency involved in committing the exam irregularity. Buying an essay and passing it off really isn’t a failure to understand the practices of academic referencing; it’s a straightforward attempt to cheat. And on that argument, the minister is right. Sort it out, HE sector.

But legislation surely can’t be a bad thing to get on with. Even if the ‘crime’ is committed by the student who submits the purchased essay, there really isn’t an argument for leaving the essay providers untouched. The notion that these are bought for ‘research’ purposes is laughable. The idea that tailored exemplars are a good way to learn is not a strong one. (The use of model answers afterwards to help feed back to students on their performance is a different question.) And the New Zealand law, which empowers the higher education quality regulator to prosecute essay mills, does not look like a bad law.

Would it solve the problem entirely? No – essay mills may be based outside the UK, and beyond the scope of the law. Would it help to convey the message that the practice is wrong? Yes. Would it help to demonstrate that the UK takes academic standards seriously? Yes. Is there a convenient current parliamentary vehicle to achieve this? Unbelievably, yes there is.  The minister could do more to make this happen now.

There’s another angle here. A student who submits a plagiarised essay may get caught, and face a penalty, or may not. But what is absolutely certain is that they will not learn as much about the essay topic as a student who reads and writes and submits honestly, even if they fail or get a poor mark. The plagiariser knows less than the honest student. And this cost is borne throughout their life. To plagiarise is to fail.

If universities could convey this message better to students, we might be getting somewhere important.

Wednesday 15 February 2017

The people have spoken

The votes are in, and there’s a clear favourite: the book I write should be about the future for professional services in higher education.

It's a start, anyway ...
Over 55% went for that option; nearly 40% for the option of organisational change in higher education; and 5% said I should write something else entirely.

A lot of people also highlighted – correctly – that there’s an overlap between the topics. Or at least a causal link. If the future of higher education is one of change, then let’s get the organisational change right. Conversely, change is better when the reasons for it are correctly identified.

So perhaps there’s room for two books here. But one at a time seems like a good rule, so I’ll go with the will of the people.

I’ll be setting up a dedicated website for the new book, and a mailing list so that you can stay specifically in touch about that. (In due course there’ll probably be one of those sign-up widgets but in the meantime, if you’d like to receive weekly emails from me about the book, drop me a line: hugh @ hughjonesconsulting.co.uk …)

That means that this blog here will continue to focus on broad topics of interest. Who knows, maybe even managing organisational change in higher education might show its face from time to time.

Wednesday 8 February 2017

I'm writing a book ...

I think it’s time to write a book. That sentence was easy to write; I expect it’ll get harder.

My first decision is the topic, and I’d like your help. I’ve got two possible subjects in mind, and it’d be great to hear which you think is the best.

The first topic is about the future for professional services in universities. We know that change is a constant, but at the moment, with changes in HE policy and funding, changes in technology, and changes in society, it seems like there’s a lot we don’t really have a handle on. So what do you need to know now that will help you over the next twenty years or so?

The second topic is about re-organisations in universities. Sometimes it can feel like the ink doesn’t dry on an organisation chart before it's changing again. So why do these happen? What can you do if you’re being re-organised? And if you have to lead a re-organisation, how do you make it work?

I’ve set up a (very) short survey, and I’d like to know which of these you think is the best. Or perhaps there’s another topic you’d rather see. If you’d like to hear more as I write, that’d be great too – please let me know.

You can take part in the survey here.

Please let colleagues know about the survey – the more voices, the better the final choice …

Thank you!