Tuesday 15 August 2017

Make the National Student Survey Great Again!

The NSS data came out last week. This year there's a new set of questions – some are the same as in previous surveys, some are amended versions of previous questions, and some are entirely new. This means that year-on-year comparisons need to be treated with a little caution.

But one aspect of the reporting continues to bother me. The survey measures final year undergraduate students' responses to a number of statements – for instance, “Overall, I am satisfied with the quality of the course” – on a Likert scale: a 1-5 scale, where 1 = definitely disagree; 2 = mostly disagree; 3 = neither agree nor disagree; 4 = mostly agree; and 5 = definitely agree. The data is presented by simply summing the percentages who respond 4 or 5 to give a ‘% agree’ score for every question at every institution. Which in turn means universities can say “93% satisfaction” or whatever it might be.
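To make the arithmetic concrete, here's a minimal sketch in Python of that ‘% agree’ calculation. The response distribution is made up for illustration, not real NSS data:

```python
# Illustrative response shares for one question at one institution
# (proportions summing to 1, not real NSS figures)
responses = {1: 0.02, 2: 0.03, 3: 0.08, 4: 0.35, 5: 0.52}

# '% agree' = share answering 4 ('mostly agree') or 5 ('definitely agree')
agreement_score = 100 * (responses[4] + responses[5])
print(f"% agree: {agreement_score:.0f}%")  # -> 87%
```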

This is simple and straightforward, but loses important data which could be summarised by using a GPA (Grade Point Average) approach – just like the HE sector commonly uses in other responses, for instance in REF outcomes. Using a GPA, an overall score for a question reflects the proportion giving the five different responses.

To calculate a GPA, there’s a simple sum:

GPA = (proportion saying ‘1’ x 1) + (proportion saying ‘2’ x 2) + (proportion saying ‘3’ x 3) + (proportion saying ‘4’ x 4) + (proportion saying ‘5’ x 5)

where each proportion is the percentage of respondents giving that answer, divided by 100.

This gives a number with a maximum of 5 (if all respondents definitely agreed) and a minimum of 1 (if all respondents definitely disagreed).
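Here's the same calculation as a sketch in Python, reusing the illustrative distribution from above (percentages expressed as proportions):

```python
# Same illustrative distribution as before (proportions, not percentages)
responses = {1: 0.02, 2: 0.03, 3: 0.08, 4: 0.35, 5: 0.52}

# GPA = sum over the scale points of (score x proportion giving that score)
gpa = sum(score * share for score, share in responses.items())
print(f"GPA: {gpa:.2f}")  # 1*0.02 + 2*0.03 + 3*0.08 + 4*0.35 + 5*0.52 = 4.32
```

So a distribution that scores 87% on the ‘agreement’ method comes out at 4.32 on the GPA method, and the lower scale points actually count against the institution rather than vanishing.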

If GPA were used for the reporting, there'd still be a single number for users to see, but it would carry more nuance. GPA measures how strongly people agree or disagree, not just the proportion who are positive. And this matters.

I looked at the raw data for all 457 teaching institutions in the 2017 NSS. (This is not just universities but also FE Colleges, which work with universities to provide foundation years, foundation degrees and top-up degrees, and alternative providers.)  I calculated the agreement score and the GPA for all teaching institutions for question 27: Overall, I am satisfied with the quality of the course. And then I rank-ordered the institutions using each method.

What this gives you is two ordered lists, each with 457 institutions in it. Obviously, in some cases institutions get the same score; where this happens, they all share the same rank. And an institution's rank reflects the number of institutions above it in the rank order.

So, for example, on the ‘agreement score’ method, 27 institutions score 100%, the top score available in this method. So they are all joint first place. One institution scored 99%: so this is placed 28th.  Similarly, on the GPA ranking, one institution scored 5.00, the top score using the GPA method. The next highest score was 4.92, which two institutions got. So those two are both joint second.
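For anyone reproducing this in code rather than Excel, here's a sketch of that tie-handling (sometimes called ‘competition’ ranking), with made-up institution names and scores:

```python
# Made-up scores for illustration
scores = {"Inst A": 100, "Inst B": 100, "Inst C": 99, "Inst D": 99, "Inst E": 97}

def competition_rank(scores):
    # rank = 1 + number of institutions with a strictly higher score,
    # so tied institutions share a rank and the next distinct score
    # picks up after all of them
    all_scores = list(scores.values())
    return {name: 1 + sum(s > score for s in all_scores)
            for name, score in scores.items()}

print(competition_rank(scores))
# {'Inst A': 1, 'Inst B': 1, 'Inst C': 3, 'Inst D': 3, 'Inst E': 5}
```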

What I did next was compare the rank orders, to see what difference it made. And it makes a big difference! Take, for example, the Anglo-European College of Chiropractic. Its 100% score on the ‘agreement score’ method puts it in joint first place. But its GPA of 4.39 places it in joint 79th place. In this instance, its responses split 61% ‘mostly agree’ and 39% ‘definitely agree’. Very creditable. But clearly not as overwhelmingly positive as Newbury College, which with 100% ‘definitely agree’ was joint first on the agreement score method and also in first place (all on its own) on the GPA measure.

The different measures can lead to very significant rank-order differences. The examples I’m going to give relate to institutions lower down the pecking order.  I’m not into name and shame so I won’t be saying which ones (top tip – the data is public so if you’re really curious you can find out for yourself with just a bit of Excel work), but take a look at these cases:

Institution A: With a score of 87% on the agreement score method, it is ranked 138/457 overall: just outside the top 30%. With a GPA of 3.95, it is ranked 349/457: in the bottom quarter.

Same institution, same data. 

Or try Institution B: with an agreement score of 73% it is ranked 382/457, putting it in the bottom one-sixth of institutions. But its GPA of 4.28 places it at 129/457, well within the top 30%.

Again, same institution, same data.
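Putting the two rankings side by side is straightforward once both sets of ranks exist; here's a sketch using just the figures for Institutions A and B quoted above (the competition_rank helper from the earlier sketch would supply the full 457-institution versions):

```python
# Ranks for the two anonymised institutions discussed above (out of 457)
agree_ranks = {"Institution A": 138, "Institution B": 382}
gpa_ranks = {"Institution A": 349, "Institution B": 129}

for inst, agree_rank in agree_ranks.items():
    gpa_rank = gpa_ranks[inst]
    print(f"{inst}: % agree rank {agree_rank}/457, GPA rank {gpa_rank}/457, "
          f"shift of {gpa_rank - agree_rank:+d} places")
```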

In the case of Institution A, 9% of respondents ‘definitely disagreed’ with the overall satisfaction statement, which dragged its GPA down. Nearly one in ten students were definitely not satisfied overall.

In the case of Institution B, no students at all disagreed that they were satisfied overall (although a decent number, more than a quarter, were neutral on the subject). This meant a higher GPA, but an agreement score held back by that non-committal quarter.

I’m not saying that institution A is better than B or vice versa. It would be easy to argue that the 9% definitely disagree was simply a bad experience for one class, and unlikely to be repeated. Or that the 27% non-committal indicated a lack of enthusiasm. Or that the 9% definitely disagree was a worrying rump who were ignored. But what I am saying is that we’re doing a disservice by not making it easier for applicants to access a more meaningful picture.

The whole point of the National Student Survey is to help prospective students make judgements about where they want to study. By using a simple ‘agreement’ measure, the HE sector is letting them down. Without any more complexity we can give a more nuanced picture, and help prospective students. It’ll also give a stronger incentive to universities to work on ensuring that nobody is unhappy. Can this be a bad thing?

GPA is just as simple as the ‘agreement score’. It communicates more information. It encourages universities to address real dissatisfaction.

So this is my call: let’s make 2017 the last year that we report student satisfaction in the crude ‘agreement score’ way. GPA now.

Tuesday 1 August 2017

Value for money

Universities seem to be having a torrid time, at least as far as their standing in the political firmament goes. As well as pension headaches for USS member institutions (mostly the pre-1992s), there are high-profile stories on VC salaries, Lord Adonis' campaign about a fee-setting cartel, and (low) teaching contact hours. So far, so not very good at all.

There's a feeling that this might be more than a quiet-season set of grumbles: David Morris at Wonkhe writes interestingly on this. For what it's worth, I suspect that this is indeed politically driven rather than accidental. Maybe Lord Adonis is marking out ground for his re-emergence within a new model Labour Party; maybe Jo Johnson is preparing for tough discussions around future fees. But whatever the end point, it's worth looking at whether the concerns are real.

An underlying point is value for money. The charge is that (English) students don't get a lot for their money. One quick way to look at this is university spend on staff, the single biggest item in universities' accounts. HESA publish handy data on student numbers and staff numbers. It's straightforward to calculate the ratio of students to academic staff over the years.
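As a sketch of that calculation, assuming the HESA figures have been pulled into a simple CSV with hypothetical ‘year’, ‘students’ and ‘academic_staff’ columns (the real HESA downloads are laid out differently):

```python
import csv

# Hypothetical file and column names, for illustration only
with open("hesa_staff_students.csv", newline="") as f:
    for row in csv.DictReader(f):
        ratio = float(row["students"]) / float(row["academic_staff"])
        print(f"{row['year']}: {ratio:.1f} students per academic staff member")
```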

[Chart: students per academic staff member, 2004-05 to 2015-16. Source: HESA, my calculations]
The data show that from 2004-05 to 2011-12, for every member of academic staff there were about 14 students. In 2012-13 - the first year of the new fees regime in England - this ratio started to fall, and by 2015-16 there were just over 11 students for every member of academic staff.

Does this mean that the stories of low contact hours and questionable value for money are wrong? Not necessarily - the data doesn't speak to the reality at individual universities or programmes, nor does it describe any individual student's experience. But it does show that universities have invested in the most important element of their provision: academic staff.