There are many reasons not to use measures of study quality in budget schemes for Higher Education. Most importantly, we don’t really know yet how to measure study quality. The easiest way is to ask students for their assessment. But it seems obvious that this can never represent the whole picture: students might be experts on their own learning, but not on teaching. And from their perspective, easy courses and nice, funny teachers contribute to their well-being – yet we would assume that neither is directly linked to the quality of their studies, and especially not to their learning outcomes. On the other hand, we would like to see students gain in knowledge and competences, both of which might profit from hard work, long study nights, and challenging teachers. “Becoming an academic” is – or should be – a transformation process which might come with discomfort, exhaustion, and sometimes phases of disorientation and even anxiety, and students might only realise later in life whether that experience was worthwhile or soul-crushing. Because of such contradictions, we need to make sure that when assessing study quality, we are not producing negative incentives – e.g. grade inflation, unchallenging teachers, or admitting only students who slip easily into the role of an academic (see Kaili Rimfield, “The same genes influence exam results across a range of school subjects”).
What makes this issue additionally complicated is the fact that the assessment of study quality serves a wide range of different purposes: First of all, it is important information for future students when choosing their university. Secondly, it is part of the universities’ accountability, proving that they indeed offer value for money. Thirdly, it seems obvious that universities should have an interest in knowing how good the study quality they offer is – in relation to their own educational goals and in comparison to other universities.
Having said that, offering an education of good quality is one of the core reasons why we have higher education institutions and why we fund them with public money. So it stands to reason that study quality should be one dimension of how we assess universities. And if we want to eradicate bad teaching quality, we should also consider study quality when distributing money to higher education institutions.
So how do we assess study quality? By the professional success of the graduates? By how teachers assess the students’ gain in competences? Or by how students assess their studies? The answer is most probably: all of the above – though not all of the above at once.
In order to assess the quality of teaching, we need to do at least three things. First, we need to acknowledge the fact that there are different instruments to measure study quality, and that these can easily contradict each other without any of them losing their validity as instruments in their own right. Second, we need to distinguish instruments which take the perspective of the students from those which take the perspective of the educators, of the funding institution, or of employers and general society. And third, which might be the hardest, we need to acknowledge the purpose for which a certain measure was developed and not use it for inappropriate purposes.
There are several examples of methods which reflect aspects of study quality for a specific purpose but are clearly not applicable for other purposes:
Course evaluations. These are most helpful for teachers and educators – both as feedback from the students and as a pedagogical tool, helping the students to identify and assess those aspects of their learning environment which help them succeed. There are also examples of course evaluations which help the students to self-assess their learning progress – which is again useful information for the teacher for further development of the course. So while course evaluations might influence and help develop study quality, it is not advisable to use them to assess the study quality of an institution: the results need to be assessed against the specific circumstances of the course. They might be comparable within one faculty, but already between faculties course evaluations cannot be compared directly, much less between universities.
Professional success of graduates. The professional success of alumni is also considered by employers as well as policy makers to reflect study quality. Considering the fact that Higher Education is the most expensive form of education, it makes sense that policy makers want to make sure that it leads reliably to a good and life-long income. So far, Higher Education on average continues to outperform other kinds of professional or vocational education. However, differences between fields and rising unemployment in times of crisis lead time and again to the discussion of whether Higher Education focusses enough on aspects of employability.
On the other hand, the professional success of graduates is also part of the reputation game. Employers often see the institution from which an applicant graduated as a proxy for the quality of education that they enjoyed. Some employers use university or field-specific rankings to identify the “best” university. But especially the university rankings are often created for a different purpose, and do not reflect the institutions’ ability to develop their students’ intellectual competences or the employability skills that the employers seek.
Success rates. In many countries, success rates – i.e. the probability that a student successfully graduates from a course – are part of the funding scheme. Sometimes they also include the planned duration of study. In some countries, access and retention of underrepresented groups are considered as one aspect of study quality. However, as academics never tire of pointing out, this can lead to unintended effects, like lowered standards. Demanding too much from students or not offering enough support is certainly an aspect of low study quality, but improving teaching and learning is only one possibility to address it.
University Rankings including student feedback as one of several information sources. Not all rankings include student feedback, but at least those which are specifically created as an information source for future students – such as U-Multirank – include the students’ perspective on their university and course.
One of the first national surveys was the National Student Survey in England, initiated by the National Union of Students more than a decade ago, and now carried out by a private company on behalf of the HE funding authority HEFCE; its results, published in the form of a ranking, are meant to hold HEIs accountable. Generally, the results are widely published and acknowledged, but there is also a growing dissatisfaction with the survey, as it is not really clear what the results reflect.
Another example of a national student survey is the Kandipalaute survey in Finland. Targeting all graduates of Bachelor programmes at Finnish universities, the survey is used for two distinct purposes: First, there is a set of 13 items, reflecting specific aspects of the students’ view on study quality, which is used to distribute 3% of the university budget. Second, the complete survey, with over 130 questions, is used to gather a complete picture of the students’ study experience, and is part of the universities’ management information systems and quality management. The Kandipalaute project goes back to an initiative by the national student union in Finland. The development of the survey, though, is in the hands of a working group in which the 14 Finnish universities, the student union and the ministry are represented. Thus, the 13 “budget items” reflect a current and common view on study quality across the HE sector in Finland.
The British Minister for Universities and Science recently announced the plan to develop a Teaching Excellence Framework. It is not yet clear how this will be set up or used, but apparently the results will be used to limit or raise the student fees per university. These plans raise many questions and also concerns among academics and university leadership.
Give universities the opportunity to prioritise study quality! And yes, that means: Money.
The data collected by instruments like rankings, student surveys, or – in the future – a Teaching Excellence Framework are only valuable for the universities if they can derive lessons learnt from them. And those lessons address very different questions, depending on who is concerned:
In many countries, the game is rigged in favour of research at universities. This is where reputation and money are, and even more importantly, it is where academic careers are made. Thus, the idea of assessing universities by their ability to offer a high-quality study experience and learning outcomes is something that could finally provide university teaching with as much weight and importance as research already has. At least on an institutional level – academics will still need to put a lot of focus on research in order to have a successful career. And it could offer the opportunity to create a common understanding of what the HE sector considers study quality: this is at least one of the outcomes of the Finnish student survey. The debate around the survey and the budget scheme led the way to a common understanding of what the aspects of study quality should be, across a very diverse group of institutions, expressed in only 13 questions.
While policy makers and employers might be content with more reliable data and league tables, this is not what the universities need. In the Finnish survey, the 13 questions used in the budget scheme are part of a longer survey which offers further insights into the student experience. This information can be used by the universities in their quality management systems in order to improve their offerings according to their profile and educational goals.