I keep saying I have mixed emotions about the Alberta government’s new decision to impose performance-based measures on post-secondary institutions to allocate funding. In reality, I have pretty strong emotions about this: it stinks and I have piles of doubt.
I understand the intention behind the decision. We’re in a global recession; everybody is tightening the purse strings. But how we make decisions, particularly regarding education and knowledge creation, should involve awareness and evidence.
Here are five points that rose to the surface as I dug through my anger and fear about this decision.
1) There is little evidence that performance-based measures benefit post-secondary institutions, education, research, or our society.
Installing performance measures for learning and research institutions has already been done elsewhere in the world, with less than stellar results. The UK and Australia both apply performance-based measures to post-secondary institutions for research, and even teaching. I’ve spoken to research administrators from UK institutions and heard stories of how the research measures (e.g., number of publications, number of collaborating research partners) have adversely affected both the quality of research and the type of research being done. Since the introduction of performance-based measures, researchers now focus on research that has the potential to score highly and, therefore, be highly funded, rather than doing research for research’s sake or engaging the community to help make a change. In fact, some researchers are quite vocal about how these performance-based measures are impeding research, education, and community engagement (see Further Reading, below).
2) The measure often becomes the target.
Was your allowance ever based on how many ‘A’s you were awarded on your report card? This measure teaches children that the result is more important than the learning.
As any tenure-track or tenured academic will likely attest, the measurement can often become the target and can influence the result. If tenure and promotion are dependent upon how many scholarly articles are published and how much external funding is awarded, then those measures will become the academic’s targets. Community engagement is often de-prioritized under ‘Service’, pushing efforts to make a change in the world far behind publishing in journals that only other researchers read. The measurements become a box within which an academic is graded; if the academic wishes to disseminate results, teach, or do research outside this box, they will be penalized for it. Individuality and creativity are sacrificed to the need to conform to specific criteria measured in specific ways.
As well, just about any student will tell you that if something isn’t being graded, then there’s no reason to learn it. This has likely contributed to the reported lack of soft skills and critical analysis in the younger generations — these things either aren’t graded or can’t be captured by current assessments. Consequently, they aren’t learned. And, as a society, we are suffering for it.
3) Academics are knowledge creators, and creativity can be difficult to measure.
How do we measure art? Is a Picasso painting more valuable to the world than one of his drawings? Is art more valuable than biology?
Creativity and innovation are often valued only by the resultant product: telephone, radio, painting, book, etc. How do we measure — acknowledge the value of — the creative process? The process of attaining a tangible, impactful result can take years of effort, sometimes involving the direct or indirect efforts of many people. Thomas Edison’s famous quote wonderfully illustrates the creative and research process: “I have not failed 10,000 times. I have not failed once. I have succeeded in proving that those 10,000 ways will not work.”
Research and education are processes that involve creativity, failure, collaboration, time, and space. Post-secondary institutions value these processes and have historically been safer places within which to think outside the box, challenge the status quo, and create new knowledge and approaches. Making mistakes and failing is how we learn. Performance-based measures do not value the process of research, of learning, of creativity, of failure; they reward only the successful products of the process. In fact, such measures would have been prohibitive to Edison’s innovation.
4) The value of impact is a complex picture, not conducive to performance-based measures.
Our society is impacted by post-secondary institutions, at minimum, by educating students and by creating knowledge through research.
Canadian researchers reach out and involve community members, non-profit agencies, and volunteer groups in research and teaching. Community engagement in research is primarily driven by external research funders, but it also makes sense. It makes sense to involve those groups who may be affected by the research. Community engagement in teaching is driven by the need to prepare the students for reality by providing practical scenarios, and can greatly improve the student experience. Engagement in both research and teaching benefits the institution, community groups, and Canadian society.
In the UK, community engagement is part of the research impact measurement. This sounds good; however, research impact can be very difficult to measure: it is often qualitative and indirect, and can take years, if not decades, to realize. As Dr. David Phipps states, of 6675 case studies of research impact collected by the UK’s Research Excellence Framework (REF), “there are 3709 unique ways to get to impact.” This demonstrates that there is no one way to measure the impact of research on society because there is no one way research impacts society. I suspect the same could be said for the way in which teaching impacts students and society.
In the Alberta model, impact may need to be visible and measurable for annual reports. Is this realistic?
5) Who will evaluate whether or not an institution has performed well?
The UK and Australian governments that instituted and manage these performance-based measures also evaluate the post-secondary institutions. If this is the expectation of the Alberta government, then who will be employed to evaluate the performance of our post-secondary institutions? Public servants? Academics? How will these people be trained to evaluate performance against these measures? If the Alberta government is employing people to evaluate post-secondary institutions, is it really saving money by installing these measures?
While the new performance-based measures are to be determined by each institution, we should be keenly aware that these measures may ultimately affect the learning, teaching, and research of the institution. Enrolment and graduation numbers are already standard measurements, but how well they capture the quality of the student experience is questionable. The number of external research grants held by an institution doesn’t capture the hard questions being asked or the long process of investigation. What will happen to the student experience when money is attached to these existing measures? Will new knowledge be created through research even though it doesn’t directly and visibly impact society in the immediate future? How much creativity and innovation can be expected when money is only dished out for products?
How much does an ‘A’ grade cost? I argue it costs creativity, individuality, and the ability to learn and grow. And those are steep prices, indeed.
** Since writing this, I discovered that Ontario has performance-based measures for their post-secondary institutions. I would be grateful to hear from folks in Ontario regarding this system.
Addendum (Feb 3, 2020): Kudos to Dr. Sarah Eaton for calling it. See her post here about the McKinnon report and its possible consequences.
Selected Further Reading
CBC – New performance-based post-secondary funding model elicits mixed reactions, January 20, 2020
Oxfamblogs.org – If academics are serious about research impact, they need to learn from advocates, July 4, 2017.
Derek Sayer – Rank Hypocrisies: The Insult of the REF, 2014
Department for Business, Energy & Industrial Strategy (UK) – Review of the Research Excellence Framework: evidence report, July 28, 2016.
Alexander Clark & Bailey Sousa – How to be a Happy Academic, 2018
The Research Whisperer – The measurement tail should not be wagging the impact dog, December 4, 2018.