Amid their increasing popularity, global university rankings are not free from criticism.
While each institution could use the rankings to plan further improvements, the problem lies in their use for branding and for attracting more students. The lack of a tool to measure the holistic performance of a university is also a concern.
Academic and research institutions have always been ranked. Viewed objectively, it is indeed unacceptable if those rankings are used for “branding” while ignoring the importance and impact of universities as the highest seats of learning and wisdom, and as the vanguard of ethical practice.
Nonetheless, the rankings, with all the disparity they present, do seem to offer advantages. Hence, despite the puzzles those rankings pose, academic institutions are tempted to secure ever higher positions.
Consequently, a large number of universities around the world participate in these rankings every year. Ranking bodies such as Quacquarelli Symonds (QS) and Times Higher Education (THE) use broadly similar quantitative indicators in their global university rankings: reputation in teaching, academic staff-to-student ratio, doctoral-to-bachelor’s student ratio, the number and citation counts of papers published by staff, the proportion of international staff and students, international collaborations, and knowledge transfer for industrial or societal applications.
QS and THE score each of these indicators against selected quantifiable criteria. At the end of the evaluation, each university earns an overall score based on its performance across those indicators.
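As a rough, purely illustrative sketch of how such indicator-based scoring can be rolled up into a single result, the snippet below combines normalised indicator scores through a weighted sum. The indicator names and weights are assumptions chosen for illustration, not the actual QS or THE methodology.

```python
# Illustrative only: a hypothetical weighted-sum aggregation of indicator
# scores, loosely mirroring how indicator-based rankings combine criteria.
# The indicator names and weights below are assumptions, not the actual
# QS or THE methodology.

# Hypothetical indicator weights (sum to 1.0)
WEIGHTS = {
    "teaching_reputation": 0.30,
    "staff_student_ratio": 0.15,
    "citations_per_paper": 0.25,
    "international_staff_students": 0.15,
    "knowledge_transfer": 0.15,
}

def overall_score(indicator_scores: dict[str, float]) -> float:
    """Combine normalised indicator scores (0-100) into one overall score."""
    return sum(WEIGHTS[name] * indicator_scores[name] for name in WEIGHTS)

# Example: one university's made-up normalised scores per indicator
example = {
    "teaching_reputation": 78.0,
    "staff_student_ratio": 64.0,
    "citations_per_paper": 85.0,
    "international_staff_students": 59.0,
    "knowledge_transfer": 71.0,
}
print(round(overall_score(example), 1))  # a single headline score
```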
At the bottom of this chain, in the rankings race, academics are inspired (read: forced) to increase their research productivity, mainly in terms of publications and citations. To some extent, the race for higher publication and citation counts has compromised the authenticity and integrity of academic practice.
Quite recently, members of academia were shocked to find that there were more than 10,000 retractions in 2023. Various forms of scientific misconduct, including compromised peer review, caused those retractions.
Added to that was the recent revelation of anomalous patterns of self-citation used by many academics to boost their research impact.
Hence, adopting an effective means to curb these unethical practices in the academic and scientific world has become more imperative than ever. To identify one, we may borrow the phrase coined by Dr Samuel Hahnemann, the founder of homeopathy: ‘Similia similibus curantur’, which in English means ‘like is cured by like’.
The ‘temptation’ of a higher ranking works as a gas pedal, persuading academics to resort to unethical means to increase the number of publications and citations. A brake pedal that penalises such unethical means might help put the brakes on the rankings race.
In other words, a measure that assigns negative scores to activities that undermine the authentic pursuit of science and knowledge should also be factored into the global university rankings. More specifically, while a university is credited for the number of papers it publishes, it should be ‘discredited’ for any papers retracted for scientific misconduct.
Similarly, anomalous self-citations should also result in points deductions.
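To make the idea concrete, here is a minimal, purely illustrative sketch of what such a penalty-adjusted research score could look like, assuming hypothetical weights for publications, citations, misconduct retractions and anomalous self-citations; it is not any ranking body’s actual formula.

```python
# Illustrative only: a hypothetical penalty-adjusted research score.
# A university is credited per publication and per citation, and
# "discredited" for retractions due to misconduct and for anomalous
# self-citations. All weights are assumptions chosen for illustration.

def adjusted_research_score(
    publications: int,
    citations: int,
    retractions_for_misconduct: int,
    anomalous_self_citations: int,
    w_pub: float = 1.0,       # credit per publication
    w_cit: float = 0.1,       # credit per citation
    p_retract: float = 5.0,   # penalty per paper retracted for misconduct
    p_selfcite: float = 0.5,  # penalty per anomalous self-citation
) -> float:
    """Credit minus penalties, floored at zero so the score cannot go negative."""
    credit = w_pub * publications + w_cit * citations
    penalty = p_retract * retractions_for_misconduct + p_selfcite * anomalous_self_citations
    return max(0.0, credit - penalty)

# Example: 200 papers and 3,000 citations, but 4 misconduct retractions and
# 150 flagged self-citations pull the headline score down.
print(adjusted_research_score(200, 3000, 4, 150))  # 200 + 300 - 20 - 75 = 405.0
```

Under such a scheme, the penalties directly offset the very numbers that misconduct is meant to inflate, which is the ‘like cures like’ logic of the proposal.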
Such measures would make policymakers think twice before pushing researchers to the wall in the pursuit of endless publications, citations, and ever higher rankings.
Prof Dr Mohammad Tariqur Rahman is the Associate Dean (Continuing Education), Faculty of Dentistry, and Associate Member, UM LEAD, Universiti Malaya.
The views expressed here are the personal opinion of the writer and do not necessarily represent that of Twentytwo13.