Meeta Sengupta works at the cusp of policy and practice across the education and skills spectrum and enjoys sharing her gleanings through her writing for a wider audience. She has been an investment banker, a researcher, an editor, a teacher and a school leader across continents. A keen observer of how economics, foreign policy and investments affect the policy, and thence the practice, of education, she works with leaders to design interventions that improve the quality and process of education. Designing education processes to realise the potential of individual students is at the centre of her education philosophy. Meeta has worked both as a policy observer and at the coalface of education, across the board and across countries. She has served as a governor of an aided school, sat on the management committee of a residential school, managed an academic centre in an elite postgraduate management school and led a business school supported by a community college. She has worked with children, teenagers, business school and PhD candidates, and with those seeking to rebuild their lives through education. Meeta W Sengupta is a Fellow of the Salzburg Global Seminar, among others, and can be contacted via her personal blog at meetawsengupta.wordpress.com/about
Just this week the annual QS league tables were released, and, as expected, Indian universities were not in the top 200 or 300 ranks. This has been the trend for years across all the league tables, including the Times Higher Education Rankings and the Shanghai Rankings. The distress at not ranking high is palpable, but before we aim to climb the rankings, it might be worth reviewing what league tables can and cannot do for us.
League tables are overrated, clearly. Then why do we pay so much attention to them? Because they are all we have as a tool to bring an objective comparison to the quality of institutions of education.
Every soul of some erudition knows that the right thing to do is to look into the middle distance, then narrow their eyes and shake their head while slowly saying, "Well, you know... league tables... they are not really a measure."
True. They are not a measure (of what? Presumably quality). They are a collation of proxy measures of quality in education institutions that enables comparison.
League tables can only measure the things that can be quantified. Their criteria have to be designed so that data can be collected from all educational institutions in a consistent manner. This is their claim to fame: the ability to bring objectivity, and therefore the ability to compare across contexts.
There is a lot that cannot be captured by league tables. Some will say, especially in education, that none of the things that really matter can be captured by them. There is as much truth in that as in saying that a two-dimensional photograph cannot capture three-dimensional reality.
So, while research output can be measured by the number of papers published, one cannot really use that to judge the quality of those papers. To manage that, one tries to restrict the set to journals of known and accepted quality, often peer reviewed. If the papers are accepted by these journals, they are counted in research output; if not, they are not. There are a few problems here. As every academic writer knows, journals are often accessible only to a certain clique who are the guardians of tradition (another proxy for quality, possibly). Each journal has its own style of analysis and writing. So a bright young academic finds that even good work has to conform in many ways, and if it is rejected, they must recast it for another journal, which often means rewriting it completely. If that academic is to be judged on the number of papers accepted, this is quite unfair to them. On the other hand, those with access to the journals can churn out many more articles than their research warrants. Creating three papers out of one significant piece of research is not unknown.
Then, if mere quantity is not enough, one should look for research impact, which is objectively measured in citations. Well, that is the best objectivity can do, little more. One could also, though I do not know of any league tables that do this, count citations in papers that supported patent applications. If a paper has been in the citation chain of an actual patented innovation, then it has had impact; otherwise the research was merely theoretical.
Before the historians and philosophers start baying for my blood, let me step up and say: I agree! Research is meant to be theoretical. It is meant to add to the body of knowledge regardless of its current usability. There are enough examples in the sciences, too, of innovations that found use only decades later. When they were created, they would not have added to the count of any league table, especially if they languished as mere working papers.
Similarly for teaching and learning. How can we measure teaching quality (discussed here)? We can measure teacher qualifications, or student achievement in standardised tests, but who can really say, objectively, that excellent learning happens in the classrooms of this institution and not another? How can they be graded and ranked? There is only so much league tables can do. There is no way the impact of learning can be measured across the lifetime of the learner. Nor can any league table ever measure the value of peer networks, though this is where it gets interesting: while there is no clear metric for peer networks, the universities that top the league tables are the ones with the most useful networks.
What league tables also cannot cover is the culture and ethos of an organisation. No league table can capture what the HBR case study on gender and class, discussed here, reveals.
Does this mean we reject league tables outright?
Of course not.
But when we use and quote league tables, especially in education, we must do so with caution and cognition.
Firstly, do not take them too seriously. They are a snapshot and serve a particular purpose. They cannot serve all institutions and countries; they can merely indicate what is important to most. For example, India may not be ready to climb the league tables yet; it may not be the priority for the nation, as discussed here. If India wants to make a place for itself on the league tables a priority, then it should focus on that, as described here.
Newspaper and magazine league tables have to be a simple composite in order to have a global span. But there are other ways of constructing comparative benchmarks that may be more productive, depending upon the need and purpose of the benchmarking exercise. Not all benchmarking exercises create rankings, as league tables must. Quality can be tracked through simple yet sophisticated exercises that can be designed according to the needs of a group or of individual institutions. Till we invest in creating more customised benchmarks and trackers, we will have to make do with league tables.