University Rankings, Publishing, and the Future of Knowledge Production and Academic Employment

The world of academic knowledge production today is very different from what it was just 5-10 years ago, and it will continue to change rapidly in the years ahead.

I will attempt to write some posts to explain and document what is happening, as I think many people in different parts of the world are unaware of some of the developments taking place (or have only a partial grasp of them).

I myself am also struggling to understand what is happening, and this post is thus my own first attempt to “map out” what is going on.

At the meta-level, there are various forces that are affecting the world of academic publishing and knowledge production.

One of the most powerful is the desire of university administrators to get their respective universities to move up in the global university rankings. This, in turn, is taking place in tandem with the massive development, expansion, and internationalization of universities in various parts of the globe.

So, what exactly does this mean? What does this look like in actual terms?

Let’s imagine a dynamic university in Asia or the Middle East or Eastern Europe or (the list goes on and on). It uses English as a medium of instruction (EMI) so that its students can be “globally competitive” and it can attract international students. It also encourages its faculty to publish in “international” journals (which usually means publishing in English).

Finally, to ensure that those publications will improve the university’s global ranking, staff will be encouraged to publish only in journals of a certain rank.

There are various journal ranking systems, and they vary in prestige, but let’s look at a popular one used by many upcoming universities – Scopus.

While people will often refer to the “Scopus ranking” of a journal (and, for convenience, I do so myself), Scopus does not actually rank journals. It is an indexing service that provides data on things like the average number of times articles are cited in a given journal.

Another organization, SCImago, then uses the metrics that Scopus produces to create a ranking of journals, known as the SJR (SCImago Journal Rank). In doing so, SCImago also incorporates into its calculations metrics that are somehow supposed to account for the prestige of each journal. Its algorithm then sorts journals into different “quartiles”: Q1, Q2, Q3, Q4.
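To make the quartile idea concrete, here is a toy sketch in Python of how journals in a subject category, once ranked by SJR score, get split into four buckets. The journal names and scores are invented for illustration; this is a simplification of whatever SCImago actually does internally.

```python
# Toy illustration of quartile assignment: sort journals in a category by
# their SJR score, then divide the ranked list into four equal segments.
# Names and scores below are invented, not real SJR data.

def assign_quartiles(journals):
    """journals: dict of journal name -> SJR score.
    Returns a dict of journal name -> 'Q1'..'Q4'."""
    ranked = sorted(journals, key=journals.get, reverse=True)
    n = len(ranked)
    result = {}
    for i, name in enumerate(ranked):
        # position i (0-based) falls in quartile floor(4*i/n) + 1
        q = min(4 * i // n + 1, 4)
        result[name] = f"Q{q}"
    return result

scores = {"Journal A": 2.1, "Journal B": 0.9,
          "Journal C": 0.4, "Journal D": 0.1}
print(assign_quartiles(scores))
# {'Journal A': 'Q1', 'Journal B': 'Q2', 'Journal C': 'Q3', 'Journal D': 'Q4'}
```

The point of the sketch is that quartile membership is purely relative: a journal’s Q1 or Q4 label depends on where it lands within its category’s score distribution, not on any absolute threshold of quality.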

How exactly SCImago’s prestige metric is developed and works remains a mystery to me (and everyone, as far as I know). However, it’s of critical importance as it clearly exerts an extremely powerful influence.

Here is how it is described on the SCImago website:

“The SJR is a size-independent prestige indicator that ranks journals by their ‘average prestige per article.’ It is based on the idea that ‘all citations are not created equal.’ SJR is a measure of scientific influence of journals that accounts for both the number of citations received by a journal and the importance or prestige of the journals where such citations come from. It measures the scientific influence of the average article in a journal, it expresses how central to the global scientific discussion an average article of the journal is.”

This is all a bit too vague for me to really get a sense of how this works, but let’s move on.

I first became aware of the SCImago/Scopus journal ranking somewhere around 2015. By that time, I had been involved in Asian Studies for over 20 years (as a graduate student and professor), and I believed that I had in my head a clear sense of which journals were the most prestigious, and I’m confident that my assessment was shared by my colleagues.

I was then very surprised to find that the SCImago/Scopus rankings did not fit my understanding of which journals were better than others. It did in some cases, but not in others.

For scholars in places like North America, where the rankings game has not taken hold in the way that it has in some other parts of the world, how SCImago/Scopus ranks journals doesn’t really matter. When a scholar goes up for tenure or promotion, the committee and Dean who review her/his dossier will judge the quality of the journals that the scholar published in (using the same kind of knowledge that I have in my head from working in a field for decades), and will not base their decisions on what SCImago/Scopus says.

In other parts of the world, however, those decisions will be made by relying heavily, if not entirely, on metrics like those produced by SCImago/Scopus.

This has extremely serious consequences for journals and knowledge production, as well as for the careers of scholars. Let’s take a look now at why that is the case.

The above two images list titles of Asian Studies journals that are categorized as Q1 and Q4 journals. What can we see?

One thing we can see is that some people, like historians of Ming Dynasty China, have a bleak future. In places like North America, there are very few (or no) jobs getting advertised these days that such a historian could apply for.

Meanwhile, for universities in other parts of the world that are playing the rankings game by using Scopus metrics, a historian of Ming China is not going to have the publication record that will help boost a university’s rankings.

Ming Studies is a solid and respected journal, but our Ming historian can’t publish there, because its Q4 status will bring down a university’s ranking.

At the Q1 level, our Ming historian could publish in the Journal of Asian Studies, but the submission-to-publication time of that journal is really long (like 3 years), and the rankings-driven university needs publications now, and the scholar also has KPIs to meet now.

That pretty much just leaves the journal Asian Studies, published in Slovenia, but obviously one can’t publish in the same journal year after year.

So, what does that mean? I think the dark answer is that it means good-bye to Ming history and our Ming historian. However, it’s also bad news for journals that find themselves categorized as Q3 or Q4.

Ok, maybe there’s an EMI university or program in China where this person can find a job, but China has its own (and extremely rigorous) set of publication rules, and they have a tendency to change over time, so there is no guarantee that our Ming historian will find gainful employment and stability in the Middle Kingdom.

Ming historians are not the only people who are being affected by the nexus of university rankings and Scopus metrics. It affects scholars in many other fields that are not “central to the global scientific discussion.”

Further, while not all universities that require their staff to contribute to the rankings employ Scopus/SCImago metrics, they all employ something comparable, and the outcome for scholars in fields that are not “central to the global scientific discussion” is likewise similar.

If any of this is unfamiliar to you, I would encourage you to visit the SCImago website (https://www.scimagojr.com/). Look up journals, compare them, and try to figure out why they are ranked the way they are.

Then also think about the impact that those journal rankings, when combined with the drive for higher university rankings, will exert on knowledge production and employment in the years ahead.

8 Comments

Raqib
4 years ago

A very thoughtful piece, one that should be read and reflected upon by both seasoned and new researchers.

JD
Reply to  Raqib
4 years ago

Quite the contrary and in fact unnecessary. The majority of academics almost instinctively realize how the incentive structures work and how to collaborate in order to survive in the system.
It would be wholesome – if one considers the search for truth and working solutions to actual problems as the purpose of the sciences – if the piece was read and reflected upon by those people in the political and administrative realm who set the “standards of success”, because financially dependent academics (= almost all of them) will deliver not necessarily what is true or valuable but what is requested by the structures.
In some sense the academic system works like a planned economy. The party cadres define the benchmarks, and the academics are supposed to piece together their products in order to achieve the output targets.
The larger problem is the disconnect between the short-term benefits for the people collaborating more or less willingly with a corrupt system and the long-term damage done to the societies that pay for the spectacle in the form of resources misallocated to projects that produce crap.

JD
Reply to  Raqib
4 years ago

From a purely practical point of view, you’re of course right.

Saigon Buffalo
4 years ago

JD: “…because financially dependent academics (= almost all of them) will deliver not necessarily what is true or valuable but what is requested by the structures.”

This sounds eerily similar to the accusation reformist literati threw at the competitive examination system.

JD
Reply to  Saigon Buffalo
4 years ago

I don’t think that we would fundamentally disagree about the principles of good governance if we were to exchange our views in a face-to-face conversation instead of using the comment section of a blog that is not ours.

The actual problem, however, remains; our “Ming-China scholar” is still looking for steady employment and/or needs to publish in Q1-journals on a regular basis to satisfy his current employer.

It might be useful to try to find out how journals enter into the Q1 bracket and then to try to join/create one.