The Impact Factor is Dead (It Just Doesn’t Know It Yet)

Losing an IF is not punishment if you never wanted one in the first place.

Your Sunday read, 24th November 2024

A well-researched original piece to get you thinking.

Was this newsletter forwarded to you? Sign up to get it in your inbox every week. Know someone who will enjoy The Scholarly Letter? Forward it to them.

Hey Scholar👋

Our story this week is about power, influence and a poor understanding of statistics. It's a story 60 years in the making, about the little guy who defied the system and, at least for now, seems to be winning. 

So grab a beverage and your reading glasses, and let's get thinking.

The Impact Factor is Dead (It Just Doesn’t Know It Yet)

- Written by The Tatler

No matter how you feel about it, the golden rule to succeeding in academia is this:

“publish in journals with a high Impact Factor”

Personally, I can't recall ever being told what an Impact Factor (IF) is or why we should care about it so much. That didn't stop me, in my PhD interview, from confidently answering that the best place to find trustworthy articles was in “peer-reviewed, high-impact journals.” My supervisor-to-be nodded approvingly. I was offered the position. Thank goodness they didn't ask me, “What is the purpose of an Impact Factor?” I wouldn't have known.

There is nothing more desirable in modern academia than being associated with a high Impact Factor in some way: 

  • researchers of all levels, from the wide-eyed PhD student to the professor at the head of the department, aim to publish in high-IF journals;

  • hiring committees want candidates with strong IF scores in their publication lists;

  • all over the world, Editors-in-Chief fret over whether the IF of their journal will rise or fall (I should know; I used to work for them).

I scream, you scream, we all scream for Impact Factors.

Without an Impact Factor, it's incredibly hard for a journal to be taken seriously. Impact Factors are treated as proof that the information contained inside a journal is safe for consumption. Losing an IF is serious: it's basically how journals are punished for dodgy practices like excessive self-citation (which is, ironically, a cheat code to boost IFs) or for a lack of rigor in peer review.

A few weeks ago, one of the biggest stories in the publishing world was this headline:

“Journal eLife to lose impact factor”

This, to put it mildly, absolutely blew my socks off. I could not believe my eyes. If you're not familiar with eLife, I am not exaggerating when I say that this journal is the Cool Uncle of Publishing with a well-deserved reputation for publishing good, honest science. In my mind, there was no way they had been cutting corners. I was right.

Before continuing with this story, I'd like to ask you a question:

“What is the purpose of an Impact Factor?”

We're talking about it an awful lot, but truly, do you know?

One of the most harmful myths in academia is that the IF measures the quality of research published in a journal. It was actually developed to estimate how popular a journal was among members of the research community before the Internet. The idea was that if a journal received a lot of citations overall, that probably meant a lot of researchers were reading the journal. In the 1960s, when the IF was created, journals were less specialized, and librarians used it to choose subscriptions for the broadest audience on tight budgets. Using IF to measure article quality is a mistake 60 years in the making.
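
If you've never seen it spelled out, the IF is nothing more than a ratio computed over a two-year window (this is the standard definition in rough form, not Clarivate's exact bookkeeping):

$$
\mathrm{IF}_{2023} \;=\; \frac{\text{citations received in 2023 by items published in 2021 and 2022}}{\text{number of citable items published in 2021 and 2022}}
$$

So an IF of 5 simply means that, on average, the journal's articles from the previous two years each picked up about five citations this year. Nothing in that ratio says anything about the quality of any individual article.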

The only thing worse than using a metric incorrectly is using an obsolete metric incorrectly. With research going digital, estimating readership using journal-level metrics is no longer necessary: you can see how many times an individual article was viewed, downloaded, or cited, right on the journal website.
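
To make that concrete, here is a minimal sketch (in Python, assuming the requests library) of how anyone can pull a per-article citation count from the public Crossref API. The DOI shown is only an illustrative placeholder, and this is obviously not how Clarivate computes anything:

```python
import requests

def article_citation_count(doi: str) -> int:
    """Return the number of citations Crossref has recorded for one article."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    # Crossref exposes the per-article citation tally as "is-referenced-by-count"
    return resp.json()["message"]["is-referenced-by-count"]

# Example call; swap in any DOI you actually care about
print(article_citation_count("10.7554/eLife.01567"))
```

Article-level numbers like this (or the view and download counts on a journal's own pages) tell you far more about a specific paper than an average-of-everything number stamped on the whole journal.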

So eLife was stripped of this useless, but somehow still widely used, metric. Why? The thing eLife did to have its IF (a respectable 6.4 in 2023) taken away was innovate.

But who can take an IF away, anyway?

The answer is Clarivate, a for-profit company that operates Web of Science, owns the rights to the Impact Factor, and exclusively assigns it each year to journals that meet its quality criteria.

Accused of implementing a radical, transparent, and innovative peer review process that falls outside what the unofficial regulator of good publishing practice considers acceptable, eLife has pleaded guilty (and is proud of it).

  • The journal has always experimented with peer review, being one of the first to accept Registered Reports, where studies are accepted on the strength of their methodology before the results are known, and published even if the experiments fail.

  • Correspondence between authors, reviewers, and editors has always been openly shared alongside the final text of published articles. Reading these conversations can be rather dry, but it is good practice: if your work hinges on the results of a few studies, you might well be interested in the back-and-forth that shaped them into their published form (see here for an example).

The last straw, however, was eLife no longer making accept/reject decisions following peer review. Instead, members of the Editorial Board select articles to be sent for peer review, and authors may choose how much (if any) of the reviewers' feedback they wish to incorporate into the final manuscript.

At the end of the process, the final manuscript is published alongside the full reviewer reports. Readers must then make their own assessment of the quality of the article using the final version and peer review correspondence. This unusual practice was too much for Clarivate to bear.

It sounds crazy, because it is. But all genius needs a little bit of crazy. Regardless of what you think of this “eLife Model,” it's transparent and requires researchers to think about what they are reading and citing, instead of just taking a published article at face value.

The funny thing is that for many years, Clarivate acted as an unofficial regulator, and no one seemed to mind. Clarivate's quality controls have served as the norm, even the standard, for good publishing practice. Some journals that had their IF removed in the last few years may have even deserved the punishment. If anything, we’ve felt grateful there was someone watching over us. But there is something that doesn't sit right about this particular situation.

Should a private company like Clarivate, answerable only to its shareholders, wield such influence over research practices?

When Clarivate stripped the International Journal of Environmental Research and Public Health (IJERPH) of its IF in March 2023, the drop in its output was immediate and brutal. Publications shrank from 1,800 articles a month to only 200. A year and a half later, the IF has not been reinstated and the journal's output has not recovered.

An article written by a former Clarivate employee in the months after the decision was quick to point out:

“IJERPH's decline is a stark reminder that in the absence of an Impact Factor, other journal traits matter little.”

But eLife is not crippled without an IF. According to the journal's website, submissions from China (a country that relies heavily on the IF to assess its academics) have fallen, but otherwise the journal has continued as normal. If anything, the decline in Chinese submissions appears to be offset by large increases in submissions from other countries.

Maybe the journal's track record is so good that it has transcended the need for an IF. eLife has never published an IF on its website and, according to one of its founders, even asked Clarivate not to bother giving it one:

“We want to kill the journal impact factor. We tried to prevent people who do the impact factors from giving us one. They gave us one anyway.” 

Losing an IF is not punishment if you never wanted one in the first place.

To end this week's piece, here's some data that, I think, shows real rigor in publishing doesn't come from a number like the IF (the arithmetic is spelled out after the list). In the last 10 years:

  • Nature has published 47 Retraction Notes and 5,433 total articles, which gives it a retraction rate of 8.65 retractions per 1,000 articles.

  • eLife has published 9 retraction notices and 15,552 total articles, giving it a rate of 0.58 retractions per 1,000 articles.
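
For transparency, those per-1,000 rates are just the raw counts above divided by total output:

$$
\frac{47}{5{,}433}\times 1{,}000 \approx 8.65
\qquad
\frac{9}{15{,}552}\times 1{,}000 \approx 0.58
$$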

Innovation need not, and it turns out has not, come at the expense of rigor.

Did you enjoy your Sunday read?


If you have any questions, feedback or comments on The Scholarly Letter, feel free to reply to this email. We'd love to hear from you!

Spread the Word

If you know more now than you did before reading today's Letter, we would appreciate you forwarding this to a friend. It'll take you 2 seconds. It took us 25 hours to research and write today's edition.

As always, thanks for reading🍏

- The Critic & The Tatler