Doughnut Shaped Scholarship

🍏 Your Thursday Essay, 31st July 2025
A well-researched original piece to get you thinking.
Was this newsletter forwarded to you? Sign up to receive weekly letters rooted in curiosity, care and connection.
Know someone who will enjoy The Scholarly Letter? Forward it to them.
All previous editions of The Scholarly Letter are available on our website.
Hi Scholar,
This was a difficult essay to write. Some of the ideas presented – especially in the first half – may also be difficult to read, particularly at a time when the importance and value of science are under attack, most notably by the Trump administration in the United States.
As much as we Scholars might want to defend science as infallible, the reality is that it isn’t perfect. Like all systems, the scientific system has flaws: parts that don’t work well, things that could, and should, be improved. To admit this is difficult at the best of times. In the face of what we perceive as systematic efforts to undermine science, it can feel nearly impossible.
But however strongly we may feel compelled to defend the status quo, it’s important to resist becoming dogmatic. To worry about the state of our scientific system is to show our care for it. Efforts to critique can be rooted in love. As such, this essay is a critique, yes, but it is, above all, an act of love.
Doughnut Shaped Science
- Written by The Tatler
I stumbled across this remarkable graph while researching the essay On Scholar Led Knowledge Creation.
[Figure: the cumulative number of peer-reviewed scholarly articles published over time]
It appeared in an article titled “Article 50 Million: An Estimate of the Number of Scholarly Articles in Existence,” which estimated that the world’s 50 millionth peer-reviewed scholarly article was published sometime in 2009. Isn’t that astonishing?
The image speaks for itself: the growth of research publications over time is nothing short of extraordinary.
And it hasn’t slowed down. Since 2009, the rate of publication has only increased. If I were to reproduce this graph using more recent data, the curve would be even steeper. According to the Dimensions.ai database, an additional 65 million articles have been published between 2010 and now – an average of around 4 million per year!
As we discussed in The Scholar Manifesto, the presence of a journal article is, to an extent, proof that knowledge has been produced. We cannot escape the fact that publications are the basic unit of knowledge of modern science and have been since its emergence in the 17th century.
So what do we make of this graph, then?
The most obvious interpretation is that it illustrates the enormous success of our collective scientific efforts in creating new knowledge. After all, more scientific knowledge should mean more scientific progress. Each new published article becomes a building block on which further knowledge can be built. It would appear that the larger our knowledge base gets, the easier it becomes to produce new knowledge. This is a textbook case of what economists call scale elasticity.
And judging by the graph above, our scientific system seems to have a high scale elasticity – more knowledge begets even more knowledge, faster.
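For the economically curious, here is a minimal sketch of how this notion is often formalised. It assumes a standard "idea production function" of the kind used in the economics of science; the notation is mine, for illustration, not drawn from the essay's sources:

```latex
% A stylized idea production function: A is the stock of knowledge,
% S is the research effort devoted to growing it, and \varphi is the
% scale elasticity of knowledge with respect to itself.
\dot{A} \;=\; \alpha \, S^{\lambda} A^{\varphi}

% Holding effort fixed: \varphi = 1 gives exponential growth,
% \varphi > 1 gives even faster (hyperbolic) growth,
% and \varphi < 1 gives diminishing returns.
```

Read this way, the graph above is a claim about φ: the steeper the curve, the more our knowledge base appears to feed its own growth.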
Scholar, I wonder what feelings this graph evokes in you?
Take a moment to sit with it and consider what it represents.
As for me, I must confess, it does not fill me with joy or satisfaction. To be honest, I am alarmed. I should say, though, that the source of my concern isn’t simply that we are producing more research over time. I accept that more research is not only necessary but inevitable.
What unsettles me is the rate at which it’s happening:
the extreme rate of acceleration itself.
At first glance, the curve appeared exponential to me. However, a kind stranger on TikTok pointed out that we may actually be looking at hyperbolic growth. And there is quite a big difference between exponential growth and hyperbolic growth.
Exponential growth occurs when the rate of increase is proportional to the size of what is growing. This kind of growth typically relies on external factors, like funding or resource availability, and can slow down if those factors change.
Hyperbolic growth, however, is even more rapid: as the size of what is growing increases, the rate of growth accelerates disproportionately. Theoretically, hyperbolic growth assumes no limiting factors, and the rate of increase in size becomes so fast that it approaches infinity in a finite amount of time.
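To make the difference concrete, here is a minimal mathematical sketch of the two regimes (the notation, with growth constant k and initial stock N₀, is mine, for illustration only):

```latex
% Exponential growth: the rate of increase is proportional to size.
\frac{dN}{dt} = kN
  \quad\Longrightarrow\quad
N(t) = N_0 \, e^{kt}

% Hyperbolic growth: the rate of increase is proportional to size squared.
\frac{dN}{dt} = kN^2
  \quad\Longrightarrow\quad
N(t) = \frac{N_0}{1 - kN_0 t}
```

The exponential curve grows without bound but never actually arrives at infinity; the hyperbolic solution blows up at the finite time t* = 1/(kN₀). That finite-time singularity is precisely what makes the distinction so consequential.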
If the graph above represents hyperbolic growth, then the production of journal articles has become decoupled from the very systems and factors that should be contributing to (or limiting) its growth. It follows, then, that this growth will continue unchecked, reaching infinity no matter what. And if nothing can slow it down, I – as a scholar who cares deeply about knowledge, who feels a duty of care toward it – find myself asking:
Can we sustain this rate of knowledge production without compromising on the quality of the knowledge being produced?
Perhaps it’s a naive question. But let us explore it together, just for this Essay. After all, what is the purpose of all this production if the product is mediocre?
And so, we arrive at the core question that inspired this week’s reflection, one which I invite you to ponder alongside me:
Does science have a limit to how fast it can progress without compromising on quality?
The Edge of Infinity
What I refer to as "science" in this context is a living system that creates knowledge, composed of many interdependent parts: researchers, funders, journals, institutions, technologies, infrastructures, publics, and politics. It adapts, evolves, and expands in response to its internal logics and external pressures.
If we assume that journal articles can be taken as a proxy for the creation of new knowledge, the hyperbolic nature of production depicted above suggests that science can progress (indeed, it is already progressing) faster and faster to infinity. Intuitively, this does not feel right.
Firstly, one can think of several factors that might limit the speed at which science can progress: available funding, the number of researchers, technology, the real or perceived limits of science itself - the list goes on.
Secondly, it feels unlikely that this amazing increase in speed could come without some sort of trade-offs in quality. Factors like time spent on a project, the ability of researchers, the rigor of peer review, and the risk appetite of funders and/or researchers could all plausibly affect quality. McDonald’s produces and sells 6.48 million burgers a day, while a smaller restaurant making their burgers by hand might sell 80. I appreciate this is a crude metaphor, but you get my point.
Scaling the production of anything while maintaining its quality is difficult.
And yet, the possibility of slowing down rarely enters our discourse on knowledge creation. We all conduct ourselves, and our research, as if ever-increasing acceleration is the default. A culture of hyper-productivity and an emphasis on fast research drive our knowledge economy, where time-saving tips and hacks are the order of the day. Tech-bros at the heads of companies producing AI research software prescribe speed through their marketing, proclaiming that research should be fast(er).
Like any complex system, science can (theoretically) grow to a point where its scale outpaces its purpose. As a self-reinforcing system, it may have become too entangled with growth and scaling up – no longer able to sustain the conditions necessary for high-quality knowledge creation.
To measure the speed of science, or the rate of new knowledge creation, is relatively straightforward. It can be (crudely) approximated using indicators such as scientific publication output, patents filed, scientists trained, or cancer survival rates.
Exactly how to define the "quality" of scientific progress, or knowledge, is more of a challenge. But rather than getting lost in definitions of what "quality" science is, I believe we can have a more fruitful discussion by asking instead what "quality" science does.
JWN Sullivan opens his book, The Limitations of Science, by declaring:
Science, like everything else that man has created, exists, of course, to gratify certain human needs and desires.
He goes on to suggest three primary ways in which science meets these needs and desires:
it offers practical advantages,
it satisfies disinterested curiosity,
it provides the contemplative imagination with objects of great aesthetic charm.
These seem, at least to me, reasonable explanations for why we humans continue to devote so much of our collective time and resources to growing scientific knowledge. Sullivan’s three functions offer a workable definition of what "quality" science does: it solves problems, it satisfies our desire to know, and it delights us.
If this is what high-quality science does, then low-quality science is functionally limited in its ability to solve problems, intellectually stagnant and unable to satisfy our curiosity, and aesthetically barren.
I am confident that all three dimensions are appreciated equally by, and will resonate with, readers of The Scholarly Letter. And yet, this framing of quality science has rarely been taken up in earnest, particularly in how science is evaluated or valued in broader society.
The dominant understanding of “quality” science – especially in public and policy discourse – is overwhelmingly utilitarian: science is expected to deliver practical outcomes, solve pressing problems, and drive economic or technological advancement. Because of this dominance, in answering the question at the heart of this week’s essay – ‘Does science have a limit to how fast it can progress without compromising on quality?’ – I’ve had to lean heavily on literature that defines quality in terms of science’s practical effectiveness.
As you well know, Scholar, we are called upon in our work to engage with the discourses that are already there – to speak with what exists, even when we resist its framing. That is why I turn, in what follows, to these mainstream accounts.
I hope you’ll forgive what may at times feel like a one-sided conversation. But when we’ve finished, I hope you’ll consider writing to me with your own sense of the dimensions of scientific quality that remain unsaid or unexplored here.
And so, we return to the heart of the question: how fast can science progress without compromising on quality? We are producing more research faster than ever before and we have not paused to ask what the speed of production might be doing to the knowledge itself.
In what follows, I turn to recent attempts – largely economic – to measure this tension between quantity and quality in scientific production.
Diminishing Returns and No New Ideas
At no point in human history have we spent more on scientific research, employed more professional scientists, or created as many scientific articles as we do today. By many measures, scientific progress is accelerating at an ever faster rate. The size of "science" has never been bigger. And this increase in speed and size has brought tangible benefits: declining global poverty, longer and healthier lives, and amazing technologies that improve our everyday existence.
And yet, a growing body of evidence from the field of economics suggests that, by some measures, research productivity is stalling.
Research productivity is a concept that attempts to gauge the returns of our scientific efforts. It asks questions such as:
How many dollars must be spent on cancer research in order to increase the life expectancy of a cancer patient by 1 year?
How many hours of scientific labour are needed for the processing power of a computer chip to double?
How many publications on wheat seed biology are needed to increase the crop yield of an acre of agricultural land by 5%?
Research productivity is considered high when fewer resources yield better outcomes. And, as mentioned earlier, science is valued primarily for its ability to deliver such outcomes. In an era increasingly shaped by neoliberal logics, “value” is often measured in terms of time saved, money saved, or ideally, both. Research productivity, in this view, becomes a proxy for scientific “quality,” at least along one axis: practical usefulness. While such a view accounts for one of Sullivan’s three suggested functions of science, the other two dimensions – satisfying curiosity and delighting us – are ignored.
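As a back-of-the-envelope illustration (a minimal sketch with entirely hypothetical numbers, not figures from any study discussed here), research productivity can be treated as a simple ratio of outcome gained to effort spent:

```python
# A toy illustration of research productivity as outcome gained per
# unit of research input. All numbers below are hypothetical.

def research_productivity(outcome_gain: float, effort: float) -> float:
    """Outcome improvement per unit of research effort."""
    return outcome_gain / effort

# Suppose one extra year of cancer-patient life expectancy once took
# 2 units of research effort, and now takes 8:
then = research_productivity(outcome_gain=1.0, effort=2.0)  # 0.500
now = research_productivity(outcome_gain=1.0, effort=8.0)   # 0.125

print(f"Productivity has fallen by {1 - now / then:.0%}")   # -> 75%
```

The same advance now "costs" four times the effort, so productivity, on this crude definition, has fallen by three quarters.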
With this in mind, we now turn to analyses that suggest recent declines in research productivity might be explained by changes in the broader ecosystem that enables knowledge creation.
A recent paper titled "Are Ideas Getting Harder to Find?" (by Bloom, Jones, Van Reenen and Webb) argues that research productivity is indeed falling globally. Scientific progress continues, but delivering its practical advantages has become more expensive. It now takes more people, more dollars, and more publications to yield the same advances that once required far less. For example, more of everything is needed to increase the performance of computer chips than was needed in the past. The authors find the same is true for agricultural crop yields and years of life saved from disease. Progress, it seems, is still happening, but it is driven by an ever-increasing scientific effort.
Could this be interpreted as a sign that we are nearing the speed limit that high-quality science can tolerate?

The authors’ explanation is striking: the decline in productivity, they argue, is because new ideas are getting harder to find. The low-hanging fruit has been picked. In other words, they argue that our knowledge base has low scale elasticity: as it grows, achieving further growth becomes more difficult. If they are right, then it appears we are destined for ever-diminishing returns on our scientific efforts.
There are, of course, alternative explanations for the patterns they observe. It is possible that there is simply a limit to how much more we can extend the lifespan of someone afflicted by disease, or how much more optimization a corn seed can undergo. This would mean there is not a lack of ideas but a genuine limit we are bumping up against. Another possibility is that we are producing lower-quality knowledge, weakening the foundation for future discoveries. In this scenario, ideas may indeed be getting harder to come by, but this is due to lower quality inputs which form the raw materials for ideas.
Either way, the observation the authors make regarding reduced productivity seems quite plausible. But as I’m sure you’ve noticed, there are several ways to critique the arguments they present. For the purposes of this discussion, I want to steer us away from critiques that concern the comprehensiveness of their methodology or the cleanliness of their data – the low-hanging fruit, if you will. Even their assumption that exponential growth must be achieved, though relevant, is too easy a target. The weakest aspect of their proposal is the central idea itself: that ideas are getting harder to find.
This claim rests on the assumption that ideas exist in finite supply. That is, if ideas are getting harder to find, it implies a finite number of pre-existing ideas waiting to be discovered in a closed system, with no possibility of spontaneous creation. However, creating knowledge is not only about revealing the unknown in a linear fashion. To say that we build science up brick by brick, as if it were a wall, is overly simplistic. Sometimes the conditions that supported the wall and enabled its existence can change dramatically due to spontaneous, paradigm-shifting knowledge creation – causing it to crumble. The future, and what ideas may be created there, is not just unknown but unknowable in principle: new problems for science to solve are not only hidden from view; the conditions that make their existence possible may not exist yet. Exactly what new knowledge gets created is emergent, shaped by social and material processes.
It is to these social and material processes that facilitate the creation of new knowledge that we now turn.
A compelling alternative explanation comes from Ekerdt and Wu in a paper titled "Self-Selection and the Diminishing Returns of Research". Rather than focusing on the scarcity of ideas, Ekerdt and Wu suggest that declining research productivity is due to difficulty in finding good researchers. Using data from the USA, they argue that while the number of scientists has increased by over 200% since the 1960s, this growth has not produced a proportional acceleration in research productivity. In fact, they estimate that researcher productivity has declined by 49% over that period.
Ekerdt and Wu assume that people choose to become researchers based on their abilities and expected payoffs - those who are best suited (or expect the best return) self-select into science. To put it simply, they conclude that as science has rapidly expanded, more people have chosen to work in research, which has lowered the average ability of the overall population of researchers.
In other words, rapid growth may be diluting talent.
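To see the mechanics of this claim, here is a toy simulation of my own devising. It is not Ekerdt and Wu's estimation strategy, only an illustration of how expanding a selective pool lowers the average of those selected:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# A hypothetical population of one million people, each with an
# "ability" score drawn from a standard normal distribution.
population = rng.normal(loc=0.0, scale=1.0, size=1_000_000)
ranked = np.sort(population)[::-1]  # best first

# If only the top 1% self-select into research, average researcher
# ability is high; expand the sector to the top 3% and the marginal
# entrants drag the average down.
small_sector = ranked[:10_000]   # top 1%
large_sector = ranked[:30_000]   # top 3%

print(f"mean ability, top 1%: {small_sector.mean():.2f}")  # approx. 2.67
print(f"mean ability, top 3%: {large_sector.mean():.2f}")  # approx. 2.27
```

Note that nothing about any individual changes here: the decline in the average is purely compositional, a consequence of widening the gate.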
Now, this argument needs to be handled delicately – and its methodological choices offer plenty of starting points for scrutiny. Most notably, the authors assume that wages are a suitable indicator of scientific ability: the logic goes that better scientists can command higher wages than less able ones. Under this assumption, a greater range in scientists' wages (i.e. some are paid very well and some very poorly, with many in between) could indicate that the average ability of researchers has declined. Using wages as a proxy for scientific ability also rests on the assumption that the academic job market is a fair, competitive market that rewards individual merit. In practice, the wages a scientist can command are affected by factors such as funding structures that privilege certain disciplines over others (e.g. STEM vs. Social Sciences vs. Arts & Humanities), the prestige of the institution that employs them, and the scarcity of permanent positions relative to the number of PhD graduates.
Once again, I would encourage you, Scholar, to resist the urge to dismiss the analysis Ekerdt and Wu present outright. The point is not to endorse their every assumption, but to explore what their framing reveals about how we think about the conditions of research.
The idea that the rapid expansion of the scientific workforce might weigh on productivity is, at the very least, an interesting provocation that deserves a closer look.
It invites us to consider a different kind of question – not about the number of ideas we have left to find, but about the institutional, social, and economic systems in which knowledge is created.
The Conditions of Knowledge Creation
The core argument that Ekerdt and Wu present is that the scientific workforce has expanded faster than the system can sustain high-quality research. If everyone who works in science self-selects, as the authors assume, then it’s worth questioning what expected payoffs are influencing their career decisions.
Could increased funding for science be attracting people with the promise of a well-paid job?
Certainly in my childhood, adults around me – family, teachers and career counsellors – portrayed scientific research as a stable career path with decent prospects. Take, for example, this blog post for prospective science students produced by the University of Queensland:
Granted, this example is skewed toward STEM degrees, but you get the idea.
Or could it be that the social status and desire for prestige that come with being a scientist are stronger primary motivators than a genuine passion for research?
Albert Einstein gave a speech in 1918(!) that is worth quoting at length here, for it elegantly sums up the point that I am trying to make:
In the temple of science are many mansions, and various indeed are they that dwell therein and the motives that have led them thither. Many take to science out of a joyful sense of superior intellectual power; science is their own special sport to which they look for vivid experience and the satisfaction of ambition; many others... have offered the products of their brains on this altar for purely utilitarian purposes. Were an angel of the Lord to come and drive all the people belonging to these two categories out of the temple, the assemblage would be seriously depleted... but there would still be some... left inside.
So perhaps the real reason behind the decline in scientific productivity is not down to ability but motivation. This shifts our diagnosis from who does science to why they choose it in the first place, and why they stay.
We have discussed in more detail in previous essays (The Scholar Manifesto and On Breaking Up With Academia especially) how the nature of academia – the bastion of knowledge creation – affects who succeeds and who does not. To briefly recap, succeeding in academia requires one to be a rapid producer of knowledge, able to outcompete one's colleagues for promotions and funding. Truly novel research, and the risk-taking and difference in thinking that it requires, is discouraged. To quote from The Scholar Manifesto:
A disconnect from knowledge, a lack of care or sense of stewardship for the integrity of knowledge is not only possible, but a logical conclusion of a system that no longer asks what knowledge is or why it matters. Only that it appears.
A system that encourages fast production of knowledge at the expense of risk and novelty – coupled with a sizable proportion of the research population who (though capable and intelligent) see their work primarily as a stable paycheck – is at risk of stagnating.
The problem is not that there are no new ideas to be found but perhaps a lack of incentive to think of and pursue new ideas given that the current system rewards staying within well-worn paradigms. Can we really be surprised by reports of stalling research productivity – as though it came out of nowhere?
How can we expect quality science when we may possibly be experiencing paradigm fatigue: trapped within familiar frameworks, circling the same terrain, unable to break new ground?
Perhaps, just perhaps, this could also explain why, despite the rapid and seemingly explosive growth in publication rates, much of what’s being produced may be iterative rather than innovative. In that sense, the hyperbolic curve that launched this inquiry may appear like a sign of acceleration on the surface, but hides stagnation underneath.
Even if we were to leave aside quality science as defined by research productivity, i.e. science’s practical usefulness to society, what of quality science as evaluated according to the other two dimensions Sullivan suggests: curiosity and aesthetic delight?
For those Einstein imagined would remain in the temple of science – after those motivated by ambition or utility had been driven out – what defines quality science can hardly be reduced to its practical application. For us, it is science’s ability to satisfy our desire to know. Its beauty. Its ability to delight.
These are the features the Critic and I treasure most and why we created a space for them in The Scholarly Letter.
As with “quality,” these are not easy terms to define. What sparks curiosity varies for each of us, as does what we may find beautiful (as we’ve discussed in previous Digests). Mary Douglas, in Thought Styles: Critical Essays on Good Taste, notes that it is often easier to agree on what is ugly or disliked than on what is aesthetically beautiful.
And so I ask you: even if we cannot agree on what beautiful science is, might we agree on what feels ugly in our system?
What gets in the way of curiosity, of delight, of intellectual care?
I suspect that many of you reading this are, in some regard, dissatisfied with the current conditions of knowledge creation. I know I have struggled at times to stay curious, to appreciate the beauty in my work, when faced with the accelerating pressures of modern academia.
I could cite surveys reporting low job satisfaction among academics and early career researchers. But why bother, when I’ve read the letters you write back to us?
Can dissatisfied scientists produce satisfying science?
You, like us, delight in curiosity and the pleasures of knowledge for its own sake – in addition to the benefits it can bring. And I know that for Scholars like us, “high quality” science can never be defined by usefulness alone.
Doughnut Shaped Science
This essay began with a question:
Does science have a limit to how fast it can progress without compromising on quality?
Together, we’ve examined the accelerating pace of knowledge production, the diminishing returns of research investment, and the shifting motivations for entering and staying in science. We asked what quality science does and whether current conditions still allow for it to satisfy our curiosity, solve our problems, or delight our imagination.
But, Scholar, we must now step back even further.
Because the truth is, science is not the only domain where speed and scale are seen as inherently good. We live inside a paradigm – economic, scientific, cultural – where growth is synonymous with progress. If we are not growing, we are failing. If the line on the graph isn’t going up, something must be wrong.
This is the framework that shapes how we think about success: more papers, more funding, more citations, more breakthroughs. But could it be that this paradigm is part of the problem?
It’s something you can’t unsee once you notice it: our overwhelming preference for upward-sloping graphs. The growth of journal articles, human life expectancy, labour productivity, global food production. Each line rising toward infinity.
But what comes after the final data point?
Can growth really continue forever?
Inputs needed for knowledge creation – scientists, funding, prior knowledge – are growing rapidly. And yet, this speed is bringing with it a marked decline in “quality.” Research is costing more and delivering less. Scientists are overworked, underpaid, and increasingly disillusioned. A small but growing proportion of the scientific knowledge we collectively create is at risk of being declared untrustworthy after it is published. The structure of the system cannot withstand the speed at which it’s moving.
More than exceeding a safe speed limit, we may be approaching a harder limit on how fast science can meaningfully go. In our single-minded, doggedly determined effort to break through it, we risk the very system that allows us to progress in the first place collapsing under the strain.
I’m not declaring the end of science, or progress, or ideas. Whether there is a true limit to knowledge itself is a topic for another time. But I do worry that in our eagerness to reach infinity, we are damaging the system that could get us there.
This is where I turn to an unexpected ally: the Doughnut, a framework from the fringes of mainstream economic theory.
In her book Doughnut Economics, Kate Raworth challenges the idea that economic growth is always good, always necessary, always unquestionable. She shows how this obsession with more – more GDP, more extraction, more profit – has come at the expense of the planet that sustains us: it poisons the air we breathe, the water we drink, the soil that grows our food. Growth, she argues, can reach a point where it undermines its own foundation.
As I read her work, I couldn’t help but notice the parallels to science. Just as an economy that grows beyond planetary limits begins to undermine the foundation on which it stands, so too does a science that outpaces its own limits begin to erode the foundations of knowledge itself.
Excessive growth of the scientific system means it now consumes more resources while delivering lower returns. In our quest for ever-faster scientific growth, we are destroying the conditions required for a science that provides practical advantages, rewards disinterested curiosity, and creates knowledge that is beautiful. I fear what remains will be a science that is increasingly functionally limited, intellectually stagnant, and aesthetically barren.
Raworth's alternative framework for envisioning progress is the Doughnut: a circular structure with both a floor (minimum requirements) and a ceiling (maximum limits). Because it was developed in the context of economic growth, the floor of Raworth's Doughnut represents the minimum requirements for human well-being, and the ceiling represents the limits of what the planet's resources can sustain. The space between is the safe and just zone for humanity.
What if we imagined a similar doughnut for science?
What if, like Raworth, we sketched out a boundary for science – space within which it can thrive, and beyond which it begins to fray?
Adapting this model in the context of science might look like this:
[Figure: Raworth's Doughnut adapted for science]
We might imagine, then, a Scholarly Doughnut. Its inner ring, or floor, would represent the minimum conditions required to sustain high-quality scholarship. These are not luxuries or perks, but essential features of a healthy knowledge ecosystem:
Time for reflection, deep thinking, and intellectual care.
Secure, well-supported research careers.
Curiosity-driven inquiry, untethered from narrow performance metrics.
Robust peer review and strong ethics, to safeguard trust and integrity.
Opportunities for risk-taking, for venturing beyond established paradigms.
The outer boundary, beyond which scientific production begins to harm itself, could be demarcated by:
Excessive publication pressures (“publish or perish”)
A bloated research sector where the primary motivations for entering science are monetary or status-based
Systemic burnout and job insecurity
Aversion to risk and novelty
A preference for speed and quick returns on research activity
Excessive commodification of knowledge
The more we chase acceleration and growth for their own sake, the more likely we are to breach this ceiling. And the further we drift from the floor's minimum conditions, the more fragile the foundation becomes.
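For the playfully inclined, the shape of this framework can even be sketched as a data structure. This is an illustrative toy of my own, not a serious measurement proposal; the condition names are shorthand for the lists above:

```python
from dataclasses import dataclass, field

@dataclass
class ScholarlyDoughnut:
    """A toy Scholarly Doughnut: a floor of minimum conditions
    and a ceiling of harmful pressures."""
    floor: set = field(default_factory=lambda: {
        "time for reflection", "secure careers",
        "curiosity-driven inquiry", "robust peer review",
        "room for risk-taking",
    })
    ceiling: set = field(default_factory=lambda: {
        "publish or perish", "monetary/status-driven entry",
        "burnout and insecurity", "risk aversion",
        "speed over substance", "commodified knowledge",
    })

    def in_safe_zone(self, conditions: set, pressures: set) -> bool:
        """High-quality science needs every floor condition met
        and no ceiling pressure present."""
        return self.floor <= conditions and not (pressures & self.ceiling)

doughnut = ScholarlyDoughnut()
print(doughnut.in_safe_zone(
    conditions=doughnut.floor,        # all minimum conditions met
    pressures={"publish or perish"},  # but one ceiling pressure breached
))  # -> False
```

Even this crude sketch captures the essential asymmetry: meeting the floor is not enough if a single ceiling pressure is allowed to dominate.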
This Scholarly Doughnut might provide us with an intellectual and moral framework that seeks to protect the foundations of what makes "high quality" science possible while staying within the limits at which knowledge can progress without degradation. Adopting it to rethink the progress of science might just enable us to, quoting Raworth, "be agnostic about growth". By accepting this Doughnut in our collective imagination of how science progresses, we may just be able to bring about something beautiful:
Degrowth or slowing down will no longer be a cause for alarm, but necessary and natural for a science that is not only productive and useful but also meaningful and fulfilling.
Post Script:
Last week we announced the founding of the Scholar Initiative CIC, set up with the aim of creating a space where scholarship can be done differently. The boundaries of the Scholarly Doughnut as outlined above are under-developed and loosely defined. They may, however, provide an alternative set of principles for the kind of spaces a community of Scholars might want to build for themselves, which is exactly the purpose the Scholar Initiative was created to serve.
If you have any questions, feedback or comments on The Scholarly Letter, drop a comment or write to us at [email protected]. We'd love to hear from you!
Spread the Word
If you know more now than you did before reading today's Letter, we would appreciate you forwarding this to a friend. It'll take you 2 seconds. It took The Tatler 71 hours to research and write today's essay.
As always, thanks for reading🍏