The Scholar's Dilemma: single-use AI, reading less, and poor pay

Your Scholarly Digest 7th August, 2025
Scholarly essentials hand-picked fortnightly for the mindful scholar
Was this newsletter forwarded to you? Sign up to receive weekly letters rooted in curiosity, care, and connection.
Know someone who will enjoy The Scholarly Letter? Forward it to them.
All previous editions of The Letter are available on our website.
Hi Scholar,
It has been 1 year, 6 months and 20 days since the first edition of The Scholarly Letter was sent out on the 17th of January 2024. This is our 74th Letter to you.
We’ve been building The Scholarly Letter slowly, deliberately, as a space for thoughtful, critical writing about knowledge, inquiry, and the deeper meaning of intellectual work. With each letter, we have tried to hold open a space for scholars, para-academics, and curious minds who care about the integrity of thought and the future of scholarship.
And so far, we’ve been able to write freely because we’ve chosen not to rely on funders, advertisers, or universities. We’ve intentionally kept this space independent – free from the editorial pressures that ask us to soften a critique, align with an agenda, or speed up a thought. That independence has mattered. It’s allowed us to ask difficult questions, write from the margins, and speak without compromise.
But to keep going, we need your support.
We want The Scholarly Letter to be reader-supported – sustained not by external interests, but by those who read it, value it, and want it to exist. We ask for your patronage because even a small, slow publication like this one has costs: time, labour, care. All of which we’d like to continue investing – freely, honestly, and without compromise.
So if our Letters have offered you something – a provocation, a sense of companionship in the strange task of scholarship, a renewed sense of scholarly purpose – then we hope you’ll consider becoming a paying supporter.
Your patronage is not just financial, it is philosophical. It says: this kind of thinking about our scholarly world matters.
If you want to, if you can, if you care:
And if you can’t afford to right now, that’s okay. If you’d still like to help, share one of our Letters with a fellow scholar. A Letter is nothing more than words on a page unless someone cares enough to read it.
Thanks for reading,
The Critic & The Tatler
BRAIN FOOD
On Plastic and AI: A Cautionary Parallel
Unpacking the groceries after a recent supermarket run, my mum muttered, “It’s so annoying how they wrap everything in plastic, even a cucumber!”
She’s not wrong. In the UK, as in many parts of the world, just about everything we buy at the supermarket, from mushrooms to mangoes, comes shrink-wrapped, sealed, or bagged in plastic.
Of course, it is hardly a novel observation that plastic is overused and over-relied upon – not just in the UK, but globally – or that this overuse has had dangerous consequences for the environment. What was once, and in some ways still is, a revolutionary invention has infiltrated our lives so completely that it’s now the very thing choking us and the planet. This isn’t a hot take. It’s so commonly accepted I haven’t even bothered to “cite” it.
What struck me though – and why this moment sparked this week’s Brain Food – is the parallel between our use of plastic and our use of AI.
Can we really deny that the invention of generative AI is revolutionary? Some of you might respond with a familiar critique: “It’s just a glorified auto-complete machine.” And while there’s some truth to that, perhaps such dismissals do a disservice to the immense scientific and scholarly work that made this technology possible. Belittling generative AI’s significance may not be the most scholarly response.
The invention of generative AI is just as revolutionary as the invention of plastic. But what went wrong with plastic wasn’t its invention; it was our overuse of it. From bin bags to bottled water, takeaway containers to pens, plastic became the default material of everyday life. Its ubiquity became its downfall.
And so I ask: if we overuse AI – if we turn to it for every little task, for every fleeting thought, if we let it answer our questions before we’ve even sat with them – what will happen to our intellectual lives and abilities? What kind of thinkers will we become?
Much of the discourse online splits into two camps. One insists: Don’t use AI! It’s destroying everything. The other: You need AI! It’ll make your life easier and more productive. These polarised views shouldn’t surprise us; we live in a time of mass polarisation.
But my interest lies not in staking out a side but, as I’ve already asked, in what this means for our intellectual selves.
A recent pre-print claimed that what matters most is when generative AI is used in educational settings: using it at certain stages can have adverse effects on learning outcomes. Thinking more on this finding, I couldn’t help but wonder whether overuse – that creeping, habitual dependence – might eventually dull our capacity to think deeply and independently. Perhaps, then, it’s not just a matter of ‘when’ we use AI, but also ‘how much.’
Still, I don’t think abstinence is the answer either. Just as plastic has enabled remarkable innovations, so too will generative AI no doubt be instrumental for future breakthroughs. The same pre-print mentioned earlier claimed that when participants used AI after relying on their own independent faculties, they showed heightened neural engagement and memory reactivation.
So perhaps the point isn’t to avoid AI entirely, but to resist the urge to reach for it reflexively and use it moderately. ‘Moderate use’ is a vague term, I know, unmeasurable and unscientific. And yet… it might still be the best approach we’ve got.
There is no neat answer here, Scholar. But perhaps that’s the point. Not every question should be answered quickly or definitively. Not every intelligent question deserves artificially certain answers.
What I do know, from musing on this cautionary parallel between plastic and AI, is this: just as we’ve had to reckon with the unintended consequences of plastic, we may one day find ourselves reckoning with the intellectual consequences of how we’ve used AI.
The challenge isn’t to reject it outright, nor to surrender entirely to it, but to remain alert to what it might be doing to us as thinkers.
Just as we’ve begun to replace single-use plastic bags with reusable ones, or swapped out plastic water bottles for steel ones, perhaps it’s time to consider where we might be using AI like plastic: reflexively, habitually, without thought. Before we reach the tipping point of our intellectual well-being, we ought to ask: when does convenience become dependence?
This week’s brain food, then, is an interpellation, a halting, a moment to ask:
How much artificial intelligence are you feeding your brain?
What role do you want AI to play in your intellectual life?
And how will you know when it’s playing too large a role?
NEWS
Study with Me, ChatGPT
In an attempt to counter the perception of ChatGPT as nothing more than a homework machine, OpenAI has launched “Study Mode”. Judging by the video announcing the update posted on YouTube, the setting is an attempt to provide a version of the LLM that acts more like a teacher and less like an overly excited know-it-all who’s dying to tell you everything it can that is vaguely related to your initial question. Study Mode, which must be turned on by users, has been introduced in response to what many see as an epidemic of cheating in universities and colleges around the world: according to OpenAI’s own survey, around 40% of 18-24 year olds in the USA are using AI for academic purposes, including summarising papers, exploring topics and brainstorming. Meanwhile, in the UK, the incidence of AI use that violates academic integrity among university students has risen by 218% since 2023.
Following on from the Brain Food above, this seems a good place to ask: what does responsible AI use look like? Is it even possible? Whether using AI is acceptable, especially in the context of academic work, is, like most debates these days, an incredibly polarised issue. Those who reject it do so with gusto, vowing that they will never (ever!) use AI: it is a technology for those who are lazy and will not think for themselves. For those who enthusiastically embrace it (especially if English is their second or even third language), it is a technology that enables them to communicate their ideas and thoughts more clearly, levelling the academic playing field somewhat. What gets lost in the noise of this debate is who should bear the responsibility for ethical use.
If we were feeling generous, we would say that the introduction of a version of ChatGPT which does not immediately provide the answers (or complete essays) is a step towards responsible use. If we were feeling cynical, we would note that Study Mode can simply be turned off (or never turned on in the first place), and nothing will really change. The existence of the feature is useful, but users must make the choice to turn it on. Ultimately, like a lot of things in life, responsible AI use in academia (or responsible academic conduct in general) is a matter of personal responsibility.
RESOURCE
Don’t Read More, Read Better
At a time of ever-increasing volumes of scholarly publications, the advice “Don’t read more” might seem counterintuitive, if not outright irresponsible. And yet, as Jared Henderson argues in his video essay, the impulse to consume more and more content is rooted in a culture of toxic productivity.
While Henderson is primarily speaking to reading trends within book-focused internet communities – BookTok, Bookstagram, BookTube – we believe his critique extends well beyond them. Excessive consumption isn’t just a problem in popular reading culture; it’s an intellectual habit that’s gripping academia as well.
Yes, critiques of the supply side of knowledge, i.e. the relentless pace of publishing, are warranted. But the demand side deserves scrutiny too. Scholars aren’t necessarily asking for more and more papers, but we’ve absorbed the idea that reading and citing more is always a scholarly good. The result is a compulsive engagement with papers that rarely leaves room for depth.
In our rush to consume more, how often do we actually walk away with something meaningful?
That’s the heart of Henderson’s message: reading more often means engaging more superficially. We agree. But what makes his essay especially valuable is that it doesn’t stop at critique; he offers practical thoughts on how to read better, not just less.
It’s worth a watch, especially if you’re feeling overwhelmed by the reading lists, preprints, and PDFs stacking up in your browser tabs. It might just shift your relationship to knowledge, learning, and your approach to scholarship itself.
OPPORTUNITIES
Funded PhDs, Postdocs and Academic Job Openings
Postdoctoral and Faculty Positions @ Pennsylvania State University, USA: Postdocs: click here / Faculty positions: click here
PhD and Postdoctoral Positions @ TU Delft, Netherlands: PhD positions click here / Postdocs click here
Postdoctoral Positions and Fellowship Opportunities @ King's College London, UK: Postdocs: click here / Fellowships: click here
PhD Positions @ Coventry University, UK: click here
KEEPING IT REAL
Underpaid Then, Underpaid Now
“I don’t get paid enough for this” is a phrase muttered under one's breath every day in universities, laboratories and governmental research departments all around the world. And it was probably uttered at least once by the person employed as a Botanist by the UK Government’s Department of Agriculture and Fisheries in 1939. The job was advertised in Nature and required an honours degree – meaning research experience proven by a thesis – and a minimum of 2 years’ experience, for an annual salary of £155 (roughly equivalent to £8,778.18, or $11,658, in today’s money). The offer was so poor that it prompted the Secretary of the Association of Scientific Workers to write a “Letter to the Editor” in which they shamed the University of Southampton for advertising a similar job with a salary of £50 per year and lamented the poor pay of scientifically qualified individuals more generally. As is usually the case when diving into historical archives or sources, it is striking how little seems to have changed in the 86 years since this was published.
Which section did you enjoy the most in today's Letter?
We care about what you think and would love to hear from you. Hit reply or drop a comment and tell us what you like (or don't) about The Scholarly Letter.
Spread the Word
If you know more now than you did before reading today's Letter, we would appreciate you forwarding this to a friend. It'll take you 2 seconds. It took us 32 hours to research and write today's edition.
As always, thanks for reading🍎