First, let me be very clear: I’m not a fan of blanket “yep, we’re screwed” statements without at least trying to offer a solution or constructive comments.
But yep. We’re screwed. Collectively that is. As individuals, we have hope. As individuals, we have even more: we have the keys to the kingdom.
So… this edition of my newsletter is a bit different. I will offer no solutions, no frameworks, no guides. Just a reflection on, and some venting about, my personal experiences with GenAI in the context of academia and higher education.
Higher Education has a Bumpy Road Ahead
It’s time to talk about the future of higher education.
An article in the Guardian discussed how Australian universities are offering discounted degrees and how applicants outsource parts of the process, such as English-language requirements, to online proxies. This isn't new, of course: people always go where it's easiest. Cheating has always happened. I remember an entrepreneurial buddy of mine completing coding assignments for fellow students in exchange for a 24-pack of beer back when I was a student.
(Obviously, that wasn’t me)
Anyway, people with no English skills are getting degrees from English-speaking institutions.
How can that be?
Generative AI.
While the article stops short of saying point-blank that "people generate their stuff", it's clear many students do. They just do. There's no point sugarcoating the truth. I quite liked this take from another article a few months ago:
Yeah, a novel feature indeed.
Some educators are busy radiating their moral superiority online by saying "hey, let's just pay more attention to students and personal teaching/learning, it's our job anyway", but try doing that when your course has 400+ students.
And all of this sucks.
We know that having essays, code, designs, theses, everything generated leaves students with zero learning. It also leaves teachers toothless against any of this, as there are no tools that can accurately identify generated bs. And in any case, thinking plagiarism checkers will solve this issue is short-sighted. Yes, they work somewhat with the current family of language models, but they're far from perfect, and the burden of proof falls on the teachers.
I don’t know what the future of education looks like. But I know it looks very different because these tools are here to stay.
And we need courage to just call out people using these tools. I found this pretty fitting:
The Good News
Given the pace at which GenAI tools are developing, higher education has to change.
That’s good!
I think any change driven by a set of tools that has *real* (like, real real, not the kind of real as we know from self-help books) potential to explode our output bandwidth is a great one in the long term. Even if it’s a painful one in the short term.
It’s crucial to make students understand we’re all aiming for the same goal here: learning and creating awesome, wonderful new things with less pain for everyone involved, and not just grading exams and collecting credit points.
I know that in many places teachers still believe in pointless tasks and rote memorisation, e.g. of how to code (in computer science), but that mindset has to change now. It's simply too easy to have the ChatGPTs of the world generate all of it.
Collectively, the change will be slow and painful. But as individuals, we can do better.
I failed to account for widespread use of ChatGPT in my course last year, and I need to be better prepared this time in a couple of months when my class starts again.
How can I ensure 300 students don't cheat using ChatGPT? I don't mind if people choose not to learn; that's their problem. But I do care if they break the rules in a way that insults me: easy to spot, yet impossible to prove.
I want students to use AI beneficially for everyone involved. While I don’t have a solution yet, I’m venting my frustration and watching how universities adapt to these new tools that render old teaching methods inefficient.
Personally, I'm going to allow generative AI use but require students to disclose exactly how they used it, including prompts and chat links or screenshots. For essay questions, I might try peer review: if two out of three peers think an answer was AI-generated, we'll have a one-on-one chat.
Let’s see how this works out and hope we’re wiser in a few months about this situation.
The Really Good News
It’s not just education that’s going to change. I think the whole idea of what a publication is will change too. With generative AI, a fake publication can be created by a system with just a handful of different agents. One agent generates the data, another analyses it, a third one finds a suitable scaffolding from related work (through APIs to services like SciSpace), and the fourth one generates the paper.
This development could erode the "publish or perish" culture, because competing on the volume of "low quality" papers will become pointless: it's too easy to run experiments, even real ones, and print them as papers. We will all have to focus on quality.
Coincidentally, just this past week I reviewed the first paper that was clearly ChatGPT-written for the most part and that I still gave an "accept" recommendation.
It was just that good. Usually, the AI-generated papers have been of lower quality overall.
So soon we will have even the good and reputable labs submitting papers written by AI. It's coming. There's no dodging it.
Coincidentally, just this past week I also stumbled upon a review that was 100% written with ChatGPT. And I suspect the tool was prompted just to say good things about the paper.
Truly baffling and impossible to say whether that was what happened or not. I just don’t know what to do here.
Time will tell.
The publication culture has to change.
But I am sure we have to evolve rather than resist too much. I also think that blockchains will find new (in reality, old) uses. I remember when they became popular a few years ago: a bunch of European Union-funded projects quickly appeared, exploring their use for data integrity and authenticity.
The projects died as fast as they were started. Maybe it's time to dust off the findings and start thinking about how to guarantee that a paper is built on valid data?
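To make that concrete, here is a minimal sketch of the building block underneath all of this: publish a cryptographic fingerprint of a dataset so anyone can later check that the data behind a paper hasn't been swapped or doctored. This is my own illustration, not something taken from those EU projects; a real system would anchor the digest on a public ledger or trusted registry rather than the local JSON file used here.

```python
# Minimal sketch: fingerprint a dataset so a paper's claims can be tied to it.
# Illustrative only; a real system would record the digest in an append-only,
# publicly verifiable store instead of a local JSON file.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def dataset_digest(path: Path) -> str:
    """Compute a SHA-256 digest of the dataset file, streaming to handle large files."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def register(path: Path, registry: Path = Path("registry.json")) -> dict:
    """Record the digest with a timestamp in the (stand-in) registry."""
    record = {
        "file": path.name,
        "sha256": dataset_digest(path),
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    entries = json.loads(registry.read_text()) if registry.exists() else []
    entries.append(record)
    registry.write_text(json.dumps(entries, indent=2))
    return record


def verify(path: Path, registry: Path = Path("registry.json")) -> bool:
    """Check whether the file's current digest matches any registered record."""
    if not registry.exists():
        return False
    digest = dataset_digest(path)
    return any(e["sha256"] == digest for e in json.loads(registry.read_text()))
```

The point is not the storage technology; it's that reviewers and readers could verify, cheaply, that the numbers in a paper come from the dataset that was actually registered.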
The field of medicine is already ahead here: it maintains reputable cohorts and data sources that publications draw on. We might see other sciences adopting a similar approach instead of constantly running new experiments that are hard to verify. Reputable institutions and conferences will still thrive; these changes won't destroy them.
The Extremely Good News
All in all, I think we’re living in exciting times!
For the first time in my 20-year academic career, I feel like things are changing fast, and it’s still excitingly unclear if this is real or not.
But it doesn't seem to be just hype. Or at least this hype feels different. It's also a bit scary, as things are actually pretty good right now.
- Yes, the peer review process is broken.
- Yes, the culture is sick in many ways.
- Yes, there’s a lot of competition.
But right now, academia is still a wonderful place with a lot of money floating around to be won. And I say this knowing there's not enough for everyone. That's just how the inconvenient math works. Not everyone can become a postdoc. Not every postdoc can become a tenure-track professor. Not every tenure-track professor can become a full professor, and not every full professor can become a dean or whatever crazy dreams they might have.
That’s just how our career ladder works.
We need to end this on a positive note. Right now, generative artificial intelligence is the best thing to happen to academia and higher education in a long time.
It is a wonderful tool, a great toy, and a liberator of human creativity.
We will all benefit from it, first individually and then, after the transition pains, collectively too.
A lot is happening fast. I spent just 45 minutes today bringing back to life an old research tool of mine that had gone three years without updates. That would have taken me two weeks of debugging before these tools.
Qualitative analysis is a breeze with specialised tools built on OpenAI APIs or even local models that cost nothing but a few gigabytes of bandwidth to download.
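To give a sense of what "built on OpenAI APIs" means in practice, here is a rough sketch of the kind of helper these tools wrap: send an interview excerpt plus a small codebook to a chat model and get suggested codes back. The codebook, prompt, and model name below are placeholders of mine, not a recommendation, and a local model could sit behind the same interface just as well.

```python
# Rough sketch of LLM-assisted qualitative coding, assuming the official
# OpenAI Python client. Codebook, prompt, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CODEBOOK = ["motivation", "tool frustration", "cheating", "assessment design"]


def suggest_codes(excerpt: str) -> str:
    """Ask the model which codes from the codebook apply to an interview excerpt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model would do
        messages=[
            {
                "role": "system",
                "content": "You assist with qualitative coding. "
                           f"Only use codes from this codebook: {', '.join(CODEBOOK)}. "
                           "Return the applicable codes as a comma-separated list.",
            },
            {"role": "user", "content": excerpt},
        ],
    )
    return response.choices[0].message.content


print(suggest_codes("I stopped doing the exercises once I realised ChatGPT could do them for me."))
```

Whatever comes back still needs a human coder to agree with it, of course; the win is in the speed of the first pass, not in outsourcing the judgment.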
And yes, these tools will take away the unfair edge that native English speakers and writers have in academia.
I love it.