
[EA#4] The Do’s and Don’ts of AI in Academia

2024-07-08

ChatGPT.

Oh, ChatGPT.

It’s been so long already…like what 2 years? Almost.

But you truly are such a gift that just keeps on giving that I cannot avoid talking about you.

First, a disclaimer: of course there are GREAT ways to use AI (especially ChatGPT and its equivalents) in all sorts of academic work. But there are ridiculous ways as well, and let me tell you precisely what happens when people offload the important writing tasks to ChatGPT:

  1. People outsourcing writing to ChatGPT will lose confidence in their own writing and thinking skills
  2. They never get to develop those skills further
  3. The people reading the smelly emails and proposals (yes, ChatGPT has a distinct smell, at least for now) don’t trust the sender, even if it was all done with good intentions
  4. The world as we know it comes crumbling down

OK, the last one is a bit exaggerated, but it makes my stance clear. Don’t use ChatGPT to generate your cold emails, proposals, or any writing that matters in human-to-human communication.

Because with humans, first impressions matter.

And what makes this all even worse is, of course, the suffering of the innocent: all the collateral damage. There are people who write fancy English and use expressions like delve, journey, and meticulous in their own natural voice. And that should be just fine. But when people who are fed up with botshit read text like that, they’ll assume it was generated. It doesn’t really matter whether it was or not.

And that erodes trust.

It’s a mess.

Anyway, this all comes from a viral post on Twitter (X) where a bunch of academics ended up arguing about whether a piece of text was generated by AI or not. This one:

Of course it was.

Now, we could get into an endless debate about whether sharing an actual piece of text from a human being was the right thing to do (it probably wasn’t), but let’s not do that here. That’s a discussion to be had elsewhere.

But with this one, there are just too many giveaways, including the font change that happens when people copy-paste stuff from ChatGPT and add to it manually.

To my surprise, there were dozens and dozens of people arguing otherwise: “this is just how people write in specific regions or fields.” Maybe so, but I’d bet the little money I have – or even my left kidney if we go that far – on ChatGPT.

I have to repeat this: readers don’t trust people who send proposals that sound generated. I know this sounds harsh or even unfair, but that’s how it goes. I’ve had plenty of discussions about this with colleagues, some of whom go as far as saying they won’t even read a proposal or a job application where the cover letter smells too much of ChatGPT.

“But it’s wroooong.”

Yes. No. Maybe. Even if there are evaluation guidelines and instructions on assessing things, we still communicate human-to-human here.

And humans err.

The tragedy is what’s happening next. Very soon, someone will release a model that is trivial to instruct to write in the exact tone of the author. And that’s when we lose the game. People will lose confidence in natural human communication, and we will merge with the machine if we’re not intentional about still using our own thinking and writing skills.

  • Maybe it’s evolution.
  • Maybe it’s a disaster.
  • Maybe it’s a nothingburger.

But we know it’s coming, and it’s coming fast. I’ll just leave this here as I find it pretty funny but also nicely representative of what’s happening (source):

The Good Stuff of AI in Academia

So, with all that ranting, let’s talk about the good uses of AI in academia. In the PhD Power Trio, I have a full module with videos on how to do all this in practice, but I’ll give a summary here.

Here’s the rundown. 

Preparation and Principles

Until the audio interaction with these tools improves, we’ll still have to deal with a lot of writing, i.e. prompting. 

And for your work, you’ll be reusing some text snippets a lot, so you might as well make it easy.

Store a few critical text snippets in a separate text file, Notion, any note-taking software, a browser extension such as Text Blaze, or a desktop tool like Alfred on Mac. Anything works as long as you can easily reuse the text in your prompts. You’ll need three snippets (there’s a small sketch of keeping them handy right after this list):

  • Your audience: what’s their field, how much do they know about the topics you write about, and what kind of text do they appreciate?
  • Text style: this partially overlaps with the one above, but I’ve found that using both in prompts (in ChatGPT, for example) produces better output. Should the output text be concise, persuasive, or something else? I never want ChatGPT to write the final text anyway, so I prefer a concise and direct output style.
  • Yourself: who are you? What’s your field, and what are your goals with your writing tasks? For example: you are a PhD student, and your goal is mostly to get diverse viewpoints when brainstorming ideas for an article’s discussion section.
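To make the reuse concrete, here’s a minimal sketch that keeps the three snippets in a small Python file and glues them onto whatever task you’re prompting for. The snippet wording and the build_prompt helper are purely illustrative assumptions of mine; a plain text file or a text expander works just as well.

```python
# A minimal sketch: keep the three standing snippets in one place and
# prepend them to any task description. All wording here is an example,
# not a prescription.

SNIPPETS = {
    "audience": (
        "My audience: reviewers in my field who know the area well but "
        "not my specific study. They appreciate concise, plainly argued text."
    ),
    "style": (
        "Output style: concise and direct, bullet points where possible. "
        "I will rewrite everything myself, so no polished prose is needed."
    ),
    "me": (
        "About me: a PhD student. My goal is to get diverse viewpoints "
        "and structure suggestions, not finished text."
    ),
}


def build_prompt(task: str) -> str:
    """Prepend the three standing snippets to a task description."""
    return "\n\n".join(
        [SNIPPETS["me"], SNIPPETS["audience"], SNIPPETS["style"], task]
    )


if __name__ == "__main__":
    print(build_prompt("Suggest an outline for my literature review on ..."))
```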

These will make sense when we consider the three essential article-writing tasks that ChatGPT (or any of the equally capable alternatives available today) can help us with:

Literature review (but not the way you think). 

Add all the snippets above to the prompt, and then ask for a literature review outline given your paper’s overall topic and idea. Just describe it in your own words, and remember to remind ChatGPT to format the outline according to the conventions of your field. For me, that means a maximum of 3-4 subsections, with no further subsections under those in most cases.

Ask it to include the main ideas in each subsection in bullet points.

When you get the outline, tweak it in a text editor to your liking. Then paste it back into ChatGPT and ask for a research question matching each of the bullet points. This gives you a handful of questions that cover the topics your literature review will address.

Then, in a second step, feed those questions to tools like typeset.io or elicit.com.

“Ahh, now we finally do it the super easy way!”

Alas, no.

There’s no shortcut here, either. But now you have all the papers you need, and then some, for each bullet point we formulated as a question.

And now you can manually rewrite each of the summary sections from the tools mentioned above, keeping only what you like and what you think matches the paper’s grand narrative.
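If you prefer doing the two ChatGPT steps outside the chat window, here’s a rough sketch of the same flow with the official OpenAI Python client (pip install openai). The model name, prompt wording, and placeholder preamble are my own assumptions, not a prescription; the chat UI works just as well.

```python
# A rough sketch of the outline -> research questions flow via the API.
# Assumes OPENAI_API_KEY is set in the environment; the model name is
# just an example.
from openai import OpenAI

client = OpenAI()

PREAMBLE = "<paste your audience / style / about-me snippets here>"


def ask(prompt: str) -> str:
    """Send a single prompt with the standing preamble and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # pick whatever model you actually have access to
        messages=[{"role": "user", "content": PREAMBLE + "\n\n" + prompt}],
    )
    return response.choices[0].message.content


# Step 1: ask for an outline in your field's conventions, ideas as bullets.
outline = ask(
    "My paper is about <describe the topic and idea in your own words>. "
    "Draft a literature review outline with at most 3-4 subsections, no "
    "deeper nesting, and list the main ideas of each subsection as bullets."
)
print(outline)

# Step 2: tweak the outline by hand first, then ask for one research
# question per bullet point to feed into typeset.io or elicit.com.
edited_outline = outline  # replace this with your hand-tweaked version
questions = ask(
    "Here is my edited literature review outline:\n\n" + edited_outline +
    "\n\nFormulate one research question for each bullet point."
)
print(questions)
```

Note that the hand-tweaking step in the middle stays manual on purpose: the outline is yours, the tool only drafts it.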

Improving discussion (and creativity).

For new academics in my field, one of the most difficult parts of a paper is having something interesting to say about the results. And this is pretty important: why did you even do the work if you can say nothing about it?

Explain the study in the introduction. Present a great study design…analyse data and results. Then…crickets. Or, in most cases, people just repeat the results.

Don’t get me wrong here – just repeating the results might get you published just fine. This is especially the case if the results are exciting enough.

But it’ll make for a boring paper. 

So, let’s make this easy with ChatGPT. Here are the steps you can just plug in and follow:

  1. Copy-paste the introduction of the paper into ChatGPT and ask it to study it silently
  2. Copy-paste the related work outline (not the full text) you produced earlier with ChatGPT and ask it to study it silently
  3. Copy-paste a simple summary of your results and ask it to study them silently

Now, after those steps, ask your AI buddy to:

“Now, considering the initial framing of the paper and the related work research areas, suggest 1) connections between the results and the related work areas, 2) societally meaningful and interesting topics that warrant discussion and have a clear connection to my results, and 3) any uncommon, exceptional, or even provocative discussion topics that relate to my results and make the discussion more interesting to a reader”

If you think about it, that’s the purpose of a good discussion. We should all aim to situate the results in related work and then discuss what the work means to the world overall.
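For completeness, here’s what those three “study silently” steps could look like as one scripted conversation, again with the OpenAI Python client. The file names and model are placeholders I’m assuming for illustration; in practice, most people will just paste the three pieces into the chat window.

```python
# A sketch of the three-step "study silently, then discuss" flow as a
# single multi-turn conversation. File names and model are assumptions.
from openai import OpenAI

client = OpenAI()
messages = []


def say(text: str) -> str:
    """Add a user turn, get the model's reply, and keep the full history."""
    messages.append({"role": "user", "content": text})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer


intro = open("introduction.txt").read()            # your paper's introduction
related = open("related_work_outline.txt").read()  # the outline, not full text
results = open("results_summary.txt").read()       # plain-language summary

say("Study this introduction silently and reply OK:\n\n" + intro)
say("Study this related work outline silently and reply OK:\n\n" + related)
say("Study this results summary silently and reply OK:\n\n" + results)

print(say(
    "Now, considering the initial framing of the paper and the related "
    "work research areas, suggest 1) connections between the results and "
    "the related work areas, 2) societally meaningful topics that warrant "
    "discussion and connect clearly to my results, and 3) any uncommon or "
    "even provocative discussion topics that make the discussion more "
    "interesting to a reader."
))
```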

Fixing our silly mistakes (because we all make them).

OK, this one is perhaps the most straightforward, but it’s still worth doing. You know how often people send papers to peer review without doing what the paper promises in the introduction?

Some typical mistakes are articulating research questions without ever looping back to them, setting an aim that is nowhere near what the paper even tries to do, or making claims that the results do not back up.

So, ask ChatGPT to spot any inconsistencies:

  1. Feed it the introduction
  2. Feed it the results
  3. Feed it the discussion

At each stage above, just ask it to “silently analyse the content I will paste below and say OK when you’re ready to receive more instructions.”

After that, ask it to spot errors:

“Given what you now know about my paper, check for any inconsistencies you might spot, and recommend any additions to the discussion and introduction that you think the results warrant. Don’t write the text for me, just explain in bullet points what might be missing or what would make the paper better streamlined.”

It will find any obvious mistakes and, more often than not, give you suggestions on what to add. 
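Here’s a compact sketch of the same check done in one shot: the three sections go in as a single message, and the model is asked only for problems, not prose. The file names and model are again assumptions for illustration.

```python
# A compact variant of the consistency check: one message with the three
# sections, one request for inconsistencies only. File names and model
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

paper_parts = ""
for name in ("introduction.txt", "results.txt", "discussion.txt"):
    paper_parts += f"\n\n=== {name} ===\n" + open(name).read()

check = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Silently analyse the paper sections below. Then check for any "
            "inconsistencies: research questions never answered, aims the "
            "paper does not actually pursue, or claims the results do not "
            "back up. Don't write the text for me; list in bullet points "
            "what is missing or what would make the paper better "
            "streamlined.\n" + paper_parts
        ),
    }],
)
print(check.choices[0].message.content)
```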

What is AI Transforming Into?

Here’s what I already hear:

“But Simo, your prompts are not perfect.”

The first principle of using these tools now is not to over-rely on specific prompts. Some time ago, though, you needed to be more careful. ChatGPT 3.5 was a bit like that: you had to have precise megaprompts to get things done. But the way they work now is…you just talk to them, like a friend.

You don’t need to instruct a human word by word. It’s the same with AI now. Just talk to them like they’re a good conversational partner working on your paper too! A co-author of sorts. 

And speaking of co-authorship, here are the obvious disclaimers:

  • Always check your field’s and venue’s standards on AI use and disclaimers. Be transparent about how you use AI.
  • Always check your institutional policies too, and adhere to those.
  • Always use AI tools that don’t steal your data and comply with the policies you must comply with.

ChatGPT, for example, has a (somewhat hidden) checkbox that ensures it won’t use your data to train itself. Maybe you can trust that? Maybe not. Your call.

But there are great local alternatives you can explore too, using tools like Ollama or LM Studio. In any case, these tools are moving so fast that I suggest not getting stuck tool-hopping; just keep talking to the AIs and see what works for your purposes without breaking your institution’s and field’s rules too much.
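If data policies push you toward local models, the same chat pattern works there as well. Here’s a minimal sketch with the Ollama Python package (pip install ollama, then pull a model); the model name is just an example of whatever you have downloaded locally.

```python
# A minimal sketch of the same chat pattern against a local model served
# by Ollama. Assumes the Ollama app is running and "llama3" (or any other
# model of your choice) has been pulled locally.
import ollama

reply = ollama.chat(
    model="llama3",  # replace with whatever local model you have pulled
    messages=[{
        "role": "user",
        "content": "Suggest three discussion angles for a study on ...",
    }],
)
print(reply["message"]["content"])
```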

Try the process above – it’s pretty helpful, especially for the discussion!

About the author 

Simo Hosio  -  Simo is an award-winning scientist, Academy Research Fellow, research group leader, professor, and digital builder. This site exists to empower people to build passion projects that support professional growth and make money.
