AI scientists have a problem: AI bots are reviewing their work
ChatGPT is wreaking chaos in the field that birthed it.
In San Francisco, where I live, the artificial-intelligence hype cycle is inescapable. I leave my house and a driverless Waymo waits for me to cross the street. The skyline is filled with billboards for startups that are supposedly using AI to revolutionize health care, or clean energy, or HR software. One of those homegrown startups, of course, is OpenAI, the maker of ChatGPT. Since ChatGPT was released on November 30, 2022, no one can be entirely sure that what they read was written by a human.
I’ve been curious about how generative-AI tools are altering the nature of scholarship, and recently, I came across some studies about how a growing number of peer reviews of academic manuscripts seem to be substantially AI-written. (They’re “commendable”! “Meticulous”! “Intricate”!) Then I realized these findings applied to papers in AI — the field that, of course, invented ChatGPT in the first place.
Naturally, I wondered: How do AI scientists feel when their colleagues, asked to evaluate their hard work, clearly phone it in? As you might imagine, they’re annoyed … but they get that they kinda brought this on themselves. They also admit that they like using ChatGPT sometimes, too.
From my newest piece in The Chronicle of Higher Education (free to read if you make an account with your email):
Already, there are signs that AI evaluations could be corrupting the integrity of knowledge production. Computer-generated feedback may slightly boost a manuscript’s chance of approval, and uploading someone’s unpublished data into a chatbot in order to produce a review could amount to a breach of confidentiality policies. These are problems without easy solutions, ones that organizers of computer-science conferences — the main venues for publishing research in that field — are just beginning to acknowledge.
And as one scientist told me:
“I think AI tools coming out at the same time that peer review is becoming unsustainable is going to accelerate its demise.”
You can read the full story here.
I’ll be continuing to look for stories about how AI is changing the enterprise of scholarly research, for better and for worse, so if you have any ideas in that vein, feel free to shoot me an email. If you use ChatGPT to write it, I’ll forgive you.