Generative AI Should Not Replace Thinking at My University

I’m dismayed that any academic institution would encourage us to use chatbots 
rather than our intellects.

By Douglas Hofstadter<https://www.theatlantic.com/author/douglas-hofstadter/>
June 22, 2023

I used to drive a stick-shift car, but a few years ago, I switched over to an 
automatic. I didn’t mind relinquishing the control of gear-changing to a 
machine. It was different, however, when spell checkers came around. I didn’t 
want a mechanical device constantly looking over my shoulder and automatically 
changing my typing, such as replacing hte with the. I had always been a good 
speller and I wanted to be self-reliant, not machine-reliant. Perhaps more 
important, I often write playfully, and I didn’t want to be “corrected” if I 
deliberately played with words. So I made sure to turn off this feature in any 
word processor that I used. Some years later, when “grammar correctors” became 
an option with word processors, I felt the same instinctive repugnance, but 
with considerably more intensity, so of course I always disabled such devices.

It was thus with great dismay that I read the email that just arrived from 
University Information Technology Services at Indiana University, where I have 
taught for several decades. The subject line was “Experiment with AI,” and to 
my horror, “Experiment” was an imperative verb, not a noun. The idea of the 
university-wide message was to encourage all faculty, staff, and students to 
jump on the bandwagon of “generative AI tools” (it specifically cited ChatGPT, 
Microsoft Copilot, and Google Bard) in creating our own lectures, essays, 
emails, reviews, courses, syllabi, posters, designs, and so forth. Although it 
offered some warnings about not releasing private data, such as students’ names 
and grades, it essentially gave the green light to all “IU affiliates” to let 
machines hop into the driver’s seat and do much more than change gears for them.

Here is the key passage from the 
website<https://kb.iu.edu/d/biit>
 that the bureaucratic email pointed to—and please don’t ask me what “from a 
data management perspective” means, because I don’t have the foggiest idea:

From a data management perspective, examples of acceptable uses of generative 
AI include:

• Syllabus and lesson planning: Instructors can use generative AI to help 
outline course syllabi and lesson plans, getting suggestions for learning 
objectives, teaching strategies, and assessment methods. Course materials that 
the instructor has authored (such as course notes) may be submitted by the 
instructor.

• Correspondence when no student or employee information is provided: Students, 
faculty, or staff may use fake information (such as an invented name for the 
recipient of an email message) to generate drafts of correspondence using AI 
tools, as long as they are using general queries and do not include 
institutional data.

• Professional development and training presentations: Faculty and staff can 
use AI to draft materials for potential professional development opportunities, 
including workshops, conferences, and online courses related to their field.

• Event planning: AI can assist in drafting event plans, including suggesting 
themes, activities, timelines, and checklists.

• Reviewing publicly accessible content: AI can help you draft a review, 
analyze publicly accessible content (for example, proposals, papers and 
articles) to aid in drafting summaries, or pull together ideas.

I was completely blown away when I read this passage. It seemed that 
the humans behind this message had decided that all people at this institution 
of learning were now replaceable by chatbots. In other words, they’d decided 
that ChatGPT and its ilk were now just as capable as I myself am of writing (or 
at least drafting) my essays and books; ditto for my lectures and my courses, 
my book reviews and my grant reviews, my grant proposals, my emails, and so on. 
The tone was clear: I should be thrilled to hand over all of these sorts of 
chores to the brand-new mechanical “tools” that could deal with them all very 
efficiently for me.

I’m sorry, but I can’t imagine the cowardly, cowed, and counterfeit-embracing 
mentality that it would take for a thinking human being to ask such a system to 
write in their place, say, an email to a colleague in distress, or an essay 
setting forth original ideas, or even a paragraph or a single sentence thereof. 
Such a concession would be like intentionally lying down and inviting machines 
to walk all over you.

It’s bad enough when the public eagerly plays with chatbots, seeing them as 
mere amusing toys when, despite their cute-sounding name, they are in fact a 
grave menace to our entire culture and society. It’s even worse when people 
who are employed to use their minds in creating and expressing new ideas are 
told, by their own institution, to step aside and let their minds take a back 
seat to mechanical systems whose behavior no one on Earth can explain, and 
which are constantly churning out bizarre, if not crazy, word salads. (In 
recent weeks, friends sent me two different “proofs” of Fermat’s Last Theorem 
created by ChatGPT, both of which made pathetic errors at a middle-school 
level.)

When, many years ago, I joined Indiana University’s faculty, I conceived of AI 
as a profound philosophical quest to try to unveil the mysterious nature of 
thinking. It never occurred to me that my university would one day encourage me 
to replace myself—my ideas, my words, my creativity—with AI systems that have 
ingested as much text as have all the professors in the whole world, but that, 
as far as I can tell, have not understood anything they’ve ingested in the way 
that an intelligent human being would. And I suspect that my university is not 
alone in our land in encouraging its thinkers to roll over and play brain-dead. 
This is not just a shameful development, but a deeply frightening one.

https://www.theatlantic.com/ideas/archive/2023/06/generative-artificial-intelligence-universities/674473/