Before AI, there was code completion in IDEs that would suggest alternative 
functions one could use. When it was introduced, I was suspicious of it 
because it seemed to imply that developers could not remember their own 
code. When I’m working on code a lot, I do remember the details and don’t 
need to be prompted. For programming languages I learned early in life, like 
C, the grammar is present as a motor skill. 

Later, when I adopted functional programming, I also adopted the Wadler 
philosophy [1] and sought to make types reveal more meaning. Autocompletion 
makes more sense in this context, especially with languages like Lean, where 
the types can be rich in meaning. 
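
As a toy illustration of what I mean (my own sketch, with made-up names, not 
anything from this thread), consider how much a Lean signature can pin down 
before a single line of the body is written:

    -- Toy sketch, Lean 4.  `Fin n` is the type of naturals strictly below n,
    -- so the signature alone promises that the result is a valid index.
    -- Any completion the editor suggests for the body has to discharge that
    -- obligation, which is what makes the suggestion meaningful.
    def rotate (n : Nat) (i : Fin n) (k : Nat) : Fin n :=
      ⟨(i.val + k) % n, Nat.mod_lt (i.val + k) i.pos⟩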

Generative AI goes a step further. Drafting code is automatic, but one must 
thoroughly read the code to see whether there is a misunderstanding, or 
whether underspecification is creating a mess. To some extent debugging is 
automatic too. However, I sometimes find my intuition is quite different 
from the AI's tactics. In part it is declarative programming, but it is also 
sampling the space of programs in the vicinity of an idea. Given the reality 
of context window limits, using AI for programming is iterative over 
multiple sessions. Thus one communicates not just with natural language but 
with the evolving code itself -- code that can have informative types and 
even carry proofs (a toy sketch below). The weirdest thing about using AI is 
that it has no opinions. Claude will rewrite code without asking (seemingly 
having no self-control), but it will not confront you like a frustrated 
colleague might. It is happy to let you make a mess provided its sense of 
idiomatic code patterns is satisfied. 
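
On the point about code carrying proofs, here is the kind of thing I have in 
mind -- purely a hypothetical sketch, not code from any actual session:

    -- Hypothetical sketch, Lean 4.  The subtype bundles the result with a
    -- proof that it never exceeds the input, so a later session can trust
    -- the type instead of re-reading the implementation.
    def halve (n : Nat) : { m : Nat // m ≤ n } :=
      ⟨n / 2, Nat.div_le_self n 2⟩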

[1] https://people.mpi-sws.org/~dreyer/tor/papers/wadler.pdf 

From: Friam <[email protected]> on behalf of steve smith 
<[email protected]>
Date: Tuesday, February 11, 2025 at 8:22 AM
To: [email protected] <[email protected]>
Subject: Re: [FRIAM] genai and critical thinking 


On 2/11/25 8:20 AM, glen wrote:
> That's a fraught question. First, editors need not have been writers 
> before they became editors. But barring that, my answer would be "No". 
> But they prolly *do* lose facility for writing, the ease with which 
> they write. It's simple reinforcement. Use it or lose it.
>
> E.g. I can still code in Ada. But I'm way worse at it now than I was 
> when I did it multiple days per week. A better question might be: Do 
> editors lose their ability to read? And that question bears an even 
> deeper problem ... something akin to Gell-Mann amnesia ... and I blame 
> it for me losing my taste for reading for *fun*. Up until ~1998 or so, 
> I did a lot of reading for fun. It was fun to read. Now reading is 
> merely a means to some other end. Make something your job and it 
> ceases to be a hobby. So even if editors retain their ability to read, 
> the *quality* of their reading must change in deep ways.

I think this is the more salient aspect of the general question... and it 
may even be "generational", in the sense not that readers/writers lose their 
skills through atrophy, but that they lose their "taste" or "facility" for 
it and a new generation simply *never acquires it*. I never acquired a 
significant ability or facility for writing longhand/cursive, and I do think 
it limits me and how I think/feel/perceive the world.

"kids these days" who have never read anything longer than a short 
paragraph on the back of a cereal box (Boomers/X) or a Tweet 
(millenial/Z) probably do perceive the world somewhat differently than 
those of us who may still read novels, entire non-fiction books, and 
long-form journalism. I myself have atrophied in this regard... I 
tend to look to YouTube and Audiobooks (and Podcasts) to consume what I 
once looked to full-length printed books.

But to your (glen's) point, there are qualitative thresholds which are 
perhaps more salient than the quantitative ones...


>
>
> On 2/11/25 6:57 AM, Marcus Daniels wrote:
>> Do editors lose the ability to write?
>>
>>> On Feb 11, 2025, at 6:43 AM, glen <[email protected]> wrote:
>>>
>>> The Impact of Generative AI on Critical Thinking: Self-Reported 
>>> Reductions in Cognitive Effort and Confidence Effects From a Survey 
>>> of Knowledge Workers
>>> https://www.microsoft.com/en-us/research/uploads/prod/2025/01/lee_2025_ai_critical_thinking_survey.pdf
>>>
>>> It really doesn't seem that different to me from numerical analysis. 
>>> It shifts the work from doing the computing to declaring what the 
>>> computing should do.
>
> 


