[email protected] said on Thu, 29 Jan 2026 15:28:10 -0500

>Anyone want to discuss?

Yes.

I use ChatGPT and Google's AI answers and find them wonderful TOOLS.
TOOLS, not SOLUTIONS.

Anybody in 2026 who lets an LLM code up a 10K-line system,
in other words, uses the LLM as a SOLUTION, is walking the highway to
heartache. 2026 LLMs make mistakes, and each mistake requires extensive
debugging. And if the LLM runs the debugging session, what a
catastrophe. I know these things from personal experience.

But an LLM makes a wonderful TOOL. The lowest-hanging fruit is using it
as an instant manual that does the lookup for you, saving minutes on
every usage. I've also had discussions with ChatGPT about the basic
architecture of a system I'm putting together, and then had ChatGPT
suggest code for small, standalone components and modules. At each
step, I tech edit ChatGPT's suggestions, which is easy, because I
demand thin interfaces (a sketch of what I mean follows below).

When things go wrong, I debug, sometimes asking ChatGPT for a good
diagnostic test to rule out a big chunk (ideally half) of the remaining
root cause scope, and sometimes asking ChatGPT what a diagnostic test
result indicates. The second sketch below shows that half-splitting
idea in code. Bottom line: I rule the architecture, I rule the coding,
I rule the debugging. With all three, I use ChatGPT as an assistant,
much like a lawyer would use a legal secretary.
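
Here's what I mean by a thin interface: one job, one small documented
function, no hidden state, so a human can verify the whole thing in
minutes. This is a minimal sketch in Python; every name in it is made
up for illustration, not taken from any real project:

# thin_interface_demo.py -- hypothetical example, names invented.
# One job behind one small function: easy to tech edit, easy to test.

def parse_size(text: str) -> int:
    """Convert a size string like '10K' or '3M' into bytes.

    Raises ValueError on anything it doesn't understand, rather
    than guessing, so callers can't silently receive garbage.
    """
    multipliers = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    text = text.strip().upper()
    suffix = text[-1] if text and text[-1] in multipliers else ""
    number = text[:-1] if suffix else text
    if not number.isdigit():
        raise ValueError(f"unparseable size: {text!r}")
    return int(number) * multipliers.get(suffix, 1)

if __name__ == "__main__":
    # The "trust but verify" step: check it before trusting it.
    assert parse_size("10K") == 10240
    assert parse_size("3M") == 3 * 1024 ** 2
    assert parse_size("512") == 512
    print("parse_size: all checks pass")

A component like this is trivial to review precisely because its whole
contract is one documented function.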
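
And here's the half-splitting diagnostic idea in code, again a minimal
sketch with made-up names. It assumes you can line the suspect
territory up as an ordered list of stages, that the input to the first
stage is good, that the final output is bad, and that once the data
goes bad it stays bad. Under those assumptions, every probe rules out
half of what's left:

# bisect_stages.py -- hypothetical sketch of divide-and-conquer
# debugging over an ordered pipeline.

def first_bad_stage(stages, data_ok_after):
    """Binary-search an ordered pipeline for the first corrupting stage.

    data_ok_after(stage) must report whether the data is still good
    after that stage runs. Assumes monotonic failure: once bad,
    always bad downstream.
    """
    lo, hi = 0, len(stages) - 1      # culprit lies in stages[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if data_ok_after(stages[mid]):
            lo = mid + 1             # stages up through mid are exonerated
        else:
            hi = mid                 # culprit is at mid or earlier
    return lo

if __name__ == "__main__":
    # Fake five-stage pipeline with a planted culprit at stage 3.
    stages = ["ingest", "normalize", "dedupe", "enrich", "report"]
    def ok(stage):
        return stages.index(stage) <= 2   # data good through stages[2]
    assert first_bad_stage(stages, ok) == 3
    print("culprit:", stages[3])

Five stages take at most three probes; fifty stages take six. That's
the payoff of demanding a diagnostic test that splits the remaining
root cause scope in half.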

One more thing: When it comes to AI, I channel Ronald Reagan: Trust,
but verify. Those who follow this principle don't get into trouble.

SteveT

Steve Litt 

http://444domains.com