On Thu, Jan 15, 2026 at 10:34:33PM +0530, Aryan Karamtoth wrote:
> I think one thing that people often forget is that you cannot hold an LLM accountable if anything breaks. If a human makes some kind of mistake in the code and someone points it out, that person can easily recall the steps they took, discuss them with the others, and fix the bug.

If I commit AI-generated code, I have reviewed and tested it, and I would probably have written worse code myself. Of course I can fix bugs in code that was AI-generated on my prompting.

I sometimes use AI for the boring part of coding, like "give me a Python class with the following internal values, getter and setter methods etc.", and happily take the generated slop. Another place where I happily accept slop is command line parsing. A lot of the small helpers I have written for myself in the last months would never have been written if I didn't have the LLM coding helper.
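The kind of boilerplate meant here might look like the sketch below (hypothetical example; the class name, field names, and CLI flags are made up for illustration):

```python
import argparse


class Sensor:
    """Plain data holder with getter/setter boilerplate.

    Hypothetical example of the tedious code an LLM can churn out:
    private attributes wrapped in property getters and setters.
    """

    def __init__(self, name, threshold=0.5):
        self._name = name
        self._threshold = float(threshold)

    @property
    def name(self):
        return self._name

    @name.setter
    def name(self, value):
        self._name = value

    @property
    def threshold(self):
        return self._threshold

    @threshold.setter
    def threshold(self, value):
        # Coerce to float so string input from a CLI still works.
        self._threshold = float(value)


def parse_args(argv=None):
    """Command-line parsing boilerplate via the standard argparse module."""
    parser = argparse.ArgumentParser(description="Example helper tool")
    parser.add_argument("--name", default="probe", help="sensor name")
    parser.add_argument("--threshold", type=float, default=0.5,
                        help="alert threshold")
    return parser.parse_args(argv)
```

Nothing clever going on, which is exactly the point: it is mechanical code that is cheap to review and test.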

And it is really nice to have a machine tell me whether it's else, else if, elsif or elif in the language I am using today. That's a huge time saving.

Greetings
Marc

--
-----------------------------------------------------------------------------
Marc Haber         | "I don't trust Computers. They | Mailadresse im Header
Leimen, Germany    |  lose things."    Winona Ryder | Fon: +49 6224 1600402
Nordisch by Nature |  How to make an American Quilt | Fax: +49 6224 1600421
