the problem i have with LLMs is that,
historically, humans have written code an order
of magnitude slower than they can read it.

LLMs completely invert that.

that in itself might not look like a problem,
but we are now being forced to wade through
LLM-generated code, disingenuously presented
as serious code, in a world where most code
already _sucks_. this is outright disrespectful.

LLM-generated code is empty. when a human
writes code, you can ask them what they were
thinking. they had a theory of the problem,
they made tradeoffs, some were wrong, but the
reasoning is there to interrogate.

you can't ask an LLM what it was thinking, because
the reasoning doesn't exist. so who's responsible
for the code it writes? nobody? well, scale that
up and all we will get is codebases that still
compile, still run, but are far beyond
humanity's collective ability to understand.

now, when it's being used upfront like this, it
certainly earns my respect, because it's being
used _honestly_ as a means of prototyping and
previewing a feature that might or might not
be worth building, especially when we are literally
being told _not_ to read the generated slop.

what does concern me is us getting too
comfortable with these prototypes, never
cleaning them up and reimplementing them properly,
with reason.
