The point is that you don't need a watertight spec anymore, because LLMs 
(ideally) have the same ability to fill in the implicit context that humans 
rely on when communicating ideas.

> AI is not intelligence.

Humans are, to a very large part, also just "plagiarism machines". There is a 
reason it took thousands of years to get to where humanity is now, even though 
the human brain of 50,000 years ago was pretty much the same as it is today.

There are many capable software engineers who are hardly creative at all.

> I see that it either contradicts itself or something else specified 
> before... then it's back to the drawing board. I don't see how an AI can do 
> that correctly, or with how many bugs?

What you're doing as a human is basically tree search, where the moves you 
explore first are chosen by your intuition. While there is some research in 
that direction (e.g., ["AlphaZero-Like Tree-Search can Guide Large Language 
Model Decoding and Training"](https://arxiv.org/abs/2309.17179)), I think there 
is still a lot of untapped potential. As far as I know, ChatGPT, Claude, and 
the other chatbots currently do the human equivalent of going with a gut 
reaction.
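To make the analogy concrete, here is a minimal sketch of policy-guided tree 
search in the AlphaZero style. Everything in it is an assumption for 
illustration: `policy_prior` and `evaluate` are hypothetical stand-ins for an 
LLM's next-step probabilities ("intuition") and a value estimate, not any real 
model API, and `legal_moves` is a toy move generator.

```python
import math
import random

class Node:
    def __init__(self, state, prior):
        self.state = state      # partial sequence of moves/tokens (a tuple)
        self.prior = prior      # "intuition": prior probability from the policy
        self.children = {}      # move -> Node
        self.visits = 0
        self.value_sum = 0.0

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def policy_prior(state, moves):
    """Hypothetical stand-in for an LLM policy: a prior per candidate move."""
    weights = [random.random() for _ in moves]
    total = sum(weights)
    return {m: w / total for m, w in zip(moves, weights)}

def evaluate(state):
    """Hypothetical stand-in for a value estimate of a partial solution."""
    return random.random()

def legal_moves(state):
    """Toy move generator: three abstract continuations, depth-limited."""
    return [] if len(state) >= 5 else ["a", "b", "c"]

def select_child(node, c_puct=1.5):
    # PUCT rule: exploit high-value children, but let the prior (the
    # "gut reaction") steer exploration toward intuitively promising moves.
    def score(move, child):
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return child.value() + u
    return max(node.children.items(), key=lambda mc: score(*mc))

def search(root_state, simulations=100):
    root = Node(root_state, prior=1.0)
    for _ in range(simulations):
        node, path = root, [root]
        # Selection: walk down, preferring prior-weighted promising moves.
        while node.children:
            _, node = select_child(node)
            path.append(node)
        # Expansion: ask the "policy" which continuations look plausible.
        moves = legal_moves(node.state)
        if moves:
            priors = policy_prior(node.state, moves)
            for m in moves:
                node.children[m] = Node(node.state + (m,), priors[m])
        # Evaluation + backup: score the leaf and propagate it upward.
        leaf_value = evaluate(node.state)
        for n in path:
            n.visits += 1
            n.value_sum += leaf_value
    # Commit to the most-visited first move, as AlphaZero-style agents do.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

if __name__ == "__main__":
    print("best first move:", search(root_state=()))
```

Plain greedy decoding corresponds to always taking the highest-prior move; the 
search above is the "go back to the drawing board" loop made explicit, since 
low-value branches get revisited less and alternatives get explored.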
