There was a recent thread about this on python-list, including someone's 
experiments.  Here's what I wrote -

" People need to remember that ChatGPT-like systems put words together the 
way that many humans usually do.  So what they emit usually sounds smooth 
and human-like.  If it's code they emit, it will tend to seem plausible 
because lines of code are basically sentences, and learning how to 
construct plausible sentences is what these systems are built to do. That's 
**plausible**, not "logical" or "correct". 

The vast size of these systems means that, compared with earlier, smaller 
systems, they can take a much larger context into account when figuring out 
what words to place next. 

But consider: what if you wrote code as a stream-of-consciousness process?  
That code might seem plausible, but why would you have any confidence in 
it?  Or to put it another way: what if most of ChatGPT's exposure to code 
came from Stack Overflow archives, where much of the code appears in 
questions precisely because it didn't work? 

On top of that, ChatGPT-like systems know neither your requirements nor the 
reasons behind your requests.  They only know that when other people put 
words and phrases together the way you did, the responses that followed 
tended to sound like what the chatbot emits next.  It's basically 
cargo-culting its responses. 

Apparently researchers have been learning that the more parameters a 
system like this has, the more likely it is to learn how to emit responses 
that the questioner likes - what the literature calls sycophancy. 
Essentially, it could become the ultimate yes-man! 

So there is some probability that the system will tell you interesting or 
useful things, some probability that it will tell you what it thinks you 
want to hear, some probability that it will tell you incorrect things that 
other people have repeated, and some probability that it will confabulate - 
simply make things up. 

If I were going to write a novel about an alternate history, I think that a 
ChatGPT-like system would be a fantastic writing assistant. Code? Not so 
much." 

On Thursday, April 13, 2023 at 8:15:59 AM UTC-4 Edward K. Ream wrote:

> On Thursday, April 13, 2023 at 12:15:17 AM UTC-5 Félix wrote:
>
> Here I am (simple screenshot below), working on leojs, converting the 
> stripBOM function from leoGlobals.py from Python to TypeScript.
>
> Have you tried? Any thoughts or particular productivity tips to share?
>
>
> My impression is that ChatGPT does well on small tests. I wouldn't trust 
> it with larger tasks.
>  
>
> (*I eventually plan to use Leo to organize and automate calls to its 
> API, to make some kind of AGI-assistant experiment.*)
>
>
> ChatGPT has spawned many creative ideas, including yours. Please let us 
> know what happens :-)
>
> Edward
>
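
As an aside on the stripBOM conversion Félix mentions above: here's a rough 
sketch of what a hand conversion to TypeScript might look like. This is only 
my guess under stated assumptions - I'm assuming the Python original 
compares a string's leading bytes against the standard BOM signatures and 
returns the implied encoding plus the data with the BOM stripped; the names 
and the Uint8Array signature here are mine, not leojs's or leoGlobals.py's.

    // Hypothetical sketch, not the actual leojs code. Assumes the Python
    // stripBOM checks leading bytes against known BOM signatures and
    // returns the implied encoding plus the bytes with the BOM removed.
    const BOM_TABLE: Array<[number, string, number[]]> = [
        [4, 'utf-32', [0x00, 0x00, 0xFE, 0xFF]], // UTF-32 big-endian
        [4, 'utf-32', [0xFF, 0xFE, 0x00, 0x00]], // UTF-32 little-endian
        [3, 'utf-8',  [0xEF, 0xBB, 0xBF]],       // UTF-8
        [2, 'utf-16', [0xFE, 0xFF]],             // UTF-16 big-endian
        [2, 'utf-16', [0xFF, 0xFE]],             // UTF-16 little-endian
    ];

    function stripBOM(s: Uint8Array): [string | null, Uint8Array] {
        // The 4-byte BOMs come first: UTF-32 LE starts with the same
        // FF FE bytes as UTF-16 LE, so the longer match must win.
        for (const [n, encoding, bom] of BOM_TABLE) {
            if (s.length >= n && bom.every((b, i) => s[i] === b)) {
                return [encoding, s.slice(n)];
            }
        }
        return [null, s]; // no BOM found
    }

The one subtlety worth porting carefully is the table order: test the 
4-byte UTF-32 BOMs before UTF-16, or a UTF-32 LE file will be misread as 
UTF-16 LE.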
