It is interesting how many times I've seen examples of ChatGPT getting 
something wrong but defending its answers with plausible arguments. In one 
example it gives a "proof" that all odd numbers are prime. It requires some 
thought to find the mistake. In another thread I saw on Twitter, the user asks 
for an anagram. It gives a wrong answer (missing one letter) and its argument 
boils down to insisting that the word "chat" does not have the letter "h". 
But instead of admitting it is wrong, it sticks to its guns. Humans don't like 
to be wrong either.
In 1950, Turing gave an example of a computer playing the imitation game giving 
the wrong answer to an arithmetic problem. I think if he saw GPT-3 he would 
say that AI has arrived.

On Sun, Dec 11, 2022 at 4:37 AM, immortal.discover...@gmail.com wrote:

On Sunday, December 11, 2022, at 1:34 AM, WriterOfMinds wrote:

If I tried to generate multiple e-mails on the same topic (which would be the 
goal - I like to bother my representatives on the regular), they started 
looking very similar. Telling GPT to "rewrite that in different words" just 
produced another copy of the same output.


I found yesterday that Codex has its temperature in the OpenAI Playground set to 0, as 
if it is meant to work differently than GPT. At temperature 0, Codex does predict 
pretty much the same thing every time. I think this is so the code comes out working 
right. I know sometimes a weird prediction can be the answer, but it seems to prefer a 
more frozen, "cold and stable" prediction so things are kept in order, mostly every 
time. Perhaps it's because they know the things it tries to say are taken nearly word 
by word from humans, and that makes them quite likely to be correct (though again, many 
prompts call for new completions overall). Anyway, I don't know, but yeah, it does seem 
to complete with the same thing, like Codex does; very close actually, at least the 
first two sentences I saw were exact matches to a story completion! Lol.

BTW, ChatGPT seems to use Dialogue and Instruct and Code now, which makes it 
different from GPT-3; they call it GPT-3.5, BTW. It basically makes up facts less 
often, tries to act like a human assistant, and knows code and math better - 
something tricky that plain GPT-3 fails at easily. As for Dialogue, I don't know 
exactly how these are all applied, but Dialogue seems to be the goals and beliefs it 
thinks/says to make it try to obey OpenAI's rules and act useful. So this is part of 
why you see fewer outputs like "I have a dog>it's a robot dog!!! Tuesday Ramholt 
said why not just...". It's less random in one sense. More frozen (and aligned, as 
they call it).
