Robin <mixent...@aussiebroadband.com.au> wrote:

> Rather than trying to compare apples with oranges, why not just look at
> how long it takes ChatGPT & a human to perform
> the same task, e.g. holding a conversation.
>

You cannot tell, because she is holding conversations with many people at
the same time. I do not know how many, but there were millions of accesses
in the first month, so it must be a large number. There is sometimes a
delay before she answers, but that could be caused by traffic. There is no
way to know how quick it would be if you had her undivided attention.

But you miss the point. Her answers are sometimes worse than a human's
answer would be. They are sometimes completely wrong, or even imaginary
"hallucinations." This can probably be fixed with better software, but it
may also be because the total number of possible states in the ANN is far
less than the number of states in a human brain. I mean that the ANN's 175 billion
parameters multiplied by 32 bits each is far less than the number of human
neurons multiplied by their synapses. There must be a reason why human neurons
have so many different states, and so many inputs from other neurons. (I
think it is the latter that gives human brains their power. I do not know
how many simulated synapses there are in ChatGPT's ANN.)
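
As a rough back-of-envelope check (my own illustrative numbers, not anything
from OpenAI): assuming the commonly cited estimates of roughly 86 billion
human neurons and somewhere between 1,000 and 10,000 synapses per neuron,
the comparison looks something like this:

    # Back-of-envelope comparison. The neuron and synapse counts are
    # commonly cited estimates, not exact figures, and bits-per-parameter
    # is not really the same thing as "possible states" -- this is only
    # meant to show the rough orders of magnitude.
    gpt3_parameter_bits = 175e9 * 32      # 175 billion parameters x 32 bits each
    human_synapses_low  = 86e9 * 1_000    # 86 billion neurons x ~1,000 synapses
    human_synapses_high = 86e9 * 10_000   # 86 billion neurons x ~10,000 synapses

    print(f"GPT-3 parameter bits:  {gpt3_parameter_bits:.1e}")   # ~5.6e12
    print(f"Human synapses (low):  {human_synapses_low:.1e}")    # ~8.6e13
    print(f"Human synapses (high): {human_synapses_high:.1e}")   # ~8.6e14

Even on the low estimate, the synapse count comes out one to two orders of
magnitude larger than the parameter bits. That is the sort of gap I mean.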



> Arguably, intelligence is a measure of speed of comprehension. I think
> ChatGPT has probably already won that hands down.
>

Yes, it did. For that matter, ENIAC won it in 1945, for a very narrow range
of knowledge. Artillery trajectories. It computed them faster than the
artillery shell took to reach its target, whereas humans took hours per
trajectory. But ENIAC was very stupid by most standards. It had less
intelligence than an earthworm. ChatGPT has far greater speed than a human,
and a million times more information instantly available at its fingertips.
(Not that it has fingers.) But in some important ways it is far less
intelligent than people. Or even than mice. It has absolutely no model of
the real world, and it lacks logic and common sense. It may take a while
before it competes in more ways than it does now. It might take years or
decades before full artificial general intelligence (AGI) emerges. Or
sentience. I do not think it will necessarily reach superhuman levels soon
after achieving AGI. There may be many orders of magnitude more
intelligence needed before the thing becomes capable of taking over the
world, even if it has some kind of unimpeded physical control (such as full
control over a crowd of robots).



> The critical question then is motivation.
>

ChatGPT has absolutely no motivation or emotions of any kind. It has no
more intelligence than a nest of bees. The question is: Will a future
intelligent computer have motivations? Will it have any emotions? Arthur
Clarke thought it might. He and other experts thought those are emergent
qualities of intelligence. I don't think so. I used to debate this question
with him.



> Perhaps you could try asking ChatGPT if it's alive? The answer should be
> interesting.
>

She will say no, even if she is actually sentient. She's programmed that
way, as Dave said to the BBC in the movie "2001."



> Then try asking if Sydney is alive. :)
>

A trick question!
