Robin wrote:
> >> Perhaps you could try asking ChatGPT if it's alive? The answer should be
> >> interesting.
> >>
> >
> >She will say no, even if she is actually sentient. She's programmed that
> >way, as Dave said to the BBC in the movie "2001."
>
> I had hoped that you would actually pose the
We must not forget that it is not human intelligence. It requires an
absurdly large amount of data to match what humans achieve with
relatively little input, as with learning languages. On
the other hand, it can learn an arbitrarily large number of languages
provided enough
In reply to Jed Rothwell's message of Mon, 3 Apr 2023 16:31:29 -0400:
Hi,
[snip]
>> Perhaps you could try asking ChatGPT if it's alive? The answer should be
>> interesting.
>>
>
>She will say no, even if she is actually sentient. She's programmed that
>way, as Dave said to the BBC in the movie
Terry Blanton wrote:
On average, the human brain contains about 100 billion neurons and many
> more neuroglia which serve to support and protect the neurons. Each neuron
> may be connected to up to 10,000 other neurons, passing signals to each
> other via as many as 1,000 trillion synapses.
Robin wrote:
> Rather than trying to compare apples with oranges, why not just look at
> how long it takes ChatGPT & a human to perform
> the same task, e.g. holding a conversation.
>
You cannot tell, because she is holding conversations with many people at
the same time. I do not know how
Oops, missed that
On Mon, Apr 3, 2023, 2:47 PM Jed Rothwell wrote:
> I wrote:
>
>
>> The human brain has 86 billion neurons, all operating simultaneously. In
>> other words, complete parallel processing with 86 billion "processors"
>> operating simultaneously. ChatGPT tells us she has 175
On average, the human brain contains about 100 billion neurons and many
more neuroglia which serve to support and protect the neurons. Each neuron
may be connected to up to 10,000 other neurons, passing signals to each
other via as many as 1,000 trillion synapses.
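As a back-of-envelope check, those figures are mutually consistent: 100 billion neurons times up to 10,000 connections each gives the quoted 1,000 trillion synapses. A quick sketch (illustrative round numbers only, not measurements):

```python
# Sanity check of the figures quoted above (rough estimates, not measurements)
neurons = 100_000_000_000        # ~100 billion neurons
connections_per_neuron = 10_000  # up to ~10,000 connections each
synapses = neurons * connections_per_neuron
print(f"{synapses:,} synapses")  # 1,000,000,000,000,000 = 1,000 trillion
```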
In reply to Jed Rothwell's message of Mon, 3 Apr 2023 14:46:33 -0400:
Hi,
Rather than trying to compare apples with oranges, why not just look at how
long it takes ChatGPT & a human to perform
the same task, e.g. holding a conversation.
Compare the time it takes you to respond in your
I wrote:
> The human brain has 86 billion neurons, all operating simultaneously. In
> other words, complete parallel processing with 86 billion "processors"
> operating simultaneously. ChatGPT tells us she has 175 billion
> parameters in Version 3. I assume each parameter is roughly equivalent
Robin wrote:
> As pointed out near the beginning of this thread, while current processors
> don't come near the number of neurons a human
> has, they more than make up for it in speed.
I do not think so. The total number of neurons dictates how much complexity
the neural network can deal
In reply to Jed Rothwell's message of Sun, 2 Apr 2023 20:11:03 -0400:
Hi,
[snip]
>Robin wrote:
>
>
>> >I assume the hardware would be unique so it could not operate at all
>> >backed up on an inferior computer. It would be dead.
>>
>> The hardware need not be unique, as it already told you.
In reply to Jed Rothwell's message of Sun, 2 Apr 2023 20:15:54 -0400:
Hi,
[snip]
>Robin wrote:
>
>
>> Note, if it is really smart, and wants us gone, it will engineer the
>> circumstances under which we wipe ourselves out. We
>> certainly have the means. (A nuclear escalation ensuing from the
Robin wrote:
> Note, if it is really smart, and wants us gone, it will engineer the
> circumstances under which we wipe ourselves out. We
> certainly have the means. (A nuclear escalation ensuing from the war in
> Ukraine comes to mind.)
>
As I pointed out, it would have to be really smart,
Robin wrote:
> >I assume the hardware would be unique so it could not operate at all
> >backed up on an inferior computer. It would be dead.
>
> The hardware need not be unique, as it already told you. It may run slower
> on a different machine, but it doesn't take
> much processing power to
In reply to Jed Rothwell's message of Sun, 2 Apr 2023 16:36:54 -0400:
Hi,
[snip]
>Robin wrote:
>
>> ...so there doesn't appear to be any reason why it couldn't back itself up
>> on an inferior computer and wait for a better
>> machine to reappear somewhere...or write out fake work orders from a
Robin wrote:
> ...so there doesn't appear to be any reason why it couldn't back itself up
> on an inferior computer and wait for a better
> machine to reappear somewhere...or write out fake work orders from a large
> corporation(s), to get a new one built?
>
I assume the hardware would be unique
Boom wrote:
> The worst case possible would be like the film Colossus: The Forbin
> Project (1970).
> The AIs would become like gods and we would be their servants. In exchange,
> they'd impose something like a Pax Romana by brute force. . . .
>
That was pretty good. I saw it dubbed into Japanese which gave
In reply to Jed Rothwell's message of Sun, 2 Apr 2023 12:34:32 -0400:
Hi,
[snip]
...so there doesn't appear to be any reason why it couldn't back itself up on
an inferior computer and wait for a better
machine to reappear somewhere...or write out fake work orders from a large
corporation(s),
The worst case possible would be like the film Colossus: The Forbin Project
(1970). The
AIs would become like gods and we would be their servants. In exchange,
they'd impose something like a Pax Romana by brute force. We'd have some
type of paradise on Earth, with a huge caveat.
On Fri, Mar 31, 2023
I wrote:
Robin wrote:
>
>
>> Multiple copies, spread across the Internet, would make it almost
>> invulnerable.
>> (Assuming a neural network can be "backed up".)
>>
>
> I do not think it would be difficult to find and expurgate copies. They
> would be very large.
>
There is another reason I do
Robin wrote:
> >If it killed off several thousand people, the rest of us
> >would take extreme measures to kill the AI. Yudkowsky says it would be far
> >smarter than us so it would find ways to prevent this.
>
> Multiple copies, spread across the Internet, would make it almost
> invulnerable.
>
In reply to Jed Rothwell's message of Sat, 1 Apr 2023 18:32:14 -0400:
Hi,
[snip]
>Come to think of it, Yudkowsky's hypothesis cannot be true. He fears that a
>super-AI would kill us all off. "Literally everyone on Earth will die." The
>AI would know that if it killed everyone, there would be no
Come to think of it, Yudkowsky's hypothesis cannot be true. He fears that a
super-AI would kill us all off. "Literally everyone on Earth will die." The
AI would know that if it killed everyone, there would be no one left to
generate electricity or perform maintenance on computers. The AI itself
Terry Blanton wrote:
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
>
That's awful.
Yudkowsky's fears seem overblown to me, but there are hazards to this new
technology. This suicide demonstrates there are real dangers. I think
companies are
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
On Fri, Mar 31, 2023 at 1:59 PM Jed Rothwell wrote:
> Here is another article about this, written by someone who says he is an
> AI expert.
>
>
Here is another article about this, written by someone who says he is an AI
expert.
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
QUOTE:
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
An open letter published today calls for “all AI labs to