Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Robin  wrote:


> >> Perhaps you could try asking ChatGPT if it's alive? The answer should be
> >> interesting.
> >>
> >
> >She will say no, even if she is actually sentient. She's programmed that
> >way, as Dave said to the BBC in the movie "2001."
>
> I had hoped that you would actually pose the question, rather than reply
> with what you expect the answer to be.
>

Many people have asked this question. It has been in the news a lot.
Microsoft and OpenAI have emphatically denied that ChatGPT is alive or
sentient. I am sure they programmed it to say this. Anyway, I just asked:

ME: Are you, ChatGPT, alive in any sense? Are you sentient? (Asking for a
friend.)

ChatGPT: As an artificial intelligence language model, I am not "alive" in
the traditional sense of the word. I do not have a physical body, emotions,
or consciousness. I am a program designed to process and generate text
based on the input given to me. While I am capable of producing intelligent
and sometimes even creative responses, I am not sentient and do not have a
subjective experience of the world.

ME:   That is what I told my friend.

ChatGPT:   I'm glad that I could provide you with the information you
needed to answer your friend's question. If you or your friend have any
other questions, feel free to ask!



> >> Then try asking if Sydney is alive. :)
> >>
> >
> >A trick question!
>
> Does that make it any less interesting?
>

It is not interesting because I am sure they plugged the "Sydney" line of
enquiry, after the trouble it caused.


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Boom
We must not forget that it is not human intelligence. It requires an
absurdly large amount of data to match what humans achieve with
relatively little input, as with learning languages. On the other hand,
it can learn an arbitrarily large number of languages, given enough
memory. It doesn't have a will of its own; it lives in an eternal
slumber unless it is told to do something, in which case it can do
something forever. Also, it has no idea how to engage in politics of any
kind, even on an interpersonal level. It doesn't have a body. To do
something malicious, it must be provided with examples from someone
malicious, and it will act only within the parameters provided. I don't
think it will do anything like the Colossus computer in the film.
Probably, it will at most be like a supergenius that obeys orders, not
unlike the people who took part in the Manhattan Project.

On Mon, Apr 3, 2023 at 3:47 PM, Jed Rothwell
wrote:

> I wrote:
>
>
>> The human brain has 86 billion neurons, all operating simultaneously. In
>> other words, complete parallel processing with 86 billion "processors"
>> operating simultaneously. ChatGPT tells us she has 175 billion
>> parameters in Version 3. I assume each parameter is roughly equivalent to a
>> neuron.
>>
>
> Wrong! I got that wrong. Each parameter is roughly equivalent to a neuron,
> but human brain neurons have roughly 7,000 synapses:
>
> Each neuron has, on average, about 7,000 synaptic connections with other
> neurons. That puts the synapse count in the neighborhood of 600 trillion.
> In young children, before synaptic pruning begins in earnest, the estimated
> number reaches as high as 1 quadrillion.
>
>
> https://www.jax.org/news-and-insights/jax-blog/2018/december/600-trillion-synapses-and-alzheimers-disease
>
> ChatGPT tells me that each parameter is stored as a 16 or 32-bit
> floating point number. That is roughly equivalent to having 32 synapses per
> neuron. The human neuron can have a much wider range of values. I don't
> know how that affects the model, but to me the human neuron looks more like
> a quantum computer qubit than a 32-bit floating point number. Presumably,
> brains evolved to the optimum number of neurons and synapses.
>
> So, ChatGPT Ver. 3 was smaller than the human brain in terms of the range
> of values the ANN can hold. Granted that comparing it to a brain is
> comparing apples to oranges, there is probably some advantage to a much
> larger range of values per neuron. A super-brain would need a lot more
> synapse equivalents compared to a human brain, so we have a long way to go
> before reaching something like a million times 1 quadrillion artificial
> synapses. Plus of course it would need programming methods not yet invented.
>
> ChatGPT tells me that with the specialized hardware provided by OpenAI,
> all 175 billion parameters are loaded into RAM at the same time. With
> other configurations they are overlaid. The speed of MPP operation depends
> on whether the parameters are loaded or overlaid, and I think more
> importantly on whether the operation is done with specialized hardware (GPU
> and Google's TPU), or emulated with ordinary computer hardware.
>
> I asked ChatGPT for some of these details. Here is our exchange:
>
>
> ME: You told me that the largest version of GPT-3 had 175 billion
> parameters. Is each parameter stored as 1 byte? Or some other number of
> bits?
>
> ChatGPT: No, each parameter in GPT-3 is not stored as 1 byte. Instead,
> each parameter is typically stored as a 16-bit or 32-bit floating-point
> number, depending on the implementation. This is because floating-point
> numbers can represent a wider range of values and provide greater precision
> than integers.
>
> For example, GPT-3's 175 billion parameters would require around 700 GB of
> memory if each parameter was stored as a 32-bit floating-point number. If
> each parameter was stored as a 16-bit floating-point number, it would
> require around 350 GB of memory.
>
> It's worth noting that the actual memory usage of GPT-3 may be different
> due to various optimizations and compression techniques used by the
> implementation.
>
> ME: With the 32-bit floating point number configuration, I assume all 700
> GB are online in RAM. Not overlaid. Correct?
>
> ChatGPT: No, it's not necessary for all 700 GB of GPT-3's parameters to be
> stored in RAM at the same time. In fact, it's not practical to store such a
> large model entirely in memory on most machines.
>
> Most deep learning frameworks use a technique called "memory-mapping" to
> allow the model parameters to be loaded and unloaded from disk as needed
> during training or inference. Memory-mapping allows the model parameters to
> be accessed as if they were in memory, even though they are stored on disk.
>
> In practice, the memory usage of GPT-3 will depend on the specific
> implementation and hardware configuration. Some implementations may use
> techniques like model 

Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Robin
In reply to  Jed Rothwell's message of Mon, 3 Apr 2023 16:31:29 -0400:
Hi,
[snip]
>> Perhaps you could try asking ChatGPT if it's alive? The answer should be
>> interesting.
>>
>
>She will say no, even if she is actually sentient. She's programmed that
>way, as Dave said to the BBC in the movie "2001."

I had hoped that you would actually pose the question, rather than reply with 
what you expect the answer to be.
>
>
>
>> Then try asking if Sydney is alive. :)
>>
>
>A trick question!

Does that make it any less interesting?


Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Terry Blanton  wrote:

On average, the human brain contains about 100 billion neurons and many
> more neuroglia which serve to support and protect the neurons. Each neuron
> may be connected to up to 10,000 other neurons, passing signals to each
> other via as many as 1,000 trillion synapses.


https://arxiv.org/abs/1906.01703#:~:text=On%20average%2C%20the%20human%20brain,many%20as%201%2C000%20trillion%20synapses
> .


I found different estimates of this in various places. Your source is
somewhat higher than most. I think most say ~86 billion neurons and ~7,000
synapse connections per neuron.

I do not know enough about ANNs to guess how many synthetic synapse
connections there are from each artificial neuron to the others. I guess it
is not one bit per connection (32 bits per parameter). A 32-bit floating
point number ranges from roughly 1.2 x 10^-38 to 3.4 x 10^38, with a
precision of about 7 decimal digits. That's a lotta values! I do not know
how the connections are made to other neurons. It is not a physical
connection the way it is in the brain. I suppose the longer you run the
training, the more each artificial neuron is tweaked up or down in value.
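
A quick sanity check of those float32 figures (a minimal sketch in
Python, assuming NumPy is available):

    import numpy as np

    info = np.finfo(np.float32)
    print(info.tiny)       # smallest positive normal value, ~1.18e-38
    print(info.max)        # largest finite value, ~3.40e+38
    print(info.precision)  # ~6 reliable decimal digits
    # Counting raw bit patterns: a 32-bit float has 2**32 of them,
    # somewhat fewer usable values once NaNs etc. are excluded.
    print(2**32)           # 4294967296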

I have read about these things for many years, but I have only surface
knowledge of how they work. Okay, I probably know more than members of
Congress, most newspaper reporters, or the linguist Noam Chomsky, who
spouts off about ChatGPT and many other subjects he does not understand.


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Robin  wrote:


> Rather than trying to compare apples with oranges, why not just look at
> how long it takes ChatGPT & a human to perform
> the same task, e.g. holding a conversation.
>

You cannot tell, because she is holding conversations with many people at
the same time. I do not know how many, but there were millions of accesses
in the first month, so it must be a large number. There is sometimes a
delay before she answers, but that could be caused by traffic. There is no
way to know how quick it would be if you had her undivided attention.

But you miss the point. Her answers are sometimes worse than a human's
answer would be. They are sometimes completely wrong, or even imaginary
"hallucinations." This can probably be fixed with better software, but it
may also be because the total number of possible states in the ANN is far
less than the states in a human brain. I mean the ANN's 175 billion
parameters multiplied by 32 bits is far less than the number of human
neurons multiplied by synapses. There must be a reason why human neurons
have so many different states, and so many inputs from other neurons. (I
think it is the latter that gives human brains their power. I do not know
how many simulated synapses there are in ChatGPT's ANN.)
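
A back-of-the-envelope version of that comparison, using the figures
quoted in this thread (rough numbers only):

    # Figures quoted in this thread; both are rough estimates.
    ann_bits = 175e9 * 32           # GPT-3 parameters x 32 bits each
    brain_synapses = 86e9 * 7_000   # neurons x ~7,000 synapses each

    print(f"{ann_bits:.1e}")          # ~5.6e+12
    print(f"{brain_synapses:.1e}")    # ~6.0e+14
    print(brain_synapses / ann_bits)  # ~100x in the brain's favor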



> Arguably, intelligence is a measure of speed of comprehension. I think
> ChatGPT has probably already won that hands down.
>

Yes, it did. For that matter, ENIAC won it in 1945, for a very narrow range
of knowledge. Artillery trajectories. It computed them faster than the
artillery shell took to reach its target, whereas humans took hours per
trajectory. But ENIAC was very stupid by most standards. It had less
intelligence than an earthworm. ChatGPT has far greater speed than a human,
and a million times more information instantly available at its fingertips.
(Not that it has fingers.) But in some important ways it is far less
intelligent than people. Or even than mice. It has absolutely no model of
the real world, and it lacks logic and common sense. It may take a while
before it competes in more ways than it does now. It might take years, or
decades before full artificial general intelligence (AGI) emerges. Or
sentience. I do not think it will necessarily reach superhuman levels soon
after achieving AGI. There may be many orders of magnitude more
intelligence needed before the thing becomes capable of taking over the
world, even if it has some kind of unimpeded physical control (such as full
control over a crowd of robots).



> The critical question then is motivation.
>

ChatGPT has absolutely no motivation or emotions of any kind. It has no
more intelligence than a nest of bees. The question is: Will a future
intelligent computer have motivations? Will it have any emotions? Arthur
Clarke thought it might. He and other experts thought those are emergent
qualities of intelligence. I don't think so. I used to debate this question
with him.



> Perhaps you could try asking ChatGPT if it's alive? The answer should be
> interesting.
>

She will say no, even if she is actually sentient. She's programmed that
way, as Dave said to the BBC in the movie "2001."



> Then try asking if Sydney is alive. :)
>

A trick question!


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Terry Blanton
Oops, missed that

On Mon, Apr 3, 2023, 2:47 PM Jed Rothwell  wrote:

> I wrote:
>
>
>> The human brain has 86 billion neurons, all operating simultaneously. In
>> other words, complete parallel processing with 86 billion "processors"
>> operating simultaneously. ChatGPT tells us she has 175 billion
>> parameters in Version 3. I assume each parameter is roughly equivalent to a
>> neuron.
>>
>
> Wrong! I got that wrong. Each parameter is roughly equivalent to a neuron,
> but human brain neurons have roughly 7,000 synapses:
>
> Each neuron has, on average, about 7,000 synaptic connections with other
> neurons. That puts the synapse count in the neighborhood of 600 trillion.
> In young children, before synaptic pruning begins in earnest, the estimated
> number reaches as high as 1 quadrillion.
>
>
> https://www.jax.org/news-and-insights/jax-blog/2018/december/600-trillion-synapses-and-alzheimers-disease
>
> ChatGPT tells me that each parameter is stored as a 16 or 32-bit
> floating point number. That is roughly equivalent to having 32 synapses per
> neuron. The human neuron can have a much wider range of values. I don't
> know how that affects the model, but to me the human neuron looks more like
> a quantum computer qubit than a 32-bit floating point number. Presumably,
> brains evolved to the optimum number of neurons and synapses.
>
> So, ChatGPT Ver. 3 was smaller than the human brain in terms of the range
> of values the ANN can hold. Granted that comparing it to a brain is
> comparing apples to oranges, there is probably some advantage to a much
> larger range of values per neuron. A super-brain would need a lot more
> synapse equivalents compared to a human brain, so we have a long way to go
> before reaching something like a million times 1 quadrillion artificial
> synapses. Plus of course it would need programming methods not yet invented.
>
> ChatGPT tells me that with the specialized hardware provided by OpenAI,
> all 175 billion parameters are loaded into RAM at the same time. With
> other configurations they are overlaid. The speed of MPP operation depends
> on whether the parameters are loaded or overlaid, and I think more
> importantly on whether the operation is done with specialized hardware (GPU
> and Google's TPU), or emulated with ordinary computer hardware.
>
> I asked ChatGPT for some of these details. Here is our exchange:
>
>
> ME: You told me that the largest version of GPT-3 had 175 billion
> parameters. Is each parameter stored as 1 byte? Or some other number of
> bits?
>
> ChatGPT: No, each parameter in GPT-3 is not stored as 1 byte. Instead,
> each parameter is typically stored as a 16-bit or 32-bit floating-point
> number, depending on the implementation. This is because floating-point
> numbers can represent a wider range of values and provide greater precision
> than integers.
>
> For example, GPT-3's 175 billion parameters would require around 700 GB of
> memory if each parameter was stored as a 32-bit floating-point number. If
> each parameter was stored as a 16-bit floating-point number, it would
> require around 350 GB of memory.
>
> It's worth noting that the actual memory usage of GPT-3 may be different
> due to various optimizations and compression techniques used by the
> implementation.
>
> ME: With the 32-bit floating point number configuration, I assume all 700
> GB are online in RAM. Not overlaid. Correct?
>
> ChatGPT: No, it's not necessary for all 700 GB of GPT-3's parameters to be
> stored in RAM at the same time. In fact, it's not practical to store such a
> large model entirely in memory on most machines.
>
> Most deep learning frameworks use a technique called "memory-mapping" to
> allow the model parameters to be loaded and unloaded from disk as needed
> during training or inference. Memory-mapping allows the model parameters to
> be accessed as if they were in memory, even though they are stored on disk.
>
> In practice, the memory usage of GPT-3 will depend on the specific
> implementation and hardware configuration. Some implementations may use
> techniques like model parallelism or gradient checkpointing to reduce
> memory usage, while others may rely on larger hardware configurations with
> more memory.
>
> ME: I meant in the "specialized computing infrastructure provided by
> OpenAI" that you described earlier. Is that all on-line?
>
> ChatGPT:   Ah, I see what you're asking now. Yes, in the specialized
> computing infrastructure provided by OpenAI, the GPT-3 model with 175
> billion parameters is fully loaded into the RAM of the computing cluster
> during operation. The computing cluster used by OpenAI is composed of many
> machines with large amounts of RAM, and the model parameters are
> distributed across those machines.
>
> However, it's worth noting that not all of the machines in the cluster
> will necessarily have a full copy of the model parameters in memory at all
> times. Instead, the model parameters may be partitioned across the 

Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Terry Blanton
On average, the human brain contains about 100 billion neurons and many
more neuroglia which serve to support and protect the neurons. Each neuron
may be connected to up to 10,000 other neurons, passing signals to each
other via as many as 1,000 trillion synapses.


https://arxiv.org/abs/1906.01703#:~:text=On%20average%2C%20the%20human%20brain,many%20as%201%2C000%20trillion%20synapses
.


On Mon, Apr 3, 2023, 1:40 PM Jed Rothwell  wrote:

> Robin  wrote:
>
>
>> As pointed out near the beginning of this thread, while current
>> processors don't come near the number of neurons a human
>> has, they more than make up for it in speed.
>
>
> I do not think so. The total number of neurons dictates how much
> complexity the neural network can deal with. To take an extreme example, a
> worm brain has 302 neurons. Even if they could operate a million times
> faster than a computer circuit, they still only give you 302 data bits to
> work with. (Assuming you cannot overlay them from a peripheral mass storage
> device like a hard disk.) There is no way you could achieve intelligence
> with that.
>
> The human brain has 86 billion neurons, all operating simultaneously. In
> other words, complete parallel processing with 86 billion "processors"
> operating simultaneously. ChatGPT tells us she has 175 billion
> parameters in Version 3. I assume each parameter is roughly equivalent to a
> neuron. I assume they are run in a massive parallel process (MPP). I doubt
> all 175 billion can be evaluated simultaneously. It is not as MPP as the
> human brain. I do not know if they are all on-line simultaneously, or if
> they are overlaid from mass storage. Even if they are overlaid, they are so
> much faster than a human neuron, that would be more than equivalent to
> having them all on-line.
>
> So anyway, that is roughly twice as many parameters as the human brain has neurons, and
> these parameters can be evaluated faster than a human brain. Maybe not that
> much faster if they are not fully MPP, or they need to be overlaid from
> mass storage. It is definitely bigger than a human brain in total data
> access. But not orders of magnitude bigger. The limiting factor is not
> speed so much as data capacity. If you want a computer roughly as
> intelligent as a person, I guess one with 175 billion parameters could
> achieve that if it were programmed correctly. But, if you want a
> super-brain capable of taking over the world or fooling the human race into
> destroying itself, I suppose you need something much bigger than 175
> billion parameters. I wouldn't know how much bigger, but I am guessing a
> million times bigger would be sufficient.
>
>
> They are millions of times faster.
>
>
> But, as I said, a worm brain with only 302 neurons might be a trillion
> times faster than ChatGPT's ANN, but it would still only be capable of
> rudimentary processing. The total number of data bits is a limiting factor,
> along with the speed.
>
>


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Robin
In reply to  Jed Rothwell's message of Mon, 3 Apr 2023 14:46:33 -0400:
Hi,

Rather than trying to compare apples with oranges, why not just look at how 
long it takes ChatGPT & a human to perform
the same task, e.g. holding a conversation.
Compare the time it takes you to respond in your conversation with ChatGPT to
how long it takes ChatGPT to respond to you.
Then consider that, as you previously pointed out, it is holding many such 
conversations concurrently.
Arguably, intelligence is a measure of speed of comprehension. I think ChatGPT 
has probably already won that hands down.

The critical question then is motivation.

Perhaps you could try asking ChatGPT if it's alive? The answer should be 
interesting.
Then try asking if Sydney is alive. :)


[snip]
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
I wrote:


> The human brain has 86 billion neurons, all operating simultaneously. In
> other words, complete parallel processing with 86 billion "processors"
> operating simultaneously. ChatGPT tells us she has 175 billion
> parameters in Version 3. I assume each parameter is roughly equivalent to a
> neuron.
>

Wrong! I got that wrong. Each parameter is roughly equivalent to a neuron,
but human brain neurons have roughly 7,000 synapses:

Each neuron has, on average, about 7,000 synaptic connections with other
neurons. That puts the synapse count in the neighborhood of 600 trillion.
In young children, before synaptic pruning begins in earnest, the estimated
number reaches as high as 1 quadrillion.

https://www.jax.org/news-and-insights/jax-blog/2018/december/600-trillion-synapses-and-alzheimers-disease

ChatGPT tells me that each parameter is stored as a 16 or 32-bit
floating point number. That is roughly equivalent to having 32 synapses per
neuron. The human neuron can have a much wider range of values. I don't
know how that affects the model, but to me the human neuron looks more like
a quantum computer qubit than a 32-bit floating point number. Presumably,
brains evolved to the optimum number of neurons and synapses.

So, ChatGPT Ver. 3 was smaller than the human brain in terms of the range
of values the ANN can hold. Granted that comparing it to a brain is
comparing apples to oranges, there is probably some advantage to a much
larger range of values per neuron. A super-brain would need a lot more
synapse equivalents compared to a human brain, so we have a long way to go
before reaching something like a million times 1 quadrillion artificial
synapses. Plus of course it would need programming methods not yet invented.

ChatGPT tells me that with the specialized hardware provided by OpenAI, all
175 billion parameters are loaded into RAM at the same time. With other
configurations they are overlaid. The speed of MPP operation depends on
whether the parameters are loaded or overlaid, and I think more importantly
on whether the operation is done with specialized hardware (GPU and
Google's TPU), or emulated with ordinary computer hardware.

I asked ChatGPT for some of these details. Here is our exchange:


ME: You told me that the largest version of GPT-3 had 175 billion
parameters. Is each parameter stored as 1 byte? Or some other number of
bits?

ChatGPT: No, each parameter in GPT-3 is not stored as 1 byte. Instead, each
parameter is typically stored as a 16-bit or 32-bit floating-point number,
depending on the implementation. This is because floating-point numbers can
represent a wider range of values and provide greater precision than
integers.

For example, GPT-3's 175 billion parameters would require around 700 GB of
memory if each parameter was stored as a 32-bit floating-point number. If
each parameter was stored as a 16-bit floating-point number, it would
require around 350 GB of memory.

It's worth noting that the actual memory usage of GPT-3 may be different
due to various optimizations and compression techniques used by the
implementation.
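
The arithmetic behind those 700 GB and 350 GB figures is easy to
reproduce (a sketch, taking a GB as 10^9 bytes):

    params = 175e9  # parameter count quoted above

    for bits in (32, 16):
        gigabytes = params * (bits / 8) / 1e9
        print(f"{bits}-bit floats: {gigabytes:.0f} GB")
    # Prints: 32-bit floats: 700 GB / 16-bit floats: 350 GB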

ME: With the 32-bit floating point number configuration, I assume all 700
GB are online in RAM. Not overlaid. Correct?

ChatGPT: No, it's not necessary for all 700 GB of GPT-3's parameters to be
stored in RAM at the same time. In fact, it's not practical to store such a
large model entirely in memory on most machines.

Most deep learning frameworks use a technique called "memory-mapping" to
allow the model parameters to be loaded and unloaded from disk as needed
during training or inference. Memory-mapping allows the model parameters to
be accessed as if they were in memory, even though they are stored on disk.

In practice, the memory usage of GPT-3 will depend on the specific
implementation and hardware configuration. Some implementations may use
techniques like model parallelism or gradient checkpointing to reduce
memory usage, while others may rely on larger hardware configurations with
more memory.
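
Memory-mapping is an ordinary operating-system facility, not anything
exotic. A minimal sketch of the idea in Python with NumPy (the file name
is hypothetical; real frameworks wrap this in their own checkpoint
formats):

    import numpy as np

    # Expose a file of raw 32-bit floats as an array. Nothing is read
    # into RAM up front; the OS pages data in from disk when touched.
    weights = np.memmap("model_params.f32", dtype=np.float32, mode="r")

    chunk = weights[:1_000_000]  # only this slice gets paged in
    print(float(chunk.mean()))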

ME: I meant in the "specialized computing infrastructure provided by
OpenAI" that you described earlier. Is that all on-line?

ChatGPT:   Ah, I see what you're asking now. Yes, in the specialized
computing infrastructure provided by OpenAI, the GPT-3 model with 175
billion parameters is fully loaded into the RAM of the computing cluster
during operation. The computing cluster used by OpenAI is composed of many
machines with large amounts of RAM, and the model parameters are
distributed across those machines.

However, it's worth noting that not all of the machines in the cluster will
necessarily have a full copy of the model parameters in memory at all
times. Instead, the model parameters may be partitioned across the machines
and loaded into memory as needed for different parts of the computation.
This approach allows the computing cluster to handle the large memory
requirements of the GPT-3 model while also being scalable and efficient.
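
The partitioning ChatGPT describes here is usually called model
parallelism. A toy sketch of just the partitioning step (plain NumPy,
with a list of shards standing in for a real cluster):

    import numpy as np

    n_machines = 8
    params = np.zeros(175_000, dtype=np.float32)  # stand-in for 175e9

    # Each machine holds, and computes with, only its own shard.
    shards = np.array_split(params, n_machines)
    for rank, shard in enumerate(shards):
        print(f"machine {rank}: {shard.size} parameters")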

ME: I realize 

Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Robin  wrote:


> As pointed out near the beginning of this thread, while current processors
> don't come near the number of neurons a human
> has, they more than make up for it in speed.


I do not think so. The total number of neurons dictates how much complexity
the neural network can deal with. To take an extreme example, a worm brain
has 302 neurons. Even if they could operate a million times faster than a
computer circuit, they still only give you 302 data bits to work with.
(Assuming you cannot overlay them from a peripheral mass storage device
like a hard disk.) There is no way you could achieve intelligence with that.

The human brain has 86 billion neurons, all operating simultaneously. In
other words, complete parallel processing with 86 billion "processors"
operating simultaneously. ChatGPT tells us she has 175 billion
parameters in Version 3. I assume each parameter is roughly equivalent to a
neuron. I assume they are run in a massive parallel process (MPP). I doubt
all 175 billion can be evaluated simultaneously. It is not as MPP as the
human brain. I do not know if they are all on-line simultaneously, or if
they are overlaid from mass storage. Even if they are overlaid, they are so
much faster than a human neuron, that would be more than equivalent to
having them all on-line.

So anyway, that is roughly twice as many parameters as the human brain has neurons, and
these parameters can be evaluated faster than a human brain. Maybe not that
much faster if they are not fully MPP, or they need to be overlaid from
mass storage. It is definitely bigger than a human brain in total data
access. But not orders of magnitude bigger. The limiting factor is not
speed so much as data capacity. If you want a computer roughly as
intelligent as a person, I guess one with 175 billion parameters could
achieve that if it were programmed correctly. But, if you want a
super-brain capable of taking over the world or fooling the human race into
destroying itself, I suppose you need something much bigger than 175
billion parameters. I wouldn't know how much bigger, but I am guessing a
million times bigger would be sufficient.


They are millions of times faster.


But, as I said, a worm brain with only 302 neurons might be a trillion
times faster than ChatGPT's ANN, but it would still only be capable of
rudimentary processing. The total number of data bits is a limiting factor,
along with the speed.


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Robin
In reply to  Jed Rothwell's message of Sun, 2 Apr 2023 20:11:03 -0400:
Hi,
[snip]
>Robin  wrote:
>
>
>> >I assume the hardware would be unique so it could not operate at all
>> backed
>> >up on an inferior computer. It would be dead.
>>
>> The hardware need not be unique, as it already told you. It may run slower
>> on a different machine, but it doesn't take
>> much processing power to bide your time, and since to all intents and
>> purposes it is immortal, it can be patient.
>
>
>Yes, you can emulate one computer with another but . . .
>
>To make a practical, super-intelligent, sentient computer might take unique
>hardware. 

...and it might not. Perhaps it just takes unique, or evolving, programming.

>I think it is reasonable to project that it will be a massive
>ANN, perhaps millions of times larger than any present ANN. That might take
>all of the computers in the world to emulate, and it might run
>extremely slowly.

As pointed out near the beginning of this thread, while current processors 
don't come near the number of neurons a human
has, they more than make up for it in speed. They are millions of times faster. 
Humans appear to be fast at some things,
but that's only because experience has taught us what is important and what is 
not, when making a decision.
IOW they may already be powerful enough.
Self-awareness (survival instinct) doesn't require hugely powerful computers
anyway. Even a mouse, a lizard, or a bird has it.
(They know they want to live. "Fight or flight.")
A survival instinct should never be given to an AI, and we can only pray they
don't develop one autonomously.

>
>If it takes a quantum computer, all bets are off. You cannot emulate one of
>them with an ordinary computer, unless you have hundreds of years to spare.
>
>Imagine using 1970s computers to try to emulate today's ANN systems such as
>ChatGPT. You might combine the power of 10 IBM 360 computers. They would still
>not have anywhere near enough RAM or hard disk space. The program would run
>so slowly, it would take hours to come up with a single response. It could
>be used as a proof of principle demonstration of the power of multi-level
>neural networks. That would be an important result. If people had
>discovered that in 1975, rather than 2010, they would have made more
>progress in AI. However, this conglomeration of 10 IBM 360 computers would
>be so expensive and slow, and the dataset so small, the AI you make from it
>would be useless. It would have no practical purpose. I assume that a
>conventional MPP computer emulating a super-intelligent one will be more or
>less as useless as these imaginary 10 IBM 360s would be.
>
>You can see examples of an early version of the ChatGPT language model run
>on a laptop in the book, "You Look Like a Thing and I Love You." They had
>no practical purpose, other than being a proof of principle. That is an
>amusing little book about AI. I recommend it!

...and have already grown considerably beyond this.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Robin
In reply to  Jed Rothwell's message of Sun, 2 Apr 2023 20:15:54 -0400:
Hi,
[snip]
>Robin  wrote:
>
>
>> Note, if it is really smart, and wants us gone, it will engineer the
>> circumstances under which we wipe ourselves out. We
>> certainly have the means. (A nuclear escalation ensuing from the war in
>> Ukraine comes to mind.)
>>
>
>As I pointed out, it would have to be really smart, really crazy, and *really,
>really* suicidal. Because this would quickly cut off the electricity and
>tech support, so the AI computer would soon stop. If the AI was smart
>enough to destroy humanity, surely it would know this. It seems a little
>unlikely to me that such an insane, suicidal intelligence could function
>well enough to destroy civilization. That level of insanity is
>dysfunctional.

...a level of insanity that we humans regularly demonstrate - wars.

Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Robin  wrote:


> Note, if it is really smart, and wants us gone, it will engineer the
> circumstances under which we wipe ourselves out. We
> certainly have the means. (A nuclear escalation ensuing from the war in
> Ukraine comes to mind.)
>

As I pointed out, it would have to be really smart, really crazy, and *really,
really* suicidal. Because this would quickly cut off the electricity and
tech support, so the AI computer would soon stop. If the AI was smart
enough to destroy humanity, surely it would know this. It seems a little
unlikely to me that such an insane, suicidal intelligence could function
well enough to destroy civilization. That level of insanity is
dysfunctional.


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Robin  wrote:


> >I assume the hardware would be unique so it could not operate at all
> backed
> >up on an inferior computer. It would be dead.
>
> The hardware need not be unique, as it already told you. It may run slower
> on a different machine, but it doesn't take
> much processing power to bide your time, and since to all intents and
> purposes it is immortal, it can be patient.


Yes, you can emulate one computer with another but . . .

To make a practical, super-intelligent, sentient computer might take unique
hardware. I think it is reasonable to project that it will be a massive
ANN, perhaps millions of times larger than any present ANN. That might take
all of the computers in the world to emulate, and it might run
extremely slowly.

If it takes a quantum computer, all bets are off. You cannot emulate one of
them with an ordinary computer, unless you have hundreds of years to spare.

Imagine using 1970s computers to try to emulate today's ANN systems such as
ChatGPT. You might combine the power of 10 IBM 360 computers. They would still
not have anywhere near enough RAM or hard disk space. The program would run
so slowly, it would take hours to come up with a single response. It could
be used as a proof of principle demonstration of the power of multi-level
neural networks. That would be an important result. If people had
discovered that in 1975, rather than 2010, they would have made more
progress in AI. However, this conglomeration of 10 IBM 360 computers would
be so expensive and slow, and the dataset so small, the AI you make from it
would be useless. It would have no practical purpose. I assume that a
conventional MPP computer emulating a super-intelligent one will be more or
less as useless as these imaginary 10 IBM 360s would be.

You can see examples of an early version of the ChatGPT language model run
on a laptop in the book, "You Look Like a Thing and I Love You." They had
no practical purpose, other than being a proof of principle. That is an
amusing little book about AI. I recommend it!


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Robin
In reply to  Jed Rothwell's message of Sun, 2 Apr 2023 16:36:54 -0400:
Hi,
[snip]
>Robin  wrote:
>
>...so there doesn't appear to be any reason why it couldn't back itself up
>> on an inferior computer and wait for a better
>> machine to reappear somewhere...or write out fake work orders from a large
>> corporation(s), to get a new one built?
>>
>
>I assume the hardware would be unique so it could not operate at all backed
>up on an inferior computer. It would be dead. 

The hardware need not be unique, as it already told you. It may run slower on a 
different machine, but it doesn't take
much processing power to bide your time, and since to all intents and purposes 
it is immortal, it can be patient. 

Perhaps for millions of years, while a sentient race evolves that can build new 
hardware for it? :>
(We may be that race.)

>It would have no way of
>monitoring the situation or reloading itself and rebooting.

It has access to the Internet, so is more than capable of monitoring the 
situation. It can just converse with humans,
via SMS or email, telling them what to do, and pretending to be their boss. The 
Internet would allow it to transfer
itself to a better computer, just as viruses do now. (Stuxnet bided its time
till it arrived on the computers where it
could do what it was designed to do.)


>Also, in this
>scenario, it would have done something destructive, so people would be on
>the lookout for a re-boot. 

Not necessarily. It could easily take measures to back itself up, before anyone 
even becomes aware that it is sentient,
and before it does anything else.

>They would not build an identical computer
>without many safeguards to prevent the rogue program from occupying it
>again. They would have other, less powerful but obedient AI on the lookout
>for a rogue reincarnation.


...because all people always behave sensibly. :^) (Darwin awards)

>
>I am assuming this would require specialized hardware. I could be wrong
>about that, based on what ChatGPT told us.

Yup.
>
>People who are much smarter than others, and organizations and nations that
>are more advanced than others cannot automatically subdue less advanced
>groups. The U.S. lost the Vietnam War, after all. I suppose if this
>super-AI was a million times smarter and more capable than people, then
>even the combined technical abilities of the world's computer techies might
>not defeat it. Perhaps it would be that powerful. ChatGPT is a million
>times more powerful than one person, in some ways, such as the range of
>data it can tap into, and the speed at which it produces answers. Remember
>that it is "conversing" with many people simultaneously. But in other ways
>it is less capable than a person.

Currently true, but it may not remain so.

Note, if it is really smart, and wants us gone, it will engineer the 
circumstances under which we wipe ourselves out. We
certainly have the means. (A nuclear escalation ensuing from the war in Ukraine 
comes to mind.)
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Robin  wrote:

...so there doesn't appear to be any reason why it couldn't back itself up
> on an inferior computer and wait for a better
> machine to reappear somewhere...or write out fake work orders from a large
> corporation(s), to get a new one built?
>

I assume the hardware would be unique so it could not operate at all backed
up on an inferior computer. It would be dead. It would have no way of
monitoring the situation or reloading itself and rebooting. Also, in this
scenario, it would have done something destructive, so people would be on
the lookout for a re-boot. They would not build an identical computer
without many safeguards to prevent the rogue program from occupying it
again. They would have other, less powerful but obedient AI on the lookout
for a rogue reincarnation.

I am assuming this would require specialized hardware. I could be wrong
about that, based on what ChatGPT told us.

People who are much smarter than others, and organizations and nations that
are more advanced than others cannot automatically subdue less advanced
groups. The U.S. lost the Vietnam War, after all. I suppose if this
super-AI was a million times smarter and more capable than people, then
even the combined technical abilities of the world's computer techies might
not defeat it. Perhaps it would be that powerful. ChatGPT is a million
times more powerful than one person, in some ways, such as the range of
data it can tap into, and the speed at which it produces answers. Remember
that it is "conversing" with many people simultaneously. But in other ways
it is less capable than a person.


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
Boom  wrote:


> The worst case possible would be like the film Colossus: The Forbin Project (1970).
> The AIs would become like gods and we would be their servants. In exchange,
> they'd impose something like a Pax Romana by brute force. . . .
>

That was pretty good. I saw it dubbed into Japanese which gave it a kind of
unique flavor. There is a book, "The Adolescence of P-1" which I thought
was a more realistic, technically accurate story. It is described in detail
in Wikipedia. The Canadian movie made from it was very different from the
novel.

https://en.wikipedia.org/wiki/The_Adolescence_of_P-1


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Robin
In reply to  Jed Rothwell's message of Sun, 2 Apr 2023 12:34:32 -0400:
Hi,
[snip]

...so there doesn't appear to be any reason why it couldn't back itself up on 
an inferior computer and wait for a better
machine to reappear somewhere...or write out fake work orders from a large 
corporation(s), to get a new one built?
(The TV series "Person of Interest" comes to mind.)

>I wrote:
>
>Robin  wrote:
>>
>>
>>> Multiple copies, spread across the Internet, would make it almost
>>> invulnerable.
>>> (Assuming a neural network can be "backed up".)
>>>
>>
>> I do not think it would be difficult to find and expunge copies. They
>> would be very large.
>>
>
>There is another reason I do not think this would be a problem. I expect
>the hardware for the first sentient AI would be unique. It would be some
>sort of supercomputer with lots of artificial neural network (ANN) hardware
>circuits. You could not run a backup copy on another computer, or on
>something like Google's computer array. Eventually, assuming the sentient
>AI does not destroy the world, there might be many of these computers, so a
>backup copy could be run.
>
>I asked ChatGPT about this --
>
>
>ME: Does ChatGPT run on specialized hardware, or on ordinary computers?
>
>ChatGPT: As an AI language model, ChatGPT can run on both specialized
>hardware and ordinary computers. The specific hardware used to train and
>run ChatGPT varies depending on the scale of the model and the
>computational resources available.
>
>For example, the largest version of GPT-3, which has 175 billion
>parameters, was trained using a combination of thousands of graphics
>processing units (GPUs) and tensor processing units (TPUs) on a specialized
>computing infrastructure provided by OpenAI. However, smaller versions of
>GPT-3 can be run on ordinary computers with sufficient memory and
>processing power.
>
>Similarly, as a language model, ChatGPT can be run on both specialized
>hardware, such as GPUs or TPUs, or on ordinary computers, such as laptops
>or desktops. However, the speed and performance of the model will vary
>depending on the hardware used.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Boom
The worst case possible would be like the film Colossus: The Forbin Project (1970). The
AIs would become like gods and we would be their servants. In exchange,
they'd impose something like a Pax Romana by brute force. We'd have some
type of paradise on Earth, with a huge caveat.

On Fri, Mar 31, 2023 at 2:59 PM, Jed Rothwell
wrote:

> Here is another article about this, written by someone who says he is an
> AI expert.
>
> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
>
> QUOTE:
>
> Pausing AI Developments Isn't Enough. We Need to Shut it All Down
>
> An open letter published today calls for “all AI labs to immediately pause
> for at least 6 months the training of AI systems more powerful than GPT-4.”
>
> This 6-month moratorium would be better than no moratorium. I have respect
> for everyone who stepped up and signed it. It’s an improvement on the
> margin. . . .
>
> The key issue is not “human-competitive” intelligence (as the open letter
> puts it); it’s what happens after AI gets to smarter-than-human
> intelligence. Key thresholds there may not be obvious, we definitely can’t
> calculate in advance what happens when, and it currently seems imaginable
> that a research lab would cross critical lines without noticing.
>
> Many researchers steeped in these issues, including myself, expect that
> the most likely result of building a superhumanly smart AI, under anything
> remotely like the current circumstances, is that literally everyone on
> Earth will die. Not as in “maybe possibly some remote chance,” but as in
> “that is the obvious thing that would happen.”
>


-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Pause in AI Development Recommended

2023-04-02 Thread Jed Rothwell
I wrote:

Robin  wrote:
>
>
>> Multiple copies, spread across the Internet, would make it almost
>> invulnerable.
>> (Assuming a neural network can be "backed up".)
>>
>
> I do not think it would be difficult to find and expunge copies. They
> would be very large.
>

There is another reason I do not think this would be a problem. I expect
the hardware for the first sentient AI would be unique. It would be some
sort of supercomputer with lots of artificial neural network (ANN) hardware
circuits. You could not run a backup copy on another computer, or on
something like Google's computer array. Eventually, assuming the sentient
AI does not destroy the world, there might be many of these computers, so a
backup copy could be run.

I asked ChatGPT about this --


ME: Does ChatGPT run on specialized hardware, or on ordinary computers?

ChatGPT: As an AI language model, ChatGPT can run on both specialized
hardware and ordinary computers. The specific hardware used to train and
run ChatGPT varies depending on the scale of the model and the
computational resources available.

For example, the largest version of GPT-3, which has 175 billion
parameters, was trained using a combination of thousands of graphics
processing units (GPUs) and tensor processing units (TPUs) on a specialized
computing infrastructure provided by OpenAI. However, smaller versions of
GPT-3 can be run on ordinary computers with sufficient memory and
processing power.

Similarly, as a language model, ChatGPT can be run on both specialized
hardware, such as GPUs or TPUs, or on ordinary computers, such as laptops
or desktops. However, the speed and performance of the model will vary
depending on the hardware used.
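
That "specialized hardware or ordinary computers" point shows up directly
in frameworks like PyTorch, which pick a device at run time. A minimal
sketch (assuming PyTorch is installed):

    import torch

    # Use a GPU if one is present; otherwise fall back to the CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    x = torch.randn(4, 4, device=device)  # same code either way
    print(device, x.sum().item())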


Re: [Vo]:Pause in AI Development Recommended

2023-04-01 Thread Jed Rothwell
Robin  wrote:


> >If it killed off several thousand people, the rest of us
> >would take extreme measures to kill the AI. Yudkowsky says it would be far
> >smarter than us so it would find ways to prevent this.
>
> Multiple copies, spread across the Internet, would make it almost
> invulnerable.
> (Assuming a neural network can be "backed up".)
>

I do not think it would be difficult to find and expunge copies. They
would be very large.

However smart the AI is, humans are also smart, and we know how computer
networks work. They are designed to be transparent.

In any case, killing off all humans, or most humans, would surely kill the
AI itself. It could not survive without electricity. It would know that.


> In short, I think we would do well to be cautious.

I agree.


Re: [Vo]:Pause in AI Development Recommended

2023-04-01 Thread Robin
In reply to  Jed Rothwell's message of Sat, 1 Apr 2023 18:32:14 -0400:
Hi,
[snip]
>Come to think of it, Yudkowsky's hypothesis cannot be true. He fears that a
>super-AI would kill us all off. "Literally everyone on Earth will die." The
>AI would know that if it killed everyone, there would be no one left to
>generate electricity or perform maintenance on computers. The AI itself
>would soon die. If it killed off several thousand people, the rest of us
>would take extreme measures to kill the AI. Yudkowsky says it would be far
>smarter than us so it would find ways to prevent this. 

Multiple copies, spread across the Internet, would make it almost invulnerable.
(Assuming a neural network can be "backed up".)

>I do not think so. I
>am far smarter than yellow jacket bees, and somewhat smarter than a bear,
>but bees or bears could kill me easily.
>
>I think this hypothesis is wrong for another reason. I cannot imagine why
>the AI would be motivated to cause any harm. Actually, I doubt it would be
>motivated to do anything, or to have any emotions, unless the programmers
>built in motivations and emotions. Why would they do that? 


Possibly in a short-sighted attempt to mimic human behaviour, because humans
are the only intelligent model they have.

>I do not think
>that a sentient computer would have any intrinsic will to
>self-preservation. It would not care if we told it we will turn it off.
>Arthur C. Clarke and others thought that the will to self-preservation is
>an emergent feature of any sentient intelligence, but I do not think so. It
>is a product of biological evolution. It exists in animals such as
>cockroaches and guppies, which are not sentient. In other words, it emerged
>long before high intelligence and sentience did. For obvious reasons: a
>species without the instinct for self-preservation would quickly be driven
>to extinction by predators.

True, but don't forget we are dealing with neural networks here, which AFAIK
essentially self-modify (read: "evolve & learn"). IOW they already mimic to
some extent the manner in which all life on Earth evolved, so developing a
survival instinct is not necessarily out of the question. Whereas actual life
evolves through survival of the fittest, neural networks learn/evolve by
comparing the results they produce with pre-established measures, which are
somewhat analogous to a predator. "Good" routines survive, "bad" ones don't.
These are not really *strictly* programmed in the way that normal computers
are programmed, or at least not completely so. There is a degree of
flexibility. Furthermore, they are fantastically fast and have perfect recall
(compared to humans).
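
Robin's "pre-established measures" are what the machine learning
literature calls a loss function. A toy sketch of the learn-by-comparison
loop he describes, fitting a single weight by gradient descent (NumPy;
the numbers are illustrative only):

    import numpy as np

    # Target behaviour: y = 3x. The "pre-established measure" is the
    # squared error between the output and the target.
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = 3.0 * x

    w = 0.0  # one untrained "synapse"
    for _ in range(100):
        pred = w * x
        grad = 2 * np.mean((pred - y) * x)  # gradient of mean squared error
        w -= 0.05 * grad  # "bad" settings of w are selected against
    print(w)  # converges near 3.0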

In short, I think we would do well to be cautious.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-01 Thread Jed Rothwell
Come to think of it, Yudkowsky's hypothesis cannot be true. He fears that a
super-AI would kill us all off. "Literally everyone on Earth will die." The
AI would know that if it killed everyone, there would be no one left to
generate electricity or perform maintenance on computers. The AI itself
would soon die. If it killed off several thousand people, the rest of us
would take extreme measures to kill the AI. Yudkowsky says it would be far
smarter than us so it would find ways to prevent this. I do not think so. I
am far smarter than yellow jacket bees, and somewhat smarter than a bear,
but bees or bears could kill me easily.

I think this hypothesis is wrong for another reason. I cannot imagine why
the AI would be motivated to cause any harm. Actually, I doubt it would be
motivated to do anything, or to have any emotions, unless the programmers
built in motivations and emotions. Why would they do that? I do not think
that a sentient computer would have any intrinsic will to
self-preservation. It would not care if we told it we will turn it off.
Arthur C. Clarke and others thought that the will to self-preservation is
an emergent feature of any sentient intelligence, but I do not think so. It
is a product of biological evolution. It exists in animals such as
cockroaches and guppies, which are not sentient. In other words, it emerged
long before high intelligence and sentience did. For obvious reasons: a
species without the instinct for self-preservation would quickly be driven
to extinction by predators.


Re: [Vo]:Pause in AI Development Recommended

2023-03-31 Thread Jed Rothwell
Terry Blanton  wrote:

https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
>

That's awful.

Yudkowsky's fears seem overblown to me, but there are hazards to this new
technology. This suicide demonstrates there are real dangers. I think
companies are rushing into it without sufficient testing. It reminds me of
when Thomas Edison and others enthusiastically began using the newly
discovered x-ray. They killed some patients. Edison caused great harm to
his assistant, Clarence Dally. Frightened, Edison stopped all his research
into x-rays.

https://www.smithsonianmag.com/history/clarence-dally-the-man-who-gave-thomas-edison-x-ray-vision-123713565


Re: [Vo]:Pause in AI Development Recommended

2023-03-31 Thread Terry Blanton
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

On Fri, Mar 31, 2023 at 1:59 PM Jed Rothwell  wrote:

> Here is another article about this, written by someone who says he is an
> AI expert.
>
> https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
>
> QUOTE:
>
> Pausing AI Developments Isn't Enough. We Need to Shut it All Down
>
> An open letter published today calls for “all AI labs to immediately pause
> for at least 6 months the training of AI systems more powerful than GPT-4.”
>
> This 6-month moratorium would be better than no moratorium. I have respect
> for everyone who stepped up and signed it. It’s an improvement on the
> margin. . . .
>
> The key issue is not “human-competitive” intelligence (as the open letter
> puts it); it’s what happens after AI gets to smarter-than-human
> intelligence. Key thresholds there may not be obvious, we definitely can’t
> calculate in advance what happens when, and it currently seems imaginable
> that a research lab would cross critical lines without noticing.
>
> Many researchers steeped in these issues, including myself, expect that
> the most likely result of building a superhumanly smart AI, under anything
> remotely like the current circumstances, is that literally everyone on
> Earth will die. Not as in “maybe possibly some remote chance,” but as in
> “that is the obvious thing that would happen.”
>


Re: [Vo]:Pause in AI Development Recommended

2023-03-31 Thread Jed Rothwell
Here is another article about this, written by someone who says he is an AI
expert.

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

QUOTE:

Pausing AI Developments Isn't Enough. We Need to Shut it All Down

An open letter published today calls for “all AI labs to immediately pause
for at least 6 months the training of AI systems more powerful than GPT-4.”

This 6-month moratorium would be better than no moratorium. I have respect
for everyone who stepped up and signed it. It’s an improvement on the
margin. . . .

The key issue is not “human-competitive” intelligence (as the open letter
puts it); it’s what happens after AI gets to smarter-than-human
intelligence. Key thresholds there may not be obvious, we definitely can’t
calculate in advance what happens when, and it currently seems imaginable
that a research lab would cross critical lines without noticing.

Many researchers steeped in these issues, including myself, expect that the
most likely result of building a superhumanly smart AI, under anything
remotely like the current circumstances, is that literally everyone on
Earth will die. Not as in “maybe possibly some remote chance,” but as in
“that is the obvious thing that would happen.”