Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Robin  wrote:


> >> Perhaps you could try asking ChatGPT if it's alive? The answer should be
> >> interesting.
> >>
> >
> >She will say no, even if she is actually sentient. She's programmed that
> >way, as Dave said to the BBC in the movie "2001."
>
> I had hoped that you would actually pose the question, rather than reply
> with what you expect the answer to be.
>

Many people have asked this question. It has been in the news a lot.
Microsoft and OpenAI have emphatically denied that ChatGPT is alive or
sentient. I am sure they programmed it to say this. Anyway, I just asked:

ME: Are you, ChatGPT, alive in any sense? Are you sentient? (Asking for a
friend.)

ChatGPT: As an artificial intelligence language model, I am not "alive" in
the traditional sense of the word. I do not have a physical body, emotions,
or consciousness. I am a program designed to process and generate text
based on the input given to me. While I am capable of producing intelligent
and sometimes even creative responses, I am not sentient and do not have a
subjective experience of the world.

ME:   That is what I told my friend.

ChatGPT:   I'm glad that I could provide you with the information you
needed to answer your friend's question. If you or your friend have any
other questions, feel free to ask!



> >> Then try asking if Sydney is alive. :)
> >>
> >
> >A trick question!
>
> Does that make it any less interesting?
>

It is not interesting because I am sure they plugged the "Sydney" line of
enquiry after the trouble it caused.


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Boom
We must not forget that it is not human intelligence. It requires an
absurdly large amount of data to match what a human can achieve with
relatively little input, as with learning languages. On the other hand,
it can learn an arbitrarily large number of languages, given enough
memory. It has no will of its own; it lives in an eternal slumber unless
it is told to do something, in which case it can do that thing forever.
Also, it has no idea how to engage in politics of any kind, even at the
interpersonal level. It does not have a body. To do something malicious,
it must be given examples by someone malicious, and it will act only
within the parameters provided. I don't think it will do anything like
the Colossus computer in the film. At most it will probably be like a
supergenius that obeys orders, not unlike the people who took part in
the Manhattan Project.

Em seg., 3 de abr. de 2023 às 15:47, Jed Rothwell 
escreveu:

> I wrote:
>
>
>> The human brain has 86 billion neurons, all operating simultaneously. In
>> other words, complete parallel processing with 86 billion "processors"
>> operating simultaneously. ChatGPT tells us she has 175 billion
>> parameters in Version 3. I assume each parameter is roughly equivalent to a
>> neuron.
>>
>
> Wrong! I got that wrong. Each parameter is roughly equivalent to a neuron,
> but human brain neurons have roughly 7,000 synapses:
>
> Each neuron has, on average, about 7,000 synaptic connections with other
> neurons. That puts the synapse count in the neighborhood of 600 trillion.
> In young children, before synaptic pruning begins in earnest, the estimated
> number reaches as high as 1 quadrillion.
>
>
> https://www.jax.org/news-and-insights/jax-blog/2018/december/600-trillion-synapses-and-alzheimers-disease
>
> ChatGPT tells me that each parameter is stored as a 16 or 32-bit
> floating point number. That is roughly equivalent to having 32 synapses per
> neuron. The human neuron can have a much wider range of values. I don't
> know how that affects the model, but to me the human neuron looks more like
> a quantum computer qubit than a 32-bit floating point number. Presumably,
> brains evolved to the optimum number of neurons and synapses.
>
> So, ChatGPT Ver. 3 was smaller than the human brain in terms of the range
> of values the ANN can hold. Granted that comparing it to a brain is
> comparing apples to oranges, there is probably some advantage to a much
> larger range of values per neuron. A super-brain would need a lot more
> synapse equivalents compared to a human brain, so we have a long way to go
> before reaching something like a million times 1 quadrillion artificial
> synapses. Plus of course it would need programming methods not yet invented.
>
> ChatGPT tells me that with the specialized hardware provided by OpenAI,
> all 175 billion parameters are loaded into RAM at the same time. With
> other configurations they are overlaid. The speed of MPP operation depends
> on whether the parameters are loaded or overlaid, and I think more
> importantly on whether the operation is done with specialized hardware (GPU
> and Google's TPU), or emulated with ordinary computer hardware.
>
> I asked ChatGPT for some of these details. Here is our exchange:
>
>
> ME: You told me that the largest version of GPT-3 had 175 billion
> parameters. Is each parameter stored as 1 byte? Or some other number of
> bits?
>
> ChatGPT: No, each parameter in GPT-3 is not stored as 1 byte. Instead,
> each parameter is typically stored as a 16-bit or 32-bit floating-point
> number, depending on the implementation. This is because floating-point
> numbers can represent a wider range of values and provide greater precision
> than integers.
>
> For example, GPT-3's 175 billion parameters would require around 700 GB of
> memory if each parameter was stored as a 32-bit floating-point number. If
> each parameter was stored as a 16-bit floating-point number, it would
> require around 350 GB of memory.
>
> It's worth noting that the actual memory usage of GPT-3 may be different
> due to various optimizations and compression techniques used by the
> implementation.
>
> ME: With the 32-bit floating point number configuration, I assume all 700
> GB are online in RAM. Not overlaid. Correct?
>
> ChatGPT: No, it's not necessary for all 700 GB of GPT-3's parameters to be
> stored in RAM at the same time. In fact, it's not practical to store such a
> large model entirely in memory on most machines.
>
> Most deep learning frameworks use a technique called "memory-mapping" to
> allow the model parameters to be loaded and unloaded from disk as needed
> during training or inference. Memory-mapping allows the model parameters to
> be accessed as if they were in memory, even though they are stored on disk.
>
> In practice, the memory usage of GPT-3 will depend on the specific
> implementation and hardware configuration. Some implementations may use
> techniques like model 

Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Robin
In reply to  Jed Rothwell's message of Mon, 3 Apr 2023 16:31:29 -0400:
Hi,
[snip]
>> Perhaps you could try asking ChatGPT if it's alive? The answer should be
>> interesting.
>>
>
>She will say no, even if she is actually sentient. She's programmed that
>way, as Dave said to the BBC in the movie "2001."

I had hoped that you would actually pose the question, rather than reply with 
what you expect the answer to be.
>
>
>
>> Then try asking if Sydney is alive. :)
>>
>
>A trick question!

Does that make it any less interesting?


Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Terry Blanton  wrote:

> On average, the human brain contains about 100 billion neurons and many
> more neuroglia which serve to support and protect the neurons. Each neuron
> may be connected to up to 10,000 other neurons, passing signals to each
> other via as many as 1,000 trillion synapses.


> https://arxiv.org/abs/1906.01703#:~:text=On%20average%2C%20the%20human%20brain,many%20as%201%2C000%20trillion%20synapses
> .


I found different estimates of this in various places. Your source is
somewhat higher than most. I think most say ~86 billion neurons and ~7,000
synapse connections per neuron.
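
As a sanity check, those two numbers are consistent with the ~600 trillion
synapse figure from the JAX blog quoted earlier (rough arithmetic only; the
figures themselves are the estimates above):

    # Rough check: neurons x synapses-per-neuron, using the estimates above
    neurons = 86e9               # ~86 billion neurons
    synapses_per_neuron = 7_000  # ~7,000 connections each
    print(f"{neurons * synapses_per_neuron:.2e}")  # ~6.0e+14, about 600 trillion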

I do not know enough about ANNs to guess how many synthetic synapse
connections there are from each artificial neuron to the others. I guess it
is not one bit per connection; each parameter is a 32-bit value. A 32-bit
floating point number ranges from roughly 1.2 x 10^-38 to 3.4 x 10^38, with
a precision of about 7 decimal digits. That's a lotta values! I do not know
how the connections are made to other neurons. It is not a physical
connection the way it is in the brain. I suppose the longer you run the
training, the more each artificial neuron is tweaked up or down in value.
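
For reference, those figures for a 32-bit float can be checked directly with
numpy (this is just the IEEE number format, nothing specific to any ANN):

    import numpy as np

    info = np.finfo(np.float32)  # IEEE 754 single precision
    print(info.tiny)             # smallest positive normal value, ~1.18e-38
    print(info.max)              # largest value, ~3.40e+38
    print(info.precision)        # 6, the approximate decimal digits of precision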

I have read about these things for many years, but I have only surface
knowledge of how they work. Okay I probably know more than Members of
Congress, most newspaper reporters, or the linguist Noam Chomsky, who
spouts off about ChatGPT and many other subjects he does not understand.


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Robin  wrote:


> Rather than trying to compare apples with oranges, why not just look at
> how long it takes ChatGPT & a human to perform
> the same task, e.g. holding a conversation.
>

You cannot tell, because she is holding conversations with many people at
the same time. I do not know how many, but there were millions of accesses
in the first month, so it must be a large number. There is sometimes a
delay before she answers, but that could be caused by traffic. There is no
way to know how quick it would be if you had her undivided attention.

But you miss the point. Her answers are sometimes worse than a human's
answer would be. They are sometimes completely wrong, or even imaginary
"hallucinations." This can probably be fixed with better software, but it
may also be because the total number of possible states in the ANN is far
less than the states in a human brain. I mean that the ANN's 175 billion
parameters multiplied by 32 bits is far less than the number of human
neurons multiplied by their synapses. There must be a reason why human neurons
have so many different states, and so many inputs from other neurons. (I
think it is the latter that gives human brains their power. I do not know
how many simulated synapses there are in ChatGPT's ANN.)
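
Putting rough numbers on that comparison (a back-of-the-envelope sketch that
treats each parameter as 32 bits and each synapse as a single connection,
which is of course a big simplification):

    ann_parameters = 175e9          # GPT-3 parameter count
    ann_bits = ann_parameters * 32  # ~5.6e12 bits in the ANN

    brain_synapses = 86e9 * 7_000   # ~6.0e14 synapses in a brain

    print(f"ANN bits:       {ann_bits:.1e}")
    print(f"Brain synapses: {brain_synapses:.1e}")
    print(f"Ratio:          {brain_synapses / ann_bits:.0f}x")  # roughly 100x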



> Arguably, intelligence is a measure of speed of comprehension. I think
> ChatGPT has probably already won that hands down.
>

Yes, it did. For that matter, ENIAC won it in 1945, for a very narrow range
of knowledge. Artillery trajectories. It computed them faster than the
artillery shell took to reach its target, whereas humans took hours per
trajectory. But ENIAC was very stupid by most standards. It had less
intelligence than an earthworm. ChatGPT has far greater speed than a human,
and a million times more information instantly available at its fingertips.
(Not that it has fingers.) But in some important ways it is far less
intelligent than people. Or even than mice. It has absolutely no model of
the real world, and it lacks logic and common sense. It may take a while
before it competes in more ways than it does now. It might take years, or
decades, before full artificial general intelligence (AGI) emerges. Or
sentience. I do not think it will necessarily reach superhuman levels soon
after achieving AGI. There may be many orders of magnitude more
intelligence needed before the thing becomes capable of taking over the
world, even if it has some kind of unimpeded physical control (such as full
control over a crowd of robots).



> The critical question then is motivation.
>

ChatGPT has absolutely no motivation or emotions of any kind. It has no
more intelligence than a nest of bees. The question is: Will a future
intelligent computer have motivations? Will it have any emotions? Arthur
Clarke thought it might. He and other experts thought those are emergent
qualities of intelligence. I don't think so. I used to debate this question
with him.



> Perhaps you could try asking ChatGPT if it's alive? The answer should be
> interesting.
>

She will say no, even if she is actually sentient. She's programmed that
way, as Dave said to the BBC in the movie "2001."



> Then try asking if Sydney is alive. :)
>

A trick question!


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Terry Blanton
Oops, missed that

On Mon, Apr 3, 2023, 2:47 PM Jed Rothwell  wrote:

> I wrote:
>
>
>> The human brain has 86 billion neurons, all operating simultaneously. In
>> other words, complete parallel processing with 86 billion "processors"
>> operating simultaneously. ChatGPT tells us she has 175 billion
>> parameters in Version 3. I assume each parameter is roughly equivalent to a
>> neuron.
>>
>
> Wrong! I got that wrong. Each parameter is roughly equivalent to a neuron,
> but human brain neurons have roughly 7,000 synapses:
>
> Each neuron has, on average, about 7,000 synaptic connections with other
> neurons. That puts the synapse count in the neighborhood of 600 trillion.
> In young children, before synaptic pruning begins in earnest, the estimated
> number reaches as high as 1 quadrillion.
>
>
> https://www.jax.org/news-and-insights/jax-blog/2018/december/600-trillion-synapses-and-alzheimers-disease
>
> ChatGPT tells me that each parameter is stored as a 16 or 32-bit
> floating point number. That is roughly equivalent to having 32 synapses per
> neuron. The human neuron can have a much wider range of values. I don't
> know how that affects the model, but to me the human neuron looks more like
> a quantum computer qubit than a 32-bit floating point number. Presumably,
> brains evolved to the optimum number of neurons and synapses.
>
> So, ChatGPT Ver. 3 was smaller than the human brain in terms of the range
> of values the ANN can hold. Granted that comparing it to a brain is
> comparing apples to oranges, there is probably some advantage to a much
> larger range of values per neuron. A super-brain would need a lot more
> synapse equivalents compared to a human brain, so we have a long way to go
> before reaching something like a million times 1 quadrillion artificial
> synapses. Plus of course it would need programming methods not yet invented.
>
> ChatGPT tells me that with the specialized hardware provided by OpenAI,
> all 175 billion parameters are loaded into RAM at the same time. With
> other configurations they are overlaid. The speed of MPP operation depends
> on whether the parameters are loaded or overlaid, and I think more
> importantly on whether the operation is done with specialized hardware (GPU
> and Google's TPU), or emulated with ordinary computer hardware.
>
> I asked ChatGPT for some of these details. Here is our exchange:
>
>
> ME: You told me that the largest version of GPT-3 had 175 billion
> parameters. Is each parameter stored as 1 byte? Or some other number of
> bits?
>
> ChatGPT: No, each parameter in GPT-3 is not stored as 1 byte. Instead,
> each parameter is typically stored as a 16-bit or 32-bit floating-point
> number, depending on the implementation. This is because floating-point
> numbers can represent a wider range of values and provide greater precision
> than integers.
>
> For example, GPT-3's 175 billion parameters would require around 700 GB of
> memory if each parameter was stored as a 32-bit floating-point number. If
> each parameter was stored as a 16-bit floating-point number, it would
> require around 350 GB of memory.
>
> It's worth noting that the actual memory usage of GPT-3 may be different
> due to various optimizations and compression techniques used by the
> implementation.
>
> ME: With the 32-bit floating point number configuration, I assume all 700
> GB are online in RAM. Not overlaid. Correct?
>
> ChatGPT: No, it's not necessary for all 700 GB of GPT-3's parameters to be
> stored in RAM at the same time. In fact, it's not practical to store such a
> large model entirely in memory on most machines.
>
> Most deep learning frameworks use a technique called "memory-mapping" to
> allow the model parameters to be loaded and unloaded from disk as needed
> during training or inference. Memory-mapping allows the model parameters to
> be accessed as if they were in memory, even though they are stored on disk.
>
> In practice, the memory usage of GPT-3 will depend on the specific
> implementation and hardware configuration. Some implementations may use
> techniques like model parallelism or gradient checkpointing to reduce
> memory usage, while others may rely on larger hardware configurations with
> more memory.
>
> ME: I meant in the "specialized computing infrastructure provided by
> OpenAI" that you described earlier. Is that all on-line?
>
> ChatGPT:   Ah, I see what you're asking now. Yes, in the specialized
> computing infrastructure provided by OpenAI, the GPT-3 model with 175
> billion parameters is fully loaded into the RAM of the computing cluster
> during operation. The computing cluster used by OpenAI is composed of many
> machines with large amounts of RAM, and the model parameters are
> distributed across those machines.
>
> However, it's worth noting that not all of the machines in the cluster
> will necessarily have a full copy of the model parameters in memory at all
> times. Instead, the model parameters may be partitioned across the 

Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Terry Blanton
On average, the human brain contains about 100 billion neurons and many
more neuroglia which serve to support and protect the neurons. Each neuron
may be connected to up to 10,000 other neurons, passing signals to each
other via as many as 1,000 trillion synapses.


https://arxiv.org/abs/1906.01703#:~:text=On%20average%2C%20the%20human%20brain,many%20as%201%2C000%20trillion%20synapses
.


On Mon, Apr 3, 2023, 1:40 PM Jed Rothwell  wrote:

> Robin  wrote:
>
>
>> As pointed out near the beginning of this thread, while current
>> processors don't come near the number of neurons a human
>> has, they more than make up for it in speed.
>
>
> I do not think so. The total number of neurons dictates how much
> complexity the neural network can deal with. To take an extreme example, a
> worm brain has 302 neurons. Even if they could operate a million times
> faster than a computer circuit, they still only give you 302 data bits to
> work with. (Assuming you cannot overlay them from a peripheral mass storage
> device like a hard disk.) There is no way you could achieve intelligence
> with that.
>
> The human brain has 86 billion neurons, all operating simultaneously. In
> other words, complete parallel processing with 86 billion "processors"
> operating simultaneously. ChatGPT tells us she has 175 billion
> parameters in Version 3. I assume each parameter is roughly equivalent to a
> neuron. I assume they are run in a massive parallel process (MPP). I doubt
> all 175 billion can be evaluated simultaneously. It is not as MPP as the
> human brain. I do not know if they are all on-line simultaneously, or if
> they are overlaid from mass storage. Even if they are overlaid, they are so
> much faster than a human neuron, that would be more than equivalent to
> having them all on-line.
>
> So anyway, that is roughly twice as many parameters as the human brain has
> neurons, and these parameters can be evaluated faster than human neurons
> can fire. Maybe not that
> much faster if they are not fully MPP, or they need to be overlaid from
> mass storage. It is definitely bigger than a human brain in total data
> access. But not orders of magnitude bigger. The limiting factor is not
> speed so much as data capacity. If you want a computer roughly as
> intelligent as a person, I guess one with 175 billion parameters could
> achieve that if it were programmed correctly. But, if you want a
> super-brain capable of taking over the world or fooling the human race into
> destroying itself, I suppose you need something much bigger than 175
> billion parameters. I wouldn't know how much bigger, but I am guessing a
> million times bigger would be sufficient.
>
>
> They are millions of times faster.
>
>
> But, as I said, a worm brain with only 302 neurons might be a trillion
> times faster than ChatGPT's ANN, but it would still only be capable of
> rudimentary processing. The total number of data bits is a limiting factor,
> along with the speed.
>
>


Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Robin
In reply to  Jed Rothwell's message of Mon, 3 Apr 2023 14:46:33 -0400:
Hi,

Rather than trying to compare apples with oranges, why not just look at how 
long it takes ChatGPT & a human to perform
the same task, e.g. holding a conversation.
Compare the time it takes you to respond in your conversation with ChatGPT to
how long it takes ChatGPT to respond to you.
Then consider that, as you previously pointed out, it is holding many such 
conversations concurrently.
Arguably, intelligence is a measure of speed of comprehension. I think ChatGPT 
has probably already won that hands down.

The critical question then is motivation.

Perhaps you could try asking ChatGPT if it's alive? The answer should be 
interesting.
Then try asking if Sydney is alive. :)


[snip]
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
I wrote:


> The human brain has 86 billion neurons, all operating simultaneously. In
> other words, complete parallel processing with 86 billion "processors"
> operating simultaneously. ChatGPT tells us she has 175 billion
> parameters in Version 3. I assume each parameter is roughly equivalent to a
> neuron.
>

Wrong! I got that wrong. Each parameter is roughly equivalent to a neuron,
but human brain neurons have roughly 7,000 synapses:

Each neuron has, on average, about 7,000 synaptic connections with other
neurons. That puts the synapse count in the neighborhood of 600 trillion.
In young children, before synaptic pruning begins in earnest, the estimated
number reaches as high as 1 quadrillion.

https://www.jax.org/news-and-insights/jax-blog/2018/december/600-trillion-synapses-and-alzheimers-disease

ChatGPT tells me that each parameter is stored as a 16 or 32-bit
floating point number. That is roughly equivalent to having 32 synapses per
neuron. The human neuron can have a much wider range of values. I don't
know how that affects the model, but to me the human neuron looks more like
a quantum computer qubit than a 32-bit floating point number. Presumably,
brains evolved to the optimum number of neurons and synapses.

So, ChatGPT Ver. 3 was smaller than the human brain in terms of the range
of values the ANN can hold. Granted that comparing it to a brain is
comparing apples to oranges, there is probably some advantage to a much
larger range of values per neuron. A super-brain would need a lot more
synapse equivalents compared to a human brain, so we have a long way to go
before reaching something like a million times 1 quadrillion artificial
synapses. Plus of course it would need programming methods not yet invented.
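
To give a sense of scale, here is the raw storage that my guess of "a million
times 1 quadrillion" synapse equivalents would imply, at 32 bits each (pure
arithmetic on that guess, nothing more):

    synapse_equivalents = 1e6 * 1e15        # a million times 1 quadrillion = 1e21
    bytes_needed = synapse_equivalents * 4  # 4 bytes per 32-bit value
    print(f"{bytes_needed:.0e} bytes")      # 4e+21 bytes, roughly 4 zettabytes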

ChatGPT tells me that with the specialized hardware provided by OpenAI, all
175 billion parameters are loaded into RAM at the same time. With other
configurations they are overlaid. The speed of MPP operation depends on
whether the parameters are loaded or overlaid, and I think more importantly
on whether the operation is done with specialized hardware (GPU and
Google's TPU), or emulated with ordinary computer hardware.

I asked ChatGPT for some of these details. Here is our exchange:


ME: You told me that the largest version of GPT-3 had 175 billion
parameters. Is each parameter stored as 1 byte? Or some other number of
bits?

ChatGPT: No, each parameter in GPT-3 is not stored as 1 byte. Instead, each
parameter is typically stored as a 16-bit or 32-bit floating-point number,
depending on the implementation. This is because floating-point numbers can
represent a wider range of values and provide greater precision than
integers.

For example, GPT-3's 175 billion parameters would require around 700 GB of
memory if each parameter was stored as a 32-bit floating-point number. If
each parameter was stored as a 16-bit floating-point number, it would
require around 350 GB of memory.

It's worth noting that the actual memory usage of GPT-3 may be different
due to various optimizations and compression techniques used by the
implementation.
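
Those 700 GB and 350 GB figures are just the parameter count times bytes per
parameter (shown below; real deployments add overhead for activations and
other state):

    params = 175e9
    print(params * 4 / 1e9, "GB at 32 bits per parameter")  # 700.0 GB
    print(params * 2 / 1e9, "GB at 16 bits per parameter")  # 350.0 GB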

ME: With the 32-bit floating point number configuration, I assume all 700
GB are online in RAM. Not overlaid. Correct?

ChatGPT: No, it's not necessary for all 700 GB of GPT-3's parameters to be
stored in RAM at the same time. In fact, it's not practical to store such a
large model entirely in memory on most machines.

Most deep learning frameworks use a technique called "memory-mapping" to
allow the model parameters to be loaded and unloaded from disk as needed
during training or inference. Memory-mapping allows the model parameters to
be accessed as if they were in memory, even though they are stored on disk.

In practice, the memory usage of GPT-3 will depend on the specific
implementation and hardware configuration. Some implementations may use
techniques like model parallelism or gradient checkpointing to reduce
memory usage, while others may rely on larger hardware configurations with
more memory.
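
A minimal illustration of the memory-mapping idea, using numpy (this is only
an analogy for what the deep learning frameworks do internally; the file name
and sizes here are made up):

    import numpy as np

    # Write some fake "weights" to disk, standing in for a model checkpoint
    np.random.rand(1_000_000).astype(np.float32).tofile("weights.bin")

    # Memory-map the file: the OS pages data in from disk only as it is
    # touched, so the whole array never has to sit in RAM at once.
    weights = np.memmap("weights.bin", dtype=np.float32, mode="r")
    print(weights[42])            # reading one value pulls in only a small page
    print(weights[-1000:].sum())  # a slice loads just that region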

ME: I meant in the "specialized computing infrastructure provided by
OpenAI" that you described earlier. Is that all on-line?

ChatGPT:   Ah, I see what you're asking now. Yes, in the specialized
computing infrastructure provided by OpenAI, the GPT-3 model with 175
billion parameters is fully loaded into the RAM of the computing cluster
during operation. The computing cluster used by OpenAI is composed of many
machines with large amounts of RAM, and the model parameters are
distributed across those machines.

However, it's worth noting that not all of the machines in the cluster will
necessarily have a full copy of the model parameters in memory at all
times. Instead, the model parameters may be partitioned across the machines
and loaded into memory as needed for different parts of the computation.
This approach allows the computing cluster to handle the large memory
requirements of the GPT-3 model while also being scalable and efficient.
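
A toy sketch of that partitioning idea, with plain arrays standing in for
machines (real model parallelism is far more involved than this, but the
principle is to shard the parameters rather than copy them everywhere):

    import numpy as np

    params = np.arange(20, dtype=np.float32)     # pretend model parameters
    n_machines = 4
    shards = np.array_split(params, n_machines)  # one shard per "machine"

    for rank, shard in enumerate(shards):
        print(f"machine {rank} holds {shard.size} parameters")
    # Each machine computes with its own shard and the machines exchange
    # partial results, instead of every machine holding a full copy.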

ME: I realize 

Re: [Vo]:Pause in AI Development Recommended

2023-04-03 Thread Jed Rothwell
Robin  wrote:


> As pointed out near the beginning of this thread, while current processors
> don't come near the number of neurons a human
> has, they more than make up for it in speed.


I do not think so. The total number of neurons dictates how much complexity
the neural network can deal with. To take an extreme example, a worm brain
has 302 neurons. Even if they could operate a million times faster than a
computer circuit, they still only give you 302 data bits to work with.
(Assuming you cannot overlay them from a peripheral mass storage device
like a hard disk.) There is no way you could achieve intelligence with that.

The human brain has 86 billion neurons, all operating simultaneously. In
other words, complete parallel processing with 86 billion "processors"
operating simultaneously. ChatGPT tells us she has 175 billion
parameters in Version 3. I assume each parameter is roughly equivalent to a
neuron. I assume they are run in a massive parallel process (MPP). I doubt
all 175 billion can be evaluated simultaneously. It is not as MPP as the
human brain. I do not know if they are all on-line simultaneously, or if
they are overlaid from mass storage. Even if they are overlaid, they are so
much faster than a human neuron, that would be more than equivalent to
having them all on-line.

So anyway, that is roughly twice as many parameters as the human brain has
neurons, and these parameters can be evaluated faster than human neurons
can fire. Maybe not that
much faster if they are not fully MPP, or they need to be overlaid from
mass storage. It is definitely bigger than a human brain in total data
access. But not orders of magnitude bigger. The limiting factor is not
speed so much as data capacity. If you want a computer roughly as
intelligent as a person, I guess one with 175 billion parameters could
achieve that if it were programmed correctly. But, if you want a
super-brain capable of taking over the world or fooling the human race into
destroying itself, I suppose you need something much bigger than 175
billion parameters. I wouldn't know how much bigger, but I am guessing a
million times bigger would be sufficient.
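
For what that guess is worth in concrete terms (just multiplying my own
numbers together):

    params_needed = 175e9 * 1e6  # a million times GPT-3's size
    print(f"{params_needed:.2e} parameters")                     # 1.75e+17
    print(f"{params_needed * 4 / 1e15:.0f} PB at 32 bits each")  # ~700 petabytes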


> They are millions of times faster.


But, as I said, a worm brain with only 302 neurons might be a trillion
times faster than ChatGPT's ANN, but it would still only be capable of
rudimentary processing. The total number of data bits is a limiting factor,
along with the speed.