Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Jürg Wyttenbach

GPT is a tool that has been used in computational linguistics for more than 10 years.


It was just a matter of time until some brainless nerds would use it for
AI...


GPT just analyzes and classifies the texts you give it. So it's not AI;
it's the condensed shit some people want to throw at you.



But honestly, what the US government has been doing since 2020, when Biden
founded "Project Veritas" - Orwell 1984 = Ministry of Truth - is nothing
other than what ChatGPT does with you.


Most newspapers today no longer contain information. The focus is on
propaganda = spreading the view of the dominant class.


I regularly compare about 10 of the world's top newspapers across 4
languages/continents, and all I see is identical "(dis-)information".



The top sources of fake news are the NYT, BBC, FAZ, NZZ, Figaro, ... Only a
few tiny local papers provide real information.


So please focus on how to get independent news, and not on how to get
condensed shit from an AI text mixer...


J.W.



On 10.04.2023 22:50, Boom wrote:
[snip]


--
Jürg Wyttenbach
Bifangstr. 22
8910 Affoltern am Albis

+41 44 760 14 18
+41 79 246 36 06


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Boom
Indeed, it can. It comes up with fake information. But now it is heavily
moderated to not allow that.

On Mon, Apr 10, 2023 at 4:33 PM H L V wrote:
[snip]

-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Robin
In reply to  Jed Rothwell's message of Mon, 10 Apr 2023 09:33:48 -0400:
Hi,
[snip]
>I hope that an advanced AGI *will* have a concept of the real world, and it
>will know the difference. I do not think that the word "care" applies here,
>but if we tell it not to use a machine gun in the real world, I expect it
>will follow orders. Because that's what computers do. Of course, if someone
>programs it to use a machine gun in the real world, it would do that too!
[snip]
I think you can count on the R&D departments of the armed forces of nations
around the world to be working on this as we speak.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Robin
In reply to  Alain Sepeda's message of Mon, 10 Apr 2023 17:48:38 +0200:
Hi,
[snip]
>The real difference is that today, AI are not the fruit of a Darwinian
>evolution, with struggle to survive, dominate, eat or be eaten, so it's
>less frightening than people or animals.

The way a neural network learns is conceptually analogous to Darwinian
evolution: only the programs/routines most suited to purpose survive,
but it happens much, much faster.
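
To make the analogy concrete, here is a toy (1+1) evolutionary loop in which a
variant survives only if it scores better on the task. This is selection
rather than backpropagation, so it illustrates the analogy, not how real
neural networks are actually trained:

import random

def fitness(w):
    # Toy task: get the weights close to a fixed target.
    target = [0.5, -1.0, 2.0]
    return -sum((a - b) ** 2 for a, b in zip(w, target))

parent = [0.0, 0.0, 0.0]
for _ in range(10000):
    child = [w + random.gauss(0, 0.1) for w in parent]
    if fitness(child) > fitness(parent):  # only the fitter variant survives
        parent = child
print([round(w, 2) for w in parent])      # converges toward the target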

>The only serious fear I've heard is that we become so satisfied by those
>AIs, that we delegate our genetic evolution to them, and we lose our
>individualistic Darwinian struggle to survive, innovate, seduce a partner,
>enjoying a bee-Hive mentality, at the service of the AI system, like
>bee-workers and bee-queen... The promoter of that theory estimate it will
>take a millennium.
>Anyway there is nothing to stop, as if a majority decide to stop developing
>AI, a minority will develop them at their service, and China is ready, with
>great experts and great belief in the future. Only the West is afraid.
>(there is a paper on that circulating, where fear of AI is linked to
>GDP/head)

Anything that increases productivity can lead to an increase in GDP/head.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread H L V
Can it dream?
Harry

On Mon, Apr 10, 2023 at 11:49 AM Alain Sepeda 
wrote:

[snip]


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Alain Sepeda
There is work on letting LLMs discuss with each other in order to have
reflection... I've seen a reference to an architecture where two GPT
instances talk to each other with different roles, one as a searcher, the
other as a critic... Look at this article.
LLMs may just be the building block of something bigger...
https://www.nextbigfuture.com/2023/04/gpt4-with-reflexion-has-a-superior-coding-score.html
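
To make that concrete, here is a minimal sketch of such a searcher/critic
loop, assuming a hypothetical complete() stand-in for whatever LLM API is
used (the role prompts and the fixed round count are illustrative, not taken
from the Reflexion paper):

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this to an actual LLM API")

SEARCHER = ("Propose a solution.\n"
            "Task: {task}\n"
            "Critique of the last attempt: {critique}")
CRITIC = "You are a strict reviewer. List the flaws in this solution:\n{solution}"

def searcher_critic_loop(task: str, rounds: int = 3) -> str:
    critique, solution = "none yet", ""
    for _ in range(rounds):
        solution = complete(SEARCHER.format(task=task, critique=critique))
        critique = complete(CRITIC.format(solution=solution))
    return solution

Each round, the critic's objections become part of the searcher's next
prompt, which is the whole trick: the "reflection" lives in the prompt text,
not inside the model.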

Add to that, they can use external applications (plugins) and talk to
generative AI like Dall-E...

Many people say it is not intelligent, but are we?
I see AI making mistakes very similar to the ones I make when I'm tired, or
a beginner...

The real difference is that today's AIs are not the fruit of a Darwinian
evolution, with a struggle to survive, dominate, eat or be eaten, so they are
less frightening than people or animals.
The only serious fear I've heard is that we become so satisfied by those
AIs that we delegate our genetic evolution to them, and we lose our
individualistic Darwinian struggle to survive, innovate, and seduce a partner,
enjoying a bee-hive mentality at the service of the AI system, like bee
workers and a bee queen... The promoter of that theory estimates it will
take a millennium.
Anyway, there is nothing to stop it: if a majority decides to stop developing
AI, a minority will develop it at their service, and China is ready, with
great experts and great belief in the future. Only the West is afraid. (There
is a paper circulating on that, where fear of AI is linked to GDP per head.)


On Mon, Apr 10, 2023 at 4:47 PM Jed Rothwell wrote:
[snip]


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Jed Rothwell
I wrote:


> Food is contaminated despite our best efforts to prevent that.
> Contamination is a complex process that we do not fully understand or
> control, although of course we know a lot about it. It seems to me that as
> AI becomes more capable it may become easier to understand, and more
> transparent.
>

My unfinished thought here is that knowing more about contamination and
seeing more complexity in it has improved our ability to control it.


Sean True  wrote:

> I think it’s fair to say no AGI until those are designed in, particularly
> the ability to actually learn from experience.

Definitely! ChatGPT agrees with you!


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Sean True
LLMs do not have intrinsic short-term or modifiable long-term memory. Both
require supplemental systems: reprompting of recent history, or expensive
offline fine-tuning, or even more expensive retraining. I think it’s fair to
say no AGI until those are designed in, particularly the ability to actually
learn from experience.

Sent from my iPhone

On Apr 10, 2023, at 9:34 AM, Jed Rothwell wrote:
[snip]


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Jed Rothwell
Robin  wrote:

> As I said earlier, it may not make any difference whether an AI
> feels/thinks as we do, or just mimics the process.


That is certainly true.


> As you pointed out, the AI has no concept of the real world, so it's not
> going to care whether it's shooting people up
> in a video game, or using a robot with a real machine gun in the real
> world.
>

I hope that an advanced AGI *will* have a concept of the real world, and it
will know the difference. I do not think that the word "care" applies here,
but if we tell it not to use a machine gun in the real world, I expect it
will follow orders. Because that's what computers do. Of course, if someone
programs it to use a machine gun in the real world, it would do that too!

I hope we can devise something like Asimov's laws at the core of the
operating system to prevent people from programming things like that. I do
not know if that is possible.


It may be "just a tool", but the more capable we make it the greater the
> chances that something unforeseen will go
> wrong, especially if it has the ability to connect with other AIs over the
> Internet, because this adds exponentially to
> the complexity, and hence our ability to predict what will happen
> decreases proportionately.
>

I am not sure I agree. There are many analog processes that we do not fully
understand. They sometimes go catastrophically wrong. For example, water
gets into coal and causes explosions in coal fired generators. Food is
contaminated despite our best efforts to prevent that. Contamination is a
complex process that we do not fully understand or control, although of
course we know a lot about it. It seems to me that as AI becomes more
capable it may become easier to understand, and more transparent. If it is
engineered right, the AI will be able to explain its actions to us in ways
that transcend complexity and give us the gist of the situation. For
example, I use the Delphi 10.4 compiler for Pascal and C++. It has some AI
built into it, for the Refactoring and some other features. It is
enormously complex compared to compilers from decades ago. It has hundreds
of canned procedures and functions. Despite this complexity, it is easier
for me to see what it is doing than it was in the past, because it has
extensive debugging facilities. You can stop execution and look at
variables and internal states in ways that would have been impossible in
the past. You can install add-ons that monitor for things like memory
leaks. With refactoring and other features you can ask it to look for code
that may cause problems. I don't mean code that does not compile, or
warning signs such as variables that are never used. It has been able to do
that for a long time. I mean more subtle errors.

I think it also gives helpful hints for upgrading legacy code to modern
standards, but I have not explored that feature. The point is, increased
complexity gives me more control and more understanding of what it is
doing, not less.


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Blaze Spinnaker
GPT4 can have unlimited memory, right?  Just give it access to a query
engine.  Max token context length (input PLUS output) is 32k in the latest
model.  GPT3.5 is 4096.

https://openai.com/pricing
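
A toy sketch of that query-engine idea: past exchanges live in an external
store, and only the best matches are pulled back into the limited context
window. The keyword-overlap scoring and the complete() stub are illustrative
assumptions, not a real search index or API:

def complete(prompt: str) -> str:
    raise NotImplementedError("stand-in for an actual LLM call")

class QueryMemory:
    def __init__(self):
        self.notes = []

    def add(self, text):
        self.notes.append(text)

    def top(self, query, k=3):
        # crude relevance: count words shared with the query
        q = set(query.lower().split())
        return sorted(self.notes,
                      key=lambda n: -len(q & set(n.lower().split())))[:k]

def answer(memory, question):
    context = "\n".join(memory.top(question))
    reply = complete(f"Relevant notes:\n{context}\n\nQuestion: {question}")
    memory.add(f"Q: {question} A: {reply}")  # the store can grow without limit
    return reply

The store is unbounded; only the retrieved slice has to fit the 32k (or 4096)
token window.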

Importantly, GPT4 has built 'world models' as a side effect of its
training. And when it predicts the next token, that token is not merely
compatible with the current context, but also with the entirety of its
network and the internal world models that it has built. I think the word
tokens themselves are about 12K-dimensional vectors and the networks are
billions and billions of parameters. You can do a lot with that, and they have.

https://thegradient.pub/othello/

I think there's a lot of 'human consciousness is the center of the universe'
type vanity going around, which is why a lot of people have a hard time
accepting that LLMs are enough.

On Sat, Apr 8, 2023 at 7:38 PM Boom  wrote:

[snip]
>


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Boom
The most recent versions of Stockfish, the best chess engine, combine
"brute force" (the usual branching algorithm) with NNs. ChatGPT 4.0 (which
is actually quite similar to 3.5) uses plugins to be smarter. For example,
it can invoke Wolfram Alpha if it needs to make calculations. This modular
approach is quickly becoming more common.
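
A minimal sketch of that modular pattern: the host program watches for a tool
request from the model, runs the tool, and feeds the result back in. The
CALC: convention and the complete() stub are illustrative assumptions, not
the actual ChatGPT plugin protocol:

def complete(prompt: str) -> str:
    raise NotImplementedError("stand-in for an actual LLM call")

def run_with_calculator(question):
    draft = complete(question +
                     "\nIf you need arithmetic, reply only 'CALC: <expression>'.")
    if draft.startswith("CALC:"):
        expr = draft[len("CALC:"):].strip()
        # toy calculator; never eval untrusted input in real code
        result = eval(expr, {"__builtins__": {}})
        draft = complete(f"{question}\n{expr} = {result}. Now answer the question.")
    return draft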


On Sat, Apr 8, 2023 at 9:05 PM Jed Rothwell wrote:
[snip]

-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Robin
In reply to  Jed Rothwell's message of Sat, 8 Apr 2023 20:04:46 -0400:
Hi,

As I said earlier, it may not make any difference whether an AI feels/thinks
as we do, or just mimics the process. The outcome could be just as disastrous
if it mimics committing murder as it would be if it had murder "in its heart".

A dumb machine gun can kill just fine, and it has no brains at all. 
As you pointed out, the AI has no concept of the real world, so it's not going
to care whether it's shooting people up in a video game, or using a robot
with a real machine gun in the real world.
It may be "just a tool", but the more capable we make it the greater the 
chances that something unforeseen will go
wrong, especially if it has the ability to connect with other AIs over the 
Internet, because this adds exponentially to
the complexity, and hence our ability to predict what will happen decreases 
proportionately.

[snip]
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Jed Rothwell
I wrote:


> The methods used to program ChatGPT are light years away from anything
> like human cognition. As different as what bees do with their brains
> compared to what we do.
>

To take another example, the human brain can add 2 + 2 = 4. A computer ALU
can also do this, in binary arithmetic. The brain and the ALU get the same
answer, but the methods are COMPLETELY different. Some people claim that
ChatGPT is somewhat intelligent. Artificially intelligent. For the sake of
argument, let us say this is a form of intelligence. In that case, it is an
alien form as different from human intelligence as an ALU. A bee brain is
probably closer to ours than ChatGPT. It may be that a future AGI, even a
sentient one, has totally different mechanisms than the human brain. As
alien as an ALU. In that case, I do not think it will be possible for the
AGI to actually emulate a human, although it might be able to imitate one,
the way ChatGPT does. I doubt it will ever be able to feel what it is like
to be a human. We humans cannot imagine what it feels like to be a bee, or
even a more intelligent creature such as a bat, because bats have such a
different way of living, and sensing (echolocation). We do know what it is
like being a chimpanzee, because we share so much DNA and we have many
behaviors in common, such as anger, politics, and grieving over dead
children.


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Boom
For example, I used ChatGPT to come up with a theory explaining the origin of
eukaryotes. The part I highlighted is something that ChatGPT came up with.

In the theory of the origin of eukaryotes, we have discussed how colonies
of prokaryotic cells started transporting vesicles by kinesin, which
crossed cell membranes. This led to the formation of a pseudo-multicellular
organism, where each prokaryotic cell represented an organelle, linked
together by sharing nutrients and other materials through kinesins. This
pseudo-multicellular organism eventually fused, with each cell representing
an organelle, into the first eukaryote.

But how did the nucleus come into existence? In this revised theory, we
will explore the idea that the segregation of genetic information in the
nucleus was a critical factor in the evolution of eukaryotes.

As the pseudo-multicellular organism developed, the exchange of genetic
material became more complex. Kinesins were used to transport vesicles
containing proteins and other cellular components between the different
organelles. **However, this system had limitations in terms of transporting
larger and more complex molecules, such as DNA.**

To overcome this limitation, a new set of proteins evolved known as the
SNAREs and Rabs GTPase. These proteins played a crucial role in the
formation of membrane-bound vesicles, which could transport larger
molecules such as DNA. The SNAREs acted as a ziplock mechanism, bringing
the vesicle and the target organelle membrane close enough for fusion to
occur. On the other hand, the Rabs GTPase acted as a molecular switch,
regulating vesicle trafficking by controlling vesicle formation, transport,
and fusion.

As the complexity of the genome increased, the need for efficient storage
and segregation of genetic material became more critical. **The vesicles
containing DNA eventually fused together, forming a compartment that would
become the nucleus. The SNAREs and Rabs GTPase played a crucial role in the
formation of the nuclear envelope, which encloses the genetic material
within the nucleus.**

The formation of the nucleus allowed for more efficient regulation of gene
expression, leading to increased complexity in cellular processes. The
segregation of genetic material allowed for a higher level of control over
gene expression, as specific genes could be activated or repressed
depending on the cellular requirements.

In conclusion, the evolution of eukaryotes was a complex process that
involved the cooperation of multiple cellular components. The use of
kinesin to transport vesicles and the evolution of SNAREs and Rabs GTPase
played a crucial role in the formation of the first eukaryotes. The
formation of the nucleus was a critical factor in the evolution of
eukaryotes, as it allowed for the segregation and efficient regulation of
genetic material. The evolution of the nucleus was a necessary step in the
development of the complex eukaryotic cell, which has allowed for the
incredible diversity of life on earth.

On Sat, Apr 8, 2023 at 8:32 PM Jed Rothwell wrote:
[snip]

Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Jed Rothwell
Robin  wrote:


> >   For example, if asked "Can you pour water into a glass made of sugar?",
> >   ChatGPT might provide a grammatically correct but nonsensical response,
> >   whereas a human with common sense would recognize that a sugar glass
> >   would dissolve in water.
>
> so where did it get this example from, or all of these points for that
> matter?
>

Probably from a human author writing about AI. I asked it in a follow up
question:

ME: So, how would you respond to the question: "Can you pour water into a
glass made of sugar?"

ChatGPT: As an AI language model, I can provide a grammatically correct
response based on my training data. However, my response may not reflect
common sense or practicality in the real world.

If I were to respond based on my training data, I might say something like:
"Yes, you can pour water into a glass made of sugar, but the glass will
dissolve." However, it's important to note that this is not a practical or
sensible thing to do in reality because the sugar glass would quickly
dissolve and the water would spill out.

It's worth mentioning that humans have common sense and contextual
knowledge that allow us to understand the impracticality of pouring water
into a glass made of sugar. This is a good example of a situation where AI
language models like myself may struggle to understand and respond in a way
that reflects human common sense and practicality.

This response may reflect the earlier part of the conversation.


>
>
> >   It cannot draw on
> >   personal experiences, general knowledge, or reasoning to inform its
> >   responses in the way a human would.
>
> General knowledge can be obtained from the Inet. "reasoning" is clearly
> not true. Without it, one cannot reply to a
> sentence.
>

I assume you mean "the Internet." ChatGPT can scrape content from human
authors on the internet, but it has no reasoning. It literally does not
know what it is talking about. In the same sense that a bee making a nest
or directing other bees to a source of nectar does not know what it is
doing. The bee is acting by instinct with no planning or awareness. ChatGPT
is acting by programming with no plan or awareness. That is why it cannot
tell the difference between reality and what are now called
"hallucinations" (fake information invented by ChatGPT).


> >   world. It cannot perform physical tasks like walking, manipulating objects,
> >   or performing surgery, which are essential for many real-world applications.
>
> There are already robots that perform these things. They require only
> programming to interact with the real world, and
> many already have Inet connectivity, either directly or indirectly.
>

When these robots are controlled by advanced AI in the future, they may
approach or achieve AGI partly because of that. ChatGPT is not saying that
AGI is impossible; she is saying that some kind of robotic control over
physical objects is probably a necessary component of AGI, which she
herself has not yet achieved.



> >   5. Lack of self-awareness: ChatGPT does not have the ability to reflect
> >   on its own thoughts, actions, or limitations in the way that a self-aware
> >   human being can. It cannot introspect, learn from its mistakes, or engage
> >   in critical self-reflection.
>
> AutoGPT?
>

Not yet.


> The point I have been trying to make is that if we program something to
> behave like a human, it may end up doing exactly
> that.


The methods used to program ChatGPT are light years away from anything like
human cognition. As different as what bees do with their brains compared to
what we do. ChatGPT is not programmed to behave like a human in any sense.
A future AI might be, but this one is not. The results of ChatGPT
programming look like the results from human thinking, but they are not.
The results from bee-brain hive construction look like conscious human
structural engineering, but they are not. Bees do not attend MIT.


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Robin
In reply to  Boom's message of Sat, 8 Apr 2023 20:26:43 -0300:
Hi,
[snip]
>It has a very short memory. It's something like 30kb. 

 ...so's mine nowadays. :(


[snip]
>>
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Boom
It has a very short memory, something like 30 kB. If the conversation
gets a little longer, it starts forgetting stuff, though it more or
less keeps track of the sense of the topic.
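
A rough sketch of why that forgetting happens: the whole prompt must fit a
fixed token budget, so the oldest turns are dropped first. The ~4 characters
per token estimate is a rule of thumb, not an exact figure:

def fit_to_budget(turns, max_tokens=4096):
    kept, used = [], 0
    for turn in reversed(turns):   # keep the newest turns first
        cost = len(turn) // 4 + 1  # crude token estimate
        if used + cost > max_tokens:
            break                  # everything older is "forgotten"
        kept.append(turn)
        used += cost
    return list(reversed(kept))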

On Sat, Apr 8, 2023 at 7:50 PM Robin wrote:
[snip]
>

-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Robin
Hi,

The point I have been trying to make is that if we program something to behave 
like a human, it may end up doing exactly
that.

Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Boom
Yes, but have you tried to jailbreak it? That was the condition I mentioned.
This type of answer is produced by a moderation bot.

On Sat, Apr 8, 2023 at 3:40 PM Jed Rothwell wrote:
[snip]


-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Robin
In reply to  Jed Rothwell's message of Sat, 8 Apr 2023 14:40:08 -0400:
Hi,
[snip]
>ME: ChatGPT is not considered artificial general intelligence (AGI). What
>qualities of AGI are lacking in ChatGPT?
>
>ChatGPT: ChatGPT, as a language model, has a narrow focus on generating
>human-like text based on input it receives. While it can produce highly
>coherent and contextually relevant responses, it lacks several key
>qualities that are typically associated with artificial general
>intelligence:
>
>   1. Lack of common sense: ChatGPT relies on statistical patterns learned
>   from large amounts of text data and does not have a common-sense
>   understanding of the world. For example, if asked "Can you pour water into
>   a glass made of sugar?", ChatGPT might provide a grammatically correct but
>   nonsensical response, whereas a human with common sense would recognize
>   that a sugar glass would dissolve in water.

so where did it get this example from, or all of these points for that 
matter?

>   2. Limited understanding of context: Although ChatGPT has the ability to
>   generate responses based on context, its understanding of context is
>   limited to what it can learn from its training data. It cannot draw on
>   personal experiences, general knowledge, or reasoning to inform its
>   responses in the way a human would.

General knowledge can be obtained from the Inet. "reasoning" is clearly not 
true. Without it, one cannot reply to a
sentence.

>   3. Inability to perform physical tasks: ChatGPT is purely a
>   software-based system and lacks the ability to interact with the physical
>   world. It cannot perform physical tasks like walking, manipulating objects,
>   or performing surgery, which are essential for many real-world applications.

There are already robots that perform these things. They require only 
programming to interact with the real world, and
many already have Inet connectivity, either directly or indirectly.

>   4. Limited creativity: While ChatGPT can generate novel and contextually
>   appropriate responses, its ability to create genuinely new and original
>   ideas or concepts is limited. It can only draw on patterns and associations
>   that it has learned from the training data and cannot think beyond that.

...also true of many humans, except that we have a different set of training 
data.

>   5. Lack of self-awareness: ChatGPT does not have the ability to reflect
>   on its own thoughts, actions, or limitations in the way that a self-aware
>   human being can. It cannot introspect, learn from its mistakes, or engage
>   in critical self-reflection.

AutoGPT?

>
>Overall, while ChatGPT is an impressive language model, it lacks several
>key qualities that are associated with true artificial general intelligence.

As a species we appear to be doing our best to "correct" this issue. :(
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Jed Rothwell
Boom  wrote:


> For those who used it in the first few days, when bot moderation was not
> installed properly, or right now, if it is jailbroken, GPT works just as
> well as a very smart human. With a few tweaks (like making it use a math AI,
> Wolfram Alpha, which surpassed humans decades ago, or NNs, or OCR scanning),
> it easily surpasses the vast majority of people. So, why isn't it considered
> AGI right now?
>

Because it lacks several important qualities of AGI, especially any
knowledge of the real world. Let's ask ChatGPT for additional details:

ME: ChatGPT is not considered artificial general intelligence (AGI). What
qualities of AGI are lacking in ChatGPT?

ChatGPT: ChatGPT, as a language model, has a narrow focus on generating
human-like text based on input it receives. While it can produce highly
coherent and contextually relevant responses, it lacks several key
qualities that are typically associated with artificial general
intelligence:

   1. Lack of common sense: ChatGPT relies on statistical patterns learned
   from large amounts of text data and does not have a common-sense
   understanding of the world. For example, if asked "Can you pour water into
   a glass made of sugar?", ChatGPT might provide a grammatically correct but
   nonsensical response, whereas a human with common sense would recognize
   that a sugar glass would dissolve in water.
   2. Limited understanding of context: Although ChatGPT has the ability to
   generate responses based on context, its understanding of context is
   limited to what it can learn from its training data. It cannot draw on
   personal experiences, general knowledge, or reasoning to inform its
   responses in the way a human would.
   3. Inability to perform physical tasks: ChatGPT is purely a
   software-based system and lacks the ability to interact with the physical
   world. It cannot perform physical tasks like walking, manipulating objects,
   or performing surgery, which are essential for many real-world applications.
   4. Limited creativity: While ChatGPT can generate novel and contextually
   appropriate responses, its ability to create genuinely new and original
   ideas or concepts is limited. It can only draw on patterns and associations
   that it has learned from the training data and cannot think beyond that.
   5. Lack of self-awareness: ChatGPT does not have the ability to reflect
   on its own thoughts, actions, or limitations in the way that a self-aware
   human being can. It cannot introspect, learn from its mistakes, or engage
   in critical self-reflection.

Overall, while ChatGPT is an impressive language model, it lacks several
key qualities that are associated with true artificial general intelligence.


[Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-08 Thread Boom
For those who used it in the first few days, when bot moderation was not
installed properly, or right now, if it is jailbroken, GPT works just as
well as a very smart human. With a few tweaks (like making it use a math AI,
Wolfram Alpha, which surpassed humans decades ago, or NNs, or OCR scanning),
it easily surpasses the vast majority of people. So, why isn't it considered
AGI right now? If you let it hold memory of conversations with a single
person (there is a very small cap of around 4 thousand free tokens, which is
like 30 kB), it is already supersmart.