Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Jürg Wyttenbach

GPT is a tool that has been used in computational linguistics for more than 10 years.


It was just a matter of time until some brainless nerds would use it for 
AI...


GPT just analyzes and classifies text, namely the texts you give it. So 
it's not AI; it's the condensed shit some people want to throw at you.



But honestly, what the US government has been doing since 2020, when Biden 
founded the project Veritas (Orwell's 1984 truth ministry), is nothing 
other than what ChatGPT does with you.


Most newspapers today no longer contain information. The focus is on 
propaganda: spreading the view of the dominant class.


I regularly compare about 10 of the world's top newspapers across 4 
languages/continents, and all I see is identical "(dis-)information".



The top sources of fake news are the NYT, BBC, FAZ, NZZ, and Figaro. Only 
a few tiny local papers provide real information.


So please focus on how to get independent news and not on how to get 
condensed shit from an AI text mixer...


J.W.



On 10.04.2023 22:50, Boom wrote:
Indeed, it can. It comes up with fake information. But now it is 
heavily moderated to not allow that.


On Mon, Apr 10, 2023 at 4:33 PM, H L V wrote:


Can it dream?
Harry

On Mon, Apr 10, 2023 at 11:49 AM Alain Sepeda
 wrote:

There is work on allowing LLMs to discuss in order to achieve
reflection...
I've seen a reference to an architecture where two GPT instances
talk to each other, with different roles, one as a searcher,
the other as a critic...
Look at this article.
LLMs may just be the building block of something bigger...

https://www.nextbigfuture.com/2023/04/gpt4-with-reflexion-has-a-superior-coding-score.html

Add to that, they can use external applications (plugins) and
talk to generative AIs like DALL-E...

Many people say it is not intelligent, but are we?
I see AI making mistakes very similar to the ones I make when
I'm tired, or a beginner...

The real difference is that today, AIs are not the fruit of a
Darwinian evolution, with its struggle to survive, dominate, eat
or be eaten, so they are less frightening than people or animals.
The only serious fear I've heard is that we become so
satisfied by these AIs that we delegate our genetic evolution
to them and lose our individualistic Darwinian struggle to
survive, innovate, and seduce a partner, settling into a beehive
mentality at the service of the AI system, like worker bees
and a queen bee... The promoters of that theory estimate it will
take a millennium.
Anyway, there is no stopping it: if a majority decides to
stop developing AI, a minority will develop it in their own
service, and China is ready, with great experts and great
belief in the future. Only the West is afraid. (There is a
paper circulating on that, where fear of AI is linked to GDP per head.)


On Mon, Apr 10, 2023 at 4:47 PM, Jed Rothwell wrote:

I wrote:

Food is contaminated despite our best efforts to
prevent that. Contamination is a complex process that
we do not fully understand or control, although of
course we know a lot about it. It seems to me that as
AI becomes more capable it may become easier to
understand, and more transparent.


My unfinished thought here is that knowing more about
contamination and seeing more complexity in it has
improved our ability to control it.


Sean True  wrote:

I think it’s fair to say no AGI until those are
designed in, particularly the ability to actually
learn from experience.


Definitely! ChatGPT agrees with you!



--
Daniel Rocha - RJ
danieldi...@gmail.com


--
Jürg Wyttenbach
Bifangstr. 22
8910 Affoltern am Albis

+41 44 760 14 18
+41 79 246 36 06


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Boom
Indeed, it can. It comes up with fake information. But now it is heavily
moderated to not allow that.

On Mon, Apr 10, 2023 at 4:33 PM, H L V wrote:

> Can it dream?
> Harry
>
> On Mon, Apr 10, 2023 at 11:49 AM Alain Sepeda 
> wrote:
>
>> There is work on allowing LLMs to discuss in order to achieve reflection...
>> I've seen a reference to an architecture where two GPT instances talk to
>> each other, with different roles, one as a searcher, the other as a
>> critic...
>> Look at this article.
>> LLMs may just be the building block of something bigger...
>>
>> https://www.nextbigfuture.com/2023/04/gpt4-with-reflexion-has-a-superior-coding-score.html
>>
>> Add to that, they can use external applications (plugins) and talk to
>> generative AIs like DALL-E...
>>
>> Many people say it is not intelligent, but are we?
>> I see AI making mistakes very similar to the ones I make when I'm tired,
>> or a beginner...
>>
>> The real difference is that today, AIs are not the fruit of a Darwinian
>> evolution, with its struggle to survive, dominate, eat or be eaten, so
>> they are less frightening than people or animals.
>> The only serious fear I've heard is that we become so satisfied by these
>> AIs that we delegate our genetic evolution to them and lose our
>> individualistic Darwinian struggle to survive, innovate, and seduce a
>> partner, settling into a beehive mentality at the service of the AI
>> system, like worker bees and a queen bee... The promoters of that theory
>> estimate it will take a millennium.
>> Anyway, there is no stopping it: if a majority decides to stop developing
>> AI, a minority will develop it in their own service, and China is ready,
>> with great experts and great belief in the future. Only the West is
>> afraid. (There is a paper circulating on that, where fear of AI is linked
>> to GDP per head.)
>>
>>
>> On Mon, Apr 10, 2023 at 4:47 PM, Jed Rothwell wrote:
>>
>>> I wrote:
>>>
>>>
 Food is contaminated despite our best efforts to prevent that.
 Contamination is a complex process that we do not fully understand or
 control, although of course we know a lot about it. It seems to me that as
 AI becomes more capable it may become easier to understand, and more
 transparent.

>>>
>>> My unfinished thought here is that knowing more about contamination and
>>> seeing more complexity in it has improved our ability to control it.
>>>
>>>
>>> Sean True  wrote:
>>>
>>> I think it’s fair to say no AGI until those are designed in,
 particularly the ability to actually learn from experience.

>>>
>>> Definitely! ChatGPT agrees with you!
>>>
>>

-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Robin
In reply to  Jed Rothwell's message of Mon, 10 Apr 2023 09:33:48 -0400:
Hi,
[snip]
>I hope that an advanced AGI *will* have a concept of the real world, and it
>will know the difference. I do not think that the word "care" applies here,
>but if we tell it not to use a machine gun in the real world, I expect it
>will follow orders. Because that's what computers do. Of course, if someone
>programs it to use a machine gun in the real world, it would do that too!
[snip]
I think you can count on the R&D departments of the armed forces of nations 
around the world to be working on this as
we speak.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Robin
In reply to  Alain Sepeda's message of Mon, 10 Apr 2023 17:48:38 +0200:
Hi,
[snip]
>The real difference is that today, AIs are not the fruit of a Darwinian
>evolution, with its struggle to survive, dominate, eat or be eaten, so
>they are less frightening than people or animals.

The way a neural network learns is conceptually analogous to Darwinian 
evolution.
(Only the programs/routines most suited to purpose survive.)
...but it happens much, much faster.
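
In code, the analogy is a selection loop like the toy sketch below (Python,
standard library only; note that production networks are usually trained by
gradient descent, so this is closer to neuroevolution than to
backpropagation, and TARGET and all constants are made up):

    # Toy "survival of the fittest": evolve a parameter vector toward a goal.
    import random

    TARGET = [0.5, -1.2, 3.0]                  # the "purpose" to fit

    def fitness(candidate):                    # higher is better
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]            # only the fittest survive
        population = [[g + random.gauss(0, 0.1)
                       for g in random.choice(survivors)]
                      for _ in range(50)]      # mutated offspring
    print(max(population, key=fitness))        # best evolved candidate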

>The only serious fear I've heard is that we become so satisfied by these
>AIs that we delegate our genetic evolution to them and lose our
>individualistic Darwinian struggle to survive, innovate, and seduce a
>partner, settling into a beehive mentality at the service of the AI system,
>like worker bees and a queen bee... The promoters of that theory estimate
>it will take a millennium.
>Anyway, there is no stopping it: if a majority decides to stop developing
>AI, a minority will develop it in their own service, and China is ready,
>with great experts and great belief in the future. Only the West is afraid.
>(There is a paper circulating on that, where fear of AI is linked to
>GDP per head.)

Anything that increases productivity can lead to an increase in GDP/head.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread H L V
Can it dream?
Harry

On Mon, Apr 10, 2023 at 11:49 AM Alain Sepeda 
wrote:

> There is work on allowing LLMs to discuss in order to achieve reflection...
> I've seen a reference to an architecture where two GPT instances talk to
> each other, with different roles, one as a searcher, the other as a
> critic...
> Look at this article.
> LLMs may just be the building block of something bigger...
>
> https://www.nextbigfuture.com/2023/04/gpt4-with-reflexion-has-a-superior-coding-score.html
>
> Add to that, they can use external applications (plugins) and talk to
> generative AIs like DALL-E...
>
> Many people say it is not intelligent, but are we?
> I see AI making mistakes very similar to the ones I make when I'm tired,
> or a beginner...
>
> The real difference is that today, AIs are not the fruit of a Darwinian
> evolution, with its struggle to survive, dominate, eat or be eaten, so
> they are less frightening than people or animals.
> The only serious fear I've heard is that we become so satisfied by these
> AIs that we delegate our genetic evolution to them and lose our
> individualistic Darwinian struggle to survive, innovate, and seduce a
> partner, settling into a beehive mentality at the service of the AI
> system, like worker bees and a queen bee... The promoters of that theory
> estimate it will take a millennium.
> Anyway, there is no stopping it: if a majority decides to stop developing
> AI, a minority will develop it in their own service, and China is ready,
> with great experts and great belief in the future. Only the West is
> afraid. (There is a paper circulating on that, where fear of AI is linked
> to GDP per head.)
>
>
> On Mon, Apr 10, 2023 at 4:47 PM, Jed Rothwell wrote:
>
>> I wrote:
>>
>>
>>> Food is contaminated despite our best efforts to prevent that.
>>> Contamination is a complex process that we do not fully understand or
>>> control, although of course we know a lot about it. It seems to me that as
>>> AI becomes more capable it may become easier to understand, and more
>>> transparent.
>>>
>>
>> My unfinished thought here is that knowing more about contamination and
>> seeing more complexity in it has improved our ability to control it.
>>
>>
>> Sean True  wrote:
>>
>> I think it’s fair to say no AGI until those are designed in, particularly
>>> the ability to actually learn from experience.
>>>
>>
>> Definitely! ChatGPT agrees with you!
>>
>


Re: [Vo]:Wolfram's Take

2023-04-10 Thread Jed Rothwell
I may have posted this here before . . . Here is Stephen Wolfram writing
about the new Wolfram plugin for ChatGPT, with examples of how the plugin
enhances ChatGPT's capabilities:

https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
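
For a feel of what a plugin adds, here is a minimal sketch of the general
delegation pattern (a model handing a question to an external computation
engine). Every name here is hypothetical; the real Wolfram plugin is driven
by OpenAI's plugin protocol, not by code like this:

    # Generic tool-delegation pattern behind ChatGPT plugins (hypothetical
    # names; not the actual OpenAI or Wolfram interface).
    def computation_tool(query):
        # Stand-in for a call to an external engine; arithmetic only here.
        return str(eval(query, {"__builtins__": {}}))

    def model_step(user_input):
        # A real LLM decides this itself; we fake the routing rule.
        if any(ch.isdigit() for ch in user_input):
            return {"action": "call_tool", "argument": user_input}
        return {"action": "answer", "argument": "I can chat; math goes to the tool."}

    def respond(user_input):
        step = model_step(user_input)
        if step["action"] == "call_tool":
            return "The tool computed: " + computation_tool(step["argument"])
        return step["argument"]

    print(respond("3 * (7 + 5)"))   # -> The tool computed: 36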


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Alain Sepeda
There is work on allowing LLMs to discuss in order to achieve reflection...
I've seen a reference to an architecture where two GPT instances talk to each
other, with different roles, one as a searcher, the other as a critic...
Look at this article.
LLMs may just be the building block of something bigger...
https://www.nextbigfuture.com/2023/04/gpt4-with-reflexion-has-a-superior-coding-score.html
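
Roughly, the two-role pattern looks like the sketch below (Python; chat() is
a hypothetical stand-in for whatever LLM API you use, and the prompts are
made up):

    # Two-instance reflection loop: one role proposes, the other critiques.
    # chat() is a hypothetical placeholder for an actual LLM client call.
    def chat(system_prompt, user_prompt):
        raise NotImplementedError("plug in your LLM API here")

    def reflect(task, rounds=3):
        draft = chat("You are a searcher. Solve the task.", task)
        for _ in range(rounds):
            critique = chat("You are a critic. List flaws in this answer.",
                            "Task: %s\nAnswer: %s" % (task, draft))
            draft = chat("You are a searcher. Revise using the critique.",
                         "Task: %s\nAnswer: %s\nCritique: %s"
                         % (task, draft, critique))
        return draft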

Add to that, they can use external applications (plugins) and talk to
generative AIs like DALL-E...

Many people say it is not intelligent, but are we?
I see AI making mistakes very similar to the ones I make when I'm tired, or
a beginner...

The real difference is that today, AIs are not the fruit of a Darwinian
evolution, with its struggle to survive, dominate, eat or be eaten, so they
are less frightening than people or animals.
The only serious fear I've heard is that we become so satisfied by these
AIs that we delegate our genetic evolution to them and lose our
individualistic Darwinian struggle to survive, innovate, and seduce a
partner, settling into a beehive mentality at the service of the AI system,
like worker bees and a queen bee... The promoters of that theory estimate
it will take a millennium.
Anyway, there is no stopping it: if a majority decides to stop developing
AI, a minority will develop it in their own service, and China is ready,
with great experts and great belief in the future. Only the West is afraid.
(There is a paper circulating on that, where fear of AI is linked to
GDP per head.)


On Mon, Apr 10, 2023 at 4:47 PM, Jed Rothwell wrote:

> I wrote:
>
>
>> Food is contaminated despite our best efforts to prevent that.
>> Contamination is a complex process that we do not fully understand or
>> control, although of course we know a lot about it. It seems to me that as
>> AI becomes more capable it may become easier to understand, and more
>> transparent.
>>
>
> My unfinished thought here is that knowing more about contamination and
> seeing more complexity in it has improved our ability to control it.
>
>
> Sean True  wrote:
>
> I think it’s fair to say no AGI until those are designed in, particularly
>> the ability to actually learn from experience.
>>
>
> Definitely! ChatGPT agrees with you!
>


Re: [Vo]:Wolfram's Take

2023-04-10 Thread Terry Blanton
*The first thing to explain is that what ChatGPT is always fundamentally
trying to do is to produce a “reasonable continuation” of whatever text
it’s got so far, where by “reasonable” we mean “what one might expect
someone to write after seeing what people have written on billions of
webpages, etc.”*
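
In toy form, that is the loop below (Python): count what followed each word
in a corpus, then repeatedly sample a next token. The bigram table is a
stand-in for the network's learned distribution over billions of webpages,
and the corpus here is obviously made up:

    # Toy autoregressive "reasonable continuation": sample each next token
    # from counts of what followed the previous word in a tiny corpus.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()
    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1

    def continuation(prompt, length=8):
        tokens = prompt.split()
        for _ in range(length):
            options = following[tokens[-1]]
            if not options:
                break                 # nothing ever followed this word
            words, weights = zip(*options.items())
            tokens.append(random.choices(words, weights=weights)[0])
        return " ".join(tokens)

    print(continuation("the cat"))    # e.g. "the cat sat on the mat and the"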

On Mon, Apr 10, 2023, 11:01 AM Terry Blanton  wrote:

> What Is ChatGPT Doing ... and Why Does It Work? https://a.co/d/glEBRxd
>


[Vo]:Wolfram's Take

2023-04-10 Thread Terry Blanton
What Is ChatGPT Doing ... and Why Does It Work? https://a.co/d/glEBRxd


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Jed Rothwell
I wrote:


> Food is contaminated despite our best efforts to prevent that.
> Contamination is a complex process that we do not fully understand or
> control, although of course we know a lot about it. It seems to me that as
> AI becomes more capable it may become easier to understand, and more
> transparent.
>

My unfinished thought here is that knowing more about contamination and
seeing more complexity in it has improved our ability to control it.


Sean True  wrote:

> I think it’s fair to say no AGI until those are designed in, particularly
> the ability to actually learn from experience.
>

Definitely! ChatGPT agrees with you!


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Sean True
LLMs do not have intrinsic short-term or modifiable long-term memory. Both
require supplemental systems: reprompting of recent history, expensive
offline fine-tuning, or even more expensive retraining. I think it’s fair to
say no AGI until those are designed in, particularly the ability to actually
learn from experience.

Sent from my iPhone

On Apr 10, 2023, at 9:34 AM, Jed Rothwell wrote:

> Robin wrote:
>
>> As I said earlier, it may not make any difference whether an AI
>> feels/thinks as we do, or just mimics the process.
>
> That is certainly true.
>
>> As you pointed out, the AI has no concept of the real world, so it's not
>> going to care whether it's shooting people up in a video game, or using a
>> robot with a real machine gun in the real world.
>
> I hope that an advanced AGI will have a concept of the real world, and it
> will know the difference. I do not think that the word "care" applies
> here, but if we tell it not to use a machine gun in the real world, I
> expect it will follow orders. Because that's what computers do. Of course,
> if someone programs it to use a machine gun in the real world, it would do
> that too!
>
> I hope we can devise something like Asimov's laws at the core of the
> operating system to prevent people from programming things like that. I do
> not know if that is possible.
>
>> It may be "just a tool", but the more capable we make it the greater the
>> chances that something unforeseen will go wrong, especially if it has the
>> ability to connect with other AIs over the Internet, because this adds
>> exponentially to the complexity, and hence our ability to predict what
>> will happen decreases proportionately.
>
> I am not sure I agree. There are many analog processes that we do not
> fully understand. They sometimes go catastrophically wrong. For example,
> water gets into coal and causes explosions in coal-fired generators. Food
> is contaminated despite our best efforts to prevent that. Contamination is
> a complex process that we do not fully understand or control, although of
> course we know a lot about it. It seems to me that as AI becomes more
> capable it may become easier to understand, and more transparent. If it is
> engineered right, the AI will be able to explain its actions to us in ways
> that transcend complexity and give us the gist of the situation. For
> example, I use the Delphi 10.4 compiler for Pascal and C++. It has some AI
> built into it, for the Refactoring and some other features. It is
> enormously complex compared to compilers from decades ago. It has hundreds
> of canned procedures and functions. Despite this complexity, it is easier
> for me to see what it is doing than it was in the past, because it has
> extensive debugging facilities. You can stop execution and look at
> variables and internal states in ways that would have been impossible in
> the past. You can install add-ons that monitor for things like memory
> leaks. With refactoring and other features you can ask it to look for code
> that may cause problems. I don't mean code that does not compile, or
> warning signs such as variables that are never used. It has been able to
> do that for a long time. I mean more subtle errors.
>
> I think it also gives helpful hints for upgrading legacy code to modern
> standards, but I have not explored that feature. The point is, increased
> complexity gives me more control and more understanding of what it is
> doing, not less.
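
That reprompting workaround is easy to see in code: the model call itself is
stateless, so a chat front end keeps a transcript and resends it on every
turn, truncating when it outgrows the context window. A minimal sketch in
Python, with complete() as a hypothetical stand-in for the stateless LLM
call and MAX_CHARS as a crude stand-in for a token-based context limit:

    # "Short-term memory" for a stateless LLM: keep a rolling transcript
    # and resend it with every request.
    MAX_CHARS = 4000

    def complete(prompt):
        raise NotImplementedError("plug in your stateless LLM call here")

    history = []

    def chat_turn(user_message):
        history.append("User: " + user_message)
        while len("\n".join(history)) > MAX_CHARS:
            history.pop(0)                 # oldest turns fall out of "memory"
        reply = complete("\n".join(history) + "\nAssistant:")
        history.append("Assistant: " + reply)
        return reply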


Re: [Vo]:Shouldn't we consider the free chat GPT3.5 AGI?

2023-04-10 Thread Jed Rothwell
Robin  wrote:

> As I said earlier, it may not make any difference whether an AI
> feels/thinks as we do, or just mimics the process.


That is certainly true.


> As you pointed out, the AI has no concept of the real world, so it's not
> going to care whether it's shooting people up
> in a video game, or using a robot with a real machine gun in the real
> world.
>

I hope that an advanced AGI *will* have a concept of the real world, and it
will know the difference. I do not think that the word "care" applies here,
but if we tell it not to use a machine gun in the real world, I expect it
will follow orders. Because that's what computers do. Of course, if someone
programs it to use a machine gun in the real world, it would do that too!

I hope we can devise something like Asimov's laws at the core of the
operating system to prevent people from programming things like that. I do
not know if that is possible.


> It may be "just a tool", but the more capable we make it the greater the
> chances that something unforeseen will go
> wrong, especially if it has the ability to connect with other AIs over the
> Internet, because this adds exponentially to
> the complexity, and hence our ability to predict what will happen
> decreases proportionately.
>

I am not sure I agree. There are many analog processes that we do not fully
understand. They sometimes go catastrophically wrong. For example, water
gets into coal and causes explosions in coal-fired generators. Food is
contaminated despite our best efforts to prevent that. Contamination is a
complex process that we do not fully understand or control, although of
course we know a lot about it. It seems to me that as AI becomes more
capable it may become easier to understand, and more transparent. If it is
engineered right, the AI will be able to explain its actions to us in ways
that transcend complexity and give us the gist of the situation. For
example, I use the Delphi 10.4 compiler for Pascal and C++. It has some AI
built into it, for the Refactoring and some other features. It is
enormously complex compared to compilers from decades ago. It has hundreds
of canned procedures and functions. Despite this complexity, it is easier
for me to see what it is doing than it was in the past, because it has
extensive debugging facilities. You can stop execution and look at
variables and internal states in ways that would have been impossible in
the past. You can install add-ons that monitor for things like memory
leaks. With refactoring and other features you can ask it to look for code
that may cause problems. I don't mean code that does not compile, or
warning signs such as variables that are never used. It has been able to do
that for a long time. I mean more subtle errors.

I think it also gives helpful hints for upgrading legacy code to modern
standards, but I have not explored that feature. The point is, increased
complexity gives me more control and more understanding of what it is
doing, not less.