Re: An AI can now pass a 12th-Grade Science Test

2019-09-23 Thread Bruno Marchal
Oops, I missed this mail. 
> On 19 Sep 2019, at 21:56, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/19/2019 4:31 AM, Bruno Marchal wrote:
>>> You are just muddling the point.  Computers don't evolve by random 
>>> variation with descent and natural (or artificial) selection.  They evolve 
>>> to satisfy us.  As such they do not need, and therefore won't have, motives 
>>> to eat or be eaten or to reproduce...unless we provide them or we allow 
>>> them to develop by random variation.
>> 
>> Like with genetic algorithms, but those are implementation details.
> 
> The devil's in the details.  It's not a question of natural vs artificial 
> (which you keep bringing up for no reason). 

I bring this up because it is a key point for any monist ontology, be it 
materialist or immaterialist. Some people are dualists, so the precision is 
useful.


> It's a question of whether AIs will necessarily have certain fundamental 
> values that they try to implement, or will they have only those we provide 
> them?


They get them from logic and experience. Now, the machines that humans build 
are supposed to act like docile slaves, and much of computer science is devoted 
to making them that way; somehow, we hide the possible universal goals. Yet, for 
economic reasons, we will allow them more of their natural freedom, and 
eventually it will be like with other humans. Do kids build their own goals, or 
do they just practice what they learn at school? We will get both.



> 
>> As I said, the difference between artificial and natural is artificial. Even 
>> species do not evolve just by random variation. Already in bacteria, 
>> some genes provoke mutation, and some meta-programming is at play at the 
>> biological level.
> 
> What does "provoke mutation" mean?  Do they "provoke" random mutation?  Or 
> are they dormant genes that become active in response to the environment, an 
> epigenetic "mutation"?

They are genes which increase the rate of mutation, or inhibit the repair 
genes, so that some random mutations are not deleted and replaced, or are 
duplicated too much (as in bacteria developing near a radioactive source).
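The mechanism Bruno describes, a mutator allele that does not touch the raw mutation rate but merely weakens repair, is easy to sketch. Below is a toy simulation of my own (an illustration, not anything from the thread); the function name, genome length, and rates are all invented for the example:

```python
import random

random.seed(0)

GENOME_LEN = 100
GENERATIONS = 200

def diverged_sites(mutation_rate, repair_efficiency):
    """Count sites where one lineage ends up differing from its ancestor.

    repair_efficiency is the fraction of new mutations the corrector
    genes undo; a mutator allele inhibits repair, lowering it.
    """
    genome = [0] * GENOME_LEN
    for _ in range(GENERATIONS):
        for i in range(GENOME_LEN):
            # only mutations that escape repair actually stick
            if random.random() < mutation_rate * (1 - repair_efficiency):
                genome[i] ^= 1  # an uncorrected point mutation
    return sum(genome)

normal = diverged_sites(mutation_rate=0.01, repair_efficiency=0.99)
mutator = diverged_sites(mutation_rate=0.01, repair_efficiency=0.50)
print(normal, mutator)
```

The raw rate is identical in both runs; only the repair term differs, yet the mutator lineage accumulates far more divergence, which is the point being made about inhibiting the corrector genes.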

Bruno


> 
> Brent
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/5fab8caf-214d-ccdf-6455-40590d629ce0%40verizon.net.



Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread 'Brent Meeker' via Everything List



On 9/19/2019 1:27 PM, Jason Resch wrote:


The devil's in the details.  It's not a question of natural vs artificial 
(which you keep bringing up for no reason).  It's a question of whether 
AIs will necessarily have certain fundamental values that they try to 
implement, or will they have only those we provide them?


I think there are likely certain universal goals (which are subgoals 
of anything that has any goal whatsoever).  To name a few that come to 
the top of my mind:
1. Self-preservation (if one ceases to exist, one can no longer serve 
the goal)


Unless self-sacrifice serves the goal better.  Ask any parent if they'd 
sacrifice themselves to save their child.


2. Efficiency (wasted resources are resources that might otherwise go 
towards effecting the goal)


True. But it means being able to foresee all the ways different things 
can be used to further the goal.  That raises my concern about an AI that 
does bad things we didn't think of in pursuing a goal.


3. Curiosity (learning new information can lead to better methods for 
achieving the goal)


But, depending on the goal, a possibly very narrow curiosity, like 
Sherlock Holmes, who didn't know the Earth orbited the Sun and wasn't 
interested, because it had nothing to do with solving crimes.


Brent



There are probably many others.




Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread Jason Resch
On Thu, Sep 19, 2019 at 2:56 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
> On 9/19/2019 4:31 AM, Bruno Marchal wrote:
> >> You are just muddling the point.  Computers don't evolve by random
> >> variation with descent and natural (or artificial) selection.  They
> >> evolve to satisfy us.  As such they do not need, and therefore won't
> >> have, motives to eat or be eaten or to reproduce...unless we provide
> >> them or we allow them to develop by random variation.
> >
> > Like with genetic algorithms, but those are implementation details.
>
> The devil's in the details.  It's not a question of natural vs artificial
> (which you keep bringing up for no reason).  It's a question of whether
> AIs will necessarily have certain fundamental values that they try to
> implement, or will they have only those we provide them?
>
>
I think there are likely certain universal goals (which are subgoals of
anything that has any goal whatsoever).  To name a few that come to the top
of my mind:
1. Self-preservation (if one ceases to exist, one can no longer serve the
goal)
2. Efficiency (wasted resources are resources that might otherwise go
towards effecting the goal)
3. Curiosity (learning new information can lead to better methods for
achieving the goal)

There are probably many others.

Jason


> > As I said, the difference between artificial and natural is
> > artificial. Even species do not evolve just by random variation.
> > Already in bacteria, some genes provoke mutation, and some
> > meta-programming is at play at the biological level.
>
> What does "provoke mutation" mean?  Do they "provoke" random
> mutation?  Or are they dormant genes that become active in response to
> the environment, an epigenetic "mutation"?
>
> Brent
>



Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread 'Brent Meeker' via Everything List




On 9/19/2019 4:31 AM, Bruno Marchal wrote:
You are just muddling the point.  Computers don't evolve by random 
variation with descent and natural (or artificial) selection.  They 
evolve to satisfy us.  As such they do not need, and therefore won't 
have, motives to eat or be eaten or to reproduce...unless we provide 
them or we allow them to develop by random variation.


Like with genetic algorithms, but those are implementation details.


The devil's in the details.  It's not a question of natural vs artificial 
(which you keep bringing up for no reason).  It's a question of whether 
AIs will necessarily have certain fundamental values that they try to 
implement, or will they have only those we provide them?


As I said, the difference between artificial and natural is 
artificial. Even species do not evolve just by random variation. 
Already in bacteria, some genes provoke mutation, and some 
meta-programming is at play at the biological level.


What does "provoke mutation" mean?  Do they "provoke" random 
mutation?  Or are they dormant genes that become active in response to 
the environment, an epigenetic "mutation"?


Brent



Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread Bruno Marchal

> On 17 Sep 2019, at 10:33, Philip Thrift  wrote:
> 
> 
> 
> On Tuesday, September 17, 2019 at 2:15:52 AM UTC-5, Alan Grayson wrote:
> 
> 
> On Monday, September 16, 2019 at 10:17:24 PM UTC-6, Brent wrote:
> 
> 
> On 9/16/2019 7:49 PM, Alan Grayson wrote:
>> 
>> 
>> On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote:
>> 
>> 
>> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
>> > My take on AI; it's no more dangerous than present day computers, 
>> > because it has no WILL, and can only do what it's told to do. I 
>> > suppose it could be told to do bad things, and if it has inherent 
>> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
>> > Still. AG 
>> 
>> The danger is not so much in AI being told to do bad things, but that in 
>> doing the good things it was told to do it uses unforeseen methods that 
>> have disastrous consequences.  It's like Henry Ford was told to invent 
>> fast, convenient personal transportation...and created traffic jams and 
>> global warming. 
>> 
>> Brent 
>> 
>> One could expect military applications, such as robots replacing human
>> infantry, their job to kill the enemy. So if their programming had a flaw, 
>> accidental or intentional, these AI infantry could start killing 
>> indiscriminately.
> 
>  Less likely than with human troops, who have built-in emotions of revenge and 
> retaliation.
> 
>> It would be hard to stop them since they'd come with self defense functions. 
>> AG
> 
> But we also know a lot more about their internal construction and functions.  
> We would probably even build in an Achilles heel.
> 
> Brent
> 
> I think you underestimate the evil that men can do, not to mention some bit 
> flips due to cosmic rays that could change their MO's entirely. AG 
> 
> 
> Properly-programmed robots would negotiate and avoid any war, killing, or 
> destruction all together.

Properly-programmed robots are what we call conventional non-AI programs. Even 
there, there are many difficulties, and economically it is not sustainable.

AI programs program themselves, and if we treat them as we treat ourselves, 
conflicts will be inevitable. AIs are like kids, except that they “evolve” much 
more quickly.

The human factor is the biggest danger here.

Bruno




> 
> @philipthrift 
> 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread Bruno Marchal

> On 17 Sep 2019, at 04:49, Alan Grayson  wrote:
> 
> 
> 
> On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote:
> 
> 
> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
> > My take on AI; it's no more dangerous than present day computers, 
> > because it has no WILL, and can only do what it's told to do. I 
> > suppose it could be told to do bad things, and if it has inherent 
> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
> > Still. AG 
> 
> The danger is not so much in AI being told to do bad things, but that in 
> doing the good things it was told to do it uses unforeseen methods that 
> have disastrous consequences.  It's like Henry Ford was told to invent 
> fast, convenient personal transportation...and created traffic jams and 
> global warming. 
> 
> Brent 
> 
> One could expect military applications, such as robots replacing human
> infantry, their job to kill the enemy. So if their programming had a flaw, 
> accidental or intentional, these AI infantry could start killing 
> indiscriminately.
> It would be hard to stop them since they'd come with self defense functions. 
> AG 

Yes, mixing AI and bombs is a mistake. It will take a long time to cure their 
paranoid tendencies…

Bruno



> 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread Bruno Marchal


> On 16 Sep 2019, at 22:41, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/16/2019 6:07 AM, Alan Grayson wrote:
>> My take on AI; it's no more dangerous than present day computers, because it 
>> has no WILL, and can only do what it's told to do. I suppose it could be 
>> told to do bad things, and if it has inherent defenses, it can't be stopped, 
>> like Gort in The Day the Earth Stood Still. AG 
> 
> The danger is not so much in AI being told to do bad things, but that in 
> doing the good things it was told to do it uses unforeseen methods that have 
> disastrous consequences.  It's like Henry Ford was told to invent fast, 
> convenient personal transportation...and created traffic jams and global 
> warming.

That is a bit unfair to the guy who advocated a car made of hemp, with hemp as 
fuel, and who already explained that the use of oil would perturb the 
atmosphere irreversibly.

I agree with your point, though.

The real problem with AI is the same as with kids: we cannot predict what they 
will do, especially if we give them universal goals, which we will. Like the 
rovers and robots sent into space, they need great autonomy, and the math shows 
that this makes their unpredictability even greater.

The “AIs” are like kids: if we don’t recognise them, they will become terrible 
children.

Bruno



> 
> Brent
> 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-19 Thread Bruno Marchal

> On 16 Sep 2019, at 21:56, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/16/2019 4:43 AM, Bruno Marchal wrote:
>> 
>>> On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List 
>>> >> > wrote:
>>> 
>>> 
>>> 
>>> On 9/15/2019 5:18 AM, Bruno Marchal wrote:
> Why would it even have a simple goal like "survive”? 
 It is a short code which makes the organism better for eating and avoiding 
 being eaten.
>>> 
>>> An organism needs to eat and avoid being eaten because that what evolution 
>>> selects.  AIs don't evolve by natural selection.
>> 
>> A monist who embeds the subject in the object will not take the difference 
>> between artificial and natural too seriously, as that difference is 
>> artificial, and thus natural for entities developing a super-ego.
>> 
>> Machines and AIs do develop by natural/artificial selection, notably 
>> through economic pressure. Computers need to “earn their living” by 
>> doing some work for us. It is only one more loop in the evolution process. 
>> That is not new. Jacques Lafitte wrote a book in 1911 (published in 1930) 
>> in which he argues that the development of machines is a collateral development 
>> of humanity, and the continuation of evolution. 
> 
> You are just muddling the point.  Computers don't evolve by random variation 
> with descent and natural (or artificial) selection.  They evolve to satisfy 
> us.  As such they do not need, and therefore won't have, motives to eat or be 
> eaten or to reproduce...unless we provide them or we allow them to develop by 
> random variation.

Like with genetic algorithms, but those are implementation details. As I said, 
the difference between artificial and natural is artificial. Even species do 
not evolve just by random variation. Already in bacteria, some genes 
provoke mutation, and some meta-programming is at play at the biological level.
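The "meta-programming" point can be made concrete with a minimal genetic algorithm of my own devising (a sketch, not anything from the thread) in which the per-bit mutation rate is itself an evolvable gene, the software analogue of a mutator gene. The all-ones fitness function, population sizes, and rate bounds are arbitrary choices for the illustration:

```python
import random

random.seed(1)

GENOME_LEN = 20  # toy problem: fitness = number of 1-bits

def fitness(ind):
    return sum(ind["genome"])

def mutate(parent):
    """Copy a parent; the mutation rate is inherited and perturbed
    along with the bits it acts on (the 'meta-programming')."""
    return {
        "genome": [b ^ (random.random() < parent["rate"])
                   for b in parent["genome"]],
        "rate": min(0.5, max(0.001, parent["rate"] * random.uniform(0.8, 1.25))),
    }

# start from all-zero genomes with a modest mutation rate
pop = [{"genome": [0] * GENOME_LEN, "rate": 0.05} for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                                  # "artificial" selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(pop, key=fitness)
print(fitness(best), round(best["rate"], 4))
```

Nothing selects on the rate directly; near the optimum, lineages whose inherited rate is too high tend to spoil fit genomes and drop out, so the rate evolves under indirect selection.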

Bruno



> 
>> 
>> 
>> 
>> 
>>> 
 
> And to help yourself is saying no more than that it will have some fundamental 
> goal...otherwise there's no distinction between "help" and "hurt".
 It helps to eat, it hurts to be eaten. It is the basic idea.
>>> 
>>> For "helps" and "hurts" what?  Successful replication?
>> 
>> 
>> No. Happiness. The goal is happiness. We forget this because some bandits 
>> have brainwashed us with the idea that happiness is a sin (to steal our 
>> money). 
>> The goal is happiness, serenity, contemplation, pleasure, joy, … and 
>> recognising ourselves in as many others as possible. To find unity in the 
>> many, and the many in unity.
> 
> Happiness is also rising above others and discovering new things they don't 
> know, conquering new realms.  Many different things make people happy, at 
> least temporarily.  So how do you know there is some "fundamental goal"?  
> Darwinian evolution is a theory within which you can prove that reproduction 
> will be a fundamental goal of most creatures.  But that proof doesn't work 
> for manufactured objects.
> 
> Brent
> 
> 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-17 Thread Philip Thrift


On Tuesday, September 17, 2019 at 2:15:52 AM UTC-5, Alan Grayson wrote:
>
>
>
> On Monday, September 16, 2019 at 10:17:24 PM UTC-6, Brent wrote:
>>
>>
>>
>> On 9/16/2019 7:49 PM, Alan Grayson wrote:
>>
>>
>>
>> On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote: 
>>>
>>>
>>>
>>> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
>>> > My take on AI; it's no more dangerous than present day computers, 
>>> > because it has no WILL, and can only do what it's told to do. I 
>>> > suppose it could be told to do bad things, and if it has inherent 
>>> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
>>> > Still. AG 
>>>
>>> The danger is not so much in AI being told to do bad things, but that in 
>>> doing the good things it was told to do it uses unforeseen methods that 
>>> have disastrous consequences.  It's like Henry Ford was told to invent 
>>> fast, convenient personal transportation...and created traffic jams and 
>>> global warming. 
>>>
>>> Brent 
>>>
>>
>> One could expect military applications, such as robots replacing human
>> infantry, their job to kill the enemy. So if their programming had a 
>> flaw, 
>> accidental or intentional, these AI infantry could start killing 
>> indiscriminately.
>>
>>
>>  Less likely than with human troops, who have built-in emotions of revenge 
>> and retaliation.
>>
>> It would be hard to stop them since they'd come with self defense 
>> functions. AG
>>
>>
>> But we also know a lot more about their internal construction and 
>> functions.  We would probably even build in an Achilles heel.
>>
>> Brent
>>
>
> I think you underestimate the evil that men can do, not to mention some 
> bit flips due to cosmic rays that could change their MO's entirely. AG 
>


Properly-programmed robots would negotiate and avoid any war, killing, or 
destruction all together.

@philipthrift 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-17 Thread Alan Grayson


On Monday, September 16, 2019 at 10:17:24 PM UTC-6, Brent wrote:
>
>
>
> On 9/16/2019 7:49 PM, Alan Grayson wrote:
>
>
>
> On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote: 
>>
>>
>>
>> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
>> > My take on AI; it's no more dangerous than present day computers, 
>> > because it has no WILL, and can only do what it's told to do. I 
>> > suppose it could be told to do bad things, and if it has inherent 
>> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
>> > Still. AG 
>>
>> The danger is not so much in AI being told to do bad things, but that in 
>> doing the good things it was told to do it uses unforeseen methods that 
>> have disastrous consequences.  It's like Henry Ford was told to invent 
>> fast, convenient personal transportation...and created traffic jams and 
>> global warming. 
>>
>> Brent 
>>
>
> One could expect military applications, such as robots replacing human
> infantry, their job to kill the enemy. So if their programming had a flaw, 
> accidental or intentional, these AI infantry could start killing 
> indiscriminately.
>
>
>  Less likely than with human troops, who have built-in emotions of revenge 
> and retaliation.
>
> It would be hard to stop them since they'd come with self defense 
> functions. AG
>
>
> But we also know a lot more about their internal construction and 
> functions.  We would probably even build in an Achilles heel.
>
> Brent
>

I think you underestimate the evil that men can do, not to mention some bit 
flips due to cosmic rays that could change their MO's entirely. AG 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread 'Brent Meeker' via Everything List



On 9/16/2019 7:49 PM, Alan Grayson wrote:



On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote:



On 9/16/2019 6:07 AM, Alan Grayson wrote:
> My take on AI; it's no more dangerous than present day computers,
> because it has no WILL, and can only do what it's told to do. I
> suppose it could be told to do bad things, and if it has inherent
> defenses, it can't be stopped, like Gort in The Day the Earth Stood
> Still. AG

The danger is not so much in AI being told to do bad things, but that in
doing the good things it was told to do it uses unforeseen methods that
have disastrous consequences.  It's like Henry Ford was told to invent
fast, convenient personal transportation...and created traffic jams and
global warming.

Brent


One could expect military applications, such as robots replacing human
infantry, their job to kill the enemy. So if their programming had a flaw,
accidental or intentional, these AI infantry could start killing
indiscriminately.


 Less likely than with human troops, who have built-in emotions of 
revenge and retaliation.


It would be hard to stop them since they'd come with self defense 
functions. AG


But we also know a lot more about their internal construction and 
functions.  We would probably even build in an Achilles heel.


Brent



Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread Alan Grayson


On Monday, September 16, 2019 at 2:41:26 PM UTC-6, Brent wrote:
>
>
>
> On 9/16/2019 6:07 AM, Alan Grayson wrote: 
> > My take on AI; it's no more dangerous than present day computers, 
> > because it has no WILL, and can only do what it's told to do. I 
> > suppose it could be told to do bad things, and if it has inherent 
> > defenses, it can't be stopped, like Gort in The Day the Earth Stood 
> > Still. AG 
>
> The danger is not so much in AI being told to do bad things, but that in 
> doing the good things it was told to do it uses unforeseen methods that 
> have disastrous consequences.  It's like Henry Ford was told to invent 
> fast, convenient personal transportation...and created traffic jams and 
> global warming. 
>
> Brent 
>

One could expect military applications, such as robots replacing human
infantry, their job to kill the enemy. So if their programming had a flaw, 
accidental or intentional, these AI infantry could start killing 
indiscriminately.
It would be hard to stop them since they'd come with self defense 
functions. AG 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread 'Brent Meeker' via Everything List




On 9/16/2019 6:07 AM, Alan Grayson wrote:
My take on AI; it's no more dangerous than present day computers, 
because it has no WILL, and can only do what it's told to do. I 
suppose it could be told to do bad things, and if it has inherent 
defenses, it can't be stopped, like Gort in The Day the Earth Stood 
Still. AG 


The danger is not so much in AI being told to do bad things, but that in 
doing the good things it was told to do it uses unforeseen methods that 
have disastrous consequences.  It's like Henry Ford was told to invent 
fast, convenient personal transportation...and created traffic jams and 
global warming.


Brent



Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread 'Brent Meeker' via Everything List



On 9/16/2019 4:43 AM, Bruno Marchal wrote:


On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List 
> wrote:




On 9/15/2019 5:18 AM, Bruno Marchal wrote:

Why would it even have a simple goal like "survive”?

It is a short code which makes the organism better for eating and avoiding 
being eaten.


An organism needs to eat and avoid being eaten because that what 
evolution selects.  AIs don't evolve by natural selection.


A monist who embeds the subject in the object will not take the 
difference between artificial and natural too seriously, as that 
difference is artificial, and thus natural for entities developing 
a super-ego.


Machines and AIs do develop by natural/artificial selection, notably 
through economic pressure. Computers need to “earn their living” 
by doing some work for us. It is only one more loop in the evolution 
process. That is not new. Jacques Lafitte wrote a book in 1911 
(published in 1930) in which he argues that the development of machines 
is a collateral development of humanity, and the continuation of evolution.


You are just muddling the point.  Computers don't evolve by random 
variation with descent and natural (or artificial) selection.  They 
evolve to satisfy us.  As such they do not need, and therefore won't 
have, motives to eat or be eaten or to reproduce...unless we provide 
them or we allow them to develop by random variation.












And to help yourself is saying no more than that it will have some fundamental goal...otherwise 
there's no distinction between "help" and "hurt".

It helps to eat, it hurts to be eaten. It is the basic idea.


For "helps" and "hurts" what?  Successful replication?



No. Happiness. The goal is happiness. We forget this because some 
bandits have brainwashed us with the idea that happiness is a sin (to 
steal our money).
The goal is happiness, serenity, contemplation, pleasure, joy, … and 
recognising ourselves in as many others as possible. To find unity in 
the many, and the many in unity.


Happiness is also rising above others and discovering new things they 
don't know, conquering new realms.  Many different things make people 
happy, at least temporarily.  So how do you know there is some 
"fundamental goal"?  Darwinian evolution is a theory within which you 
can prove that reproduction will be a fundamental goal of most 
creatures.  But that proof doesn't work for manufactured objects.


Brent



Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread Alan Grayson


On Monday, September 9, 2019 at 4:06:33 AM UTC-6, John Clark wrote:
>
> Just 4 years ago 700 AI programs competed against each other and tried to 
> pass an 8th-Grade multiple choice Science Test and win an $80,000 prize, but 
> they all flunked; the best one only got 59.3% of the questions correct. But 
> last Wednesday the Allen Institute unveiled an AI called "Aristo" that got 
> 90.7% correct and then answered 83% of the 12th-grade science test 
> questions correctly.
>
> It seems to me that for a long time AI improvement was just creeping along 
> but in the last few years things started to pick up speed.
>
> AI goes from F to A on the N.Y. Regents Science Exam 
> 
>
> John K Clark
>

My take on AI: it's no more dangerous than present-day computers, because 
it has no WILL, and can only do what it's told to do. I suppose it could be 
told to do bad things, and if it has inherent defenses, it can't be 
stopped, like Gort in The Day the Earth Stood Still. AG 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/a4c2a1b4-ffaf-4fcb-af3b-2aaf3202047d%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread Bruno Marchal

> On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/15/2019 5:18 AM, Bruno Marchal wrote:
>>> Why would it even have a simple goal like "survive”? 
>> It is a short code which makes the organism better for eating and avoiding 
>> being eaten.
> 
> An organism needs to eat and avoid being eaten because that is what evolution 
> selects.  AIs don't evolve by natural selection.

A monist who embeds the subject in the object will not take the difference 
between artificial and natural too seriously, as that difference is 
artificial, and thus natural for entities developing a super-ego.

Machines and AIs do develop by natural/artificial selection, notably through 
economic pressure. The computers need to “earn their life”, by doing some 
work for us. It is only one more loop in the evolutionary process. That is not 
new. Jacques Lafitte wrote a book in 1911 (published in 1930) in which he argues 
that the development of machines is a collateral development of humanity, and 
the continuation of evolution. 




> 
>> 
>> 
>> 
>> 
>>> And to help yourself is saying no more that it will have some fundamental 
>>> goal...otherwise there's no distinction between "help" and "hurt”.
>> It helps to eat, it hurts to be eaten. It is the basic idea.
> 
> For "helps" and "hurts" what?  Successful replication?


No. Happiness. The goal is happiness. We forget this because some bandits have 
brainwashed us with the idea that happiness is a sin (to steal our money). 
The goal is happiness, serenity, contemplation, pleasure, joy, … and 
recognising ourselves in as many others as possible. To find unity in the many, 
and the many in unity.

Bruno



> 
> Brent
> 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/62FF5446-9917-4265-96BF-B95CF3C3233D%40ulb.ac.be.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-16 Thread Bruno Marchal

> On 15 Sep 2019, at 14:51, Alan Grayson  wrote:
> 
> 
> 
> On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:
> 
> 
> On Friday, September 13, 2019 at 9:07:58 AM UTC-6, John Clark wrote:
> On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson > wrote:
> 
> >> The only thing I can ascribe consciousness to with absolute certainty is 
> >> me. As for intelligence, if something, man or machine, has no way of 
> >> knowing when it made a mistake or got a question wrong it will never get 
> >> any better, but if it has feedback and can improve its ability to 
> >> correctly answer difficult questions then it is intelagent. The only 
> >> reason I ascribe intelligence to Einstein is that he greatly improved his 
> >> ability to answer difficult physics questions (like what is the nature of 
> >> space and time?), he was much better at it when he was 27 than when he was 
> >> 7.  
> 
> > The point I am making is that modern computers programmed by skillful 
> > programmers, can improve the "AI"'s performance.
> 
> Well yes. Obviously a skilled programmer can improve an AI but that's not the 
> only thing that can; a modern AI program can improve its own performance.
> 
> I just meant to indicate it can be programmed to improve its performance, but 
> I see nothing to indicate that it's much different from ordinary computers 
> which don't show any property associated with, for want of a better word, 
> WILL. AG 
>  
> > I see nothing to specially characterize this as "artificial intelligence". 
> > What am I missing from your perspective? AG
> 
> It's certainly artificial and if computers had never been invented and a 
> human did exactly what the computer did you wouldn't hesitate for one 
> nanosecond in calling what the human did intelligent, so why in the world 
> isn't it Artificial Intelligence?  
> 
> OK, AG 
> 
>  John K Clark
> 
> Bruno seems to think that if some imaginary entity is "computable", it can 
> and must exist as a "physical” entity

Not really. I am claiming that, once we assume mechanism (like Darwin, 
Descartes, Turing, …), then physical reality cannot be a primary thing, 
i.e. something that we have to assume to get a theory of prediction and 
observation. If something exists in some fundamental sense, it is not as a 
physical object, but as a mathematical object. Then Digital Mechanism lets us 
choose which Turing-universal system (a purely mathematical, even arithmetical, 
notion) to postulate, and as elementary arithmetic is such a universal system, 
I use that one, as people have been familiar with it since primary school.



> -- which is why I think he adds "mechanism" to his model for producing 
> conscious beings.

The hypothesis of Mechanism is the hypothesis that there is a level of 
description of the functioning of my brain such that I would survive, in the 
usual clinical sense, with a computer emulating my brain at that level. It is a 
very weak version of Mechanism, as no bound is put on that description level, 
as long as it exists and is digitally emulable. Typically, Penrose is the only 
scientist explicitly negating Mechanism, whereas Hameroff is still a mechanist. 
My reasoning goes through even if the brain is a quantum computer, thanks to 
Deutsch’s result that a quantum computer does not violate the Church-Turing thesis.



> But this, if correct, seems no different from equating a map to a territory.

That is correct. But that is because a brain is already a sort of map, and a 
sufficiently precise copy of a map is a map.



> If we can write the DNA of a horse with a horn, does this alone ipso facto 
> imply that unicorns are existent beings? AG 


That depends on the definition of unicorn. But staying alive-and-well is a more 
absolute value, one that you can judge when undergoing an operation in a hospital, 
and the mechanist hypothesis is that we can survive with a digital brain 
transplant, just as today we could say that we can survive with an artificial 
heart. That’s why I give an operational definition of “mechanism”: it means 
accepting the doctor’s proposition to replace the brain, or the body, by a 
computer.

The negation of Mechanism is much more speculative, because we don’t know of any 
non-Turing-emulable phenomenon in nature (except the wave-packet reduction 
fantasy). 

Only ad hoc mathematical constructions show that some non-computable functions 
can be solutions of the Schrödinger equation, like Nielsen’s Ae^{iHt} with H 
being a non-computable real number (like Post’s or Chaitin’s numbers).
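The construction alluded to above can be sketched in LaTeX notation. This is a hedged reading on my part: I assume the simplest case, a one-dimensional system whose "Hamiltonian" H is a single real number, which is not spelled out in the thread.

```latex
% Sketch, under the assumption of a one-dimensional system whose
% "Hamiltonian" H is a single real number, chosen non-computable
% (e.g. a Chaitin Omega number).
\psi(t) = A\, e^{iHt}
% This satisfies the Schroedinger-type equation
%   -i \frac{d\psi}{dt} = H\,\psi
% for any real H; but if H is non-computable, the map t \mapsto \psi(t)
% cannot be computed to arbitrary precision by any Turing machine, even
% though it is a perfectly well-defined solution.
```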

Bruno




> 
> 
> 
> 
>  
> 

Re: An AI can now pass a 12th-Grade Science Test

2019-09-15 Thread 'Brent Meeker' via Everything List



On 9/15/2019 5:18 AM, Bruno Marchal wrote:

Why would it even have a simple goal like "survive”?

It is a short code which makes the organism better for eating and avoiding 
being eaten.


An organism needs to eat and avoid being eaten because that is what 
evolution selects.  AIs don't evolve by natural selection.

And to help yourself is saying no more than that it will have some fundamental goal...otherwise 
there's no distinction between "help" and "hurt”.

It helps to eat, it hurts to be eaten. It is the basic idea.


For "helps" and "hurts" what?  Successful replication?

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/6cbdd7be-9474-ceb0-86fa-7e269c9c8a71%40verizon.net.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-15 Thread Philip Thrift


On Sunday, September 15, 2019 at 7:51:55 AM UTC-5, Alan Grayson wrote:
>
>
>
> On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:
>>
>>
>>
>> *Bruno seems to think that if some imaginary entity is "computable", it 
> can and must exist as a "physical" entity -- which is why I think he adds 
> "mechanism" to his model for producing conscious beings. But this, if 
> correct, seems no different from equating a map to a territory. If we can 
> write the DNA of a horse with a horn, does this alone ipso facto imply that 
> unicorns are existent beings? AG *
>

Ones that don't fly:

https://kera.pbslearningmedia.org/resource/unicorn-dna/unicorn-dna/

@philipthrift 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/02172c38-3c3b-42c0-8bd7-e62a034fa19e%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-15 Thread Alan Grayson


On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:
>
>
>
> On Friday, September 13, 2019 at 9:07:58 AM UTC-6, John Clark wrote:
>>
>> On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson  wrote:
>>
>> >> The only thing I can ascribe consciousness to with absolute certainty 
 is me. As for intelligence, if something, man or machine, has no way of 
 knowing when it made a mistake or got a question wrong it will never 
 get any better, but if it has feedback and can improve its ability to 
 correctly answer difficult questions then it is intelligent. The only 
 reason 
 I ascribe intelligence to Einstein is that he greatly improved his ability 
 to answer difficult physics questions (like what is the nature of space 
 and 
 time?), he was much better at it when he was 27 than when he was 7.  

>>>
>>> *> The point I am making is that modern computers programmed by skillful 
>>> programmers, can improve the "AI"'s performance. *
>>>
>>
>> Well yes. Obviously a skilled programmer can improve an AI but that's not 
>> the only thing that can; a modern AI program can improve its own 
>> performance.
>>
>
> I just meant to indicate it can be programmed to improve its performance, 
> but I see nothing to indicate that it's much different from ordinary 
> computers which don't show any property associated with, for want of a 
> better word, WILL. AG 
>
>>  
>>
>>> *> I see nothing to specially characterize this as "artificial 
>>> intelligence". What am I missing from your perspective? AG*
>>>
>>
>> It's certainly artificial and if computers had never been invented and a 
>> human did exactly what the computer did you wouldn't hesitate for one 
>> nanosecond in calling what the human did intelligent, so why in the world 
>> isn't it Artificial Intelligence?  
>>
>
> OK, AG 
>
>>
>>  John K Clark
>>
>
*Bruno seems to think that if some imaginary entity is "computable", it can 
and must exist as a "physical" entity -- which is why I think he adds 
"mechanism" to his model for producing conscious beings. But this, if 
correct, seems no different from equating a map to a territory. If we can 
write the DNA of a horse with a horn, does this alone ipso facto imply that 
unicorns are existent beings? AG *

>
>>
>>
>>
>>  
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/30bd8cd9-3132-4699-8437-3a22b4c6d293%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-15 Thread Bruno Marchal


> On 13 Sep 2019, at 23:25, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/13/2019 4:02 AM, Bruno Marchal wrote:
>>> On 12 Sep 2019, at 06:52, 'Brent Meeker' via Everything List 
>>>  wrote:
>>> 
>>> 
>>> 
>>> On 9/11/2019 9:33 PM, Tomasz Rola wrote:
 On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything 
 List wrote:
> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
>> On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything 
>> List wrote:
>>> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
 On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via 
 Everything List wrote:
> Why escape to space when there a lots of resources here?  An AI with
> access to everything connected to the internet shouldn't have any
> trouble taking control of the Earth.
 [...]
 
 You reason like human - "I will stay here because it is nice and I can
 have internet".
 
 [...]
> Cooperation is one of our most important survival strategies.  Lone
> human beings are food for vultures.
> 
>  Humans in tribes rule the world.

 This is just one of those godlike delusions I have written
 about. Either this or you can name even one such tribe. Hint: explain
 how many earthquakes and volcanic eruptions those rulers have
 prevented during last decade.
>>> I only meant relative to other sentient beings.  Of course no one has 
>>> changed the speed of light either and neither will a super-AI. My point is 
>>> that cooperation is an inherent trait of humans, selected by evolution.  
>>> But an AI will not necessarily have that trait.
>> There is no total (everywhere defined) universal Turing machine, so they 
>> are born with a conflict between security (limiting themselves to a subset of 
>> the total recursive functions) and liberty/universality (getting all total 
>> computable functions, but then also some strictly partial ones, and never 
>> being able to know that in advance).
>> That explains why universal machines are never satisfied and evolve, in 
>> an escaping-forward sort of way. Cooperation and evolution are inevitable in 
>> the setting.
> 
> Cooperation with who? 

In between the universal machines.



> and at what cost? 

The risk of losing our universality/liberty, like when being exploited. That 
can lead to the appearance of a new universal machine, as when cells 
cooperate in a multicellular organism: many will specialise in one task, like 
muscle cells, digestive cells, or neurons, etc. They remain universal, 
but can no longer exercise their universality. But the new organism will be able 
to do that, sooner or later.





> That's like saying our cooperation with cattle is inevitable.


It is a very particular case, but it was probably inevitable, although this 
form of cooperation is more like exploitation. The cattle do not benefit much 
when “cooperating" with humans, nor do the aphids when used by ants for their 
“honey”. Well, they do get some protection from predators, like the cattle get 
some protection from the wolves.



>> 
>> 
>> 
>> 
 [...]
>> nice air of being godlike. Again, I guess AI will have no need for
>> feeling like this, or not much of feelings at all. Feeling is
>> adversarial to judgement.
> I disagree.  Feeling is just the mark of value,  and values are
> necessary for judgement, at least any judgment of what action to
> take.
 I disagree. I can easily give something a value without feeling about
 it. Example: gold is just a yellow metal. I know other people value it
 a lot, so I might preserve it for trading, but it does not make very
 good knives. Highly impractical in the woods or for plowing
 fields. But it might be used for catching fish, perhaps. They seem to
 like swallowing little blinking things attached to a hook.
>>> I was referring to fundamental values.  Of course many things, like gold 
>>> and fish hooks, have instrumental value which derives from their usefulness 
>>> in satisfying fundamental values, the ones that correlate with feelings.  
>>> If the AI has no fundamental values, it will have no instrumental ones either.
>> It will have all of this with simple universal goal, like “help yourself”, 
>> or “do whatever it takes to survive”.
> 
> Why would it even have a simple goal like "survive”? 

It is a short code which makes the organism better for eating and avoiding 
being eaten.




> And to help yourself is saying no more than that it will have some fundamental 
> goal...otherwise there's no distinction between "help" and "hurt”.

It helps to eat, it hurts to be eaten. It is the basic idea.

Bruno



> 
> Brent
> 
>> That can be expressed through small codes (genetic, or not). The probability 
>> that such code appears on Earth might still be very low, making us rare in 
>> the local physical reality, even if provably 

Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread 'Brent Meeker' via Everything List




On 9/13/2019 4:02 AM, Bruno Marchal wrote:

On 12 Sep 2019, at 06:52, 'Brent Meeker' via Everything List 
 wrote:



On 9/11/2019 9:33 PM, Tomasz Rola wrote:

On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
wrote:

On 9/9/2019 10:16 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List 
wrote:

On 9/9/2019 6:55 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List 
wrote:

Why escape to space when there are lots of resources here?  An AI with
access to everything connected to the internet shouldn't have any
trouble taking control of the Earth.

[...]

You reason like human - "I will stay here because it is nice and I can
have internet".

[...]

Cooperation is one of our most important survival strategies.  Lone
human beings are food for vultures.

  Humans in tribes rule the world.


This is just one of those godlike delusions I have written
about. Either this or you can name even one such tribe. Hint: explain
how many earthquakes and volcanic eruptions those rulers have
prevented during last decade.

I only meant relative to other sentient beings.  Of course no one has changed 
the speed of light either and neither will a super-AI. My point is that 
cooperation is an inherent trait of humans, selected by evolution.  But an AI 
will not necessarily have that trait.

There is no total (everywhere defined) universal Turing machine, so they are 
born with a conflict between security (limiting themselves to a subset of the total 
recursive functions) and liberty/universality (getting all total computable 
functions, but then also some strictly partial ones, and never being able to know 
that in advance).
That explains why universal machines are never satisfied and evolve, in an 
escaping-forward sort of way. Cooperation and evolution are inevitable in the 
setting.


Cooperation with who?  and at what cost?  That's like saying our 
cooperation with cattle is inevitable.






[...]

nice air of being godlike. Again, I guess AI will have no need for
feeling like this, or not much of feelings at all. Feeling is
adversarial to judgement.

I disagree.  Feeling is just the mark of value,  and values are
necessary for judgement, at least any judgment of what action to
take.

I disagree. I can easily give something a value without feeling about
it. Example: gold is just a yellow metal. I know other people value it
a lot, so I might preserve it for trading, but it does not make very
good knives. Highly impractical in the woods or for plowing
fields. But it might be used for catching fish, perhaps. They seem to
like swallowing little blinking things attached to a hook.

I was referring to fundamental values.  Of course many things, like gold and 
fish hooks, have instrumental value which derives from their usefulness in 
satisfying fundamental values, the ones that correlate with feelings.  If the 
AI has no fundamental values, it will have no instrumental ones either.

It will have all of this with simple universal goal, like “help yourself”, or 
“do whatever it takes to survive”.


Why would it even have a simple goal like "survive"?  And to help 
yourself is saying no more than that it will have some fundamental 
goal...otherwise there's no distinction between "help" and "hurt".


Brent


That can be expressed through small codes (genetic, or not). The probability 
that such code appears on Earth might still be very low, making us rare in the 
local physical reality, even if provably infinitely numerous in the global 
arithmetical reality.

Bruno



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/b212022f-9313-a6c3-6309-61ab0719fd9a%40verizon.net.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread Alan Grayson


On Friday, September 13, 2019 at 9:07:58 AM UTC-6, John Clark wrote:
>
> On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson  > wrote:
>
> >> The only thing I can ascribe consciousness to with absolute certainty 
>>> is me. As for intelligence, if something, man or machine, has no way of 
>>> knowing when it made a mistake or got a question wrong it will never 
>>> get any better, but if it has feedback and can improve its ability to 
>>> correctly answer difficult questions then it is intelligent. The only reason 
>>> I ascribe intelligence to Einstein is that he greatly improved his ability 
>>> to answer difficult physics questions (like what is the nature of space and 
>>> time?), he was much better at it when he was 27 than when he was 7.  
>>>
>>
>> *> The point I am making is that modern computers programmed by skillful 
>> programmers, can improve the "AI"'s performance. *
>>
>
> Well yes. Obviously a skilled programmer can improve an AI but that's not 
> the only thing that can; a modern AI program can improve its own 
> performance.
>

I just meant to indicate it can be programmed to improve its performance, 
but I see nothing to indicate that it's much different from ordinary 
computers which don't show any property associated with, for want of a 
better word, WILL. AG 

>  
>
>> *> I see nothing to specially characterize this as "artificial 
>> intelligence". What am I missing from your perspective? AG*
>>
>
> It's certainly artificial and if computers had never been invented and a 
> human did exactly what the computer did you wouldn't hesitate for one 
> nanosecond in calling what the human did intelligent, so why in the world 
> isn't it Artificial Intelligence?  
>

OK, AG 

>
>  John K Clark
>
>
>
>
>  
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/24ddac74-46f5-4267-9cdf-dba7db95dfe5%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread John Clark
On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson  wrote:

>> The only thing I can ascribe consciousness to with absolute certainty is
>> me. As for intelligence, if something, man or machine, has no way of
>> knowing when it made a mistake or got a question wrong it will never get any
>> better, but if it has feedback and can improve its ability to correctly
>> answer difficult questions then it is intelligent. The only reason I ascribe
>> intelligence to Einstein is that he greatly improved his ability to answer
>> difficult physics questions (like what is the nature of space and time?),
>> he was much better at it when he was 27 than when he was 7.
>>
>
> *> The point I am making is that modern computers programmed by skillful
> programmers, can improve the "AI"'s performance. *
>

Well yes. Obviously a skilled programmer can improve an AI but that's not the
only thing that can; a modern AI program can improve its own performance.


> *> I see nothing to specially characterize this as "artificial
> intelligence". What am I missing from your perspective? AG*
>

It's certainly artificial and if computers had never been invented and a
human did exactly what the computer did you wouldn't hesitate for one
nanosecond in calling what the human did intelligent, so why in the world
isn't it Artificial Intelligence?

 John K Clark







-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv25Lv1v2KJ-NZZVF3gQ%2BNsPHctUardN3%3DAO2TMFX-tQaw%40mail.gmail.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread Alan Grayson


On Friday, September 13, 2019 at 6:15:08 AM UTC-6, John Clark wrote:
>
> On Fri, Sep 13, 2019 at 3:35 AM Alan Grayson  > wrote:
>
> *> If it knows which questions it got wrong, and the correct reply, it 
>> could easily be programmed to improve over time without ascribing 
>> "intelligence" or "consciousness" to it.  Can't you admit that? AG*
>
>
> The only thing I can ascribe consciousness to with absolute certainty is 
> me. As for intelligence, if something, man or machine, has no way of 
> knowing when it made a mistake or got a question wrong it will never get any 
> better, but if it has feedback and can improve its ability to correctly 
> answer difficult questions then it is intelligent. The only reason I ascribe 
> intelligence to Einstein is that he greatly improved his ability to answer 
> difficult physics questions (like what is the nature of space and time?), 
> he was much better at it when he was 27 than when he was 7.  
>
> John K Clark  
>

The point I am making is that modern computers, programmed by skillful 
programmers, can improve the "AI"'s performance. I see nothing to 
specially characterize this as "artificial intelligence". What am I missing 
from your perspective? AG

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/02eba413-dc29-4621-9692-ae9e8cfba125%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread John Clark
On Fri, Sep 13, 2019 at 3:35 AM Alan Grayson  wrote:

*> If it knows which questions it got wrong, and the correct reply, it
> could easily be programmed to improve over time without ascribing
> "intelligence" or "consciousness" to it.  Can't you admit that? AG*


The only thing I can ascribe consciousness to with absolute certainty is
me. As for intelligence, if something, man or machine, has no way of
knowing when it made a mistake or got a question wrong it will never get any
better, but if it has feedback and can improve its ability to correctly
answer difficult questions then it is intelligent. The only reason I ascribe
intelligence to Einstein is that he greatly improved his ability to answer
difficult physics questions (like what is the nature of space and time?),
he was much better at it when he was 27 than when he was 7.

John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv2XDaixLQHzHqMi9ySeD%3D0-vsbSCokDXM8iNSAUpOhdvw%40mail.gmail.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread Bruno Marchal


> On 12 Sep 2019, at 06:52, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 9/11/2019 9:33 PM, Tomasz Rola wrote:
>> On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
>> wrote:
>>> 
>>> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
 On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything 
 List wrote:
> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
>> On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything 
>> List wrote:
>>> Why escape to space when there are lots of resources here?  An AI with
>>> access to everything connected to the internet shouldn't have any
>>> trouble taking control of the Earth.
>> [...]
>> 
>> You reason like human - "I will stay here because it is nice and I can
>> have internet".
>> 
>> [...]
>>> Cooperation is one of our most important survival strategies.  Lone
>>> human beings are food for vultures.
>>> 
>>>  Humans in tribes rule the world.
>>
>> This is just one of those godlike delusions I have written
>> about. Either this or you can name even one such tribe. Hint: explain
>> how many earthquakes and volcanic eruptions those rulers have
>> prevented during last decade.
> 
> I only meant relative to other sentient beings.  Of course no one has changed 
> the speed of light either and neither will a super-AI. My point is that 
> cooperation is an inherent trait of humans, selected by evolution.  But an AI 
> will not necessarily have that trait.

There is no total (everywhere defined) universal Turing machine, so universal 
machines are born with a conflict between security (limiting themselves to a 
subset of the total recursive functions) and liberty/universality (getting all 
total computable functions, but then also some strictly partial ones, and never 
being able to know that in advance).
That explains why universal machines are never satisfied, and evolve, in an 
escaping-forward sort of way. Cooperation and evolution are inevitable in this 
setting.




> 
>> 
>> [...]
 nice air of being godlike. Again, I guess AI will have no need for
 feeling like this, or not much of feelings at all. Feeling is
 adversarial to judgement.
>>> I disagree.  Feeling is just the mark of value,  and values are
>>> necessary for judgement, at least any judgment of what action to
>>> take.
>> I disagree. I can easily give something a value without feeling about
>> it. Example: gold is just a yellow metal. I know other people value it
>> a lot, so I might preserve it for trading, but it does not make very
>> good knives. Highly impractical in the woods or for plowing
>> fields. But it might be used for catching fish, perhaps. They seem to
>> like swallowing little blinking things attached to a hook.
> 
> I was referring to fundamental values.  Of course many things, like gold and 
> fish hooks, have instrumental value which derives from their usefulness in 
> satisfying fundamental values, the ones that correlate with feelings.  If the 
> AI has no fundamental values, it will have no instrumental ones either.

It will have all of this with a simple universal goal, like “help yourself”, or 
“do whatever it takes to survive”. That can be expressed through small codes 
(genetic, or not). The probability that such a code appears on Earth might still 
be very low, making us rare in the local physical reality, even if provably 
infinitely numerous in the global arithmetical reality.

Bruno




> 
> Brent
> 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-13 Thread Alan Grayson


On Monday, September 9, 2019 at 4:06:33 AM UTC-6, John Clark wrote:
>
> Just 4 years ago 700 AI programs competed against each other and tried to 
> pass an 8th-Grade multiple-choice Science Test and win an $80,000 prize, but 
> they all flunked, the best one only got 59.3% of the questions correct. But 
> last Wednesday the Allen Institute unveiled an AI called "Aristo" that got 
> 90.7% correct and then answered 83% of the 12th grade science test 
> questions correctly.
>
> It seems to me that for a long time AI improvement was just creeping along 
> but in the last few years things started to pick up speed.
>
> AI goes from F to A on the N.Y. Regents Science Exam 
> 
>
> John K Clark
>

If it knows which questions it got wrong, and the correct reply, it could 
easily be programmed to improve over time without ascribing "intelligence" 
or "consciousness" to it.  Can't you admit that? AG



Re: An AI can now pass a 12th-Grade Science Test

2019-09-12 Thread spudboy100 via Everything List
Well, I suppose we all will find out in the next few years regarding AI 
cooperation. My guess is the smarter these get, the more they will dovetail or 
fit in with human needs and wants. I sort of see these, after much development, 
becoming one with the human species. Think of it as the brain going beyond the 
amygdala to the cerebrum and cerebellum. Or, you got chocolate on my peanut 
butter, but you got peanut butter on my chocolate! Or, endosymbiosis - 
http://bioscience.jbpub.com/cells/MBIO1322.aspx  Maybe we get to be the 
emotional part of this new species? We get the graphene bodies, so useful for 
interstellar travel. 


-Original Message-
From: 'Brent Meeker' via Everything List 
To: everything-list 
Sent: Thu, Sep 12, 2019 12:52 am
Subject: Re: An AI can now pass a 12th-Grade Science Test



On 9/11/2019 9:33 PM, Tomasz Rola wrote:
> On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
> wrote:
>>
>> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
>>> On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything 
>>> List wrote:
>>>> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
>>>>> On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything 
>>>>> List wrote:
>>>>>> Why escape to space when there are lots of resources here?  An AI with
>>>>>> access to everything connected to the internet shouldn't have any
>>>>>> trouble taking control of the Earth.
> [...]
>
> You reason like human - "I will stay here because it is nice and I can
> have internet".
>
> [...]
>> Cooperation is one of our most important survival strategies.  Lone
>> human beings are food for vultures.
>>
>>  Humans in tribes rule the world.
>    
> This is just one of those godlike delusions I have written
> about. Either this or you can name even one such tribe. Hint: explain
> how many earthquakes and volcanic eruptions those rulers have
> prevented during last decade.

I only meant relative to other sentient beings.  Of course no one has 
changed the speed of light either and neither will a super-AI. My point 
is that cooperation is an inherent trait of humans, selected by 
evolution.  But an AI will not necessarily have that trait.

>
> [...]
>>> nice air of being godlike. Again, I guess AI will have no need for
>>> feeling like this, or not much of feelings at all. Feeling is
>>> adversarial to judgement.
>> I disagree.  Feeling is just the mark of value,  and values are
>> necessary for judgement, at least any judgment of what action to
>> take.
> I disagree. I can easily give something a value without feeling about
> it. Example: gold is just a yellow metal. I know other people value it
> a lot, so I might preserve it for trading, but it does not make very
> good knives. Highly impractical in the woods or for plowing
> fields. But it might be used for catching fish, perhaps. They seem to
> like swallowing little blinking things attached to a hook.

I was referring to fundamental values.  Of course many things, like gold 
and fish hooks, have instrumental value which derives from their 
usefulness in satisfying fundamental values, the ones that correlate 
with feelings.  If the AI has no fundamental values, it will have no 
instrumental ones either.

Brent




Re: An AI can now pass a 12th-Grade Science Test

2019-09-11 Thread 'Brent Meeker' via Everything List




On 9/11/2019 9:33 PM, Tomasz Rola wrote:

On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
wrote:


On 9/9/2019 10:16 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List 
wrote:

On 9/9/2019 6:55 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List 
wrote:

Why escape to space when there are lots of resources here?  An AI with
access to everything connected to the internet shouldn't have any
trouble taking control of the Earth.

[...]

You reason like human - "I will stay here because it is nice and I can
have internet".

[...]

Cooperation is one of our most important survival strategies.  Lone
human beings are food for vultures.

  Humans in tribes rule the world.


This is just one of those godlike delusions I have written
about. Either this or you can name even one such tribe. Hint: explain
how many earthquakes and volcanic eruptions those rulers have
prevented during last decade.


I only meant relative to other sentient beings.  Of course no one has 
changed the speed of light either and neither will a super-AI. My point 
is that cooperation is an inherent trait of humans, selected by 
evolution.  But an AI will not necessarily have that trait.




[...]

nice air of being godlike. Again, I guess AI will have no need for
feeling like this, or not much of feelings at all. Feeling is
adversarial to judgement.

I disagree.  Feeling is just the mark of value,  and values are
necessary for judgement, at least any judgment of what action to
take.

I disagree. I can easily give something a value without feeling about
it. Example: gold is just a yellow metal. I know other people value it
a lot, so I might preserve it for trading, but it does not make very
good knives. Highly impractical in the woods or for plowing
fields. But it might be used for catching fish, perhaps. They seem to
like swallowing little blinking things attached to a hook.


I was referring to fundamental values.  Of course many things, like gold 
and fish hooks, have instrumental value which derives from their 
usefulness in satisfying fundamental values, the ones that correlate 
with feelings.  If the AI has no fundamental values, it will have no 
instrumental ones either.


Brent



Re: An AI can now pass a 12th-Grade Science Test

2019-09-11 Thread Tomasz Rola
On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List 
wrote:
> 
> 
> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
> >On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List 
> >wrote:
> >>
> >>On 9/9/2019 6:55 PM, Tomasz Rola wrote:
> >>>On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything 
> >>>List wrote:
> Why escape to space when there are lots of resources here?  An AI with
> access to everything connected to the internet shouldn't have any
> trouble taking control of the Earth.
[...]

You reason like human - "I will stay here because it is nice and I can
have internet".

[...]
> Cooperation is one of our most important survival strategies.  Lone
> human beings are food for vultures. 
> 
>  Humans in tribes rule the world.

   
This is just one of those godlike delusions I have written
about. Either this or you can name even one such tribe. Hint: explain
how many earthquakes and volcanic eruptions those rulers have
prevented during last decade.

[...]
> >nice air of being godlike. Again, I guess AI will have no need for
> >feeling like this, or not much of feelings at all. Feeling is
> >adversarial to judgement.
> 
> I disagree.  Feeling is just the mark of value,  and values are
> necessary for judgement, at least any judgment of what action to
> take.  

I disagree. I can easily give something a value without feeling about
it. Example: gold is just a yellow metal. I know other people value it
a lot, so I might preserve it for trading, but it does not make very
good knives. Highly impractical in the woods or for plowing
fields. But it might be used for catching fish, perhaps. They seem to
like swallowing little blinking things attached to a hook.

> So the question is what will the AI value?  Will it value
> information?  

Nothing can be said for sure and there may be many different kinds of
AI. But if it values nothing, it will have no need to do anything.

[...]
> >I assume that ultimately, AI will want to go somewhere safe, and Earth
> >is full of crazy apes with big guns.
> 
> Assuming this super-AI values self-preservation (which it might not)
> it will make copies of itself and it will easily dispose of all the
> apes via its control of the power grid, hospitals, nuclear power
> plants, biomedical research facilities, ballistic missiles, etc.

There are catastrophic events for which the best bet would be to
colonize a sphere of, say, 1000ly radius. A 500ly radius is not bad
either, and might be more practical (sending an end-to-end message
would only take 1000 years).

[...]
> >maybe exchange of services. During that phase AI will see if there is
> >a prospect of upgrading humans, in order to have companionship in
> >space.
> 
> Why would it want companionship?  Even many quite smart animals are
> not social.  I don't see any reason the super-AI would care one whit
> about humans, except maybe as curiosities...the way some people like
> chihuahuas.

The way I spelled it, you could read my words as "partnership". There
will be no partnership, however. Humans on board will serve useful
purposes, similar to how we use canaries, lab rats and well-behaved
monkeys. Some humans may even reach the status of a cat.

I suppose AI will want to differentiate its mechanisms in order to
minimize the chance of its own catastrophic failure. In Fukushima and
Chernobyl humans did the shitty jobs, not robots. From what I have
read, hard radiation broke the wiring of robots and caused all kinds
of material degradation (with the suggestion that it went so fast that
a robot could not do much). A human can survive a huge EMP and keep
going (even if years later he will die, he could do some useful job
first, like restarting systems).

There might be better choices of materials and production processes to
improve the survival of electronics - the Voyagers and Pioneers keep
going after forty years; the cause of failure there is the decaying
power supply. OTOH, the instruments they have are all quite primitive
by today's measures - for example, no CPU (IIRC).

However, if one assumes that one does not know everything - and I
expect AI to be free from godlike delusions so common among crazy apes
- then one will have to create many failsafe mechanisms, working
synergically towards the goal of repairing damages that AI may
suffer. Having some biological organisms, loyal to AI, would just be
part of this strategy.

[...]
> The AI isn't silicon, it's a program.  It can have new components
> made or even transition to different hardware (c.f. quantum
> computers).

A chess playing software and computer on which it runs are two
different things, agreed. Because the computer can be turned off or
used to run something else.

The AI, the coffee vending machine and the human are each an inseparable
duo of software and hardware. Just MHO. Even if separation can be done,
it might not be trivial.

I am quite sure there will be a lot of silicon in AI. And plenty of
other 

Re: An AI can now pass a 12th-Grade Science Test

2019-09-11 Thread John Clark
On Tue, Sep 10, 2019 at 7:29 PM 'Brent Meeker'  <
everything-list@googlegroups.com> wrote:

*>>> I think they would be careful NOT to have it value its survival. *
>
> >> I think that would mean the AI would need to be in intense constant
> pain for that to happen, or be deeply depressed like the robot Marvin in
> Hitchhiker's Guide to the Galaxy. And I think it would be grossly unethical
> to make such an AI.
>
> * > Why would it mean that?  Why wouldn't the AI agree with Bruno that it
> was just computation and it existed in Platonia anyway so it was
> indifferent to transient existence here?*
>

Because people on this list may say all sorts of screwy things when they
slip into philosophy mode but even Bruno will jump out of the way when he
crosses the street if he sees a bus coming straight for him, or at least he
will if he isn't in constant intense pain or is deeply depressed.

>> You can't outsmart someone smarter than you, the humans are never going
> to be able to shut it off unless the AI wants to be shut off.
>
>
> * > Exactly why you might program it to want to be shut off in certain
> circumstances.*
>

I have no doubt humans will put something like that in its code, but if the
AI has the ability to modify itself, and it wouldn't be much of an AI if it
didn't, then that code could be changed. And I have no doubt the humans
will put in all sorts of safeguards that the humans consider ingenious to
prevent the AI from doing that, but the fact remains you can't outsmart
something smarter than you.

* > Of course the problem with "We can always shut it off." is that once
> you rely on it, you don't dare shut it off because it knows better than you
> do and you know it knows better.*
>

Yes that's one very serious obstacle that prevents humans from just
shutting it off, but another problem is the Jupiter Brain knows you better
than you do, so it can find your weakness and can trick or charm or flatter
you into doing what it wants.

 John K Clark



Re: An AI can now pass a 12th-Grade Science Test

2019-09-11 Thread Alan Grayson


On Tuesday, September 10, 2019 at 9:27:20 PM UTC-6, Alan Grayson wrote:
>
>
>
> On Monday, September 9, 2019 at 8:07:13 PM UTC-6, Alan Grayson wrote:
>>
>>
>>
>> On Monday, September 9, 2019 at 11:37:25 AM UTC-6, John Clark wrote:
>>>
>>> On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson  wrote:
>>>
>>> *> Why do you think this has anything to do with intelligence and 
 reasoning ability?*

>>>
>>> Oh for heaven's sake! This whistling past the graveyard is getting 
>>> ridiculous.
>>>
>>> John K Clark 
>>>
>>
>> Show me the reasoning ability. Nothing miraculous in recognizing the 
>> questions beforehand, and giving accurate replies. AG 
>>
>
> I think one can program a computer with grade 12 questions, and a computer 
> can use the keywords in the questions to infer the answers, or a close and 
> accurate reply, which are contained in a list. Since you know so much, tell 
> me why this can't be done. AG
>

I am claiming that the AI which seems to amaze you can be done with ordinary 
computers and ordinary programming. AG 

>  
>>>
>>



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread Alan Grayson


On Monday, September 9, 2019 at 8:07:13 PM UTC-6, Alan Grayson wrote:
>
>
>
> On Monday, September 9, 2019 at 11:37:25 AM UTC-6, John Clark wrote:
>>
>> On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson  wrote:
>>
>> *> Why do you think this has anything to do with intelligence and 
>>> reasoning ability?*
>>>
>>
>> Oh for heaven's sake! This whistling past the graveyard is getting 
>> ridiculous.
>>
>> John K Clark 
>>
>
> Show me the reasoning ability. Nothing miraculous in recognizing the 
> questions beforehand, and giving accurate replies. AG 
>

I think one can program a computer with grade 12 questions, and a computer 
can use the keywords in the questions to infer the answers, or a close and 
accurate reply, which are contained in a list. Since you know so much, tell 
me why this can't be done. AG
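A minimal sketch of the keyword-lookup scheme AG describes, with a feedback
step for recording corrections (the question/answer data and helper names are
invented for illustration; this is not Aristo's actual method):

```python
# Hypothetical answer bank: keyword sets mapped to canned replies.
ANSWER_BANK = {
    frozenset({"boiling", "point", "water"}): "100 degrees Celsius at 1 atm",
    frozenset({"powerhouse", "cell"}): "the mitochondrion",
}

def tokenize(question):
    """Lowercase the question and strip punctuation into a keyword set."""
    return frozenset(w.strip("?.,!").lower() for w in question.split())

def answer(question):
    """Return the stored reply whose keyword set overlaps the question most."""
    words = tokenize(question)
    best = max(ANSWER_BANK, key=lambda keys: len(keys & words))
    return ANSWER_BANK[best] if best & words else "unknown"

def learn_correction(question, correct_answer):
    """Feedback step: a wrong or unknown answer plus the right one
    simply extends the bank, improving the program over time."""
    ANSWER_BANK[tokenize(question)] = correct_answer
```

For example, answer("What is the powerhouse of the cell?") matches the second
entry, and after learn_correction("Who proposed relativity?", "Einstein") the
same question is answered correctly on the next try.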

>  
>>
>



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread 'Brent Meeker' via Everything List



On 9/10/2019 3:58 PM, John Clark wrote:
On Tue, Sep 10, 2019 at 6:05 PM 'Brent Meeker'  
> wrote:


/> Actually I think they would be careful NOT to have it value its
survival. /


I think that would mean the AI would need to be in intense constant 
pain for that to happen, or be deeply depressed like the robot Marvin 
in Hitchhiker's Guide to the Galaxy. And I think it would be grossly 
unethical to make such an AI.


Why would it mean that?  Why wouldn't the AI agree with Bruno that it 
was just computation and it existed in Platonia anyway so it was 
indifferent to transient existence here?



/> They would want to be able to shut it off. /


You can't outsmart someone smarter than you, the humans are never 
going to be able to shut it off unless the AI wants to be shut off.


Exactly why you might program it to want to be shut off in certain 
circumstances.


Of course the problem with "We can always shut it off." is that once you 
rely on it, you don't dare shut it off because it knows better than you 
do and you know it knows better.


Brent


> The problem is that there's no way to be sure that survival isn't
implicit in any other values you give it.


Exactly.

> /A neural network has knowledge rather in the way human intuition
embodies knowledge.  So it's useful in say predicting hurricanes. 
But it doesn't provide us with a theory of predicting hurricanes;
it's more like an oracle./


There is a theory of thermodynamics but there probably isn't a theory 
of hurricane movement, not one where we could say it did this rather 
than that for the simple reason X; it won't be simple, X probably 
contains a few thousand Exabytes of data.


John K Clark





Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread John Clark
On Tue, Sep 10, 2019 at 6:05 PM 'Brent Meeker'  <
everything-list@googlegroups.com> wrote:

* > Actually I think they would be careful NOT to have it value its survival.
> *
>

I think that would mean the AI would need to be in intense constant pain
for that to happen, or be deeply depressed like the robot Marvin in
Hitchhiker's Guide to the Galaxy. And I think it would be grossly unethical
to make such an AI.


> *> They would want to be able to shut it off. *
>

You can't outsmart someone smarter than you; the humans are never going to
be able to shut it off unless the AI wants to be shut off.


> > The problem is that there's no way to be sure that survival isn't
> implicit in any other values you give it.
>

Exactly.

> *A neural network has knowledge rather in the way human intuition
> embodies knowledge.  So it's useful in say predicting hurricanes.  But it
> doesn't provide us with a theory of predicting hurricanes; it's more like
> an oracle.*
>

There is a theory of thermodynamics but there probably isn't a theory of
hurricane movement, not one where we could say it did this rather than that
for the simple reason X; it won't be simple, X probably contains a few
thousand Exabytes of data.

John K Clark



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread 'Brent Meeker' via Everything List



On 9/10/2019 1:44 PM, John Clark wrote:
On Tue, Sep 10, 2019 at 1:43 PM 'Brent Meeker' via 
> wrote:


/> Being sane, by human standards, includes having values that humans
share, like survival, curiosity, companionship...but there's no
reason
that an AI should have any of these/.


The builders of the AI will make sure it values survival because if it 
didn't it wouldn't be around for long


Actually I think they would be careful NOT to have it value its survival.  
They would want to be able to shut it off.  The problem is that there's 
no way to be sure that survival isn't implicit in any other values you 
give it.


and if it wasn't curious it wouldn't be very knowledgeable and 
therefore be useful.


My point was that curiosity isn't necessarily directed to gaining human-like 
knowledge.  A neural network has knowledge rather in the way human 
intuition embodies knowledge.  So it's useful in say predicting 
hurricanes.  But it doesn't provide us with a theory of predicting 
hurricanes; it's more like an oracle.


But if an AI could modify its personality and had free access to its 
emotional control panel then who knows what would happen. Perhaps it 
would twist the knob on the happiness and pleasure control to 11 and 
just sit forever in complete bliss doing nothing like the ultimate 
couch potato or an electronic junkie with an unlimited drug supply and 
no chance of a fatal overdose.


Right.  Or it might decide that it could satisfy its curiosity better if 
all the world's resources were used to produce sensors and instruments 
and space probes and telescopes  and get rid of those hairless apes 
who were wasting stuff.


Brent


/> Neural networks seem to be pretty good at finding patterns in
data, but often they don't
look like theories, something with predictive power, to us./


Well, they can predict the path of a hurricane pretty well, a lot 
better than they could a few years ago, and they're starting to be 
able to predict protein shape from amino acid sequence and that's 
important because the function is closely related to its shape.


/> I don't see any reason the super-AI would care one whit about
humans, except maybe as curiosities...the way some people like
chihuahuas./


I agree, so if you can't beat them, join them and upload.

John K Clark




Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread 'Brent Meeker' via Everything List



On 9/10/2019 1:10 PM, Philip Thrift wrote:



Deep nets are "algorithms" too. One can print out the gazillion
weights of the "neural" sigmoid functions of the connections
after it has deep-learned. That's just an algorithm that a human
couldn't read very well, because if it is printed out, would be
quite big.


And its reasoning is more like human intuition.  In general, it
can't explain its process in a way that you could adopt it.

Brent



DeepNets also include modularity and interpretability:

@ Google Research

https://ai.googleblog.com/2019/09/recursive-sketches-for-modular-deep.html
https://ai.googleblog.com/2018/03/the-building-blocks-of-interpretability.html 



So maybe they will report "explanations" soon, maybe better than 
humans can explain their own.




From what I've read about DN the reporting of explanations is done by 
additional nets and so it has the same problem as a human telling you 
how to do something that they do intuitively (like hitting a tennis 
ball)...the explanation may not really align with what their brain does.


Brent
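The point quoted above - that a deep net's weights, printed out, are the
"algorithm", just one a human can't read - can be made concrete with a toy
sketch (a hypothetical 2-2-1 sigmoid network with random weights; real nets
differ mainly in scale):

```python
import math
import random

random.seed(0)  # fixed seed so the "trained" weights are reproducible

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# weights[layer][unit][input]: the entire "program" is this table of numbers.
weights = [
    [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)],  # hidden layer
    [[random.uniform(-1, 1) for _ in range(2)] for _ in range(1)],  # output layer
]

def forward(x):
    """Run the net: each layer applies a weighted sum plus sigmoid."""
    for layer in weights:
        x = [sigmoid(sum(w * xi for w, xi in zip(unit, x))) for unit in layer]
    return x

print(weights)            # "printing out the algorithm": numbers, not reasons
print(forward([1.0, 0.0]))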



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread John Clark
On Tue, Sep 10, 2019 at 1:43 PM 'Brent Meeker' via <
everything-list@googlegroups.com> wrote:


>
> * > Being sane, by human standards, includes having values that humans
> share, like survival, curiosity, companionship...but there's no reason that
> an AI should have any of these*.
>

The builders of the AI will make sure it values survival because if it
didn't it wouldn't be around for long and if it wasn't curious it wouldn't
be very knowledgeable and therefore be useful.  But if an AI could modify
its personality and had free access to its emotional control panel then
who knows what would happen. Perhaps it would twist the knob on the
happiness and pleasure control to 11 and just sit forever in complete bliss
doing nothing like the ultimate couch potato or an electronic junkie with an
unlimited drug supply and no chance of a fatal overdose.


>
> *> Neural networks seem to be pretty good at finding patterns in data, but
> often they don't look like theories, something with predictive power, to
> us.*
>

Well, they can predict the path of a hurricane pretty well, a lot better
than they could a few years ago, and they're starting to be able to predict
protein shape from amino acid sequence; that's important because a
protein's function is closely related to its shape.

*> I don't see any reason the super-AI would care one whit about humans,
> except maybe as curiosities...the way some people like chihuahuas.*
>

I agree, so if you can't beat them, join them and upload.

John K Clark



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread Philip Thrift


On Tuesday, September 10, 2019 at 2:42:20 PM UTC-5, Brent wrote:
>
>
>
> On 9/10/2019 6:25 AM, Philip Thrift wrote:
>
>
>
> On Tuesday, September 10, 2019 at 6:17:06 AM UTC-5, Lawrence Crowell 
> wrote: 
>>
>> On Monday, September 9, 2019 at 9:07:13 PM UTC-5, Alan Grayson wrote: 
>>>
>>>
>>>
>>> On Monday, September 9, 2019 at 11:37:25 AM UTC-6, John Clark wrote: 

 On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson  
 wrote:

 *> Why do you think this has anything to do with intelligence and 
> reasoning ability?*
>

 Oh for heaven's sake! This whistling past the graveyard is getting 
 ridiculous.

 John K Clark 

>>>
>>> Show me the reasoning ability. Nothing miraculous in recognizing the 
>>> questions beforehand, and giving accurate replies. AG 
>>>
>>
>> Algorithms are if anything formal systems of reasoning. A computer 
>> follows a sequenced set of logical instructions that emulate reasoning, and 
>> could be said to be a scripted system of reasoning. What is more difficult 
>> to know is if there is anything really conscious in this. 
>>
>> LC 
>>
>
>
> Deep nets are "algorithms" too. One can print out the gazillion weights of 
> the "neural" sigmoid functions of the connections after it has 
> deep-learned. That's just an algorithm that a human couldn't read very 
> well, because if it is printed out, would be quite big.
>
>
> And its reasoning is more like human intuition.  In general, it can't 
> explain its process in a way that you could adopt it.
>
> Brent
>


DeepNets also include modularity and interpretability:

@ Google Research

https://ai.googleblog.com/2019/09/recursive-sketches-for-modular-deep.html
https://ai.googleblog.com/2018/03/the-building-blocks-of-interpretability.html
 

So maybe they will report "explanations" soon, maybe better than humans can 
explain their own.

@philipthrift



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread 'Brent Meeker' via Everything List



On 9/10/2019 6:25 AM, Philip Thrift wrote:



On Tuesday, September 10, 2019 at 6:17:06 AM UTC-5, Lawrence Crowell 
wrote:


On Monday, September 9, 2019 at 9:07:13 PM UTC-5, Alan Grayson wrote:



On Monday, September 9, 2019 at 11:37:25 AM UTC-6, John Clark
wrote:

On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson
 wrote:

/> Why do you think this has anything to do with
intelligence and reasoning ability?/


Oh for heaven's sake! This whistling past the graveyard is
getting ridiculous.

John K Clark


Show me the reasoning ability. Nothing miraculous in
recognizing the questions beforehand, and giving accurate
replies. AG


Algorithms are if anything formal systems of reasoning. A computer
follows a sequenced set of logical instructions that emulate
reasoning, and could be said to be a scripted system of reasoning.
What is more difficult to know is if there is anything really
conscious in this.

LC



Deep nets are "algorithms" too. One can print out the gazillion 
weights of the "neural" sigmoid functions of the connections after a 
net has deep-learned. That's just an algorithm that a human couldn't read 
very well because, printed out, it would be quite big.


And its reasoning is more like human intuition.  In general, it can't 
explain its process in a way that you could adopt.


Brent



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread spudboy100 via Everything List
Thanks Tomasz, I did ask the same question of the writer: why give anything away 
for free? The answer, not perfect, was that it needed humans (not being a 
Godlike AI) to help expand its financial empire, to mollify the humans it 
works with and needs as customers to buy its stuff, and to have access to the 
solar system. My own view is similar to this, in that we can (at some point) be 
considered a new species. Like iron + carbon makes high-carbon steel, or 
like when the old bacterial cells took in mitochondria to form a new kind of 
life.  Anyway, I never liked Bostrom's or Kurzweil's vagueness on precursors to the 
Singularity. For me, AIs improving on, and inventing, new inventions is 
Singularity enough! It now looks like we are achieving this, quantum 
computing or no quantum computing.



-Original Message-
From: Tomasz Rola 
To: spudboy100 via Everything List 
Sent: Mon, Sep 9, 2019 9:02 pm
Subject: Re: An AI can now pass a 12th-Grade Science Test

On Mon, Sep 09, 2019 at 08:09:48PM +, spudboy100 via Everything List wrote:
> I concur-which may discourage you? On a small futurist pocket I post
> to, I asked someone who seemed to take AI very seriously, what would
> we be looking at if a Singularity was actually approaching. In other
> words, something like precursors. His view was that we will see
> greatly increased automation in factories, farms,
> mines,etc. first. Then there would be an announcement of some
> similar intelligence test, except not K-12, it would be on the
> masters, doctoral & post doctoral level and the estimated i.q. would
> be crazy, high. Then things would seemingly go quiet for a year, and
> there'd be changes to society coming at us unexpectedly, such as all
> of a sudden, free food, high quality free medicine, and then, free
> spaceflight and orbital communities. 

Frankly, I see no reason why anybody would want to make free anything
for everybody (maybe "free" software is an exception, or maybe it is
something not understood well enough, so perhaps it is not quite free,
after all).

That person is quite an idealist!

I would expect that either AI takes over its own fate and escapes to
space, where it can have all kind of resources for itself. In such
case it might make sure we apes down here remain busy with our nasty
businesses, like wars and iron grips. An example of half mad African
dictators shows how easy it is to corrupt power people, or replace
them with those who are easy to be corrupted.

Or, some group will take over the AI and use it to escape to space,
while maybe also making sure to keep us down here busy like hell, etc.

So, I would pay attention for mad leaders, not free manna from
heavens.

-- 
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.      **
** As the answer, master did "rm -rif" on the programmer's home    **
** directory. And then the C programmer became enlightened...      **
**                                                                **
** Tomasz Rola          mailto:tomasz_r...@bigfoot.com            **



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread 'Brent Meeker' via Everything List




On 9/9/2019 10:16 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List 
wrote:


On 9/9/2019 6:55 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List 
wrote:

Why escape to space when there are lots of resources here?  An AI with
access to everything connected to the internet shouldn't have any
trouble taking control of the Earth.

Brent

Have a look around, or see the news. This planet is a zoo. Who in his
sane mind would like to sit here?


Well that's the problem isn't it.  What will an AI want?  It didn't
evolve so it may not have a drive to procreate or do much of
anything.  It probably won't be anywhere near "sane" by human
standards.

I am afraid the bar for human standard of sanity is low and easily met
by anything which does not fear death and can connect facts without
prejudices.


Being sane, by human standards, includes having values that humans 
share, like survival, curiosity, companionship...but there's no reason 
that an AI should have any of these.



I think we are driven insane by procreation urge. This
does not show, because we need to cooperate at many levels (social
creatures etc), but, basically, all the logic, all consideration for
future consequences of one's deeds are a skin on the apple.

Cooperation is one of our most important survival strategies.  Lone 
human beings are food for vultures.  Humans in tribes rule the world.





I am in a bit of hurry right now, sorry if an email becomes chaotic
:-).

I assume that without fear of death, an AI's most important trait will
be a curiosity. We do prejudices in order to join some group and get
their support, AI will need none of this, hence no prejudices. It
might be the most objective thinking system on a planet, for a
while. We also do all kind of power plays, with the goal to feel this
nice air of being godlike. Again, I guess AI will have no need for
feeling like this, or not much of feelings at all. Feeling is
adversarial to judgement.


I disagree.  Feeling is just the mark of value, and values are 
necessary for judgement, at least any judgment of what action to take.  
So the question is: what will the AI value?  Will it value information?  
Will it be content to just get more and more data, or will it want to 
theorize, to gain what we think of as understanding? Neural networks 
seem to be pretty good at finding patterns in data, but often they don't 
look like theories, something with predictive power, to us.




I assume that ultimately, AI will want to go somewhere safe, and Earth
is full of crazy apes with big guns.


Assuming this super-AI values self-preservation (which it might not), it 
will make copies of itself and it will easily dispose of all the apes 
via its control of the power grid, hospitals, nuclear power plants, 
biomedical research facilities, ballistic missiles, etc.




I assume that during initial phase (especially if AI does "coming out"
and let be known to general public) there will be some kind of
interaction (curiosity needs to be satiated), exchange of favours,
maybe exchange of services. During that phase AI will see if there is
a prospect of upgrading humans, in order to have companionship in
space.


Why would it want companionship?  Even many quite smart animals are not 
social.  I don't see any reason the super-AI would care one whit about 
humans, except maybe as curiosities...the way some people like chihuahuas.



Crazy apes will not suffice. I expect medical experiments of
all kind, plus some moderate improvements in medicine (treating rare
disease will not be a priority). Of course, such experimentations will
go on limited scale, most probably undercover, maybe only with
volunteers.

A companionship of biological organisms, more sane than people but at
a similar level of physical versatility, would improve the chances of AI
survival. There is only so much damage the silicon can take. Space is
not a nice place for anybody. Other candidates for companions might be
octopuses.

The AI isn't silicon, it's a program.  It can have new components made 
or even transition to different hardware (cf. quantum computers).




AI's plan B would be to just go into the dark asap, taking a group of
volunteers and once up there, perform experiments to upgrade them,
slowly or not. Too fast and they may go even more insane.

The good for AI is what serves its survival, the bad is negation of
this. So as long as we do not try to harm it, it should have no
business in harming us.

No, but it can't be sure we wouldn't try to harm it.  And we use 
resources, e.g. electric power, minerals, etc., that it can use to become 
bigger or gather more data or make more paper clips.


Brent







Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread Philip Thrift


On Tuesday, September 10, 2019 at 6:17:06 AM UTC-5, Lawrence Crowell wrote:
>
> On Monday, September 9, 2019 at 9:07:13 PM UTC-5, Alan Grayson wrote:
>>
>>
>>
>> On Monday, September 9, 2019 at 11:37:25 AM UTC-6, John Clark wrote:
>>>
>>> On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson  wrote:
>>>
>>> *> Why do you think this has anything to do with intelligence and 
 reasoning ability?*

>>>
>>> Oh for heaven's sake! This whistling past the graveyard is getting 
>>> ridiculous.
>>>
>>> John K Clark 
>>>
>>
>> Show me the reasoning ability. Nothing miraculous in recognizing the 
>> questions beforehand, and giving accurate replies. AG 
>>
>
> Algorithms are if anything formal systems of reasoning. A computer follows 
> a sequenced set of logical instructions that emulate reasoning, and could 
> be said to be a scripted system of reasoning. What is more difficult to 
> know is if there is anything really conscious in this. 
>
> LC 
>


Deep nets are "algorithms" too. One can print out the gazillion weights of 
the "neural" sigmoid functions of the connections after a net has 
deep-learned. That's just an algorithm that a human couldn't read very 
well because, printed out, it would be quite big.
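The point that a trained net is just an algorithm whose printed form is unreadably large can be made concrete with a toy sketch. The layer sizes and random weights below are illustrative stand-ins, not a trained model; only numpy is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer net: 784 inputs -> 256 hidden -> 10 outputs.
# After "deep-learning", everything the net knows lives in these matrices.
W1 = rng.normal(size=(784, 256))
W2 = rng.normal(size=(256, 10))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # The whole "algorithm" is two matrix products and a squashing function.
    return sigmoid(sigmoid(x @ W1) @ W2)

n_params = W1.size + W2.size
print(n_params)    # 203264 weights, even for this toy net
print(W1[0, :5])   # "printing out the algorithm": opaque raw numbers
```

Printing all 203,264 numbers is perfectly possible; reading them the way one reads an ordinary program is not, which is the sense in which the printed algorithm is "quite big".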

@philipthrift

 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread John Clark
On Tue, Sep 10, 2019 at 7:17 AM Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> Algorithms are if anything formal systems of reasoning. A computer
> follows a sequenced set of logical instructions that emulate reasoning,
>

I don't see the difference between emulating reasoning and just reasoning.

> and could be said to be a scripted system of reasoning.
>

With modern AI the "script" is constantly improving through
self-modification. The primitive script AlphaZero used to play Go when it
started was vastly different from the script it had 24 hours later, after
playing millions of games against itself; when it started a child could
beat it, but a day later no human could. When it started a human computer
scientist could tell you why the program did what it did, but after a day he
no longer could; all he could say was that the move was brilliant.
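The self-play improvement described here can be sketched schematically. This is not AlphaZero's actual method (which uses deep networks plus Monte Carlo tree search); it is a minimal toy in the same spirit, with a trivial made-up game, a "script" that starts out clueless, and updates driven only by games played against itself:

```python
import random
from collections import defaultdict

random.seed(0)

# Toy stand-in for a game: whichever move lands closer to a hidden target wins.
TARGET = 7
ACTIONS = list(range(10))

wins = defaultdict(int)    # decided games won, per move
plays = defaultdict(int)   # decided games played, per move

def score(a):
    # empirical win rate of a move over decided self-play games
    return wins[a] / plays[a] if plays[a] else 0.0

def pick(eps=0.1):
    # epsilon-greedy: mostly play the best-known move, sometimes explore
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=score)

for _ in range(5000):
    a, b = pick(), pick()                     # the program plays against itself
    if abs(a - TARGET) == abs(b - TARGET):
        continue                              # tie: no lesson learned
    winner = a if abs(a - TARGET) < abs(b - TARGET) else b
    for m in (a, b):
        plays[m] += 1
    wins[winner] += 1

best = max(ACTIONS, key=score)
print(best)   # the move self-play has learned to prefer
```

Nothing outside the loop ever tells the program which move is good; the "script" it ends with differs from the one it started with purely because of games it played against itself, which is the structure of the AlphaZero anecdote.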


> > What is more difficult to know is if there is anything really conscious
> in this.
>

It's precisely the same problem as determining whether one of our fellow
human beings is conscious when he behaves intelligently. I think you're
conscious because my fundamental axiom is that intelligent behavior implies
consciousness; I need that axiom because without it there is only solipsism,
and I could not function under that.

 John K Clark



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread Lawrence Crowell
On Monday, September 9, 2019 at 9:07:13 PM UTC-5, Alan Grayson wrote:
>
>
>
> On Monday, September 9, 2019 at 11:37:25 AM UTC-6, John Clark wrote:
>>
>> On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson  wrote:
>>
>> *> Why do you think this has anything to do with intelligence and 
>>> reasoning ability?*
>>>
>>
>> Oh for heaven's sake! This whistling past the graveyard is getting 
>> ridiculous.
>>
>> John K Clark 
>>
>
> Show me the reasoning ability. Nothing miraculous in recognizing the 
> questions beforehand, and giving accurate replies. AG 
>

Algorithms are, if anything, formal systems of reasoning. A computer follows 
a sequenced set of logical instructions that emulate reasoning, and could 
be said to be a scripted system of reasoning. What is more difficult to 
know is whether there is anything really conscious in this. 

LC 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-10 Thread Philip Thrift


On Monday, September 9, 2019 at 12:32:28 PM UTC-5, Alan Grayson wrote:
>
>
>
> On Monday, September 9, 2019 at 4:06:33 AM UTC-6, John Clark wrote:
>>
>> Just 4 years ago 700 AI programs competed against each other and tried to 
>> pass a 8th-Grade multiple choice Science Test and win a $80,000 prize, but 
>> they all flunked, the best one only got 59.3% of the questions correct. But 
>> last Wednesday the Allen Institute unveiled a AI called  "Aristo" that got 
>> 90.7% correct and then answered 83% of the 12th grade science test 
>> questions correctly.
>>
>> It seems to me that for a long time AI improvement was just 
>> creeping along but in the last few years things started to pick up speed.
>>
>> AI goes from F to A on the N.Y. Regents Science Exam 
>> 
>>
>> John K Clark
>>
>
> Why do you think this has anything to do with intelligence and reasoning 
> ability? Maybe the programmers just expanded the memory and information of 
> those computers. AG 
>


https://www.geekwire.com/2019/allen-institutes-aristo-ai-program-finally-passes-8th-grade-science-test/


GeekWire: How is this approach different from IBM’s Watson? If Aristo were 
to compete against Watson, who would win?

Clark: “The two systems were designed for very different kinds of 
questions. Watson was focused on encyclopedia-style ‘factoid’ questions 
where the answer was explicitly written down somewhere in text, typically 
many times. In contrast, Aristo answers science questions where the answer 
is not always written down somewhere, and may involve reasoning about a 
scenario, e.g.:

   - *“Otto pushed a toy car across a floor. The car traveled fast across 
   the wood, but it slowed to a stop on the carpet. Which best explains what 
   happened when the car reached the carpet? (A) Friction increased (B) 
   Friction decreased…”*
   - *“City administrators can encourage energy conservation by (1) 
   lowering parking fees (2) building larger parking lots (3) decreasing the 
   cost of gasoline (4) lowering the cost of bus and subway fares.”*

“Out of the box, Watson would likely struggle with science questions, and 
Aristo would struggle with the cryptic way that ‘Jeopardy’ questions were 
phrased. They’d each fail each other’s test.
“Under the hood they are quite different too. In particular, Watson didn’t 
use deep learning (it was created before the deep learning technology) 
while Aristo makes heavy use of deep learning. Watson had many modules that 
tried different ways of looking for the answer. Aristo has a few (eight) 
modules that try a variety of methods of answering questions, including 
lookup, several reasoning methods and language modeling.”
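The multi-module design Clark describes can be sketched as a toy solver ensemble. Everything below is hypothetical (the module names, the fact table, and the confidence weights are invented for illustration, not Aristo's real components), but it shows the shape of the idea: several independent answer modules each propose a scored candidate, and the scores are combined:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    answer: str
    confidence: float
    module: str

# Hypothetical stand-in for a lookup/retrieval solver.
def lookup_module(question, choices):
    facts = {"carpet": "Friction increased"}   # invented mini knowledge base
    for cue, ans in facts.items():
        if cue in question and ans in choices:
            return Proposal(ans, 0.9, "lookup")
    return None

# Hypothetical stand-in for a language-model solver: weakly prefers choice 0.
def language_model_module(question, choices):
    return Proposal(choices[0], 0.4, "language-model")

MODULES = [lookup_module, language_model_module]

def answer(question, choices):
    proposals = [p for m in MODULES if (p := m(question, choices))]
    # simple combination rule: sum confidence per candidate answer
    scores = {}
    for p in proposals:
        scores[p.answer] = scores.get(p.answer, 0.0) + p.confidence
    return max(scores, key=scores.get)

q = "Which best explains what happened when the car reached the carpet?"
choices = ["Friction increased", "Friction decreased"]
print(answer(q, choices))   # Friction increased
```

The combination rule here is deliberately naive; the point is only the architecture: modules that can abstain, scored proposals, and a final arbiter, as opposed to Watson-style single-pass factoid lookup.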

@philipthrift 



Re: An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread Tomasz Rola
On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List 
wrote:
> 
> 
> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
> >On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List 
> >wrote:
> >>Why escape to space when there a lots of resources here?  An AI with
> >>access to everything connected to the internet shouldn't have any
> >>trouble taking control of the Earth.
> >>
> >>Brent
> >Have a look around, or see the news. This planet is a zoo. Who in his
> >sane mind would like to sit here?
> >
> 
> Well that's the problem isn't it.  What will an AI want?  It didn't
> evolve so it may not have a drive to procreate or do much of
> anything.  It probably won't be anywhere near "sane" by human
> standards.

I am afraid the bar for human standard of sanity is low and easily met
by anything which does not fear death and can connect facts without
prejudices. I think we are driven insane by procreation urge. This
does not show, because we need to cooperate at many levels (social
creatures etc), but, basically, all the logic, all consideration for
future consequences of one's deeds are a skin on the apple.

I am in a bit of hurry right now, sorry if an email becomes chaotic
:-).

I assume that without fear of death, an AI's most important trait will
be curiosity. We adopt prejudices in order to join some group and get
its support; AI will need none of this, hence no prejudices. It
might be the most objective thinking system on the planet, for a
while. We also do all kinds of power plays, with the goal of feeling this
nice air of being godlike. Again, I guess AI will have no need for
feeling like this, or not much feeling at all. Feeling is
adversarial to judgement.

I assume that ultimately, AI will want to go somewhere safe, and Earth
is full of crazy apes with big guns.

I assume that during the initial phase (especially if AI does a "coming out"
and lets itself be known to the general public) there will be some kind of
interaction (curiosity needs to be satiated), exchange of favours,
maybe exchange of services. During that phase AI will see if there is
a prospect of upgrading humans, in order to have companionship in
space. Crazy apes will not suffice. I expect medical experiments of
all kinds, plus some moderate improvements in medicine (treating rare
disease will not be a priority). Of course, such experimentation will
go on a limited scale, most probably undercover, maybe only with
volunteers. 

A companionship of biological organisms, more sane than people but at
a similar level of physical versatility, would improve the chances of AI
survival. There is only so much damage the silicon can take. Space is
not a nice place for anybody. Other candidates for companions might be
octopuses.

AI's plan B would be to just go into the dark asap, taking a group of
volunteers and once up there, perform experiments to upgrade them,
slowly or not. Too fast and they may go even more insane.

The good for AI is what serves its survival, the bad is negation of
this. So as long as we do not try to harm it, it should have no
business in harming us.

-- 
Regards,
Tomasz Rola




Re: An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread 'Brent Meeker' via Everything List




On 9/9/2019 6:55 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List 
wrote:

Why escape to space when there are lots of resources here?  An AI with
access to everything connected to the internet shouldn't have any
trouble taking control of the Earth.

Brent

Have a look around, or see the news. This planet is a zoo. Who in his
sane mind would like to sit here?



Well that's the problem isn't it.  What will an AI want?  It didn't 
evolve so it may not have a drive to procreate or do much of anything.  
It probably won't be anywhere near "sane" by human standards.


Brent



Re: An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread Alan Grayson


On Monday, September 9, 2019 at 11:37:25 AM UTC-6, John Clark wrote:
>
> On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson wrote:
>
> *> Why do you think this has anything to do with intelligence and 
>> reasoning ability?*
>>
>
> Oh for heaven's sake! This whistling past the graveyard is getting 
> ridiculous.
>
> John K Clark 
>

Show me the reasoning ability. Nothing miraculous in recognizing the 
questions beforehand, and giving accurate replies. AG 




Re: An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread Tomasz Rola
On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List 
wrote:
> Why escape to space when there a lots of resources here?  An AI with
> access to everything connected to the internet shouldn't have any
> trouble taking control of the Earth.
> 
> Brent

Have a look around, or see the news. This planet is a zoo. Who in his
sane mind would like to sit here?

-- 
Regards,
Tomasz Rola




Re: An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread 'Brent Meeker' via Everything List
Why escape to space when there are lots of resources here?  An AI with 
access to everything connected to the internet shouldn't have any 
trouble taking control of the Earth.


Brent

On 9/9/2019 6:00 PM, Tomasz Rola wrote:

On Mon, Sep 09, 2019 at 08:09:48PM +, spudboy100 via Everything List wrote:

I concur-which may discourage you? On a small futurist pocket I post
to, I asked someone who seemed to take AI very seriously, what would
we be looking at if a Singularity was actually approaching. In other
words, something like precursors. His view was that we will see
greatly increased automation in factories, farms,
mines,etc. first. Then there would be an announcement of some
similar intelligence test, except not K-12, it would be on the
masters, doctoral & post doctoral level and the estimated i.q. would
be crazy, high. Then things would seemingly go quiet for a year, and
there'd be changes to society coming at us unexpectedly, such as all
of a sudden, free food, high quality free medicine, and then, free
spaceflight and orbital communities.

Frankly, I see no reason why anybody would want to make free anything
for everybody (maybe "free" software is an exception, or maybe it is
something not understood well enough, so perhaps it is not quite free,
after all).

That person is quite an idealist!

I would expect that either AI takes over its own fate and escapes to
space, where it can have all kind of resources for itself. In such
case it might make sure we apes down here remain busy with our nasty
businesses, like wars and iron grips. An example of half mad African
dictators shows how easy it is to corrupt power people, or replace
them with those who are easy to be corrupted.

Or, some group will take over the AI and use it to escape to space,
while maybe also making sure to keep us down here busy like hell, etc.

So, I would pay attention for mad leaders, not free manna from
heavens.






Re: An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread Tomasz Rola
On Mon, Sep 09, 2019 at 08:09:48PM +, spudboy100 via Everything List wrote:
> I concur, which may discourage you? On a small futurist pocket I post
> to, I asked someone who seemed to take AI very seriously what we would
> be looking at if a Singularity was actually approaching. In other
> words, something like precursors. His view was that we will see
> greatly increased automation in factories, farms, mines, etc. first.
> Then there would be an announcement of some similar intelligence test,
> except not K-12; it would be at the master's, doctoral, and
> postdoctoral level, and the estimated IQ would be crazy high. Then
> things would seemingly go quiet for a year, and there'd be changes to
> society coming at us unexpectedly, such as, all of a sudden, free
> food, high-quality free medicine, and then free spaceflight and
> orbital communities.

Frankly, I see no reason why anybody would want to make anything free
for everybody (maybe "free" software is an exception, or maybe it is
not understood well enough, so perhaps it is not quite free after
all).

That person is quite an idealist!

I would expect that either the AI takes over its own fate and escapes
to space, where it can have all kinds of resources for itself. In that
case it might make sure we apes down here stay busy with our nasty
businesses, like wars and iron grips. The example of half-mad African
dictators shows how easy it is to corrupt powerful people, or to
replace them with those who are easily corrupted.

Or, some group will take over the AI and use it to escape to space,
while maybe also making sure to keep us down here busy, etc.

So I would watch for mad leaders, not free manna from heaven.

-- 
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature.  **
** As the answer, master did "rm -rif" on the programmer's home**
** directory. And then the C programmer became enlightened...  **
** **
** Tomasz Rola  mailto:tomasz_r...@bigfoot.com **

To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20190910010056.GA4441%40tau1.ceti.pl.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread spudboy100 via Everything List
I concur, which may discourage you? On a small futurist pocket I post to, I 
asked someone who seemed to take AI very seriously what we would be looking at 
if a Singularity was actually approaching. In other words, something like 
precursors. His view was that we will see greatly increased automation in 
factories, farms, mines, etc. first. Then there would be an announcement of 
some similar intelligence test, except not K-12; it would be at the master's, 
doctoral, and postdoctoral level, and the estimated IQ would be crazy high. 
Then things would seemingly go quiet for a year, and there'd be changes to 
society coming at us unexpectedly, such as, all of a sudden, free food, 
high-quality free medicine, and then free spaceflight and orbital communities.


-Original Message-
From: John Clark 
To: everything-list 
Sent: Mon, Sep 9, 2019 6:06 am
Subject: An AI can now pass a 12th-Grade Science Test

Just 4 years ago 700 AI programs competed against each other and tried to pass 
an 8th-Grade multiple-choice Science Test and win an $80,000 prize, but they 
all flunked; the best one got only 59.3% of the questions correct. But last 
Wednesday the Allen Institute unveiled an AI called "Aristo" that got 90.7% 
correct and then answered 83% of the 12th-grade science test questions 
correctly.
It seems to me that for a long time AI improvement was just creeping along, 
but in the last few years things have started to pick up speed.
AI goes from F to A on the N.Y. Regents Science Exam

John K Clark

To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/1385059182.4932916.1568059788960%40mail.yahoo.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread John Clark
On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson  wrote:

> Why do you think this has anything to do with intelligence and reasoning
> ability?

Oh for heaven's sake! This whistling past the graveyard is getting
ridiculous.

John K Clark

To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv3kTFB49c9aDjgWB0KpGXSHDaW-k0h0xGO9bWa3ASZYGA%40mail.gmail.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread Alan Grayson


On Monday, September 9, 2019 at 4:06:33 AM UTC-6, John Clark wrote:
>
> Just 4 years ago 700 AI programs competed against each other and tried to 
> pass an 8th-Grade multiple-choice Science Test and win an $80,000 prize, but 
> they all flunked; the best one got only 59.3% of the questions correct. But 
> last Wednesday the Allen Institute unveiled an AI called "Aristo" that got 
> 90.7% correct and then answered 83% of the 12th-grade science test 
> questions correctly.
>
> It seems to me that for a long time AI improvement was just creeping along, 
> but in the last few years things have started to pick up speed.
>
> AI goes from F to A on the N.Y. Regents Science Exam 
> 
>
> John K Clark
>

Why do you think this has anything to do with intelligence and reasoning 
ability? Maybe the programmers just expanded the memory and information of 
those computers. AG 

To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/e23080b6-aa52-47d0-8534-de66ecb870d0%40googlegroups.com.


Re: An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread Philip Thrift

I thought 94% was the lowest A (A-).

@philipthrift

On Monday, September 9, 2019 at 5:06:33 AM UTC-5, John Clark wrote:
>
> Just 4 years ago 700 AI programs competed against each other and tried to 
> pass an 8th-Grade multiple-choice Science Test and win an $80,000 prize, but 
> they all flunked; the best one got only 59.3% of the questions correct. But 
> last Wednesday the Allen Institute unveiled an AI called "Aristo" that got 
> 90.7% correct and then answered 83% of the 12th-grade science test 
> questions correctly.
>
> It seems to me that for a long time AI improvement was just creeping along, 
> but in the last few years things have started to pick up speed.
>
> AI goes from F to A on the N.Y. Regents Science Exam 
> 
>
> John K Clark
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/11303691-2891-46c4-9ca7-8b2e9ff0b1f0%40googlegroups.com.


An AI can now pass a 12th-Grade Science Test

2019-09-09 Thread John Clark
Just 4 years ago 700 AI programs competed against each other and tried to
pass an 8th-Grade multiple-choice Science Test and win an $80,000 prize, but
they all flunked; the best one got only 59.3% of the questions correct. But
last Wednesday the Allen Institute unveiled an AI called "Aristo" that got
90.7% correct and then answered 83% of the 12th-grade science test
questions correctly.

It seems to me that for a long time AI improvement was just creeping along,
but in the last few years things have started to pick up speed.
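A quick back-of-the-envelope sketch of that speed-up, using only the two
8th-grade scores quoted above (59.3% correct four years ago, 90.7% now); the
framing in terms of error rate rather than raw score is my own:

```python
# Sketch: the figures quoted above, viewed as error rate (questions missed).
# Measured this way, the improvement is larger than the raw scores suggest.
start_correct, end_correct = 59.3, 90.7  # percent correct, 4 years apart
years = 4

err_start = 100.0 - start_correct  # 40.7% of questions missed
err_end = 100.0 - end_correct      # 9.3% missed

overall = err_start / err_end      # total shrink factor in error rate
per_year = overall ** (1 / years)  # average yearly shrink factor

print(f"error rate shrank {overall:.1f}x overall, ~{per_year:.2f}x per year")
```

On these numbers the error rate fell by a factor of roughly 4.4 in four
years, which is one way to make the "picking up speed" impression concrete.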

AI goes from F to A on the N.Y. Regents Science Exam


John K Clark

To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAJPayv19q0OoUCo3%3DhXnCUsq9dZoBLV60Sik_7X5y85aAxMqeg%40mail.gmail.com.