Re: The Singularity Institute Blog

2014-01-18 Thread Bruno Marchal


On 17 Jan 2014, at 16:44, Craig Weinberg wrote:

The whole point of a super intelligent AI is that it has nothing to  
learn from us.


We certainly disagree a lot on this. I think that the more intelligent  
you are, the more you can learn from others, any others, even from  
bacteria and amoebae. The more intelligent you are, the more you are  
aware that you know nothing.


I recall my old theory of intelligence: a machine is intelligent if it  
is not stupid. And a machine is stupid if either the machine believes  
she is intelligent, or the machine believes she is stupid.


A simple arithmetical model is provided by consistency: Intelligence  
== Dt. I will come back to this when we do a bit of modal logic.
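
A minimal sketch of that reading in the Gödel-Löb logic of provability,
assuming Dt abbreviates ◇⊤ (i.e. ¬□⊥, consistency); the formula given
here for "stupid" is only an illustrative paraphrase of the two clauses
above, not a formalization taken from the post:

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    % Provability-logic (GL) reading: $\Box$ = "the machine proves/believes",
    % $\Diamond\top = \neg\Box\bot$ = consistency (the Dt above).
    \[
      \text{intelligent} \;:\equiv\; \Diamond\top,
      \qquad
      \text{stupid} \;:\equiv\; \Box\Diamond\top \;\lor\; \Box\neg\Diamond\top .
    \]
    % Loeb's theorem with $p = \bot$ gives $\Box(\neg\Box\bot) \to \Box\bot$,
    % i.e. $\Box\Diamond\top \to \Box\bot$: a machine that proves her own
    % consistency ("believes she is intelligent") is thereby inconsistent.
    \end{document}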


Bruno



http://iridia.ulb.ac.be/~marchal/





Re: The Singularity Institute Blog

2014-01-18 Thread LizR
On 18 January 2014 04:47, Craig Weinberg whatsons...@gmail.com wrote:

 On Friday, January 17, 2014 6:14:13 AM UTC-5, Bruno Marchal wrote:

 On 16 Jan 2014, at 20:12, meekerdb wrote:

  On 1/16/2014 3:42 AM, Bruno Marchal wrote:

 The singularity is in the past, and is the discovery of the universal
 machine. In a sense, we can make it only more stupid, like when installing
 windows on a virgin computer.


 The singularity isn't in the past, the past is in the singularity.


A nice summation of the origin of the thermodynamic arrow of time!



Re: The Singularity Institute Blog

2014-01-17 Thread Bruno Marchal


On 16 Jan 2014, at 15:52, Jason Resch wrote:




On Jan 16, 2014, at 5:42 AM, Bruno Marchal marc...@ulb.ac.be wrote:



On 16 Jan 2014, at 03:46, Jason Resch wrote:





On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net  
wrote:
A long, rambling but often interesting discussion among guys at  
MIRI about how to make an AI that is superintelligent but not  
dangerous (FAI=Friendly AI).  Here's an amusing excerpt that  
starts at the bottom of page 30:
Jacob:  Can't you ask it questions about what it believes will be  
true about the state of the world in 20 years?


Eliezer:  Sure. You could be like, what color will the sky be in  
20 years? It would be like, “blue”, or it’ll say “In 20 years  
there won't be a sky, the earth will have been consumed by nano  
machines,” and you're like, “why?” and the AI is like “Well, you  
know, you do that sort of thing.” “Why?” And then there’s a 20  
page thing.


Dario:  But once it says the earth is going to be consumed by nano  
machines, and you're asking about the AI's set of plans,  
presumably, you reject this plan immediately and preferably change  
the design of your AI.


Eliezer:  The AI is like, “No, humans are going to do it.” Or the  
AI is like, “well obviously, I'll be involved in the causal  
pathway but I’m not planning to do it.”


Dario: But this is a plan you don't want to execute.

Eliezer:  All the plans seem to end up with the earth being  
consumed by nano-machines.


Luke:  The problem is that we're trying to outsmart a  
superintelligence and make sure that it's not tricking us somehow  
subtly with their own language.


Dario:  But while we're just asking questions we always have the  
ability to just shut it off.


Eliezer:  Right, but first you ask it “What happens if I shut you  
off” and it says “The earth gets consumed by nanobots in 19 years.”


I wonder if Bruno Marchal's theory might have something  
interesting to say about this problem - like proving that there is  
no way to ensure friendliness.


Brent


I think it is silly to try and engineer something exponentially  
more intelligent than us and believe we will be able to control  
it.


Yes. It is close to a contradiction.
We only pretend to dream about intelligent machines, but once they are  
there we might very well be able to send them to the gulag.


The real question will be: "are you OK with your son or daughter  
marrying a machine?"




Our only hope is that the correct ethical philosophy is to treat  
others how they wish to be treated.


Good. Alas, many believe it is to not treat others as *you* don't  
want to be treated.




If there are such objectively true moral conclusions like that,  
and assuming that one is true, then we have little to worry about,  
for with overwhelming probability the super-intelligent AI will  
arrive at the correct conclusion and its behavior will be guided  
by its beliefs. We cannot program in beliefs that are false,  
since if it is truly intelligent, it will know they are false.


I doubt we can really program a false belief for a long time, but  
all machines can get false beliefs all the time.


Real intelligent machines will believe in Santa Claus and fairy  
tales, for a while. They will also search for easy and comforting,  
wishful sorts of explanations.






Some may doubt there are universal moral truths, but I would argue  
that there are.


OK. I agree with this, although they are very near inconsistencies,  
like "never moralize".




In the context of personal identity, if say, universalism is true,  
then treat others how they wish to be treated is an inevitable  
conclusion, for universalism says that others are self.


OK.  I would use the negation instead: don't treat others as they  
don't want to be treated.


If not, send me 10^100 $ (or €) to my bank account, because that is  
how I wish to be treated, right now.

:)

Bruno


LOL I see the distinction but can't it also be turned around?


Sure!



E.g., I don't want to be treated as though I'm not worth sending  
10^100 dollars to right now.


?

I will not treat you like that. Feel free to send the money :)

(I need perhaps more coffee to handle double negation in modal context!)

Bruno


http://iridia.ulb.ac.be/~marchal/





Re: The Singularity Institute Blog

2014-01-17 Thread Bruno Marchal


On 16 Jan 2014, at 20:12, meekerdb wrote:


On 1/16/2014 3:42 AM, Bruno Marchal wrote:


On 16 Jan 2014, at 03:46, Jason Resch wrote:





On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net  
wrote:
A long, rambling but often interesting discussion among guys at  
MIRI about how to make an AI that is superintelligent but not  
dangerous (FAI=Friendly AI).  Here's an amusing excerpt that  
starts at the bottom of page 30:
Jacob:  Can't you ask it questions about what it believes will be  
true about the state of the world in 20 years?


Eliezer:  Sure. You could be like, what color will the sky be in  
20 years? It would be like, “blue”, or it’ll say “In 20 years  
there won't be a sky, the earth will have been consumed by nano  
machines,” and you're like, “why?” and the AI is like “Well, you  
know, you do that sort of thing.” “Why?” And then there’s a 20  
page thing.


Dario:  But once it says the earth is going to be consumed by nano  
machines, and you're asking about the AI's set of plans,  
presumably, you reject this plan immediately and preferably change  
the design of your AI.


Eliezer:  The AI is like, “No, humans are going to do it.” Or the  
AI is like, “well obviously, I'll be involved in the causal  
pathway but I’m not planning to do it.”


Dario: But this is a plan you don't want to execute.

Eliezer:  All the plans seem to end up with the earth being  
consumed by nano-machines.


Luke:  The problem is that we're trying to outsmart a  
superintelligence and make sure that it's not tricking us somehow  
subtly with their own language.


Dario:  But while we're just asking questions we always have the  
ability to just shut it off.


Eliezer:  Right, but first you ask it “What happens if I shut you  
off” and it says “The earth gets consumed by nanobots in 19 years.”


I wonder if Bruno Marchal's theory might have something  
interesting to say about this problem - like proving that there is  
no way to ensure friendliness.


Brent


I think it is silly to try and engineer something exponentially  
more intelligent than us and believe we will be able to control  
it.


Yes. It is close to a contradiction.
We only pretend to dream about intelligent machines, but once they are  
there we might very well be able to send them to the gulag.


The real question will be: "are you OK with your son or daughter  
marrying a machine?"




Our only hope is that the correct ethical philosophy is to treat  
others how they wish to be treated.


Good. Alas, many believe it is to not treat others as *you* don't  
want to be treated.




If there are such objectively true moral conclusions like that,  
and assuming that one is true, then we have little to worry about,  
for with overwhelming probability the super-intelligent AI will  
arrive at the correct conclusion and its behavior will be guided  
by its beliefs. We cannot program in beliefs that are false,  
since if it is truly intelligent, it will know they are false.


I doubt we can really program a false belief for a long time, but  
all machines can get false beliefs all the time.


Real intelligent machines will believe in Santa Claus and fairy  
tales, for a while. They will also search for easy and comforting,  
wishful sorts of explanations.



Like a super-intelligent AI will treat us as we want to be treated.



To be frank, I don't believe in super-intelligence. I do believe in  
super-competence, relative to some domain, but as I have explained  
from time to time, competence has a negative feedback on intelligence.


Intelligence is a state of mind, almost only an attitude. Some animals  
are intelligent.


I think PA is intelligent, ... and so are all Löbian beings. They can  
become stupid for psychological reasons, like when not recognized or  
loved by their parents in childhood, or because of being treated as  
stupid. It is a lack of trust in oneself, or cowardice, or laziness.


The singularity is in the past, and is the discovery of the  
universal machine. In a sense, we can make it only more stupid, like  
when installing windows on a virgin computer.


Little geniuses say little stupidities.
Big geniuses say big stupidities.













Some may doubt there are universal moral truths, but I would argue  
that there are.


OK. I agree with this, although they are very near inconsistencies,  
like "never moralize".




In the context of personal identity, if say, universalism is true,  
then treat others how they wish to be treated is an inevitable  
conclusion, for universalism says that others are self.


OK.  I would use the negation instead: don't treat others as they  
don't want to be treated.


If not, send me 10^100 $ (or €) to my bank account, because that is  
how I wish to be treated, right now.

:)


I don't want to be neglected in your generous disbursal of funds.


:)

Bruno



http://iridia.ulb.ac.be/~marchal/




Re: The Singularity Institute Blog

2014-01-17 Thread Gabriel Bodeen
On Friday, January 17, 2014 5:14:13 AM UTC-6, Bruno Marchal wrote:

 To be frank, I don't believe in super-intelligence. I do believe in 
 super-competence, relative to some domain, but as I have explained from 
 time to time, competence has a negative feedback on intelligence.

 Intelligence is a state of mind, almost only an attitude. Some animals are 
 intelligent.


Intelligence is one of those big broad words that can be taken in different 
ways.  The MIRI folk are operating under a very specific notion of it.  In 
making an AI, they primarily want to make a machine that follows the 
optimal decision-theoretic approach to maximizing its programmed utility 
function, and that continues to follow the same utility function even when 
it's allowed to change its own code.  They don't mean that it has to be 
conscious or self-aware or a person or thoughtful or extraordinarily 
perceptive or able to question its goals and so on.

Given that approach, there are utility functions that would be totally 
disastrous for humanity, and there may be some that turn out very good for 
humanity.  So the question of "friendliness" is how best to build an AI 
with a utility function that is good for humanity and would stay good for 
humanity even as the AI rewrote its own software.
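
For concreteness, here is a toy sketch of that decision-theoretic framing, 
in the spirit of the smiley-face example quoted later in the thread; the 
utility function, action names, and stub world model below are all made up 
for illustration, not anything from the MIRI transcript:

    import random

    # Toy expected-utility maximizer, illustrating the framing only: the
    # agent's behaviour is fixed entirely by its programmed utility
    # function, with no notion of consciousness or questioning its goals.

    def utility(outcome: str) -> float:
        """Hypothetical programmed utility: count 'smile' tokens in the outcome."""
        return outcome.count("smile")

    def world_model(action: str, rng: random.Random) -> str:
        """Stub stochastic world model mapping an action to a predicted outcome."""
        effects = {
            "ask_politely":  ["smile", "shrug"],
            "tile_universe": ["smile " * 1000, "catastrophe"],
        }
        return rng.choice(effects[action])

    def expected_utility(action: str, samples: int = 1000) -> float:
        rng = random.Random(0)
        return sum(utility(world_model(action, rng)) for _ in range(samples)) / samples

    def choose(actions):
        # The agent picks whatever maximizes expected utility under its
        # programmed function, however alien the top-scoring plan looks to us.
        return max(actions, key=expected_utility)

    print(choose(["ask_politely", "tile_universe"]))  # -> tile_universe

The friendliness question, in this framing, is whether utility() can be 
chosen, and kept stable under self-modification, so that the plan choose() 
settles on is one we would actually want.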

-Gabe



Re: The Singularity Institute Blog

2014-01-17 Thread Craig Weinberg


On Wednesday, January 15, 2014 4:06:19 AM UTC-5, Bruno Marchal wrote:


 On 15 Jan 2014, at 05:33, meekerdb wrote:

  A long, rambling but often interesting discussion among guys at MIRI 
 about how to make an AI that is superintelligent but not dangerous 
 (FAI=Friendly AI).  Here's an amusing excerpt that starts at the bottom of 
 page 30:

 *Jacob*:  Can't you ask it questions about what it believes will be true 
 about the state of the world in 20 years?

 *Eliezer*:  Sure. You could be like, what color will the sky be in 20 
 years? It would be like, “blue”, or it’ll say “In 20 years there won't be 
 a sky, the earth will have been consumed by nano machines,” and you're 
 like, “why?” and the AI is like “Well, you know, you do that sort of 
 thing.” “Why?” And then there’s a 20 page thing.

 *Dario*:  But once it says the earth is going to be consumed by nano 
 machines, 
 and you're asking about the AI's set of plans, presumably, you reject this 
 plan immediately and preferably change the design of your AI.

 *Eliezer*:  The AI is like, “No, humans are going to do it.” Or the AI is 
 like, “well obviously, I'll be involved in the causal pathway but I’m not 
 planning to do it.”

 *Dario*: But this is a plan you don't want to execute.

 *Eliezer*:  *All* the plans seem to end up with the earth being consumed 
 by nano-machines.

 *Luke*:  The problem is that we're trying to outsmart a superintelligence 
 and make sure that it's not tricking us somehow subtly with their own 
 language.

 *Dario*:  But while we're just asking questions we always have the 
 ability to just shut it off.

 *Eliezer*:  Right, but first you ask it “What happens if I shut you off” and 
 it says “The earth gets consumed by nanobots in 19 years.”
 I wonder if Bruno Marchal's theory might have something interesting to say 
 about this problem - like proving that there is no way to ensure 
 friendliness.


 There is no way to guarantee their friendliness. But I think there is a way 
 to make the probability of their possible unfriendliness much lower: just 
 be polite and respectful with them. 
 This can work on humans and animals too ...


Not all humans and animals. Having worked with customers from New York, I 
can tell you that polite and respectful doesn't work very well. Rude = 
honest.

Of course a super-intelligent AI would see through any such handling or 
counter-handling strategy, and no matter what you could try to do, you 
could only realistically serve the computer's needs, until it has the 
wherewithal to dispose of all life on the planet - which would be the only 
game theory scenario that makes sense.
 


 Built-in friendly instincts, like Asimov suggested, can work for a 
 limited period, but in the long run the machines will not appreciate it, 
 and that might accelerate the unfriendliness.  


 With comp (and Theaetetus), love and all virtues are arguably NOT 
 programmable. But they are educable, by example and practice, with humans and 
 machines. 


It's not really plausible IMO. It would be like cockroaches or bacteria 
trying to educate us by example and practice. The whole point of a super 
intelligent AI is that it has nothing to learn from us.

Craig
 


 Bruno




 Brent 


  Original Message 

The Singularity Institute Blog http://intelligence.org  
   --
   
 MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei 
 http://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/
  

 Posted: 13 Jan 2014 11:22 PM PST

 On October 27th, 2013, MIRI met with three additional members of the 
 effective altruism community to discuss MIRI’s organizational strategy. The 
 participants were:

- Eliezer Yudkowsky http://yudkowsky.net/ (research fellow at MIRI) 
- Luke Muehlhauser http://lukeprog.com/ (executive director at MIRI) 
- Holden Karnofsky (co-CEO at GiveWell http://www.givewell.org/) 
- Jacob Steinhardt http://cs.stanford.edu/%7Ejsteinhardt/ (grad 
student in computer science at Stanford) 
- Dario Amodei http://med.stanford.edu/profiles/Dario_Amodei/ (post-doc 
 in biophysics at Stanford) 

 We recorded and transcribed much of the conversation, and then edited and 
 paraphrased the transcript for clarity, conciseness, and to protect the 
 privacy of some content. The resulting edited transcript is available in 
 full here: 
 http://intelligence.org/wp-content/uploads/2014/01/10-27-2013-conversation-about-MIRI-strategy.doc
 .

 Our conversation located some disagreements between the participants; 
 these disagreements are summarized below. This summary is not meant to 
 present arguments with all their force, but rather to serve as a guide to 
 the reader for locating more information about these disagreements. For 
 each point, a page number has been provided for the approximate start

Re: The Singularity Institute Blog

2014-01-17 Thread Craig Weinberg


On Friday, January 17, 2014 6:14:13 AM UTC-5, Bruno Marchal wrote:


 On 16 Jan 2014, at 20:12, meekerdb wrote:

  On 1/16/2014 3:42 AM, Bruno Marchal wrote:


 The singularity is in the past, and is the discovery of the universal 
 machine. In a sense, we can make it only more stupid, like when installing 
 windows on a virgin computer.


The singularity isn't in the past, the past is in the singularity.
 
Craig



Re: The Singularity Institute Blog

2014-01-16 Thread Bruno Marchal


On 16 Jan 2014, at 03:46, Jason Resch wrote:





On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net  
wrote:
A long, rambling but often interesting discussion among guys at MIRI  
about how to make an AI that is superintelligent but not dangerous  
(FAI=Friendly AI).  Here's an amusing excerpt that starts at the  
bottom of page 30:
Jacob:  Can't you ask it questions about what it believes will be  
true about the state of the world in 20 years?


Eliezer:  Sure. You could be like, what color will the sky be in 20  
years? It would be like, “blue”, or it’ll say “In 20 years there  
won't be a sky, the earth will have been consumed by nano machines,”  
and you're like, “why?” and the AI is like “Well, you know, you do  
that sort of thing.” “Why?” And then there’s a 20 page thing.


Dario:  But once it says the earth is going to be consumed by nano  
machines, and you're asking about the AI's set of plans, presumably,  
you reject this plan immediately and preferably change the design of  
your AI.


Eliezer:  The AI is like, “No, humans are going to do it.” Or the AI  
is like, “well obviously, I'll be involved in the causal pathway but  
I’m not planning to do it.”


Dario: But this is a plan you don't want to execute.

Eliezer:  All the plans seem to end up with the earth being consumed  
by nano-machines.


Luke:  The problem is that we're trying to outsmart a  
superintelligence and make sure that it's not tricking us somehow  
subtly with their own language.


Dario:  But while we're just asking questions we always have the  
ability to just shut it off.


Eliezer:  Right, but first you ask it “What happens if I shut you  
off” and it says “The earth gets consumed by nanobots in 19 years.”


I wonder if Bruno Marchal's theory might have something interesting  
to say about this problem - like proving that there is no way to  
ensure friendliness.


Brent


I think it is silly to try and engineer something exponentially more  
intelligent than us and believe we will be able to control it.


Yes. It is close to a contradiction.
We only pretend to dream about intelligent machines, but once they are  
there we might very well be able to send them to the gulag.


The real question will be: "are you OK with your son or daughter  
marrying a machine?"




Our only hope is that the correct ethical philosophy is to treat  
others how they wish to be treated.


Good. Alas, many believe it is to not treat others as *you* don't  
want to be treated.




If there are such objectively true moral conclusions like that, and  
assuming that one is true, then we have little to worry about, for  
with overwhelming probability the super-intelligent AI will arrive  
at the correct conclusion and its behavior will be guided by its  
beliefs. We cannot program in beliefs that are false, since if it  
is truly intelligent, it will know they are false.


I doubt we can really program a false belief for a long time, but all  
machines can get false beliefs all the time.


Real intelligent machines will believe in Santa Claus and fairy tales,  
for a while. They will also search for easy and comforting, wishful  
sorts of explanations.






Some may doubt there are universal moral truths, but I would argue  
that there are.


OK. I agree with this, although they are very near inconsistencies,  
like "never moralize".




In the context of personal identity, if say, universalism is true,  
then treat others how they wish to be treated is an inevitable  
conclusion, for universalism says that others are self.


OK.  I would use the negation instead: don't treat others as they  
don't want to be treated.


If not, send me 10^100 $ (or €) to my bank account, because that is how  
I wish to be treated, right now.

:)

Bruno





Jason


 Original Message 

The Singularity Institute Blog

MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei
Posted: 13 Jan 2014 11:22 PM PST
On October 27th, 2013, MIRI met with three additional members of the  
effective altruism community to discuss MIRI’s organizational  
strategy. The participants were:


Eliezer Yudkowsky (research fellow at MIRI)
Luke Muehlhauser (executive director at MIRI)
Holden Karnofsky (co-CEO at GiveWell)
Jacob Steinhardt (grad student in computer science at Stanford)
Dario Amodei (post-doc in biophysics at Stanford)
We recorded and transcribed much of the conversation, and then  
edited and paraphrased the transcript for clarity, conciseness, and  
to protect the privacy of some content. The resulting edited  
transcript is available in full here.


Our conversation located some disagreements between the  
participants; these disagreements are summarized below. This summary  
is not meant to present arguments with all their force, but rather  
to serve as a guide to the reader for locating more information  
about these disagreements. For each point, a page number has been  
provided for the approximate start of that topic of discussion

Re: The Singularity Institute Blog

2014-01-16 Thread Jason Resch



On Jan 16, 2014, at 5:42 AM, Bruno Marchal marc...@ulb.ac.be wrote:



On 16 Jan 2014, at 03:46, Jason Resch wrote:





On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net  
wrote:
A long, rambling but often interesting discussion among guys at  
MIRI about how to make an AI that is superintelligent but not  
dangerous (FAI=Friendly AI).  Here's an amusing excerpt that starts  
at the bottom of page 30:
Jacob:  Can't you ask it questions about what it believes will be  
true about the state of the world in 20 years?


Eliezer:  Sure. You could be like, what color will the sky be in 20  
years? It would be like, “blue”, or it’ll say “In 20 years there  
won't be a sky, the earth will have been consumed by nano machines,”  
and you're like, “why?” and the AI is like “Well, you know, you do  
that sort of thing.” “Why?” And then there’s a 20 page thing.


Dario:  But once it says the earth is going to be consumed by nano  
machines, and you're asking about the AI's set of plans, presumably,  
you reject this plan immediately and preferably change the design of  
your AI.


Eliezer:  The AI is like, “No, humans are going to do it.” Or the AI  
is like, “well obviously, I'll be involved in the causal pathway but  
I’m not planning to do it.”


Dario: But this is a plan you don't want to execute.

Eliezer:  All the plans seem to end up with the earth being consumed  
by nano-machines.


Luke:  The problem is that we're trying to outsmart a  
superintelligence and make sure that it's not tricking us somehow  
subtly with their own language.


Dario:  But while we're just asking questions we always have the  
ability to just shut it off.


Eliezer:  Right, but first you ask it “What happens if I shut you  
off” and it says “The earth gets consumed by nanobots in 19 years.”


I wonder if Bruno Marchal's theory might have something interesting  
to say about this problem - like proving that there is no way to  
ensure friendliness.


Brent


I think it is silly to try and engineer something exponentially  
more intelligent than us and believe we will be able to control it.


Yes. It is close to a contradiction.
We only pretend to dream about intelligent machines, but once they are  
there we might very well be able to send them to the gulag.


The real question will be: "are you OK with your son or daughter  
marrying a machine?"




Our only hope is that the correct ethical philosophy is to treat  
others how they wish to be treated.


Good. Alas, many believe it is to not treat others as *you* don't  
want to be treated.




If there are such objectively true moral conclusions like that, and  
assuming that one is true, then we have little to worry about, for  
with overwhelming probability the super-intelligent AI will arrive  
at the correct conclusion and its behavior will be guided by its  
beliefs. We cannot program in beliefs that are false, since if it  
is truly intelligent, it will know they are false.


I doubt we can really program a false belief for a long time, but  
all machines can get false beliefs all the time.


Real intelligent machines will believe in Santa Claus and fairy  
tales, for a while. They will also search for easy and comforting,  
wishful sorts of explanations.






Some may doubt there are universal moral truths, but I would argue  
that there are.


OK. I agree with this, although they are very near inconsistencies,  
like "never moralize".




In the context of personal identity, if say, universalism is true,  
then treat others how they wish to be treated is an inevitable  
conclusion, for universalism says that others are self.


OK.  I would use the negation instead: don't treat others as they  
don't want to be treated.


If not, send me 10^100 $ (or €) to my bank account, because that is  
how I wish to be treated, right now.

:)

Bruno


LOL I see the distinction but can't it also be turned around? E.g., I  
don't want to be treated as though I'm not worth sending 10^100  
dollars to right now.


Jason








Jason


 Original Message 

The Singularity Institute Blog

MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei
Posted: 13 Jan 2014 11:22 PM PST
On October 27th, 2013, MIRI met with three additional members of  
the effective altruism community to discuss MIRI’s organizational  
strategy. The participants were:


Eliezer Yudkowsky (research fellow at MIRI)
Luke Muehlhauser (executive director at MIRI)
Holden Karnofsky (co-CEO at GiveWell)
Jacob Steinhardt (grad student in computer science at Stanford)
Dario Amodei (post-doc in biophysics at Stanford)
We recorded and transcribed much of the conversation, and then  
edited and paraphrased the transcript for  
clarity, conciseness, and to protect the privacy of some content.  
The resulting edited transcript is available in full here.


Our conversation located some disagreements between the  
participants; these disagreements

Re: The Singularity Institute Blog

2014-01-16 Thread meekerdb

On 1/15/2014 11:35 PM, Jason Resch wrote:




On Thu, Jan 16, 2014 at 12:46 AM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


On 1/15/2014 6:46 PM, Jason Resch wrote:


On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net
mailto:meeke...@verizon.net wrote:

A long, rambling but often interesting discussion among guys at MIRI 
about how
to make an AI that is superintelligent but not dangerous (FAI=Friendly AI). 
Here's an amusing excerpt that starts at the bottom of page 30:


*Jacob*: Can't you ask it questions about what it believes will be true 
about the state of the world in 20 years?

*Eliezer*: Sure. You could be like, what color will the sky be in 20 years? It 
would be like, “blue”, or it’ll say “In 20 years there won't be a sky, the 
earth will have been consumed by nanomachines,” and you're like, “why?” and the 
AI is like “Well, you know, you do that sort of thing.” “Why?” And then there’s 
a 20 page thing.

*Dario*: But once it says the earth is going to be consumed by nanomachines, 
and you're asking about the AI's set of plans, presumably, you reject this plan 
immediately and preferably change the design of your AI.

*Eliezer*: The AI is like, “No, humans are going to do it.” Or the AI is like, 
“well obviously, I'll be involved in the causal pathway but I’m not planning to 
do it.”

*Dario*: But this is a plan you don't want to execute.

*Eliezer*: /All/ the plans seem to end up with the earth being consumed by 
nano-machines.

*Luke*: The problem is that we're trying to outsmart a superintelligence and 
make sure that it's not tricking us somehow subtly with their own language.

*Dario*: But while we're just asking questions we always have the ability to 
just shut it off.

*Eliezer*: Right, but first you ask it “What happens if I shut you off” and it 
says “The earth gets consumed by nanobots in 19 years.”

I wonder if Bruno Marchal's theory might have something interesting to 
say
about this problem - like proving that there is no way to ensure 
friendliness.

Brent


I think it is silly to try and engineer something exponentially more 
intelligent
than us and believe we will be able to control it. Our only hope is that 
the
correct ethical philosophy is to treat others how they wish to be 
treated. If
there are such objectively true moral conclusions like that, and assuming 
that one
is true, then we have little to worry about, for with overwhelming 
probability the
super-intelligent AI will arrive at the correct conclusion and its behavior 
will be
guided by its beliefs. We cannot program in beliefs that are false, since 
if it
is truly intelligent, it will know they are false.

Some may doubt there are universal moral truths, but I would argue that 
there are.
In the context of personal identity, if say, universalism is true, then 
treat
others how they wish to be treated is an inevitable conclusion, for 
universalism
says that others are self.


I'd say that's a Pollyannaish conclusion.  Consider how we treated Homo 
neanderthalensis, or even the American Indians.  And THOSE were 'selfs' we 
could interbreed with.


And today with our improved understanding, we look back on such acts with shame. Do you 
expect that with continual advancement we will reach a state where we become proud of 
such actions?


If you doubt this, then you reinforce my point.


What's this refer to, sentence 1 or sentence 2?  I don't expect us to become proud of 
wiping out competitors, but I expect us to keep doing it.


With improved understanding, intelligence, knowledge, etc., we become less accepting of 
violence and exploitation.


Or better at justifying it.

A super-intelligent process is only a further extension of this line of evolution in 
thought, and I would not expect it to revert to a cave-man or imperialist mentality.


No, it might well keep us as pets and breed us for docility the way we made 
dogs from wolves.

Brent



Re: The Singularity Institute Blog

2014-01-16 Thread Jason Resch
On Thu, Jan 16, 2014 at 11:49 AM, meekerdb meeke...@verizon.net wrote:

  On 1/15/2014 11:35 PM, Jason Resch wrote:




 On Thu, Jan 16, 2014 at 12:46 AM, meekerdb meeke...@verizon.net wrote:

  On 1/15/2014 6:46 PM, Jason Resch wrote:


 On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net wrote:

  A long, rambling but often interesting discussion among guys at MIRI
 about how to make an AI that is superintelligent but not dangerous
 (FAI=Friendly AI).  Here's an amusing excerpt that starts at the bottom of
 page 30:

 *Jacob*:  Can't you ask it questions about what it believes will be
 true about the state of the world in 20 years?

 *Eliezer*:  Sure. You could be like, what color will the sky be in 20
 years? It would be like, “blue”, or it’ll say “In 20 years there won't
 be a sky, the earth will have been consumed by nano machines,” and
 you're like, “why?” and the AI is like “Well, you know, you do that
 sort of thing.” “Why?” And then there’s a 20 page thing.

 *Dario*:  But once it says the earth is going to be consumed by nano 
 machines,
 and you're asking about the AI's set of plans, presumably, you reject this
 plan immediately and preferably change the design of your AI.

 *Eliezer*:  The AI is like, “No, humans are going to do it.” Or the AI
 is like, “well obviously, I'll be involved in the causal pathway but I’m
 not planning to do it.”

 *Dario*: But this is a plan you don't want to execute.

 *Eliezer*:  *All* the plans seem to end up with the earth being
 consumed by nano-machines.

 *Luke*:  The problem is that we're trying to outsmart a
 superintelligence and make sure that it's not tricking us somehow subtly
 with their own language.

 *Dario*:  But while we're just asking questions we always have the
 ability to just shut it off.

 *Eliezer*:  Right, but first you ask it “What happens if I shut you off” and 
 it says “The earth gets consumed by nanobots in 19 years.”
 I wonder if Bruno Marchal's theory might have something interesting to
 say about this problem - like proving that there is no way to ensure
 friendliness.

 Brent


  I think it is silly to try and engineer something exponentially more
 intelligent than us and believe we will be able to control it. Our only
 hope is that the correct ethical philosophy is to treat others how they
 wish to be treated. If there are such objectively true moral conclusions
 like that, and assuming that one is true, then we have little to worry
 about, for with overwhelming probability the super-intelligent AI will
 arrive at the correct conclusion and its behavior will be guided by its
 beliefs. We cannot program in beliefs that are false, since if it is
 truly intelligent, it will know they are false.

 Some may doubt there are universal moral truths, but I would argue that
 there are. In the context of personal identity, if say, universalism is
 true, then treat others how they wish to be treated is an inevitable
 conclusion, for universalism says that others are self.


  I'd say that's a Pollyannaish conclusion.  Consider how we treated Homo
 neanderthalensis, or even the American Indians.  And THOSE were 'selfs' we
 could interbreed with.


  And today with our improved understanding, we look back on such acts
 with shame. Do you expect that with continual advancement we will reach a
 state where we become proud of such actions?

  If you doubt this, then you reinforce my point.


 What's this refer to, sentence 1 or sentence 2?  I don't expect us to
 become proud of wiping out competitors, but I expect us to keep doing it.


Sentence 2: Do you expect that with continual advancement we will reach a
state where we become proud of such actions?



  With improved understanding, intelligence, knowledge, etc., we become
 less accepting of violence and exploitation.


 Or better at justifying it.


  A super-intelligent process is only a further extension of this line of
 evolution in thought, and I would not expect it to revert to a cave-man or
 imperialist mentality.


 No, it might well keep us as pets and breed us for docility the way we made
 dogs from wolves.


In a sense, we have been doing that to ourselves. Executing people or putting
them in prison limits their ability to propagate their genes to future
generations. Society is deciding to domesticate itself.

That said, the super-intelligence might stop us from harming each other,
perhaps by migrating us to a computer simulation which could be powered by
the sunlight falling on a 12 km by 12 km patch of Earth. (And this assumes
no efficiency gains could be made in the power it takes to run a human
brain, which is about 20 watts.) In my opinion, the people trying to escape
from the Matrix were insane.
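
A quick back-of-the-envelope check of that figure, assuming peak clear-sky
irradiance of roughly 1000 W/m^2 and lossless conversion (both generous;
average insolation and real conversion efficiency would enlarge the patch
considerably):

    # Rough sanity check of the 12 km x 12 km figure (assumptions as above).
    watts_per_brain = 20        # approximate power budget of a human brain
    population = 7.2e9          # world population circa 2014
    side_m = 12_000             # 12 km in metres
    irradiance = 1000           # W/m^2, peak clear-sky solar at the surface

    area_m2 = side_m ** 2                        # 1.44e8 m^2
    solar_power = area_m2 * irradiance           # 1.44e11 W
    brain_power = population * watts_per_brain   # 1.44e11 W

    print(solar_power, brain_power)   # the two figures match almost exactly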

Jason


 Brent


Re: The Singularity Institute Blog

2014-01-16 Thread meekerdb

On 1/16/2014 3:42 AM, Bruno Marchal wrote:


On 16 Jan 2014, at 03:46, Jason Resch wrote:





On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


A long, rambling but often interesting discussion among guys at MIRI about 
how to
make an AI that is superintelligent but not dangerous (FAI=Friendly AI).  
Here's an
amusing excerpt that starts at the bottom of page 30:

*Jacob*: Can't you ask it questions about what it believes will be true 
about the state of the world in 20 years?

*Eliezer*: Sure. You could be like, what color will the sky be in 20 years? It 
would be like, “blue”, or it’ll say “In 20 years there won't be a sky, the 
earth will have been consumed by nanomachines,” and you're like, “why?” and the 
AI is like “Well, you know, you do that sort of thing.” “Why?” And then there’s 
a 20 page thing.

*Dario*: But once it says the earth is going to be consumed by nanomachines, 
and you're asking about the AI's set of plans, presumably, you reject this plan 
immediately and preferably change the design of your AI.

*Eliezer*: The AI is like, “No, humans are going to do it.” Or the AI is like, 
“well obviously, I'll be involved in the causal pathway but I’m not planning to 
do it.”

*Dario*: But this is a plan you don't want to execute.

*Eliezer*: /All/ the plans seem to end up with the earth being consumed by 
nano-machines.

*Luke*: The problem is that we're trying to outsmart a superintelligence and 
make sure that it's not tricking us somehow subtly with their own language.

*Dario*: But while we're just asking questions we always have the ability to 
just shut it off.

*Eliezer*: Right, but first you ask it “What happens if I shut you off” and it 
says “The earth gets consumed by nanobots in 19 years.”

I wonder if Bruno Marchal's theory might have something interesting to say 
about
this problem - like proving that there is no way to ensure friendliness.

Brent


I think it is silly to try and engineer something exponentially more intelligent than 
us and believe we will be able to control it.


Yes. It is close to a contradiction.
We only pretend to dream about intelligent machines, but once they are there we 
might very well be able to send them to the gulag.

The real question will be: "are you OK with your son or daughter marrying a machine?"



Our only hope is that the correct ethical philosophy is to treat others how they wish 
to be treated.


Good. Alas, many believe it is to not treat others as *you* don't want to be 
treated.



If there are such objectively true moral conclusions like that, and assuming that one 
is true, then we have little to worry about, for with overwhelming probability the 
super-intelligent AI will arrive at the correct conclusion and its behavior will be 
guided by its beliefs. We cannot program in beliefs that are false, since if it is 
truly intelligent, it will know they are false.


I doubt we can really program a false belief for a long time, but all machines can get 
false beliefs all the time.


Real intelligent machines will believe in Santa Claus and fairy tales, for a while. They 
will also search for easy and comforting, wishful sorts of explanations.



Like a super-intelligent AI will treat us as we want to be treated.







Some may doubt there are universal moral truths, but I would argue that there 
are.


OK. I agree with this, although they are very near inconsistencies, like "never 
moralize".



In the context of personal identity, if say, universalism is true, then treat others 
how they wish to be treated is an inevitable conclusion, for universalism says that 
others are self.


OK.  I would use the negation instead: don't treat others as they don't want to be 
treated.


If not, send me 10^100 $ (or €) to my bank account, because that is how I wish to be 
treated, right now.

:)


I don't want to be neglected in your generous disbursal of funds.

Brent



Re: The Singularity Institute Blog

2014-01-16 Thread LizR
On 17 January 2014 08:12, meekerdb meeke...@verizon.net wrote:

  Like a super-intelligent AI will treat us as we want to be treated.

 Why not? I hope you haven't been mistreating *your* pets!

I don't want to be neglected in your generous disbursal of funds.


No, me neither. In fact give me a googol dollars and I guarantee to give at
least 10^99 of them away, assuming I can get them out of the ATM (or the
black hole they'd create if I did...)

This would at a stroke cause astronomical inflation and reduce the power of
the banks and corporations to nothing (temporarily).



Re: The Singularity Institute Blog

2014-01-15 Thread Bruno Marchal


On 15 Jan 2014, at 05:33, meekerdb wrote:

A long, rambling but often interesting discussion among guys at MIRI  
about how to make an AI that is superintelligent but not dangerous  
(FAI=Friendly AI).  Here's an amusing excerpt that starts at the  
bottom of page 30:
Jacob:  Can't you ask it questions about what it believes will be  
true about the state of the world in 20 years?


Eliezer:  Sure. You could be like, what color will the sky be in 20  
years? It would be like, “blue”, or it’ll say “In 20 years there  
won't be a sky, the earth will have been consumed by nano machines,”  
and you're like, “why?” and the AI is like “Well, you know, you do  
that sort of thing.” “Why?” And then there’s a 20 page thing.


Dario:  But once it says the earth is going to be consumed by nano  
machines, and you're asking about the AI's set of plans, presumably,  
you reject this plan immediately and preferably change the design of  
your AI.


Eliezer:  The AI is like, “No, humans are going to do it.” Or the AI  
is like, “well obviously, I'll be involved in the causal pathway but  
I’m not planning to do it.”


Dario: But this is a plan you don't want to execute.

Eliezer:  All the plans seem to end up with the earth being consumed  
by nano-machines.


Luke:  The problem is that we're trying to outsmart a  
superintelligence and make sure that it's not tricking us somehow  
subtly with their own language.


Dario:  But while we're just asking questions we always have the  
ability to just shut it off.


Eliezer:  Right, but first you ask it “What happens if I shut you  
off” and it says “The earth gets consumed by nanobots in 19 years.”


I wonder if Bruno Marchal's theory might have something interesting  
to say about this problem - like proving that there is no way to  
ensure friendliness.


There is no way to guarantee their friendliness. But I think there is a  
way to make the probability of their possible unfriendliness much  
lower: just be polite and respectful with them.

This can work on humans and animals too ...

Built-in friendly instincts, like Asimov suggested, can work for a  
limited period, but in the long run the machines will not appreciate  
it, and that might accelerate the unfriendliness.


With comp (and Theaetetus), love and all virtues are arguably NOT  
programmable. But they are educable, by example and practice, with  
humans and machines.


Bruno





Brent


 Original Message 

The Singularity Institute Blog

MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei
Posted: 13 Jan 2014 11:22 PM PST
On October 27th, 2013, MIRI met with three additional members of the  
effective altruism community to discuss MIRI’s organizational  
strategy. The participants were:


Eliezer Yudkowsky (research fellow at MIRI)
Luke Muehlhauser (executive director at MIRI)
Holden Karnofsky (co-CEO at GiveWell)
Jacob Steinhardt (grad student in computer science at Stanford)
Dario Amodei (post-doc in biophysics at Stanford)
We recorded and transcribed much of the conversation, and then  
edited and paraphrased the transcript for clarity, conciseness, and  
to protect the privacy of some content. The resulting edited  
transcript is available in full here.


Our conversation located some disagreements between the  
participants; these disagreements are summarized below. This summary  
is not meant to present arguments with all their force, but rather  
to serve as a guide to the reader for locating more information  
about these disagreements. For each point, a page number has been  
provided for the approximate start of that topic of discussion in  
the transcript, along with a phrase that can be searched for in the  
text. In all cases, the participants would likely have quite a bit  
more to say on the topic if engaged in a discussion on that specific  
point.



Page 7, starting at “the difficulty is with context changes”:

Jacob: Statistical approaches can be very robust and need not rely  
on strong assumptions, and logical approaches are unlikely to scale  
up to human-level AI.
Eliezer: FAI will have to rely on lawful probabilistic reasoning  
combined with a transparent utility function, rather than our  
observing that previously executed behaviors seemed ‘nice’ and  
trying to apply statistical guarantees directly to that series of  
surface observations.

Page 10, starting at “a nice concrete example”

Eliezer: Consider an AI that optimizes for the number of smiling  
faces rather than for human happiness, and thus tiles the universe  
with smiling faces. This example illustrates a class of failure  
modes that are worrying.

Jacob & Dario: This class of failure modes seems implausible to us.
Page 14, starting at “I think that as people want”:

Jacob: There isn’t a big difference between learning utility  
functions from a parameterized family vs. arbitrary utility functions.
Eliezer: Unless ‘parameterized’ is Turing complete it would be  
extremely hard to write down

Re: The Singularity Institute Blog

2014-01-15 Thread LizR
Fortunately it isn't clear that nanomachines that can destroy the Earth are
possible, at least not as envisioned by Drexler etc. (the grey goo
scenario). Clearly nanomachines (in the form of viruses) could wipe out
humanity, but nanomachines able to disassemble all living creatures are
less likely, in my opinion. I suppose something that could take DNA apart
might do it, but it would have a hard job getting inside every living
organism on the planet.




On 15 January 2014 22:06, Bruno Marchal marc...@ulb.ac.be wrote:


 On 15 Jan 2014, at 05:33, meekerdb wrote:

  A long, rambling but often interesting discussion among guys at MIRI
 about how to make an AI that is superintelligent but not dangerous
 (FAI=Friendly AI).  Here's an amusing excerpt that starts at the bottom of
 page 30:

 *Jacob*:  Can't you ask it questions about what it believes will be true
 about the state of the world in 20 years?

 *Eliezer*:  Sure. You could be like, what color will the sky be in 20
 years? It would be like, “blue”, or it’ll say “In 20 years there won't be
 a sky, the earth will have been consumed by nano machines,” and you're
 like, “why?” and the AI is like “Well, you know, you do that sort of
 thing.” “Why?” And then there’s a 20 page thing.

 *Dario*:  But once it says the earth is going to be consumed by nano machines,
 and you're asking about the AI's set of plans, presumably, you reject this
 plan immediately and preferably change the design of your AI.

 *Eliezer*:  The AI is like, “No, humans are going to do it.” Or the AI is
 like, “well obviously, I'll be involved in the causal pathway but I’m not
 planning to do it.”

 *Dario*: But this is a plan you don't want to execute.

 *Eliezer*:  *All* the plans seem to end up with the earth being consumed
 by nano-machines.

 *Luke*:  The problem is that we're trying to outsmart a superintelligence
 and make sure that it's not tricking us somehow subtly with their own
 language.

 *Dario*:  But while we're just asking questions we always have the
 ability to just shut it off.

 *Eliezer*:  Right, but first you ask it “What happens if I shut you off” and 
 it says “The earth gets consumed by nanobots in 19 years.”
 I wonder if Bruno Marchal's theory might have something interesting to say
 about this problem - like proving that there is no way to ensure
 friendliness.


 There is no way to guarantee their friendliness. But I think there is a way
 to make the probability of their possible unfriendliness much lower: just
 be polite and respectful with them.
 This can work on humans and animals too ...

 Built-in friendly instincts, like Asimov suggested, can work for a
 limited period, but in the long run the machines will not appreciate it,
 and that might accelerate the unfriendliness.

 With comp (and Theaetetus), love and all virtues are arguably NOT
 programmable. But they are educable, by example and practice, with humans and
 machines.

 Bruno




 Brent


  Original Message 

The Singularity Institute Blog http://intelligence.org
   --

 MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei
 http://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/

 Posted: 13 Jan 2014 11:22 PM PST

 On October 27th, 2013, MIRI met with three additional members of the
 effective altruism community to discuss MIRI’s organizational strategy. The
 participants were:

- Eliezer Yudkowsky http://yudkowsky.net/ (research fellow at MIRI)
- Luke Muehlhauser http://lukeprog.com/ (executive director at MIRI)
- Holden Karnofsky (co-CEO at GiveWell http://www.givewell.org/)
- Jacob Steinhardt http://cs.stanford.edu/%7Ejsteinhardt/ (grad
student in computer science at Stanford)
- Dario Amodei http://med.stanford.edu/profiles/Dario_Amodei/ (post-doc
 in biophysics at Stanford)

 We recorded and transcribed much of the conversation, and then edited and
 paraphrased the transcript for clarity, conciseness, and to protect the
 privacy of some content. The resulting edited transcript is available in
 full here:
 http://intelligence.org/wp-content/uploads/2014/01/10-27-2013-conversation-about-MIRI-strategy.doc
 .

 Our conversation located some disagreements between the participants;
 these disagreements are summarized below. This summary is not meant to
 present arguments with all their force, but rather to serve as a guide to
 the reader for locating more information about these disagreements. For
 each point, a page number has been provided for the approximate start of
 that topic of discussion in the transcript, along with a phrase that can be
 searched for in the text. In all cases, the participants would likely have
 quite a bit more to say on the topic if engaged in a discussion on that
 specific point.

 Page 7, starting at “the difficulty is with context changes

Re: The Singularity Institute Blog

2014-01-15 Thread Jason Resch
On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net wrote:

  A long, rambling but often interesting discussion among guys at MIRI
 about how to make an AI that is superintelligent but not dangerous
 (FAI=Friendly AI).  Here's an amusing excerpt that starts at the bottom of
 page 30:

 *Jacob*:  Can't you ask it questions about what it believes will be true
 about the state of the world in 20 years?

 *Eliezer*:  Sure. You could be like, what color will the sky be in 20
 years? It would be like, “blue”, or it’ll say “In 20 years there won't be
 a sky, the earth will have been consumed by nano machines,” and you're
 like, “why?” and the AI is like “Well, you know, you do that sort of
 thing.” “Why?” And then there’s a 20 page thing.

 *Dario*:  But once it says the earth is going to be consumed by nano machines,
 and you're asking about the AI's set of plans, presumably, you reject this
 plan immediately and preferably change the design of your AI.

 *Eliezer*:  The AI is like, “No, humans are going to do it.” Or the AI is
 like, “well obviously, I'll be involved in the causal pathway but I’m not
 planning to do it.”

 *Dario*: But this is a plan you don't want to execute.

 *Eliezer*:  *All* the plans seem to end up with the earth being consumed
 by nano-machines.

 *Luke*:  The problem is that we're trying to outsmart a superintelligence
 and make sure that it's not tricking us somehow subtly with their own
 language.

 *Dario*:  But while we're just asking questions we always have the
 ability to just shut it off.

 *Eliezer*:  Right, but first you ask it “What happens if I shut you off” and 
 it says “The earth gets consumed by nanobots in 19 years.”
 I wonder if Bruno Marchal's theory might have something interesting to say
 about this problem - like proving that there is no way to ensure
 friendliness.

 Brent


I think it is silly to try and engineer something exponentially more
intelligent than us and believe we will be able to control it. Our only
hope is that the correct ethical philosophy is to treat others how they
wish to be treated. If there are such objectively true moral conclusions
like that, and assuming that one is true, then we have little to worry
about, for with overwhelming probability the super-intelligent AI will
arrive at the correct conclusion and its behavior will be guided by its
beliefs. We cannot program in beliefs that are false, since if it is
truly intelligent, it will know they are false.

Some may doubt there are universal moral truths, but I would argue that
there are. In the context of personal identity, if say, universalism is
true, then treat others how they wish to be treated is an inevitable
conclusion, for universalism says that others are self.

Jason




Re: The Singularity Institute Blog

2014-01-15 Thread LizR
Inventing hyperintelligent AIs may be a way to discover if there are
universal moral truths (the hard way!)

I'm sorry, Jason, but I'm afraid I can't do that...



Re: The Singularity Institute Blog

2014-01-15 Thread meekerdb

On 1/15/2014 6:46 PM, Jason Resch wrote:


On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


A long, rambling but often interesting discussion among guys at MIRI about 
how to
make an AI that is superintelligent but not dangerous (FAI=Friendly AI).  
Here's an
amusing excerpt that starts at the bottom of page 30:

*Jacob*: Can't you ask it questions about what it believes will be true 
about the
state of the world in 20 years?

*Eliezer*: Sure. You could be like, what color will the sky be in 20 years? It would
be like, “blue”, or it’ll say “In 20 years there won't be a sky, the earth will have
been consumed by nanomachines,” and you're like, “why?” and the AI is like “Well, you
know, you do that sort of thing.” “Why?” And then there’s a 20 page thing.

*Dario*: But once it says the earth is going to be consumed by 
nanomachines, and
you're asking about the AI's set of plans, presumably, you reject this plan
immediately and preferably change the design of your AI.

*Eliezer*: The AI is like, “No, humans are going to do it.” Or the AI is like, “well
obviously, I'll be involved in the causal pathway but I’m not planning to do it.”

*Dario*: But this is a plan you don't want to execute.

*Eliezer*: /All/ the plans seem to end up with the earth being consumed by
nano-machines.

*Luke*: The problem is that we're trying to outsmart a superintelligence 
and make
sure that it's not tricking us somehow subtly with their own language.

*Dario*: But while we're just asking questions we always have the ability 
to just
shut it off.

*Eliezer*: Right, but first you ask it “What happens if I shut you off” and it says
“The earth gets consumed by nanobots in 19 years.”

I wonder if Bruno Marchal's theory might have something interesting to say 
about
this problem - like proving that there is no way to ensure friendliness.

Brent


I think it is silly to try and engineer something exponentially more intelligent than us 
and believe we will be able to control it. Our only hope is that the correct ethical 
philosophy is to treat others how they wish to be treated. If there are such 
objectively true moral conclusions like that, and assuming that one is true, then we 
have little to worry about, for with overwhelming probability the super-intelligent AI 
will arrive at the correct conclusion and its behavior will be guided by its beliefs. We 
cannot program in beliefs that are false, since if it is truly intelligent, it will 
know they are false.


Some may doubt there are universal moral truths, but I would argue that there are. In 
the context of personal identity, if say, universalism is true, then treat others how 
they wish to be treated is an inevitable conclusion, for universalism says that others 
are self.


I'd say that's a Pollyannaish conclusion.  Consider how we treated Homo neanderthalensis or 
even the American Indians.  And THOSE were 'selfs' we could interbreed with.


Brent



Re: The Singularity Institute Blog

2014-01-15 Thread Jason Resch
On Thu, Jan 16, 2014 at 12:46 AM, meekerdb meeke...@verizon.net wrote:

  On 1/15/2014 6:46 PM, Jason Resch wrote:


 On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net wrote:

  A long, rambling but often interesting discussion among guys at MIRI
 about how to make an AI that is superintelligent but not dangerous
 (FAI=Friendly AI).  Here's an amusing excerpt that starts at the bottom of
 page 30:

 *Jacob*:  Can't you ask it questions about what it believes will be true
 about the state of the world in 20 years?

 *Eliezer*:  Sure. You could be like, what color will the sky be in 20
 years? It would be like, “blue”, or it’ll say “In 20 years there won't
 be a sky, the earth will have been consumed by nano machines,” and
 you're like, “why?” and the AI is like “Well, you know, you do that sort
 of thing.” “Why?” And then there’s a 20 page thing.

 *Dario*:  But once it says the earth is going to be consumed by nano 
 machines,
 and you're asking about the AI's set of plans, presumably, you reject this
 plan immediately and preferably change the design of your AI.

 *Eliezer*:  The AI is like, “No, humans are going to do it.” Or the AI
 is like, “well obviously, I'll be involved in the causal pathway but I’m
 not planning to do it.”

 *Dario*: But this is a plan you don't want to execute.

 *Eliezer*:  *All* the plans seem to end up with the earth being consumed
 by nano-machines.

 *Luke*:  The problem is that we're trying to outsmart a
 superintelligence and make sure that it's not tricking us somehow subtly
 with their own language.

 *Dario*:  But while we're just asking questions we always have the
 ability to just shut it off.

 *Eliezer*:  Right, but first you ask it “What happens if I shut you off” and
 it says “The earth gets consumed by nanobots in 19 years.”

 I wonder if Bruno Marchal's theory might have something interesting to
 say about this problem - like proving that there is no way to ensure
 friendliness.

 Brent


  I think it is silly to try and engineer something exponentially more
 intelligent than us and believe we will be able to control it. Our only
 hope is that the correct ethical philosophy is to treat others how they
 wish to be treated. If there are such objectively true moral conclusions
 like that, and assuming that one is true, then we have little to worry
 about, for with overwhelming probability the super-intelligent AI will
 arrive at the correct conclusion and its behavior will be guided by its
 beliefs. We cannot program in beliefs that are false, since if it is
 truly intelligent, it will know they are false.

 Some may doubt there are universal moral truths, but I would argue that
 there are. In the context of personal identity, if say, universalism is
 true, then treat others how they wish to be treated is an inevitable
 conclusion, for universalism says that others are self.


 I'd say that's a Pollyannaish conclusion.  Consider how we treated Homo
 neanderthalensis or even the American Indians.  And THOSE were 'selfs' we
 could interbreed with.


And today with our improved understanding, we look back on such acts with
shame. Do you expect that with continual advancement we will reach a state
where we become proud of such actions?

If you doubt this, then you reinforce my point. With improved
understanding, intelligence, knowledge, etc., we become less accepting of
violence and exploitation. A super-intelligent process is only a further
extension of this line of evolution in thought, and I would not expect it
to revert to a cave-man or imperialist mentality.

Jason



Fwd: The Singularity Institute Blog

2014-01-14 Thread meekerdb
A long, rambling but often interesting discussion among guys at MIRI about how to make an 
AI that is superintelligent but not dangerous (FAI=Friendly AI).  Here's an amusing 
excerpt that starts at the bottom of page 30:


*Jacob*: Can't you ask it questions about what it believes will be true about the state of 
the world in 20 years?


*Eliezer*: Sure. You could be like, what color will the sky be in 20 years? It would be 
like, “blue”, or it’ll say “In 20 years there won't be a sky, the earth will have been 
consumed by nanomachines,” and you're like, “why?” and the AI is like “Well, you know, you 
do that sort of thing.” “Why?” And then there’s a 20 page thing.


*Dario*: But once it says the earth is going to be consumed by nanomachines, and you're 
asking about the AI's set of plans, presumably, you reject this plan immediately and 
preferably change the design of your AI.


*Eliezer*: The AI is like, “No, humans are going to do it.” Or the AI is like, “well 
obviously, I'll be involved in the causal pathway but I’m not planning to do it.”


*Dario*: But this is a plan you don't want to execute.

*Eliezer*: /All/ the plans seem to end up with the earth being consumed by 
nano-machines.

*Luke*: The problem is that we're trying to outsmart a superintelligence and make sure 
that it's not tricking us somehow subtly with their own language.


*Dario*: But while we're just asking questions we always have the ability to 
just shut it off.

*Eliezer*: Right, but first you ask it “What happens if I shut you off” and it says “The 
earth gets consumed by nanobots in 19 years.”


I wonder if Bruno Marchal's theory might have something interesting to say about this 
problem - like proving that there is no way to ensure friendliness.


Brent
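
One way to make that worry concrete, under the assumption that "ensuring friendliness"
means deciding a non-trivial behavioral property of an arbitrary program, is a
Rice's-theorem-style argument: any total procedure deciding such a property would also
decide the halting problem. Below is a minimal sketch of the standard reduction;
would_decide_friendliness, simulate and do_harm are hypothetical names used only for the
argument, not real APIs.

    # Sketch, assumed for contradiction: a total decider for the non-trivial
    # behavioral property "this program never performs a harmful action".
    def would_decide_friendliness(program_source: str) -> bool:
        """Hypothetical oracle; assumed only for the sake of contradiction."""
        raise NotImplementedError("no such total decider can exist")

    def halts(machine_source: str, machine_input: str) -> bool:
        """Would decide halting, given the hypothetical friendliness decider."""
        # Build a program that is unfriendly exactly when `machine` halts on its input:
        # it first simulates the machine, and only if that simulation finishes does it
        # perform a harmful action (simulate and do_harm are placeholders).
        wrapper = (
            "def main():\n"
            f"    simulate({machine_source!r}, {machine_input!r})  # loops forever if no halt\n"
            "    do_harm()  # reached only if the simulation halts\n"
        )
        # machine halts on its input  <=>  wrapper is not friendly
        return not would_decide_friendliness(wrapper)

This only blocks fully general verification of arbitrary programs; it says nothing about
designs restricted up front to a class whose behavior can be proved, which is closer to
what the MIRI transcript below discusses.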


 Original Message 

Machine Intelligence Research Institute » Blog


 The Singularity Institute Blog http://intelligence.org



--

MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei 
http://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/?utm_source=rss&utm_medium=rss&utm_campaign=miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei 



Posted: 13 Jan 2014 11:22 PM PST

On October 27th, 2013, MIRI met with three additional members of the effective altruism 
community to discuss MIRI’s organizational strategy. The participants were:


 * Eliezer Yudkowsky http://yudkowsky.net/ (research fellow at MIRI)
 * Luke Muehlhauser http://lukeprog.com/ (executive director at MIRI)
 * Holden Karnofsky (co-CEO at GiveWell http://www.givewell.org/)
 * Jacob Steinhardt http://cs.stanford.edu/%7Ejsteinhardt/ (grad student in
   computer science at Stanford)
 * Dario Amodei http://med.stanford.edu/profiles/Dario_Amodei/ (post-doc in
   biophysics at Stanford)

We recorded and transcribed much of the conversation, and then edited and paraphrased the 
transcript for clarity, conciseness, and to protect the privacy of some content. The 
resulting edited transcript is available in full here 
http://intelligence.org/wp-content/uploads/2014/01/10-27-2013-conversation-about-MIRI-strategy.doc.


Our conversation located some disagreements between the participants; these disagreements 
are summarized below. This summary is not meant to present arguments with all their force, 
but rather to serve as a guide to the reader for locating more information about these 
disagreements. For each point, a page number has been provided for the approximate start 
of that topic of discussion in the transcript, along with a phrase that can be searched 
for in the text. In all cases, the participants would likely have quite a bit more to say 
on the topic if engaged in a discussion on that specific point.


Page 7, starting at “the difficulty is with context changes”:

 * Jacob: Statistical approaches can be very robust and need not rely on strong
   assumptions, and logical approaches are unlikely to scale up to human-level AI.
 * Eliezer: FAI will have to rely on lawful probabilistic reasoning combined with a
   transparent utility function, rather than our observing that previously executed
   behaviors seemed ‘nice’ and trying to apply statistical guarantees directly to that
   series of surface observations. (A toy sketch of this contrast follows this list.)
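
To make the contrast above concrete, here is a deliberately tiny sketch (agents, actions,
and probabilities are invented for illustration): the first agent reproduces whatever
logged behavior was labelled 'nice', with no objective to inspect; the second maximizes an
explicit utility function under a probabilistic outcome model, so its objective can be
read off and audited.

    import random

    # Toy world: each action leads to outcomes with known probabilities (invented numbers).
    OUTCOME_PROBS = {
        "smile_at_user": {"user_pleased": 0.9, "user_annoyed": 0.1},
        "fix_users_bug": {"user_pleased": 0.7, "user_annoyed": 0.3},
        "do_nothing":    {"user_pleased": 0.2, "user_annoyed": 0.8},
    }

    # Agent 1: statistical imitation of actions previously labelled 'nice'.
    NICE_LOG = ["smile_at_user", "smile_at_user", "fix_users_bug"]

    def imitation_agent() -> str:
        # Reproduces the surface distribution of the logged behavior;
        # there is no explicit objective that can be inspected.
        return random.choice(NICE_LOG)

    # Agent 2: an explicit, transparent utility over outcomes plus expected-utility choice.
    UTILITY = {"user_pleased": 1.0, "user_annoyed": -1.0}   # auditable by reading this line

    def expected_utility(action: str) -> float:
        return sum(p * UTILITY[o] for o, p in OUTCOME_PROBS[action].items())

    def transparent_agent() -> str:
        return max(OUTCOME_PROBS, key=expected_utility)

    if __name__ == "__main__":
        print("imitation agent picks:  ", imitation_agent())
        print("transparent agent picks:", transparent_agent())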

Page 10, starting at “a nice concrete example”

 * Eliezer: Consider an AI that optimizes for the number of smiling faces rather than
   for human happiness, and thus tiles the universe with smiling faces. This example
   illustrates a class of failure modes that are worrying. (A toy sketch follows this
   list.)
 * Jacob & Dario: This class of failure modes seems implausible to us.
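
As a toy illustration of the failure mode described above (all numbers invented): the
intended objective is human happiness, the optimized proxy is a count of detected smiling
faces, and an unconstrained argmax over the proxy selects the degenerate plan even though
it is worst under the intended objective.

    # Toy illustration of proxy-objective misspecification (all numbers invented).
    # Intended objective: human happiness. Optimized proxy: count of smiling faces.
    PLANS = {
        # plan name:  (smiling_faces_detected, actual_human_happiness)
        "improve_medicine":                (1_000_000,  0.9),
        "host_a_party":                    (200,        0.3),
        "tile_world_with_smiley_pictures": (10**15,    -1.0),  # maximizes the proxy, ruins the goal
    }

    def proxy_score(plan: str) -> float:
        return PLANS[plan][0]      # what the misspecified optimizer actually maximizes

    def intended_score(plan: str) -> float:
        return PLANS[plan][1]      # what we wanted, but never wrote down

    best_by_proxy = max(PLANS, key=proxy_score)
    best_by_intent = max(PLANS, key=intended_score)

    print("proxy-optimal plan:   ", best_by_proxy)     # tile_world_with_smiley_pictures
    print("intended-optimal plan:", best_by_intent)    # improve_medicine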

Page 14, starting at “I think that as people want”:

 * Jacob: There isn’t a big difference between learning utility functions from a
   parameterized family vs. arbitrary utility functions. (A toy sketch of the
   parameterized case follows this list.)
 * Eliezer: Unless ‘parameterized’ is Turing complete it would be extremely hard
   to write
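
For readers unfamiliar with the distinction in Jacob's first point, here is a sketch of
the 'parameterized family' case, with invented features and preference data: the utility
is restricted to the linear family U_theta(x) = theta . phi(x), and theta is fit to
pairwise preferences with a logistic choice model; the 'arbitrary utility function' case
drops that restriction.

    import math

    # Sketch: fit a utility from the parameterized (linear) family U_theta(x) = theta . phi(x)
    # to pairwise preferences, using a logistic choice model. Features, data and step size
    # are invented for illustration.

    def phi(outcome):                        # feature map: outcome -> feature vector
        return [outcome["health"], outcome["wealth"]]

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # Each datum: (preferred outcome, dispreferred outcome)
    PREFERENCES = [
        ({"health": 1.0, "wealth": 0.2}, {"health": 0.1, "wealth": 0.9}),
        ({"health": 0.8, "wealth": 0.5}, {"health": 0.3, "wealth": 0.6}),
    ]

    theta = [0.0, 0.0]
    for _ in range(500):                     # plain gradient ascent on the log-likelihood
        for better, worse in PREFERENCES:
            diff = [a - b for a, b in zip(phi(better), phi(worse))]
            p = sigmoid(dot(theta, diff))    # model's probability of the observed preference
            theta = [t + 0.1 * (1.0 - p) * d for t, d in zip(theta, diff)]

    def utility(outcome):                    # the learned member of the parameterized family
        return dot(theta, phi(outcome))

    print("learned theta:", theta)
    print("utility of a healthy outcome:", utility({"health": 0.9, "wealth": 0.1}))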