On 17 Jan 2014, at 16:44, Craig Weinberg wrote:
The whole point of a super intelligent AI is that it has nothing to
learn from us.
We certainly disagree a lot on this. I think that the more
intelligent you are, the more you can learn from others, any others,
even from bacteria and
On 18 January 2014 04:47, Craig Weinberg whatsons...@gmail.com wrote:
On Friday, January 17, 2014 6:14:13 AM UTC-5, Bruno Marchal wrote:
On 16 Jan 2014, at 20:12, meekerdb wrote:
On 1/16/2014 3:42 AM, Bruno Marchal wrote:
The singularity is in the past, and is the discovery of the
On 16 Jan 2014, at 03:46, Jason Resch wrote:
On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net
wrote:
A long, rambling but often interesting discussion among guys at
MIRI about how to
On Friday, January 17, 2014 5:14:13 AM UTC-6, Bruno Marchal wrote:
To be frank, I don't believe in super-intelligence. I do believe in
super-competence, relative to some domain, but as I have explained from
time to time, competence has a negative feedback on intelligence.
Intelligence is a
The Singularity Institute Blog http://intelligence.org
--
MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei
http://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/
On Friday, January 17, 2014 6:14:13 AM UTC-5, Bruno Marchal wrote:
On 16 Jan 2014, at 20:12, meekerdb wrote:
On 1/16/2014 3:42 AM, Bruno Marchal wrote:
The singularity is in the past, and is the discovery of the universal
machine. In a sense, we can make it only more stupid, like when we
treat others as they
don't want to be treated.
If not, send me 10^100 $ (or €) to my bank account, because that is
how I wish to be treated, right now.
:)
Bruno
Jason

the distinction but can't it also be turned around? E.g., I
don't want to be treated as though I'm not worth sending 10^100
dollars to right now.

Jason
On Thu, Jan 16, 2014 at 11:49 AM, meekerdb meeke...@verizon.net wrote:
On 1/15/2014 11:35 PM, Jason Resch wrote:
On Thu, Jan 16, 2014 at 12:46 AM, meekerdb meeke...@verizon.net wrote:
On 1/15/2014 6:46 PM, Jason Resch wrote:
On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net wrote:
A long, rambling but often interesting discussion among guys at MIRI
about how to
make an
On 17 January 2014 08:12, meekerdb meeke...@verizon.net wrote:
Like a super-intelligent AI will treat us as we want to be treated.
Why not? I hope you haven't been mistreating *your* pets!
I don't want to be neglected in your generous disbursal of funds.
No, me neither. In fact give me a
and machines.
Bruno
Brent
Original Message
The Singularity Institute Blog
MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei
Posted: 13 Jan 2014 11:22 PM PST
On October 27th, 2013, MIRI met with three additional members of the
effective altruism community
conclusion, for universalism says that others are self.
Jason
Inventing hyperintelligent AIs may be a way to discover if there are
universal moral truths (the hard way!)
I'm sorry, Jason, but I'm afraid I can't do that...
--
You received this message because you are subscribed to the Google Groups
Everything List group.
To unsubscribe from this group and
On 1/15/2014 6:46 PM, Jason Resch wrote:
On Tue, Jan 14, 2014 at 10:33 PM, meekerdb meeke...@verizon.net wrote:
A long, rambling but often interesting discussion among guys at MIRI
about how to make an AI that is superintelligent but not dangerous