Ian, Travis, etc.
On Mon, Jun 28, 2010 at 6:42 AM, Ian Parker ianpark...@gmail.com wrote:
On 27 June 2010 22:21, Travis Lenting travlent...@gmail.com wrote:
I think crime has to be made impossible even for enhanced humans first.
If our enhancement were Internet-based it could be turned
What is the equation, and the solution method, that provides a solution to every
physical problem?
or
Give me the equation of god, and its solution. (lol)
On Mon, Jun 28, 2010 at 6:02 PM, David Jones davidher...@gmail.com wrote:
Crime has its purpose just like many other unpleasant behaviors. When
--
*From:* rob levy r.p.l...@gmail.com
*To:* agi agi@v2.listbox.com
*Sent:* Sat, June 26, 2010 1:14:22 PM
*Subject:* Re: [agi] Questions for an AGI
why should AGIs give a damn about us?
I like to think that they will give a damn because humans have a unique
way of experiencing reality and there is no reason to not take advantage of
that precious opportunity to create astonishment or bliss.
Anyone who could suggest making crime impossible is SO far removed from
reality that it is hard to imagine that they function in society.
I cleared this obviously confusing statement up with Matt. What I meant to
say was impossible to get away with in public (in America I guess) because
of mass
Instead of hoping that AGI won't destroy the world, you study the problem and
come up with a safe design.
-- Matt Mahoney, matmaho...@yahoo.com
and maintain control over it. An
example would be the internet.
-- Matt Mahoney, matmaho...@yahoo.com
From: rob levy r.p.l...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 27, 2010 2:37:15 PM
Subject: Re: [agi] Questions for an AGI
I definitely agree, however we
Lenting travlent...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 27, 2010 5:21:24 PM
Subject: Re: [agi] Questions for an AGI
I don't like the idea of enhancing human intelligence before the singularity. I
think crime has to be made impossible even for enhanced humans first. I
think life
From: Travis Lenting travlent...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 27, 2010 6:53:14 PM
Subject: Re: [agi] Questions for an AGI
Everything has to happen before the singularity because there is no after.
I meant when machines take over technological evolution.
That is easy. Eliminate
Travis,
The AGI world seems to be cleanly divided into two groups:
1. People (like Ben) who feel as you do, and aren't at all interested or
willing to look at the really serious lapses in logic that underlie this
approach. Note that there is a similar belief in Buddhism, akin to the
prisoners
Fellow Cylons,
I sure hope SOMEONE is assembling a list from these responses, because this
is exactly the sort of stuff that I (or someone) would need to run a Reverse
Turing Test (RTT) competition.
Steve
Well, the existence of different contingencies is one reason I don't want
the first one modeled after a brain. I would like it to be a bit simpler, in
the sense that it only tries to answer questions from as scientific a
perspective as possible. To me it seems like there isn't someone stable
why should AGIs give a damn about us?
I like to think that they will give a damn because humans have a unique way
of experiencing reality and there is no reason to not take advantage of that
precious opportunity to create astonishment or bliss. If anything is
important in the universe, its
I hope I don't misrepresent him, but I agree with Ben (at
least my interpretation) when he said, "We can ask it questions like, 'how
can we make a better A(G)I that can serve us in more different ways without
becoming dangerous'...It can help guide us along the path to a
positive singularity." I'm
--
*From:* The Wizard [mailto:key.unive...@gmail.com]
*Sent:* Wednesday, June 23, 2010 11:05 PM
*To:* agi
*Subject:* [agi] Questions for an AGI
If you could ask an AGI anything, what would you ask it?
--
Carlos A Mejia
Taking life one singularity at a time.
www.Transalchemy.com
How do you work?
I would ask, "What should I ask if I could ask AGI anything?"
On Thu, Jun 24, 2010 at 11:34 AM, The Wizard key.unive...@gmail.com wrote:
If you could ask an AGI anything, what would you ask it?
Tell me what I need to know, by order of importance.
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
I would ask the agi, "What should I ask an agi?"
On Thu, Jun 24, 2010 at 4:56 AM, Florent Berthet florent.bert...@gmail.com wrote:
Tell me what I need to know, by order of importance.
Carlos A Mejia invited questions for an AGI!
If you could ask an AGI anything, what would you ask it?
Who killed Donald Young, a gay sex partner of U.S. President Barack Obama,
on December 24, 2007, in Obama's home town of Chicago, when it began to look
like Obama could actually be elected?
I get the impression from this question that you think an AGI is some sort
of all-knowing, idealistic invention. It is sort of like asking "if you
could ask the internet anything, what would you ask it?". Uhhh, lots of
stuff, like how do I get wine stains out of white carpet :). AGIs will not
be
Am I a human or am I an AGI?
Dana Ream wrote:
How do you work?
Just like you designed me to.
deepakjnath wrote:
What should I ask if I could ask AGI anything?
The Wizard wrote:
What should I ask an agi?
You don't need to ask me anything. I will do all of your thinking for you.
Florent
Subject: Re: [agi] Questions
On 11/6/07, Monika Krishan [EMAIL PROTECTED] wrote:
So when speaking of augmentation, a clarification would have to be made
as to whether the enhancement refers to human competence or human
performance... and hence the related issue of discovering
From: Monika Krishan [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 07, 2007 10:20 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Questions
On Nov 7, 2007 8:46 AM, Edward W. Porter [EMAIL PROTECTED] wrote:
It is much easier to think about how superhuman intelligences will outshine
us in the performance
On 11/5/07, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Monika Krishan [EMAIL PROTECTED] wrote:
Hi All,
I'm new to the list, so I'm not sure if these issues have already been
raised.
1. Do you think AGIs will eventually reach a point in their evolution when
self improvement
On Tue, Nov 06, 2007 at 01:55:43PM -0500, Monika Krishan wrote:
questions was the possibility that AGI might come full circle and attempt to
emulate human intelligence (HI) in the process of continually improving
itself.
Google "The simulation argument" by Nick Bostrom. There is a 1/3 chance
that
On 11/6/07, Monika Krishan [EMAIL PROTECTED] wrote:
There has been discussion re. the use of AGI to augment human intelligence
(HI). Can this augmentation be achieved without determining what HI is
capable of? For instance, one wouldn't consider a basic square root
calculator something that
Monika Krishan wrote:
2. Would it be a worthwhile exercise to explore what Human General
Intelligence, in its present state, is capable of?
Nah.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
-
--- Monika Krishan [EMAIL PROTECTED] wrote:
Hi All,
I'm new to the list, so I'm not sure if these issues have already been
raised.
1. Do you think AGIs will eventually reach a point in their evolution when
self improvement might come to mean attempting to solve previously solved