Mark, my point is that while in the past evolution did the choosing,
now it's *we* who decide,
But the *we* who is deciding was formed by evolution. Why do you do
*anything*? I've heard that there are four basic goals that drive every
decision: safety, feeling good, looking good, and being
On Jan 29, 2008 10:28 PM, Mark Waser [EMAIL PROTECTED] wrote:
Ethics only becomes snarled when one is unwilling to decide/declare what the
goal of life is.
Extrapolated Volition comes down to a homunculus depending upon the
definition of wiser or saner.
Evolution has decided what the goal
?
- Original Message -
From: Vladimir Nesov [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, January 30, 2008 2:14 PM
Subject: Re: [agi] OpenMind, MindPixel founders both commit suicide
On Jan 29, 2008 10:28 PM, Mark Waser [EMAIL PROTECTED] wrote:
Ethics only becomes snarled when one
Richard Hollerith said:
If I am found dead with a bag over my head attached to helium or
natural gas, please investigate the possibility that it was a
murder made to look like a suicide.
--
Richard Hollerith
http://dl4.jottit.com
Same here Richard. Nitrous Oxide would
When transhumanists talk about indefinite life extension, they often take
care to say it's optional, to forestall one common objection.
Yet I feel that most suicides we see should have been prevented -- that the
person should have been taken into custody and treated if possible, even
against their will.
On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
Consider the following subset of possible requirements: the program is
correct if and only if it halts.
It's a perfectly valid requirement, and I can write all sorts of
software that satisfies it. I can't take a piece of software
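(A minimal sketch of the first point, not from the original post: the function below is an illustrative example of software that satisfies the "correct iff it halts" requirement, since its loop counter is a non-negative integer that strictly decreases, so the loop runs at most n times and the program provably terminates.)
    # Illustrative sketch: a program that provably halts.
    def count_down(n: int) -> int:
        steps = 0
        while n > 0:
            n -= 1       # non-negative counter strictly decreases
            steps += 1
        return steps
    print(count_down(10))  # prints 10, then the program halts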
On 28/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
When your computer can write and debug
software faster and more accurately than you can, then you should worry.
A tool that could generate computer code from formal specifications
would be a wonderful thing, but not an autonomous
On Jan 28, 2008 11:22 AM, Bob Mottram [EMAIL PROTECTED] wrote:
On 27/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
Consider the following subset of possible requirements: the program is
correct if and only if it halts.
It's a perfectly valid requirement, and I can write all sorts
On Jan 28, 2008 2:08 PM, Bob Mottram [EMAIL PROTECTED] wrote:
On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
You can try checking out for example this paper (link from LtU
discussion), which presents a rather powerful language for describing
terminating programs:
On 28/01/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
I can take even an external, arbitrary program (say, a
Turing machine that I can't check in the general case), place it on a
dedicated tape in a UTM, and add a control for termination, so that if it
doesn't terminate in 10^6 steps, it will be forcibly halted.
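(The wrapper construction described above might be sketched as follows; run_bounded, the step/state interface, and the 10**6 budget are illustrative assumptions, not details from the post. Whatever the wrapped program does, the wrapper itself always halts within the budget, which is the point of the construction.)
    # Sketch of a step-bounded interpreter: run an arbitrary transition
    # function under a fixed budget, forcing termination if it is exceeded.
    def run_bounded(step, state, budget=10**6):
        """step(state) -> (new_state, done)."""
        for _ in range(budget):
            state, done = step(state)
            if done:
                return state, True   # the wrapped program halted on its own
        return state, False          # budget exhausted: forced termination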
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 28, 2008 4:53 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Consider the following subset of possible requirements: the program is
correct if and only if it halts.
It's a perfectly valid requirement, and I can write all sorts
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
It is undecidable whether a program satisfies the requirements of a formal
specification, which is the same as saying that it is undecidable whether
two programs are equivalent.
On 1/24/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Theoretically yes, but behind my comment was a deeper analysis (which I
have posted before, I think) according to which it will actually be very
difficult for a negative-outcome singularity to occur.
I was really trying to make the point
On Jan 28, 2008 7:41 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
It is easy to construct programs that you can prove halt or don't halt.
There is no procedure to verify that a program is equivalent to a formal
specification (another program). Suppose there was. Then I can take any
program P
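(The reduction started here can be completed with a standard construction; the names below are illustrative, not from the post. Given a program p and an input x, build q that runs p on x, discards the result, and returns 0; then q is equivalent to the constant-zero program exactly when p halts on x, so a general equivalence checker would decide the halting problem.)
    # Sketch of the reduction from the halting problem to program equivalence.
    def make_q(p, x):
        def q(_input):
            p(x)         # may or may not halt
            return 0     # reached only if p(x) halts
        return q
    def zero(_input):
        return 0         # the trivially halting "specification" program
    # A hypothetical decide_equivalent(make_q(p, x), zero) would answer
    # whether p halts on x, which is impossible in general.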
On Jan 28, 2008 6:33 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
It is undecidable whether a program satisfies the requirements of a formal
specification, which is the same as saying that it is undecidable whether two
programs are equivalent. The halting problem reduces to it.
Yes it is, if
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
Exactly. That's why it can't hack provably correct programs.
Which is useless because you can't write provably correct programs that aren't
extremely simple. *All* nontrivial properties of programs are undecidable.
On Jan 29, 2008 12:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
Exactly. That's why it can't hack provably correct programs.
Which is useless because you can't write provably correct programs that aren't
extremely simple. *All* nontrivial
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Software correctness is undecidable -- the halting problem reduces to it.
Computer security isn't going to be magically solved by AGI. The problem will
actually get worse, because complex systems are harder to get right.
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Software correctness is undecidable -- the halting problem reduces to it.
Computer security isn't going to be magically solved by AGI. The problem
will actually get worse, because
On Jan 28, 2008 1:15 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Software correctness is undecidable -- the halting problem reduces to it.
Computer security isn't going to be
On 27/01/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 27, 2008 5:32 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Software correctness is undecidable -- the halting problem reduces to it.
Computer security isn't going to be magically
Google
already knows more than any human,
This is only true, of course, for specific interpretations of the word
"know" ... and NOT for the standard ones...
and can retrieve the information faster,
but it can't launch a singularity.
Because, among other reasons, it is not an intelligence, but
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
No computer is going to start writing and debugging software faster and
more accurately than we can UNLESS we design it to do so, and during the
design process we will have ample opportunity to ensure that the machine
will
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Maybe you can
program it with a moral code, so it won't write malicious code. But the two
sides of the security problem require almost identical skills. Suppose you
ask the AGI to examine some operating system or
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
This whole scenario is filled with unjustified, unexamined assumptions.
For example, you suddenly say I foresee a problem when the collective
computing power of the network exceeds the collective computing power of
the humans
--- Richard Loosemore [EMAIL PROTECTED] wrote:
This whole scenario is filled with unjustified, unexamined assumptions.
For example, you suddenly say I foresee a problem when the collective
computing power of the network exceeds the collective computing power of
the humans that administer
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
You must
demonstrate some reason why the collective net of dumb computers will be
intelligent: it is not enough to simply assert that they will, or
might, become intelligent.
The intelligence comes from an infrastructure
--- Richard Loosemore [EMAIL PROTECTED] wrote:
You suggest that a collection of *sub-intelligent* (this is crucial)
computer programs can add up to full intelligence just in virtue of their
existence.
This is not the same as a collection of *already-intelligent* humans
appearing more
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
You suggest that a collection of *sub-intelligent* (this is crucial)
computer programs can add up to full intelligence just in virtue of their
existence.
This is not the same as a collection of *already-intelligent* humans
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which are
Nightmare Scenarios) is that the vast majority of them involve
completely untenable assumptions. One example is the idea that there
will be a situation in
On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which
are Nightmare Scenarios) is that the vast majority of them
involve completely untenable assumptions.
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which are
Nightmare Scenarios) is that the vast majority of them involve
completely untenable assumptions. One
Randall Randall wrote:
On Jan 24, 2008, at 10:25 AM, Richard Loosemore wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which
are Nightmare Scenarios) is that the vast majority of them involve
completely
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which are
Nightmare Scenarios) is that the vast majority of them involve
completely untenable
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Because recursive self-improvement is a competitive evolutionary process
even if all agents have a common ancestor.
As explained in a parallel post: this is a non sequitur.
OK, consider a network of agents, such as my
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
Because recursive self-improvement is a competitive evolutionary process
even if all agents have a common ancestor.
As explained in a parallel post: this is a non sequitur.
OK, consider a network of
Richard Loosemore wrote:
Matt Mahoney wrote:
...
Matt,
...
As for your larger point, I continue to vehemently disagree with your
assertion that a singularity will end the human race.
As far as I can see, the most likely outcome of a singularity would be
exactly the opposite. Rather than
Charles D Hixson wrote:
Richard Loosemore wrote:
Matt Mahoney wrote:
...
Matt,
...
As for your larger point, I continue to vehemently disagree with your
assertion that a singularity will end the human race.
As far as I can see, the most likely outcome of a singularity would be
exactly
--- Richard Loosemore [EMAIL PROTECTED] wrote:
The problem with the scenarios that people imagine (many of which are
Nightmare Scenarios) is that the vast majority of them involve
completely untenable assumptions. One example is the idea that there
will be a situation in the world in which
--- Samantha Atkins [EMAIL PROTECTED] wrote:
In http://www.mattmahoney.net/singularity.html I discuss how a singularity
will end the human race, but without judgment whether this is good or bad.
Any such judgment is based on emotion.
Really? I can think of arguments why this
For example, hunger is an emotion, but the
desire for money to buy food is not
Hunger is a sensation, not an emotion.
The sensation is unpleasant and you have a hard-coded goal to get rid of it.
Further, desires tread pretty close to the line of emotions if not actually
crossing over . . . .
Matt Mahoney wrote:
--- Samantha Atkins [EMAIL PROTECTED] wrote:
In http://www.mattmahoney.net/singularity.html I discuss how a singularity
will end the human race, but without judgment whether this is good or bad.
Any such judgment is based on emotion.
Really? I can think of arguments why
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt,
This usage of emotion is idiosyncratic and causes endless confusion.
You're right. I didn't mean for the discussion to devolve into a disagreement
over definitions.
As for your larger point, I continue to vehemently disagree with your
Joshua Fox wrote:
Turing also committed suicide.
And Chislenko. Each of these people had different circumstances, and
suicide strikes everywhere, but I wonder if there is a common thread.
Ramanujan, like many other great mathematicians and achievers, died
young. There are on the other hand
Regarding the suicide rates of geniuses or those with high intelligence, I
wouldn't be concerned:
Berman says that the intelligence study is less useful than those that
point to *risk factors like divorce or unemployment*. "It's not as if I'm
going to get more worried about my less
--- Mike Dougherty [EMAIL PROTECTED] wrote:
On Jan 19, 2008 8:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
Turing also committed suicide.
That's a
I believe that humans have the emotions that we do because of the
environment we evolved in. The more selfish/fearful/emotional you are, the
more likely you are to survive and reproduce. For humans, I think logic is a
sort of tool used to help us achieve happiness. Happiness is the
top-priority
Regarding AGI research as potentially psychologically disturbing, there are
so many other ways to be psychologically disturbed in a postmodern world
that it may not matter :)
It's already hard for a lot of people to have a healthy level of self-esteem
or self-identity, and nihilism is not in
On Jan 19, 2008, at 5:24 PM, Matt Mahoney wrote:
--- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
Turing also committed suicide.
In his case I understand that the British government saw fit to
sentence
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
I guess the moral here is "Stay away from attempts to hand-program a
database of common-sense assertions."
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute
Well, Lenat survives...
But he paid people to build his database (Cyc)
What's depressing is trying to get folks to build a commonsense KB for
free ... then you get confronted with the absolute stupidity of what they
enter, and the poverty and repetitiveness of their senses of humor... ;-p
ben
Some thoughts of mine on the article.
http://streebgreebling.blogspot.com/2008/01/singh-and-mckinstry.html
On 19/01/2008, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
I guess the moral here is Stay away from
Quality is an issue, but it's really all about volume. Provided that
you have enough volume, the signal stands out from the noise.
The solution is probably to make the knowledge capture into a game or
something that people will do as entertainment. Possibly the Second
Life approach will provide
This thread has nothing to do with artificial general intelligence -
please close this thread. Thanks
Sorry, but I have to say that I strongly disagree. There are
many aspects of AGI that are non-technical, and organizing
one's own life while doing AI is certainly one of them. That's
why I think
On Jan 19, 2008 5:53 PM, a [EMAIL PROTECTED] wrote:
This thread has nothing to do with artificial general intelligence -
please close this thread. Thanks
IMO, this thread is close enough to AGI to be list-worthy.
It is certainly true that knowledge-entry is not my preferred
approach to AGI ...
This thread has nothing to do with artificial general intelligence -
please close this thread. Thanks
Bob Mottram wrote:
Quality is an issue, but it's really all about volume. Provided that
you have enough volume, the signal stands out from the noise.
The solution is probably to make the
Breeds There a Man...? by Isaac Asimov
On Saturday 19 January 2008 04:42:30 pm, Eliezer S. Yudkowsky wrote:
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
I guess the moral here is Stay away from attempts to hand-program a
database of common-sense
- Original Message
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, January 19, 2008 3:49:55 PM
Subject: Re: [agi] OpenMind, MindPixel founders both commit suicide
Well, Lenat survives...
But he paid people to build his database (Cyc
--- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
Turing also committed suicide.
Building a copy of your mind raises deeply troubling issues. Logically, there
is no need for it to be conscious; it only needs to
On Jan 19, 2008 8:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
Turing also committed suicide.
That's a personal solution to the Halting problem I do not plan to
Turing also committed suicide.
And Chislenko. Each of these people had different circumstances, and
suicide strikes everywhere, but I wonder if there is a common thread.
Joshua