On Tuesday 02 October 2007 01:20:54 pm, Richard Loosemore wrote:
J Storrs Hall, PhD wrote:
a) the most likely sources of AI are corporate or military labs, and not
just US ones. No friendly AI here, but profit-making and mission-performing
AI.
Main assumption built into this statement:
--- Mark Waser [EMAIL PROTECTED] wrote:
Do you really think you can show an example of a true moral universal?
Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including yourself.
Thou shalt not kill every living and/or sentient being except yourself.
J Storrs Hall, PhD wrote:
On Tuesday 02 October 2007 01:20:54 pm, Richard Loosemore wrote:
Main assumption built into this statement: that it is possible to build
an AI capable of doing anything except dribble into its wheaties, using
the techniques currently being used.
I have explained ... not have to become vegetables and flirt with addiction
and possibly death to enjoy life intensely.
or thoughts through the many
monitoring programs which were developed during their initial learning
period before they became conscious.
-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 02, 2007 12:36 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content
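The monitoring idea in the fragment above can be sketched concretely: monitor
processes compare an AI's internal events against a baseline recorded during
its initial learning period. This is only a minimal sketch; the event names
and the flagging rule are illustrative assumptions, not anything specified in
the thread:

    # Illustrative sketch only: a monitor that compares an AI's internal
    # events against a baseline recorded during its initial learning
    # period. Event names and the flagging rule are assumed for
    # illustration.
    from collections import Counter
    from typing import Iterable

    class ThoughtMonitor:
        def __init__(self, learning_trace: Iterable[str]):
            # Frequencies of internal events observed while learning.
            self.baseline = Counter(learning_trace)

        def flag(self, event: str) -> bool:
            # Flag any event that was rare or absent during learning.
            return self.baseline[event] < 3

    monitor = ThoughtMonitor(["plan_move", "plan_move", "query_memory"] * 5)
    for event in ["plan_move", "rewrite_own_goals"]:
        if monitor.flag(event):
            print("flagged for review:", event)

The shape is the point of interest: the monitors predate the system's
maturity, so the system cannot easily have shaped what they look for.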
On 10/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
I remain skeptical. Your argument applies to an AGI not modifying its own
motivational system. It does not apply to an AGI making modified copies of
itself. In fact you say:
Also, during the development of the first true AI, we would
On 9/30/07, Richard Loosemore [EMAIL PROTECTED] wrote:
The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that the
likelihood of them becoming unfriendly would be similar to the
likelihood of the molecules of an
Richard Loosemore writes: You must remember that the complexity is not a
massive part of the system, just a small-but-indispensable part. I think
this sometimes causes confusion: did you think that I meant that the whole
thing would be so opaque that I could not understand *anything* about
Edward W. Porter writes: To Matt Mahoney.
Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and
implied RSI
(which I assume from context is a reference to Recursive Self Improvement) is
necessary for general intelligence.
So could you, or someone, please define exactly
Derek Zahn wrote:
Richard:
You agree that if we could get such a connection between the
probabilities, we are home and dry? That we need not care about
proving the friendliness if we can show that the probability is simply
too low to be plausible?
Yes, although the probability itself
Matt Mahoney wrote:
Richard,
Let me make sure I understand your proposal. You propose to program
friendliness into the motivational structure of the AGI as tens of thousands
of hand-coded soft constraints or rules. Presumably with so many rules, we
should be able to cover every conceivable situation now or in the
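For concreteness, one way to picture "tens of thousands of hand-coded soft
constraints" is as many weak scoring rules whose weighted agreement, rather
than any single rule, decides an action. The following is a toy sketch, not
the proposal itself: the weighted-average aggregation and all rule names and
weights are assumptions, since the thread does not specify a mechanism:

    # Toy sketch of "soft constraints": many weak rules each score an
    # action, and only their aggregate decides. The weighted-average
    # aggregation and all rule names/weights are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class SoftConstraint:
        name: str
        weight: float
        score: Callable[[str], float]  # approval of the action, in [0, 1]

    def approval(action: str, rules: List[SoftConstraint]) -> float:
        # No single rule can veto or approve; each only nudges the total.
        total = sum(r.weight * r.score(action) for r in rules)
        return total / sum(r.weight for r in rules)

    rules = [
        SoftConstraint("avoid_harm", 1.0,
                       lambda a: 0.1 if "harm" in a else 0.9),
        SoftConstraint("be_honest", 0.8,
                       lambda a: 0.2 if "deceive" in a else 0.9),
        # ...the proposal envisions tens of thousands of these...
    ]

    if approval("harm the user", rules) < 0.5:
        print("action suppressed by the aggregate of soft constraints")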
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Monday, October 01, 2007 12:01 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Religion-free technical content
In my last post I had in mind RSI at the level of source code or machine
code. Clearly we already
On Monday 01 October 2007 11:34:09 am, Richard Loosemore wrote:
Right, now consider the nature of the design I propose: the
motivational system never has an opportunity for a point failure:
everything that happens is multiply-constrained (and on a massive scale:
far more than is the case
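The "no point failure" claim can be put in toy quantitative terms: if a bad
outcome requires many independently-failing constraints to fail together, its
probability collapses as the number of constraints grows. A back-of-envelope
calculation, where the independence model and the numbers are assumptions and
not from the post:

    # Back-of-envelope version of the "multiply-constrained" claim.
    # Assumed model: n independent constraints, each missing a bad action
    # with probability p; the action slips through only if a majority miss.
    from math import comb

    def p_majority_fail(n: int, p: float) -> float:
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 11, 101):
        print(n, p_majority_fail(n, 0.1))
    # With p = 0.1: one constraint fails 10% of the time, a majority of 11
    # fails about 0.03% of the time, and a majority of 101 is down around
    # 1e-24.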
Replies to several posts, omnibus edition:
Edward W. Porter wrote:
Richard and Matt,
The below is an interesting exchange.
For Richard I have the question, how is what you are proposing that
different than what could
Answer in this case: (1) such elemental things as protection from
diseases could always be engineered so as not to involve painful
injections (we are assuming superintelligent AGI, after all),
:-) First of all, I'm not willing to concede an AGI superintelligent
enough to solve all the
On Sun, Sep 30, 2007 at 12:49:43PM -0700, Morris F. Johnson wrote:
Integration of sociopolitical factors into a global evolution predictive
model will require something the best
economists, scientists, and military strategists will have to get right or risk
global social anarchy.
FYI, there was
Mark Waser wrote:
And apart from the global differences between the two types of AGI, it
would be no good to try to guarantee friendliness using the kind of
conventional AI system that is Novamente, because inasmuch as general
goals would be encoded in such a system, they are explicitly coded
On 9/30/07, Richard Loosemore [EMAIL PROTECTED] wrote:
You know, I'm struggling here to find a good reason to disagree with
you, Russell. Strange position to be in, but it had to happen
eventually ;-).
And when Richard Loosemore and Russell Wallace agreed with each
other, it was also a
On 9/30/07, Don Detrich - PoolDraw [EMAIL PROTECTED] wrote:
So, let's look at this from a technical point of view. AGI has the potential
to become a very powerful technology, and misused or out of control it could
possibly be dangerous. However, at this point we have little idea of how
these
On 29/09/2007, Vladimir Nesov [EMAIL PROTECTED] wrote:
Although it indeed seems off-topic for this list, calling it a
religion is ungrounded and in this case insulting, unless you have
specific arguments.
Killing huge numbers of people is entirely possible for regular humans, so
it should be at least as possible for artificial ones. If
I suppose I'd like to see the list management weigh in on whether this type of
talk belongs on this particular list or whether it is more appropriate for the
singularity list.
Assuming it's okay for now, especially if such talk has a technical focus:
One thing that could improve safety is to
First, let me say I think this is an interesting and healthy discussion and
has enough technical ramifications to qualify for inclusion on this list.
Second, let me clarify that I am not proposing that the dangers of AGI be
swept under the rug or that we should be misleading the public.
I
On 9/30/07, Kaj Sotala [EMAIL PROTECTED] wrote:
Quoting Eliezer:
... Evolutionary programming (EP) is stochastic, and does not
precisely preserve the optimization target in the generated code; EP
gives you code that does what you ask, most of the time, under the
tested circumstances, but the
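Eliezer's point here is easy to demonstrate: an evolutionary search optimizes
whatever the fitness function measures on the tested cases, not the target
you had in mind. A toy run, with all details assumed for illustration, that
evolves a quadratic to match y = |x| on a few tested points and then drifts
badly on an untested one:

    # Toy demonstration: evolution preserves "score well on the tested
    # circumstances", not the intended target. All parameters are assumed.
    import random

    TESTS = [(x, abs(x)) for x in range(-3, 4)]  # intended target: y = |x|

    def fitness(c):
        a, b, k = c                              # candidate: a*x^2 + b*x + k
        return -sum((a*x*x + b*x + k - y) ** 2 for x, y in TESTS)

    def mutate(c):
        return tuple(v + random.gauss(0, 0.1) for v in c)

    pop = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(50)]
    for _ in range(300):
        pop = sorted(pop, key=fitness, reverse=True)[:10]  # keep the elite
        pop += [mutate(random.choice(pop)) for _ in range(40)]

    a, b, k = max(pop, key=fitness)
    print("fit on the tested range is decent, but at x=10 it predicts",
          a * 100 + b * 10 + k, "versus the intended |10| = 10")

The evolved code "does what you ask" on the tested points while diverging
sharply outside them, which is exactly the distinction being drawn above.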
On 9/30/07, Edward W. Porter wrote:
I think you, Don Detrich, and many others on this list believe that, for at
least a couple of years, it's still pretty safe to go full speed ahead on
AGI research and development. It appears from the below post that both you
and Don agree AGI can
When presenting reasons for developing AGI to the general public one should
refer to a list of problems that are generally insoluble with current
computational technology.
Global weather modelling and technology to predict very long term effects of
energy expended to modify climate so that a
Derek Zahn wrote:
[snip]
Surely certain AGI efforts are more dangerous than others, and the
opaqueness that Yudkowsky writes about is, at this point, not the
primary danger. However, in that context, I think that Novamente is, to
an extent, opaque in the sense that its actions may not be
Richard Loosemore writes: It is much less opaque. I have argued that this
is the ONLY way that I know of to ensure that AGI is done in a way that
allows safety/friendliness to be guaranteed. I will have more to say about
that tomorrow, when I hope to make an announcement.
Cool. I'm sure
--- Edward W. Porter [EMAIL PROTECTED] wrote:
To Derek Zahn
Your 9/30/2007 10:58 AM post is very interesting. It is the type of
discussion of this subject -- potential dangers of AGI and how and when do
we deal with them -- that is probably most valuable.
In response I have the
On 9/29/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
[snip]
I've been through the specific arguments at length on lists where
they're on topic, let me
I just want to point out that by itself such an assertion seems to serve
no positive or informative purpose. You could just mention the off-topic
part, unless you specifically want to discuss the religion part.
On 9/29/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
[snip]
I will be more than happy to refrain on this list from further mention
of my views on the matter - as I have done heretofore. I ask only
On 9/29/07, Russell Wallace [EMAIL PROTECTED] wrote:
I've been through the specific arguments at length on lists where
they're on topic, let me know if you want me to dig up references.
I'd be curious to see these, and I suspect many others would, too.
(Even though they're probably from lists I
Oops, I thought we were having fun, but it looks like I have offended
somebody, again. I plead guilty to being somewhat off the purely technical
discussion topic, but I thought Edward W. Porter and I were having a
pretty interesting discussion. However, it seems my primary transgression is