That sounds like a useful purpose. Yeah, I don't believe in fast and quick
methods either... but humans also tend to overestimate their own
capabilities, so it will probably take more time than predicted.
On 9/3/08, William Pearson [EMAIL PROTECTED] wrote:
2008/8/28 Valentina Poletti [EMAIL PROTECTED]
So it's about money then.. now THAT makes me feel less worried!! :)
That explains a lot though.
On 8/28/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Valentina Poletti [EMAIL PROTECTED] wrote:
Got ya, thanks for the clarification. That brings up another question.
Why do we want to make an AGI?
2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?
To understand ourselves as intelligent agents better? It might enable
us to have decent education policy and rehabilitation of criminals.
Even if
Define crazy, and I'll define control :)
---
This is crazy. What do you mean by breaking the laws of information
theory? Superintelligence is a completely lawful phenomenon that can
exist entirely within the laws of physics as we know them and be
bootstrapped by technology as we know it. It might
On Thu, Aug 28, 2008 at 8:29 PM, Terren Suydam [EMAIL PROTECTED] wrote:
Vladimir Nesov [EMAIL PROTECTED] wrote:
AGI doesn't do anything with the question, you do. You answer the
question by implementing Friendly AI. FAI is the answer to the question.
The question is: how could one
On Thu, Aug 28, 2008 at 9:08 PM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Wed, 8/27/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
One of the main motivations for the fast development of Friendly AI is
that it can be allowed to develop superintelligence to police the
human space from
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
You start with "what is right?" and end with Friendly AI, you don't
start with Friendly AI and close the circular argument. This doesn't
answer the question, but it defines Friendly AI and thus Friendly AI
(in terms of right).
In
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Won't work, Moore's law is ticking, and one day a morally arbitrary
self-improving optimization will go FOOM. We have to try.
I wish I had a response to that. I wish I could believe it was even possible.
To me, this is like saying
About Friendly AI..
Let me put it this way: I would think anyone in a position to offer funding
for this kind of work would require good answers to the above.
Terren
My view is a little different. I think these answers are going to come out
of a combination of theoretical advances with
I agree with that to the extent that theoretical advances could address the
philosophical objections I am making. But until those are dealt with,
experimentation is a waste of time and money.
If I was talking about how to build faster-than-lightspeed travel, you would
want to know how I plan
Hi,
Your philosophical objections aren't really objections to my perspective, so
far as I have understood so far...
What you said is
I've been saying that Friendliness is impossible to implement because 1)
it's a moving target (as in, changes through time), since 2) its definition
is dependent
comments below...
[BG]
Hi,
Your philosophical objections aren't really objections to my perspective, so
far as I have understood so far...
[TS]
Agreed. They're to the Eliezer perspective that Vlad is arguing for.
[BG]
I don't plan to hardwire beneficialness (by which I may not mean precisely
On Sat, Aug 30, 2008 at 9:18 PM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Given the psychological unity of humankind, giving the focus of right
to George W. Bush personally will be enormously better for everyone
than going in any
[BG]
I do however plan to hardwire **a powerful, super-human capability for
empathy** ... and a goal-maintenance system hardwired toward **stability of
top-level goals under self-modification**. But I agree this is different
from hardwiring specific goal content ... though it strongly
Terren Suydam [EMAIL PROTECTED] was quoted to say:
I've been saying that Friendliness is impossible to implement because 1)
it's a moving target (as in, changes through time), since 2) its definition
is dependent on context (situational context, cultural context, etc).
I think that Friendliness
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 10:54 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi] The Necessity of Embodiment))
Hi Mark,
Obviously you need to complicate your original statement "I believe that
ethics is *entirely* driven by what is best
--- On Fri, 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
Saying that ethics is entirely driven by evolution is NOT the same as
saying that evolution always results in ethics. Ethics is
computationally/cognitively expensive to successfully implement
(because a stupid implementation gets
A successful AGI should have n methods of data-mining its experience
for knowledge, I think. Whether it should have n ways of generating those
methods, or n sets of ways to generate ways of generating those methods,
etc., I don't know.
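A minimal sketch of the layering being described, in Python (every name here is invented for illustration, not taken from the thread): level 0 is a fixed set of data-mining methods, level 1 generates new methods, and level 2 generates generators.

# Hypothetical sketch: methods, method-generators, generator-generators.
# Level 0: concrete data-mining methods over an "experience" (a list of numbers).
def mean(xs): return sum(xs) / len(xs)
def spread(xs): return max(xs) - min(xs)

# Level 1: a generator of new level-0 methods (k-th central moments).
def make_moment_method(k):
    def moment(xs):
        m = mean(xs)
        return sum((x - m) ** k for x in xs) / len(xs)
    moment.__name__ = f"moment_{k}"
    return moment

# Level 2: a generator of level-1 generators (post-processed families).
def make_family(transform):
    def family(k):
        base = make_moment_method(k)
        def method(xs):
            return transform(base(xs))
        method.__name__ = f"{transform.__name__}_{base.__name__}"
        return method
    return family

if __name__ == "__main__":
    experience = [1.0, 2.0, 4.0, 8.0]
    methods = [mean, spread, make_moment_method(2), make_family(abs)(3)]
    for m in methods:
        print(m.__name__, m(experience))

Whether the tower should stop at some fixed level n, as the post asks, is exactly what the sketch leaves open: each extra level is just another function that returns functions.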
On 8/28/08, j.k. [EMAIL PROTECTED] wrote:
On 08/28/2008 04:47 PM, Matt
OK. How about this . . . . Ethics is that behavior that, when shown by you,
makes me believe that I should facilitate your survival. Obviously, it is
then to your (evolutionary) benefit to behave ethically.
Ethics can't be explained simply by examining interactions between
individuals. It's
I remember Richard Dawkins saying that group selection is a lie. Maybe
we should look past it now? It seems like a problem.
On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
OK. How about this . . . . Ethics is that behavior that, when shown by you,
makes me believe that I should facilitate your
I like that argument.
Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.
--Abram
On Thu, Aug 28, 2008 at 9:04 PM, j.k.
than it seems to make sense that . . . .
- Original Message -
From: Eric Burton [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 12:56 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment
Dawkins tends to see a truth, and then overstate it. What he says
isn't usually exactly wrong, so much as one-sided. This may be an
exception.
Some meanings of group selection don't appear to map onto reality.
Others map very weakly. Some have reasonable explanatory power. If you
don't
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, August 29, 2008 1:13:43 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi] The Necessity of Embodiment))
Group selection (as used as the term of art in evolutionary biology) does
not seem
On 08/29/2008 10:09 AM, Abram Demski wrote:
I like that argument.
Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.
2008/8/29 j.k. [EMAIL PROTECTED]:
On 08/28/2008 04:47 PM, Matt Mahoney wrote:
The premise is that if humans can create agents with above human
intelligence, then so can they. What I am questioning is whether agents at
any intelligence level can do this. I don't believe that agents at any
It seems that the debate over recursive self improvement depends on what you
mean by improvement. If you define improvement as intelligence as defined by
the Turing test, then RSI is not possible because the Turing test does not test
for superhuman intelligence. If you mean improvement as more
On 08/29/2008 01:29 PM, William Pearson wrote:
2008/8/29 j.k.[EMAIL PROTECTED]:
An AGI with an intelligence the equivalent of a 99.-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable intelligence. That same AGI at some later
2008/8/29 j.k. [EMAIL PROTECTED]:
On 08/29/2008 01:29 PM, William Pearson wrote:
2008/8/29 j.k.[EMAIL PROTECTED]:
An AGI with an intelligence the equivalent of a 99.-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable
On 08/29/2008 03:14 PM, William Pearson wrote:
2008/8/29 j.k.[EMAIL PROTECTED]:
... The human-level AGI running a million
times faster could simultaneously interact with tens of thousands of
scientists at their pace, so there is no reason to believe it need be
starved for interaction to the
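A back-of-the-envelope check on that claim (the numbers below are assumptions for illustration, not from the post):

# Rough arithmetic behind the "no interaction starvation" point.
speedup = 1_000_000        # assumed subjective speedup over one human
scientists = 20_000        # assumed concurrent human-pace conversations
load = scientists / speedup          # fraction of subjective time consumed
print(f"subjective load: {load:.1%}")                                    # 2.0%
print(f"free subjective years per real year: {speedup * (1 - load):,.0f}")  # 980,000

Even with tens of thousands of simultaneous human-pace conversations, a million-fold speedup leaves almost all of the machine's subjective time unspent.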
Interesting discussion. And we brought up wireheading. It's kind of the
ultimate example that shows that pursuing pleasure is different from
pursuing the good. It really is an area for the philosophers. What is
the good, anyway?
But what I wanted to comment on was my understanding of the
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?
On 8/27/08, Matt Mahoney [EMAIL PROTECTED] wrote:
An AGI will not design its goals. It is up to humans to define the goals
of an AGI, so that it will do what we want it to do.
All these points you made are good points, and I agree with you. However,
what I was trying to say - and I realize I did not express myself too well -
is that, from what I understand, I see a paradox in what Eliezer is trying to
do. Assuming that we agree on the definition of AGI - a being far more
Lol... it's not that impossible actually.
On Tue, Aug 26, 2008 at 6:32 PM, Mike Tintner [EMAIL PROTECTED]wrote:
Valentina: In other words I'm looking for a way to mathematically define
how the AGI will mathematically define its goals.
Holy Non-Existent Grail? Has any new branch of logic or
On Thu, Aug 28, 2008 at 12:34 PM, Valentina Poletti [EMAIL PROTECTED] wrote:
All these points you made are good points, and I agree with you. However,
what I was trying to say - and I realize I did not express myself too well -
is that, from what I understand, I see a paradox in what Eliezer is
are just sloppy reasoning . . . .
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:05 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi] The Necessity of Embodiment))
Mark Waser
Sent: Wednesday, August 27, 2008 11:05 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi] The Necessity of Embodiment))
Mark Waser [EMAIL PROTECTED] wrote:
What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?
If you
Subject: Re: Goedel machines (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Matt,
Thanks for the reply. There are 3 reasons that I can think of for
calling Goedel machines bounded:
1. As you assert, once a solution is found, it stops.
2. It will be on a finite computer, so it will eventually reach
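For readers following the link, a toy sketch of the control loop under debate (a stand-in for Schmidhuber's formalism, not his actual construction; the proof search here is simulated and all names are invented):

import random

def solver_v1(problem):
    return sum(problem)  # current policy: some computable strategy

def proof_search(current_utility):
    # Pretend proof search: occasionally "proves" a rewrite is better.
    if random.random() < 0.1:
        def solver_v2(problem):
            return sum(problem) * 2  # the provably better rewrite
        return solver_v2, current_utility * 2
    return None, current_utility

solver, utility = solver_v1, 1.0
for step in range(50):
    candidate, new_utility = proof_search(utility)
    if candidate is not None:
        solver, utility = candidate, new_utility  # self-rewrite...
        break                                     # ...and point 1: it stops
print(solver([1, 2, 3]), utility)

The two bounds fall out directly: the loop halts on the first proven rewrite, and on a finite machine the proof search itself can only ever enumerate finitely many candidates.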
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:40:24 AM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Matt,
Thanks
It doesn't matter what I do with the question. It only matters what an
AGI does with it.
AGI doesn't do anything with the question, you do. You answer the
question by implementing Friendly AI. FAI is the answer to the question.
The question is: how could one specify Friendliness in
Subject: Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Hi Mark,
I think the miscommunication is relatively simple...
On Wed, Aug 27, 2008 at 10:14 PM, Mark Waser [EMAIL PROTECTED] wrote:
Hi,
I think that I'm missing some of your points . . . .
Whatever good
--- On Wed, 8/27/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
One of the main motivations for the fast development of Friendly AI is
that it can be allowed to develop superintelligence to police the human
space from global catastrophes like Unfriendly AI, which includes as a
special case a
Sent: Wednesday, August 27, 2008 3:43 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches
to
AGI (was Re: [agi] The Necessity of Embodiment))
Mark,
The main motivation behind my setup was to avoid the wirehead
scenario. That is why I make the explicit goodness/pleasure
distinction
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 1:59 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Mark,
Actually I am sympathetic with this idea. I do think good can be
defined. And, I think it can be a simple
On Thu, Aug 28, 2008 at 12:29 PM, Terren Suydam [EMAIL PROTECTED] wrote:
I challenge anyone who believes that Friendliness is attainable in principle
to construct a scenario in which there is a clear right action that does not
depend on cultural or situational context.
It does depend on culture
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 1:59 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Mark,
Actually I am sympathetic with this idea. I do think good can be
defined. And, I think it can be a simple
Subject: Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Mark,
I still think your definitions sound difficult to implement, although not
nearly as hard as "make humans happy without modifying them". How would
you define consent? You'd need a definition of decision
--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:
Actually, I *do* define good and ethics not only in evolutionary terms but
as being driven by evolution. Unlike most people, I believe that ethics is
*entirely* driven by what is best evolutionarily while not believing at all
in
Valentina Poletti [EMAIL PROTECTED] wrote:
Got ya, thanks for the clarification. That brings up another question. Why do
we want to make an AGI?
I'm glad somebody is finally asking the right question, instead of skipping
over the specification to the design phase. It would avoid a lot of
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 9:18:05 AM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi] The Necessity of Embodiment))
No, the state of ultimate bliss that you, I, and all other
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 11:42:10 AM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to AGI
(was Re: [agi] The Necessity of Embodiment))
PS-- I have thought of a weak
Matt: If RSI is possible, then there is the additional threat of a fast
takeoff of the kind described by Good and Vinge
Can we have an example of just one or two subject areas or domains where a
takeoff has been considered (by anyone) as possibly occurring, and what
form such a takeoff might
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 7:00:07 PM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to AGI
(was Re: [agi] The Necessity of Embodiment))
Matt: If RSI is possible, then there is the additional
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 7:00:07 PM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Matt: If RSI is possible
On 08/28/2008 04:47 PM, Matt Mahoney wrote:
The premise is that if humans can create agents with above human intelligence,
then so can they. What I am questioning is whether agents at any intelligence
level can do this. I don't believe that agents at any level can recognize
higher
from others.
- Original Message -
From: Terren Suydam [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 5:03 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi] The Necessity of Embodiment))
--- On Thu, 8/28/08, Mark
to persist?
Terren
--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:
From: Mark Waser [EMAIL PROTECTED]
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi] The Necessity of Embodiment))
To: agi@v2.listbox.com
Date: Thursday, August 28, 2008, 9:21 PM
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 3:30:59 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)
Matt,
What is your opinion on Goedel machines?
http://www.idsia.ch/~juergen/goedelmachine.html
--Abram
On Sun, Aug 24, 2008 at 5:46 PM, Matt
in a different state.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Valentina Poletti [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, August 26, 2008 11:34:56 AM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)
Thanks
experiencing uniqueness normally improves fitness through learning, etc)?
- Original Message -
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 10:52 AM
Subject: AGI goals (was Re: Information theoretic approaches to AGI (was Re:
[agi] The Necessity
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)
Matt
Below is a sampling of my peer reviewed conference presentations on my
background ethical theory ... This should elevate me above the common
crackpot
# Talks
* Presentation of a paper
that.
- Original Message -
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 10:52 AM
Subject: AGI goals (was Re: Information theoretic approaches to AGI (was Re:
[agi] The Necessity of Embodiment))
An AGI will not design its goals. It is up to humans to define
Sent: Monday, August 25, 2008 3:30:59 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)
Matt,
What is your opinion on Goedel machines?
http://www.idsia.ch/~juergen/goedelmachine.html
--Abram
On Sun, Aug 24, 2008 at 5:46 PM, Matt Mahoney [EMAIL
Sent: Wednesday, August 27, 2008 10:52 AM
Subject: AGI goals (was Re: Information theoretic approaches to AGI (was Re:
[agi] The Necessity of Embodiment))
An AGI will not design its goals. It is up to humans to define the goals of
an AGI, so that it will do what we want it to do.
Unfortunately
to be a balance between the three).
- Original Message -
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:52 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Mark,
I
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)
Matt,
What is your opinion on Goedel machines?
http://www.idsia.ch/~juergen/goedelmachine.html
--Abram
On Sun, Aug 24, 2008 at 5:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Eric Burton [EMAIL PROTECTED] wrote:
These have profound impacts on AGI design
appear random.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 3:30:59 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)
Matt,
What
On Wed, Aug 27, 2008 at 8:32 PM, Mark Waser [EMAIL PROTECTED] wrote:
But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict
- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:52 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Mark,
I agree that we are mired 5 steps before
suboptimal situation - YMMV).
- Original Message -
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 2:25 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Mark,
OK
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 2:25 PM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Mark,
OK, I take up the challenge. Here is a different set of goal-axioms:
-Good
On Wed, Aug 27, 2008 at 8:43 PM, Abram Demski wrote:
snip
By the way, where does this term wireheading come from? I assume
from context that it simply means self-stimulation.
Science Fiction novels.
http://en.wikipedia.org/wiki/Wirehead
In Larry Niven's Known Space stories, a wirehead is
See also http://wireheading.com/
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 4:50:56 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi
On Wed, Aug 27, 2008 at 5:40 AM, Terren Suydam [EMAIL PROTECTED] wrote:
It doesn't matter what I do with the question. It only matters what an AGI
does with it.
AGI doesn't do anything with the question, you do. You answer the
question by implementing Friendly AI. FAI is the answer to the
Mark Waser [EMAIL PROTECTED] wrote:
All rational goal-seeking agents must have a mental state of maximum
utility where any thought or perception would be unpleasant because it
would result in a different state.
I'd love to see you attempt to prove the above statement.
What if there are
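One way to make the objection concrete (a toy model with made-up numbers): give each state a utility that decays with the time already spent in it. A greedy agent then never parks in any "state of maximum utility", because occupying a state is what demotes it.

# Toy model: utility decays with dwell time, so no fixed maximal state exists.
BASE = {"A": 10.0, "B": 8.0, "C": 6.0}  # made-up base utilities
DECAY = 2.0                             # utility lost per step of dwelling

def utility(state, dwell):
    return BASE[state] - DECAY * dwell

state, dwell, history = "A", 0, []
for t in range(8):
    stay_u = utility(state, dwell + 1)  # value of staying one more step
    alt, alt_u = max(((s, utility(s, 0)) for s in BASE if s != state),
                     key=lambda p: p[1])
    if alt_u >= stay_u:
        state, dwell = alt, 0           # switch to the best fresh state
    else:
        dwell += 1
    history.append(state)
print("".join(history))  # BABABABA: the agent alternates and never settles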
Matt Mahoney wrote:
An AGI will not design its goals. It is up to humans to define the
goals of an AGI, so that it will do what we want it to do.
Are you certain that this is the optimal approach? To me it seems more
promising to design the motives, and to allow the AGI to design its own
On Wed, Aug 27, 2008 at 7:44 AM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
But what is safe, and how to improve safety? This is a complex goal for a
complex environment, and naturally any solution to this goal is going to
be very
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:40:24 AM
Subject: Re: Goedel machines (was Re: Information theoretic approaches to AGI
(was Re: [agi] The Necessity of Embodiment))
Matt
What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?
If you are aware of the passage of time, then you are not staying in the
same state.
I have to laugh. So you agree that all your arguments don't apply to
anything that
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Mark,
The main motivation behind my setup was to avoid the wirehead
scenario. That is why I make the explicit goodness/pleasure
distinction. Whatever good is, it cannot
Mark Waser [EMAIL PROTECTED] wrote:
What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?
If you are aware of the passage of time, then you are not staying in the
same state.
I have to laugh. So you agree that all your
Sent: Wednesday, August 27, 2008 7:16:53 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi] The Necessity of Embodiment))
Matt Mahoney wrote:
An AGI will not design its goals. It is up to humans to define the
goals of an AGI, so that it will do what we want
http://www.angelfire.com/rnb/fairhaven/behaviorism.html
http://www.forebrain.org
- Original Message -
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 7:55 AM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Monday, August 25, 2008 7:30 AM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)
John, I have looked at your patent and various web pages. You list a lot of
nice sounding ethical terms (honor, love, hope
On Tue, Aug 26, 2008 at 7:53 AM, Terren Suydam [EMAIL PROTECTED] wrote:
Or take any number of ethical dilemmas, in which it's ok to steal food if it's
to feed your kids. Or killing ten people to save twenty. etc. How do you
define
Friendliness in these circumstances? Depends on the context.
Thanks very much for the info. I found those articles very interesting.
Actually though this is not quite what I had in mind with the term
information-theoretic approach. I wasn't very specific, my bad. What I am
looking for is a theory behind the actual R itself. These approaches
(correct me
Are you saying Friendliness is not context-dependent? I guess I'm struggling
to understand what a conceptual dynamics would mean that isn't dependent on
context. The AGI has to act, and at the end of the day, its actions are our
only true measure of its Friendliness. So I'm not sure what it
On Mon, Aug 25, 2008 at 11:09 PM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
What is the point of building general intelligence if all it does is
takes the future from us and wastes it on
Valentina: In other words I'm looking for a way to mathematically define
how the AGI will mathematically define its goals.
Holy Non-Existent Grail? Has any new branch of logic or mathematics ever been
logically or mathematically (axiomatically) derivable from any old one? e.g.
topology,
On Tue, Aug 26, 2008 at 8:05 PM, Terren Suydam [EMAIL PROTECTED] wrote:
Are you saying Friendliness is not context-dependent? I guess I'm
struggling to understand what a conceptual dynamics would mean
that isn't dependent on context. The AGI has to act, and at the end of the
day, its actions
If Friendliness is an algorithm, it ought to be a simple matter to express what
the goal of the algorithm is. How would you define Friendliness, Vlad?
--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
It is expressed in individual decisions, but it isn't these decisions
On Tue, Aug 26, 2008 at 8:54 PM, Terren Suydam [EMAIL PROTECTED] wrote:
If Friendliness is an algorithm, it ought to be a simple matter to express
what the goal of the algorithm is. How would you define Friendliness, Vlad?
Algorithm doesn't need to be simple. The actual Friendly AI that
formally. It can only be approximated, with error.
--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
From: Vladimir Nesov [EMAIL PROTECTED]
Subject: Re: [agi] The Necessity of Embodiment
To: agi@v2.listbox.com
Date: Tuesday, August 26, 2008, 1:21 PM
On Tue, Aug 26, 2008 at 8:54 PM
On Tue, Aug 26, 2008 at 9:54 PM, Terren Suydam [EMAIL PROTECTED] wrote:
I didn't say the algorithm needs to be simple, I said the goal of
the algorithm ought to be simple. What are you trying to compute?
Your answer is, "what is the right thing to do?"
The obvious next question is, what does
Mike,
The answer here is a yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover,
mathematicians often find this to be actually useful, not merely a
curiosity.
--Abram Demski
On
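One concrete instance of Abram's point (my illustration, not from the thread): point-set topology, a branch that postdates set theory, is defined entirely inside it. A topology on a set X is just a family of subsets closed under the right operations:

\[
\tau \subseteq \mathcal{P}(X), \qquad
\emptyset, X \in \tau, \qquad
\forall S \subseteq \tau:\ \bigcup S \in \tau, \qquad
\forall U, V \in \tau:\ U \cap V \in \tau .
\]

Nothing in the definition appeals to anything beyond sets and membership, which is the sense in which the newer branch can be read as a special branch of set theory.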
Abram,
Thanks for the reply. This is presumably after the fact - can set theory
predict new branches? Which branch of maths was set theory derivable from? I
suspect that's rather like trying to derive any numeral system from a
previous one. Or like trying to derive any programming language from
Vlad, Terren and all,
by reading your interesting discussion, this saying popped into my mind...
admittedly it has little to do with AGI but you might get the point anyhow:
An old lady used to walk down a street every day, and on a tree by that
street a bird sang beautifully; the sound made her
Mike,
That may be the case, but I do not think it is relevant to Valentina's
point. How can we mathematically define how an AGI might
mathematically define its own goals? Well, that question assumes 3
things:
-An AGI defines its own goals
-In doing so, it phrases them in mathematical language
It doesn't matter what I do with the question. It only matters what an AGI does
with it.
I'm challenging you to demonstrate how Friendliness could possibly be specified
in the formal manner that is required to *guarantee* that an AI whose goals
derive from that specification would actually
--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
But what is safe, and how to improve safety? This is a complex goal for a
complex environment, and naturally any solution to this goal is going to
be very intelligent. Arbitrary intelligence is not safe (fatal, really),
but what is