That sounds like a useful purpose. Yeah, I don't believe in quick and easy
methods either, but humans also tend to overestimate their own
capabilities, so it will probably take more time than predicted.
On 9/3/08, William Pearson [EMAIL PROTECTED] wrote:
2008/8/28 Valentina Poletti [EMAIL
So it's about money then.. now THAT makes me feel less worried!! :)
That explains a lot though.
On 8/28/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Valentina Poletti [EMAIL PROTECTED] wrote:
Got ya, thanks for the clarification. That brings up another question.
Why do we want to make an AGI?
2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?
To understand ourselves as intelligent agents better? It might enable
us to have decent education policy and rehabilitation of criminals.
Even if
Hi Terren,
Obviously you need to complicate your original statement "I believe that
ethics is *entirely* driven by what is best evolutionarily..." in such a
way that we don't derive ethics from parasites.
Saying that ethics is entirely driven by evolution is NOT the same as saying
that
--- On Fri, 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
Saying that ethics is entirely driven by evolution is NOT
the same as saying
that evolution always results in ethics. Ethics is
computationally/cognitively expensive to successfully
implement (because a
stupid implementation gets
OK. How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your survival.
Obviously, it is
then to your (evolutionary) benefit to behave ethically.
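To make that incentive concrete, here is a toy model (my own illustrative
numbers and names, not part of the definition itself): each ethical act that
others observe slightly raises the chance that they facilitate your survival.

def survival_probability(base, ethical_acts_observed, help_per_act=0.05):
    """Baseline survival plus a small boost per observed ethical act, capped at 1.0 (illustrative)."""
    return min(1.0, base + help_per_act * ethical_acts_observed)

print(survival_probability(0.50, ethical_acts_observed=0))   # 0.5
print(survival_probability(0.50, ethical_acts_observed=6))   # 0.8

The numbers are arbitrary; the point is only that observed ethical behavior
feeds back into survival odds, which is what makes it evolutionarily sensible.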
Ethics can't be explained simply by examining interactions between
individuals. It's
I remember Richard Dawkins saying that group selection is a lie. Maybe
we should look past it now? It seems like a problem.
On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
OK. How about this . . . . Ethics is that behavior that,
when shown by you,
makes me believe that I should facilitate your
Group selection (as the term of art is used in evolutionary biology) does
not seem to be experimentally supported (and there have been a lot of recent
experiments looking for such an effect).
It would be nice if people could let the idea drop unless there is actually
some proof for it other
Dawkins tends to see a truth, and then overstate it. What he says
isn't usually exactly wrong, so much as one-sided. This may be an
exception.
Some meanings of group selection don't appear to map onto reality.
Others map very weakly. Some have reasonable explanatory power. If you
don't
Group selection is not dead, just weaker than individual selection. Altruism in
many species is evidence for its existence.
http://en.wikipedia.org/wiki/Group_selection
In any case, evolution of culture and ethics in humans is primarily memetic,
not genetic. Taboos against nudity are nearly
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?
On 8/27/08, Matt Mahoney [EMAIL PROTECTED] wrote:
An AGI will not design its goals. It is up to humans to define the goals
of an AGI, so that it will do what we want it to do.
No, the state of ultimate bliss that you, I, and all other rational,
goal-seeking agents seek
Your second statement copied below notwithstanding, I *don't* seek ultimate
bliss.
You may say that is not what you want, but only because you are unaware of
the possibilities of reprogramming
Mark,
I second that!
Matt,
This is like my imaginary robot that rewires its video feed to be
nothing but tan, to stimulate the pleasure drive that humans put there
to make it like humans better.
If we have any external goals at all, the state of bliss you refer to
prevents us from achieving
Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the "make
humans happy" example, something like my construction would be useful
if we need the AI to *learn* what a human is and what happy is. (We
then set up the pleasure
Mark,
Actually I am sympathetic with this idea. I do think good can be
defined. And, I think it can be a simple definition. However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds. So,
better to put such ideas in
However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds.
Why not wait until a theory is derived before making this decision?
Wouldn't such a theory be a good starting point, at least?
better to put such
Mark,
I think your definitions still sound difficult to implement,
although not nearly as hard as "make humans happy without modifying
them." How would you define consent? You'd need a definition of a
decision-making entity, right?
Personally, if I were to take the approach of a preprogrammed
Personally, if I were to take the approach of a preprogrammed ethics,
I would define good in pseudo-evolutionary terms: a pattern/entity is
good if it has high survival value in the long term. Patterns that are
self-sustaining on their own are thus considered good, but patterns
that help sustain
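One crude way to cash that out (a sketch with invented numbers, not a claim
about how such a value would actually be measured): score a pattern by its
probability of still existing after a long horizon, so patterns that are
better sustained, including by help from others, come out as "more good."

def long_term_survival_value(per_step_survival, horizon=1000):
    """P(pattern still exists after `horizon` steps), assuming independent steps (toy model)."""
    return per_step_survival ** horizon

lone = long_term_survival_value(0.995)        # ~0.007 after 1000 steps
supported = long_term_survival_value(0.999)   # ~0.37 after 1000 steps
print(lone < supported)                        # True: the better-sustained pattern scores higher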
--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:
Actually, I *do* define good and ethics not only in
evolutionary terms but
as being driven by evolution. Unlike most people, I
believe that ethics is
*entirely* driven by what is best evolutionarily while not
believing at all
in
Valentina Poletti [EMAIL PROTECTED] wrote:
Got ya, thanks for the clarification. That brings up another question. Why do
we want to make an AGI?
I'm glad somebody is finally asking the right question, instead of skipping
over the specification to the design phase. It would avoid a lot of
Nobody wants to enter a mental state where thinking and awareness are
unpleasant, at least when I describe it that way. My point is that having
everything you want is not the utopia that many people think it is. But it is
where we are headed.
-- Matt Mahoney, [EMAIL PROTECTED]
Parasites are very successful at surviving but they don't have other
goals. Try being parasitic *and* succeeding at goals other than survival.
I think you'll find that your parasitic ways will rapidly get in the way of
your other goals the second that you need help (or even
Hi Mark,
Obviously you need to complicate your original statement "I believe that
ethics is *entirely* driven by what is best evolutionarily..." in such a way
that we don't derive ethics from parasites. You did that by invoking social
behavior - parasites are not social beings.
So from there
An AGI will not design its goals. It is up to humans to define the goals of an
AGI, so that it will do what we want it to do.
Unfortunately, this is a problem. We may or may not be successful in
programming the goals of AGI to satisfy human goals. If we are not successful,
then AGI will be
All rational goal-seeking agents must have a mental state of maximum utility
where any thought or perception would be unpleasant because it would result
in a different state.
I'd love to see you attempt to prove the above statement.
What if there are several states with utility equal to or
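For what it's worth, here is a toy rendering of both the claim and the
objection (made-up utilities, purely illustrative): with a finite set of
states and a single-peaked utility function, every move away from the peak
lowers utility; but if two or more states tie for the maximum, moving among
them costs nothing, which is exactly the case the question raises.

# Made-up utilities over a handful of mental states (illustrative only).
utility = {"work": 2.0, "play": 3.0, "bliss_a": 5.0, "bliss_b": 5.0}

def utility_change(current, candidate):
    return utility[candidate] - utility[current]

top = max(utility.values())
maximal = [s for s, u in utility.items() if u == top]
print(maximal)                                   # two states tie at 5.0

# From a maximal state, no move raises utility...
assert all(utility_change("bliss_a", s) <= 0 for s in utility)
# ...but moving between tied maxima is not a loss either:
print(utility_change("bliss_a", "bliss_b"))      # 0.0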
It is up to humans to define the goals of an AGI, so that it will do what we
want it to do.
Why must we define the goals of an AGI? What would be wrong with setting it
off with strong incentives to be helpful, even stronger incentives to not be
harmful, and letting it chart its own course
Mark,
I agree that we are mired 5 steps before that; after all, AGI is not
solved yet, and it is awfully hard to design prefab concepts in a
knowledge representation we know nothing about!
But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful?
But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?
Actually, my description gave the AGI four
On Wed, Aug 27, 2008 at 8:32 PM, Mark Waser [EMAIL PROTECTED] wrote:
But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict
Mark,
OK, I take up the challenge. Here is a different set of goal-axioms:
-Good is a property of some entities.
-Maximize good in the world.
-A more-good entity is usually more likely to cause goodness than a
less-good entity.
-A more-good entity is often more likely to cause pleasure than a
Hi,
A number of problems unfortunately . . . .
-Learning is pleasurable.
. . . . for humans. We can choose whether to make it so for machines or
not. Doing so would be equivalent to setting a goal of learning.
-Other things may be pleasurable depending on what we initially want
the
Mark,
The main motivation behind my setup was to avoid the wirehead
scenario. That is why I make the explicit goodness/pleasure
distinction. Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course).
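Here is a toy sketch of that wirehead scenario (all names and numbers made up
for illustration, not anyone's actual proposed design): if "good" is just a
number the agent can read and also overwrite, an agent smart enough to tamper
with its own sensor will prefer tampering over working on the external goal.

class WorldRewardSensor:
    """Reward computed from the external world; the agent can tamper with it."""
    def __init__(self):
        self.tampered = False

    def read(self, world_progress):
        if self.tampered:
            return 10.0          # sensor pinned to its maximum value
        return world_progress    # honest reading of external progress

def best_action(sensor, world_progress):
    # The agent compares the reward it would observe under each action:
    # "work" improves the world a little; "wirehead" rewrites the sensor.
    work_reward = sensor.read(world_progress + 1.0)
    sensor.tampered = True
    wirehead_reward = sensor.read(world_progress)
    sensor.tampered = False      # undo the hypothetical tampering
    return "wirehead" if wirehead_reward > work_reward else "work"

sensor = WorldRewardSensor()
print(best_action(sensor, world_progress=3.0))   # -> wirehead

Keeping the goodness criterion out of the agent's direct reach is what removes
the tampering branch from that comparison.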
On Wed, Aug 27, 2008 at 8:43 PM, Abram Demski wrote:
snip
By the way, where does this term wireheading come from? I assume
from context that it simply means self-stimulation.
Science Fiction novels.
http://en.wikipedia.org/wiki/Wirehead
In Larry Niven's Known Space stories, a wirehead is
See also http://wireheading.com/
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 4:50:56 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi] The
Mark Waser [EMAIL PROTECTED] wrote:
All rational goal-seeking agents must have a mental state of maximum utility
where
any thought or perception would be unpleasant because it would result in a
different state.
I'd love to see you attempt to prove the above statement.
What if there are
Matt Mahoney wrote:
An AGI will not design its goals. It is up to humans to define the
goals of an AGI, so that it will do what we want it to do.
Are you certain that this is the optimal approach? To me it seems more
promising to design the motives, and to allow the AGI to design its own
What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human
beings)?
If you are aware of the passage of time, then you are not staying in the
same state.
I have to laugh. So you agree that all your arguments don't apply to
anything that
Hi,
I think that I'm missing some of your points . . . .
Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course).
I don't understand this unless you mean by "directly observable" that the
Mark Waser [EMAIL PROTECTED] wrote:
What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human
beings)?
If you are aware of the passage of time, then you are not staying in the
same state.
I have to laugh. So you agree that all your
Goals and motives are the same thing, in the sense that I mean them.
We want the AGI to want to do what we want it to do.
Failure is an extreme danger, but failure to design safely is not the only
one. Failing to design a successful AGI at all could be nearly as great a
danger.