That sounds like a useful purpose. Yeah, I don't believe in fast and quick
methods either... but also humans tend to overestimate their own
capabilities, so it will probably take more time than predicted.
On 9/3/08, William Pearson [EMAIL PROTECTED] wrote:
2008/8/28 Valentina Poletti [EMAIL
So it's about money then.. now THAT makes me feel less worried!! :)
That explains a lot though.
On 8/28/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Valentina Poletti [EMAIL PROTECTED] wrote:
Got ya, thanks for the clarification. That brings up another question.
Why do we want to make an AGI?
2008/8/28 Valentina Poletti [EMAIL PROTECTED]:
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?
To understand ourselves as intelligent agents better? It might enable
us to have decent education policy and rehabilitation of criminals.
Even if
Hi Terren,
Obviously you need to complicate your original statement "I believe that
ethics is *entirely* driven by what is best evolutionarily..." in such a
way that we don't derive ethics from parasites.
Saying that ethics is entirely driven by evolution is NOT the same as saying
that
--- On Fri, 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
Saying that ethics is entirely driven by evolution is NOT the same as saying
that evolution always results in ethics. Ethics is computationally/cognitively
expensive to successfully implement (because a stupid implementation gets
A successful AGI should have n methods of data-mining its experience
for knowledge, I think. If it should have n ways of generating those
methods or n sets of ways to generate ways of generating those methods
etc I don't know.
On 8/28/08, j.k. [EMAIL PROTECTED] wrote:
On 08/28/2008 04:47 PM, Matt
OK. How about this . . . . Ethics is that behavior that, when shown by you,
makes me believe that I should facilitate your survival. Obviously, it is
then to your (evolutionary) benefit to behave ethically.
Ethics can't be explained simply by examining interactions between
individuals. It's
I remember Richard Dawkins saying that group selection is a lie. Maybe
we should look past it now? It seems like a problem.
On 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
OK. How about this . . . . Ethics is that behavior that, when shown by you,
makes me believe that I should facilitate your
I like that argument.
Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.
--Abram
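(A toy sketch of my own to illustrate the "fancy calculators" point; none of these names come from the thread. The agent's core reasoning is left untouched, but it still gains capability by routing a narrow subproblem to an exact, specialized tool.)

from fractions import Fraction

def native_estimate(x):
    # The agent's built-in, approximate arithmetic.
    return round(x, 3)

def exact_tool(numerator, denominator):
    # A specialized "calculator" the agent equips itself with;
    # the agent's own code is never modified.
    return Fraction(numerator, denominator)

print(native_estimate(1 / 3))  # 0.333 -- native, approximate
print(exact_tool(1, 3))        # 1/3   -- delegated, exact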
On Thu, Aug 28, 2008 at 9:04 PM, j.k.
Group selection (as the term of art is used in evolutionary biology) does
not seem to be experimentally supported (and there have been a lot of recent
experiments looking for such an effect).
It would be nice if people could let the idea drop unless there is actually
some proof for it other
Dawkins tends to see a truth, and then overstate it. What he says
isn't usually exactly wrong, so much as one-sided. This may be an
exception.
Some meanings of group selection don't appear to map onto reality.
Others map very weakly. Some have reasonable explanatory power. If you
don't
Group selection is not dead, just weaker than individual selection. Altruism in
many species is evidence for its existence.
http://en.wikipedia.org/wiki/Group_selection
In any case, evolution of culture and ethics in humans is primarily memetic,
not genetic. Taboos against nudity are nearly
On 08/29/2008 10:09 AM, Abram Demski wrote:
I like that argument.
Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.
2008/8/29 j.k. [EMAIL PROTECTED]:
On 08/28/2008 04:47 PM, Matt Mahoney wrote:
The premise is that if humans can create agents with above human
intelligence, then so can they. What I am questioning is whether agents at
any intelligence level can do this. I don't believe that agents at any
It seems that the debate over recursive self improvement depends on what you
mean by improvement. If you define improvement as intelligence as defined by
the Turing test, then RSI is not possible because the Turing test does not test
for superhuman intelligence. If you mean improvement as more
On 08/29/2008 01:29 PM, William Pearson wrote:
2008/8/29 j.k.[EMAIL PROTECTED]:
An AGI with an intelligence the equivalent of a 99.-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable intelligence. That same AGI at some later
2008/8/29 j.k. [EMAIL PROTECTED]:
On 08/29/2008 01:29 PM, William Pearson wrote:
2008/8/29 j.k.[EMAIL PROTECTED]:
An AGI with an intelligence the equivalent of a 99.-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable
On 08/29/2008 03:14 PM, William Pearson wrote:
2008/8/29 j.k.[EMAIL PROTECTED]:
... The human-level AGI running a million
times faster could simultaneously interact with tens of thousands of
scientists at their pace, so there is no reason to believe it need be
starved for interaction to the
Got ya, thanks for the clarification. That brings up another question. Why
do we want to make an AGI?
On 8/27/08, Matt Mahoney [EMAIL PROTECTED] wrote:
An AGI will not design its goals. It is up to humans to define the goals
of an AGI, so that it will do what we want it to do.
Lol..it's not that impossible actually.
On Tue, Aug 26, 2008 at 6:32 PM, Mike Tintner [EMAIL PROTECTED]wrote:
Valentina: In other words I'm looking for a way to mathematically define
how the AGI will mathematically define its goals.
Holy Non-Existent Grail? Has any new branch of logic or
No, the state of ultimate bliss that you, I, and all other rational, goal
seeking agents seek
Your second statement copied below notwithstanding, I *don't* seek ultimate
bliss.
You may say that is not what you want, but only because you are unaware of
the possibilities of reprogramming
Mark,
I second that!
Matt,
This is like my imaginary robot that rewires its video feed to be
nothing but tan, to stimulate the pleasure drive that humans put there
to make it like humans better.
If we have any external goals at all, the state of bliss you refer to
prevents us from achieving
Matt,
Ok, you have me, I admit defeat.
I could only continue my argument if I could pin down what sorts of
facts need to be learned with high probability for RSI, and show
somehow that this set does not include unlearnable facts. Learnable
facts form a larger set than provable facts, since for
PS-- I have thought of a weak argument:
If a fact is not probabilistically learnable, then it is hard to see
how it has much significance for an AI design. A non-learnable fact
won't reliably change the performance of the AI, since if it did, it
would be learnable. Furthermore, even if there were
Also, I should mention that the whole construction becomes irrelevant
if we can logically describe the goal ahead of time. With the make
humans happy example, something like my construction would be useful
if we need the AI to *learn* what a human is and what happy is. (We
then set up the pleasure
Mark,
Actually I am sympathetic with this idea. I do think good can be
defined. And, I think it can be a simple definition. However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds. So,
better to put such ideas in
However, it
doesn't seem right to me to preprogram an AGI with a set ethical
theory; the theory could be wrong, no matter how good it sounds.
Why not wait until a theory is derived before making this decision?
Wouldn't such a theory be a good starting point, at least?
better to put such
Mark,
I still think your definitions sound difficult to implement,
although not nearly as hard as "make humans happy without modifying
them". How would you define consent? You'd need a definition of
decision-making entity, right?
Personally, if I were to take the approach of a preprogrammed
Personally, if I were to take the approach of a preprogrammed ethics,
I would define good in pseudo-evolutionary terms: a pattern/entity is
good if it has high survival value in the long term. Patterns that are
self-sustaining on their own are thus considered good, but patterns
that help sustain
--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:
Actually, I *do* define good and ethics not only in evolutionary terms but
as being driven by evolution. Unlike most people, I believe that ethics is
*entirely* driven by what is best evolutionarily while not believing at all
in
Valentina Poletti [EMAIL PROTECTED] wrote:
Got ya, thanks for the clarification. That brings up another question. Why do
we want to make an AGI?
I'm glad somebody is finally asking the right question, instead of skipping
over the specification to the design phase. It would avoid a lot of
Nobody wants to enter a mental state where thinking and awareness are
unpleasant, at least when I describe it that way. My point is that having
everything you want is not the utopia that many people think it is. But it is
where we are headed.
-- Matt Mahoney, [EMAIL PROTECTED]
I'm not trying to win any arguments, but I am trying to solve the problem of
whether RSI is possible at all. It is an important question because it
profoundly affects the path that a singularity would take, and what precautions
we need to design into AGI. Without RSI, then a singularity has to
Matt: If RSI is possible, then there is the additional threat of a fast
takeoff of the kind described by Good and Vinge
Can we have an example of just one or two subject areas or domains where a
takeoff has been considered (by anyone) as possibly occurring, and what
form such a takeoff might
Here is Vernor Vinge's original essay on the singularity.
http://mindstalk.net/vinge/vinge-sing.html
The premise is that if humans can create agents with above human intelligence,
then so can they. What I am questioning is whether agents at any intelligence
level can do this. I don't believe
Thanks. But like I said, airy generalities.
That machines can become faster and faster at computations and accumulating
knowledge is certain. But that's narrow AI.
For general intelligence, you have to be able first to integrate as well as
accumulate knowledge. We have learned vast amounts
On 08/28/2008 04:47 PM, Matt Mahoney wrote:
The premise is that if humans can create agents with above human intelligence,
then so can they. What I am questioning is whether agents at any intelligence
level can do this. I don't believe that agents at any level can recognize
higher
Parasites are very successful at surviving but they don't have other
goals. Try being parasitic *and* succeeding at goals other than survival.
I think you'll find that your parasitic ways will rapidly get in the way of
your other goals the second that you need help (or even
Hi Mark,
Obviously you need to complicate your original statement "I believe that
ethics is *entirely* driven by what is best evolutionarily..." in such a way
that we don't derive ethics from parasites. You did that by invoking social
behavior - parasites are not social beings.
So from there
Abram Demski [EMAIL PROTECTED] wrote:
Matt,
What is your opinion on Goedel machines?
http://www.idsia.ch/~juergen/goedelmachine.html
Thanks for the link. If I understand correctly, this is a form of bounded RSI,
so it could not lead to a singularity. A Goedel machine is functionally
An AGI will not design its goals. It is up to humans to define the goals of an
AGI, so that it will do what we want it to do.
Unfortunately, this is a problem. We may or may not be successful in
programming the goals of AGI to satisfy human goals. If we are not successful,
then AGI will be
All rational goal-seeking agents must have a mental state of maximum utility
where any thought or perception would be unpleasant because it would result
in a different state.
I'd love to see you attempt to prove the above statement.
What if there are several states with utility equal to or
John, are any of your peer-reviewed papers online? I can't seem to find them...
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: John LaMuth [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, August 26, 2008 2:35:10 AM
Subject: Re: Information theoretic approaches to
It is up to humans to define the goals of an AGI, so that it will do what we
want it to do.
Why must we define the goals of an AGI? What would be wrong with setting it
off with strong incentives to be helpful, even stronger incentives to not be
harmful, and let it chart its own course
Matt,
Thanks for the reply. There are 3 reasons that I can think of for
calling Goedel machines bounded:
1. As you assert, once a solution is found, it stops.
2. It will be on a finite computer, so it will eventually reach the
one best version of itself that it can reach.
3. It can only make
Mark,
I agree that we are mired 5 steps before that; after all, AGI is not
solved yet, and it is awfully hard to design prefab concepts in a
knowledge representation we know nothing about!
But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful?
But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?
Actually, my description gave the AGI four
I think if an artificial intelligence of length n was able to fully
grok itself and had a space of at least n in which to try out
modifications, it would be pretty simple for that intelligence to
figure out when the intelligences it's engineering in the allocated
space exhibit shiny new
What about raising thousands of generations of these things, whole
civilizations comprised of individual instances, then frozen at a
point of enlightenment to cherry-pick the population? You can have it
educated and bred and raised and everything by a real lineage in a VR
world with Earth-accurate
On Wed, Aug 27, 2008 at 8:32 PM, Mark Waser [EMAIL PROTECTED] wrote:
But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict
Mark,
OK, I take up the challenge. Here is a different set of goal-axioms:
-Good is a property of some entities.
-Maximize good in the world.
-A more-good entity is usually more likely to cause goodness than a
less-good entity.
-A more-good entity is often more likely to cause pleasure than a
Hi,
A number of problems unfortunately . . . .
-Learning is pleasurable.
. . . . for humans. We can choose whether to make it so for machines or
not. Doing so would be equivalent to setting a goal of learning.
-Other things may be pleasurable depending on what we initially want
the
Mark,
The main motivation behind my setup was to avoid the wirehead
scenario. That is why I make the explicit goodness/pleasure
distinction. Whatever good is, it cannot be something directly
observable, or the AI will just wirehead itself (assuming it gets
intelligent enough to do so, of course).
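(To make the wirehead worry concrete, here is a minimal sketch of my own, with all names hypothetical: when the quantity being maximized is a directly observable and writable signal, the cheapest "optimization" is to overwrite the signal itself; a notion of good that is only inferred from the state of the world does not admit that shortcut.)

class Agent:
    def __init__(self):
        self.reward_signal = 0.0  # directly observable and writable by the agent

    def wirehead(self):
        # The degenerate optimum: max out the observable signal directly,
        # without changing anything in the world.
        self.reward_signal = float("inf")

def latent_good(world_state):
    # Stand-in for "goodness" estimated only from evidence about the world;
    # the agent cannot simply assign it a value.
    return sum(world_state.values())

agent = Agent()
agent.wirehead()
print(agent.reward_signal)                # inf -- signal maxed, nothing achieved
print(latent_good({"people_helped": 3}))  # 3   -- changes only if the world changes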
On Wed, Aug 27, 2008 at 8:43 PM, Abram Demski wrote:
snip
By the way, where does this term wireheading come from? I assume
from context that it simply means self-stimulation.
Science Fiction novels.
http://en.wikipedia.org/wiki/Wirehead
In Larry Niven's Known Space stories, a wirehead is
See also http://wireheading.com/
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: BillK [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 4:50:56 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was
Re: [agi] The
Mark Waser [EMAIL PROTECTED] wrote:
All rational goal-seeking agents must have a mental state of maximum utility
where any thought or perception would be unpleasant because it would result in
a different state.
I'd love to see you attempt to prove the above statement.
What if there are
Matt Mahoney wrote:
An AGI will not design its goals. It is up to humans to define the
goals of an AGI, so that it will do what we want it to do.
Are you certain that this is the optimal approach? To me it seems more
promising to design the motives, and to allow the AGI to design its own
Abram Demski [EMAIL PROTECTED] wrote:
First, I do not think it is terribly difficult to define a Goedel
machine that does not halt. It interacts with its environment, and
there is some utility value attached to this interaction, and it
attempts to rewrite its code to maximize this utility.
It's
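(A schematic sketch of my own, greatly simplified from the description just quoted: the loop keeps interacting with the environment and accruing utility while searching for a rewrite of its own code together with a proof that the rewrite raises expected utility. The proof search is only a stub here, since that is the hard part; nothing below is Schmidhuber's actual construction.)

class ToyEnvironment:
    # Minimal stand-in environment: observations are constant and acting
    # simply accrues utility equal to the action value (purely illustrative).
    def __init__(self):
        self.total_utility = 0.0

    def observation(self):
        return 1.0

    def act(self, action):
        self.total_utility += action

def find_proven_improvement(current_code):
    # Stub for the proof searcher. In a real Goedel machine this is a formal
    # theorem prover that returns (new_code, proof) only when new_code is
    # provably better than current_code; here it never finds anything.
    return None

def run(initial_code, environment, steps):
    code = initial_code
    for _ in range(steps):
        action = code(environment.observation())
        environment.act(action)            # keep interacting; utility accrues here
        result = find_proven_improvement(code)
        if result is not None:
            code, _proof = result          # self-rewrite only when provably better
    return environment.total_utility

print(run(lambda obs: obs * 2, ToyEnvironment(), steps=10))  # 20.0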
What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?
If you are aware of the passage of time, then you are not staying in the
same state.
I have to laugh. So you agree that all your arguments don't apply to
anything that
Hi,
I think that I'm missing some of your points . . . .
Whatever good is, it cannot be something directly observable, or the AI will
just wirehead itself (assuming it gets intelligent enough to do so, of course).
I don't understand this unless you mean by directly observable that the
Mark Waser [EMAIL PROTECTED] wrote:
What if the utility of the state decreases the longer that you are in it
(something that is *very* true of human beings)?
If you are aware of the passage of time, then you are not staying in the
same state.
I have to laugh. So you agree that all your
Goals and motives are the same thing, in the sense that I mean them.
We want the AGI to want to do what we want it to do.
Failure is an extreme danger, but it's not only failure to design safely
that's a danger. Failure to design a successful AGI at all could be
nearly as great a danger.
Matt
You are just goin' to have to take my word for it all ...
Besides, my ideas stand alone apart from any sheepskin rigamarole ...
BTW, please don't throw out any more grand challenges if you are just goin' to
play the TEASE about addressing the relevant issues.
John LaMuth
Matt
Below is a sampling of my peer reviewed conference presentations on my
background ethical theory ...
This should elevate me above the common crackpot
Talks
- Presentation of a paper at ISSS 2000 (International Society for Systems
Thanks very much for the info. I found those articles very interesting.
Actually though this is not quite what I had in mind with the term
information-theoretic approach. I wasn't very specific, my bad. What I am
looking for is a theory behind the actual R itself. These approaches
(correct me
Valentina: In other words I'm looking for a way to mathematically define how the
AGI will mathematically define its goals.
Holy Non-Existent Grail? Has any new branch of logic or mathematics ever been
logically or mathematically (axiomatically) derivable from any old one? e.g.
topology,
Mike,
The answer here is a yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover,
mathematicians often find this to be actually useful, not merely a
curiosity.
--Abram Demski
On
Abram,
Thanks for reply. This is presumably after the fact - can set theory
predict new branches? Which branch of maths was set theory derivable from? I
suspect that's rather like trying to derive any numeral system from a
previous one. Or like trying to derive any programming language from
Mike,
That may be the case, but I do not think it is relevant to Valentina's
point. How can we mathematically define how an AGI might
mathematically define its own goals? Well, that question assumes 3
things:
-An AGI defines its own goals
-In doing so, it phrases them in mathematical language
John, I have looked at your patent and various web pages. You list a lot of
nice sounding ethical terms (honor, love, hope, peace, etc) but give no details
on how to implement them. You have already admitted that you have no
experimental results, haven't actually built anything, and have no
Matt,
What is your opinion on Goedel machines?
http://www.idsia.ch/~juergen/goedelmachine.html
--Abram
On Sun, Aug 24, 2008 at 5:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Eric Burton [EMAIL PROTECTED] wrote:
These have profound impacts on AGI design. First, AIXI is (provably) not
Eric Burton [EMAIL PROTECTED] wrote:
These have profound impacts on AGI design. First, AIXI is (provably) not
computable,
which means there is no easy shortcut to AGI. Second, universal intelligence
is not
computable because it requires testing in an infinite number of environments.
Since
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, August 24, 2008 2:46 PM
Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment)
I have challenged this list as well as the singularity and SL4 lists
2008/8/23 Matt Mahoney [EMAIL PROTECTED]:
Valentina Poletti [EMAIL PROTECTED] wrote:
I was wondering why no-one had brought up the information-theoretic aspect
of this yet.
It has been studied. For example, Hutter proved that the optimal strategy of
a rational goal seeking agent in an
On Sat, Aug 23, 2008 at 7:00 AM, William Pearson [EMAIL PROTECTED] wrote:
2008/8/23 Matt Mahoney [EMAIL PROTECTED]:
Valentina Poletti [EMAIL PROTECTED] wrote:
I was wondering why no-one had brought up the information-theoretic aspect
of this yet.
It has been studied. For example, Hutter
These have profound impacts on AGI design. First, AIXI is (provably) not
computable,
which means there is no easy shortcut to AGI. Second, universal intelligence
is not
computable because it requires testing in an infinite number of environments.
Since
there is no other well accepted test of
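(For reference, and from memory rather than from anything quoted above: the non-computability point is usually stated against Legg and Hutter's definition of universal intelligence, which weights an agent's expected performance across all computable environments by their complexity, roughly

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected total reward of policy \pi in \mu. Because E is infinite and K is itself uncomputable, \Upsilon cannot be evaluated exactly, which is the sense in which it "requires testing in an infinite number of environments.")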