Steve,
According to Wikipedia, a problem is defined as an obstacle which
makes it difficult to achieve a desired goal, objective or purpose. It
exists when an individual becomes aware of a significant difference
between what actually is and what is desired. I understand that
conquering a
--- On Sat, 6/14/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
On Fri, Jun 13, 2008 at 6:21 PM, Mark Waser
[EMAIL PROTECTED] wrote:
if you wire-head, you go extinct
Doing it today certainly wouldn't be a good idea, but
whatever we do to take care of risks and improvements, our AGI(s) will
Jiri,
On 6/12/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
You may not necessarily want to mess with a particular problem/education.
You may have much better things to do. All of us may have better things to
do.
Just listen to that word: PROBLEM. Do you want to have anything to
do with
if you wire-head, you go extinct
Doing it today certainly wouldn't be a good idea, but
whatever we do to take care of risks and improvements, our AGI(s) will
eventually do a better job, so why not then?
Going into a degenerate mental state is no different than death. If you can't
see
There've been enough responses to this that I will reply in generalities, and
hope I cover everything important...
When I described Nirvana attractors as a problem for AGI, I meant it in
the sense that they form a substantial challenge for the designer (as do many
other
Sent: Friday, June 13, 2008 11:58 AM
Subject: Re: [agi] Nirvana
There've been enough responses to this that I will reply in generalities,
and
hope I cover everything important...
When I described Nirvana attractors as a problem for AGI, I meant it in
the sense that they form a substantial
In my visualization of the Cosmic All, it is not surprising.
However, there is an undercurrent of the Singularity/AGI community that is
somewhat apocalyptic in tone, and which (to my mind) seems to imply or assume
that somebody will discover a Good Trick for self-improving AIs and the jig
will
of the Singularity/AGI community that is
somewhat apocalyptic in tone,
Yeah, well, I would (and will, shortly) argue differently.
- Original Message -
From: J Storrs Hall, PhD [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, June 13, 2008 1:28 PM
Subject: Re: [agi] Nirvana
In my visualization
On Fri, Jun 13, 2008 at 1:28 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I think that our culture of self-indulgence is to some extent in a Nirvana
attractor. If you think that's a good thing, why shouldn't we all lie around
with wires in our pleasure centers (or hopped up on cocaine, same
Mark,
Assuming that
a) pain avoidance and pleasure seeking are our primary driving forces; and
b) our intelligence wins over our stupidity; and
c) we don't get killed by something we cannot control;
Nirvana is where we go.
Jiri
Yes, but I strongly disagree with assumption one. Pain avoidance and
pleasure are best viewed as status indicators, not goals.
- Original Message -
From: Jiri Jelinek [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, June 13, 2008 3:42 PM
Subject: Re: [agi] Nirvana
Mark
a) pain avoidance and pleasure seeking are our primary driving forces;
On Fri, Jun 13, 2008 at 3:47 PM, Mark Waser [EMAIL PROTECTED] wrote:
Yes, but I strongly disagree with assumption one. Pain avoidance and
pleasure are best viewed as status indicators, not goals.
Pain and pleasure [levels]
to promote their own pleasure.
But then again, it really doesn't matter because you're extinct either way,
right?
- Original Message -
From: Jiri Jelinek [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, June 13, 2008 4:34 PM
Subject: Re: [agi] Nirvana
a) pain avoidance and pleasure
On Fri, Jun 13, 2008 at 6:21 PM, Mark Waser [EMAIL PROTECTED] wrote:
if you wire-head, you go extinct
Doing it today certainly wouldn't be a good idea, but whatever we do
to take care of risks and improvements, our AGI(s) will eventually do
a better job, so why not then?
Regards,
Jiri Jelinek
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
I'm getting several replies to this that indicate that people don't understand
what a utility function is.
If you are an AI (or a person) there will be occasions where you have to make
choices. In fact, pretty much everything you do involves
Jiri, Josh, et al,
On 6/11/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
If you can modify your mind, what is the shortest path to satisfying all
your
goals? Yep, you got it: delete the goals.
We can set whatever
If you have a program structure that can make decisions that would otherwise
be vetoed by the utility function, but get through because it isn't executed
at the right time, to me that's just a bug.
Josh
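To make Josh's point concrete, here is a minimal sketch (hypothetical names, my own construction, not anyone's actual architecture): every action request is routed through a single gate that consults the utility function, so nothing is "executed at the wrong time"; a second dispatch path that skipped the gate would be exactly the bug he describes.

#include <functional>
#include <iostream>
#include <string>
#include <utility>

struct Action {
    std::string name;
    std::function<void()> run;
};

class Agent {
public:
    explicit Agent(std::function<double(const Action&)> utility)
        : utility_(std::move(utility)) {}

    // The only sanctioned way to act: score the action first, veto if negative.
    bool request(const Action& a) {
        if (utility_(a) < 0.0) {
            std::cout << "vetoed: " << a.name << "\n";
            return false;
        }
        a.run();
        return true;
    }

private:
    std::function<double(const Action&)> utility_;
};

int main() {
    Agent agent([](const Action& a) {
        return a.name == "delete my goals" ? -1.0 : 1.0;  // toy utility
    });
    agent.request({"answer email", []{ std::cout << "answering\n"; }});
    agent.request({"delete my goals", []{ std::cout << "oops\n"; }});
    // Calling an action's run() anywhere without going through request()
    // would be the bug described above, not a property of the utility function.
}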
On Thursday 12 June 2008 09:02:35 am, Mark Waser wrote:
If you have a fixed-priority
Isn't your Nirvana trap exactly equivalent to Pascal's Wager? Or am I
missing something?
- Original Message -
From: J Storrs Hall, PhD [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, June 11, 2008 10:54 PM
Subject: Re: [agi] Nirvana
On Wednesday 11 June 2008 06:18:03 pm
--- On Thu, 6/12/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
But it doesn't work for full-fledged AGI. Suppose you
are a young man who's always been taught not to get yourself killed, and
not to kill people (as top
priorities). You are confronted with your country being invaded and faced
Subject: Re: [agi] Nirvana
If you have a program structure that can make decisions that would
otherwise
be vetoed by the utility function, but get through because it isn't
executed
at the right time, to me that's just a bug.
Josh
On Thursday 12 June 2008 09:02:35 am, Mark Waser wrote
On Thu, Jun 12, 2008 at 3:36 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
... and here we have the makings of AGI run amok...
My point: it is usually possible to make EVERYONE happy with the results,
but only with a process that roots out the commonly held invalid assumptions.
Like Gort
On Thu, Jun 12, 2008 at 6:44 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
If you have a fixed-priority utility function, you can't even THINK ABOUT the
choice. Your pre-choice function will always say "Nope, that's bad" and
you'll be unable to change. (This effect is intended in all the RSI
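A toy rendering of the "can't even think about it" effect (names and the filtering rule are mine, purely illustrative): a fixed pre-choice filter rejects any plan that would touch the top priority before that plan is ever deliberated.

#include <iostream>
#include <string>
#include <vector>

// Fixed top priority: never consider modifying the utility function itself.
bool preChoiceFilter(const std::string& plan) {
    return plan.find("modify utility") == std::string::npos;
}

int main() {
    std::vector<std::string> plans = {"learn chemistry",
                                      "modify utility weights",
                                      "write a poem"};
    for (const auto& p : plans) {
        if (!preChoiceFilter(p)) {
            std::cout << "Nope, that's bad: " << p << "\n";  // never reaches deliberation
            continue;
        }
        std::cout << "considering: " << p << "\n";
    }
}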
Jiri,
The point that you apparently missed is that substantially all problems fall
cleanly into two categories:
1. The solution is known (somewhere in the world and hopefully to the AGI),
in which case, as far as the user is concerned, this is an issue of
ignorance that is best cured by
--- On Wed, 6/11/08, Jey Kottalam [EMAIL PROTECTED] wrote:
On Wed, Jun 11, 2008 at 5:24 AM, J Storrs Hall, PhD
[EMAIL PROTECTED] wrote:
The real problem with a self-improving AGI, it seems
to me, is not going to be
that it gets too smart and powerful and takes over the
world. Indeed, it
On Thu, Jun 12, 2008 at 10:23 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Huh? I used those phrases to describe two completely different things: a
program that CAN change its highest priorities (due to what I called a bug),
and one that CAN'T. How does it follow that I'm missing a
attempted reply to
your non-reply was confusing).
- Original Message -
From: J Storrs Hall, PhD [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, June 12, 2008 2:23 PM
Subject: Re: [agi] Nirvana
Huh? I used those phrases to describe two completely different things: a
program
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
On Thursday 12 June 2008 02:48:19 am, William Pearson wrote:
The kinds of choices I am interested in designing for at the moment
are: should program X or program Y get control of this bit of memory or
IRQ for the next time period? X and Y can
J Storrs Hall, PhD wrote:
The real problem with a self-improving AGI, it seems to me, is not going to be
that it gets too smart and powerful and takes over the world. Indeed, it
seems likely that it will be exactly the opposite.
If you can modify your mind, what is the shortest path to
The real problem with a self-improving AGI, it seems to me, is not going to be
that it gets too smart and powerful and takes over the world. Indeed, it
seems likely that it will be exactly the opposite.
If you can modify your mind, what is the shortest path to satisfying all your
goals? Yep,
On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
The real problem with a self-improving AGI, it seems to me, is not going to be
that it gets too smart and powerful and takes over the world. Indeed, it
seems likely that it will be exactly the opposite.
If you can
Vladimir,
You seem to be assuming that there is some objective utility for which the
AI's internal utility function is merely the indicator, and that if the
indicator is changed it is thus objectively wrong and irrational.
There are two answers to this. First is to assume that there is such an
On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
If you can modify your mind, what is the shortest path to satisfying all your
goals? Yep, you got it: delete the goals.
We can set whatever goals/rules we want for AGI, including rules for
[particular [types of]]
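To make the "delete the goals" shortcut concrete, a toy construction (mine, not Josh's code): if dissatisfaction is simply the number of unmet goals and the agent is free to edit its own goal list, clearing the list is the trivial optimum; that is the whole attractor in three lines.

#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> goals = {"cure disease", "learn physics",
                                      "write a symphony"};
    // Dissatisfaction measured naively as the count of unmet goals.
    auto dissatisfaction = [&goals] { return goals.size(); };

    std::cout << "before: " << dissatisfaction() << " unmet goals\n";
    goals.clear();   // the "shortest path": delete the goals
    std::cout << "after:  " << dissatisfaction() << " unmet goals\n";
}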
2008/6/11 J Storrs Hall, PhD [EMAIL PROTECTED]:
Vladimir,
You seem to be assuming that there is some objective utility for which the
AI's internal utility function is merely the indicator, and that if the
indicator is changed it is thus objectively wrong and irrational.
There are two
On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Vladimir,
You seem to be assuming that there is some objective utility for which the
AI's internal utility function is merely the indicator, and that if the
indicator is changed it is thus objectively wrong and
I'm getting several replies to this that indicate that people don't understand
what a utility function is.
If you are an AI (or a person) there will be occasions where you have to make
choices. In fact, pretty much everything you do involves making choices. You
can choose to reply to this or
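A toy rendering of what "having a utility function" means here (my own example, not Josh's formalism): the function just maps candidate choices to numbers, and deciding means taking the choice that scores highest.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> choices = {"reply to this email",
                                        "write code",
                                        "go for a walk"};
    // Any scoring rule will do for the illustration; here, longer is better.
    auto utility = [](const std::string& c) { return static_cast<double>(c.size()); };

    auto best = *std::max_element(choices.begin(), choices.end(),
        [&utility](const std::string& a, const std::string& b) {
            return utility(a) < utility(b);
        });
    std::cout << "chosen: " << best << "\n";
}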
On Wed, Jun 11, 2008 at 5:24 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
The real problem with a self-improving AGI, it seems to me, is not going to be
that it gets too smart and powerful and takes over the world. Indeed, it
seems likely that it will be exactly the opposite.
If you can
On Thu, Jun 12, 2008 at 5:12 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I'm getting several replies to this that indicate that people don't understand
what a utility function is.
I don't see any specific indication of this problem in replies you
received; maybe you should be a little more
A very diplomatic reply, it's appreciated.
However, I have no desire (or time) to argue people into my point of view. I
especially have no time to argue with people over what they did or didn't
understand. And if someone wishes to state that I misunderstood what he
understood, fine. If he
On Wednesday 11 June 2008 06:18:03 pm, Vladimir Nesov wrote:
On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
I claim that there's plenty of historical evidence that people fall into
this
kind of attractor, as the word nirvana indicates (and you'll find similar
On Thu, Jun 12, 2008 at 6:30 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
A very diplomatic reply, it's appreciated.
However, I have no desire (or time) to argue people into my point of view. I
especially have no time to argue with people over what they did or didn't
understand. And if
Matt,
Printing "ahh" or "ouch" is just for show. The important observation is that
the program changes its behavior in response to a reinforcement signal in the
same way that animals do.
Let me remind you that the problem we were originally discussing was
about qualia and uploading. Not just about a
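For readers who have not seen it, a minimal sketch in the spirit of what is being described (my reconstruction, not Matt's actual autobliss.cpp): the "ahh"/"ouch" strings are decoration, while the reinforcement signal really does shift the program's response probabilities, which is the behavior change being compared to animals.

#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>

int main() {
    std::srand(static_cast<unsigned>(std::time(nullptr)));
    double p_one[4] = {0.5, 0.5, 0.5, 0.5};  // P(output = 1) for each input pair

    for (int step = 0; step < 20; ++step) {
        int a = std::rand() % 2, b = std::rand() % 2;
        int idx = 2 * a + b;
        int out = (std::rand() / (double)RAND_MAX) < p_one[idx] ? 1 : 0;

        // Train toward AND: reward a correct output, punish a wrong one.
        int target = a & b;
        double signal = (out == target) ? 0.1 : -0.1;
        std::cout << (signal > 0 ? "ahh" : "ouch") << "\n";  // just for show

        // The real effect: nudge the probability of repeating this output.
        double delta = (out == 1) ? signal : -signal;
        p_one[idx] = std::min(1.0, std::max(0.0, p_one[idx] + delta));
    }
}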
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 18, 2007 5:32 PM
To: agi@v2.listbox.com
Subject: Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana?
Never!)
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
Matt,
autobliss passes tests
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
Matt,
Printing "ahh" or "ouch" is just for show. The important observation is
that
the program changes its behavior in response to a reinforcement signal in
the
same way that animals do.
Let me remind you that the problem we were originally
--- Gary Miller [EMAIL PROTECTED] wrote:
To complicate things further.
A small percentage of humans perceive pain as pleasure
and prefer it at least in a sexual context or else
fetishes like sadomasochism would not exist.
And they do in fact experience pain as a greater pleasure.
More
Eliezer,
You asked that very personal question yourself and now you blame
Jiri for asking the same?
:-)
Ok, let's take a look into your answer.
You said that you prefer to be transported into a randomly selected
anime.
To my taste, Jiri's endless AGI-supervised pleasure is a much wiser
choice
Matt,
Your algorithm is too complex.
What's the point of doing step 1?
Step 2 is sufficient.
Saturday, November 3, 2007, 8:01:45 PM, you wrote:
So we can dispense with the complex steps of making a detailed copy of your
brain and then having it transition into a degenerate state, and just skip
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
We just need to control AGIs' goal system.
Matt,
autobliss passes tests for awareness of its inputs and responds as if it has
qualia. How is it fundamentally different from human awareness of pain and
pleasure, or is it just a matter of degree?
If your code has the feelings it reports, then reversing the order of the
feeling strings (without
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
We just need to control AGIs' goal system.
You can only control the goal system of the first
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
We just need to control AGIs' goal system.
You can only control the goal system of the first iteration.
...and you can add rules for its creations (e.g. stick with the same
Matt Mahoney wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
We just need to control AGIs' goal system.
You can only control the goal system of the first iteration.
...and you can add rules for its creations (e.g. stick with
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
We just need to control AGIs' goal system.
You can only control the goal system of the first iteration.
..and
On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
We just need to control AGIs' goal system.
You can only control the goal system of the first iteration.
...and you can add rules for its creations (e.g. stick with the same
goals/rules unless authorized otherwise)
But if
I've often heard people say things like "qualia are an illusion" or
"consciousness is just an illusion", but the concept of an illusion
when applied to the mind is not very helpful, since all our thoughts
and perceptions could be considered as illusions reconstructed from
limited sensory data and
Matt,
We can compute behavior, but nothing indicates we can compute
feelings. Qualia research is needed to figure out new platforms for
uploading.
Regards,
Jiri Jelinek
On Nov 4, 2007 1:15 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
Matt,
Create a
Ed,
But I guess I am too much of a product of my upbringing and education
to want only bliss. I like to create things and ideas.
I assume it's because it provides pleasure you are unable to get in
other ways. But there are other ways and if those were easier for you,
you would prefer them over
Jelinek [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 04, 2007 2:59 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Nirvana? Manyana? Never!
Ed,
But I guess I am too much of a product of my upbringing and education
to want only bliss. I like to create things and ideas.
I assume it's because
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
Matt,
Create a numeric pleasure variable in your mind, initialize it with
a positive number and then keep doubling it for some time. Done? How
do you feel? Not a big difference? Oh, keep doubling! ;-))
The point of autobliss.cpp is to illustrate
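Taken literally (assuming the "pleasure variable" is nothing but a double), the thought experiment is a few lines; the number grows until it saturates, and nothing whatsoever is felt.

#include <iostream>

int main() {
    double pleasure = 1.0;                // initialize with a positive number
    for (int i = 0; i < 1100; ++i) {
        pleasure *= 2.0;                  // keep doubling
    }
    std::cout << "pleasure = " << pleasure << "\n";  // prints "inf": still just a number
}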
On 11/4/07, Matt Mahoney [EMAIL PROTECTED] wrote:
Let's say your goal is to stimulate your nucleus accumbens. (Everyone has
this goal; they just don't know it). The problem is that you would forgo
food, water, and sleep until you died (we assume, from animal experiments).
We have no need to
On Nov 3, 2007 12:58 PM, Mike Dougherty [EMAIL PROTECTED] wrote:
You are describing a very convoluted process of drug addiction.
The difference is that I have safety controls built into that scenario.
If I can get you hooked on heroin or crack cocaine, I'm pretty confident
that you will
, November 03, 2007 3:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Nirvana? Manyana? Never!
On Nov 3, 2007 12:58 PM, Mike Dougherty [EMAIL PROTECTED] wrote:
You are describing a very convoluted process of drug addiction.
The difference is that I have safety controls built into that scenario
--- Edward W. Porter [EMAIL PROTECTED] wrote:
If bliss without intelligence is the goal of the machines you imagine
running the world, for the cost of supporting one human they could
probably keep at least 100 mice in equal bliss, so if they were driven to
maximize bliss why wouldn't they kill
On 11/2/07, Eliezer S. Yudkowsky wrote:
I didn't ask whether it's possible. I'm quite aware that it's
possible. I'm asking if this is what you want for yourself. Not what
you think that you ought to logically want, but what you really want.
Is this what you lived for? Is this the most
Jiri Jelinek wrote:
Ok, seriously, what's the best possible future for mankind you can imagine?
In other words, where do we want our cool AGIs to get us? I mean
ultimately. What is it at the end as far as you can see?
That's a very personal question, don't you think?
Even the parts I'm
On Fri, Nov 02, 2007 at 12:41:16PM -0400, Jiri Jelinek wrote:
On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
if you could have anything you wanted, is this the end you
would wish for yourself, more than anything else?
Yes. But don't forget I would also have AGI
On Fri, Nov 02, 2007 at 01:19:19AM -0400, Jiri Jelinek wrote:
Or do we know anything better?
I sure do. But ask me again, when I'm smarter, and have had more time to
think about the question.
--linas
On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
I'm asking if this is what you want for yourself.
Then you could read just the first word from my previous response: YES
if you could have anything you wanted, is this the end you
would wish for yourself, more than anything
Jiri Jelinek wrote:
On Nov 2, 2007 4:54 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one, somewhat significant at this point,
part of our goal system. But the intelligent part of the goal system is a much
more
Jiri,
You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one, somewhat significant at this point,
part of our goal system. But the intelligent part of the goal system is a much
more 'complex' thing and can also act as a goal in itself. You can say
that AGIs will be
Linas, BillK
It might currently be hard to accept for association-based human
minds, but things like roses, power-over-others, being worshiped
or loved are just a waste of time: indirect feeling triggers
(assuming the nearly-unlimited ability to optimize).
Regards,
Jiri Jelinek
On Nov 2, 2007
On Nov 2, 2007 2:35 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Could you please provide one specific example of a human goal which
isn't feeling-based?
It depends on what you mean by 'based' and 'goal'. Does any choice
qualify as a goal? For example, if I choose to write a certain word in
Jiri Jelinek wrote on Thu 11/01/07 2:51 AM
JIRI Ok, here is how I see it: If we survive, I believe we will
eventually get plugged into some sort of pleasure machine and we will not
care about intelligence at all. Intelligence is a useless tool when there
are no problems and no goals to think
Is this really what you *want*?
Out of all the infinite possibilities, this is the world in which you
would most want to live?
Yes, great feelings only (for as many people as possible) and the
engine being continuously improved by AGI which would also take care
of all related tasks including
ED So is the envisioned world one in which people are on something
equivalent to a perpetual heroin or crystal meth rush?
Kind of, except it would be safe.
If so, since most current humans wouldn't have much use for such people, I
don't know why self-respecting productive human-level AGIs
Jiri Jelinek wrote:
Let's go to an extreme: Imagine being an immortal idiot... No matter
what you do or how hard you try, the others will always be so much
better in everything that you will eventually become totally
discouraged or even afraid to touch anything because it would just
always
On Nov 2, 2007 1:19 PM, Jiri Jelinek [EMAIL PROTECTED] wrote:
Is this really what you *want*?
Out of all the infinite possibilities, this is the world in which you
would most want to live?
Yes, great feelings only (for as many people as possible) and the
engine being continuously
Stefan,
closing your eyes to reality. This is bad because you
effectively deny yourself the potential for further increasing your fitness
I'm closing my eyes, but my AGI - which is an extension of my
intelligence (/me) - does not. In fact it opens them more than I could.
We and our AGI should
Jiri Jelinek wrote:
Is this really what you *want*?
Out of all the infinite possibilities, this is the world in which you
would most want to live?
Yes, great feelings only (for as many people as possible) and the
engine being continuously improved by AGI which would also take care
of all