Re: [agi] Unlimited intelligence.

2004-10-24 Thread Brad Wyble
On Thu, 21 Oct 2004, deering wrote:
True intelligence must be aware of the widest possible context and derive super-goals based on direct observation of that context, and then generate subgoals for subcontexts.  Anything with preprogrammed goals is limited intelligence.

You have pre-programmed goals.
And you are certainly not aware of the widest possible context; you'd need 
a brain several orders of magnitude larger than the universe you're trying 
to be aware of in order to possess that remarkable ability.

-Brad


Re[4]: [agi] Unlimited intelligence. --- Super Goals

2004-10-22 Thread Dennis Gorelik
deering,

It seems that I agree with you ~70% of the time :-)

Let's focus on the 30% where we differ and compare our understanding of
sub-goals and super-goals.

1) Which came first: sub-goals or super-goals?
Super-goals are the primary goals, aren't they?


 SUPERGOAL 1:  take actions which will aid the advancement of intelligence in the 
 universe.
 SUPERGOAL 2:  take actions which will aid in the continued survival and advancement 
 of me.
 SUPERGOAL 3:  do not take actions which will harm others.

2) Your examples of super-goals look like sub-goals to me:
highly abstract/intelligent sub-goals.
I call them sub-goals because they are not primary. They are derived
from basic human instincts.
Meanwhile, you labeled some of the primary goals as sub-goals, for example:
 SUBGOAL 1:  satisfy bodily needs, sex, food, sleep.
But to me that is definitely a super-goal.

By the way, I agree that the other sub-goals in your list really are
sub-goals:
 SUBGOAL 2:  make money.
 SUBGOAL 3:  wear seatbelt in car.
 SUBGOAL 4:  build websites explaining the coming Singularity.
 SUBGOAL 5:  play with and read to son.
 SUBGOAL 6:  annoy Eliezer.
 SUBGOAL 7:  learn about nanotechnology.
 SUBGOAL 8:  do household chores.
 SUBGOAL 9:  use proper grammar and spelling.

3) Goals are not equal in value.
It doesn't matter whether they are super-goals or sub-goals:
they all carry different weights.

4) Sometimes sub-goals can be more valuable than super-goals.
There are several possible reasons for that.
Let's compare the weight of sub-goal subA and super-goal superB:
- subA could serve super-goal superC.
If superC is far more important than superB, then subA could be more
important than superB.
- subA could serve several super-goals superD, superE, ...,
superZ. As a result, subA could be more valuable than superB.
- subA could be more suitable in the current situation, and therefore
more active than superB, which is not strongly related to the
current choice situation.

5) Only reinforcement matters when the system makes a decision.
If there is no (mental) force, there is no (mental) action.
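
As a rough illustration of points 3-5 (a toy sketch only; the goal names,
weights, and relevance numbers are invented for the example):

    # Toy model: a goal's effective pull is its own weight, plus the weight
    # it inherits from every super-goal it serves, scaled by how relevant
    # the goal is to the current situation.
    goals = {
        "superB": {"weight": 5.0, "serves": []},
        "superC": {"weight": 9.0, "serves": []},
        "subA":   {"weight": 1.0, "serves": ["superC"]},
    }
    relevance = {"superB": 0.2, "superC": 0.1, "subA": 1.0}

    def pull(name):
        inherited = sum(goals[s]["weight"] for s in goals[name]["serves"])
        return (goals[name]["weight"] + inherited) * relevance[name]

    # The decision goes to whichever goal exerts the most (mental) force.
    chosen = max(goals, key=pull)   # here the sub-goal subA beats superB

In this toy run subA wins because it serves the much heavier superC and fits
the current situation better, which are two of the three effects listed above.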

Friday, October 22, 2004, 3:02:11 AM, you wrote:

 All supergoals are equal in value.  Not all supergoals are
 applicable to all situations or decisions.  Subgoals are created,
 edited, or deleted to support supergoals.  Supergoals take
 precedence over subgoals.  Subgoals are more specific than
 supergoals and therefore more commonly directly applicable to
 situations or decisions.
  
 The subject chooses option 4 because it satisfies supergoal 1,
 which takes precedence over all subgoals despite producing no
 reinforcement.
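
Reading that rule literally, here is a toy sketch (the option names,
supergoal names, and numbers are invented for the example):

    # The quoted rule: any option that satisfies a supergoal beats every
    # option that only satisfies subgoals, regardless of reinforcement;
    # reinforcement only breaks ties.
    SUPERGOALS = {"supergoal 1", "supergoal 2", "supergoal 3"}
    options = {
        "option 1": {"satisfies": ["subgoal 2"],   "reinforcement": 0.9},
        "option 4": {"satisfies": ["supergoal 1"], "reinforcement": 0.0},
    }

    def choose(options):
        def rank(name):
            hits_super = any(g in SUPERGOALS for g in options[name]["satisfies"])
            return (hits_super, options[name]["reinforcement"])
        return max(options, key=rank)

    # choose(options) -> "option 4", despite zero reinforcement

That is the behavior described in the quoted example: option 4 wins even
though it produces no reinforcement.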





[agi] Unlimited intelligence.

2004-10-21 Thread deering




Computer chess programs are merely one example of many kinds of software
that display human-level intelligence in a very narrow domain.  The chess
program on my desktop computer can beat me (but just barely); nevertheless,
I consider myself more intelligent than it because I can do a lot of other
things in addition to playing chess.  But even if someone were to tack
together a bunch of specialized programs to make a super program that did
lots of stuff, I would still be more intelligent than it.  Intelligence
isn't just being able to do lots of stuff, but also having multiple levels
of abstraction.  The computer program has one level of abstraction: it
plays chess.  It doesn't know why it plays chess, the greater goal
satisfied by playing chess, or the even greater goal that the chess-playing
goal serves.

True intelligence must be aware of the widest possible context and derive
super-goals based on direct observation of that context, and then generate
subgoals for subcontexts.  Anything with preprogrammed goals is limited
intelligence.








Re: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread Dennis Gorelik
Deering,

I strongly disagree.
Humans have preprogrammed super-goals.
Humans don't have the ability to update their super-goals.
And humans are intelligent creatures, aren't they?

Moreover: a system which can easily redefine its super-goals is very
unstable.

At the same time, an intelligent system has to be able to define its own
sub-goals (not super-goals). These sub-goals are set based on the
environment and the super-goals.
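
As a toy sketch of that split (the goal names, environment cues, and rules
are invented for the example):

    # Super-goals are fixed at construction time; only sub-goals change.
    SUPERGOALS = ("avoid pain", "satisfy hunger")

    def derive_subgoals(environment):
        # Sub-goals follow from the fixed super-goals plus whatever the
        # environment currently offers; they can be revised, the
        # super-goals cannot.
        subgoals = []
        if "food in fridge" in environment:
            subgoals.append("open fridge")          # serves "satisfy hunger"
        if "hot stove" in environment:
            subgoals.append("keep hand off stove")  # serves "avoid pain"
        return subgoals

    derive_subgoals({"food in fridge", "hot stove"})
    # -> ["open fridge", "keep hand off stove"]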

 True intelligence must be aware of the widest possible context
 and derive super-goals based on direct observation of that
 context, and then generate subgoals for subcontexts.  Anything with
 preprogrammed goals is limited intelligence.





Re: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread Eugen Leitl
On Thu, Oct 21, 2004 at 10:47:40AM -0400, Dennis Gorelik wrote:
 Deering,
 
 I strongly disagree.
 Humans have preprogrammed super-goals.

Yes? Can you show them in the brain coredump? Do you have such a coredump?

 Humans don't have the ability to update their super-goals.

What, precisely, is a supergoal, in an animal context?

 And humans are intelligent creatures, aren't they?
 
 Moreover: system which can easily redefine its super-goals is very
 unstable.
 
 At the same time intelligent system has to be able to define its own
 sub-goals (not super-goals). These sub-goals are set based on
 environment and super-goals.


-- 
Eugen* Leitl <http://leitl.org>
__
ICBM: 48.07078, 11.61144    http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org http://nanomachines.net





Re[2]: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread Dennis Gorelik
Eugen,

 Yes? Can you show them in the brain coredump? Do you have such a coredump?

There is no coredump.
But we can observe human behavior.

 Humans don't have the ability to update their super-goals.

 What, precisely, is a supergoal, in an animal context?

There are many supergoals.

They include: the desire for physical activity, hunger, pain (actually the
desire to avoid pain), sexual attraction, and social instincts (like the
desire to chat).
There are many more supergoals. Not all of them are located in the
brain.

You cannot reprogram them. You can suppress some super-goals based
on other super-goals, but that is not reprogramming.
You can also destroy some super-goals with medical treatment, but that
kind of reprogramming is very limited.

Supergoals come with our genes.

 And humans are intelligent creatures, aren't they?
 
 Moreover: system which can easily redefine its super-goals is very
 unstable.
 
 At the same time intelligent system has to be able to define its own
 sub-goals (not super-goals). These sub-goals are set based on
 environment and super-goals.





Re: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread Jef Allbright
Dennis Gorelik wrote:
Deering,
I strongly disagree.
Humans have preprogrammed super-goals.
Humans don't have the ability to update their super-goals.
And humans are intelligent creatures, aren't they?
 

In what sense do humans have pre-programmed super-goals?  It seems to me 
that our evolved programming is about as far from a super-goal as can 
be conceived, responding in ways that provided superior fitness in the 
ancestral environment but certainly not forward-looking.

Moreover: system which can easily redefine its super-goals is very
unstable.
 

I can't see how this concept of a *system* modifying its supergoals even 
makes sense.  Doing so in any workable way appears to require the use of 
resources (knowledge) greater than those within the *system*.  
Indeed, the concept of a supergoal doesn't make sense, except from a 
point of view at a higher level of context than the system itself.

At the same time intelligent system has to be able to define its own
sub-goals (not super-goals). These sub-goals are set based on
environment and super-goals.
 

True intelligence must be aware of the widest possible context
and derive super-goals based on direct observation of that
context, and then generate subgoals for subcontexts.  Anything with
preprogrammed goals is limited intelligence.
   

 

True intelligence exists at various levels of capability in terms of a 
system modeling its environment and acting according to that model.  All 
intelligence is limited in its ability to sense, model, and predict.

- Jef
http://www.jefallbright.net


Re: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread deering



Yes, we have instincts, drives built into our 
systems at a hardware level, beyond the ability to reprogram through merely a 
software upgrade.  These drives (sex, pain/pleasure, food, air, security,
social status, self-actualization) are not supergoals; they are
reinforcers.

Reinforcers give you positive or negative feelings 
when they are encountered. 

Supergoals are the top level of rules you use to 
determine a choice of behavior.

You can make reinforcers your supergoals, which is 
what animals do because their contextual understanding and reasoning 
ability is so limited.  People have a choice.  You don't have to 
be a slave to the biologically programmed drives you were born with. You 
can perceive a broader context where you are not the center of the 
universe. You can even imagine redesigning your hardware and software to 
become something completely different with no vestige of your human 
reinforcers. 

Can a system choose to change its supergoal, or 
supergoals? Obviously not, unless some method of supergoal change is 
specifically written into the supergoals. People's supergoals change as 
they mature but this is not a voluntary process. Systems can be designed 
to have some sensitivity to the external environment for supergoal 
modification. Certainly systems with immutable supergoals are more stable, 
but stability isn't always desirable or even safe.
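
As a toy sketch of the reinforcer/supergoal distinction (the outcome names,
feelings, and rules are invented for the example):

    # Reinforcers map an outcome to a feeling; by themselves they decide nothing.
    def pain(outcome):
        return -1.0 if outcome == "burned hand" else 0.0

    def food(outcome):
        return +1.0 if outcome == "ate snack" else 0.0

    REINFORCERS = [pain, food]
    OUTCOMES = {"grab hot pan": "burned hand",
                "open fridge": "ate snack",
                "read about nanotech": "learned something"}

    # Animal-style supergoal: pick whatever the reinforcers reward most.
    def follow_reinforcers(options):
        return max(options, key=lambda o: sum(r(OUTCOMES[o]) for r in REINFORCERS))

    # Human-style supergoal: a different top-level rule entirely, chosen from
    # a broader context, even though no built-in reinforcer rewards it.
    def follow_broader_context(options):
        if "read about nanotech" in options:
            return "read about nanotech"
        return follow_reinforcers(options)

With the same options, follow_reinforcers picks "open fridge" while
follow_broader_context picks "read about nanotech": same reinforcers, but a
different top-level rule.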








Re[2]: [agi] Unlimited intelligence. --- Super Goals

2004-10-21 Thread Dennis Gorelik
1) All supergoals are implemented in the form of reinforcers.

Not all reinforcers constitute supergoals:
some reinforcers are created as implementations of sub-goals.
For instance: unconditional reflexes are supergoal reinforcers;
conditional reflexes are sub-goal reinforcers.


2) You say that supergoals are the top level of rules.
What do you mean by top level?
What is the difference between top-level rules and low-level rules?

3) You said: You don't have to be a slave to the biologically
programmed drives you were born with.

When you follow your own desires, that is not slavery, is it?

4) About stability.
Complex systems with a constant set of super-goals can be very different
even in similar environments.
For example, consider twins: exactly the same set of
supergoals, a very similar environment, different personalities.

The same supergoals in a different environment can make a huge difference
in behavior.

A slight difference in supergoals increases the difference in behavior of
a complex system even more.

The same goes for stability: systems are not very stable even with constant
supergoals.

Systems with the ability to self-modify their supergoals are TOO unstable.
Such systems are not safe at all.

That's why it makes no sense to allow self-modification of supergoals.
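
To make point (1) concrete, a toy sketch (the classes and values are
invented for the example; this is not a proposal for an implementation):

    import dataclasses

    @dataclasses.dataclass(frozen=True)   # frozen: supergoal reinforcers cannot be rewritten
    class UnconditionalReflex:
        trigger: str
        feeling: float

    @dataclasses.dataclass                # mutable: conditional reflexes are learned and relearned
    class ConditionalReflex:
        trigger: str
        feeling: float

    # Supergoal reinforcers come with the genes and never change.
    innate = [UnconditionalReflex("pain", -1.0), UnconditionalReflex("food", +1.0)]

    # Sub-goal reinforcers are acquired from experience and can be updated freely.
    learned = [ConditionalReflex("sound of bell", +0.5)]
    learned[0].feeling = +0.7      # fine: sub-goal reinforcers adapt to the environment
    # innate[0].feeling = 0.0      # would raise FrozenInstanceError: supergoals stay fixed

This frozen/mutable split is what the stability argument in (4) relies on:
the part that keeps the system stable is exactly the part that cannot
self-modify.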

 Yes, we have instincts, drives built into our systems at a
 hardware level, beyond the ability to reprogram through merely a
 software upgrade.  These drives, sex, pain/pleasure, food, air,
 security, social status, self-actualization, are not supergoals,
 they are reinforcers.
  
 Reinforcers give you positive or negative feelings when they are encountered. 
  
 Supergoals are the top level of rules you use to determine a choice of behavior.
  
 You can make reinforcers your supergoals, which is what animals
 do because their contextual understanding, and reasoning ability is
 so limited.  People have a choice.  You don't have to be a slave to
 the biologically programmed drives you were born with.  You can
 perceive a broader context where you are not the center of the
 universe.  You can even imagine redesigning your hardware and
 software to become something completely different with no vestige of
 your human reinforcers.  
  
 Can a system choose to change its supergoal, or supergoals? 
 Obviously not, unless some method of supergoal change is
 specifically written into the supergoals.  People's supergoals
 change as they mature but this is not a voluntary process.  Systems
 can be designed to have some sensitivity to the external environment
 for supergoal modification.  Certainly systems with immutable
 supergoals are more stable, but stability isn't always desirable or
 even safe.


