Re: [agi] Non-evolutionary models of recursive self-improvement (was: Ability to improve one's own efficiency as a measure of intelligence)

2008-09-12 Thread Bryan Bishop
On Wednesday 10 September 2008, Matt Mahoney wrote:
 I have asked this list as well as the singularity and SL4 lists
 whether there are any non-evolutionary models (mathematical,
 software, physical, or biological) for recursive self improvement
 (RSI), i.e. where the parent and not the environment decides what the
 goal is and measures progress toward it. But as far as I know, the
 answer is no.

Have you considered resource-constraint situations where parents kill
their young? The runt of the litter or, sometimes, others - like when
a lion takes over a pride. Mostly in the non-human, non-Chinese
portions of the animal kingdom. (I refer to current events re: China's
population constraints on female offspring, of course.)

Secondly, I'm still wondering about the representations of goals in the
brain. So far, there has been no study showing the neurobiological
basis of 'goal' in the human brain. As far as we know, it's folk
psychology anyway, and it might not be 'true', since there's no hard
physical evidence of the existence of goals. I'm talking about
bottom-up existence, not top-down ('top' being us, humans and our
social contexts and such).

Does RSI have to be measured with respect to goals? Can you prove to
me that there exists no non-goal-oriented improvement methodology? I'm
keeping some possibilities open, as you can guess. I suspect that a
non-goal-oriented improvement function could fit into your thoughts in
the same way that you might hope the goal-oriented variation of RSI
would.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Non-evolutionary models of recursive self-improvement (was: Ability to improve one's own efficiency as a measure of intelligence)

2008-09-12 Thread Matt Mahoney
--- On Fri, 9/12/08, Bryan Bishop [EMAIL PROTECTED] wrote:

 On Wednesday 10 September 2008, Matt Mahoney wrote:
  I have asked this list as well as the singularity and SL4 lists
  whether there are any non-evolutionary models (mathematical,
  software, physical, or biological) for recursive self improvement
  (RSI), i.e. where the parent and not the environment decides what
  the goal is and measures progress toward it. But as far as I know,
  the answer is no.

 Have you considered resource-constraint situations where parents
 kill their young? The runt of the litter or, sometimes, others -
 like when a lion takes over a pride. Mostly in the non-human,
 non-Chinese portions of the animal kingdom. (I refer to current
 events re: China's population constraints on female offspring, of
 course.)

There are two problems with this approach. First, if your child is smarter than 
you, how would you know? Second, this approach favors parents who don't kill 
their children. How do you prevent this trait from evolving?

 Secondly, I'm still wondering about the representations of goals in
 the brain. So far, there has been no study showing the
 neurobiological basis of 'goal' in the human brain. As far as we
 know, it's folk psychology anyway, and it might not be 'true', since
 there's no hard physical evidence of the existence of goals. I'm
 talking about bottom-up existence, not top-down ('top' being us,
 humans and our social contexts and such).

You can define an algorithm as goal-oriented if it can be described as
having a utility function U : X -> R (any input, real-valued output)
and an iterative search over x in X such that U(x) increases over
time.
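
To make that concrete, here is one minimal sketch (Python; the
hill-climbing form, the neighbor function, and the quadratic toy goal
are illustrative assumptions, not something from this thread):

    import random

    def hill_climb(U, x0, neighbor, steps=1000):
        # Iterative search that only accepts moves which increase U(x),
        # so utility is nondecreasing over time -- goal-oriented in the
        # sense above. U maps candidates to reals; neighbor proposes a
        # nearby candidate.
        x, best = x0, U(x0)
        for _ in range(steps):
            x_new = neighbor(x)
            u_new = U(x_new)
            if u_new > best:
                x, best = x_new, u_new
        return x

    # Toy goal: maximize U(x) = -(x - 3)^2; the search drifts toward x = 3.
    x_star = hill_climb(lambda x: -(x - 3) ** 2,
                        x0=0.0,
                        neighbor=lambda x: x + random.uniform(-0.5, 0.5))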

Whether a program has a goal depends on how you describe it. For
example, linear regression has the goal of finding m and b such that
the straight-line equation (y = mx + b) minimizes RMS error over a
given set of (x,y) points, but only if you solve it by iteratively
adjusting m and b and evaluating the error, rather than by using the
conventional closed-form solution.
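
A rough sketch of that contrast (Python with numpy; the sample points,
learning rate, and step count are illustrative assumptions):

    import numpy as np

    xs = np.array([0.0, 1.0, 2.0, 3.0])
    ys = np.array([1.0, 3.0, 5.0, 7.0])   # points lying on y = 2x + 1

    # Closed form (least squares): no iterative search, so under the
    # definition above there is no goal, just a formula.
    A = np.vstack([xs, np.ones_like(xs)]).T
    m_c, b_c = np.linalg.lstsq(A, ys, rcond=None)[0]

    # Iterative form: gradient descent on mean squared error. The same
    # problem now fits the definition: U(m, b) = -MSE(m, b) increases
    # (RMS error decreases) as the loop runs.
    m, b, lr = 0.0, 0.0, 0.01
    for _ in range(5000):
        err = (m * xs + b) - ys
        m -= lr * 2 * np.mean(err * xs)   # d(MSE)/dm
        b -= lr * 2 * np.mean(err)        # d(MSE)/db

Both routes land on m = 2, b = 1; only the second description carries
a goal.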

The human brain is most easily described as having a utility function
given by Maslow's hierarchy of needs. Or you could describe it as a
state table with 2^(10^15) inputs.

 Does RSI have to be measured with respect to goals? Can you prove to
 me that there exists no non-goal-oriented improvement methodology?

No, it is a philosophical question. What do you mean by improvement?

-- Matt Mahoney, [EMAIL PROTECTED]



