Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-14 Thread Bryan Bishop
On Sunday 14 September 2008, Dimitry Volfson wrote:
 Well, then I don't understand what you're looking for.
 Brain chemistry is part of the model.

Check out one of the sentences:
 The thalamus in the limbic system ('leopard brain') converts the
 physical need into an urge within the cortex.

So if I shoot a physical need at a thalamus sitting in my lab, it'll 
pop out an urge? You're just talking about the output of the 
neurons, not the concept of urge that most people talk about from 
Webster's and the like -- which is of the mind, not the brain. I'm not 
saying that the mind is separate from the brain; I'm just saying that 
people are confused and probably wrong when they talk about the mind. 
They most often are, having no background in neuroscience, etc.

If you look at the page, you see some implementation details like this:
 Wants and needs have to struggle against one another in a priority
 list for action now or later or not at all. The strength of the urge
 is thus important, with strong urges leading to needs that jump the
 queue, demanding immediate action.

I'm a programmer; I know what a list and a queue look like. Show it to 
me. Nobody has yet shown neurons doing math, much less implementing a 
list object.
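
For the record, here is a minimal Python sketch (every name and 
threshold is mine, invented for illustration) of what such a priority 
list of urges would look like in software -- which is exactly the 
point: nobody has shown neurons implementing anything like it.

import heapq

URGENT_THRESHOLD = 0.9  # hypothetical cutoff for "demands immediate action"

class UrgeQueue:
    def __init__(self):
        self._heap = []  # max-heap via negated strength

    def push(self, name, strength):
        # strong urges jump the queue and demand immediate action
        if strength >= URGENT_THRESHOLD:
            return name
        heapq.heappush(self._heap, (-strength, name))
        return None  # queued for later

    def next_action(self):
        # weaker wants wait their turn; an empty queue means "not at all"
        return heapq.heappop(self._heap)[1] if self._heap else None

q = UrgeQueue()
q.push("check email", 0.3)
q.push("stretch", 0.5)
print(q.push("hunger", 0.95))  # hunger -- jumped the queue
print(q.next_action())         # stretch -- strongest queued want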

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-14 Thread Dimitry Volfson
Actually, I remember reading something about scientists finding a list 
structure in the brain of a bird singing a song (a moving pointer to the 
next item in a list, that sort of thing). But whatever.


It's not a very low-level model, but the lower-level activation is implied.

When you imagine a goal-state, the relationship is represented in the 
brain somehow (in the neurons, of course). And when evidence of the 
actualization of that goal-state comes in through the senses, the brain 
sends an opiate reward, which might make the person want to do whatever 
that was again in the correct context.
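
As a toy sketch of that loop (the names and numbers are mine, not from 
any study): evidence that the imagined goal-state was reached triggers 
a reward signal, which strengthens the action-in-context tendency.

from collections import defaultdict

strength = defaultdict(float)  # (context, action) -> tendency to repeat
LEARNING_RATE = 0.1            # invented constant

def act_and_learn(context, action, goal_reached):
    reward = 1.0 if goal_reached else 0.0  # stands in for the opiate signal
    key = (context, action)
    # nudge the stored tendency toward the reward actually received
    strength[key] += LEARNING_RATE * (reward - strength[key])

for _ in range(10):
    act_and_learn("kitchen", "open fridge", goal_reached=True)
print(strength[("kitchen", "open fridge")])  # ~0.65 and rising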


Motivation circuits - familiar with the concept?
If a motivation circuit gets over-energized, then a person gets locked 
into doing the same thing over and over again (and not getting the 
goal-state), rather than having enough resources left to think about 
doing something different and what that different thing should be. Does 
someone need to know exactly how a motivation circuit becomes 
over-energized at the neuronal level in order to model it in an AI? I 
don't think so.
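
A toy sketch of what "over-energized" could mean at this level of 
abstraction (the capacity number and activations are invented): when 
one circuit's activation exceeds the capacity available for 
deliberation, it keeps winning action selection and nothing is left 
for considering alternatives.

def select_action(activations, capacity=1.0):
    # the most energized circuit wins the competition for action
    winner = max(activations, key=activations.get)
    # whatever capacity the winner doesn't consume is left for
    # weighing alternative actions
    leftover = capacity - activations[winner]
    alternatives = [a for a in activations if a != winner] if leftover > 0 else []
    return winner, alternatives

normal = {"press lever": 0.6, "groom": 0.4, "explore": 0.3}
locked = {"press lever": 1.5, "groom": 0.4, "explore": 0.3}  # over-energized
print(select_action(normal))  # ('press lever', ['groom', 'explore'])
print(select_action(locked))  # ('press lever', []) -- perseveration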


Many things like this are known. And people don't need to understand 
them at the individual-neuron level to model what happens.




Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-14 Thread Bryan Bishop
On Sunday 14 September 2008, Dimitry Volfson wrote:
 Actually, I remember reading something about scientists finding a
 list structure in the brain of a bird singing a song (a moving
 pointer to the next item in a list sort of thing). But whatever.

That does sound interesting, yes, I'd like to find a citation on it. Do 
you know where I might find that? Was it a magazine, journal, etc.?

 It's not a very low-level model, but the lower-level activation is
 implied.

How could it be used if it's left unspecified and hand-wavy?

 When you imagine a goal-state, the relationship is represented in the
 brain somehow (in the neurons of course). And when evidence of the

Of course. But how?

 actualization of that goal-state comes in through the senses, the
 brain sends an opiate reward, which might make the person want to do
 whatever that was again in the correct context.

How is it that only one class of molecules correlates to goalism? This 
seems suspect considering the complex infrastructure of the brain.

 Motivation circuits - familiar with the concept?

Yes, but only from psychology, not from stuff we can actually build or 
understand.

 If a motivation circuit gets over-energized then a person gets locked
 into doing the same thing over and over again (and not getting the
 goal-state), rather than having enough resources left to think about

Perseveration occurs for many reasons other than 'over-energized', 
though...

 doing something different and what that different thing should be.
 Does someone need to know exactly how a motivation circuit becomes
 over-energized at the neuronal level in order to model it in an AI? I
 don't think so.

Another illustration of my problem with this line of hypothesis: you're 
trying to build intelligence, a vague concept in the first place, on a 
foundation made of motivation, another somewhat vague psychology 
concept. I don't care how many times the mouse hits the button, etc. 
Also, I recently cited this:

http://www.ece.utexas.edu/~werner/siren_call.pdf

It's an elaboration of a few of my points here.

 Many things like this are known. And people don't need to understand
 such at the individual-neuron level to model what happens.

No, you miss my point. I'm not saying there's some scale of 
microscopism we have to climb down (brain, region, tissue slice, 
neuron, axon, subcellular mechanism, ...) to understand things; that's 
not it at all. What I'm talking about is actually considering the 
neurons as the physical components that make up the 'brain', which is 
the physical location of, supposedly, these 'goal-states'. These 
biological systems (brains) are the real things that can be 
experimentally tested or perhaps manipulated; the semantic space of 
'goals', 'meaning', and 'motivation', on the other hand, is hardly 
meaningfully manipulable, even with the WordNet or Cyc relations. 

I could randomly generate new designs for experiments using WordNet or 
Cyc relations or something, where we observe mice subjected to a battery 
of different psychochemical compounds. Then, using WordNet, we could 
pull out random labels for each of the behaviors -- maybe it's a 'goal', 
maybe it's a who-knows-what that the creature supposedly intrinsically 
has -- and then what? You'd plot the data sets in some multidimensional 
manner, maybe with a support vector machine (I'd have to ask some 
mathematicians), and then there's a strong likelihood that assigning 
these labels to the different phenotypes observed in the experiments is 
statistically irrelevant. These same phenotypes are the things of folk 
psychology as well. The trick is that instead of observing rats, you're 
observing people. 

Given that scenario, why would I care whether it's at the subatomic, 
neuron, or whole-brain level? So, no, our disagreement is about 
something else.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-13 Thread Dimitry Volfson

Bryan Bishop wrote:
 Secondly, I'm still wondering about the representations of goals in the
 brain. So far, there has been no study showing the neurobiological
 basis of 'goal' in the human brain. As far as we know, it's folk
 psychology anyway, and it might not be 'true', since there's no hard
 physical evidence of the existence of goals. I'm talking about
 bottom-up existence, not top-down (top being us, humans and our
 social contexts and such).


Look at "The Brain's Urge System" at ChangingMinds.org: 
http://changingminds.org/explanations/brain/urge_system.htm
Notice that the stimulus can be pure thought, meaning that a mental 
image of a goal-state can form the basis of urge-desire-action.


- Dimitry



Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-13 Thread Bryan Bishop
On Saturday 13 September 2008, Dimitry Volfson wrote:
 Look at "The Brain's Urge System" at ChangingMinds.org:
 http://changingminds.org/explanations/brain/urge_system.htm
 Notice that the stimulus can be pure thought, meaning that a mental
 image of a goal-state can form the basis of urge-desire-action.

No, that's the fictional version of the 'mind', nothing about the actual 
brain.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Non-evolutionary models of recursive self-improvement (was: Ability to improve one's own efficiency as a measure of intelligence)

2008-09-12 Thread Bryan Bishop
On Wednesday 10 September 2008, Matt Mahoney wrote:
 I have asked this list as well as the singularity and SL4 lists
 whether there are any non-evolutionary models (mathematical,
 software, physical, or biological) for recursive self improvement
 (RSI), i.e. where the parent and not the environment decides what the
 goal is and measures progress toward it. But as far as I know, the
 answer is no.

Have you considered resource-constraint situations where parents kill 
their young? The runt of the litter or, sometimes, others -- like when a 
lion takes over a pride. Mostly in the non-human, non-Chinese portions 
of the animal kingdom. (I refer to current events re: China's population 
constraints on female offspring, of course.)

Secondly, I'm still wondering about the representations of goals in the 
brain. So far, there has been no study showing the neurobiological 
basis of 'goal' in the human brain. As far as we know, it's folk 
psychology anyway, and it might not be 'true', since there's no hard 
physical evidence of the existence of goals. I'm talking about 
bottom-up existence, not top-down (top being us, humans and our 
social contexts and such). 

Does RSI have to be measured with respect to goals? Can you prove to me 
that there exists no non-goal-oriented improvement methodology? I'm 
keeping some possibilities open, as you can guess. I suspect that a 
non-goal-oriented improvement function could fit into your thoughts in 
the same way that you might hope the goal variation of RSI would. 

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Non-evolutionary models of recursive self-improvement (was: Ability to improve one's own efficiency as a measure of intelligence)

2008-09-12 Thread Matt Mahoney
--- On Fri, 9/12/08, Bryan Bishop [EMAIL PROTECTED] wrote:

 On Wednesday 10 September 2008, Matt Mahoney wrote:
  I have asked this list as well as the singularity and SL4 lists
  whether there are any non-evolutionary models (mathematical,
  software, physical, or biological) for recursive self improvement
  (RSI), i.e. where the parent and not the environment decides what the
  goal is and measures progress toward it. But as far as I know, the
  answer is no.

 Have you considered resource-constraint situations where parents kill
 their young? The runt of the litter or, sometimes, others -- like when
 a lion takes over a pride. Mostly in the non-human, non-Chinese
 portions of the animal kingdom. (I refer to current events re: China's
 population constraints on female offspring, of course.)

There are two problems with this approach. First, if your child is smarter than 
you, how would you know? Second, this approach favors parents who don't kill 
their children. How do you prevent this trait from evolving?

 Secondly, I'm still wondering about the representations of goals in the
 brain. So far, there has been no study showing the neurobiological
 basis of 'goal' in the human brain. As far as we know, it's folk
 psychology anyway, and it might not be 'true', since there's no hard
 physical evidence of the existence of goals. I'm talking about
 bottom-up existence, not top-down (top being us, humans and our
 social contexts and such).

You can define an algorithm as goal-oriented if it can be described as 
having a utility function U(x): X -> R (any input, real-valued output) 
and an iterative search over x in X such that U(x) increases over time.
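
As a sketch of that definition (the neighbor function and starting 
point are arbitrary placeholders):

import random

def hill_climb(U, x, neighbor, steps=1000):
    # an iterative search over x in X such that U(x) increases over time
    for _ in range(steps):
        candidate = neighbor(x)
        if U(candidate) > U(x):
            x = candidate
    return x

# maximize U(x) = -(x - 3)^2; the search converges near x = 3
best = hill_climb(lambda x: -(x - 3) ** 2, x=0.0,
                  neighbor=lambda x: x + random.uniform(-0.5, 0.5))
print(best)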

Whether a program has a goal depends on how you describe it. For 
example, linear regression has the goal of finding m and b such that the 
straight-line equation (y = mx + b) minimizes RMS error given a set of 
(x,y) points, but only if you solve it by iteratively adjusting m and b 
and evaluating the error, rather than using the conventional closed-form 
solution.
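
A sketch of both solutions (the learning rate and step count are 
arbitrary); only the iterative one counts as goal-oriented under the 
definition above, even though both return (nearly) the same m and b:

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # exactly y = 2x + 1
n = len(xs)

# goal-oriented: iteratively adjust m and b to reduce the squared error
m, b = 0.0, 0.0
for _ in range(5000):
    grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / n
    m -= 0.05 * grad_m
    b -= 0.05 * grad_b

# not goal-oriented: the conventional closed-form solution
mean_x, mean_y = sum(xs) / n, sum(ys) / n
m2 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
     sum((x - mean_x) ** 2 for x in xs)
b2 = mean_y - m2 * mean_x

print(m, b)    # ~2.0, ~1.0
print(m2, b2)  # 2.0, 1.0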

The human brain is most easily described as having a utility function 
given by Maslow's hierarchy of needs. Or you could describe it as a 
state table with 2^(10^15) inputs.
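
One way to cash out the Maslow description, as a sketch (the levels 
are Maslow's, the satisfaction scores are invented, and a 
lexicographic tuple stands in for a real-valued U): a deficit at a 
lower level dominates everything above it.

LEVELS = ["physiological", "safety", "belonging", "esteem",
          "self-actualization"]

def utility(state):
    # state maps each level to a satisfaction score in [0, 1];
    # a tuple compares lexicographically, lowest level first
    return tuple(state[level] for level in LEVELS)

hungry_artist = {"physiological": 0.2, "safety": 0.9, "belonging": 0.8,
                 "esteem": 0.7, "self-actualization": 0.9}
fed_clerk = {"physiological": 0.9, "safety": 0.9, "belonging": 0.5,
             "esteem": 0.4, "self-actualization": 0.2}
print(utility(fed_clerk) > utility(hungry_artist))  # True: hunger dominates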

 Does RSI have to be measured with respect to goals? Can you prove to me
 that there exists no non-goal-oriented improvement methodology?

No, it is a philosophical question. What do you mean by improvement?

-- Matt Mahoney, [EMAIL PROTECTED]



