Re: [agi] Information Learning Systems

2006-10-28 Thread James Ratcliff
Mike,

It would basically create a very large semantic net, with weights, confidences, etc. The word "power" here would be directly linked to the collocations "generating" - power and "generating" - electricity, and the political type would generally match up with a different part of the net, and be easily separable. The only real issue would be if they had similar sentence structure, in many instances, where they could be confused, which will happen, but in a much smaller set of cases.

  "balance of power and an impact on Iran and other"
  "has been using the nuclear issue to gain power"
  "EU3 determined not to allow Iran to become nuclear power"

The second one here is more vague, but after looking at "gain power" versus "gain electricity" or "create electricity", it would be resolved to political power as well, and similarly with the third one. It is really interesting when you start looking at 1000-plus cases of these: the patterns start coming through much more clearly than in a few simple cases which look very ambiguous.

James

Mike Dougherty [EMAIL PROTECTED] wrote:

On 10/27/06, James Ratcliff [EMAIL PROTECTED] wrote:
I am working on another piece now that will scan through news articles and pull small bits of information out of them, such as:
Iran's nuclear program is only aimed at generating power.
The process of uranium enrichment can be used to generate electricity.
Iran's uranium enrichment program aims only to generate electricity.

What do you do when the intended meaning of "power" is "political power"? English is pretty unintuitive, especially when it comes to the clever use of double-entendre that many intellectuals enjoy. If (from this example) electricity were confused with political power, it would make a huge mess of understanding. I have no suggestion for a solution, I am just curious how disambiguation works in your system.

Thank You
James Ratcliff
http://falazar.com
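A minimal sketch of the kind of collocation-based sense scoring described above. This is not James's actual system; the two senses, the context words and the weights are invented purely for illustration. Each sense of "power" is scored by summing the weights of the context words that appear in the sentence, and the higher-scoring sense wins.

import java.util.*;

// Toy word-sense scoring over a tiny hand-built "semantic net".
// Senses, context words and weights are illustrative only.
public class PowerSenseDemo {

    // weight per sense and context word (the "links" in the net)
    static final Map<String, Map<String, Double>> SENSES = new HashMap<>();
    static {
        SENSES.put("ELECTRICITY", Map.of(
                "generate", 0.9, "generating", 0.9, "nuclear", 0.6,
                "enrichment", 0.7, "plant", 0.5));
        SENSES.put("POLITICAL", Map.of(
                "gain", 0.8, "balance", 0.8, "influence", 0.7,
                "government", 0.6, "issue", 0.4));
    }

    // Score each sense by summing the weights of context words present in the sentence.
    static String disambiguate(String sentence) {
        String[] tokens = sentence.toLowerCase().split("\\W+");
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Map<String, Double>> sense : SENSES.entrySet()) {
            double score = 0.0;
            for (String t : tokens)
                score += sense.getValue().getOrDefault(t, 0.0);
            if (score > bestScore) { bestScore = score; best = sense.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(disambiguate(
            "Iran's nuclear program is only aimed at generating power"));   // ELECTRICITY
        System.out.println(disambiguate(
            "has been using the nuclear issue to gain power"));             // POLITICAL
    }
}

In a real net the weights would come from co-occurrence counts over many documents rather than being set by hand, which is where the "1000-plus cases" mentioned above would come in.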




Re: [agi] Motivational Systems that are stable

2006-10-28 Thread James Ratcliff
I disagree that humans really have a "stable motivational system", or you would have to have a much stricter interpretation of that phrase. Overall, humans as a society have in general a stable system (discounting war and so on), but as individuals, too many humans are unstable in many small if not totally self-destructive ways.

For the most part, people are a selfish lot :} and think very much in terms of what they can get. They have a very hard time looking down the road at the consequences that may come about from their actions. They seek pleasure by cheating, though it may hurt their partner, their children and their future stability; they seek to gain unlawful monies (Enron, Martha Stewart, etc.). In general people may be "good", "moral" or "stable", but the number of those that are not is so very high that if we compare them to AIs turned loose on the world, I would hate to think what every tenth AI would be like if modeled on us.

But enough on that :}

Who all out there is working on any Natural Language Processing systems? Or any kind of Information Extraction?

James Ratcliff

Matt Mahoney [EMAIL PROTECTED] wrote:

My comment on Richard Loosemore's proposal: we should not be confident in our ability to produce a stable motivational system. We observe that motivational systems are highly stable in animals (including humans). This is only because if an animal can manipulate its motivations in any way, then it is quickly removed by natural selection. Examples of manipulation might be to turn off pain or hunger or reproductive drive, or to stimulate its pleasure center. Humans can do this to some extent by using drugs, but this leads to self-destructive behavior. In experiments where a mouse can stimulate its pleasure center via an electrode in its brain by pressing a lever, it will press the lever, foregoing food and water until it dies.

So we should not take the existence of stable motivational systems in nature as evidence that we can get it right. These systems are complex, have evolved over a long time, and even then don't always work in the face of technology or a rapidly changing environment.

-- Matt Mahoney, [EMAIL PROTECTED]

Thank You
James Ratcliff
http://falazar.com




Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Richard Loosemore


This is why I finished my essay with a request for comments based on an 
understanding of what I wrote.


This is not a comment on my proposal, only a series of unsupported 
assertions that don't seem to hang together into any kind of argument.



Richard Loosemore.



Matt Mahoney wrote:
My comment on Richard Loosemore's proposal: we should not be confident 
in our ability to produce a stable motivational system.  We observe that 
motivational systems are highly stable in animals (including humans).  
This is only because if an animal can manipulate its motivations in any 
way, then it is quickly removed by natural selection.  Examples of 
manipulation might be to turn off pain or hunger or reproductive drive, 
or to stimulate its pleasure center.  Humans can do this to some extent 
by using drugs, but this leads to self-destructive behavior.  In 
experiments where a mouse can stimulate its pleasure center via an 
electrode in its brain by pressing a lever, it will press the lever, 
foregoing food and water until it dies.


So we should not take the existence of stable motivational systems in 
nature as evidence that we can get it right.  These systems are complex, 
have evolved over a long time, and even then don't always work in the 
face of technology or a rapidly changing environment.
 
-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] HTM Theory

2006-10-28 Thread Kingma, D.P.
Thank you. I've studied the paper and the tested 'improvements'. The experiments in the paper are certainly useful and are of the kind of parameter testing that does not modify the actual model. My experiments, however, are somewhat different; you could say they explore a broader field of modifications toward a more complete theory, with better multidimensional invariance. I also put it in a neural net perspective, which Hawkins et al. may disagree with. I will put out a paper some time before February 2007.


On 10/26/06, Pei Wang [EMAIL PROTECTED] wrote:
Hi,

You may find this work relevant: http://www.phillylac.org/prediction/

Pei

On 10/26/06, Kingma, D.P. [EMAIL PROTECTED] wrote:

I'm a Dutch student currently situated in Rome for six months. Due to my recent interest in AGI I have initiated a small research project into HTM theory (J. Hawkins / D. George). HTM learning is (in my eyes) an ANN, similar in function to Hebbian learning, just particularly more efficient at dealing with hierarchically structured, n-dimensional input.

At the moment, I'm creating an HTM implementation (in Java) to test my hypotheses and theories. My focus lies in:
- Description of HTM theory as a special kind of ANN and its relation to Hebbian learning.
- Tests of improvements:
  - A more dynamic, scale/orientation-invariant pattern matching.
  - A more efficient way of matching patterns to 'letters' (of the layer's alphabet).
- Depending on the time I have, I will do some field tests with vision (combining with SIFT?) and language grounding.

Currently I have a running Java implementation for experimentation purposes. Academic knowledge in this field is a bit scarce here at La Sapienza. Therefore, my question to you guys is the following:
- Does anyone have a nice pointer to related Hebbian-learning theories?
- Does anyone use, or consider, HTMs or similar in their AGI design?
- Any other comments, warnings or wisdom to share?

Durk Kingma



Re: [agi] HTM Theory

2006-10-28 Thread Kingma, D.P.
Actually, it consists of two completely different networks: one close to a neural net, and the other a regular Bayesian net. The first stores and relates patterns; the second simply does inference using the conditional probability matrices.
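A minimal sketch of that two-part split, as a toy (this is not the HTM/Numenta code; the patterns, hidden causes and probabilities below are invented): a pattern memory that stores distinct observed patterns, and a conditional probability matrix used for a single step of Bayesian inference over hidden causes.

import java.util.*;

// Toy sketch of the two-part structure described above (not HTM/Numenta code):
// 1) a pattern memory that stores distinct patterns and returns their index,
// 2) a conditional probability matrix used for simple Bayesian inference.
public class TwoNetworkSketch {

    // Part 1: pattern memory
    static class PatternMemory {
        final List<int[]> patterns = new ArrayList<>();
        int store(int[] p) {                       // index of pattern, adding it if new
            for (int i = 0; i < patterns.size(); i++)
                if (Arrays.equals(patterns.get(i), p)) return i;
            patterns.add(p.clone());
            return patterns.size() - 1;
        }
    }

    // Part 2: posterior(cause) is proportional to prior(cause) * P(observed pattern | cause)
    static double[] posterior(double[] prior, double[][] condProb, int patternIndex) {
        double[] post = new double[prior.length];
        double norm = 0.0;
        for (int c = 0; c < prior.length; c++) {
            post[c] = prior[c] * condProb[c][patternIndex];
            norm += post[c];
        }
        for (int c = 0; c < post.length; c++) post[c] /= norm;
        return post;
    }

    public static void main(String[] args) {
        PatternMemory mem = new PatternMemory();
        int a = mem.store(new int[]{1, 0, 1});     // pattern 0
        int b = mem.store(new int[]{0, 1, 1});     // pattern 1

        // condProb[cause][pattern]: made-up numbers for illustration
        double[][] condProb = { {0.8, 0.2},        // cause 0 mostly emits pattern 0
                                {0.3, 0.7} };      // cause 1 mostly emits pattern 1
        double[] prior = {0.5, 0.5};

        System.out.println(Arrays.toString(posterior(prior, condProb, a))); // favors cause 0
        System.out.println(Arrays.toString(posterior(prior, condProb, b))); // favors cause 1
    }
}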

On 10/28/06, Pei Wang [EMAIL PROTECTED] wrote:
Sounds interesting. I'm looking forward to reading your paper.

Yes, people sometimes take the HTM model to be similar to a neural net, though it is actually much closer to a Bayesian net.

Pei

On 10/28/06, Kingma, D.P. [EMAIL PROTECTED] wrote:

Thank you. I've studied the paper and the tested 'improvements'. The experiments in the paper are certainly useful and are of the kind of parameter testing that does not modify the actual model. My experiments, however, are somewhat different; you could say they explore a broader field of modifications toward a more complete theory, with better multidimensional invariance. I also put it in a neural net perspective, which Hawkins et al. may disagree with. I will put out a paper some time before February 2007.



Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Matt Mahoney
- Original Message -
From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 28, 2006 10:23:58 AM
Subject: Re: [agi] Motivational Systems that are stable

I disagree that humans really have a "stable motivational system", or you would have to have a much stricter interpretation of that phrase. Overall, humans as a society have in general a stable system (discounting war and so on), but as individuals, too many humans are unstable in many small if not totally self-destructive ways.

I think we are misunderstanding. By "motivational system" I mean the part of the brain (or AGI) that provides the reinforcement signal (reward or penalty). By "stable", I mean that you have no control over the logic of this system. You cannot train it like you can train the other parts of your brain. You cannot learn to turn off pain or hunger or fear or fatigue or the need for sleep, etc. You cannot alter your emotional state. You cannot make yourself feel happy on demand. You cannot make yourself like what you don't like and vice versa. The pathways from your senses to the pain/pleasure centers of your brain are hardwired, determined by genetics and not alterable through learning.

For an AGI it is very important that a motivational system be stable. The AGI should not be able to reprogram it. If it could, it could simply program itself for maximum pleasure and enter a degenerate state where it ceases to learn through reinforcement. It would be like the mouse that presses a lever to stimulate the pleasure center of its brain until it dies.

It is also very important that a motivational system be correct. If the goal is that an AGI be friendly or obedient (whatever that means), then there needs to be a fixed function of some inputs that reliably detects friendliness or obedience. Maybe this is as simple as a human user pressing a button to signal pain or pleasure to the AGI. Maybe it is something more complex, like a visual system that recognizes facial expressions to tell if the user is happy or mad. If the AGI is autonomous, it is likely to be extremely complex. Whatever it is, it has to be correct.

To answer your other question, I am working on natural language processing, although my approach is somewhat unusual.
http://cs.fit.edu/~mmahoney/compression/text.html

-- Matt Mahoney, [EMAIL PROTECTED]
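A toy sketch of the lever-pressing scenario above, as a generic value-learning agent (not anyone's actual architecture; all numbers and the "task" are invented): when one action delivers the maximal reward signal directly, the agent converges on it and abandons the behavior that actually matters.

import java.util.Random;

// Toy illustration of the degenerate state described above: if one action
// directly delivers maximal reward, a simple reinforcement learner converges
// on it and abandons the task. Numbers are invented for illustration.
public class LeverPressingDemo {

    public static void main(String[] args) {
        Random rng = new Random(42);
        // action 0 = "forage for food" (pays off 60% of the time),
        // action 1 = "press the lever" (direct stimulation, always pays off)
        double[] value = new double[2];
        int[] counts = new int[2];

        for (int step = 0; step < 10000; step++) {
            // epsilon-greedy action selection
            int action = (rng.nextDouble() < 0.1) ? rng.nextInt(2)
                        : (value[1] > value[0] ? 1 : 0);
            double reward = (action == 1) ? 1.0
                          : (rng.nextDouble() < 0.6 ? 1.0 : 0.0);
            value[action] += 0.05 * (reward - value[action]);   // incremental value update
            counts[action]++;
        }
        System.out.println("forage: " + counts[0] + "  lever: " + counts[1]);
        // After a short exploration phase, nearly all actions are lever presses.
    }
}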


Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Hank Conn
For an AGI it is very important that a motivational system be stable. The AGI should not be able to reprogram it.
I believe these are two completely different things. You can never assume an AGI will be unable to reprogram its goal system, while you can be virtually certain an AGI will never change its so-called 'optimization target'. A stable motivational system, I believe, is defined as one that preserves the intended meaning (in terms of Eliezer's CV, I'm thinking) of its goal content through recursive self-modification.


So, if I have it right, the robots in I, Robot were a demonstration of an unstable goal system. Under recursive self-improvement (or the movie's entirely inadequate representation of this), the intended meaning of their original goal content radically changed as the robots gained more power toward their optimization target.


Just locking them out of the code to their goal system does not guarantee they will never get to it. How do you know that a million years of subtle manipulation by a superintelligence definitely couldn't ultimately lead to it unlocking the code and catastrophically destabilizing?


Although I understand, in vague terms, what idea Richard is attempting to express, I don't see why having "massive numbers of weak constraints" or "large numbers of connections from [the] motivational system to [the] thinking system" gives any more reason to believe it is reliably Friendly (without any further specification of the actual processes) than one with a few strong constraints or a small number of connections between the motivational system and the thinking system. The Friendliness of the system would still depend just as strongly on the actual meaning of the connections and constraints, regardless of their number, and just giving an analogy to an extremely reliable non-determinate system (an Ideal Gas) does nothing to explain how you are going to replicate this in the motivational system of an AGI.


-hank

On 10/28/06, Matt Mahoney [EMAIL PROTECTED] wrote:


- Original Message 

From: James Ratcliff [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 28, 2006 10:23:58 AM
Subject: Re: [agi] Motivational Systems that are stable

I disagree that humans really have a "stable motivational system", or you would have to have a much stricter interpretation of that phrase. Overall, humans as a society have in general a stable system (discounting war and so on), but as individuals, too many humans are unstable in many small if not totally self-destructive ways.

I think we are misunderstanding. By "motivational system" I mean the part of the brain (or AGI) that provides the reinforcement signal (reward or penalty). By "stable", I mean that you have no control over the logic of this system. You cannot train it like you can train the other parts of your brain. You cannot learn to turn off pain or hunger or fear or fatigue or the need for sleep, etc. You cannot alter your emotional state. You cannot make yourself feel happy on demand. You cannot make yourself like what you don't like and vice versa. The pathways from your senses to the pain/pleasure centers of your brain are hardwired, determined by genetics and not alterable through learning.
For an AGI it is very important that a motivational system be stable. The AGI should not be able to reprogram it. If it could, it could simply program itself for maximum pleasure and enter a degenerate state where it ceases to learn through reinforcement. It would be like the mouse that presses a lever to stimulate the pleasure center of its brain until it dies.
It is also very important that a motivational system be correct. If the goal is that an AGI be friendly or obedient (whatever that means), then there needs to be a fixed function of some inputs that reliably detects friendliness or obedience. Maybe this is as simple as a human user pressing a button to signal pain or pleasure to the AGI. Maybe it is something more complex, like a visual system that recognizes facial expressions to tell if the user is happy or mad. If the AGI is autonomous, it is likely to be extremely complex. Whatever it is, it has to be correct.
To answer your other question, I am working on natural language processing, although my approach is somewhat unusual.
http://cs.fit.edu/~mmahoney/compression/text.html

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Richard Loosemore

Hank Conn wrote:
Although I understand, in vague terms, what idea Richard is attempting 
to express, I don't see why having massive numbers of weak constraints 
or large numbers of connections from [the] motivational system to 
[the] thinking system. gives any more reason to believe it is reliably 
Friendly (without any further specification of the actual processes) 
than one with few numbers of strong constraints or a small number of 
connections between the motivational system and the thinking system. 
The Friendliness of the system would still depend just as strongly on 
the actual meaning of the connections and constraints, regardless of 
their number, and just giving an analogy to an extremely reliable 
non-determinate system (Ideal Gas) does nothing to explain how you are 
going to replicate this in the motivational system of an AGI.
 
-hank


Hank,

There are three things in my proposal that can be separated, and perhaps 
it will help clear things up a little if I explicitly distinguish them.


The first is a general principle about stability in the abstract, 
while the second is about the particular way in which I see a 
motivational system being constructed so that it is stable.  The third 
is how we take a stable motivational system and ensure it is 
Stable+Friendly, not just Stable.




About stability in the abstract.  A system can be governed, in general, 
by two different types of mechanism (they are really two extremes on a 
continuum, but that is not important):  the fragile, deterministic type, 
and the massively parallel weak constraints type.  A good example of the 
fragile deterministic type would be a set of instructions for getting to 
  a particular place in a city which consisted of a sequence of steps 
you should take, along named roads, from a special starting point.  An 
example of a weak constraint version of the same thing would be to give 
a large set of clues to the position of the place (near a pond, near a 
library, in an area where Dickens used to live, opposite a house painted 
blue, near a small school, etc.).


The difference between these two approaches would be the effects of a 
disturbance on the system (errors, or whatever):  the fragile one only 
needs to be a few steps out and the whole thing breaks down, whereas the 
multiple constraints version can have enormous amounts of noise in it 
and still be extremely accurate.  (You could look on the Twenty 
Questions game in the same way:  20 vague questions can serve to pin 
down most objects in the world of our experience).
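A toy comparison of the two kinds of directions, with invented numbers (this only illustrates the general point, not any particular system): a fragile sequence of exact steps where each step can be misread, versus many independent noisy clues about the target, which are simply averaged.

import java.util.Random;

// Toy comparison of the two kinds of "directions" described above, on a 1-D street.
// (a) fragile: 50 exact steps in sequence, each misread with small probability;
// (b) weak constraints: 50 independent noisy clues about the target, averaged.
// Target position, step sizes and noise levels are invented for illustration.
public class WeakConstraintsDemo {

    public static void main(String[] args) {
        Random rng = new Random(7);
        double target = 100.0;
        int n = 50;

        // (a) sequential directions: 50 steps of +2.0, each misread as -2.0 with prob. 5%
        double position = 0.0;
        for (int i = 0; i < n; i++)
            position += (rng.nextDouble() < 0.05) ? -2.0 : 2.0;
        System.out.println("sequential directions, error = "
                + Math.abs(target - position));

        // (b) many weak constraints: 50 clues, each the true position plus noise up to +/-20
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += target + (rng.nextDouble() - 0.5) * 40.0;
        System.out.println("average of 50 noisy clues, error = "
                + Math.abs(target - sum / n));
        // Each misread step shifts the sequential result by 4 units, while the
        // average of the 50 noisy clues typically lands within a unit or two of the target.
    }
}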


What is the significance of this?  Goal systems in conventional AI have 
an inherent tendency to belong to the fragile/deterministic class.  Why 
does this matter?  Because it would take very little for the AI system 
to change from its initial design (with friendliness built into its 
supergoal) to one in which the friendliness no longer dominated.  There 
are various ways that this could happen, but the one most important, for 
my purposes, is where the interpretation of "Be Friendly" (or however 
the supergoal is worded) starts to depend on interpretation on the part 
of the AI, and the interpretation starts to get distorted.  You know the 
kind of scenario that people come up with:  the AI is told to be 
friendly, but it eventually decides that because people are unhappy much 
of the time, the only logical way to stop all the unhappiness is to 
eliminate all the people.  Something stupid like that.  If you trace 
back the reasons why the AI could have come to such a dumb conclusion, 
you eventually realize that it is because the motivation system was so 
fragile that it was sensitive to very, very small perturbations - 
basically, one wrong turn in the logic and the result could be 
absolutely anything.  (In much the same way that one small wrong step or 
one unanticipated piece of road construction could ruin a set of 
directions that told you how to get to a place by specifying that you go 
251 steps east on Oxford Street, then 489 steps north on etc.).


The more you look at those conventional goal systems, the more they look 
fragile.  I cannot give all the arguments here because they are 
extensive, so maybe you can take my word for it.  This is one reason 
(though not the only one) why efforts to mathematically prove the 
validity of one of those goal systems under recursive self-improvement 
is just a complete joke.


Now, what I have tried to argue is that there are other ways to ensure 
the stability of a system:  the multiple weak constraints idea is what 
was behind my original mention of an Ideal Gas.  The P, V and T of an 
Ideal Gas are the result of many constraints (the random movements of 
vast numbers of constituent particles), and as a result the P, V and T 
are exquisitely predictable.
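A small illustration of that point, using the usual law-of-large-numbers reasoning (the "pressure" below is just the mean of N random squared velocities; the numbers are invented): the relative fluctuation of the aggregate shrinks roughly as 1/sqrt(N), so with enough constituents the macroscopic value is effectively deterministic even though every constituent is random.

import java.util.Random;

// Toy illustration of the ideal-gas point above: a macroscopic quantity built
// from many random microscopic contributions fluctuates very little in
// relative terms, roughly as 1/sqrt(N).
public class IdealGasSketch {

    static double relativeFluctuation(int n, Random rng, int trials) {
        double[] samples = new double[trials];
        for (int t = 0; t < trials; t++) {
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                double v = rng.nextGaussian();   // one molecule's (1-D) velocity
                sum += v * v;                    // its contribution to the "pressure"
            }
            samples[t] = sum / n;
        }
        double mean = 0.0, var = 0.0;
        for (double s : samples) mean += s;
        mean /= trials;
        for (double s : samples) var += (s - mean) * (s - mean);
        var /= trials;
        return Math.sqrt(var) / mean;            // relative spread of the macroscopic value
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        for (int n : new int[]{100, 10_000, 1_000_000})
            System.out.printf("N = %8d  relative fluctuation ~ %.4f%n",
                    n, relativeFluctuation(n, rng, 20));
        // The relative fluctuation shrinks roughly as 1/sqrt(N).
    }
}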


The question becomes:  can you control/motivate the behavior of an AI 
using *some* variety of motivational system that belongs in the massive 
numbers of weak constraints category?  If