[agi] cheap 8-core cluster

2006-06-09 Thread Eugen Leitl

http://www.tyan.com/PRODUCTS/html/typhoon_b2881.html

Notice the Direct Connect Architecture part. Online
pricing looks very reasonable.

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





Re: [agi] information in the brain?

2006-06-09 Thread Eugen Leitl
On Fri, Jun 09, 2006 at 01:12:49AM -0400, Philip Goetz wrote:

 Does anyone know how to compute how much information, in bits, arrives
 at the frontal lobes from the environment per second in a human?

Most of the information is visual, and the retina purportedly compresses
it by about 1:126 (obviously, some of it lossy).
http://www.4colorvision.com/dynamics/mechanism.htm
claims 23000 receptor cells on the foveola, so I would just
do a rough calculation of some 50 fps (you don't see this, but
the cells do), and 16 bit/cell (it's probably 12 bit, but it's
a rough estimate, anyway). The estimate gives some 20 MBit/s, which
I think is way too low. 

It's also kinda artificial to start counting at the optic nerve,
since the retina is technically a part of the brain. So I would
just use the total photoreceptor count instead of just the 23 k cells
of the fovea.
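
For readers who want to check the arithmetic, here is a minimal Python sketch of the same back-of-envelope estimate. It only restates figures already given in this thread (23,000 foveolar receptors, ~50 fps, 16 bit/cell, and the per-eye rod/cone counts Philip quotes later); none of it is a measured result.

  # Back-of-envelope retinal bandwidth, following the figures in the post above.
  # Assumptions (from the thread): ~23,000 receptor cells on the foveola,
  # ~50 frames/s effective temporal resolution, ~16 bits per cell per frame.

  def raw_rate_bits_per_s(cells, fps=50, bits_per_cell=16):
      """Raw (uncompressed) data rate for a patch of photoreceptors."""
      return cells * fps * bits_per_cell

  foveola = raw_rate_bits_per_s(23_000)
  print(f"foveola only: {foveola / 1e6:.1f} Mbit/s")        # ~18.4 Mbit/s ("some 20 MBit/s")

  # Using the total photoreceptor count instead (~125M rods + 6M cones per eye,
  # figures quoted later in this thread), the same crude arithmetic gives:
  whole_retina = raw_rate_bits_per_s(125_000_000 + 6_000_000)
  print(f"whole retina: {whole_retina / 1e9:.1f} Gbit/s")   # ~105 Gbit/s raw, before compression

  # The claimed ~1:126 retinal compression would bring that down to roughly:
  print(f"after 1:126 compression: {whole_retina / 126 / 1e9:.2f} Gbit/s")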
 
 For a specific  brain region, you can compute its channel capacity if
 you know the number of neurons, and the refractory period of the
 neurons in that region, since you can compute approximate bandwidth
 per neuron as the max firing frequency.  However, that doesn't tell
 you how much information from its inputs is actually coming through
 that channel.  The channel capacity is sometimes much smaller than the
 input bandwidth, but that doesn't mean the channel is fully utilized.
 If the channel capacity going out of a region is larger than the
 channel capacity coming in, it is especially important to have some
 equation that accounts for the mutual information between inputs to
 different neurons in that area.

I'm not sure you can separate the processing so cleanly into modules,
connected by interfaces. What's wrong with looking at metabolic rate,
and at how much spiking flows across an arbitrary boundary?

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





Re: [agi] AGI bottlenecks

2006-06-09 Thread James Ratcliff
 I had similar feelings about William Pearson's recent message about
 systems that use reinforcement learning:

   A reinforcement scenario, from wikipedia, is defined as:

   "Formally, the basic reinforcement learning model consists of:
    1. a set of environment states S;
    2. a set of actions A; and
    3. a set of scalar "rewards" in the Reals."

 Here is my standard response to Behaviorism (which is what the above
 reinforcement learning model actually is):  Who decides when the rewards
 should come, and who chooses what are the relevant "states" and "actions"?
 If you find out what is doing *that* work, you have found your intelligent
 system.  And it will probably turn out to be so enormously complex,
 relative to the reinforcement learning part shown above, that the above
 formalism (assuming it has not been discarded by then) will be almost
 irrelevant.

 Just my deux centimes' worth.

I've been looking at something like this, but on a very generalized scale.

Part 1 is an infinite amount of states, incalculable.
Part 2 is a huge amount, but one that can be broken into parts and (must be) dealt with.
Part 3 is what I am looking at currently.

Is it possible for us to generate a model calculation of Worth, or Happiness
Value, or GoodState value?  Just playing around with some ideas, I have come
up with:

GoodValue = alive * (a*health + b*wealth + c*enjoyment + d*learning + e*friends) + pastplans - time

This covers many simple motivations of humans currently, with a couple of
these vague ideas that would need to be fleshed out.  There is also a
pastplans term, which is a general preference for doing some things as
patterns of past actions.

This is general right now, but would it be possible to flesh out a complex,
changeable, but still sizeably manageable equation here that could be a
controlling factor on AI motivations?

James Ratcliff
http://FallsTown.com - Local Wichita Falls Community Website
http://Falazar.com - Personal Website
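
Purely to make the shape of the proposal concrete, here is a literal Python transcription of the GoodValue formula above. The weights a..e and the example component scores are placeholders invented for illustration, not part of James's proposal.

  # A literal transcription of the GoodValue formula proposed above, just to make
  # the shape of the idea concrete.  The weights a..e and the component scores are
  # placeholders invented for illustration -- nothing here is a worked-out model.

  def good_value(alive, health, wealth, enjoyment, learning, friends,
                 pastplans, time,
                 a=1.0, b=0.5, c=1.0, d=0.8, e=1.2):
      """GoodValue = alive*(a*health + b*wealth + c*enjoyment + d*learning + e*friends)
                     + pastplans - time"""
      return alive * (a * health + b * wealth + c * enjoyment + d * learning + e * friends) \
             + pastplans - time

  # Example: comparing two candidate actions by their predicted resulting state.
  stay_home = good_value(alive=1, health=0.9, wealth=0.5, enjoyment=0.3,
                         learning=0.2, friends=0.4, pastplans=0.1, time=0.2)
  see_friends = good_value(alive=1, health=0.8, wealth=0.4, enjoyment=0.7,
                           learning=0.3, friends=0.9, pastplans=0.0, time=0.3)
  print(stay_home, see_friends)  # the agent would prefer the higher-valued outcome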


Re: [agi] AGI bottlenecks

2006-06-09 Thread James Ratcliff
Richard,
  Can you explain differently, in other words, the second part of this post?
I am very interested in this as a large part of an AI system.
  I believe in some fashion there needs to be a controlling algorithm that
tells the AI that it is doing "Right", be it either an internal or external
human reward.  We receive these rewards in our daily life, in our jobs,
relationships and such; whether we actually learn from these is to be
debated though.

James Ratcliff

Richard Loosemore [EMAIL PROTECTED] wrote:

 Will,

 Comments taken, but the direction of my critique may have gotten lost in
 the details:

 Suppose I proposed a solution to the problem of unifying quantum
 mechanics and gravity, and suppose I came out with a solution that said
 that the unified theory involved (a) a specific interface to quantum
 theory, which I spell out in great detail, and (b) ditto for an interface
 with geometrodynamics, and (c) a linkage component, to be specified.

 Physicists would laugh at this.  What linkage component?! they would
 say.  And what makes you *believe* that once you sorted out the linkage
 component, the two interfaces you just specified would play any role
 whatsoever in that linkage component?  They would point out that my
 "linkage component" was the meat of the theory, and yet I had referred to
 it in such a way that it seemed as though it was just an extra, to be
 sorted out later.

 This is exactly what happened to Behaviorism, and the idea of
 Reinforcement Learning.  The one difference was that they did not
 explicitly specify an equivalent of my (c) item above:  it was for the
 cognitive psychologists to come along later and point out that
 Reinforcement Learning implicitly assumed that something in the brain
 would do the job of deciding when to give rewards, and the job of
 deciding what the patterns actually were -- and that that something was
 the part doing all the real work.  In the case of all the experiments in
 the behaviorist literature, the experimenter substituted for those
 components, making them less than obvious.

 Exactly the same critique bears on anyone who suggests that Reinforcement
 Learning could be the basis for an AGI.  I do not believe there is still
 any reply to that critique.

 Richard Loosemore


 William Pearson wrote:
  On 01/06/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
   I had similar feelings about William Pearson's recent message about
   systems that use reinforcement learning:

    A reinforcement scenario, from wikipedia, is defined as:

    "Formally, the basic reinforcement learning model consists of:
     1. a set of environment states S;
     2. a set of actions A; and
     3. a set of scalar "rewards" in the Reals."

    Here is my standard response to Behaviorism (which is what the above
    reinforcement learning model actually is):  Who decides when the
    rewards should come, and who chooses what are the relevant "states"
    and "actions"?

  The rewards I don't deal with; I am interested in external brain
  add-ons rather than autonomous systems, so the reward system will be
  closely coupled to a human in some fashion.  In the rest of the post I
  was trying to outline a system that could alter what it considered
  actions and states (and bias, learning algorithms, etc.).  The RL
  definition was just there as an example to work against.

   If you find out what is doing *that* work, you have found your
   intelligent system.  And it will probably turn out to be so enormously
   complex, relative to the reinforcement learning part shown above, that
   the above formalism (assuming it has not been discarded by then) will
   be almost irrelevant.

  The internals of the system will be enormously more complex compared to
  the reinforcement part I described.  But it won't make that irrelevant.
  What goes on inside a PC is vastly more complex than the system that
  governs the permissions of what each *nix program can do.  This doesn't
  mean the permission-governing system is irrelevant.

  Like the permissions system in *nix, the reinforcement system is only
  supposed to govern who is allowed to do what, not what actually
  happens.  Unlike the permission system, it is supposed to get that from
  the effect of the programs on the environment.  Without it, both sorts
  of systems would be highly unstable.  I see it as a necessity for
  complete modular flexibility.  If you get one of the bits that does the
  work wrong, or wrong for the current environment, how do you allow it
  to change?

   Just my deux centimes' worth.

  Appreciated.

   On a more positive note, I do think it is possible for AGI researchers
   to work together within a common formalism.  My presentation at the
   AGIRI workshop was about that, and when I get the paper version of the
   talk finalized I will post it somewhere.

  I'll be interested, but sceptical.

  Will

Re: [agi] information in the brain?

2006-06-09 Thread Philip Goetz

On 6/9/06, Eugen Leitl [EMAIL PROTECTED] wrote:


Most of information is visual, and retina purportedly compresses 1:126
(obviously, some of it lossy). 
http://www.4colorvision.com/dynamics/mechanism.htm
claims 23000 receptor cells on the foveola, so I would just
do a rough calculation of some 50 fps (you don't see this, but
the cells do), and 16 bit/cell (it's probably 12 bit, but it's
a rough estimate, anyway). Estimate gives some 20 MBit/s, which
I think is way too low.


That is the kind of info that I'm looking for, but I already have it
for the retina.

Each human eye has about 6 million cones and 125 million rods
(Wikipedia).  A cone has a max frequency of perhaps 60Hz, since movies
are 30 frames/sec, while a rod has a refractory period of about 50 ms
(Varsányi et al. 2005).  Assuming a signal/noise ratio of 3 (typical
of some other neurons), you can calculate that the information
provided by human photoreceptors is about 2.7 billion bits per second:

Photoreceptors:  125 million rods with frequencies up to 20Hz:
125Mx20log10(3?)=1.2Gbps
6 million cones with freqs up to 60?Hz: 6Mx60log10(3?)=172Mbps
Total for 2 eyes = 343.57Mbps + 2385Mbps = 2729Mbps

Much of this, however, is highly correlated, because the info coming
to the left eye is so strongly correlated with that in the right eye,
so total info is probably around 1.5Gbit/sec.

This info goes from the photoreceptors, to bipolar cells, to ganglion
cells, all within the retina.  The information coming from the retinal
ganglion cells to the lateral geniculate nucleus is a lot less.  A
ganglion cell has a refractory period, on average, of 3.75
milliseconds (Berry & Marker 1998), which means it can fire at rates
up to 267Hz, while its lowest firing rate is about 10Hz (Berry &
Marker 1998).  This gives it a bandwidth of 257Hz.  A retinal ganglion
cell has a signal-to-noise ratio of about 3 at most contrast levels
(Dhingra et al. 2005).  Hence, we can calculate the channel capacity
of a retinal ganglion cell using the Shannon-Hartley channel-capacity
formula as 257log10(3) = 123 bits per second.  There are about 1.35
million ganglion cells in a human retina (Wikipedia, Ganglion cell).
(There are 5 different types of ganglion cell, but I don't have data
on all the different types.)  With two eyes, this gives us 331 million
bits per second coming into the human visual system; again, with
redundancy between the two eyes, the total info is probably less than
200Mbits/sec.
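
A short Python sketch of the photoreceptor and ganglion-cell arithmetic above, using the same rough B*log10(S/N) rule the post uses (the textbook Shannon-Hartley capacity is B*log2(1+S/N), which would give larger numbers); every figure in it is one quoted in the post, nothing is new data.

  import math

  def capacity_bps(bandwidth_hz, snr):
      """Per-cell capacity using the same rough B*log10(S/N) rule the post uses
      (the textbook Shannon-Hartley form would be B*log2(1 + S/N))."""
      return bandwidth_hz * math.log10(snr)

  # Photoreceptors (per the figures above), both eyes:
  rods  = 2 * 125e6 * capacity_bps(20, 3)    # ~2.39 Gbit/s
  cones = 2 * 6e6  * capacity_bps(60, 3)     # ~0.34 Gbit/s
  print(f"photoreceptors: {(rods + cones) / 1e9:.2f} Gbit/s")   # ~2.7 Gbit/s

  # Retinal ganglion cells: 267 Hz max - 10 Hz min = 257 Hz bandwidth, S/N ~ 3
  ganglion_per_cell = capacity_bps(257, 3)                      # ~123 bit/s
  ganglion_total = 2 * 1.35e6 * ganglion_per_cell
  print(f"ganglion cells: {ganglion_total / 1e6:.0f} Mbit/s")   # ~331 Mbit/s

  # Ratio of what leaves the retina to what the photoreceptors provide:
  print(f"preservation factor: {ganglion_total / (rods + cones):.2f}")  # ~0.12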

For the human auditory modality, we know the frequency sensitivity of
the entire auditory system combined, and hence need not deal with
individual neurons.  If we believe that humans can hear sounds from
30Hz to 20,000Hz, and that the background-noise level of the human ear
is 0dB, while a typical environment might have 40dB of sound, we then
have that the auditory bandwidth is 19,070Hz, with a signal-to-noise
ratio of 4000.  The channel capacity of each human ear is then
19070log_10(4000) = 69,000 bits/second.

Note a possibly general rule:  Primary sensors, like photoreceptors,
have long refractory periods, meaning few bits per second, but there
are a lot of them.  Summarizing neurons, like ganglion cells, have much
higher maximum frequencies, so they transmit more bits/sec, but there are
fewer of them.

If the information preservation factor of .1 found in the
photoreceptor-ganglion passage is typical, one could calculate the
number of steps between retina and frontal lobe (along various paths)
and estimate info left after all those steps.  Probably that
assumption would be unjustified.



Re: [agi] AGI bottlenecks

2006-06-09 Thread Richard Loosemore


James,

It is a little hard to know where to start, to be honest.  Do you have a 
background in any particular area already, or are you pre-college?  If 
the latter, and if you are interested in the field in a serious way, I 
would recommend that you hunt down a good programme in cognitive science 
(and if possible do software engineering as a minor).  After about three 
or four years of that, you'll have a better idea of where the below 
argument was coming from.  Even then, expect to have to argue the heck 
out of your professors, only believe one tenth of everything they say, 
and discover your own science as you go along, rather than be told what 
the answers are.  A lot of the questions do not have answers yet.


All thinking systems do have a motivation system of some sort (what you 
were talking about below as "rewards"), but people's ideas about the 
design of that motivational system vary widely from the implicit and 
confused to the detailed and convoluted (but not necessarily less 
confused).  The existence of a motivational system was not the issue in 
my post:  the issue was exactly *how* you design that motivation system.


Behaviorism (and reinforcement learning) was a suggestion that took a 
diabolically simplistic view of how that motivation system is supposed 
to work -- so simplistic that, in fact, it swept under the carpet all 
the real issues.  What I was complaining of was a recent revival in 
interest in the idea of reinforcement learning, in which people were 
beginning to make the same stupid mistakes that were made 80 years ago, 
without apparently being aware of what those stupid mistakes were.


(To give you an analogy that illustrates the problem:  imagine someone 
waltzes into Detroit and says "It ain't so hard to beat these Japanese 
car makers:  I mean, a car is just four wheels and a thing that pushes 
them around.  I could build one of those in my garage and beat the pants 
off Toyota in a couple of weeks!"   A car is not "four wheels and a 
thing that pushes them around".  Likewise, an artificial general 
intelligence is not "a set of environment states S, a set of actions A, 
and a set of scalar "rewards" in the Reals".)
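
For concreteness, here is a minimal Python sketch of the bare formalism under discussion -- states S, actions A, scalar rewards -- as given in the Wikipedia-style definition quoted earlier in the thread. The interface names are invented for illustration; the point is only that the "who decides the rewards, and who carved out these states and actions" question lives entirely outside this loop.

  from typing import Protocol, Hashable, Tuple

  State = Hashable      # "a set of environment states S"
  Action = Hashable     # "a set of actions A"

  class Environment(Protocol):
      def step(self, action: Action) -> Tuple[State, float]:
          """Apply an action; return the next state and a scalar reward in the Reals."""
          ...

  class Agent(Protocol):
      def act(self, state: State) -> Action:
          """Choose an action given the current state."""
          ...

  # The whole formalism is just this loop.  Everything the critique above points
  # at -- who decides when rewards arrive, and who decided that *these* are the
  # relevant states and actions -- sits outside it, in whatever builds the
  # Environment and Agent objects in the first place.
  def run(env: Environment, agent: Agent, start: State, steps: int) -> float:
      total, state = 0.0, start
      for _ in range(steps):
          state, reward = env.step(agent.act(state))
          total += reward
      return total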


Watching history repeat itself is pretty damned annoying.

Richard Loosemore




James Ratcliff wrote:

Richard,
  Can you explain differently, in other words the second part of this 
post.  I am very interested in this as a large part of an AI system.
  I believe in some fashion there needs to be a controlling algorithm 
that tells the AI that it is doing "Right", be it either an internal or 
external human reward.  We receive these rewards in our daily life, in 
our jobs, relationships and such; whether we actually learn from these is 
to be debated though.


James Ratcliff

Richard Loosemore [EMAIL PROTECTED] wrote:


Will,

Comments taken, but the direction of my critique may have gotten
lost in
the details:

Suppose I proposed a solution to the problem of unifying quantum
mechanics and gravity, and suppose I came out with a solution that said
that the unified theory involved (a) a specific interface to quantum
theory, which I spell out in great detail, and (b) ditto for an
interface with geometrodynamics, and (c) a linkage component, to be
specified.

Physicists would laugh at this. What linkage component?! they would
say. And what makes you *believe* that once you sorted out the linkage
component, the two interfaces you just specified would play any role
whatsoever in that linkage component? They would point out that my
"linkage component" was the meat of the theory, and yet I had referred
to it in such a way that it seemed as though it was just an extra, to be
sorted out later.

This is exactly what happened to Behaviorism, and the idea of
Reinforcement Learning. The one difference was that they did not
explicitly specify an equivalent of my (c) item above: it was for the
cognitive psychologists to come along later and point out that
Reinforcement Learning implicitly assumed that something in the brain
would do the job of deciding when to give rewards, and the job of
deciding what the patterns actually were  and that that something
was the part doing all the real work. In the case of all the
experiments in the behaviorist literature, the experimenter substituted
for those components, making them less than obvious.

Exactly the same critique bears on anyone who suggests that
Reinforcement Learning could be the basis for an AGI. I do not believe
there is still any reply to that critique.

Richard Loosemore





William Pearson wrote:
  On 01/06/06, Richard Loosemore wrote:
 
  I had similar feelings about William Pearson's recent message about
  systems that use reinforcement learning:
 
  
   A reinforcement scenario, from wikipedia is defined as
  
   Formally, the basic reinforcement 

Re: [agi] information in the brain?

2006-06-09 Thread Richard Loosemore

Phil,

But the visual calculations below only give you the flow going back to 
the visual cortex:  didn't you specifically want the frontal lobes?


Given that you could go into a sensory-deprivation tank and yet still 
get a good flow into your frontal lobes, how would you know what 
proportion of the visual cortex flow was going frontward?  In other 
words, the frontal cortex is doing a lot more than just handling 
information from the environment, so I am not sure your original 
question can be easily answered.


Richard Loosemore


Philip Goetz wrote:

On 6/9/06, Eugen Leitl [EMAIL PROTECTED] wrote:


Most of information is visual, and retina purportedly compresses 1:126
(obviously, some of it lossy). 
http://www.4colorvision.com/dynamics/mechanism.htm

claims 23000 receptor cells on the foveola, so I would just
do a rough calculation of some 50 fps (you don't see this, but
the cells do), and 16 bit/cell (it's probably 12 bit, but it's
a rough estimate, anyway). Estimate gives some 20 MBit/s, which
I think is way too low.


That is the kind of info that I'm looking for, but I already have it
for the retina.

Each human eye has about 6 million cones and 125 million rods
(Wikipedia).  A cone has a max frequency of perhaps 60Hz, since movies
are 30 frames/sec, while a rod has a refractory period of about 50 ms
(Varsányi et al. 2005).  Assuming a signal/noise ratio of 3 (typical
of some other neurons), you can calculate that the information
provided by human photoreceptors is about 2.7 billion bits per second:

Photoreceptors:  125 million rods with frequencies up to 20Hz:
125Mx20log10(3?)=1.2Gbps
6 million cones wi freqs up to 60?Hz: 6Mx60log10(3?)=172Mbps
Total for 2 eyes = 343.57Mbps + 2385Mbps = 2729Mbps

Much of this, however, is highly correlated, because the info coming
to the left eye is so strongly correlated with that in the right eye,
so total info is probably around 1.5Gbit/sec.

This info goes from the photoreceptors, to bipolar cells, to ganglion
cells, all within the retina.  The information coming from the retinal
ganglion cells to the lateral geniculate nucleus is a lot less.  A
ganglion cell has a refractory period, on average, of 3.75
milliseconds (Berry & Marker 1998), which means it can fire at rates
up to 267Hz, while its lowest firing rate is about 10Hz (Berry &
Marker 1998).  This gives it a bandwidth of 257Hz.  A retinal ganglion
cell has a signal-to-noise ratio of about 3 at most contrast levels
(Dhingra et al. 2005).  Hence, we can calculate the channel capacity
of a retinal ganglion cell using the Shannon-Hartley channel-capacity
formula as 257log10(3) = 123 bits per second.  There are about 1.35
million ganglion cells in a human retina (Wikipedia, Ganglion cell).
(There are 5 different types of ganglion cell, but I don't have data
on all the different types.)  With two eyes, this gives us 331 million
bits per second coming into the human visual system, again wi
redundancy between the two eyes, total info probably less than
200Mbits/sec.

For the human auditory modality, we know the frequency sensitivity of
the entire auditory system combined, and hence need not deal with
individual neurons.  If we believe that humans can hear sounds from
30Hz to 20,000Hz, and that the background-noise level of the human ear
is 0dB, while a typical environment might have 40dB of sound, we then
have that the auditory bandwidth is 19,070Hz, with a signal-to-noise
ratio of 4000.  The channel capacity of each human ear is then
19070log_10(4000) = 69,000 bits/second.

Note a possibly general rule:  Primary sensors, like photoreceptors,
have high refractory periods, meaning few bits per second, but there
are a lot of them.  Summarizing neurons, like ganglions, have much
higher frequencies, so they transmit more bits/sec, but there are
fewer of them.

If the information preservation factor of .1 found in the
photoreceptor-ganglion passage is typical, one could calculate the
number of steps between retina and frontal lobe (along various paths)
and estimate info left after all those steps.  Probably that
assumption would be unjustified.








Motivational system was Re: [agi] AGI bottlenecks

2006-06-09 Thread William Pearson

On 09/06/06, Richard Loosemore [EMAIL PROTECTED] wrote:

 Likewise, an artificial general
intelligence is not "a set of environment states S, a set of actions A,
and a set of scalar "rewards" in the Reals".)

Watching history repeat itself is pretty damned annoying.



While I would agree with you that the set of environmental states and
actions is not well defined for anything we would call intelligence,
I would argue that the concept of rewards, probably not Reals, does have
a place in understanding intelligence.

It is very simple and I wouldn't apply it to everything that
behaviourists would (we don't get direct rewards for solving crossword
puzzles). But there is a necessity for a simple explanation for how
simple chemicals can lead to the alteration of complex goals. How and
why do we get addicted? What is it about morphine that allows the
alteration of a brain to one that wants more morphine, when the desire
for morphine didn't previously exist?

That would be like bit-flipping a piece of code or a variable in an AI
and then the AI deciding that bit-flipping that code was somehow good
and should be sought after.

The RL answer would be that the reward variable was altered.
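
As a concrete (and entirely hypothetical) illustration of that point, here is a toy Python sketch in which something reaches in and overwrites the reward variable itself; the ToyLearner class, its actions, and its numbers are invented for this example and are not a model of any system discussed here.

  import random

  # Toy illustration: if an external intervention alters the reward variable
  # itself (the "morphine" case), a reward-driven learner starts preferring
  # whatever was active at that moment -- no prior desire for it is needed.

  class ToyLearner:
      def __init__(self, actions):
          self.value = {a: 0.0 for a in actions}   # learned preference per action

      def choose(self):
          return max(self.value, key=self.value.get)

      def update(self, action, reward, lr=0.1):
          self.value[action] += lr * (reward - self.value[action])

  agent = ToyLearner(["eat", "sleep", "press_lever"])

  for step in range(1000):
      action = random.choice(list(agent.value))     # explore at random
      reward = 1.0 if action == "eat" else 0.0      # the environment's "real" reward
      if action == "press_lever":
          reward = 5.0   # an intervention overwrites the reward variable directly
      agent.update(action, reward)

  print(agent.choose())  # "press_lever": the altered reward has rewritten the goal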

If your model of motivation can explain that sort of change, I would
be interested to know more. Otherwise I have to stick with the best
models I know.

Will



Re: [agi] procedural vs declarative knowledge

2006-06-09 Thread James Ratcliff
  Ben:
   It's a little more than that (more than just speed optimization), because
   the declarative knowledge may be uncertain, but the procedure derived from
   it will often be more determinate...
   [...]
   Well, we are trying to make Novamente actually do stuff (and succeeding, to
   a limited but nontrivial extent so far).  We already have got the
   fundamentals figured out and are at the point where we actually need to
   make our system work in an interesting way, and that means that doing
   theorem-proving internally every time an action needs to be taken is just
   not acceptable.
   [...]
   Two answers:
   1) Note that the compiled procedures may involve calls to the system's main
   memory, and may sometimes invoke inferences internally.  So there is an
   interweaving of execution of compiled instructions, on-the-fly mining of
   the knowledge, and, when necessary, inference based on the knowledge base.
   2) In a situation like you mention above, the choice of whether to read
   Braille, light a fire, etc., would be carried out inferentially within
   Novamente.  But once the choice was made, some stored procedures would be
   used to carry out the individual choices.  For instance, reading Braille
   would take the form of stored procedures rather than on-the-fly logical
   inference.  And lighting a fire would probably be done by spontaneously
   piecing together a procedure based on stored subprocedures learned during
   previous fire-lighting experiences.

  Alright, I assume you have a reasonable way of dealing with procedures.  My
  concern is really about KR.  The question is: do you *segregate* memory
  into declarative, episodic, and procedural memory?  I just don't see why
  procedural knowledge should be mixed with declarative memory in the same
  container.  If you put procedural knowledge in a separate memory space then
  you can use a more standard / conventional KR for declarative knowledge.
  [ I have not thought about how to apply procedural knowledge; that probably
  involves a lot of heuristics, and we can discuss that at a later time. ]
  YKY

Well, what do you actually mean when you say you would separate it?  How?
It seems likely that we will have links between them all to provide easy
access.  If we have a frame for "book" or "reading a book" then it would
seem very useful to have a reference link off there to a procedural
description of reading a book, and maybe some past experiences.  Likewise in
the other direction, any time I consider the procedure of "reading a book" I
will need to reference back to "book" to check facts and information about
the object.

James Ratcliff
http://FallsTown.com - Local Wichita Falls Community Website
http://Falazar.com - Personal Website
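
For illustration only, here is a small Python sketch of the kind of cross-linked memory described above: separate declarative, procedural, and episodic entries joined by two-way links. The class and field names are invented for this sketch; it is not Novamente's KR or anyone else's actual design.

  from dataclasses import dataclass, field
  from typing import Dict, List

  # A minimal sketch of the cross-linked memory James describes: declarative
  # frames, procedures, and episodes stored separately but linked both ways.

  @dataclass
  class Frame:                      # declarative knowledge
      name: str
      facts: Dict[str, str] = field(default_factory=dict)
      procedures: List["Procedure"] = field(default_factory=list)
      episodes: List[str] = field(default_factory=list)

  @dataclass
  class Procedure:                  # procedural knowledge
      name: str
      steps: List[str]
      about: List[Frame] = field(default_factory=list)   # back-links to frames

  book = Frame("book", facts={"parts": "cover, pages", "medium": "text"})
  reading = Procedure("reading a book",
                      steps=["open cover", "scan page", "turn page", "repeat"])

  # Cross-link the two, as suggested above, and attach a past experience.
  book.procedures.append(reading)
  reading.about.append(book)
  book.episodes.append("read a novel last summer")

  # Following a link in either direction:
  print(book.procedures[0].steps)      # from the frame to how to use it
  print(reading.about[0].facts)        # from the procedure back to facts about "book"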


Re: [agi] Two draft papers: AI and existential risk; heuristics and biases

2006-06-09 Thread James Ratcliff
 It IS my contention that there is a relatively simple, inductively-robust
 (in a mathematical proof sense) formulation of friendliness that will
 guarantee that there won't be effects that *I* consider undesirable,
 horrible, or immoral.  It will, of course/however, produce a number of
 effects that others will decry as undesirable, horrible, or immoral -- like
 allowing abortion and assisted suicide in a reasonable number of cases, NOT
 allowing the killing of infidels, allowing almost any personal
 modifications (with truly informed consent) that are non-harmful to others,
 NOT allowing the imposition of personal modifications whether they be
 physical, mental, or spiritual, etc.
 [snip]

Hmm, now what again is your goal?  I am confused.  You say there is a
possible formula that will make an AI "friendly" but unfriendly toward
others; how will that benefit anyone then?  Friendly appears to be very,
very subjective there, with someone always on the losing end.

Now, something that appeals to the friendliness of everyone 'sounds' better,
but hasn't that already been tried with Socialism, Communism, and Democracy?
With less than spectacular results?  There still would be abortion/no
abortion, x law/no x law, that would be deemed unfriendly.  So the best we
can hope for is for the programmers to decide on their small laws to program
in.  But there is no way that I know of, in any kind of self-learning
system, that we can guarantee that any of these laws would stay present.  If
the AI saw that abortion was causing a dearth of smiley faces, he could
proceed to start campaigning against it immediately.

On another tack, I am looking at using some sort of general goodness or
friendliness equation as a decider for motivation of my AI, and it takes
into account many 'selfish' values such as personal wealth, but will also
have a 'world' value that determines if the world is in a better state,
i.e. preventing death where possible and making other people happy.  Now the
values on this in an AI can switch around, in just the same way as humans,
and they could become selfish, or homicidal as well.

James Ratcliff
http://FallsTown.com - Local Wichita Falls Community Website
http://Falazar.com - Personal Website


[agi] Motivational system

2006-06-09 Thread Dennis Gorelik
William,

 It is very simple and I wouldn't apply it to everything that
 behaviourists would (we don't get direct rewards for solving crossword
 puzzles).

How do you know that we don't get direct rewards on solving crossword
puzzles (or any other mental task)?
Chances are that under a certain mental condition (an "achievement state"),
the brain produces some form of pleasure signal.
If there is no such reward, then what's your explanation why people
like to solve crossword puzzles?





Re: [agi] AGI bottlenecks

2006-06-09 Thread James Ratcliff
Richard,
  I am a grad student and have studied this for a number of years already.
I have dabbled in a few of the areas, but been unhappy in general with most
people's approaches as generally too specific (expert systems) or studying
fringe problems of AI.  I have been spending all my time reading papers and
studying up on some sort of Strong AI, or general AI, but am disappointed
with many people here and in other places spending so much time on side
issues, like calculating the actual date (which all seem wildly off) of the
'Singularity'.

Aside: I have done some really interesting research with statistical
extraction of facts from a large corpus, over 600 novels, and have had some
really interesting results, but now am backing out to try and describe an
overall AI agent system that has some consistency.

James Ratcliff

Richard Loosemore [EMAIL PROTECTED] wrote:

 James,

 It is a little hard to know where to start, to be honest.  Do you have a
 background in any particular area already, or are you pre-college?  If the
 latter, and if you are interested in the field in a serious way, I would
 recommend that you hunt down a good programme in cognitive science (and if
 possible do software engineering as a minor).  After about three or four
 years of that, you'll have a better idea of where the below argument was
 coming from.  Even then, expect to have to argue the heck out of your
 professors, only believe one tenth of everything they say, and discover
 your own science as you go along, rather than be told what the answers
 are.  A lot of the questions do not have answers yet.

 All thinking systems do have a motivation system of some sort (what you
 were talking about below as "rewards"), but people's ideas about the
 design of that motivational system vary widely from the implicit and
 confused to the detailed and convoluted (but not necessarily less
 confused).  The existence of a motivational system was not the issue in my
 post:  the issue was exactly *how* you design that motivation system.

 Behaviorism (and reinforcement learning) was a suggestion that took a
 diabolically simplistic view of how that motivation system is supposed to
 work -- so simplistic that, in fact, it swept under the carpet all the
 real issues.  What I was complaining of was a recent revival in interest
 in the idea of reinforcement learning, in which people were beginning to
 make the same stupid mistakes that were made 80 years ago, without
 apparently being aware of what those stupid mistakes were.

 (To give you an analogy that illustrates the problem:  imagine someone
 waltzes into Detroit and says "It ain't so hard to beat these Japanese car
 makers:  I mean, a car is just four wheels and a thing that pushes them
 around.  I could build one of those in my garage and beat the pants off
 Toyota in a couple of weeks!"   A car is not "four wheels and a thing that
 pushes them around".  Likewise, an artificial general intelligence is not
 "a set of environment states S, a set of actions A, and a set of scalar
 "rewards" in the Reals".)

 Watching history repeat itself is pretty damned annoying.

 Richard Loosemore

 James Ratcliff wrote:
  Richard,
    Can you explain differently, in other words, the second part of this
  post.  I am very interested in this as a large part of an AI system.
    I believe in some fashion there needs to be a controlling algorithm
  that tells the AI that it is doing "Right", be it either an internal or
  external human reward.  We receive these rewards in our daily life, in
  our jobs, relationships and such; whether we actually learn from these
  is to be debated though.

  James Ratcliff

  Richard Loosemore <[EMAIL PROTECTED]> wrote:

   Will,

   Comments taken, but the direction of my critique may have gotten lost
   in the details:

   Suppose I proposed a solution to the problem of unifying quantum
   mechanics and gravity, and suppose I came out with a solution that
   said that the unified theory involved (a) a specific interface to
   quantum theory, which I spell out in great detail, and (b) ditto for
   an interface with geometrodynamics, and (c) a linkage component, to be
   specified.

   Physicists would laugh at this.  What linkage component?! they would
   say.  And what makes you *believe* that once you sorted out the
   linkage component, the two interfaces you just specified would play
   any role whatsoever in that linkage component?  They would point out
   that my "linkage component" was the meat of the theory, and yet I had
   referred to it in such a way that it seemed as though it was just an
   extra, to be sorted out later.

   This is exactly what happened to Behaviorism, and the idea of
   Reinforcement Learning.  The one difference was that they did not
   explicitly specify an equivalent of my (c) item above:  it was for the
   cognitive psychologists to come along later and point out that
   Reinforcement Learning implicitly assumed that something in the brain
   would do the job of deciding when to give

Re: [agi] Motivational system

2006-06-09 Thread James Ratcliff
I definitely get pleasure out of doing them; that appears to be a direct
feedback that is easily seen.

Another, harder one I saw the other day is long-term gains, which seem to be
much harder to visualize.  Take for instance flossing your teeth: it hurts
sometimes, and could make your mouth bleed, not really the most pleasant
task, but down the road you get the benefit of having a healthy mouth.  But
how do we know to look that far down the road, and how do we represent this
tradeoff nicely?

James Ratcliff

Dennis Gorelik [EMAIL PROTECTED] wrote:

 William,

  It is very simple and I wouldn't apply it to everything that
  behaviourists would (we don't get direct rewards for solving crossword
  puzzles).

 How do you know that we don't get direct rewards on solving crossword
 puzzles (or any other mental task)?
 Chances are that under certain mental condition ("achievement state"),
 brain produces some form of pleasure signal.
 If there is no such reward, then what's your explanation why people
 like to solve crossword puzzles?
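
One common way to write down the "benefit later, cost now" tradeoff asked about above is temporal discounting. The Python sketch below is offered only as an illustration of that idea, with made-up numbers; it is not a claim about how the brain, or any system proposed in this thread, actually represents the tradeoff.

  # Exponential discounting of a delayed benefit versus an immediate cost.
  # All numbers are invented for illustration.

  def discounted(reward, delay_years, gamma=0.9):
      """Value today of a reward arriving delay_years into the future
      (exponential discounting with per-year factor gamma)."""
      return reward * (gamma ** delay_years)

  immediate_cost = -1.0                       # flossing is mildly unpleasant now
  future_benefit = discounted(reward=10.0, delay_years=5)   # healthy mouth later
  print(future_benefit + immediate_cost)      # 10*0.9**5 - 1 ~ +4.9: worth doing

  # A steeper discount makes the same future benefit nearly worthless, and the
  # agent skips flossing -- "how far down the road we look" is exactly the
  # choice of gamma.
  print(discounted(10.0, 5, gamma=0.3) + immediate_cost)    # ~ -0.98: not worth it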


Re: [agi] Motivational system

2006-06-09 Thread William Pearson

On 09/06/06, Dennis Gorelik [EMAIL PROTECTED] wrote:

William,

 It is very simple and I wouldn't apply it to everything that
 behaviourists would (we don't get direct rewards for solving crossword
 puzzles).

How do you know that we don't get direct rewards on solving crossword
puzzles (or any other mental task)?


I don't know, I only make hypotheses. As far as my model is concerned,
the structures that give direct reward have to be pretty much in-built;
otherwise, in a selectionist system, allowing a selected-for behaviour
to give direct reward would quickly lead to behaviour that gives
itself direct reward and doesn't actually do anything.


Chances are that under certain mental condition (achievement state),
brain produces some form of pleasure signal.
If there is no such reward, then what's your explanation why people
like to solve crossword puzzles?


Why? By indirect rewards! If you will allow me to slip into my
economics metaphor, I shall try to explain my view of things. The
consumer is the direct reward giver, something that attempts to mold
the system to produce certain products; it doesn't say what it wants,
just what is good, by giving money (direct reward).

In humans this role is played by the genome, constructing structures
that say nice food and sex are good, along with respect from your peers
(probably the hypothalamus and amygdala).

The role of raw materials is played by the information coming from the
environment. It can be converted to products or tools.

You have retail outlets that interact directly with the consumer;
being closest to the outputs, they directly get the money that allows
their survival. However they have to pass some of the money on to the
companies that produced the products they passed on to the consumer.
This network of money passing will have to be carefully controlled so
that more money isn't produced in one company than was given
(currently I think of the network of dopaminergic neurons as being this
part).
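
To make the metaphor concrete, here is a toy Python sketch of reward being passed back along supplier links under the conservation constraint described above (no module pays out more than it received). The module names and split fractions are invented for illustration; this is a sketch of the metaphor, not a description of Will's actual design.

  # Toy version of the economic metaphor: reward ("money") enters at the modules
  # nearest the output and is passed back along supplier links, with the
  # conservation constraint that no module pays out more than it has received.

  class Module:
      def __init__(self, name, suppliers=None, payout_fraction=0.5):
          self.name = name
          self.suppliers = suppliers or []        # who provided this module's inputs
          self.payout_fraction = payout_fraction  # share of income passed upstream
          self.balance = 0.0

      def receive(self, amount):
          self.balance += amount
          if self.suppliers and amount > 0:
              share = (amount * self.payout_fraction) / len(self.suppliers)
              self.balance -= amount * self.payout_fraction   # pay only out of income
              for s in self.suppliers:
                  s.receive(share)

  # "Retail outlet" modules sit at the output; suppliers sit further back.
  problem_solver = Module("logical problem solver")
  retail = Module("crossword answerer", suppliers=[problem_solver])

  retail.receive(10.0)   # the consumer (in-built reward system) pays for useful output
  print(retail.balance, problem_solver.balance)   # 5.0 5.0 -- total never exceeds 10.0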

Now with this sort of system you can make a million just-so stories
about why one program would be selected that passes reward to another,
that is, gives indirect reward. This is where the complexity kicks in.
In terms of crossword solving, one possibility is that a program closer
to the output and with lots of reward has selected for rewarding
logical problem solving, because in general it is useful for getting
reward, and so passes reward on to a program that has proven its
ability to solve logical problems, possibly entering into a deal of
some sort.

This is all very subconscious, as it needs to be in order to
encompass and explain low-level learning such as neural plasticity,
which is itself very subconscious.

Will Pearson



[agi] Motivational system

2006-06-09 Thread Dennis Gorelik
William,

1) I agree that direct reward has to be in-built
(into the brain / AI system).
2) I don't see why direct reward cannot be used for rewarding mental
achievements. I think that this direct rewarding mechanism is
preprogrammed in the genes and cannot be used directly by the mind.
This mechanism probably can be cheated to a certain extent by the
mind. For example, the mind can claim that there is a mental achievement
when actually there is none.
That possibility of cheating with rewards is definitely a problem.
I think this problem is solved (in the human brain) by using only small
doses of mental rewards.
For example, you can get small positive mental rewards by cheating your
mind into liking finding solutions to the 1+1=2 problem.
However, if you do it too often you'll eventually get hungry and would
get a huge negative reward. This negative reward would not just stop you
doing the 1+1=2 operation over and over, it would also reset your
judgement mechanism, so you will not consider the 1+1=2 problem an
achievement anymore.

Also, we are all familiar with what boredom is.
When you solve a problem once, it's boring to solve it again.
I guess that this is another genetically programmed mechanism which
prevents cheating with mental rewards.

3) Indirect rewarding mechanisms definitely work too, but they are not
sufficient for bootstrapping a strong-AI-capable system.
Consider a baby. She doesn't know why it's good to play (alone or with
others). Indirect reward from childhood playing will come years later,
from professional success.
A baby cannot understand human language yet, so she cannot envision this
success.
An AI system would face the same problem.

My conclusion: indirect reward mechanisms (as you described them) would not be
able to bootstrap a strong-AI-capable system.

Back to a real baby: typically nobody explains to a baby that it's good
to play.  But somehow babies/children like to play.
My conclusion: there are direct reward mechanisms in humans even for
things which are not directly beneficial to the system (like mental
achievements, speech, physical activity).

Friday, June 9, 2006, 4:48:07 PM, you wrote:

 How do you know that we don't get direct rewards on solving crossword
 puzzles (or any other mental task)?

 I don't know, I only make hypotheses. As far as my model is concerned
 the structures that give direct reward have to be pretty much in-built
 otherwise for a selectionist system allowing a selected for behaviour
 to give direct reward would quickly lead to behaviour that gives
 itself direct reward and doesn't actually do anything.

 Chances are that under certain mental condition (achievement state),
 brain produces some form of pleasure signal.
 If there is no such reward, then what's your explanation why people
 like to solve crossword puzzles?

 Why? By indirect rewards! If you will allow me to slip into my
 economics metaphor, I shall try to explain my view of things. The
 consumer is the direct reward giver, something that attempts to mold
 the system to produce certain products, it doesn't say what is wants
 just what is good, by giving money ( direct reward).

 In humans this role played by the genome constructing structures that
 says nice food and sex is good, along with respect from your peers
 (probably the Hypothalamus and amygdala).

 The role of raw materials is played by the information coming from the
 environment. It can be converted to products or tools.

 You have retail outlets that interact directly with the consumer,
 being closest to the outputs they get directly the money that allows
 their survival. However they have to pass some of the money onto the
 companies that produced the products they passed onto the consumer.
 This network of money passing will have to carefully controlled so
 that more money isn't produced in one company than was given
 (currently I think of the network of dopaminergic neurons being this
 part).

 Now with this sort of system you can make a million just so stories
 about why one program would be selected that passes reward to another,
 that is give indirect reward. This is where the complexity kicks in.
 In terms of crossword solving one possibility is that a program closer
 to the output and with lots of reward has selected for rewarding
 logical problem solving because in general it is useful for getting
 reward and so passes reward on to a program that has proven its
 ability to logical problem solve, possibly entering into a deal of
 some sort.

 This is all very subconcious, as it is needed to be to be able to
 encompass and explain low level learning such as neural plasticity,
 which is very subconcious itself.

  Will Pearson




[agi] Neural representations of negation and time?

2006-06-09 Thread Philip Goetz

Various people have the notion that events, concepts, etc., are
represented in the brain as a combination of various sensory percepts,
contexts, subconcepts, etc.  This leads to a representational scheme
in which some associational cortex links together the sub-parts making
up a concept or a remembered event or even a proposition.  Antonio
Damasio's convergence zones are an example.

Does anyone have any ideas for neural representations of negation?
How could such a system represent a negated proposition?

I'm also interested in ideas about neural representations of time.
How, when memories are stored, are they tagged with a time sequence,
so that we remember when and/or in what order they happened, and how
do we judge how far apart in time events occurred?  Is there some
brain code for time, with a 1D metric on it to judge distance?



[agi] list vs. forum

2006-06-09 Thread Philip Goetz

Why do we have both an email list and a forum?
Seems they both serve the same purpose.
