Re: [agi] Reinforcement Learning: An Introduction Richard S. Sutton and Andrew G. Barto

2007-06-23 Thread Bo Morgan

Right, in the limit of infinite training, this approach will learn exactly 
what we want it to learn.  Theoretical sufficiency is not generally of 
interest to engineers trying to build systems that solve real problems.

Reinforcement learning is a good tool for learning something when *all 
you have* is a value function.  It is, however, only a small part of the 
ways we can get computers to learn to be intelligent.

For example, how do you learn to not get hit by a car?  Hopefully your AI 
will have something better than *only* reinforcement learning.
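
To make the point concrete, here is a toy tabular Q-learning sketch 
(nobody's actual system; the states and rewards are just made up): the 
only thing the agent ever sees about the task is the scalar reward we 
hand it, so it can only learn whatever that value function happens to 
encode.

import random

# Toy world: positions 0..4 on a line.  Position 4 is "hit by a car"
# (reward -10), position 0 is "safe at home" (reward +1).
# Actions: 0 = step left, 1 = step right.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def reward(state):
    # The entire task specification lives in this one function.
    if state == 4:
        return -10.0
    if state == 0:
        return 1.0
    return 0.0

def step(state, action):
    return max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(2000):
    s = 2  # start in the middle of the street
    for t in range(20):
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = step(s, a)
        r = reward(s2)
        # Standard Q-learning update: nudge Q toward r + gamma * max_a' Q(s', a').
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
        if s in (0, 4):
            break

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)}
print(policy)   # states 1-3 should prefer action 0: step away from the road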

Bo

On Sat, 23 Jun 2007, Philip Goetz wrote:

) On 6/22/07, Bo Morgan [EMAIL PROTECTED] wrote:
)  
)  You make AGI sound like a members only club by this obligatory
)  comment. ;)
)  
)  Reinforcement learning is a simple theory that only solves problems for
)  which we can design value functions.
) 
) Can you explain what you mean by a value function?
) If success or failure are sufficient value functions (and I think
) they are), then this covers a wide class of problems.
) 



Re: [agi] AGI introduction

2007-06-23 Thread Bo Morgan

Thanks for putting this together!  If I were to put myself into your 
theory of AI research, I would probably fall roughly under Structure-AI 
and Capability-AI (better descriptions of the brain, and computer 
programs that have more capabilities).

I haven't heard a lot about these systems' current Capabilities.  A lot of 
them are pretty old--like SOAR and ACT-R.

I tried finding literature on the success of some of these architectures, 
but most of what I found was in the "theory of theories of AI" category.  
The SOAR literature, for example, is massive and mostly focused on small 
independent projects.

Are there large real-world problems that have been solved by these 
systems?  I would find Capability links very useful if they were added.

Bo

On Fri, 22 Jun 2007, Pei Wang wrote:

) Hi,
) 
) I put a brief introduction to AGI at
) http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
) Overview followed by Representative AGI Projects.
) 
) It is basically a bunch of links and quotations organized according to
) my opinion. Hopefully it can help some newcomers to get a big picture
) of the idea and the field.
) 
) Pei
) 



Re: [agi] AGI introduction

2007-06-23 Thread Bo Morgan

On Sat, 23 Jun 2007, Pei Wang wrote:

) On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:
)  
)  Thanks for putting this together!  If I were to put myself into your
)  theory of AI research, I would probably fall roughly under Structure-AI
)  and Capability-AI (better descriptions of the brain, and computer
)  programs that have more capabilities).
) 
) It is a reasonable position, though in the long run you may have to
) choose between the two, since they often conflict.

For example, if one can mentally simulate a computation, it has an analog 
in the brain.  I just want to describe the brain in a computer language, 
which will require much more advanced programming languages just to get 
computers to simulate the kinds of things people can do mentally.

--

)  I haven't heard a lot about these systems' current Capabilities.  A lot of
)  them are pretty old--like SOAR and ACT-R.
) 
) At the current stage, no AGI system has achieved remarkable
) capability. In the list, the ones with the most practical applications are
) probably Cyc, SOAR, and ACT-R.

Well, they've been trying to find Capabilities.  For example, I'm no ACT-R 
expert at all, but I read a paper about how they are looking for 
correlations between their planner's stack size and fMRI BOLD-signal 
voxels.  This would be a cool Capability in terms of Structure-AI if they 
were able to pull it off: a simple theory of planning, but slow progress 
toward Structure-AI.

--

)  Are there large real-world problems that have been solved by these
)  systems?  I would find Capability links very useful if they were added.
) 
) I don't think there is any such solution, though that is not the major
) issue they face as AGI projects. As I analyzed in the paper on AI
) definitions, they are not designed with Capability as the primary
) goal.

Hmm..  It seems that even if Capability-AI isn't the primary goal of the 
theory, it must be *one* of the goals.  A Human-Scale thinking system is 
going to have a lot of small milestones of Capability.  Some of these 
systems must have reached something along those lines, because they've 
been around for 20-30 years.  I'm no expert on any of these systems; I'm 
just trying to find out how successful each has been in terms of 
Capability, which seems to be at least a distant subgoal of all of them.  
Even if they are purely theoretical, they must be created with the 
intention of enabling other theories that do have Capabilities?!

Bo

) Pei
) 
)  Bo
)  
)  On Fri, 22 Jun 2007, Pei Wang wrote:
)  
)  ) Hi,
)  )
)  ) I put a brief introduction to AGI at
)  ) http://nars.wang.googlepages.com/AGI-Intro.htm ,  including an AGI
)  ) Overview followed by Representative AGI Projects.
)  )
)  ) It is basically a bunch of links and quotations organized according to
)  ) my opinion. Hopefully it can help some newcomers to get a big picture
)  ) of the idea and the field.
)  )
)  ) Pei
)  )
)  
) 



Re: Foundational/Integrative approach was Re: [agi] AGI introduction

2007-06-23 Thread Bo Morgan

On Sun, 24 Jun 2007, William Pearson wrote:

) I think the brain's programs have the ability to protect their own
) storage from interference from other programs. The architecture will
) only allow programs that have proven themselves better* to override
) this protection on other programs, if they request it.
) 
) If you look at the brain it is fundamentally distributed and messy. To
) stop errors propagating as they do in stored program architectures you
) need something more decentralised than the current attempted
) dictatorial kernel control.

This is only partially true, and mainly only for the neocortex, right?  
For example, removing small parts of the brainstem results in coma.

) In the architecture, the value becomes a fungible, distributable, but
) conserved, resource.  Analogous to money, although when used to
) overwrite something it is removed dependent upon how useful the
) overwritten program was. The outputting programs pass it back to the
) programs that have given them the information they needed to output,
) whether that information is from long term memory or processed from
) the environment. These second-tier programs pass it further back.
) However, the method of determining who gets the credit doesn't have to
) always be a simplistic function; they can have heuristics on how to
) distribute the utility based on the information they get from each of
) their partners. As these heuristics are just part of each program they
) can change as well.

Are there elaborations of this theory (or a general name that I could 
look up)?  It sounds good.  For example, you're referring to multiple 
tiers of organization, which sound like larger-scale organizations that 
may have been discussed further elsewhere?

It sounds like there are intricate dependency networks that must be 
maintained, for starters.  A lot of supervision and support code would 
have to do this--or is that evolved within the system also?
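
To check my understanding, here is a toy sketch of the credit passing I 
think you are describing (the program names, starting balances, 50/50 
keep fraction, and equal-split heuristic are all my own inventions, just 
to have something concrete to ask about):

# Each "program" holds some conserved utility and remembers which programs
# supplied the information it used; when it gets paid, it passes a share back.
class Program:
    def __init__(self, name, utility=10.0):
        self.name = name
        self.utility = utility
        self.suppliers = []          # programs whose information we consumed
        # Heuristic for splitting credit among suppliers; it is part of the
        # program itself, so in the real architecture it could change over time.
        self.split = lambda amount, suppliers: (
            [amount / len(suppliers)] * len(suppliers) if suppliers else [])

    def pay(self, amount, keep=0.5):
        """Receive utility; keep a share, pass the rest back to suppliers."""
        if not self.suppliers:
            self.utility += amount   # nowhere to pass it along; keep it all
            return
        kept = amount * keep
        self.utility += kept
        for supplier, share in zip(self.suppliers,
                                   self.split(amount - kept, self.suppliers)):
            supplier.pay(share, keep)

# A tiny two-tier chain: sensor and memory feed an output program.
sensor = Program("edge-detector")
memory = Program("long-term-memory")
output = Program("reach-for-cup")
output.suppliers = [sensor, memory]

# The environment rewards the output program; credit flows back up the chain.
output.pay(4.0)
for p in (output, memory, sensor):
    print(p.name, round(p.utility, 2))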

--
Bo



Re: [agi] NVIDIA GPU's

2007-06-22 Thread Bo Morgan

That's 53.8 GB/s for a load of 33.6 MB?  Is there a burst cache effect 
going on here or do you think that's sustainable for multiple seconds?
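
For reference, my arithmetic (assuming the benchmark timed a single 
33,554,432-byte copy, and that MB here means 10^6 bytes, which is what 
the 53.8 GB/s figure implies):

size_bytes = 33554432             # ~33.6 MB (32 MiB)
bw_mb_per_s = 53768.0             # reported device-to-device bandwidth
seconds = size_bytes / (bw_mb_per_s * 1e6)
print(round(seconds * 1e3, 2), "ms per copy")   # ~0.62 ms, a sub-millisecond burst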

Bo

On Fri, 22 Jun 2007, J Storrs Hall, PhD wrote:

) BTW, the CUDA toolkit for programming the GPU's is developing rapidly (and is 
) still in beta). here are memory bandwidths actually measured on my machine:
) 
) CUDA version 0.8:
) 
) Host to Device Bandwidth for Pinned memory
) Transfer Size (Bytes)   Bandwidth(MB/s)
)  33554432   1647.6
) 
) Device to Host Bandwidth for Pinned memory
) Transfer Size (Bytes)   Bandwidth(MB/s)
)  33554432   1654.7
) 
) Device to Device Bandwidth
) Transfer Size (Bytes)   Bandwidth(MB/s)
)  33554432   3332.1
) 
) CUDA version 0.9:
) 
) Host to Device Bandwidth for Pinned memory
) Transfer Size (Bytes)   Bandwidth(MB/s)
)  33554432   2700.0
) 
) Device to Host Bandwidth for Pinned memory
) Transfer Size (Bytes)   Bandwidth(MB/s)
)  33554432   2693.3
) 
) Device to Device Bandwidth
) Transfer Size (Bytes)   Bandwidth(MB/s)
)  33554432   53768.0
) 
) In other words, once uploaded to the GPU, you can afford to reorder your data 
) any way you want as often as you need to to take advantage of parallel ops.
) 
) Or to put it another way, the GPU has about the same bandwidth to its memory 
) as your brain does to its on a per-byte basis, in my ballpark estimates. 
) Somewhere on the order of 100 GPUs is a brain-equivalent. 
) 
) We're getting damn close...
) 
) Josh
) 



Re: [agi] Reinforcement Learning: An Introduction Richard S. Sutton and Andrew G. Barto

2007-06-22 Thread Bo Morgan

You make AGI sound like a members only club by this obligatory 
comment. ;)

Reinforcement learning is a simple theory that only solves problems for 
which we can design value functions.

We need some good readings on how to organize better programs--books on 
how to program large, complicated systems with many interconnected 
parts.

Bo

On Fri, 22 Jun 2007, Lukasz Stafiniak wrote:

) Obligatory reading:
) http://www.cs.ualberta.ca/~sutton/book/ebook/the-book.html
) 
) Cheers.
) 



Re: [agi] NVIDIA GPU's

2007-06-21 Thread Bo Morgan

You could probably do a crazy game of life!  Whole virtual organisms!

I don't know how you would visualize it, or tell if it was ultimately 
alive!? :)

Anyone working on this?
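
For concreteness, a single Life step is just a neighbor count applied to 
every cell at once, which is exactly the kind of data-parallel operation 
these cards like (plain numpy below; a CUDA kernel would run the same 
per-cell rule):

import numpy as np

def life_step(grid):
    """One Game of Life step: count the 8 neighbors of every cell at once."""
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A cell lives next step if it has 3 neighbors, or 2 and is already alive.
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(np.uint8)

grid = np.zeros((8, 8), dtype=np.uint8)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1   # a glider

for _ in range(4):        # after 4 steps the glider has moved one cell diagonally
    grid = life_step(grid)
print(grid)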

Bo

On Thu, 21 Jun 2007, J Storrs Hall, PhD wrote:

) Yep. I have a homebrew version of this (pair of 8800s as a compute server) 
) that I'm experimenting with. Mostly as a vision preprocessor, which it's 
) clearly suited for. They have a subset of BLAS and simple FFT for them, and 
) are growing the repertoire rapidly. I'm fairly sure you could do GAs very 
) well IF you can fit the individual in the VERY memory-limited parallel 
) threads. I'm personally looking at them for more straight-forward vector 
) processing, for which they are very well adapted.
) 
) Josh
) 
) 
) On Thursday 21 June 2007 10:17:41 am Benjamin Goertzel wrote:
)  This looks interesting...
)  
)  http://www.nvidia.com/page/home.html
)  
)  Anyone know what are the weaknesses of these GPU's as opposed to
)  ordinary processors?
)  
)  They are good at linear algebra and number crunching, obviously.
)  
)  Is there some reason they would be bad at, say, MOSES learning?
)  (Having 128 program trees evaluated on 128 processors, against the
)  same knowledge base stored in RAM, would be nice.  Though I guess
)  processors contending for the same RAM could cause some slowdown
)  unless properly managed.)
)  
)  Unlike the PS3, these processors come with a decent amount of onboard RAM.
)  
)  -- Ben
)  
)  
) 
) 



Re: [agi] NVIDIA GPU's

2007-06-21 Thread Bo Morgan

Thanks.  Ahh..  Solved in 1980.  Very clever--memoize everything.  I 
guess if the game were based on floating-point matrix math then these 
GPUs would be more useful.  This applies more to neural-net-like 
algorithms, e.g. Markov models and Bayes nets... although those would 
probably especially benefit from sparse matrix math.  Could arbitrary 
graph algorithms be represented as sparse matrix operations that these 
machines would do well?
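
For a concrete instance of the kind of mapping I mean (illustrative CPU 
code using scipy, not the GPU libraries in question): breadth-first 
search can be written as repeated sparse matrix-vector products over the 
adjacency matrix, and each product is a bulk data-parallel operation.

import numpy as np
from scipy.sparse import csr_matrix

# A small directed graph as a sparse adjacency matrix A, where
# A[i, j] = 1 means there is an edge i -> j.
edges = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 3)]
n = 5
rows, cols = zip(*edges)
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

# Breadth-first search from node 0: each sparse matvec expands the frontier
# by one hop.
frontier = np.zeros(n, dtype=bool); frontier[0] = True
visited = frontier.copy()
level = {0: 0}
depth = 0
while frontier.any():
    depth += 1
    reached = (A.T @ frontier.astype(float)) > 0   # one-hop successors of the frontier
    frontier = reached & ~visited
    for node in np.flatnonzero(frontier):
        level[int(node)] = depth
    visited |= frontier

print(level)   # {0: 0, 1: 1, 4: 1, 2: 2, 3: 2}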

It seems like communication between disparate areas of the matrix is the 
bottleneck here.  I wonder whether very large sparse matrices (a few 
million dimensions square) would be automatically distributed to take 
advantage of local memory access?  Probably an open problem.  It sounds 
like a lot of our full-scale human-knowledge problems (like commonsense 
logic calculations, which can often be decomposed into sparse matrices) 
could use this kind of thing...

Bo

On Thu, 21 Jun 2007, Russell Wallace wrote:

) On 6/21/07, Bo Morgan [EMAIL PROTECTED] wrote:
)  
)  
)  You could probably do a crazy game of life!  Whole virtual organisms!
)  
) 
) It turns out for this workload there are much bigger wins on the algorithm
) level - check out the Hash Life family of algorithms - for which CPUs are
) much better suited.
) 



Re: [agi] Pure reason is a disease.

2007-06-16 Thread Bo Morgan

I haven't kept up with this thread.  But I wanted to counter the idea of a 
simple ordering of painfulness.

A simple ordering of painfulness is one way to think about pain that might 
work in some simple systems, where resources are allocated in a serial 
fashion, but may not work in systems where resource allocation choices are 
not necessarily serial and mutually exclusive.

If our system has a heterarchy of goal-accomplishing resources--some of 
which imply others and some of which exclude others--then simple 
orderings of painfulness may not be useful for thinking about these 
types of resource allocation.

--
Bo

On Sat, 16 Jun 2007, Matt Mahoney wrote:

) 
) --- Jiri Jelinek [EMAIL PROTECTED] wrote:
) 
)  Eric,
)  
)  I'm not 100% sure if someone/something else than me feels pain, but
)  considerable similarities between my and other humans
)  
)  - architecture
)  - [triggers of] internal and external pain related responses
)  - independent descriptions of subjective pain perceptions which
)  correspond in certain ways with the internal body responses
)  
)  make me think it's more likely than not that other humans feel pain
)  the way I do.
) 
) There is a simple proof for the existence of pain.  Define pain as a signal
) that an intelligent system has the goal of avoiding.  By the equivalence:
) 
)   (P => Q) = (not Q => not P)
) 
) if you didn't believe the pain was real, you would not try to avoid it.
) 
) (OK, that is proof by belief.  I omitted the step (you believe X => X is
) true).  If you believe it is true, that is good enough).
) 
)  The further you move from human like architecture the less you see the
)  signs of pain related behavior (e.g. the avoidance behavior). Insect
)  keeps trying to use badly injured body parts the same way as if they
)  weren't injured and (unlike in mammals) its internal responses to the
)  injury don't suggest that anything crazy is going on with them. And
)  when I look at software, I cannot find a good reason for believing it
)  can be in pain. The fact that we can use pain killers (and other
)  techniques) to get rid of pain and still remain complex systems
)  capable of general problem solving suggests that the pain quale takes
)  more than complex problem solving algorithms we are writing for our
)  AGI.
) 
) Pain is clearly measurable.  It obeys a strict ordering.  If you prefer
) penalty A to B and B to C, then you will prefer A to C.  You can estimate,
) e.g. that B is twice as painful as A and choose A twice vs. B once.  In AIXI,
) the reinforcement signal is a numeric quantity.
) 
) But how should pain be measured?
) 
) Pain results in a change in the behavior of an intelligent system.  If a
) system responds Y = f(X) to input X, followed by negative reinforcement, then
) the function f is changed to output Y with lower probability given input X. 
) The magnitude of this change is measurable in bits.  Let f be the function
) prior to negative reinforcement and f' be the function afterwards.  Then
) define
) 
)   dK(f) = K(f'|f) = K(f, f') - K(f)
) 
) where K() is algorithmic complexity.  Then dK(f) is the number of bits needed
) to describe the change from f to f'.
) 
) Arguments for:
) - Greater pain results in a greater change in behavior (consistent with animal
) experiments).
) - Greater intelligence implies greater possible pain (consistent with the
) belief that people feel more pain than insects or machines).
) 
) Argument against:
) - dK makes no distinction between negative and positive reinforcement, or
) neutral methods such as supervised learning or classical conditioning.
) 
) I don't know how to address this argument.  Earlier I posted a program that
) simulates a programmable logic gate that you train using reinforcement
) learning.  Note that you can achieve the same state using either positive or
) negative reinforcement, or by a neutral method such as setting the weights
) directly.
) 
) -- Matt Mahoney
) 
) 
) -- Matt Mahoney, [EMAIL PROTECTED]
) 
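
P.S.  A crude way to play with dK numerically is to stand in for K() 
with compressed length (my substitution--K() itself is uncomputable); 
this is not the logic-gate program Matt mentioned, just a sketch:

import zlib, json

def clen(obj):
    """Compressed length in bytes -- a crude stand-in for algorithmic complexity K()."""
    return len(zlib.compress(json.dumps(obj, sort_keys=True).encode()))

def dK(f_before, f_after):
    """Bits needed to describe the change from f to f', approximated as K(f, f') - K(f)."""
    return 8 * (clen([f_before, f_after]) - clen(f_before))

# f: probability of outputting 1 for each input of a 2-input "gate".
f = {"00": 0.5, "01": 0.5, "10": 0.5, "11": 0.5}

# Mild punishment: push one response probability down a little.
f_mild = dict(f, **{"11": 0.4})
# Severe punishment: push several response probabilities down a lot.
f_severe = dict(f, **{"01": 0.1, "10": 0.1, "11": 0.1})

print("dK(mild)   =", dK(f, f_mild), "bits")
print("dK(severe) =", dK(f, f_severe), "bits")

With toy data and zlib the numbers are only suggestive, of course, but 
the shape of the measurement is there.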



Re: [agi] Guessing robots

2007-05-10 Thread Bo Morgan

You could probably buy 10 cheap webcams, put them all around the robot, 
and get some vision algorithms to turn their views into 3D scenes, which 
could then be mapped and avoided?  This seems like a pretty 
well-understood and constrained problem.

It also sounds like a lot of the robot perception work on object 
permanence, which from what I understand is about getting noisy 
perceptual input to conform to a unique symbolic representation of the 
object (e.g. this is the same T-shaped part of the hallway that I was at 
5 minutes ago).

They mention the SLAM algorithm as being similar, and it doesn't seem to 
have much greater ability to accomplish goals, for example.  It 
basically sounds like a better inference algorithm.

Bo

On Thu, 10 May 2007, Bob Mottram wrote:

) I don't know what algorithms are being referred to in this article -
) perhaps a type of monte carlo localization.  Does anyone have more
) direct references?  Also it's only in 2D, which is normal for laser
) based mapping.
) 
) It's unlikely that we'll see products based on this sort of technology
) appearing any time soon, unless someone comes out with a cheap
) scanning laser rangefinder (currently costing around $2000 each) or
) alternative types of sensor are used, such as stereo vision.
) 
) 
) 
) On 10/05/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
)  article in NS about the Purdue guessing robot navigators...
)  
)  http://www.newscientisttech.com/article/dn11805-guessing-robots-navigate-faster.html
)  
)  I think I get a toljaso on this one --- if the architecture were composed of
)  modules that did CBR, each in its own language, from the very start, this
)  technique would have fallen out automatically.
)  
)  Josh
)  
) 



Re: [agi] general weak ai

2007-03-09 Thread Bo Morgan

Right, you would need the Saw to say "hey, I can cut that table-leg, or 
that ladder, or I could cut your hand."  And then you would need the 
hammer to say similar things about what it could do.

Then you would need another agent resource to say "okay, it doesn't make 
any sense to cut that table right now" (e.g. because we're in a 
Constructive mode of thought rather than a Destructive mode of thought).

In this way, resources could recognize potential paths of action at many 
levels, across relatively independent resources.
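
As a toy sketch of what I mean (the resource names and the mode check 
are invented purely for illustration):

# Each resource proposes actions it could take in the current situation;
# a separate critic resource filters them based on the current mode of thought.

def saw_resource(situation):
    return [("cut", obj) for obj in situation
            if obj in ("table-leg", "ladder", "hand")]

def hammer_resource(situation):
    return [("nail", obj) for obj in situation
            if obj in ("table-leg", "board")]

def mode_critic(proposals, mode):
    # In a Constructive mode of thought, suppress destructive proposals.
    destructive = {"cut"}
    if mode == "constructive":
        return [p for p in proposals if p[0] not in destructive]
    return proposals

situation = ["table-leg", "board", "hand"]
proposals = saw_resource(situation) + hammer_resource(situation)
print(mode_critic(proposals, mode="constructive"))
# -> [('nail', 'table-leg'), ('nail', 'board')]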

Bo

On Fri, 9 Mar 2007, Pei Wang wrote:

) On 3/9/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
)  
)  If I understand Minsky's Society of Mind, the basic idea is to have the
)  tools
)  be such that you can build your deck by first pointing at the saw and saying
)  you do your thing and then pointing at the hammer, etc. The tools are then
)  in turn made of little guys who do the same to their tools, ad infinitum (or
)  at least ad neuronium).
) 
) This understanding assumes a you who does the pointing, which is a
) central controller not assumed in the Society of Mind. To see
) intelligence as a toolbox, we would have to assume that somehow the
) saw, hammer, etc. can figure out what they should do in building the
) deck all by themselves.
) 
) Pei
) 



Re: [agi] Marvin Minsky's 2001 AI Talk on podcast

2007-03-05 Thread Bo Morgan

On Mon, 5 Mar 2007, Richard Loosemore wrote:

) Rowan Cox wrote:
)  Hey all,
)  
)  Just thought I'd briefly delurk to post a link (or three...).  I
)  believe this is a talk from 2001, so everyone else has probably heard
)  it already ;)
)  
)  Part 1:
)  http://www.informationweek.com/news/showArticle.jhtml?articleID=197700609
)  Part 2:
)  http://www.informationweek.com/news/showArticle.jhtml?articleID=197700610
)  Part 3:
)  http://www.informationweek.com/news/showArticle.jhtml?articleID=197700612
)  
)  (all via Slashdot:
)  http://developers.slashdot.org/article.pl?sid=07/03/02/0133231)
)  
)  There's some interesting comments about knowledge representation in
)  there, particularly about mixing representations.  Though I have to
)  admit at the end of it I'm not entirely sure if he's advocating that
)  everything should be encoded in every possible way, or if the
)  representation for each 'fact' should be chosen based on the type of
)  fact.
)  
)  Does anyone have any strong arguments for which form of representation
)  mixing for a given set of facts is most effective (or most practical?
)  :D )
)  
)  i.e. given facts f(1) through f(n), should:
)  
)  a) all facts be encoded in all possible formats: in a Neural Net, AND
)  in a set of rules, AND as probabilities etc.
)  
)  b) each fact should be encoded once, but in the best format for that
)  fact: facts f(1) through f(500) might be best represented as a NN
)  (could be audio or video patterns), f(501) through f(700) could be
)  natural language grammar rules, best represented as declarative logic
)  statements, while the next few hundred could be probabilities of some
)  known events best modeled as Hidden Markov Models and so on,
)  
)  or c) would trying to support every knowledge representation under the
)  sun doom our poor little AI mind from day one, and should all
)  knowledge be normalised to one representation.
)  
)  any thoughts?
) 
) The truth is that the formats you mention (neural nets, rules, probabilities)
) are all just vague ideas with a thousand variations on each one.  They all
) (all the variations) have problems.  They all seem at least partially ad-hoc.
) 
) What is needed is a different way of asking the question, "how is knowledge
) represented?".  I have my cognitive-systems/complexity approach, Ben has his
) eclectic approach, Pei has his NARS approach and Peter Voss has something else
) again (does it make sense to call it a neural-gas approach, Peter?).
) 
) Richard Loosemore.

I think one thing that is meant by a different representation is expressed 
in Minsky's idea of Mental Realms.  These realms not only have different 
representations for problems but also contain different Ways to Think, or 
processing algorithms for those problem data.  For example, a Physical 
reasoning problem may use mental resources that perform rotations, 
translations, scaling, containment inference, support structure inference, 
matter conservation, etc.  Given a problem in a Physics representation, we 
could perform these Ways of Thinking on the problem, that would help to 
simulate and plan actions.  This limited Mental Realm of Physical 
reasoning will only get you so far in your action planning however.  If 
there is another person in the room, for example, you may need to 
translate the current Physical plan state into the Mental Realm of Social 
reasoning, where you might consider the beliefs, desires, and 
relationships between the other people in the room.  Many of these mental 
realms share many representations and ways to think, which allows the 
analogical translation of a problem in one mental realm into another 
mental realm: this process is referred to as a Parallel Analogy in the 
theory (Panalogy) because these translations between mental realms are 
occurring as reasoning progresses in many different mental realms at once.  
As one mental realm's way to think makes sufficient progress in the 
planning process, that mental realm's representation of the current 
problem can be translated by analogy to the other mental realms.  Each 
mental realm also has unique ways of thinking as well as these shared ways 
of thinking that allow the panalogy planning process.
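
Here is a very rough structural sketch of what I take panalogy to mean in 
code (the realm names, slots, and mapping are mine, invented only to make 
the structure concrete; see Push's proposal linked below for the real 
theory):

# The "same" problem state kept in two realm-specific representations,
# with analogy links that carry progress from one realm into the other.

physical = {"cup": {"on": "table", "reachable": True}}
social   = {"cup": {"owner": "Mary", "available": None}}
realms   = {"physical": physical, "social": social}

# Parallel-analogy links: which slot in one realm corresponds to which in another.
links = [(("physical", "cup", "reachable"), ("social", "cup", "available"))]

def propagate(links, realms):
    """Copy a conclusion reached in one realm to its analogue in another realm,
    if the other realm has not worked it out for itself yet."""
    for (r1, o1, s1), (r2, o2, s2) in links:
        if realms[r1][o1][s1] is not None and realms[r2][o2][s2] is None:
            realms[r2][o2][s2] = realms[r1][o1][s1]

# The Physical realm made progress (the cup is reachable); carry that over,
# then let the Social realm keep planning in its own terms.
propagate(links, realms)
if social["cup"]["available"] and social["cup"]["owner"] != "me":
    print("ask Mary before taking the cup")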

It might be taboo to mention a link, but here is a much better description 
of these ideas, by the late Push Singh:

http://citeseer.ist.psu.edu/cache/papers/cs/26907/http:zSzzSztaffy.media.mit.eduzSzPush.Phd.Proposal.pdf/singh03panalogy.pdf

Bo



Re: [agi] Marvin Minsky's 2001 AI Talk on podcast

2007-03-05 Thread Bo Morgan

On Mon, 5 Mar 2007, Richard Loosemore wrote:

) Bo Morgan wrote:
) 
)  I think one thing that is meant by a different representation is expressed
)  in Minsky's idea of Mental Realms.  These realms not only have different
)  representations for problems but also contain different Ways to Think, or
)  processing algorithms for those problem data.  For example, a Physical
)  reasoning problem may use mental resources that perform rotations,
)  translations, scaling, containment inference, support structure inference,
)  matter conservation, etc.  Given a problem in a Physics representation, we
)  could perform these Ways of Thinking on the problem, that would help to
)  simulate and plan actions.  This limited Mental Realm of Physical reasoning
)  will only get you so far in your action planning however.  If there is
)  another person in the room, for example, you may need to translate the
)  current Physical plan state into the Mental Realm of Social reasoning, where
)  you might consider the beliefs, desires, and relationships between the other
)  people in the room.  Many of these mental realms share many representations
)  and ways to think, which allows the analogical translation of a problem in
)  one mental realm into another mental realm: this process is referred to as a
)  Parallel Analogy in the theory (Panalogy) because these translations between
)  mental realms are occurring as reasoning progresses in many different mental
)  realms at once.  As one mental realm's way to think makes sufficient
)  progress in the planning process, that mental realm's representation of the
)  current problem can be translated by analogy to the other mental realms.
)  Each mental realm also has unique ways of thinking as well as these shared
)  ways of thinking that allow the panalogy planning process.
) 
) My response to these ideas is generally positive, but with one strong
) reservation that is much the same as, or closely related to, Pei's.
) 
) If you try to hand-build these various domains, you could be in trouble in the
) long run. Much better to ask about the processes going on one level down,
) where these domains (realms) are constructed and deployed.
) 
) Thus:  _learning_ the physical reasoning realm may be a process that is
) actually common to learning all the realms, so it would be better to tackle
) that first.
) 
) Similarly, _choosing to deploy_ the physical reasoning realm on a particular
) occasion (being able to recognise that the PRR is what you need, for the
) problem at hand) is also a process that could be more important than the
) actual use of the PRR itself.  This was always a classic problem in GOFAI, and
) in behaviorism (where choosing the pairs of patterns to do association with
) was more important than the association process), and it continues to be a
) problem in mainstream AI today.
) 
) It is the general principles at that lower level that are my main focus, even
) while I am interested in (and working on) the stuff that goes on at the
) realm level itself.
) 
) My golden rule is:  Teach a system how to fish and it will just be an
) electronic fishing rod.  Teach a system how to *learn* such skills as how to
) fish, and it will be an AI.
) 
) Richard Loosemore.

I definitely agree with you, to a point.  There are many specifically 
connected brain centers, though, and these might be considered Predestined 
learning, which might be different from other forms such as Declarative 
learning or Procedural learning.  Learning is such a general term that it 
refers to many different processes of change and probably has many 
different implementations in human minds.

I'm kind of approaching the problem from the other direction: If we can 
sketch out the overall structure of what is required by every human mind, 
then we can fill in the details with many different learning algorithms 
that are developed to work in that human architecture.

That is not to say I'm not studying learning algorithms, but I see 
learning as an adaptive process that starts with a structure that is 
already partially working.

Bo



Re: [agi] Has anyone read On Intelligence

2007-02-21 Thread Bo Morgan

On Wed, 21 Feb 2007, Richard Loosemore wrote:

) Aki Iskandar wrote:
)  I'd be interested in getting some feedback on the book On Intelligence
)  (author: Jeff Hawkins).
)  
)  It is very well written - geared for the general masses of course - so it's
)  not written like a research paper, although it has the feel of a thesis.
)  
)  The basic premise of the book, if I can even attempt to summarize it in two
)  statements (I wouldn't be doing it justice though) is:
)  
)  1 - Intelligence is the ability to make predictions on memory.
)  2 - Artificial Intelligence will not be achieved by today's computer chips
)  and smart software.  What is needed is a new type of computer - one that is
)  physically wired differently.
)  
)  
)  I like the first statement.  It's very concise, while capturing a great deal
)  of meaning, and I can relate to it ... it jives.
)  
)  However, (and although Hawkins backs up the statements fairly convincingly)
)  I don't like the second set of statements.  As a software architect
)  (previously at Microsoft, and currently at Charles Schwab where I am writing
)  a custom business engine, and workflow system) it scares me.   It scares me
)  because, although I have no formal training in AI / Cognitive Science, I
)  love the AI field, and am hoping that the AI puzzle is solvable by
)  software.
)  
)  So - really, I'm looking for some of your gut feelings as to whether there
)  is validity in what Hawkins is saying (I'm sure there is because there are
)  probably many ways to solve these types of challenges), but also as to
)  whether the solution(s) is going to be more hardware - or software.
)  
)  Thanks,
)  ~Aki
)  
)  P.S.  I remember a video I saw, where Dr. Sam Adams from IBM stated
)  "Hardware is not the issue.  We have all the hardware we need."   This makes
)  sense.  Processing power is incredible.  But after reading Hawkins' book, is
)  it the right kind of hardware to begin with?
) 
) For the time being, it is the software (the conceptual framework, the high
) level architecture) that matters most.
) 
) If someone has naive views about the AGI problem, about the various issues
) that must be relevant to the design of a thinking system (like, if they have
) no comprehensive knowledge of both cognitive science and AI, among other
) things), and yet that person has really strong views about the hardware that
) we MUST use to build an intelligent system, what I hear is "Hey, I don't know
) exactly what you guys are doing, but I know you need THIS!"   H. Just
) keep banging the rocks together.
) 
) Having said that, there is an element of truth in what Hawkins says.  My
) personal opinion is that he has only a fragment of the truth, however, and is
) mistaking it for the whole deal.
) 
) 
) Richard Loosemore.

I like what Hawkins has to say, despite his railing against A.I., which 
most everyone does.  We just keep trucking along, making new ways to think 
about the brain...  The simple hierarchical learning and inference system 
that he describes as "the neocortex with the hippocampus sitting on top" 
is absurdly simplistic, and his ideas about that vague and mystical word 
Consciousness are pretty far out--can't say I agree with much of that.

I feel Ramachandran is a good reference for what it is to perceive, which 
is my current best understanding of Consciousness...

Bo



Re: [agi] The Missing Piece

2007-02-20 Thread Bo Morgan

On Tue, 20 Feb 2007, Richard Loosemore wrote:

) Bo Morgan wrote:
)  On Tue, 20 Feb 2007, Richard Loosemore wrote:
)  
)  In regard to your comments about complexity theory: from what I understand,
)  it is primarily about taking simple physics models and trying to explain
)  complicated datasets by recognizing these simple models.  These simple
)  complexity theory patterns can be found in complicated datasets for the
)  purpose of inference, but do they get us closer to human thought?
) 
) Uh, no:  this is a misunderstanding of what complexity is about.  The point of
) complexity is that some types of (extremely nonlinear) systems can show
) interesting regularities in high-level descriptions of their behavior, but [it
) has been postulated that] there is no tractable theory that will ever be able
) to relate the observed high-level regularities to the low-level mechanisms
) that drive the system.  The high level behavior is not random, but you cannot
) explain it using the kind of analytic approaches that work with simple [sic]
) physical systems.
) 
) This is a huge topic, and I think we're talking past each other:  you may want
) to go read up on it (Mitchell Waldrop's book is a good, though non-technical
) introduction to the idea).

Okay.  Thanks for the pointer.  I'm very interested in simple and easily 
understood ideas. :)  They make easy-to-understand theories.

)  Do they tell us what grief is doing when a loved one dies?
)  Do these inference system tell us why we get depressed when we keep
)  failing to accomplish our goals?
)  Do they give a model for understanding why we feel proud when we are
)  encouraged by our parents?
)  
)  These questions are trying to get at some of the most powerful thought
)  processes in humans.
) 
) If you are attacking the ability of simple logical inference systems to
) cover these topics, I kind of agree with you.  But you are diving into some
) very complicated, high-level stuff there.  Nothing wrong with that in
) principle, but these are deep waters.  Your examples are all about the
) motivational/emotional system.  I have many ideas about how that is
) implemented, so you can rest assured that I, at least, am not ignoring them.
) (And, again: I *am* taking a complex systems approach).
) 
) Can't speak for anyone else, though.
) 
) 
) Richard Loosemore
) 



Re: [agi] The Missing Piece

2007-02-19 Thread Bo Morgan

On Mon, 19 Feb 2007, John Scanlon wrote:

) Is there anyone out there who has a sense that most of the work being 
) done in AI is still following the same track that has failed for fifty 
) years now?  The focus on logic as thought, or neural nets as the 
) bottom-up, brain-imitating solution just isn't getting anywhere?  It's 
) the same thing, and it's never getting anywhere.

Yes, they are mostly building robots and trying to pick up blocks or catch 
balls.  Visual perception and motor control for solving this task were 
first demonstrated, in a limited context, in the 1960s.  You are correct 
that the bottom-up approach is not a theory-driven approach.  People talk 
about mystical words, such as Emergence or Complexity, in order to explain 
how their very simple model of mind can ultimately think like a human.  
Top-down design of an A.I. requires a theory of what abstract thought 
processes do.

) The missing component is thought.  What is thought, and how do human 
) beings think?  There is no reason that thought cannot be implemented in 
) a sufficiently powerful computing machine -- the problem is how to 
) implement it.

Right, there are many theories of how to implement an AI.  I wouldn't 
worry too much about trying to define Thought.  It has different 
definitions depending on the different problem-solving contexts in which 
it is used.  If you focus on making a machine solve problems, then you 
might find that some part of the machine you build resembles your many 
uses for the term Thought.

) Logical deduction or inference is not thought.  It is mechanical symbol 
) manipulation that can can be programmed into any scientific pocket 
) calculator.

Logical deduction is only one way to think.  As you say, there are many 
other ways to think.  Some of these are simple reactive processes, while 
others are more deliberative and form multistep plans, while still others 
are reflective and react to problems in the planning and inference 
processes themselves.
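
As a cartoon of that layering (the percepts, goals, and rules below are 
invented; it is only meant to show the shape of the control flow):

# Three ways to think layered on top of each other: the reactive layer answers
# immediate stimuli, the deliberative layer builds multistep plans, and the
# reflective layer watches the deliberative layer and reacts to *its* failures.

def reactive(percept):
    if percept == "loud horn":
        return "jump back"                      # no planning, just react
    return None

def deliberative(goal, world):
    if goal == "cross street" and world.get("light") == "green":
        return ["look left", "look right", "walk"]
    return None                                 # planning failed

def reflective(plan):
    if plan is None:
        return "switch strategy: find a crosswalk with a signal"
    return "plan accepted"

def think(percept, goal, world):
    reflex = reactive(percept)
    if reflex:
        return reflex                           # reflexes pre-empt deliberation
    plan = deliberative(goal, world)
    return reflective(plan), plan

print(think("loud horn", "cross street", {"light": "red"}))
print(think("quiet",     "cross street", {"light": "red"}))
print(think("quiet",     "cross street", {"light": "green"}))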

) Human intelligence is based on animal intelligence.

No.  Human intelligence has evolved from animal intelligence.  Human 
intelligence is not necessarily a simple subsumption of animal 
intelligence.

) The world is continuous, spatiotemporal, and non-discrete, and simply is 
) not describable in logical terms.  A true AI system has to model the 
) world in the same way -- spatiotemporal sensorimotor maps.  Animal 
) intelligence.

Logical parts of the world are describable in logical terms.  We think in 
many different ways, and each of these ways uses different representations 
of the world.  We have many specific solutions to specific types of 
problem solving, but to make a general problem solver we need ways to map 
these representations from one specific problem solver to another.  This 
gives us alternatives to pursue when a specific problem solver gets stuck.  
This type of robust problem solving requires reasoning by analogy.

) Ask some questions, and I'll tell you what I think.

People always have a lot to say, but what we need more of are working 
algorithms and demonstrations of robust problem solving.
