Ben wrote:

[...]

> To be more precise, we are not considering something as narrow as a "blocks
> world", though we are considering a simulated world.
> 
> My strong feeling is that a lot of the concepts learned in a simulation
> world could be used by an AI in the real world.  If this is not the case
> then clearly the AI is not abstracting enough!  For instance, if an AI
> learns the relationship between "near" and "by" via interacting
> linguistically with humans in the context of a simulation world, it should
> be able to generalize this semantic knowledge to real-world situations.

The concepts may be the "same", but the mechanisms that evoke those
concepts may be so intricately entangled with the sensory modality
that when you redesign the sensorium, none of the old concepts
will be reusable.

In a bottom-up hierarchy of concepts (built up from micro-features),
I'm afraid it is impossible to swap in an entirely new bottom layer
without having to rebuild the whole structure above it.

Having said that, I do agree that sensory input is not the most
important thing for an AGI on many interesting tasks. The problem
is whether the concept/feature hierarchy can be separated
horizontally or not. I'm thinking more along these lines...

============

I read your article on experiential learning; here are some comments:

1. Self-Modifying Programs: I assume your idea is to use
self-modification as a form of learning. The search space is thus
the space of algorithms. I have explained briefly on my web page
that algorithmic search is highly intractable, on which we seem to
have consensus already. If you think about it, program
self-modification is a form of evolutionary programming, and EP is
not very efficient even when people guide it consciously. Do you
have particular reasons to believe you have found an efficient way
to search program space? If not, maybe the idea of self-modifying
programs is a dead end.
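To make the intractability point concrete, here is a toy sketch in Python (entirely my own construction, not a description of your system): blind enumeration over a four-instruction language, looking for a program that matches a few input/output examples. The space grows as 4**n in program length, which is the problem in miniature.

```python
import itertools

# A tiny instruction set; each op maps one integer register to another.
OPS = {
    "inc": lambda x: x + 1,   # x -> x + 1
    "dec": lambda x: x - 1,   # x -> x - 1
    "dbl": lambda x: x * 2,   # x -> 2x
    "sqr": lambda x: x * x,   # x -> x^2
}

def run(program, x):
    """Execute a program (a tuple of op names) on input x."""
    for op in program:
        x = OPS[op](x)
    return x

def search(cases, max_len):
    """Enumerate all programs up to max_len; the space grows as 4**n."""
    tried = 0
    for n in range(1, max_len + 1):
        for prog in itertools.product(OPS, repeat=n):
            tried += 1
            if all(run(prog, x) == y for x, y in cases):
                return prog, tried
    return None, tried

# Target behaviour f(x) = 2x + 1, specified only by examples.
cases = [(0, 1), (1, 3), (2, 5), (3, 7)]
prog, tried = search(cases, 3)
```

This succeeds only because the language and target are trivial; with a realistic instruction set and program length, unguided search of this kind is hopeless, which is exactly my worry about self-modification as the learning mechanism.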

2. I discovered some new problems in my AGI "blueprint", so I'm
not promoting it at the moment. One issue that you may want to
consider is that of *redundancy*. I suspect that our brain
simultaneously keeps many alternative interpretations of sensory
events. We act coherently because at any time only one
interpretation is active, but the other interpretations remain
latent in the neural network. The "Necker cube" illustrates this
point somewhat. Another example: sometimes you listen to what
someone says to you and only understand the deeper meaning much
later. If you want to design a memory module, redundancy is
probably necessary. That means you should keep multiple
interpretations of events, let them compete with each other, and
not delete the out-competed ones unless they have been inactive
for a long time. Basically very "Hebbian". Could there be any
alternatives to this?

More later =)
YKY