J. Storrs Hall, PhD. wrote:
On Tuesday 13 March 2007 22:34, Ben Goertzel wrote:
J. Storrs Hall, PhD. wrote:
On Tuesday 13 March 2007 20:33, Ben Goertzel wrote:
I am confused about whether you are proposing a brain model or an AGI
design.
I'm working with a brain model for inspiration, but I speculate that once
we understand what it's doing we can squeeze a few orders of magnitude
optimization out of it.

Well, if you want to wait till we understand the brain to work on AGI, you may as well go to the beach (or join the neuroscientists ... or work on building better brain-scan equipment) for the next 10-20 years or so.

Whoops, not what I meant. You wondered if I were thinking about the brain because I acted as if I had a processor per concept. I'm just taking as a point of departure that (a) we know intelligence can be done in 1e16 ops, and (b) let's assume that it needs 1e16 ops for a brute-force implementation -- what architecture would that imply? That turned out to suggest a whole bunch of ideas.
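A back-of-the-envelope reading of that budget, purely as a sketch: if the 1e16 figure is taken as ops per second, and the brain holds the ~10 million concepts mentioned later in this thread (both numbers are the thread's assumptions, not established facts), the per-concept budget falls out directly:

```python
# Hypothetical budget arithmetic, using the thread's own assumed numbers:
# 1e16 ops/sec total, ~1e7 concepts in a real brain.
TOTAL_OPS_PER_SEC = 1e16   # assumed brute-force intelligence budget
NUM_CONCEPTS = 1e7         # assumed concept count

ops_per_concept_per_sec = TOTAL_OPS_PER_SEC / NUM_CONCEPTS
print(f"{ops_per_concept_per_sec:.0e} ops per concept per second")
# -> 1e+09 ops per concept per second
```

A budget of ~1e9 ops/concept/sec is roughly one modest processor core per concept, which is one way to see why a processor-per-concept architecture suggests itself.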
OK, that's clearer....

Here is a brain question though: In your approach, the recursive build-up of patterns-among-patterns- ...-among-patterns seems to rely on the ability to treat transformations (e.g. matrices, or perhaps nonlinear transformations represented by NN modules?) as inputs to other transformations.
How do you hypothesize this occurring in the brain?

I.e. how do you hypothesize the brain implementing the transformation

[connectivity / connection-strength pattern of neural subnet A] ==>
[list of input activations to neural subnet B]

What neural subsystems, what dynamics within them, lead to this?

(I have my own hypotheses, but am curious what you think.)

It seems that you need this in your picture.

Is this something that you envision happening all the time within LTM, or something that you envision
happening only when transformations are "active" in working memory?
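The mapping Ben asks about can be made concrete as an illustrative sketch (everything here is hypothetical: the projection P, the subnet sizes, and the tanh nonlinearity are stand-ins, not anything either author has proposed). The idea is simply to treat subnet A's weight matrix as data, project it down, and feed the result to subnet B as ordinary input activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subnet A: its "connectivity / connection-strength
# pattern" is represented here as just its weight matrix W_A.
W_A = rng.standard_normal((8, 8))

# Hypothetical learned projection P that reads A's weights as data
# and produces a vector of input activations for subnet B.
P = rng.standard_normal((4, W_A.size))
b_inputs = np.tanh(P @ W_A.ravel())   # 4 activations in (-1, 1)

# Hypothetical subnet B then consumes those activations as input.
W_B = rng.standard_normal((4, 4))
b_out = np.tanh(W_B @ b_inputs)
print(b_out.shape)
# -> (4,)
```

This only shows the functional shape of [weights of A] ==> [inputs to B]; how the brain might implement anything like P is exactly the open question being posed.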
I expect experiments on parts of the theory can be done handily on a high-end workstation today, and that they will lead to better understanding and optimization. The real brain may hold 10 million concepts, but I should be able to demonstrate adaptiveness, robustness, learning, and reflective control with 10 to 100 -- which I have the horsepower for.



I don't think you can demonstrate anything exciting with 10-100 concepts, but I agree that you can come to understand some aspects of your theory better via this sort of experimentation. We have done experimentation with Novamente on this level, and even though the dynamics of the system at such a small scale are not representative of its dynamics more generally, there were still things to be learned.

Ben



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
