Hi Pei,

I agree with Shane's comments.

The standard NN architectures are badly inadequate for AGI, but so are
the standard logic-based AI architectures.  Just as it would be a
mistake to throw out NARS along with the foolish, conventional
logic-based architectures, it would be a mistake to throw out the good
NN architectures along with the typical ones.

One of the more interesting NN architectures I know of is John Weng's
SAIL architecture.  I haven't studied it in detail, but it doesn't seem
to fall prey to the simple objections you posit, and it is being
developed with AGI in mind, initially from a robot control/vision
direction.

> The problem is that what you describe as neural networks is just a certain
> limited class of neural networks.  That class has certain limitations, which
> you point out.  However, you can't then extend those conclusions to neural
> networks in general.  For example...
>
> You say, "Starting from an initial state determined by an input vector..."
> For recurrent NNs this isn't true, or at least I think that your description
> is confusing.  The state is a product of the history of inputs, rather than
> being determined by "an input vector".  Similarly I also wouldn't say that
> NNs are about input-output function learning.  Back prop NNs are about
> this when used in simple configurations.  However, this isn't true of NNs
> in general; in particular it's not true of recurrent NNs.  See for example
> liquid machines or echo state networks.
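Right, and the point about state being a product of input history is easy to
see even in a toy echo-state-style reservoir.  A rough Python sketch (the
reservoir size, scalings, and input sequences are arbitrary choices of mine,
just to make the point concrete):

```python
import numpy as np

# Toy echo state reservoir: the state depends on the whole input
# history, not on a single input vector.
rng = np.random.default_rng(0)
n = 50                                        # reservoir size (arbitrary)
W = rng.normal(size=(n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # spectral radius below 1
W_in = rng.normal(size=n)                     # input weights

def run(inputs):
    """Drive the reservoir with a scalar input sequence; return final state."""
    x = np.zeros(n)
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)
    return x

# Two sequences ending in the *same* input leave different states behind:
s1 = run([1.0, -1.0, 0.5])
s2 = run([-1.0, 1.0, 0.5])
assert not np.allclose(s1, s2)   # the history, not the last input, fixes the state
```

So describing the network's state as "determined by an input vector" really
only fits the feedforward case.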

In general, attractor neural networks can interact with the world in
arbitrarily complex ways, just like logic-based AI systems.  Daniel
Amit's old book "Modeling Brain Function" gives a good review of the
possibilities of attractor neural nets, including NNs that store
information in complex strange attractors.  Mikhail Zak published some
cool work on NNs with "terminal attractors" that store memories, as
well.

> I also wouldn't be so sure about neurons not being easily mapped to
> conceptual units.  In recent years neuroscientists have found that
> small groups of neurons in parts of the human brain correspond to very
> specific things.  One famous case is the "Bill Clinton neuron".  Of course
> you're talking about artificial NNs, not real brains.  Nevertheless, if
> biological NNs can have this quasi-symbolic nature in places, then I can't see how
> you could argue that artificial NNs can't do it due to some fundamental
> limitation.

My suspicion is that in the brain knowledge is often stored on two levels:

* specific neuronal groups correlated with specific information

* strange attractors spanning large parts of the brain, correlated
with specific information

The neuronal group correlated with X may serve as a "key" that sets
off the strange attractor correlated with X.
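The "key" idea can be sketched with a toy Hopfield-style network, where a
partially correct state (the intact units playing the role of the key) pulls
the whole network into the stored attractor.  This is obviously a cartoon of
the hypothesis, not a brain model; the sizes and corruption level are
arbitrary:

```python
import numpy as np

# Hebbian storage of one pattern; a partial cue recalls the whole.
rng = np.random.default_rng(1)
n = 100
pattern = rng.choice([-1, 1], size=n)        # the stored memory
W = np.outer(pattern, pattern) / n           # Hebbian weights
np.fill_diagonal(W, 0)

# Cue: start from the pattern but corrupt 30 of the 100 units.
# The 70 intact units act as the "key" that sets off the attractor.
state = pattern.copy()
flip = rng.choice(n, size=30, replace=False)
state[flip] *= -1

for _ in range(10):                          # synchronous updates
    state = np.sign(W @ state).astype(int)
    state[state == 0] = 1                    # break ties (never hit here)

assert np.array_equal(state, pattern)        # full memory recovered
```

In the brain version, of course, the "key" would be a small dedicated
neuronal group rather than a random subset of units, and the attractor would
span large parts of the brain rather than one small recurrent net.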

I hypothesized this in 1997 or so in my book "From Complexity to
Creativity," but so far as I know the neuroscience hasn't quite caught
up yet.

This is related to how Novamente works, in that we can have individual
Novamente nodes or links denoting particular information, but these
serve effectively as keys for memory-wide attractors that more fully
represent the information.

-- ben
