> Alan, I strongly suggest you increase your familiarity with neuroscience
before making such claims in the future.  I'm not sure what simplified model
of the neuron you are using, but be assured that there are many layers of
complexity of function within even a simple neuron, let alone in networks.
The coupled resistor/capacitor model is only given as a simplified version
in textbooks to make the topic of neural networks digestible to the
entry-level student.  Dendrites are not simple summators, they have a
variety of nonlinear processes including recursive, catalytic chemical
reactions and complex second-messenger systems.  That's just the tip of the
iceberg; once you get into pharmacological subsystems, the complexity becomes
a bit staggering.
>

Agreed that the brain is enormously complex; however, I think the point Alan
was making hinges on a slightly different interpretation of the word
'complexity'.

His interpretation seems to be similar to that which Hofstadter elucidates
in GEB: namely, the idea of levels 'sealing off' from one another.  Because
of its high complexity, you can look at the mind from different perspectives
and at varying scales.  This very trait allows one to model the brain at a
system level: high enough up, you can start treating various major
components as black boxes and dealing only with their high-level
functionality.  Of course you
lose a certain amount of accuracy in doing this, but it is nonetheless a
valid approach.  We view and deal with other people as unified personalities
whose minds we cannot 'read'.  Rather, we observe their actions and draw
conclusions about internal states that cannot be directly observed without
sophisticated brain-scanning technology.  Despite this
limitation, we are able to interact with others and predict their future
behavior and mental states to a reasonable degree.

Say I'm designing an AGI architecture (which I am, btw, but that's irrelevant
to this discussion :) and I want to preprocess audio data so that speech is
already parsed by the time it enters the AI's cognitive modules.  All I need
to do is obtain a preexisting natural language parser program and then
tailor the AI cognitive module(s) to work with its output instead of raw
audio data.  I don't even need to look at the parser's code if I don't want
to.  (Examining it may make the parser easier to use, but it's not
necessary.)

I suppose I'm saying you can approach the mind (or any complex system that
has at least vaguely recognizable functional subsystems) in a manner
analogous to that of Object-Oriented Programming.
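As a rough sketch of what I mean, here's how the parser example might look in code.  Every name here (SpeechParser, ParsedUtterance, CognitiveModule, ThirdPartyParser) is hypothetical, invented purely for illustration: the point is that the cognitive module couples only to the parser's output format, never to its internals.

```python
# Sketch of the "black box" / OOP idea: the cognitive module depends only
# on the parser's agreed-upon output type, not on how parsing is done.
# All class names are hypothetical, chosen for this illustration.

from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class ParsedUtterance:
    """The agreed-upon output format -- the only coupling point."""
    words: List[str]


class SpeechParser(Protocol):
    """Any preexisting parser can be dropped in, unexamined,
    as long as it yields ParsedUtterance objects."""
    def parse(self, raw_audio: bytes) -> ParsedUtterance: ...


class ThirdPartyParser:
    """Stand-in for an off-the-shelf parser whose code we never read."""
    def parse(self, raw_audio: bytes) -> ParsedUtterance:
        # Real speech parsing elided -- only the interface matters here.
        return ParsedUtterance(words=raw_audio.decode().split())


class CognitiveModule:
    """Consumes parsed speech; knows nothing about raw audio processing."""
    def __init__(self, parser: SpeechParser):
        self.parser = parser

    def hear(self, raw_audio: bytes) -> List[str]:
        return self.parser.parse(raw_audio).words


mind = CognitiveModule(ThirdPartyParser())
print(mind.hear(b"hello world"))  # -> ['hello', 'world']
```

Swapping in a different parser later would mean changing one constructor argument, with the cognitive module untouched -- which is exactly the 'sealing off' of levels described above.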

Jonathan Standley
