On 7/22/2011 6:35 PM, Craig Weinberg wrote:
On Jul 22, 6:25 pm, meekerdb <meeke...@verizon.net> wrote:
But that's contradicting your assumption that the "pegs" are transparent
to the neural communication:
"If the living
cells are able to talk to each other well through the prosthetic
network, then functionality should be retained"
Neurological functionality is retained but there are fewer and fewer
actual neurons to comprise the network, so the content of the
conversations is degraded, even though that degradation is preserved
with high fidelity.
Well at least we've got the contradiction compressed down into one
sentence: "Degradation is preserved with high fidelity."
Whatever neurons remain, even if it's only the afferent/efferent
ones, they get exactly the same communication as if there were no "pegs"
and the whole brain was neurons.
Think of them like sock puppet/bots multiplying in a closed social
network. If you have 100 actual friends on a social network and their
accounts are progressively replaced by emulated accounts posting even
slightly unconvincing status updates, you rapidly lose interest in
those updates and either route around them, focusing on the
diminishing group of your original non-bots, or check out of the
network altogether. A neuron is more than its communication.
Not to the next neuron it isn't...and not to the efferent neurons. If
there is something that isn't communicated, it can't make a difference
to behavior because we know that muscles are moved by what the neurons
communicate to them.
A communicating peg cannot communicate feelings that it doesn't have; it
can only emulate computations that are based upon feeling correlates.
You're evading the point by changing examples.
Not intentionally. It's just that the example is built on fundamental
assumptions which I think are not only untrue, but buried in the gap
between our understanding of consciousness and our understanding of
everything else. The assumption being that our consciousness must work
like everything else that our consciousness can examine objectively,
whereas my working assumption is to suppose that our consciousness
works in exactly the opposite way, and that opposition itself is
critically important and fundamental to any understanding of
consciousness. Observing our neurons' behaviors is like chasing
billions of our tails, and assuming that their heads must be our head.
Replacing the tails alone doesn't make our head happen magically. The
neurons that we see are only the outer half of the neurons that we
are. The inside looks like our lives, our society, our evolution as
It does raise in my mind an interesting point though. These questions
are usually considered in terms of replacing some part of the brain (a
neuron, or a set of neurons) by an artificial device that implements the
same input/output function. It then seems, absent some intellect
vitale, that the behavior of that brain/person would be unchanged. But
wouldn't it be likely that the person would suffer some slight
impairment in learning/memory, simply because the artificial device
always computes the same function, whereas the biological neurons grow
and change in response to stimuli? And those stimuli are external and
cannot be foreseen by the doctor. So what he needs to implant is not
just a fixed function but a function that depends on the history of its
inputs (i.e. a function with memory).
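The distinction meekerdb is drawing can be sketched in toy code. This is purely my own illustration — the class names and the Hebbian-style update rule are hypothetical, not anything proposed in the thread or drawn from neuroscience — but it shows the difference between a device that always computes the same input/output function and one whose response depends on the history of its inputs:

```python
class FixedNeuron:
    """The naive replacement device: always computes the same
    function of its current input, with no memory of past inputs."""

    def __init__(self, weight):
        self.weight = weight

    def fire(self, stimulus):
        return self.weight * stimulus


class PlasticNeuron:
    """A 'function with memory': each stimulus also adjusts the
    weight (toy Hebbian-style rule), so the response to the same
    stimulus drifts with the history of prior inputs."""

    def __init__(self, weight, rate=0.1):
        self.weight = weight
        self.rate = rate

    def fire(self, stimulus):
        out = self.weight * stimulus
        # History-dependent change: the function itself is altered
        # by the stimuli it has seen.
        self.weight += self.rate * stimulus * out
        return out


fixed = FixedNeuron(0.5)
plastic = PlasticNeuron(0.5)
for _ in range(3):
    f_out = fixed.fire(1.0)
    p_out = plastic.fire(1.0)
# Same stimulus every time, but only the plastic neuron's
# response changes from trial to trial.
```

Under this picture, a prosthetic that merely reproduced the fixed function would slowly fall out of step with a brain whose remaining biological neurons keep learning.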
Now you're getting closer to what I'm looking at. A flat model of a
neuron is not a neuron. A neuron is a living thing. It has respiration. It
learns and grows. It's us.
Or as Bruno suggests, just model it at a lower level. Of course if you
have to model it at the quark level, you might as well make your
artificial neuron out of quarks and it won't be all that "artificial".
You received this message because you are subscribed to the Google Groups
"Everything List" group.