On Jul 23, 2:35 am, Craig Weinberg <whatsons...@gmail.com> wrote:
> On Jul 22, 6:25 pm, meekerdb <meeke...@verizon.net> wrote:
> >But that's contradicting your assumption that the "pegs" are transparent
> >to the neural communication:
> >"If the living
> >cells are able to talk to each other well through the prosthetic
> >network, then functionality should be retained"
> Neurological functionality is retained, but there are fewer and fewer
> actual neurons to comprise the network, so the content of the
> conversations is degraded, even though that degradation is preserved
> with high fidelity.
Assuming the replacement neurons aren't functionally equivalent.
> > Whatever neurons remain, even if it's only the afferent/efferent
> >ones, they get exactly the same communication as if there were no "pegs"
> >and the whole brain was neurons.
> Think of them like sock puppet/bots multiplying in a closed social
> network. If you have 100 actual friends on a social network and their
> accounts are progressively replaced by emulated accounts posting even
> slightly unconvincing status updates,
Why would "slightly unconvincing" fall under "exact functional equivalence"?
> you rapidly lose interest in
> those updates and either route around them, focusing on the
> diminishing group of your original non-bots, or check out of the
> network altogether. A neuron is more than its communication. A
> communicating peg cannot communicate feelings that it doesn't have, it
> can only emulate computations that are based upon feeling correlates.
> >You're evading the point by changing examples.
> Not intentionally. It's just that example is built on fundamental
> assumptions which I think are not only untrue, but buried in the gap
> between our understanding of consciousness and our understanding of
> everything else.
IOW: you think the Neurone Replacement Hypothesis doesn't
disprove your theory because you think your theory is correct.
See the problem?
> The assumption being that our consciousness must work
> like everything else that our consciousness can examine objectively,
> whereas my working assumption is to suppose that our consciousness
> works in exactly the opposite way, and that opposition itself is
> critically important and fundamental to any understanding of
> consciousness. Observing our neurons' behaviors is like chasing
> billions of our tails, and assuming that their heads must be our head.
> Replacing the tails alone doesn't make our head happen magically. The
> neurons that we see are only the outer half of the neurons that we
> are. The inside looks like our lives, our society, our evolution as
> >It does raise in my mind an interesting point though. These questions
> >are usually considered in terms of replacing some part of the brain (a
> >neuron, or a set of neurons) by an artificial device that implements the
> >same input/output function. It then seems, absent some intellect
> >vitale, that the behavior of that brain/person would be unchanged. But
> >wouldn't it be likely that the person would suffer some slight
> >impairment in learning/memory simply because the artificial device
> >always computes the same function, whereas the biological neurons grow
> >and change in response to stimuli?
There is such a thing as machine learning.
> >And those stimuli are external and
> >cannot be foreseen by the doctor. So what he needs to implant is not
> >just a fixed function but a function that depends on the history of its
> >inputs (i.e. a function with memory).
> Now you're getting closer to what I'm looking at. A flat model of a
> neuron is not a neuron. It's a living thing. It has respiration. It
> learns and grows. It's us.
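For what it's worth, the "function with memory" meekerdb describes is exactly what online machine learning already does. Here is a minimal toy sketch of the contrast: one unit that always computes the same fixed function, and one whose response drifts with the history of its inputs. The class names and the use-dependent update rule are illustrative assumptions for this sketch, not a claim about how real neurons (or any actual prosthesis) work.

```python
class FixedUnit:
    """Always computes the same input/output function."""
    def __init__(self, weight):
        self.weight = weight

    def fire(self, x):
        return 1 if self.weight * x > 0.5 else 0


class PlasticUnit:
    """A 'function with memory': the weight drifts with the input
    history (a crude use-dependent potentiation rule), so identical
    inputs can yield different outputs at different times."""
    def __init__(self, weight, rate=0.1):
        self.weight = weight
        self.rate = rate

    def fire(self, x):
        out = 1 if self.weight * x > 0.5 else 0
        # strengthen the connection a little on every stimulus
        self.weight += self.rate * x
        return out


fixed, plastic = FixedUnit(0.6), PlasticUnit(0.6)
stream = [0.7, 0.7, 0.7]
print([fixed.fire(x) for x in stream])    # [0, 0, 0] -- same function every time
print([plastic.fire(x) for x in stream])  # [0, 0, 1] -- later response shaped by earlier inputs
```

The same stimulus stream produces different behavior from the plastic unit over time, which is the property meekerdb says the implanted device would need. Whether implementing that function exhausts what the biological neuron *is* remains the point under dispute.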