Re: COMP is empty(?)

2011-10-18 Thread Peter Kinnon


While the comments made here make interesting and amusing reading, the
underlying rationale of COMP as an attempt to resolve the mind-body
problem which worried earlier philosophers is, in my view, fatally
flawed. Here are some of the main reasons:

1. There is no longer a mind-body problem. Objective current
understandings of physics, chemistry and biology easily dispel the
mystical notions previously associated with consciousness. As long as
we take care to avoid the trap of introspection, with its attendant
self-referential recursive loops, we can now see that this feature,
which happens to be greatly hypertrophied in our species, is merely
an extension and enhancement of the navigational facility seen in most
animals. The degree of sophistication is a result of natural selection
to permit optimal interaction of the organism with its environment,
which in our case, of course, is extraordinarily high.

2. The language of mathematics has evolved to handle more efficiently
the relatively simple situations not requiring the high levels of
abstraction found in the natural languages. The latter are, for the
most part, more appropriate for complex disciplines such as chemistry
and particularly biology. A tree, for instance, or a cell, defies
mathematical description. Only for the simpler aspects of these
disciplines does mathematics play a minor (but nevertheless valuable)
part as an adjunct.
For this reason, mathematics would not be a good contender for the
solution of the mind-body problem even if it still had any
significance.

3. Even in those areas where mathematics is most valuable we must bear
in mind that, like all languages, it is capable of generating
fictions. Most importantly, of the multitudinous mathematical models
that can be envisaged, only a small subset corresponds to empirical
reality. For example, any number of dimensions can be handled within
mathematics, yet only the three of space and one of time have, as yet,
been observed. Science has found no straight lines or points in our
universe.
It is the failure to recognize these inherent limitations which, to
me, appears to inspire much of the contention in the above discussions
of this topic.

A treatment of consciousness and related issues is provided within the
context of a broad evolutionary model which extends beyond biology in:
The Goldilocks Effect: What Has Serendipity Ever Done For Us? (free
download in e-book formats from the Unusual Perspectives website)

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: COMP is empty(?)

2011-10-18 Thread Bruno Marchal

Hi Peter,

On 18 Oct 2011, at 13:00, Peter Kinnon wrote:




While the comments made here make interesting and amusing reading the
underlying rationale of COMP as an attempt to resolve the mind-body
problem which worried earlier philosophers is, in my view fatally
flawed. Here are some of the main reasons:

1.  There is no longer a mind-body problem. Objective current
understandings of physics, chemistry and biology easily dispel the
mystical notions previously associated with consciousness.



The problem is already here. I suggest you read my paper here:

http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html

It shows that mechanism is incompatible with weak forms of materialism
and physicalism. It provides a new precise reformulation of the
mind-body problem in the form of a pure body problem in arithmetic (or
in any first-order logical specification of a universal machine).
In a nutshell, a universal machine cannot distinguish physical reality
(if that makes sense) from virtual reality nor, and this is the key
point, from arithmetical reality. Their subjective continuations have
to be given by an average of some sort on *all* computations going
through their actual state, and existing in the additive and
multiplicative structure of the numbers.
So, even if locally you could dispel the mind-body problem with
current physics, you cannot do that to solve the problem before you
justify the physical laws from that relative measure on the
computations.
Then computer science and mathematical logic can already quickly
provide the propositional logic of the observable events, and up to
now QM confirms comp, so that comp seems to be confirmed in its
weirdest consequences (which is that we are multiplied into infinities
of computations 'all the time'). The propositional logic of the
observable extracted from comp already justifies statistical
interference of the computations, and a linear symmetrical bottom for
physics.


Better than that, the splitting of those logics into a provable and  
true part (by and on the machine respectively) gives a solid hint on  
how we can distinguish the quanta (sharable by universal machines) and  
the qualia (irreducibly NON sharable and private).






As long as
we take care to avoid the trap of introspection with its attendant
self-referential recursive loops we can now see that this feature,
which happens to be greatly hypertrophied  in our species,


If you avoid introspection, you avoid the very nature of consciousness  
and qualia.





is merely
an extension and enhancement of the navigational facility seen in most
animals.


I am OK with this.



The degree of sophistication is a result of natural selection to
permit optimal interaction of the organism with its environment,
which in our case, of course, is extraordinarily high.


That does not explain the nature of the qualia. A priori, such
explanations explain only complex third-person describable phenomena,
not the inner qualia. Yet the logic above does explain the qualia,
gives them a role, and gives a role to consciousness (self-speeding-up
relative to other universal machines).





2. The language of mathematics has evolved to handle more efficiently
the relatively simple situations not requiring the high levels of
abstraction found in the natural languages. The latter are, for the
most part, more appropriate for complex disciplines such as chemistry
and particularly biology. A tree, for instance, or a cell, defies
mathematical description. Only for the simpler aspects of these
disciplines does mathematics play a minor (but nevertheless valuable
part) as an adjunct.
For this reason, mathematics would not be a good contender for the
solution of the mind-body problem even if it still had any
significance.


I insist: if a mechanist explanation can work, then the price of the
mind-body solution is an explanation of physics from the non-physical.
No need of a magical soul; programs and numbers are enough, but we
have to explain the origin of the appearance of the physical laws from
this.





3. Even in those areas where mathematics is most valuable we must bear
in mind that, like all languages, it is capable of generating
fictions.


You confuse the arithmetical reality with the theories exploring it,
and you confuse the theories with the languages which can be used to
express those theories.
To say that math is a language is conventionalism, and this has been
abandoned because it is refuted by facts, notably that arithmetical
truth is beyond the reach of any possible theory.


Mechanism is often used in a reductionist way by materialists, but
when you look at the details, mechanism defeats all possible
reductionism of our conception of number and machine.






Most importantly, of the multitudinous mathematical models
that can be envisaged, only a small subset correspond to empirical
reality.


Sure. Note that what you call models is called 

Re: The Overlords Gambit

2011-10-18 Thread benjayk


Craig Weinberg wrote:
 
 Here’s a little thought experiment about free will. Let’s say that
 there exists a technology which will allow us to completely control
 another person’s neurology. What if two people use this technology to
 control each other? If one person started before the other, then they
 could effectively ‘disarm’ the others control over them preemptively,
 but what if they both began at the exact same time? Would one ‘win’
 control over the other somehow? Would either of them even be able to
 try to win? How would they know if they were controlling the other or
 being controlled to think they are controlling the other?
 
Complete control over anything is simply impossible. Control is just a
feeling and not fundamental.
The closest one can get to controlling the brain is to make it
dysfunctional. It's a bit boring, but the most realistic answer is that both
would fall unconscious, as that is the only result of exerting excessive
control over a brain.
It's the same result as if you try to totally control an ecosystem, or an
economy. It'll destroy the natural order, as control is not a fundamental
ordering principle.

It seems like you think of control or will as something fundamental,
and I don't see any reason to assume that it is. Honestly, I think
that our belief that we have free, independent will is just the
arrogance of our ego, which feels it has to have a fundamentally
special place in the universe.
That is not to say that we are predetermined by a material universe;
rather, control is just a phenomenon arising in consciousness like all
other phenomena, e.g. feelings and perceptions.

benjayk
-- 
View this message in context: 
http://old.nabble.com/The-Overlords-Gambit-tp32662974p32674925.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: The Overlords Gambit

2011-10-18 Thread Craig Weinberg
On Oct 18, 10:00 am, benjayk benjamin.jaku...@googlemail.com wrote:
 Craig Weinberg wrote:

  Here’s a little thought experiment about free will. Let’s say that
  there exists a technology which will allow us to completely control
  another person’s neurology. What if two people use this technology to
  control each other? If one person started before the other, then they
  could effectively ‘disarm’ the others control over them preemptively,
  but what if they both began at the exact same time? Would one ‘win’
  control over the other somehow? Would either of them even be able to
  try to win? How would they know if they were controlling the other or
  being controlled to think they are controlling the other?

 Complete control over anything is simply impossible. Control is just a
 feeling and not fundamental.

It depends what you mean by complete control. If I choose to hit the
letter m on my keyboard, am I not controlling the keyboard to the
extent that it is controllable?

 The closest one can get to controlling the brain is to make it
 dysfunctional. It's a bit boring, but the most realistic answer is that both
 would fall unconscious, as that is the only result of exerting excessive
 control over a brain.
 It's the same result as if you try to totally control an ecosystem, or an
 economy. It'll destroy the natural order, as control is not a fundamental
 ordering principle.

I generally agree. The thought experiment is to make people consider
the fallacy of exclusively bottom-up processing. I don't think that
you could actually control a brain; I'm just saying that if you could,
how do you get around the fact that it violates the assumption that
only neurons can control the brain? If my neurons control a machine
that controls another person's neurons, then what happens? How does
either the master or the slave know if they are controlling or being
controlled? The point was to show that bottom-up exclusivity fails,
and that we must consider that our ordinary intuition of
bi-directional, high-low processing interdependence may indeed be
valid.


 It seems like you think of control or will as something fundamental, and I
 don't see any reason to assume that it is.

That's a reasonable objection. If it's not fundamental, what is it
composed of, and why is there an appearance of anything other than
whatever that is?

Honestly, I think that our belief that we have free, independent will
 is just the arrogance of our ego, which feels it has to have a
 fundamentally special place in the universe.

I used to think that too, but now I see that it's every bit as much of
an egotistical arrogance to de-anthropomorphize ourselves. It's an
inverted, passive-aggressive egotism to perpetually look to other
processes above and below our native level of individual cohesion to
give credit or blame, while all the while hiding invisibly behind the
voyeur's curtain. To think that we have no free will is to think that
we cannot think one way or another that we have free will. It's
circular, self-negating reasoning: I think that I don't really think,
because I think that I can explain that it's not necessary for
thinking to happen at all. It doesn't really make sense if you step
out of the system and observe your thinking, opinionated, controlling
self pronouncing that it controls nothing, thinks for no reason, and
has opinions for... for what again? What is an opinion doing in a
cosmos which has no free will? Literally, what does an opinion do? Why
are you here talking to me? What is controlling you to do this, more
than you yourself? Should I imagine that my neurons care what I
think?

 That is not to say that we are predetermined by a material universe, rather
 control is just a phenomenon arising in consciousness like all other
 phenomena eg feelings and perceptions.

Sure, but that's all that it needs to be. As long as we get the
sensory feedback that we expect from our motives, then we might as
well have free will. It just seems to violate parsimony unnecessarily.
Why does it make sense for consciousness to be completely dominated by
the experience of control in a universe where that would be utterly
meaningless? How would such an illusion even work, in the sense of how
does a feeling of will get invented in the first place? If you keep
throwing dice long enough, they will start hallucinating that they are
an organism with a conscious will? Why? How? It's totally nuts and
explains nothing.

Once we understand that will is sort of a subjective fisheye view
which radiates evanescent waves of influence over the entire band of
microcosmic and macrocosmic phenomena, as well as being influenced by
the same, we can see that free will doesn't have to be completely
explained away, nor does it have to be seen as a truly independent
phenomenon. It gets kind of meta, because the degree to which free
will feels free is partially contingent upon your feelings about it -
your courage and independence. If you don't want free will, you don't
have to have

Re: The Overlords Gambit

2011-10-18 Thread Bruno Marchal


On 16 Oct 2011, at 20:50, Craig Weinberg wrote:


Here’s a little thought experiment about free will. Let’s say that
there exists a technology which will allow us to completely control
another person’s neurology. What if two people use this technology to
control each other? If one person started before the other, then they
could effectively ‘disarm’ the others control over them preemptively,
but what if they both began at the exact same time? Would one ‘win’
control over the other somehow? Would either of them even be able to
try to win? How would they know if they were controlling the other or
being controlled to think they are controlling the other?

I think that what might happen is that where their wills conflict they
cancel each other out, and where they overlap they would be amplified.
The result is that the two people would become conjoined as a single
organism. That might be exactly how neurons hash it out in the brain,
molecules do it in a cell, atoms do it in a molecule. Add ‘If you
can’t beat em, join em’ to the list of sensorimotive primitives, along
with flux and flow, and perspective relationships. All experience is a
manifestation of perspective. What we see is neither solipsistic
simulation nor direct observation but rather the direct and actual
experience of what we can make sense of from the perspective of what
we are and how we participate in that relation. Our perception is the
net overlap of all of the sense experience of our subordinate and
supervening perspectives - whatever contentions and contradictions are
resolved by joining them.



A simple machine is too dumb to control itself.

A complex machine is too complex to control itself.

A complex machine can sometimes, luckily, design and control simple
machines for a time.


Simple machines can grow and multiply, and get more complex. And no  
machine can control that, in the long run.


Deep machines like us, plausibly, cannot even be controlled by more
complex machines (or by force and coercion). All that a more complex
machine can do is copy us, and emulate us with a sped-up universal
machine, but all they will get are fuzzy trees of possibilities highly
dependent on tiny parameters. Mind evolution is more complex than
climate evolution.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: The Overlords Gambit

2011-10-18 Thread benjayk


Craig Weinberg wrote:
 
 On Oct 18, 10:00 am, benjayk benjamin.jaku...@googlemail.com wrote:
 Craig Weinberg wrote:

  Here’s a little thought experiment about free will. Let’s say that
  there exists a technology which will allow us to completely control
  another person’s neurology. What if two people use this technology to
  control each other? If one person started before the other, then they
  could effectively ‘disarm’ the others control over them preemptively,
  but what if they both began at the exact same time? Would one ‘win’
  control over the other somehow? Would either of them even be able to
  try to win? How would they know if they were controlling the other or
  being controlled to think they are controlling the other?

 Complete control over anything is simply impossible. Control is just a
 feeling and not fundamental.
 
 It depends what you mean by complete control. If I choose to hit the
 letter m on my keyboard, am I not controlling the keyboard to the
 extent that it is controllable?
 
You can control everything to the extent that it is controllable for
you, obviously.
But you can't have control over the individual constituents of the
keyboard all at the same time in the exact way you want it. For the
keyboard, you don't need to, but the brain has no lever which you can
use to make it do what you want, because, contrary to the keyboard, it
has not been designed for that task. It is a holistic system: if you
control a part of it (sticking an electrode into your brain, for
example), it still won't do what you want it to, as a whole.
So to control it, you'd have to do it on a broad scale and at a
fundamental level. But we can't do that, and if someone could, the
brain would just be a puppet steered by a puppeteer, and as such it
wouldn't be a brain as a working system, but rather a mass of flesh
that is being manipulated.


Craig Weinberg wrote:
 
 The closest one can get to controlling the brain is to make it
 dysfunctional. It's a bit boring, but the most realistic answer is that
 both
 would fall unconscious, as that is the only result of exerting excessive
 control over a brain.
 It's the same result as if you try to totally control an ecosystem, or an
 economy. It'll destroy the natural order, as control is not a fundamental
 ordering principle.
 
 I generally agree. The thought experiment is to make people consider
 the fallacy of exclusively bottom up processing. I don't think that
 you could actually control a brain, I'm just saying that if you could,
 how do you get around the fact that it violates the assumption that
 only neurons can control the brain.
I don't think that many people would claim that. You probably mean
that the neurons control your behaviour, but I don't think many people
believe that, either. Materialists would rather claim that the neurons
are the physical cause of behaviour, and consciousness arises as a
phenomenon alongside.
I don't see how this is any problem with regards to control; it just
is a claim of magic (mind coming out of non-mind, with no mechanism
for how this could happen) that is not even directly subjectively
validated (unlike the magic of consciousness, which we can directly
witness).


Craig Weinberg wrote:
 
  The point was to show that bottom up exclusivity fails,
 and that  we must consider that our ordinary intuition of bi-
 directional, high-low processing interdependence may indeed be valid.
Yes, I guessed that this was your point, but I am not sure that your
thought experiment helps it. Neurons making thought is quite
meaningless from the start; I don't see how it is affected by what
controls what.


Craig Weinberg wrote:
 

 It seems like you think of control or will as something fundamental, and
 I
 don't see any reason to assume that it is.
 
 That's a reasonable objection. If it's not fundamental, what is it
 composed of, and why is there an appearance of anything other than
 whatever that is?
It is not composed of anything (I am not a reductionist). Rather, it
arises like other feelings/perceptions, for example being hungry (it
is just more essential to our identity).
The reason for its appearance is simply as a feedback mechanism: it
shows us that we are the source of the actions, which brings attention
to our actions (which is obviously quite useful). As such it is not
more fundamental than other such mechanisms (like pain, which shows us
something is wrong in our body).
Also, in a state of enlightenment, the feeling of being in control
vanishes (together with the ego that is supposed to be the
controller), and people still function normally, which shows that it
can't be that fundamental. It is an artifact of seeing yourself as a
person, separate from your environment, and intervening in it.
Actually it is quite a crude tool, as we often feel in control when
the main cause lies in something else (like gambling), and often we
don't feel in control of essential interventions into our environment
(like reflexes).


Craig Weinberg wrote:
 

Re: The Overlords Gambit

2011-10-18 Thread Craig Weinberg
On Oct 18, 3:15 pm, benjayk benjamin.jaku...@googlemail.com wrote:

  Complete control over anything is simply impossible. Control is just a
  feeling and not fundamental.

  It depends what you mean by complete control. If I choose to hit the
  letter m on my keyboard, am I not controlling the keyboard to the
  extent that it is controllable?

 You can control everything to the extent that it is controllable for you,
 obviously.
 But you can't have control over the individual constituents of the keyboard
 all at the same time in the exact way you want it.

Not sure what you mean. The keyboard registers each keystroke that I
intend. What more is there to control?

For the keyboard, you
 don't need to, but the brain has no lever which you can use to make it do
 what you want, because, contrary to the keyboard, it has not been designed
 for that task - it is a holistic system, if you control a part of it
 (sticking a electrode into you brain for example), it still won't do what
 you want it to, as a whole.

I agree that whatever you seek to control may have unintended
consequences that you would have to control with a second order of
control, and so on, but the brain has millions of levers to make it do
what you want. Pharmacology, political science, neuroscience,
advertising, law enforcement, etc. have identified many reliable
methods of controlling the brain, either directly or indirectly.

 So to control it, you'd have to do it on a broad scale and a fundamental
 level. But we can't do that, and if someone could, the brain would just be a
 puppet steered by a puppeter and as such it wouldn't be a brain as working
 system, but rather a mass of flesh that is being manipulated.

Right, that's what my Overlords Gambit is about. What are the
mechanics of manipulation and what happens when they themselves are
manipulated?


 Craig Weinberg wrote:

  The closest one can get to controlling the brain is to make it
  dysfunctional. It's a bit boring, but the most realistic answer is that
  both
  would fall unconscious, as that is the only result of exerting excessive
  control over a brain.
  It's the same result as if you try to totally control an ecosystem, or an
  economy. It'll destroy the natural order, as control is not a fundamental
  ordering principle.

  I generally agree. The thought experiment is to make people consider
  the fallacy of exclusively bottom up processing. I don't think that
  you could actually control a brain, I'm just saying that if you could,
  how do you get around the fact that it violates the assumption that
  only neurons can control the brain.

 I don't think that many people would claim that. You probably mean that the
 neurons control your behaviour,

Controlling your behavior begins with controlling your brain. The
people I have been debating with here do claim that neurons alone
control the brain as a whole, while I maintain that control is shared
from the top down as well. The psyche can voluntarily control entire
regions of the brain, and does so routinely. The neurons which make up
the brain reflect that voluntary will rather than assemble an illusion
of will through the mechanics of their biology.

 but I don't think many people believe that,
 either. Materialist would rather claim that the neurons are the physical
 cause for behaviour, and consciousness arises as a phenomenon alongside.

Not the people I've talked to. They mostly all consider consciousness
an epiphenomenon or emergent property of neurological function.

 I don't see how this is any problem with regards to control, it just is a
 claim of magic (mind coming out of non-mind, with no mechanism how this
 could happen) that is not even directly subjectively validated (like the
 magic of consciousness that we can directly witness).

Some people argue that will is an illusion caused by neurological
function. I'm showing that the neurological function can also be made
into an epiphenomenon of conscious control. It has to be bi-
directional.


   The point was to show that bottom up exclusivity fails,
  and that  we must consider that our ordinary intuition of bi-
  directional, high-low processing interdependence may indeed be valid.

 Yes, I guessed that this was your point, but I am not sure that your
 thought experiment helps it. Neurons making thought is quite
 meaningless from the start; I don't see how it is affected by what
 controls what.

It's not about thought per se; it's just that the idea of
supervenience doesn't stand up to this thought experiment. If the
brain is nothing but predictable, controllable, emulable functions,
then what happens when we turn that control on itself? What happens
when we, as the deterministic puppets of our neurology, control
someone else's neurology? Whose puppet do they become then?


  It seems like you think of control or will as something fundamental, and
  I
  don't see any reason to assume that it is.

  That's a reasonable objection. If it's not fundamental, what is it
  

Re: Blindsight crushes absent qualia?

2011-10-18 Thread Stathis Papaioannou
On Tue, Oct 18, 2011 at 1:01 PM, Craig Weinberg whatsons...@gmail.com wrote:

 The speech centres must, through a relay of neurons, receive
 information from the visual centres if the subject is to make any
 statement about what he sees.

 What makes you think that's the case? That's a blatant fallacy, isn't
 it? The windshield wipers must, through a relay of mechanical parts,
 receive music from the radio station

 Visual centers don't talk, and speech centers don't see. People see
 and talk.

When you speak about what you see, the information carried in the
light that comes into your pupils must somehow get to the motor
neurons controlling your vocal cords. How do you think this happens?

 If the visual centres are artificial,
 but producing the same neural outputs to the rest of the brain,

 They probably won't though. They can't, because they don't feel the
 appropriate qualia to respond to events in the same way over time. It
 depends on how close they are to natural neurons, maybe even the
 neurons which are genetically specific to that individual.

 then
 the rest of the brain will respond as if vision is normal:

 If there is some part of the natural visual centers there, they may
 very well be able to use the artificial ones as a substitute - like a
 cane, but you can't use a cane as a substitute for your whole arm.

 the subject
 will say everything looks normal, he will grasp things normally with
 his hands, he will paint or write poetry about what he sees normally.
 His motor cortex cannot be aware that the visual cortex has changed,
 since the only awareness of the outside world the motor cortex can
 have must come through the surrounding tissue.

 I understand how you are thinking about it, but I think that would
 make sense if there were such a thing as functional equivalence of
 qualia, but qualia have no function. There is no way to know if you
 can make something that feels just like a neuron unless it is in fact
 a natural neuron.


  Since we know absolutely that we have experiences which cannot be
  observed directly in the tissue of the brain, there is no sense in
  imagining that replicating what we observe in the brain will not be
  missing crucial capacities which we can't anticipate. Even replacing
  simpler organs with actual human organs have a risk of rejection. Why
  would the brain, which is presumably infinitely more sensitive than a
  kidney, have no problem with a completely theoretical and unrealizably
  futuristic artificial device?

 We assume that the artificial device reproduces the pattern of neural
 firing and nothing else. Do you think that is *impossible*? Why?

 Sure, it might be impossible. Because the pattern is context
 dependent. If you have a bunch of separate heart cells, they will all
 beat regularly but not synchronized. So you make an artificial heart
 cell that beats regularly at the same interval as any of the other
 heart cells. When you put all of the separate heart cells in a dish
 together, they all will synchronize - except the artificial ones. If
 you were to put enough artificial heart cells in a living heart, you
 would cause an arrhythmia and kill the person who is using that heart
 to live.

Cardiac myocytes in culture can synchronise their beating through
direct contact. Artificial myocytes, if they were to replicate this
behaviour, would have to be sensitive to the action potential of the
natural myocytes. In general, any observable behaviour of the
biological system that you want to replicate can be replicated by some
technology. Qualia are not observable, and it is an open question
whether they can be replicated, so we assume that they can't and
consider the consequences. The consequences are that a person's qualia
might change but, because the inputs to the motor neurons controlling
speech are the same, he would declare that nothing has changed.
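
The synchronisation point above can be illustrated with a toy
simulation. This is a minimal sketch, not anything from the thread: it
uses a Kuramoto-style phase model as a crude stand-in for myocytes in
a dish, and the `simulate` and `spread` helpers and all parameters are
hypothetical choices made only for illustration.

```python
import math

def simulate(n=5, coupling=2.0, steps=2000, dt=0.01):
    """Kuramoto-style toy model: n 'cells', each with a phase and a
    slightly different natural beat rate, like isolated myocytes."""
    phases = [1.1 * i for i in range(n)]        # start out of step
    freqs = [1.0 + 0.01 * i for i in range(n)]  # slightly mismatched rates
    for _ in range(steps):
        new_phases = []
        for i in range(n):
            # Each cell nudges its phase toward its neighbours' phases,
            # analogous to a myocyte sensing neighbouring action potentials.
            pull = sum(math.sin(phases[j] - phases[i]) for j in range(n))
            new_phases.append(phases[i] + dt * (freqs[i] + coupling * pull / n))
        phases = new_phases
    return phases

def spread(phases):
    """Phase coherence |mean(e^{i*theta})|: 1.0 means fully in sync,
    values near 0 mean the phases are scattered around the circle."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

print(spread(simulate(coupling=0.0)))  # stays low: cells never sync
print(spread(simulate(coupling=2.0)))  # approaches 1.0: cells lock together
```

Setting the coupling to zero models the artificial cell that beats at
the right interval but cannot sense its neighbours: it never joins the
ensemble, which is the point about having to replicate the
interaction, not just an isolated firing pattern.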

 Neurons are orders of magnitude more interconnected than that. The
 idea that there is a fixed 'pattern of neural firing' which can be
 derived from a single neuron in isolation that can be extrapolated out
 to the brain as a whole is just factually incorrect. It's not some
 exotic wackiness that I dreamed up, it's actually not at all the way
 that the brain, or any living organism works. Neurons aren't just
 miniature brains, and brains aren't just a pile of neurons. It's like
 assuming that if you make a mannequin that acts like a nomadic hunter
 gatherer, you should have no trouble repopulating New York, London,
 and Hong Kong with a large group of them.

I keep repeating that there is no fixed pattern of neural firing to
replicate. Whether a biological neuron fires or not depends on its
present state and its inputs. A neuron that would fire if the
temperature is 37 degrees and the extracellular potassium
concentration is 5 mM might not fire if the temperature is 39 degrees
and the potassium concentration is 6 mM. The model of the neuron has
to incorporate knowledge about how the neuron is