On Sat, Sep 24, 2011 at 5:24 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
>> Do you agree or don't you that the observable (or public, or third
>> person) behaviour of neurons can be entirely explained in terms of a
>> chain of physical events?
>
> No, nothing can be *entirely* explained in terms of a chain of
> physical events in the way that you assume physical events occur.
> Physical events are shared experiences, dependent upon the
> perceptual capabilities and choices of the participants in them. That
> is not to say that the behavior of neurons can't be *adequately*
> explained for specific purposes: medical, biochemical,
> electromagnetic, etc.

OK, so you agree that the *observable* behaviour of neurons can be
adequately explained in terms of a chain of physical events. The
neurons won't do anything that is apparently magical, right?

>> At times you have said that thoughts, over
>> and above physical events, have an influence on neuronal behaviour.
>> For an observer (who has no access to whatever subjectivity the
>> neurons may have) that would mean that neurons sometimes fire
>> apparently without any trigger, since if thoughts are the trigger this
>> is not observable.
>
> No. Thoughts are not the trigger of physical events, they are the
> experiential correlate of the physical events. It is the sense that
> the two phenomenologies make together that is the trigger.
>
>> If, on the other hand, neurons do not fire in the
>> absence of physical stimuli (which may have associated with them
>> subjectivity - the observer cannot know this)
>
> We know that, for example, gambling affects the physical behavior of
> the amygdala. What physical force do you posit that emanates from
> 'gambling' that penetrates the skull and blood brain barrier to
> mobilize those neurons?

The skull has various holes in it (the foramen magnum, the orbits,
foramina for the cranial nerves) through which sense data from the
environment enters and, via a series of neural relays, reaches the
amygdala and other parts of the brain.

>> But if thoughts influence behaviour and thoughts are not observed,
>> then observation of a brain would show things happening contrary to
>> physical laws,
>
> No. Thoughts are not observed by an MRI. An MRI can only show the
> physical shadow of the experiences taking place.

That's right, so everything that can be observed in the brain (or in
the body in general) has an observable cause.

>>such as neurons apparently firing for no reason, i.e.
>> magically. You haven't clearly refuted this, perhaps because you can
>> see it would result in a mechanistic brain.
>
> No, I have refuted it over and over and over and over and over. You
> aren't listening to me, you are stuck in your own cognitive loop.
> Please don't accuse me of this again until you have a better
> understanding of what I mean by what I'm saying about the relationship
> between gambling and the amygdala.
>
> "We cannot solve our problems with the same thinking we used when we
> created them" - A. Einstein.

You have not answered it. You have contradicted yourself by saying we
*don't* observe the brain doing things contrary to physics and we *do*
observe the brain doing things contrary to physics. You seem to
believe that neurons in the amygdala will fire spontaneously when the
subject thinks about gambling, which would be magic. Neurons only fire
in response to a physical stimulus. That the physical stimulus has
associated qualia is not observable: a scientist would see the neuron
firing, explain why it fired in physical terms, and then wonder as an
afterthought if the neuron "felt" anything while it was firing.

>> A neuron has a limited number of duties: to fire if it sees a certain
>> potential difference across its cell membrane or a certain
>> concentration of neurotransmitter.
>
> That is a gross reductionist misrepresentation of neurology. You are
> giving the brain less functionality than mold. Tell me, how does this
> conversation turn into cell membrane potentials or neurotransmitters?

Clearly, it does, since this conversation occurs when the neurons in
our brains are active. The important functionality of the neurons is
the action potential, since that triggers other neurons and ultimately
muscle. The complex cellular apparatus in the neuron is there to allow
this process to happen, just as the complex cellular apparatus in the
thyroid is there to enable secretion of thyroxine. An artificial thyroid
that measured TSH levels and secreted thyroxine accordingly could
replace the thyroid gland even though it was nothing like the original
organ in structure.
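
To make the functional point concrete, here is a minimal sketch in
Python of the control loop such an artificial thyroid would implement
(the function name, setpoint and gain are invented for illustration,
not physiological values). All that matters is the input-output
relation between TSH and thyroxine, not the tissue that realises it:

    # Hypothetical sketch of an artificial thyroid as a negative-feedback
    # loop. Thresholds and rates are made up for illustration.
    def thyroxine_output(tsh_level, setpoint=2.5, gain=1.0):
        """Amount of thyroxine to secrete this cycle.

        High TSH means the pituitary is calling for more thyroxine, so
        secretion rises in proportion to the excess over the setpoint.
        """
        return max(0.0, gain * (tsh_level - setpoint))

    for tsh in [1.0, 2.0, 3.0, 5.0]:       # simulated sensor readings
        print(tsh, thyroxine_output(tsh))  # secretes only when TSH is high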

>>That's all that has to be
>> simulated. A neuron doesn't have one response for when, say, the
>> central bank raises interest rates and another response for when it
>> lowers interest rates; all it does is respond to what its neighbours
>> in the network are doing, and because of the complexity of the
>> network, a small change in input can cause a large change in overall
>> brain behaviour.
>
> So if I move my arm, that's because the neurons that have nothing to
> do with my arm must have caused the ones that do relate to my arm to
> fire? And 'I' think that I move 'my arm' because why exactly?

The neurons are connected in a network. If I see something relating to
the economy, that may lead me to move my arm to make an online banking
transaction. Obviously there has to be some causal connection between
my arm and the information about the economy. How do you imagine that
it happens?

> If the brain of even a flea were anywhere remotely close to the
> simplistic goofiness that you describe, we should have figured out
> human consciousness completely 200 years ago.

Even the brain of a flea is very complex. The brain of the nematode C.
elegans is the simplest brain we know, and although we have the
anatomy of its neurons and their connections, no adequate computer
simulation exists because we do not know the strength of the
connections.
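
A toy illustration in Python of why the missing connection strengths
matter (the three-neuron circuit and all the numbers are invented, not
the real C. elegans wiring): the same connection graph produces
completely different firing patterns depending on the weights assigned
to its edges.

    # Same wiring diagram, different (unknown) synaptic weights.
    def run(weights, steps=5):
        state = [1, 0, 0]                  # initial activity
        history = []
        for _ in range(steps):
            # each neuron fires if its weighted input exceeds a threshold
            state = [1 if sum(w * s for w, s in zip(row, state)) > 0.5
                     else 0 for row in weights]
            history.append(tuple(state))
        return history

    strong = [[0, 1.0, 0], [0.6, 0, 0], [0, 0.9, 0]]  # same graph,
    weak   = [[0, 0.4, 0], [0.6, 0, 0], [0, 0.3, 0]]  # weaker synapses
    print(run(strong))  # sustained oscillation
    print(run(weak))    # activity dies out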

>> In theory we can simulate something perfectly if its behaviour is
>> computable, in practice we can't but we try to simulate it
>> sufficiently accurately. The brain has a level of engineering
>> tolerance, or you would experience radical mental state changes every
>> time you shook your head. So the simulation doesn't have to get it
>> exactly right down to the quantum level.
>
> Why would you experience a 'radical' mental state change? Why not just
> an appropriate mental state change? Likewise your simulation will
> experience an appropriate mental state to what is being used
> materially to simulate it.

There is a certain level of tolerance in every physical object we
might want to simulate. We need to know a lot about it, but we don't
need accuracy down to the position of every atom, for if the brain
were so delicately balanced it would malfunction with the slightest
perturbation.

>> My point was that even a simulation of a very simple nervous system
>> produces such a fantastic degree of complexity that it is impossible
>> to know what it will do until you actually run the program. It is,
>> like the weather, unpredictable and surprising even though it is
>> deterministic.
>
> There is still no link between predictability and intentionality. You
> might be able to predict what I'm going to order from a menu at a
> restaurant, but that doesn't mean that I'm not choosing it. You might
> not be able to predict a tsunami, but that doesn't mean it's because
> the tsunami is choosing to do something. The difference, I think, has
> to do with more experiential depth in between each input and output.

Whether something is conscious or not has nothing to do with whether
it is deterministic or predictable.

>> > I understand perfectly why you think this argument works, but you
>> > seemingly don't understand that my explanations and examples refute
>> > your false dichotomy. Just as a rule of thumb, anytime someone says
>> > something like "The only way out of this (meaning their) conclusion "
>> > My assessment is that their mind is frozen in a defensive state and
>> > cannot accept new information.
>>
>> You have agreed (sort of) that partial zombies are absurd
>
> No. Stuffed animals are partial zombies to young children. It's a
> linguistic failure to describe reality truthfully, not an insight into
> the truth of consciousness.

This statement shows that you haven't understood what a partial zombie
is. It is a conscious being which lacks consciousness in a particular
modality, such as visual perception or language processing, but does
not notice that anything is abnormal and presents no external evidence
that anything is abnormal. You have said a few posts back that you
think this is absurd: when you're conscious, you know you're
conscious.

>>and you have
>> agreed (sort of) that the brain does not do things contrary to
>> physics. But if consciousness is substrate-dependent, it would allow
>> for the creation of partial zombies. This is a logical problem. You
>> have not explained how to avoid it.
>
> Consciousness is not substrate-dependent, it is substrate descriptive.
> A partial zombie is just a misunderstanding of prognosia. A character
> in a computer game is a partial zombie.

A character in a computer game is not a partial zombie as defined
above. And what's prognosia? Do you mean agnosia, the inability to
recognise certain types of objects? That is not a partial zombie
either, since it affects behaviour and the patient is often aware of
the deficit.

>> Would it count as "internal motives" if the circuit were partly
>> controlled by thermal noise, which in most circuits we try to
>> eliminate? If the circuit were partly controlled by noise it would
>> behave unpredictably (although it would still be following physical
>> laws which could be described probabilistically). A free will
>> incompatibilist could then say that the circuit acted of its own free
>> will. I'm not sure that would satisfy you, but then I don't know what
>> else "internal motives" could mean.
>
> These are the kinds of things that can only be determined through
> experiments. Adding thermal noise could be a first step toward an
> organic-level molecular awareness. If it begins to assemble into
> something like a cell, then you know you have a winner.

What is special about a cell? Is it that it replicates? I don't see
that as having any bearing on intelligence or consciousness. Viruses
replicate and I would say many computer programs are already more
intelligent than viruses.

>> The outcome of the superbowl creates visual and aural input, to which
>> the relevant neurons respond using the same limited repertoire they
>> use in response to every input.
>
> There is no disembodied 'visual and aural input' to which neurons
> respond. Our experience of sound and vision *are* the responses of our
> neurons to their own perceptual niche - cochlear vibration summarized
> through auditory nerve and retinal cellular changes summarized through
> the optic nerve are themselves summarized by the sensory regions of
> the brain.
>
> The outcome of the superbowl creates nothing but meaningless dots on a
> lighted screen. Neurons do all the rest. If you call that a limited
> repertoire, in spite of the fact that every experience of every living
> being is encapsulated entirely within it, then I wonder what could be
> less limited?

The individual neurons have a very limited repertoire of behaviour,
but the brain's behaviour and experiences are very rich. The richness
comes not from the individual neurons, which are not much different to
any other cell in the body, but from the complexity of the network. If
you devalue the significance of this then why do we need the network?
Why do we need a brain at all - why don't we just have a single neuron
doing all the thinking?
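
A standard toy example of this point, in Python with simple threshold
units (the weights are the textbook XOR construction, nothing
neurobiological): a single unit has a very limited repertoire, but a
network of three identical units already computes a function that no
single unit can.

    # Every "neuron" is the same simple rule: fire if input > threshold.
    def unit(weights, inputs, threshold):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total > threshold else 0

    # No single threshold unit can compute XOR; a small network can.
    def xor_network(x1, x2):
        h1 = unit([1, 1], [x1, x2], 0.5)     # fires if either input is on
        h2 = unit([1, 1], [x1, x2], 1.5)     # fires only if both are on
        return unit([1, -2], [h1, h2], 0.5)  # fires if h1 and not h2

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor_network(a, b))   # prints the XOR truth table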

>> Intelligent organisms did as a matter of fact evolve. If they could
>> have evolved without being conscious (as you seem to believe of
>> computers) then why didn't they?
>
> Because the universe is not all about evolution. We perceive some
> phenomena in our universe to be more intelligent than others, as a
> function of what we are. Some phenomena have 'evolved' without much
> consciousness (in our view) - minerals, clouds of gas and vapor, etc.

The question is, why did humans evolve with consciousness rather than
as philosophical zombies? The answer is, because it isn't possible to
make a philosophical zombie since anything that behaves like a human
must be conscious as a side-effect.

>> No, the movie would repeat, since finite sequences of digits in pi
>> will recur.
>
> The frames of the movie would not repeat unless you limit the sequence
> of frames to an arbitrary time.

Yes, a movie of any fixed finite length will eventually repeat. But
consciousness for the subject is not like a movie, it is more like a
frame in a movie. I
am not right now experiencing my whole life, I am experiencing the
thoughts and perceptions of the moment. The rest of my life is only
available to me should I choose to recall a particular memory. Thus
the thoughts I am able to have are limited by my working memory - my
RAM, to use a computer analogy.

>> After
>> a 10 digit sequence there will be at least 1 digit repeating, after a
>> 100 digit sequence there will be at least a digit pair repeating,
>> after a 1000 digit sequence there will be at least a triplet of digits
>> repeating and so on. If you consider one minute sequences of a movie
>> you can calculate how long you would have to wait before a sequence
>> that you had seen before had to appear.
>
> The movie lasts forever though. I don't care if digits or pairs
> repeat, just as a poker player doesn't stop playing poker after he's
> seen what all the cards look like, or seen every winning hand there
> is.

Yes, but the point was that a brain of finite size can only have a
finite number of distinct thoughts.
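
The counting argument is just the pigeonhole principle. In LaTeX, with
S the (finite) set of distinct physical states the brain can occupy:

    % A finite system must eventually revisit a state.
    \[
      |S| = N \;\Longrightarrow\;
      \forall\, s_1, s_2, \dots, s_{N+1} \in S \;\;
      \exists\, i < j :\; s_i = s_j
    \]

So however the states are interpreted, at most N of them can be
distinct thoughts.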

>> If the Internet is implemented on a finite computer network then there
>> is only a finite amount of information that the network can handle.
>
> Only at one time. Given an infinite amount of time, there is no limit
> to the amount of 'information' that it can handle.

As I explained:

>> For simplicity, say the Internet network consists of three logic
>> elements. Then the entire Internet could only consist of the
>> information 000, 001, 010, 100, 110, 101, 011 and 111.
>> Another way to
>> look at it is the maximum amount of information that can be packed
>> into a certain volume of space, since you can make computers and
>> brains more efficient by increasing the circuit or neuronal density
>> rather than increasing the size. The upper limit for this is set by
>> the Bekenstein bound (http://en.wikipedia.org/wiki/Bekenstein_bound).
>> Using this you can calculate that the maximum number of distinct
>> physical states the human brain can be in is about 10^10^42; a *huge*
>> number but still a finite number.
>
> The Bekenstein bound assumes only entropy, and not negentropy or
> significance. Conscious entities export significance, so that every
> atom in the cosmos is potentially an extensible part of the human
> psyche. Books. Libraries. DVDs. Microfiche. Nanotech.

Negentropy has a technical definition as the difference between the
entropy of a system and the maximum possible entropy. It has no
bearing on the Bekenstein bound, which is the absolute maximum
information that a volume can contain. It is a hard physical limit,
not disputed by any physicist as far as I am aware. Anyway, it's
pretty greedy of you to be dissatisfied with a number like 10^10^42,
which if it could be utilised would allow one brain to have far more
thoughts than all the humans who have ever lived put together.
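
For the record, the figure falls straight out of the bound. A
back-of-envelope version in LaTeX, taking the brain as roughly a
1.4 kg sphere of radius 0.1 m (rounded values, purely for
illustration):

    \[
      I \;\le\; \frac{2\pi R E}{\hbar c \ln 2}
        \;=\; \frac{2\pi R m c}{\hbar \ln 2}
        \;\approx\; \frac{2\pi \cdot 0.1 \cdot 1.4 \cdot 3\times 10^{8}}
                         {1.05\times 10^{-34} \cdot 0.693}
        \;\approx\; 4\times 10^{42}\ \text{bits}
    \]
    \[
      \text{distinct states} \;\le\; 2^{I} \;\approx\; 10^{10^{42}}
    \]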

>> That's right, we need only consider a substance that can successfully
>> substitute for the limited range of functions we are interested in,
>> whether it be cellular communication or cleaning windows.
>
> Which is why, since we have no idea what the ranges of functions or
> dependencies are contained in the human psyche, we cannot assume that
> watering the plants with any old clear liquid should suffice.

We need to know what the functions are before we can substitute for them.

>> But TV programs can be shown on a TV with an LCD or CRT screen. The
>> technologies are entirely different, even the end result looks
>> slightly different, but for the purposes of watching and enjoying TV
>> shows they are functionally identical.
>
> Ugh. Seriously? You are going to say that in a universe of only black
> and white versus color TVs, it's no big deal if it's in color or not?
> It's like saying that the difference between a loaded pistol blowing
> your brains out and a toy water gun are that one is a bit noisier and
> messier than the other. I made my point, you are grasping for straws.

The function of a black and white TV is different from that of a
colour TV. However, the function of a CRT TV is similar to that of an
LCD TV (both colour) even though the technology is completely
different.

>> Differences such as the weight
>> or volume of the TV exist but are irrelevant when we are discussing
>> watching the picture on the screen, even though weight and volume
>> contribute to functional differences not related to picture quality.

>> Yes, no doubt it would be difficult to go substituting cellular
>> components, but as I have said many times that makes no difference to
>> the functionalist argument, which is that *if* a way could be found to
>> preserve function in a different substrate it would also preserve
>> consciousness.
>
> Of course, the functionalist argument agrees with itself. If there is
> a way to do the impossible, then it is possible.

It's not impossible; there is a qualitative difference between
difficult and impossible. It would be difficult for humans to build a
planet the size of Jupiter, but there is no theoretical reason why it
could not be done. On the other hand, it is impossible to build a
square triangle, since it presents a logical contradiction. There is
no logical contradiction in substituting the function of parts of the
human body. Substituting one thing for another to maintain function is
one of the main tasks to which human intelligence is applied.

>> That's right, since the visual cortex does not develop properly unless
>> it gets the appropriate stimulation. But there's no reason to believe
>> that stimulation via a retina would be better than stimulation from an
>> artificial sensor. The cortical neurons don't connect directly to the
>> rods and cones but via ganglion cells which in turn interface with
>> neurons in the thalamus and midbrain. Moreover, the cortical neurons
>> don't directly know anything about the light hitting the retina: the
>> brain deduces the existence of an object forming an image because
>> there is a mapping from the retina to the visual cortex, but it would
>> deduce the same thing if the cortex were stimulated directly in the
>> same way.
>
> No, it looks like it doesn't work that way:
> http://www.mendeley.com/research/tms-of-the-occipital-cortex-induces-tactile-sensations-in-the-fingers-of-blind-braille-readers/

That is consistent with what I said.

>> It is irrelevant to the discussion whether the feeling of free will is
>> observable from the outside. I don't understand why you say that such
>> a feeling would have "no possible reason to exist or method of arising
>> in a deterministic world". People are deluded about all sorts of
>> things: what reason for existing or method of arising do those
>> delusions have that a non-deterministic free will delusion would lack?
>
> Because free will in a deterministic universe would not even be
> conceivable in the first place to have a delusion about it. Even
> delusional minds can't imagine a square circle or a new primary color.

You're saying that free will in a deterministic world is
contradictory. That may be the case if you define free will in a
particular way (and not everyone defines it that way), but still that
does not imply that the *feeling* of free will is incompatible with
determinism.

>> This is your guess, but if everything has qualia then perhaps a
>> computer running a program could have similar, if not exactly the
>> same, qualia to those of a human.
>
> Sure, and perhaps a trash can that says THANK YOU on it is sincerely
> expressing its gratitude. With enough ecstasy, it very well might
> seem like it does. Why would that indicate anything about the native
> qualia of the trash can?

That's not an argument. There is no logical or empirical reason to
assume that the qualia of a computer that behaves like you cannot be
very similar to your own. Even if you believe qualia are
substrate-dependent, completely different materials can have the same
physical properties, so why not the same qualia?


-- 
Stathis Papaioannou
