On Sep 25, 7:39 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Sat, Sep 24, 2011 at 5:24 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> Do you agree or don't you that the observable (or public, or third
> >> person) behaviour of neurons can be entirely explained in terms of a
> >> chain of physical events?
> > No, nothing can be *entirely* explained in terms of a chain of
> > physical events in the way that you assume physical events occur.
> > Physical events are shared experiences, dependent upon the
> > perceptual capabilities and choices of the participants in them. That
> > is not to say that the behavior of neurons can't be *adequately*
> > explained for specific purposes: medical, biochemical,
> > electromagnetic, etc.
> OK, so you agree that the *observable* behaviour of neurons can be
> adequately explained in terms of a chain of physical events. The
> neurons won't do anything that is apparently magical, right?

Are not all of our observations observable behaviors of neurons?
You're not understanding how I think observation works. There is no
such thing as an observable behavior in itself; it is always a question
of observable how, and by whom. If you limit your observation of how
neurons behave to what can be detected by a series of metal probes or
microscopic antennae, then you are getting a radically limited view of
what neurons are and what they do. You are asking a blind man what the
Mona Lisa looks like by having him touch the paint, then making a
careful impression of his fingers, and then announcing that the Mona
Lisa can only do what fingerpainting can do, and that inferring
anything beyond the nature of plain old paint to the Mona Lisa is
magical. No. It doesn't work that way. A universe where nothing more
than paint exists has no capacity to describe an intentional,
higher-level representation through a medium of paint. The dynamics of
paint alone do not explain their important but ultimately subordinate
role in creating the image.

> >> At times you have said that thoughts, over
> >> and above physical events, have an influence on neuronal behaviour.
> >> For an observer (who has no access to whatever subjectivity the
> >> neurons may have) that would mean that neurons sometimes fire
> >> apparently without any trigger, since if thoughts are the trigger this
> >> is not observable.
> > No. Thoughts are not the trigger of physical events, they are the
> > experiential correlate of the physical events. It is the sense that
> > the two phenomenologies make together that is the trigger.
> >> If, on the other hand, neurons do not fire in the
> >> absence of physical stimuli (which may have associated with them
> >> subjectivity - the observer cannot know this)
> > We know that for example, gambling affects the physical behavior of
> > the amygdala. What physical force do you posit that emanates from
> > 'gambling' that penetrates the skull and blood brain barrier to
> > mobilize those neurons?
> The skull has various holes in it (the foramen magnum, the orbits,
> foramina for the cranial nerves) through which sense data from the
> environment enters and, via a series of neural relays, reaches the
> amygdala and other parts of the brain.

What is 'sense data' made of and how does it get into 'gambling'?

> >> But if thoughts influence behaviour and thoughts are not observed,
> >> then observation of a brain would show things happening contrary to
> >> physical laws,
> > No. Thoughts are not observed by an MRI. An MRI can only show the
> > physical shadow of the experiences taking place.
> That's right, so everything that can be observed in the brain (or in
> the body in general) has an observable cause.

Not at all. The amygdala's response to gambling cannot be observed on
an MRI. We can only infer such a cause because we a priori understand
the experience of gambling. If we did not, of course we could not
infer any kind of association between neural patterns of firing and
something like 'winning a big pot in video poker'. That brain activity
is not a chain reaction from some other part of the brain. The brain
is actually responding to the sense that the mind is making of the
outside world and how it relates to the self. It is not going to be
predictable from whatever the amygdala happens to be doing five seconds
or five hours before the win.

> >>such as neurons apparently firing for no reason, i.e.
> >> magically. You haven't clearly refuted this, perhaps because you can
> >> see it would result in a mechanistic brain.
> > No, I have refuted it over and over and over and over and over. You
> > aren't listening to me, you are stuck in your own cognitive loop.
> > Please don't accuse me of this again until you have a better
> > understanding of what I mean by what I'm saying about the relationship
> > between gambling and the amygdala.
> > "We cannot solve our problems with the same thinking we used when we
> > created them" - A. Einstein.
> You have not answered it. You have contradicted yourself by saying we
> *don't* observe the brain doing things contrary to physics and we *do*
> observe the brain doing things contrary to physics.

We don't observe the Mona Lisa doing things contrary to the properties
of paint, but we do observe the Mona Lisa as a higher order experience
manifested through paint. It's the same thing. Physics doesn't explain
the psyche, but psyche uses the physical brain in the ordinary
physical ways that the brain can be used.

>You seem to
> believe that neurons in the amygdala will fire spontaneously when the
> subject thinks about gambling, which would be magic.

You don't understand that you are arguing against neuroscience and
common sense. Of course you can manually control your electrochemical
circuits with thought. That's what all thinking is. It's not that the
amygdala fires spontaneously, it's that the thrills and chills of
risk-taking *are* the firing of the amygdala. You seem to be saying
that the brain has our entire life planned out for us in advance as
some kind of meaningless encephalographic housekeeping exercise, in
which we have no ability to make ourselves horny by thinking about sex
or hungry by thinking about food, and no capacity to do or say things
based upon the realities outside of our skulls rather than inside them.

>Neurons only fire
> in response to a physical stimulus.

Absurd. Is there a physical difference between a letter written in
Chinese and one written in English...some sort of magic neurochemical
that wafts off of the Chinese ink that prevents my cortex from parsing
the characters?

> That the physical stimulus has
> associated qualia is not observable:
> a scientist would see the neuron
> firing, explain why it fired in physical terms, and then wonder as an
> afterthought if the neuron "felt" anything while it was firing.

Which is why that approach is doomed to failure. There is no point to
the brain other than to help process qualia. Very little of the brain
is required for a body to survive. Insects have brains, and they
survive quite well.

> >> A neuron has a limited number of duties: to fire if it sees a certain
> >> potential difference across its cell membrane or a certain
> >> concentration of neurotransmitter.
> > That is a gross reductionist misrepresentation of neurology. You are
> > giving the brain less functionality than mold. Tell me, how does this
> > conversation turn into cell membrane potentials or neurotransmitters?
> Clearly, it does, since this conversation occurs when the neurons in
> our brains are active.

My God. You are unbelievable. I give you a straightforward, unarguably
obvious example of a phenomenon which obviously has absolutely nothing
to do with cellular biology but is nonetheless controlling the
behavior of neurological cells, and you answer that it must be
biological anyway. Your position, literally, is that 'I can't be
wrong, because I already know that I am right.'

>The important functionality of the neurons is
> the action potential, since that triggers other neurons and ultimately
> muscle. The complex cellular apparatus in the neuron is there to allow
> this process to happen, as the complex cellular apparatus in the
> thyroid is to enable secretion of thyroxine. An artificial thyroid
> that measured TSH levels and secreted thyroxine accordingly could
> replace the thyroid gland even though it was nothing like the original
> organ in structure.

But you have no idea what triggers the action potentials in the first
place other than other action potentials. That would leave us
completely incapable of any kind of awareness of the outside world.
You are mistaking the steering wheel for the driver.
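For what it's worth, the "limited repertoire" Stathis describes is usually abstracted in textbooks as a leaky integrate-and-fire unit: a membrane potential that leaks toward rest, accumulates input, and spikes on crossing a threshold. A minimal sketch (all constants illustrative, not physiological, and not something either party proposed):

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire toy model: the potential decays by
    `leak` each step, accumulates the input current, and fires
    (then resets) when it reaches `threshold`."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current
        if v >= threshold:
            spikes.append(t)
            v = 0.0  # reset after the action potential
    return spikes

# A steady sub-threshold drive still fires periodically:
print(simulate_lif([0.5] * 10))  # [2, 5, 8]
```

Of course, what supplies the input current in the first place is exactly the point under dispute in this thread.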

> >>That's all that has to be
> >> simulated. A neuron doesn't have one response for when, say, the
> >> central bank raises interest rates and another response for when it
> >> lowers interest rates; all it does is respond to what its neighbours
> >> in the network are doing, and because of the complexity of the
> >> network, a small change in input can cause a large change in overall
> >> brain behaviour.
> > So if I move my arm, that's because the neurons that have nothing to
> > do with my arm must have caused the ones that do relate to my arm to
> > fire? And 'I' think that I move 'my arm' because why exactly?
> The neurons are connected in a network. If I see something relating to
> the economy that may lead me to move my arm to make an online bank
> account transaction.

What is 'I' and how does it physically create action potentials? The
whole time you are telling me that only neurons can trigger other
neurons, and now you want to invoke 'I'? Does 'I' follow the laws of
physics, or is it magic? Which is it? Does 'I' do anything that cannot
be explained by action potentials and cerebrospinal fluid? I expect
I'm going to hear some metaphysical invocations of 'information' in
the network.

> Obviously there has to be some causal connection
> between my arm and the information about the economy. How do you
> imagine that it happens?

It happens because you make sense of what you read about the economy,
and that sense motivates you to instantiate your own arm muscles to
move your arm. The experience of making sense of the economic news, as
you said, *may* lead 'you' to move your arm - not *will cause* your
arm to move, or your neurons to secrete acetylcholine by themselves.
It's a voluntary, high-level, top-down participation through which you
control your body and your life.

> > If the brain of even a flea were anywhere remotely close to the
> > simplistic goofiness that you describe, we should have figured out
> > human consciousness completely 200 years ago.
> Even the brain of a flea is very complex. The brain of the nematode C
> elegans is the simplest brain we know, and although we have the
> anatomy of its neurons and their connections, no adequate computer
> simulation exists because we do not know the strength of the
> connections.

Why is the strength of the connections so hard to figure out?

> >> In theory we can simulate something perfectly if its behaviour is
> >> computable, in practice we can't but we try to simulate it
> >> sufficiently accurately. The brain has a level of engineering
> >> tolerance, or you would experience radical mental state changes every
> >> time you shook your head. So the simulation doesn't have to get it
> >> exactly right down to the quantum level.
> > Why would you experience a 'radical' mental state change? Why not just
> > an appropriate mental state change? Likewise your simulation will
> > experience an appropriate mental state to what is being used
> > materially to simulate it.
> There is a certain level of tolerance in every physical object we
> might want to simulate. We need to know a lot about it, but we don't
> need accuracy down to the position of every atom, for if the brain
> were so delicately balanced it would malfunction with the slightest
> perturbation.

A few micrograms of LSD or ricin can change a person's entire life or
end it.

> >> My point was that even a simulation of a very simple nervous system
> >> produces such a fantastic degree of complexity that it is impossible
> >> to know what it will do until you actually run the program. It is,
> >> like the weather, unpredictable and surprising even though it is
> >> deterministic.
> > There is still no link between predictability and intentionality. You
> > might be able to predict what I'm going to order from a menu at a
> > restaurant, but that doesn't mean that I'm not choosing it. You might
> > not be able to predict a tsunami, but that doesn't mean it's because
> > the tsunami is choosing to do something. The difference, I think, has
> > to do with more experiential depth in between each input and output.
> Whether something is conscious or not has nothing to do with whether
> it is deterministic or predictable.

What makes you think that's true? Do you have a counterfactual?

> >> > I understand perfectly why you think this argument works, but you
> >> > seemingly don't understand that my explanations and examples refute
> >> > your false dichotomy. Just as a rule of thumb, anytime someone says
> >> > something like "The only way out of this (meaning their) conclusion "
> >> > My assessment is that their mind is frozen in a defensive state and
> >> > cannot accept new information.
> >> You have agreed (sort of) that partial zombies are absurd
> > No. Stuffed animals are partial zombies to young children. It's a
> > linguistic failure to describe reality truthfully, not an insight into
> > the truth of consciousness.
> This statement shows that you haven't understood what a partial zombie
> is. It is a conscious being which lacks consciousness in a particular
> modality, such as visual perception or language processing, but does
> not notice that anything is abnormal and presents no external evidence
> that anything is abnormal. You have said a few posts back that you
> think this is absurd: when you're conscious, you know you're
> conscious.

I can only use examples where the partial zombie is on the outside
rather than the inside, since there is no way to give an example of
the latter (you either can't tell whether someone else is a zombie, or
you can't tell anything at all if you yourself are one). I understand
exactly what you are saying, I'm just illustrating that if you turn it
around so that we can see the zombie side out but assume a non-zombie
side inside, it's the same thing, and that it's no big deal.

> >>and you have
> >> agreed (sort of) that the brain does not do things contrary to
> >> physics. But if consciousness is substrate-dependent, it would allow
> >> for the creation of partial zombies. This is a logical problem. You
> >> have not explained how to avoid it.
> > Consciousness is not substrate-dependent, it is substrate descriptive.
> > A partial zombie is just a misunderstanding of prognosia. A character
> > in a computer game is a partial zombie.
> A character in a computer game is not a partial zombie as defined
> above. And what's prognosia?

Prognosia is a word I made up, inspired by agnosia, but I've been
using it a lot here. I mean it to refer to projecting our own
subjectivity onto an inanimate object or other unconscious process.
It's related to the concept of the Hyperactive Agency Detection Device.

> Do you mean agnosia, the inability to
> recognise certain types of objects? That is not a partial zombie
> either, since it affects behaviour and the patient is often aware of
> the deficit.

Explained above. I can only use examples which reverse the partial
zombie observation dynamic, but it really makes no difference. A
partial zombie has a missing channel of experience with no exterior
sign of deficit, while something like a ventriloquist dummy or stuffed
animal has exterior signs of augmented agency but has no corresponding
interior experience. It's the same thing, just algebraically reversed.

> >> Would it count as "internal motives" if the circuit were partly
> >> controlled by thermal noise, which in most circuits we try to
> >> eliminate? If the circuit were partly controlled by noise it would
> >> behave unpredictably (although it would still be following physical
> >> laws which could be described probabilistically). A free will
> >> incompatibilist could then say that the circuit acted of its own free
> >> will. I'm not sure that would satisfy you, but then I don't know what
> >> else "internal motives" could mean.
> > These are the kinds of things that can only be determined through
> > experiments. Adding thermal noise could be a first step toward an
> > organic-level molecular awareness. If it begins to assemble into
> > something like a cell, then you know you have a winner.
> What is special about a cell? Is it that it replicates?

You tell me. Why do we care if we find a cell on Mars? It's because
it's what we're made of and what our lives are made of. We care about
life because we are alive. Cells are life. Without that first hand
experience, if all we had to go on were computationalist theories,
then we should make no particular distinction between crushing human
heads and cracking coconuts, or between a neuron and a rusty nail.

> I don't see
> that as having any bearing on intelligence or consciousness. Viruses
> replicate and I would say many computer programs are already more
> intelligent than viruses.

If a person could be conscious without having cells then I would agree
with you. Replication is part of what life does, but life is more than
replication; it is replication of feeling. Computer programs don't
replicate feeling.

> >> The outcome of the superbowl creates visual and aural input, to which
> >> the relevant neurons respond using the same limited repertoire they
> >> use in response to every input.
> > There is no disembodied 'visual and aural input' to which neurons
> > respond. Our experience of sound and vision *are* the responses of our
> > neurons to their own perceptual niche - cochlear vibration summarized
> > through auditory nerve and retinal cellular changes summarized through
> > the optic nerve are themselves summarized by the sensory regions of
> > the brain.
> > The outcome of the superbowl creates nothing but meaningless dots on a
> > lighted screen. Neurons do all the rest. If you call that a limited
> > repertoire, in spite of the fact that every experience of every living
> > being is encapsulated entirely within it, then I wonder what could be
> > less limited?
> The individual neurons have a very limited repertoire of behaviour,
> but the brain's behaviour and experiences are very rich. The richness
> comes not from the individual neurons, which are not much different to
> any other cell in the body, but from the complexity of the network. If
> you devalue the significance of this then why do we need the network?
> Why do we need a brain at all - why don't we just have a single neuron
> doing all the thinking?

Why do we need a brain at all, why not just use the cells of the body
to host the complex network? You need both. A complex network of ping
pong balls is useless, and a single complex cell is too fragile and
limited. You need a complex network of complex awareness to get
something like human consciousness. You can get nematode level
consciousness from a much simpler rig.

> >> Intelligent organisms did as a matter of fact evolve. If they could
> >> have evolved without being conscious (as you seem to believe of
> >> computers) then why didn't they?
> > Because the universe is not all about evolution. We perceive some
> > phenomena in our universe to be more intelligent than others, as a
> > function of what we are. Some phenomena have 'evolved' without much
> > consciousness (in our view) - minerals, clouds of gas and vapor, etc.
> The question is, why did humans evolve with consciousness rather than
> as philosophical zombies? The answer is, because it isn't possible to
> make a philosophical zombie since anything that behaves like a human
> must be conscious as a side-effect.

I understand that you are able to take that argument seriously, but
it's just jaw-dropping to me that anyone could. Why does fire exist?
Because it isn't possible to burn anything without starting a fire
because anything that behaves like it's on fire must be burning as a
side effect. It's just the most nakedly fallacious non-explanation I
can imagine. It has zero explanatory power, and besides that, it's
completely untrue. An actor's presence in a movie behaves like a human
but the image on the screen is not 'conscious as a side-effect'. They
are not even a little bit more conscious than a picture of a circle.
Just, ugh.

> >> No, the movie would repeat, since the digits of pi will repeat.
> > The frames of the movie would not repeat unless you limit the sequence
> > of frames to an arbitrary time.
> Yes, a movie of arbitrary length will repeat. But consciousness for
> the subject is not like a movie, it is more like a frame in a movie. I
> am not right now experiencing my whole life, I am experiencing the
> thoughts and perceptions of the moment.

Not at all. Your ability to make sense of the thoughts and perceptions
of the moment is entirely predicated on the conditioning and
experiences of your life thus far. If you had no access to that, you
would be worse off even than an infant (did you know French and German
babies come out of the womb with their respective dialects intoned in
their crying?). You would be an embryo - unable to read or understand
language, to make sense of images or sound, to control your body. Your
naive perception is just the thin layer of Δt on the tip of the
iceberg of not only your accumulated sense and motives, but those of
your family, friends, culture, species, planet, etc.

> The rest of my life is only
> available to me should I choose to recall a particular memory. Thus
> the thoughts I am able to have are limited by my working memory - my
> RAM, to use a computer analogy.

Even each memory is braided out of the fabric of your entire life.
It's not like a computer, where you load the OS and it runs in RAM; it
is as if each line of code recapitulated the entire OS from a
specific, dynamically changing perspective. At any time you can draw
upon not
only your limited recall, but the cumulatively entangled wisdom of
your entire lifetime - an accumulation which is still growing and
changing; not merely erasing and rewriting like a Turing machine, but
partially erasing, amplifying, distorting, and creating vast fugues of
ambiguous superposition and potential scenarios extending into the
past, future, and fantasy.

> >> After
> >> a 10 digit sequence there will be at least 1 digit repeating, after a
> >> 100 digit sequence there will be at least a digit pair repeating,
> >> after a 1000 digit sequence there will be at least a triplet of digits
> >> repeating and so on. If you consider one minute sequences of a movie
> >> you can calculate how long you would have to wait before a sequence
> >> that you had seen before had to appear.
> > The movie lasts forever though. I don't care if digits or pairs
> > repeat, just as a poker player doesn't stop playing poker after he's
> > seen what all the cards look like, or seen every winning hand there
> > is.
> Yes, but the point was that a brain of finite size can only have a
> finite number of distinct thoughts.

A finite number at one time, but thoughts do not occur in a single
instant. If you have infinite time, you have infinite thoughts. If you
have finite brain size, you maybe have finite kinds of thoughts, but
really, even that is moot because of how thought is recapitulated. You
can forget your entire childhood but retain the language skills which
you learned in it.
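Stathis's repetition claim is just the pigeonhole principle: over an alphabet of d symbols there are only d^k distinct blocks of length k, so any sequence longer than d^k + k - 1 must repeat some k-block. A quick illustration (the digit strings below are my own examples):

```python
def first_repeated_block(seq, k):
    """Return (block, i, j) for the first k-block that occurs twice,
    at positions i and j, or None if no k-block repeats."""
    seen = {}
    for i in range(len(seq) - k + 1):
        block = seq[i:i + k]
        if block in seen:
            return block, seen[block], i
        seen[block] = i
    return None

pi_digits = "3141592653589793"
print(first_repeated_block(pi_digits, 1))  # ('1', 1, 3)
print(first_repeated_block(pi_digits, 2))  # None - 16 digits need not repeat a pair
# But any 11-digit string over 0-9 must repeat a single digit:
print(first_repeated_block("01234567890", 1))  # ('0', 0, 10)
```

This only confirms the combinatorial claim about finite sequences; it settles nothing about whether experience repeats.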

> >> If the Internet is implemented on a finite computer network then there
> >> is only a finite amount of information that the network can handle.
> > Only at one time. Given an infinite amount of time, there is no limit
> > to the amount of 'information' that it can handle.
> As I explained:
> >> For simplicity, say the Internet network consists of three logic
> >> elements. Then the entire Internet could only consist of the
> >> information 000, 001, 010, 100, 110, 101, 011 and 111.
> >> Another way to
> >> look at it is the maximum amount of information that can be packed
> >> into a certain volume of space, since you can make computers and
> >> brains more efficient by increasing the circuit or neuronal density
> >> rather than increasing the size. The upper limit for this is set by
> >> the Bekenstein bound (http://en.wikipedia.org/wiki/Bekenstein_bound).
> >> Using this you can calculate that the maximum number of distinct
> >> physical states the human brain can be in is about 10^10^42; a *huge*
> >> number but still a finite number.
> > The Bekenstein bound assumes only entropy, and not negentropy or
> > significance. Conscious entities export significance, so that every
> > atom in the cosmos is potentially an extensible part of the human
> > psyche. Books. Libraries. DVDs. Microfiche. Nanotech.
> Negentropy has a technical definition as the difference between the
> entropy of a system and the maximum possible entropy.

In order to have any phenomenon which is not at its maximum possible
entropy, you would need to have a principle which creates and
accumulates order. That is what I mean by negentropy.
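For reference, negentropy in the technical sense Stathis cites is just H_max - H(p): the gap between a system's Shannon entropy and the maximum (uniform) entropy over the same outcomes. A minimal sketch:

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(x * log2(x) for x in p if x > 0)

def negentropy(p):
    """Negentropy: maximum entropy log2(n) minus actual entropy."""
    return log2(len(p)) - entropy(p)

print(entropy([0.5, 0.5]))               # 1.0 bit - fair coin, zero negentropy
print(round(negentropy([0.9, 0.1]), 3))  # 0.531 bits of 'order'
```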

> It has no
> bearing on the Bekenstein bound, which is the absolute maximum
> information that a volume can contain. It is a hard physical limit,
> not disputed by any physicist as far as I am aware. Anyway, it's
> pretty greedy of you to be dissatisfied with a number like 10^10^42,
> which if it could be utilised would allow one brain to have far more
> thoughts than all the humans who have ever lived put together.

Yeah, I don't know why you are making such a fuss over the difference
between an astronomically huge number of possible states at any one
time and a truly infinite number of states, but my point is that not
only is the brain trillions of times more complex than a simple binary
grid like a TV screen, it is also constantly growing and changing,
condensing and compressing experiences. And its awareness cannot be
limited to a single period of time. It takes years to experience one
complete 'childhood'.
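As a sanity check on the 10^10^42 figure, the Bekenstein bound I ≤ 2πRE/(ħc ln 2) can be evaluated for rough brain-like numbers (radius ~0.1 m, mass ~1.5 kg; both are my ballpark assumptions, not from the thread):

```python
from math import pi, log10

# Physical constants (SI)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.998e8             # speed of light, m/s
ln2 = 0.6931471805599453

# Ballpark 'brain' parameters (illustrative assumptions)
R = 0.1        # radius in metres
m = 1.5        # mass in kg
E = m * c**2   # rest-mass energy, J

bits = 2 * pi * R * E / (hbar * c * ln2)   # Bekenstein bound in bits
states_exponent = bits * log10(2)          # number of states = 10**states_exponent

print(f"max information ~ {bits:.2e} bits")
print(f"max distinct states ~ 10^(10^{log10(states_exponent):.1f})")
```

This reproduces Stathis's "about 10^10^42" to within the roughness of the inputs.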

> >> That's right, we need only consider a substance that can successfully
> >> substitute for the limited range of functions we are interested in,
> >> whether it be cellular communication or cleaning windows.
> > Which is why, since we have no idea what the ranges of functions or
> > dependencies are contained in the human psyche, we cannot assume that
> > watering the plants with any old clear liquid should suffice.
> We need to know what the functions are before we can substitute for them.

Exactly. We don't know them yet, and we don't know how to know them.

> >> But TV programs can be shown on a TV with an LCD or CRT screen. The
> >> technologies are entirely different, even the end result looks
> >> slightly different, but for the purposes of watching and enjoying TV
> >> shows they are functionally identical.
> > Ugh. Seriously? You are going to say that, in a universe of black and
> > white versus color TVs, it's no big deal whether it's in color or not?
> > It's like saying that the difference between a loaded pistol blowing
> > your brains out and a toy water gun are that one is a bit noisier and
> > messier than the other. I made my point, you are grasping for straws.
> The function of a black and white TV is different from that of a
> colour TV. However, the function of a CRT TV is similar to that of an
> LCD TV (both colour) even though the technology is completely
> different.

The technology is completely different but both are designed
specifically to make the same sense out of the same signals, which are
designed also by us, specifically to be decoded. You're still ignoring
my observation that just making a TV doesn't mean that the color will
show up 'because it must be a side effect' of TVs.

> >> Differences such as the weight
> >> or volume of the TV exist but are irrelevant when we are discussing
> >> watching the picture on the screen, even though weight and volume
> >> contribute to functional differences not related to picture quality.
> >> Yes, no doubt it would be difficult to go substituting cellular
> >> components, but as I have said many times that makes no difference to
> >> the functionalist argument, which is that *if* a way could be found to
> >> preserve function in a different substrate it would also preserve
> >> consciousness.
> > Of course, the functionalist argument agrees with itself. If there is
> > a way to do the impossible, then it is possible.
> It's not impossible, there is a qualitative difference between
> difficult and impossible. It would be difficult for humans to build a
> planet the size of Jupiter, but there is no theoretical reason why it
> could not be done. On the other hand, it is impossible to build a
> square triangle, since it presents a logical contradiction. There is
> no logical contradiction in substituting the function of parts of the
> human body. Substituting one thing for another to maintain function is
> one of the main tasks to which human intelligence is applied.

I understand what you are saying, and I would agree with you if the
contents of the psyche were not so utterly different from the physical
characteristics of the brain. We have no precedent for engineering
such a thing. It dwarfs the idea of building Jupiter. If you say we
can substitute lead for gold, I would say, well, sure, if you blast it
down to protons and reassemble it atom by atom - or find an easier way
to do it with a particle accelerator. But we have no common
denominator of human consciousness to work from. A few micrograms off
here or chromosomes off there, and you get major changes. I'm much
more optimistic about replicating tissue, and augmenting the nervous
system, but actually replacing it and expecting 'you' to still be in
there is a completely different proposition.

> >> That's right, since the visual cortex does not develop properly unless
> >> it gets the appropriate stimulation. But there's no reason to believe
> >> that stimulation via a retina would be better than stimulation from an
> >> artificial sensor. The cortical neurons don't connect directly to the
> >> rods and cones but via ganglion cells which in turn interface with
> >> neurons in the thalamus and midbrain. Moreover, the cortical neurons
> >> don't directly know anything about the light hitting the retina: the
> >> brain deduces the existence of an object forming an image because
> >> there is a mapping from the retina to the visual cortex, but it would
> >> deduce the same thing if the cortex were stimulated directly in the
> >> same way.
> > No, it looks like it doesn't work that way:
> >http://www.mendeley.com/research/tms-of-the-occipital-cortex-induces-...
> That is consistent with what I said.

No, it's the opposite: "some of the blind subjects reported tactile
sensations in the fingers that were somatotopically organized onto the
visual cortex", meaning that blind subjects who have their visual
cortex stimulated don't suddenly start seeing images and colors - they
just feel it in their fingertips. They don't see light; they feel in
braille.

> >> It is irrelevant to the discussion whether the feeling of free will is
> >> observable from the outside. I don't understand why you say that such
> >> a feeling would have "no possible reason to exist or method of arising
> >> in a deterministic world". People are deluded about all sorts of
> >> things: what reason for existing or method of arising do those
> >> delusions have that a non-deterministic free will delusion would lack?
> > Because free will in a deterministic universe would not even be
> > conceivable in the first place to have a delusion about it. Even
> > delusional minds can't imagine a square circle or a new primary color.
> You're saying that free will in a deterministic world is
> contradictory. That may be the case if you define free will in a
> particular way (and not everyone defines it that way), but still that
> does not imply that the *feeling* of free will is incompatible with
> determinism.

I think that it is, because determinism assumes that everything that
happens happens for a particular reason. What would be the reason for
such a feeling to exist, and how would it come into existence? Why
would determinism care if something pretends that it is not
determined, and how could it even ontologically conceive of
non-determinism?

> >> This is your guess, but if everything has qualia then perhaps a
> >> computer running a program could have similar, if not exactly the
> >> same, qualia to those of a human.
> > Sure, and perhaps a trash can that says THANK YOU on it is sincerely
> > expressing its gratitude. With enough ecstasy, it very well might
> > seem like it does. Why would that indicate anything about the native
> > qualia of the trash can?
> That's not an argument.

That's not a rebuttal.

> There is no logical or empirical reason to
> assume that the qualia of a computer that behaves like you cannot be
> very similar to your own. Even if you believe qualia are
> substrate-dependent, completely different materials can have the same
> physical properties, so why not the same qualia?

It's possible, I just don't think it's likely. It's possible that you
could make a carrot out of aluminum, but I don't think that's how you
make a carrot.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.