On Sep 23, 11:13 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Thu, Sep 22, 2011 at 12:09 PM, Craig Weinberg <whatsons...@gmail.com> 
> wrote:

> >>You claim
> >> that ion channels can open and neurons fire in response to thoughts
> >> rather than a chain of physical events.
>
> > No, I observe that ion channels do in fact open and neurons do indeed
> > fire, not 'in response' to thoughts but as the public physical view of
> > events which are subjectively experienced as thoughts. The conclusion
> > that you continue to jump to is that thoughts are caused by physical
> > events rather than being the experience of events which also have a
> > physical dimension.
>
> Do you agree or don't you that the observable (or public, or third
> person) behaviour of neurons can be entirely explained in terms of a
> chain of physical events?

No, nothing can be *entirely* explained in terms of a chain of
physical events in the way that you assume physical events occur.
Physical events are shared experiences, dependent upon the
perceptual capabilities and choices of the participants in them. That
is not to say that the behavior of neurons can't be *adequately*
explained for specific purposes: medical, biochemical,
electromagnetic, etc.

> At times you have said that thoughts, over
> and above physical events, have an influence on neuronal behaviour.
> For an observer (who has no access to whatever subjectivity the
> neurons may have) that would mean that neurons sometimes fire
> apparently without any trigger, since if thoughts are the trigger this
> is not observable.

No. Thoughts are not the trigger of physical events, they are the
experiential correlate of the physical events. It is the sense that
the two phenomenologies make together that is the trigger.

> If, on the other hand, neurons do not fire in the
> absence of physical stimuli (which may have associated with them
> subjectivity - the observer cannot know this)

We know, for example, that gambling affects the physical behavior of
the amygdala. What physical force do you posit that emanates from
'gambling' that penetrates the skull and blood brain barrier to
mobilize those neurons?

> then it would appear
> that the brain's activity is a chain of physical events, which could
> be computed.

If you watch a color TV show on a black and white TV, then it would
appear that the TV show is a black and white event. It's not that the
events are physical, it's that they have a physical side when they are
detected by the physical side of an observer.

>
> >>This would be magic by
> >> definition, and real magic rather than just apparent magic due to our
> >> ignorance, since the thoughts are not directly observable by any
> >> experimental technique.
>
> > Thoughts are not observed, they are experienced directly. There is
> > nothing magic about them, except that our identification with them
> > makes them hard to grasp and makes it easy for us to take them for
> > granted.
>
> But if thoughts influence behaviour and thoughts are not observed,
> then observation of a brain would show things happening contrary to
> physical laws,

No. Thoughts are not observed by an MRI. An MRI can only show the
physical shadow of the experiences taking place.

>such as neurons apparently firing for no reason, i.e.
> magically. You haven't clearly refuted this, perhaps because you can
> see it would result in a mechanistic brain.

No, I have refuted it over and over and over and over and over. You
aren't listening to me, you are stuck in your own cognitive loop.
Please don't accuse me of this again until you have a better
understanding of what I mean by what I'm saying about the relationship
between gambling and the amygdala.

"We cannot solve our problems with the same thinking we used when we
created them" - A. Einstein.

>
> >> How does nature "know" more than a computer simulation?
>
> > Because nature has to know everything. What nature doesn't know is not
> > possible, by definition. A computer simulation can only report what we
> > have programmed it to test for, it doesn't know anything by itself. A
> > real cell knows what to do when it encounters any particular
> > condition, whereas a computer simulation of a cell will fail if it
> > encounters a condition which was not anticipated in the program.
>
> A neuron has a limited number of duties: to fire if it sees a certain
> potential difference across its cell membrane or a certain
> concentration of neurotransmitter.

That is a gross reductionist misrepresentation of neurology. You are
giving the brain less functionality than mold. Tell me, how does this
conversation turn into cell membrane potentials or neurotransmitters?
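
If that really were all there is to it, the whole 'repertoire' you
describe would fit in a dozen lines of Python. A toy sketch, with
made-up parameters, not a biological model:

import random

# A minimal integrate-and-fire toy of the 'threshold switch' picture:
# accumulate input, leak toward rest, fire and reset at a threshold.
# Illustrative parameters only; nothing here is biologically calibrated.
def lif(inputs, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for t, i in enumerate(inputs):
        v = leak * v + i          # integrate the input with leak
        if v >= threshold:        # 'fire' when the threshold is crossed
            spikes.append(t)
            v = 0.0               # reset after firing
    return spikes

random.seed(0)
print(lif([random.uniform(0.0, 0.3) for _ in range(100)]))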

>That's all that has to be
> simulated. A neuron doesn't have one response for when, say, the
> central bank raises interest rates and another response for when it
> lowers interest rates; all it does is respond to what its neighbours
> in the network are doing, and because of the complexity of the
> network, a small change in input can cause a large change in overall
> brain behaviour.

So if I move my arm, that's because the neurons that have nothing to
do with my arm must have caused the ones that do relate to my arm to
fire? And 'I' think that I move 'my arm' because why exactly?

If the brain of even a flea were anywhere remotely close to the
simplistic goofiness that you describe, we should have figured out
human consciousness completely 200 years ago.

>
> >>I don't know
> >> what I'm going to do tomorrow or all the possible inputs I might
> >> receive from the universe tomorrow. A simulation is no different: in
> >> general, you don't know what it's going to do until it does it.
>
> > A simulation is different in that it is trying to simulate something
> > else. The genuine subject of simulation can never be wrong because it
> > is not trying to be anything other than what it is. If you have a
> > computer program that simulates an acorn, it is always going to be
> > different from an acorn in that if the behavior of the two things
> > diverges, the simulation will always be 'wrong'.
>
> In theory we can simulate something perfectly if its behaviour is
> computable, in practice we can't but we try to simulate it
> sufficiently accurately. The brain has a level of engineering
> tolerance, or you would experience radical mental state changes every
> time you shook your head. So the simulation doesn't have to get it
> exactly right down to the quantum level.

Why would you experience a 'radical' mental state change? Why not just
an appropriate mental state change? Likewise your simulation will
experience a mental state appropriate to whatever is being used
materially to simulate it.

>
> >>Even
> >> the simplest simulation of a brain treating neurons as switches would
> >> result in fantastically complex behaviour. The roundworm c. elegans
> >> has 302 neurons and treating them as on/off switches leads to 2^302 =
> >> 8*10^90 permutations.
>
> > Again, complexity does not impress me as far as a possibility for
> > making the difference between awareness and non-awareness.
>
> My point was that even a simulation of a very simple nervous system
> produces such a fantastic degree of complexity that it is impossible
> to know what it will do until you actually run the program. It is,
> like the weather, unpredictable and surprising even though it is
> deterministic.

There is still no link between predictability and intentionality. You
might be able to predict what I'm going to order from a menu at a
restaurant, but that doesn't mean that I'm not choosing it. You might
not be able to predict a tsunami, but that doesn't mean it's because
the tsunami is choosing to do something. The difference, I think, has
to do with how much experiential depth lies between each input and output.
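
(As an aside, the combinatorics you quoted earlier are easy to
sanity-check in Python:

states = 2 ** 302          # each of C. elegans' 302 neurons on or off
print(f"{states:.3e}")     # -> 8.149e+90

I don't dispute the arithmetic, only what it is supposed to show.)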

>
> >> If philosophical zombies are possible then we
> >> are back with the fading qualia thought experiment: part of your brain
> >> is replaced with an analogue that behaves (third person observable
> >> behaviour, to be clear) like the biological equivalent but is not
> >> conscious, resulting in you being a partial zombie but behaving
> >> normally, including declaring that you have normal vision or whatever
> >> the replaced modality is.
>
> > No. You're stuck in a loop. If we do a video Skype, the moving image
> > and sound representing me is a philosophical zombie. If you ask it if
> > it feels and sees, it will tell you yes, but the image on your screen
> > is not feeling anything.
>
> When you see someone in real life you see their skin moving, but that
> doesn't mean the skin is conscious or the skin is a philosophical
> zombie. It is the thing driving the skin that we are concerned about.

Huh? If you ask their skin if it feels and sees it won't answer you.

>
> >>The only way out of this conclusion (which
> >> you have agreed is absurd) given that the brain functions following
> >> the laws of physics is to say that if the replacement part behaves
> >> normally (third person observable behaviour, I have to keep reminding
> >> you) then it must also have normal consciousness.
>
> > I understand perfectly why you think this argument works, but you
> > seemingly don't understand that my explanations and examples refute
> > your false dichotomy. Just as a rule of thumb, anytime someone says
> > something like "the only way out of this conclusion" (meaning their
> > own conclusion), my assessment is that their mind is frozen in a defensive state and
> > cannot accept new information.
>
> You have agreed (sort of) that partial zombies are absurd

No. Stuffed animals are partial zombies to young children. It's a
linguistic failure to describe reality truthfully, not an insight into
the truth of consciousness.

>and you have
> agreed (sort of) that the brain does not do things contrary to
> physics. But if consciousness is substrate-dependent, it would allow
> for the creation of partial zombies. This is a logical problem. You
> have not explained how to avoid it.

Consciousness is not substrate-dependent, it is substrate-descriptive.
A partial zombie is just a misunderstanding of prognosia. A character
in a computer game is a partial zombie.

>
> >> The cell is more simple-minded than the human, I'm sure you'll agree.
> >> It's the consciousness of many cells together that make the human. If
> >> you agree that an electric circuit may have a small spark of
> >> consciousness, then why couldn't a complex assembly of circuits scale
> >> up in consciousness as the cell scales up to the human?
>
> > This is at least the right type of question. To be clear, it's not
> > exactly an electric circuit that is conscious, it is the experience of
> > a semiconductor material which we know as an electric circuit. We see
> > the outside of the thing as objects in space, but the inside of the
> > thing is events experienced through time. You can ignore that though,
> > I'm just clarifying my hypothesis.
>
> > So yes, a complex assembly of circuits could scale up in consciousness
> > as cells do, if and only if the assembling of the circuits is driven,
> > at least in part, by internal motives rather than external
> > programming. People build cities, but cities do not begin to build
> > themselves without people. You can make a fire out of newspaper or
> > gasoline and a spark from some friction-reactive substance but you
> > can't make a substance out of 'fire' in general.
>
> > The materials we have chosen for semiconductors are selected
> > specifically for their ability to be precisely controlled and never to
> > deviate from their stable molecular patterns under electronic
> > stimulation. To expect them to scale up like living organisms is to
> > expect to start a fire with fire retardant.
>
> Would it count as "internal motives" if the circuit were partly
> controlled by thermal noise, which in most circuits we try to
> eliminate? If the circuit were partly controlled by noise it would
> behave unpredictably (although it would still be following physical
> laws which could be described probabilistically). A free will
> incompatibilist could then say that the circuit acted of its own free
> will. I'm not sure that would satisfy you, but then I don't know what
> else "internal motives" could mean.

These are the kinds of things that can only be determined through
experiments. Adding thermal noise could be a first step toward an
organic-level molecular awareness. If it begins to assemble into
something like a cell, then you know you have a winner.

>
> >> But the amygdala doesn't know who's going to win the Super Bowl
> >> either! For that matter, the TV doesn't know who's going to win, it
> >> just follows rules which tell it how to light up the pixels when
> >> certain signals come down the antenna.
>
> > Right. That's why you can't model the behavior of the amygdala -
> > because it depends on things like the outcome of the Super Bowl. The
> > behavior of atoms does not depend on human events, therefore it is an
> > insufficient model to predict the behavior of the brain.
>
> The outcome of the superbowl creates visual and aural input, to which
> the relevant neurons respond using the same limited repertoire they
> use in response to every input.

There is no disembodied 'visual and aural input' to which neurons
respond. Our experience of sound and vision *are* the responses of our
neurons to their own perceptual niche - cochlear vibration summarized
through auditory nerve and retinal cellular changes summarized through
the optic nerve are themselves summarized by the sensory regions of
the brain.

The outcome of the superbowl creates nothing but meaningless dots on a
lighted screen. Neurons do all the rest. If you call that a limited
repertoire, in spite of the fact that every experience of every living
being is encapsulated entirely within it, then I wonder what could be
less limited?

> Basically, a neuron can only fire or
> not fire, and the trigger is either a voltage-activated or
> ligand-activated ion channel. The brain's behaviour is complex and
> unpredictable despite the relatively simple behaviour of the neurons
> due to the complexity of the network.

Neurons are living organisms. We can access some views of their
activities by looking through a microscope or at an MRI, and we can
access some views by experiencing those activities first hand as what
we call 'our entire lives', and 'the only universe we can ever know'.

>
> >>The conscious intention is invisible to an outside
> >> observer. An alien scientist could look at a human brain and explain
> >> everything that was going on while still remaining agnostic about
> >> human consciousness.
>
> > Yes, but if the alien scientist's perceptual niche overlaps our own,
> > they could infer our conscious intention (because that's how sense
> > works...it imitates locally what it cannot sense directly...for us,
> > that indirect sense is about material objects in space and
> > computation.)
>
> The alien would possibly be based on organic chemistry but the
> chemicals would be different to ours, and its nervous system would be
> different to ours. It might even be based on a completely different
> structure, electrical circuits or hot plasma inside a star. How would
> it know if its perceptions were anything like ours?

If it's electrical circuits or star plasma then it's not organic
chemistry. If it is organic chemistry then it might be similar enough
to ourselves to allow for a modicum of 'common sense'.

>
> >> That there is no evolutionary role for consciousness is a significant
> >> point.
>
> > Yes, I agree. It doesn't make evolution any less relevant to shaping
> > the content of consciousness, but the fact of its existence at all shows
> > that not everything that evolves needs to be described in purely
> > physical terms. We can just expand our view of evolution so it's not
> > just a story about bodies eating and reproducing, but about evolving
> > perception and feeling. Spectacular moments like the invention of
> > photosynthesis coinciding with the discovery of color by marine
> > organisms finding different ways to feel the sun inside of themselves.
>
> >>It leads me to the conclusion that consciousness is a necessary
> >> side-effect of intelligent behaviour, since intelligent behaviour is
> >> the only thing that could have been selected for.
>
> > That would be what you would have to conclude if you were operating on
> > theory alone, but we have the benefit of first hand experience to draw
> > upon, which shows that intelligent behavior is a side-effect of
> > consciousness and not the other way around. It takes years for humans
> > to develop evolutionarily intelligent behavior. Also intelligent
> > behavior is not necessary for evolution. Blue-green algae is still
> > around. It hasn't had to learn any new tricks in over a billion
> > years to survive.
>
> Intelligent organisms did as a matter of fact evolve. If they could
> have evolved without being conscious (as you seem to believe of
> computers) then why didn't they?

Because the universe is not all about evolution. We perceive some
phenomena in our universe to be more intelligent than others, as a
function of what we are. Some phenomena have 'evolved' without much
consciousness (in our view) - minerals, clouds of gas and vapor, etc.

>
> >> > No, the brain size only could correlate with bandwidth. A TV screen
> >> > doesn't grow over time, and it will never have to start repeating. A
> >> > lizard is like a 70" HDTV flat screen compared to a flea's monochrome
> >> > monitor, or a silicon chip's single band radio.
>
> >> A TV screen *will* start repeating after a long enough period. If it
> >> has N pixels since each pixel can only be on or off the number of
> >> possible images it can show is 2^N.
>
> > A single screen will repeat but the sequence of screens will not. You
> > could make a movie where you assign one screen frame to each integer
> > (say an all red screen for 1, orange for 2, yellow for 3, etc) and
> > then just synchronize the movie to the digits of Pi, one screen
> > frame per digit, without end. Would you agree that movie would not repeat?
>
> No, the movie would repeat, since the digits of pi will repeat.

The frames of the movie would not repeat unless you limit the sequence
of frames to an arbitrary time.

> After
> a 10 digit sequence there will be at least 1 digit repeating, after a
> 100 digit sequence there will be at least a digit pair repeating,
> after a 1000 digit sequence there will be at least a triplet of digits
> repeating and so on. If you consider one minute sequences of a movie
> you can calculate how long you would have to wait before a sequence
> that you had seen before had to appear.

The movie lasts forever though. I don't care if digits or pairs
repeat, just as a poker player doesn't stop playing poker after he's
seen what all the cards look like, or seen every winning hand there
is.
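
To make the two claims concrete: fixed-length windows must repeat by
pigeonhole, but that says nothing about the stream as a whole ever
becoming periodic. A quick sketch (first 50 digits of Pi hard-coded,
purely for illustration):

# Fixed-length windows over a finite alphabet must eventually repeat,
# even if the stream itself never becomes periodic.
digits = "14159265358979323846264338327950288419716939937510"

def first_repeated_window(s, length):
    seen = {}
    for i in range(len(s) - length + 1):
        w = s[i:i + length]
        if w in seen:
            return w, seen[w], i   # window, first and second positions
        seen[w] = i
    return None

print(first_repeated_window(digits, 1))  # ('1', 0, 2): digits repeat at once
print(first_repeated_window(digits, 2))  # ('26', 5, 20): pairs repeat too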

>
> >> A mental state of arbitrary length will start repeating unless the
> >> brain can grow indefinitely.
>
> > That's like saying the internet will start repeating unless your hard
> > drive can grow indefinitely.
>
> If the Internet is implemented on a finite computer network then there
> is only a finite amount of information that the network can handle.

Only at any one time. Given an infinite amount of time, there is no limit
to the amount of 'information' that it can handle.

> For simplicity, say the Internet network consists of three logic
> elements. Then the entire Internet could only consist of the
> information 000, 001, 010, 100, 110, 101, 011 and 111.
> Another way to
> look at it is the maximum amount of information that can be packed
> into a certain volume of space, since you can make computers and
> brains more efficient by increasing the circuit or neuronal density
> rather than increasing the size. The upper limit for this is set by
> the Bekenstein bound (http://en.wikipedia.org/wiki/Bekenstein_bound).
> Using this you can calculate that the maximum number of distinct
> physical states the human brain can be in is about 10^10^42; a *huge*
> number but still a finite number.

The Bekenstein bound assumes only entropy, and not negentropy or
significance. Conscious entities export significance, so that every
atom in the cosmos is potentially an extensible part of the human
psyche. Books. Libraries. DVDs. Microfiche. Nanotech.
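
(For what it's worth, the quoted figure itself is easy to reproduce; a
back-of-envelope sketch, assuming the brain can be approximated as a
sphere of radius 0.1 m and mass 1.5 kg, values I am assuming purely
for illustration:

import math

# Bekenstein bound: max information I = 2*pi*R*E / (hbar*c*ln 2) bits.
hbar = 1.054571817e-34    # J*s
c = 2.99792458e8          # m/s
R = 0.1                   # m, assumed radius
E = 1.5 * c ** 2          # J, rest-mass energy of an assumed 1.5 kg

bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
exp10 = bits * math.log10(2)              # states = 10**exp10
print(f"~{bits:.1e} bits, ~10^(10^{math.log10(exp10):.1f}) states")

My objection is to what the number leaves out, not to the number.)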

>
> >> > That assumes mechanism a priori. If a fire could burn without oxygen,
> >> > then it would do the same things as fire. Do you see how that is
> >> > circular reasoning, since fire is oxidation?
>
> >> If a fire could burn in an atmosphere of nitrous oxide, for example,
> >> it would still be a fire. It wouldn't be a fire in oxygen, but it
> >> would perform other functions of a fire, such as allowing you to cook
> >> your dinner.
>
> > How is nitrous oxide not 'without oxygen'? You're just disagreeing
> > with me to disagree.
>
> A chlorine atmosphere can also support 
> combustion:http://www.tutorvista.com/content/chemistry/chemistry-i/chlorine/chlo...

That just gets into a finer distinction of what we mean by fire. Not
all forms of combustion are oxidation, but that doesn't mean that all
compounds are equally combustible or something.

>
> >> >> and the deduction from this
> >> >> principle that any associated first person experiences would also be
> >> >> the same, otherwise we could have partial zombies.
>
> >> > People who have conversion disorders are partial zombies.
>
> >> No they're not. People with conversion disorders behave as if they
> >> have a neurological deficit for which there is no physiological basis,
> >> as evidenced by various tests.
>
> > Meaning that they experience something different than their neurology
> > indicates they should. A partial zombie is the same thing. Their brain
> > behaves normally but they experience something different than we would
> > expect.
>
> No, the definition of a philosophical zombie is that it behaves
> normally while lacking consciousness. The zombie does not have any
> observable deficit in neurological function, since then it would not
> behave normally.

'Normally' is just a matter of consensus. If you have two people with
hysterical blindness, then the guy who can actually see is the zombie
because he has no neurological deficit but has a different internal
experience.

>
> >> Some sort of substance is needed for the function but different types
> >> of substance will do.
>
> > Depends on the function. Alcohol can substitute for water in a
> > cocktail or for cleaning windows, but not for living organisms to
> > survive on.
>
> That's right, we need only consider a substance that can successfully
> substitute for the limited range of functions we are interested in,
> whether it be cellular communication or cleaning windows.

Which is why, since we have no idea what ranges of functions or
dependencies the human psyche contains, we cannot assume that
watering the plants with any old clear liquid should suffice.

>
> >>The argument is that the consciousness depends
> >> on the function, not the substance. There is no consciousness if the
> >> brain is frozen, only when it is active, and consciousness is
> >> profoundly affected by small changes in brain function when the
> >> substance remains the same.
>
> > What is frozen is the substance of the brain. Function is nothing but
> > the activity of a substance. Substances have similarities and produce
> > similar function, but a function which is alien to a substance cannot
> > be introduced. A black and white TV cannot show programs in color and
> > TV programs cannot be shown without a TV.
>
> But TV programs can be shown on a TV with an LCD or CRT screen. The
> technologies are entirely different, even the end result looks
> slightly different, but for the purposes of watching and enjoying TV
> shows they are functionally identical.

Ugh. Seriously? You are going to say that, in a universe of only
black and white versus color TVs, it's no big deal whether a show is
in color or not? It's like saying that the difference between a loaded
pistol blowing your brains out and a toy water gun is that one is a
bit noisier and messier than the other. I made my point; you are
grasping at straws.

> Differences such as the weight
> or volume of the TV exist but are irrelevant when we are discussing
> watching the picture on the screen, even though weight and volume
> contribute to functional differences not related to picture quality.

Alright then, let's say instead of black and white it's black and
infra-red. Now how is there no difference in its functional picture
quality?

>
> >> >> If you have a different
> >> >> substance that leaves function unchanged, consciousness is unchanged.
> >> >> For example, Parkinson's disease can be treated with L-DOPA which is
> >> >> metabolised to dopamine and it can also be treated with dopamine
> >> >> agonists such as bromocriptine, which is chemically quite different to
> >> >> dopamine. The example makes the point quite nicely: the actual
> >> >> substance is irrelevant, only the effect it has is important.
>
> >> > The substance is not irrelevant. The effect is created by using
> >> > different substances to address different substances that make up the
> >> > brain. Function is just a name for what happens between substances.
>
> >> Yes, but any substance that performs the function will do. A change in
> >> the body will only make a difference to the extent that it causes a
> >> change in function.
>
> > That's fine if you know for a fact that the substance can perform the
> > same function. Biology is verrrry picky about its substances. 53
> > protons in a nucleus: Essential for good health. 33 protons: Death.
>
> Yes, no doubt it would be difficult to go substituting cellular
> components, but as I have said many times that makes no difference to
> the functionalist argument, which is that *if* a way could be found to
> preserve function in a different substrate it would also preserve
> consciousness.

Of course, the functionalist argument agrees with itself. If there is
a way to do the impossible, then it is possible.

>
> >> If you directly stimulate the visual cortex the subject can see red
> >> without there being a red object in front of him. It doesn't seem from
> >> this that the putative red-experiencing molecules in the retina are
> >> necessary for seeing red.
>
> > You're right, the retina is not necessary for seeing red once they
> > have exposed the visual cortex. The mind can continue to have access
> > to red for many years after childhood blindness sets in, depending on
> > how early it happens. If it's too early, eventually the memories fade.
> > It does seem to be necessary to have eyes that can see red at some
> > point though. The mind doesn't seem to be able to make it up on its
> > own in blind people who have gained sight for the first time through
> > surgery.
>
> That's right, since the visual cortex does not develop properly unless
> it gets the appropriate stimulation. But there's no reason to believe
> that stimulation via a retina would be better than stimulation from an
> artificial sensor. The cortical neurons don't connect directly to the
> rods and cones but via ganglion cells which in turn interface with
> neurons in the thalamus and midbrain. Moreover, the cortical neurons
> don't directly know anything about the light hitting the retina: the
> brain deduces the existence of an object forming an image because
> there is a mapping from the retina to the visual cortex, but it would
> deduce the same thing if the cortex were stimulated directly in the
> same way.

No, it looks like it doesn't work that way:
http://www.mendeley.com/research/tms-of-the-occipital-cortex-induces-tactile-sensations-in-the-fingers-of-blind-braille-readers/

>
> >> But I'm not a computer program because there aren't any computer
> >> programs which can sustain a discussion like this. You know that. When
> >> there are computer programs that can pass as people we will have to
> >> wonder if they actually experience the things they say they do.
>
> > It's totally context dependent. In limited contexts, we already cannot
> > tell whether an email is spam or not or a blogger is a bot or not.
> > There will always be a way to prove you're human. It will be demanded
> > by commercial interests and enforceable by law if it comes to that.
> > Identity creation is much more dangerous than identity threat.
>
> There aren't currently any computer programs that can pass the Turing
> test. I'm sure that within minutes if not seconds you could tell the
> difference between a program and a human if you were allowed to
> converse with it interactively.

I agree, of course.

>
> > No, I say that if you think you have non-deterministic free will then
> > it doesn't matter whether that feeling is detectable in some way from
> > the outside, and that such a feeling would have no possible reason to
> > exist or method of arising in a deterministic world. There is a
> > difference. I'm not saying that feeling free means that you are free,
> > I'm saying that feeling free means that determinism is insufficient.
>
> It is irrelevant to the discussion whether the feeling of free will is
> observable from the outside. I don't understand why you say that such
> a feeling would have "no possible reason to exist or method of arising
> in a deterministic world". People are deluded about all sorts of
> things: what reason for existing or method of arising do those
> delusions have that a non-deterministic free will delusion would lack?

Because free will in a deterministic universe would not even be
conceivable in the first place to have a delusion about it. Even
delusional minds can't imagine a square circle or a new primary color.

>
> >> You have said previously that you think every substance could have
> >> qualia, so why exclude silicon?
>
> > Silicon has qualia, but complex organizations of silicon seem like
> > they have different qualia than organisms made of water, sugar, and
> > protein. My guess is that the very things that make silicon a good
> > semiconductor make it a terrible basis for a living organism.
>
> This is your guess, but if everything has qualia then perhaps a
> computer running a program could have similar, if not exactly the
> same, qualia to those of a human.

Sure, and perhaps a trash can that says THANK YOU on it is sincerely
expressing its gratitude. With enough ecstasy, it very well might
seem like it does. Why would that indicate anything about the native
qualia of the trash can?

>
> >> If there were clear evidence tomorrow of determinism I think people
> >> would continue to use the word "novel" as well as "free will" in
> >> exactly the same way.
>
> > Only because people would compartmentalize the mistaken belief in such
> > 'evidence' from the unimpeachable subjective experience of novelty and
> > free will.
>
> Subjective experience of anything does not mean it is true.

True in the sense of corresponding to consensus 3-p expectations, no,
but all subjective experience is true in the sense that it is a
legitimate experience occurring in the universe.

> The only
> unequivocal conclusion I can draw from my subjective experience is
> that I have subjective experiences.

It is only through that one unequivocal subjective experience that you
can draw any conclusions about anything.

Craig
