Hi John,

On 06 Aug 2011, at 17:35, John Mikes wrote:


let me barge in with one fundamental point - not dispersing my reply into those (many and long) details:

As I read your comments/replies (and I agree with your position within the limits I want to expose here), I have the feeling that you agree with the rest of the combatants in considering 'the brain', our idea about 'intelligence' and 'consciousness' as complete and total.

Not at all. We know now that no machine can have a complete theory of its own intelligence and consciousness. That is why we have to bet on a level. It is risky. Necessarily so.

We cannot make a complete theory of our brain, but by studying simpler brains and machines we might bet that nature has already made such a bet, so that at some level our components are functional, and we can then survive with any universal machine (made of any stuff) mimicking our components at that level.

This does not mean that the doctor has any clue about how that machine brings intelligence, or how it makes it possible for a consciousness to be manifested. On the contrary, that hypothesis prevents such complete and total theories from ever existing. I am afraid you don't take into account that the discovery of the universal machine is really the discovery of a door onto the non-completable unknown, which escapes all total theories. I insist: mechanism is less reductionist than any non-mechanist theory. The non-mechanist theories have a reductionist conception of machines and numbers: they exclude a large set of beings from the class of sensible beings.

I argue that it is not. Upon historical reminiscence, all such inventory about these (and many more) concepts has been growing by addition and by phasing out imaginary 'content' as we 'learn' - an ongoing process that does not seem to have reached its ultimate end/completion.

With comp, it will never succeed.

So you are right in considering whatever we knew yesterday (my substitution for today), but not including what we may know tomorrow. Drawing conclusions from an incomplete inventory does not seem acceptable.

You dismiss what we know now, which, put briefly, is that we don't know what universal machines are able or unable to do. If we are universal machines, and not 3-angels or 3-gods, we will never know any ultimate end/completion concerning just numbers and universal machines, in any public, third-person communicable way. With comp, we can at least explain this. We are warned of the infinite number of surprises which await us on that path.
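The claim that no machine can completely survey what universal machines are able or unable to do rests on the classical diagonal argument. Here is a minimal sketch, assuming nothing from the thread itself; the names `would_halt` and `diagonal` are illustrative, and `would_halt` is deliberately left unimplemented, since the point is that no total implementation can exist:

```python
def would_halt(program, arg):
    """Hypothetical total halting decider. If such a function existed
    and always returned True/False correctly, the diagonal program
    below would refute it."""
    raise NotImplementedError("provably impossible in general")

def diagonal(program):
    # Do the opposite of whatever the decider predicts about
    # running `program` on its own description.
    if would_halt(program, program):
        while True:   # loop forever, contradicting the prediction
            pass
    return "halted"   # halt, contradicting the prediction

# Feeding diagonal to itself yields a contradiction either way:
# would_halt(diagonal, diagonal) can be neither True nor False,
# so no universal machine can completely predict its own kind.
```

This is why betting on a substitution level is unavoidable: the machine's own abilities form a domain it cannot exhaustively decide from the inside.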



On Sat, Aug 6, 2011 at 11:18 AM, Stathis Papaioannou <stath...@gmail.com> wrote:

On Sat, Aug 6, 2011 at 11:03 PM, Craig Weinberg <whatsons...@gmail.com> wrote:

>> My position is that consciousness occurs necessarily if the sort of
>> activity that leads to intelligent behaviour occurs.
> Consciousness is the same thing as that which 'leads to intelligent
> behavior' for the subjective perspective (which makes your position
> tautological, that consciousness occurs if consciousness occurs) but
> for the objective perspective, there is no such thing as observable
> behavior that is intrinsically intelligent, only behavior which
> reminds one of their own intelligent motives. Let's call the
> subjective view of one's own behaviors 'motives' for clarity, and the
> objective view of 'intelligent seeming' as 'anthropomorphic behavior',
> or more universally, isomorphic phenomenology.

Intelligence is a behaviour and consciousness is an internal state. It
appears that what most people call "intelligent behaviour" you call
"anthropomorphic behaviour".

>> This is not
>> immediately obvious, at least to me. I assume therefore that it is not
>> true: that it is possible to have intelligent behaviour (or
>> neuron-like behaviour) without consciousness. This assumption is then
>> shown to lead to absurdity.
> What absurdity? A cartoon of a neuron has neuron-like behavior, and
> it's clearly not intelligent. At what point does a cartoon improve
> enough that it becomes conscious? To me that shows the assumption that
> you can't have something that behaves like a physical neuron without
> there existing such a thing as consciousness to be absurd. Of course you
> can make any physical design of surfaces and mechanical relations
> between them without there being some feeling entity appearing to
> enjoy your simulation.

Assume that this is so, and see where it leads.

> Yes, perception occurs at the brain - which is why you can numb pain
> with narcotics without affecting pain receptors, but perception also
> occurs everywhere in the nervous system, which is why you can use a
> local anesthetic on the nociceptors, which doesn't affect the
> brain. If it was truly all in the brain, it would compensate for the
> missing signals from your numb finger and perceive pain anyhow if it
> was being operated on, just like an optical illusion compensates for
> paradoxical inputs with a sensory simulation.

How could the brain compensate for missing sensory signals? If it
could do that it would be a kind of telepathy and we would not need
sense organs. Perception occurs in the brain since you can have
perception without sense data, as in dreams and hallucinations, but
cutting the optic nerves causes blindness even though the eyes may be
functioning normally.

>> >> that could be the case
>> >> now: you could be completely blind, deaf, lacking in emotion, but you
>> >> behave normally and don't realise that anything is wrong. Please think
>> >> about this paragraph carefully before replying to it - it is
>> >> essentially the whole argument and you seem to have misunderstood it.
>> > I have thought about it many times. Like 25 years ago. It's the
>> > reductio ad absurdum of materialism. You can't seem to let go of the
>> > idea that perception is perception whether it happens completely
>> > within your own dreamworld, through the tailpipe of some computerized
>> > lawnmower, or a crystal clear presentation of external realities. It.
>> > makes. no. difference. Having a thought, any thought, any experience
>> > whatsoever and being aware of that fact is consciousness. Period. It
>> > doesn't matter if you have a brain or not, or what other people
>> > observe of your behavior. Unless you are talking about a medical
>> > definition of being conscious, as far as exhibiting signs of responsiveness
>> > to the outside world, which is something else entirely. That would
>> > only be relevant for something which we assume to be capable of
>> > consciousness in the first place.
>> I'm talking about subjective experience, perceptions, qualia,
>> understanding, feelings. These things cannot be observed from the
>> outside, as opposed to associated behaviours which are observable. But
>> there is a big conceptual problem if it is possible to make brain
>> components that perform just the mechanical function without also
>> replicating consciousness. Sometimes you say it would be too difficult
>> to create such components, which is irrelevant to the argument. Other
>> times you say that there would be a change in perception, but then
>> don't seem to understand that it would be impossible to notice such a
>> change given that the part of the brain that does the noticing gets
>> normal inputs by definition.
> I'm consistent in my position, you're just not seeing why the premise
> is fallacious from the beginning. There is no such thing as:
> 1. behaviors associated with qualia
> 2. mechanical functions that replicate consciousness
> 3. normal inputs
> 1. If I experience qualia, like color, then I can associate the
> conditions in my brain, my retina, exterior light meters, etc with the
> production of color. If, however, I'm blind, then I cannot associate
> those conditions with anything. It's not symmetric. A camera can't
> necessarily see just because it takes pictures that we can see. We see
> through the lens and through the pictures and jpeg pixels. The pixels
> don't see, the monitor doesn't see, the lens doesn't see. These are
> just devices we can use to reflect our sight. What they 'see' is
> likely totally different.

If you see a red traffic light and stop that is what I call "behaviour
associated with qualia". That is, you have the experience, which only
you can know about, and it leads to a behaviour which everyone else
can observe.

> 2. Consciousness isn't a special logical design that turns inanimate
> objects and circuits into something that can feel. Matter feels
> already - or detects/reacts. Consciousness is just the same principle
> run through multiple organic elaborations so that it feels as the
> interior of an organism rather than just the interior of cells or
> molecules. It scales up.

The mechanical function is not intended to replicate consciousness but
only "anthropomorphic behaviour".

> 3. We've done this to death. You're just not understanding what I'm
> saying about the YouTubes and the voicemails. Fooling a neuron for a
> while doesn't mean that you can rely on its being fooled forever, and
> even if you could, it still doesn't mean that a counterfeit neuron
> will provide genuine neuron interiority, when scaled up to the level
> of the entire brain.

If a counterfeit neuron can fool the rest of the brain then BY
DEFINITION the rest of the brain will behave normally and have normal
consciousness, even if the counterfeit neuron has absent or different
qualia.

>> > I just want to know how many times I can repeat the phrase 'there is
>> > no such thing as 'behaving as if it were conscious' before you
>> > acknowledge the meaning of it.
>> No, I don't understand how you could possibly not understand the
>> phrase. "Here's Joe, he behaves as if he's conscious, but I can't be
>> sure he is conscious because I'm not he".
> That's what I'm saying - you can't be sure he is conscious because
> you're not him. So why do you keep wanting to claim that there is some
> kind of normal, conscious 'behavior' which can be emulated with
> confidence that Joe2 is as good as Joe?

I'm not saying that, I'm only saying that a machine can "behave as if
conscious". It remains to be proved that it is in fact conscious, by
assuming that it isn't and showing that it leads to absurdity.

>> >>would in
>> >> fact be conscious. I think it would,
>> > You are free to think whatever you want under my model. If you're
>> > right about consciousness being just a brain behavior, in which case
>> > you can only think what your neurology makes you think. In that case
>> > you might as well stop reading because there's no point in imagining
>> > you can have an opinion about anything.
>> I'm not saying consciousness is just a brain behaviour, I'm saying
>> consciousness is generated by a brain behaviour, and if you copy the
>> behaviour in a different substrate you will also copy the
>> consciousness. And of course I can only think what my neurology makes
>> me think, and my neurology can only do what the laws of physics
>> necessitate that it do.
> That's the problem, your neurology can only do what the laws of
> physics allow it to do, but your copy can do whatever you program your
> copymaking algorithms to do. That's an ontological difference. Your
> copy of the laws of physics aren't physical. Which is why if you copy
> the behavior of fire it won't burn anything. I agree that if you could
> copy the brain behavior into a different brain (ideally of course,
> practically transplanting the behavior of one group of millions of
> neurons to a different brain wouldn't work any more than taking the
> blueprints of every building in New York City to Sumatra is going to
> make Indonesian New Yorkers.) that potential to experience the sense
> and motives of the original would be reproduced *to the extent that
> the host brain can support it*. You can play a color movie on a black
> and white monitor, but the pattern of the DVD alone, even though it
> codes for color, can't change the monitor. For the same reason you
> can't just copy 'behavior' into any old substance, it has to be
> something that the brain does - live, breathe, feel like an animal's
> brain in an animal's world.

If your brain follows the laws of physics and the laws of physics are
computable then the behaviour of your brain can be modelled by a
computer. Put the computer inside a person's emptied skull, connect it
to sense organs and muscles, and the resulting being will walk, talk,
sing etc. like a human. Which part do you disagree with?

> If you accept my argument that substrate-dependence is true, then you
> have to reject your initial assumption that a different substrate can
> ever function 'normally'. The plastic neuron fails for the same reason
> the plastic brain fails. The other neurons know it's an imposter. They
> might still make sweet sweet love to the blowup doll, but that just
> means they're lonely ;)

How will the other neurons know the difference if they receive all the
normal electrochemical signals?

>> So I ask you again, yes or no?
> Heh, no need to get ALL CAPPY about it, but I hope the above helps
> explain my position. We can't be ever be sure that a brain component
> can function "normally" unless it is made of brain. Hell, it might not
> even function exactly like the original if it's transplanted from an
> identical twin's brain. We just don't know. My position is that the
> closer the thing is physically and logically to a brain, the more like
> a brain it could be, but that just logic alone will not give an
> inorganic brain feeling and organic matter alone will not give tissue
> human logic.

Sorry about the caps, but it's a simple yes/no question and you still
haven't answered it. You bring up technical difficulties which are
philosophically irrelevant. IF a brain component in a different
substrate could function normally in a purely mechanical way would it
also preserve consciousness? This is analogous to "IF I had $100,000
would I be able to buy a Ferrari?" The correct answer is not "no,
because you don't have $100,000" - because the question assumes that I
do have $100,000!

>> > Being 'deluded about being conscious' is a true non-sequitur. Delusion
>> > is consciousness too. A brick cannot be deluded. A computer cannot be
>> > deluded. A brain cannot be deluded. A person can be deluded - because
>> > they are the cumulatively entangled sensorimotive interior of a human
>> > brain, which contains many ambiguous and conflicting fugues of
>> > significance, organized dynamically and hierarchically through
>> > metaphor and association, image, instinct, etc on many different
>> > levels of awareness above and below the conscious threshold.
>> I agree you can't be deluded about your consciousness, but if
>> consciousness is substrate-dependent then you CAN be deluded about
>> your consciousness. That's why consciousness can't be substrate-dependent!
> No, you can't be deluded about your consciousness, regardless of
> whether it is substrate dependent or not. You can be deluded about
> other things, whether substance-dependent or not, but in neither case
> can you think that you are thinking without thinking.

If consciousness is substrate dependent then it is possible to make an
unconscious brain component (being of the wrong substrate) which would
feed the rest of your brain normal signals fooling it into thinking
there had been no change. Thus, you could be blind but will honestly
believe that you can see, say that you can see and behave as if you
can see.

Stathis Papaioannou

You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com . For more options, visit this group at http://groups.google.com/group/everything-list?hl=en .
