On Wed, Sep 21, 2011 at 6:13 AM, Craig Weinberg <whatsons...@gmail.com> wrote:

>> For a long time, it was thought that the origin of species was
>> magical, special creation by God. But evolutionary theory showed that
>> species could originate in a manner consistent with physical law, so
>> it became surprising rather than magical.
> Yet nothing has changed except the way that we look at it. You are
> talking about a change in human expectations, not the ability of
> nature itself to define life, feeling, qualia, etc. through
> statistical probability alone. It doesn't weaken the obvious
> divergence between the evolution of organic life and that of inorganic
> mechanism.

If something happens contrary to physical laws it is magic. You claim
that ion channels can open and neurons fire in response to thoughts
rather than a chain of physical events. This would be magic by
definition, and real magic rather than just apparent magic due to our
ignorance, since the thoughts are not directly observable by any
experimental technique.

>> Human technology uses techniques many, many times faster and more
>> efficient than mutation and natural selection; it is "intelligent
>> design" rather than natural selection. Nature did not have that
>> option.
> What technology gains in speed, it loses in authenticity. Because it
> is only based upon satisfying the substitution level of the minds of
> programmers, simulations do not achieve replication.

Simulations can be replicated far more easily than biological
organisms, and modified far more easily than biological evolution
modifies them. Whatever other things can be said in favour of
biology over electronics, the slow and inefficient process of
evolution isn't one of them.

>> A simulation of evolution would be computationally difficult but
>> possible in theory. The program would not know what was going to
>> happen until it happened, just as nature doesn't know what's going to
>> happen until it happens. You seem to have difficulty with this idea,
>> holding computer simulations to a higher standard than the thing they
>> are simulating.
> It's not a matter of knowing what's going to happen, it's about
> knowing what can possibly happen, which nature does know but a
> computer simulation of nature cannot know as long as it's constructed
> according to the sense that we human beings make of the universe from
> our (technologically extended) perspective.

How does nature "know" more than a computer simulation? I don't know
what I'm going to do tomorrow or all the possible inputs I might
receive from the universe tomorrow. A simulation is no different: in
general, you don't know what it's going to do until it does it. Even
the simplest simulation of a brain treating neurons as switches would
result in fantastically complex behaviour. The roundworm C. elegans
has 302 neurons, and treating them as on/off switches gives 2^302, or
about 8*10^90, possible states.
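
As a quick check of that arithmetic (a minimal Python sketch; the
neuron count is the only input, nothing else about C. elegans is
assumed):

    # Count the on/off configurations of 302 binary "switches".
    # Python integers are arbitrary precision, so this is exact.
    states = 2 ** 302
    print(len(str(states)))   # 91 digits
    print(f"{states:.1e}")    # ~8.1e+90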

>> You would not get a conversion disorder since people with conversion
>> disorders say and do unusual things, whereas if the neurons are making
>> the muscles contract normally (as they must if they do their thing)
>> then the person will be observed to behave normally.
> Affirming the consequent and missing the point. A conversion disorder
> has no neurological symptoms that we can see, therefore it must arise
> as the result of conditions we have not yet understood. A brain made
> from our incomplete understanding of the relation between perception
> and neurology could, and I think would, result in a neurological
> pantomime with no corresponding qualia, if the materials used were not
> a close match for neurological tissue.

The conversion disorder is a bad example. What you mean is a
philosophical zombie. If philosophical zombies are possible then we
are back with the fading qualia thought experiment: part of your brain
is replaced with an analogue that behaves (third person observable
behaviour, to be clear) like the biological equivalent but is not
conscious, resulting in you being a partial zombie but behaving
normally, including declaring that you have normal vision or whatever
the replaced modality is. The only way out of this conclusion (which
you have agreed is absurd) given that the brain functions following
the laws of physics is to say that if the replacement part behaves
normally (third person observable behaviour, I have to keep reminding
you) then it must also have normal consciousness.

>> And the robot would say the same of you. Conceivably there are
>> intelligent beings in the universe based on completely different
>> physical processes to us who, if they encountered us, would claim that
>> we were not conscious. What would you say to convince them otherwise?
> It is not possible for me to doubt my own consciousness, because doubt
> is consciousness. You have to understand what subjectivity is. It is a
> vector of orientation. "I" is a MAC address that cannot be spoofed.
> I understand your logic though - that because others may assume that
> I'm conscious and be wrong, then I would also be wrong about not
> giving the benefit of the doubt to anything. The problem is that it
> tries to disqualify our ability to infer that something like a
> doorknob is not alive by using logic which supervenes upon the more
> fundamental categories of our perception. It's infinite regress
> sophistry - a strong claim of knowledge that we cannot have a strong
> claim of knowledge.

You haven't answered how you can be sure that the robot lacks
consciousness. You stamp your foot and say you can't doubt your own
consciousness - so does the robot. You don't believe the robot - the
robot doesn't believe you. You say the robot can't really disbelieve you
because it can't believe or disbelieve anything - the robot says the
same about you. You say your cells are special - the robot says its
circuits are special and contain the essence of consciousness, which
organic material lacks. You say a doorknob is not conscious - the
robot agrees, and claims that you are more like a doorknob than a
magnificent, intelligent machine.

>> >> So you're now admitting the computer could have private experiences
>> >> you don't know about? Why couldn't these experiences be as rich and
>> >> complex as your own?
>> > I have always maintained that the semiconductor materials likely have
>> > some kind of private experience, but that their qualia pool is likely
>> > to be extremely shallow - say on the order of 10^-17 in
>> > comparison to our own. They could be as rich and complex as our own
>> > theoretically but practically it seems to make sense that our
>> > intuition of relative levels of significance in different phenomena
>> > constitute some broad, but reasonably informed expectations. Not all
>> > of us have been smart enough to realize the humanity of all other Homo
>> > sapiens through history, but most of us have reckoned, and I think
>> > correctly, that there is a difference between a human being and a
>> > coconut.
>> Brain consciousness seems to vary roughly with size and complexity of
>> the brain, so wouldn't computer consciousness vary with size and
>> complexity of the computer?
> That would mean that simple-minded people are less human than
> sophisticated people. Intelligence is just one aspect of
> consciousness. Computation is intelligent form without (much) feeling.

The cell is more simple-minded than the human, I'm sure you'll agree.
It's the consciousness of many cells together that makes the human. If
you agree that an electric circuit may have a small spark of
consciousness, then why couldn't a complex assembly of circuits scale
up in consciousness as the cell scales up to the human?

>> >> The whole idea of the program is that we're not smart enough to figure
>> >> out what to expect; otherwise why run the program? A program
>> >> simulating a bacterium will be as surprising as the actual bacterium
>> >> is.
>> > It's not that we're not smart enough, it's just that we're not patient
>> > enough. The program could be drawn by hand and calculated on paper,
>> > but it would be really boring and take way too long for our smart
>> > nervous system to tolerate. We need a mindless machine to do the
>> > idiotic repetitive work for us - not because we can't do it, but
>> > because it's beneath us; a waste of our infinitely more precious time.
>> That's right. And if we could follow the chemical reactions in a brain
>> by hand we would know what the brain was going to do next.
> No, because the chemical reactions are often a reflection of semantic
> conditions experienced by the person as a whole whose brain it is. If
> you can't predict who wins the Super Bowl, then you can't predict how
> a football fan's amygdala is going to look at the moment the game is
> over. Can. Not. Predict. There is no level of encephalic juices that
> represents a future victory for the Packers at 2200 Hours GMT on
> Sunday.

But the amygdala doesn't know who's going to win the Super Bowl
either! For that matter, the TV doesn't know who's going to win, it
just follows rules which tell it how to light up the pixels when
certain signals come down the antenna.

>> >> Our brain is a finite machine, and our consciousness apparently
>> >> supervenes on our brain states.
>> > Our consciousness cannot be said to supervene on our brain states
>> > because some of our brain states depend upon our conscious intentions.
>> Only to the extent that our conscious intentions have neural
>> correlates.
> Of course.

It is the neural correlate of the conscious intention that drives
behaviour. The conscious intention is invisible to an outside
observer. An alien scientist could look at a human brain and explain
everything that was going on while still remaining agnostic about
human consciousness.

>> The observable behaviour of the brain can be entirely
>> described without reference to consciousness.
> That's what I've been telling you all this time. That's why conversion
> disorders and p-zombies are the most probable result of a brain made
> from only models of the observable behavior of the brain. The brain
> can do what it appears to do perfectly well without any Disneyland
> fantasyworld of autobiographical simulation appearing invisibly out of
> nowhere. There is no evolutionary purpose for it and no pure mechanism
> which can generate it. It's a primitive potential of substance.

That there is no evolutionary role for consciousness is a significant
point. It leads me to the conclusion that consciousness is a necessary
side-effect of intelligent behaviour, since intelligent behaviour is
the only thing that could have been selected for.

>> If not then we would
>> observe neural behaviour contrary to physical law (because we can't
>> observe the consciousness which is having the physical effect), and we
>> don't observe this.
> You've got it. Now you just have to realize that the non-existence of
> your own consciousness is not a realistic option, and be as certain
> about it as you are of the opinions you are invested in within that
> consciousness.

I draw the conclusion that my consciousness is a consequence of my
intelligent behaviour and that zombies are probably impossible.

>> > Our brain is a finite machine but only at any particular moment. Over
>> > time it has infinite permutations of pattern sequences.
>> Only if it grows infinitely large. If it does not grow then over time
>> it will start repeating.
> No, the brain size could only correlate with bandwidth. A TV screen
> doesn't grow over time, and it will never have to start repeating. A
> lizard is like a 70" HDTV flat screen compared to a flea's monochrome
> monitor, or a silicon chip's single band radio.

A TV screen *will* start repeating after a long enough period. If it
has N pixels, then since each pixel can only be on or off, the number
of possible images it can show is 2^N.
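
To make the pigeonhole argument concrete, here is a minimal Python
sketch. The update rule is an arbitrary stand-in, not a model of any
real screen: the point is only that a deterministic system with
finitely many states must eventually revisit one, and from then on it
cycles.

    # A toy "screen" of 4 binary pixels: 2**4 = 16 possible images,
    # so some image must repeat within 17 steps of any deterministic
    # update rule, after which the sequence cycles forever.
    def step(state):
        return (state * 5 + 3) % 16   # arbitrary deterministic rule

    state, seen = 0, {}
    for t in range(20):
        if state in seen:
            print(f"image {state} recurs at step {t}; "
                  f"cycle length {t - seen[state]}")
            break
        seen[state] = t
        state = step(state)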

>> There are only so many different atoms used in brains and these atoms
>> can only be configured in so many ways unless the brain grows without
>> bound.
> That would be true if all brain states had to occur within a single
> moment. There are a fixed number of pixels on a screen, but that does not
> mean that eventually we will only be able to produce re-runs. The
> brain states are like TV shows: patterns running through groups of
> pixels and through the changing screen frames. Nothing needs to grow
> without bound to ensure an infinite number of combinations of frame
> groupings over time.

We will eventually only be able to produce reruns, but it may take
10^1,000,000 years for this to happen.

>> An unbounded duration of time will allow you to realise all the
>> possible mental states, then they will start repeating unless the
>> brain can grow indefinitely.
> No, because the mental states are not of any fixed length. A single
> mental state could (and does) last a lifetime, or many, many lifetimes
> (through evolution and literacy).

A mental state of arbitrary length will start repeating unless the
brain can grow indefinitely.

>> By analogy, if you have a book with a
>> limited number of pages and attempt to fill it with every possible
>> combination of English characters, after a certain (very long) period
>> you will have written every possible book, and then you will start
>> repeating. Only if the book is unlimited in size can you have an
>> unlimited number of distinct books.
> I hear what you are saying, but the brain is not like a book that
> fills up, it's like a PC connected to the internet. It has local
> storage but that only augments the system, not limits it. You can use
> your computer forever (in theory) and never run out of internet
> without filling up your hard drive and having to start over.

You will run out of Internet eventually: only a finite number of
possible web pages can be displayed since only a finite number of
images can be displayed by a monitor.

>> Multiple mental states can be associated with the one physical state,
>> but the reverse cannot occur unless there is an immaterial soul.
> The reverse does occur. Think about an artificial sweetener. The
> physical state invoked by Splenda is a separate one from that which is
> invoked by sucrose or fructose, but the mental state of tasting a
> sweet flavor is a relatively singular mental state. Why would that
> mean there must be an immaterial 'soul'? There are lots of overlapping
> ways to achieve mental and physical states.

Sorry, it's what you said: multiple physical states can lead to the
same mental state. If multiple mental states are associated with the
one physical state that would mean that you were thinking without your
brain changing, which would imply you were thinking with something
other than your brain.

>> With any machine there may be parts that are difficult to replace.
>> That does not change the quite modest principle that IF you could
>> replace it with a functionally equivalent part THEN it would (third
>> person observable) function the same,
> That assumes mechanism a priori. If a fire could burn without oxygen,
> then it would do the same things as fire. Do you see how that is
> circular reasoning, since fire is oxidation?

If a fire could burn in an atmosphere of nitrous oxide, for example,
it would still be a fire. It wouldn't be a fire in oxygen, but it
would perform other functions of a fire, such as allowing you to cook
your dinner.

>> and the deduction from this
>> principle that any associated first person experiences would also be
>> the same, otherwise we could have partial zombies.
> People who have conversion disorders are partial zombies.

No they're not. People with conversion disorders behave as if they
have a neurological deficit for which there is no physiological basis,
as evidenced by various tests. A partial zombie has no obvious
neurological deficit. They behave normally and they believe that they
are normal, although they lack certain qualia. This means that you could
be a partial zombie now: you behave as if you can see, you believe
that you can see, but in fact you are blind.

>> > The fact that human consciousness is powerfully altered by small
>> > amounts of some substances should be a clue that substance can drive
>> > function.
>> Of course, but it's the change in *function* as a result of the
>> substance that changes consciousness.
> The function that it changes though is just the function of another
> *substance*. The functions don't have an independent existence. You
> can bowl a strike on different kinds of bowling alleys of different
> sizes in different places, but the alley itself, the pins and ball,
> can't be made of steam. Function and substance cannot be separated
> from each other entirely.

Some sort of substance is needed for the function but different types
of substance will do. The argument is that the consciousness depends
on the function, not the substance. There is no consciousness if the
brain is frozen, only when it is active, and consciousness is
profoundly affected by small changes in brain function when the
substance remains the same.

>> If you have a different
>> substance that leaves function unchanged, consciousness is unchanged.
>> For example, Parkinson's disease can be treated with L-DOPA which is
>> metabolised to dopamine and it can also be treated with dopamine
>> agonists such as bromocriptine, which is chemically quite different to
>> dopamine. The example makes the point quite nicely: the actual
>> substance is irrelevant, only the effect it has is important.
> The substance is not irrelevant. The effect is created by using
> different substances to address different substances that make up the
> brain. Function is just a name for what happens between substances.

Yes, but any substance that performs the function will do. A change in
the body will only make a difference to the extent that it causes a
change in function.

>> So it's not the chemical elements that see red, it's a more complex
>> construct from elements that themselves are unable to see red.
>> Moreover, this construct has a particular third party observable
>> response to red light. So why is it absurd to say that a more complex
>> construct made from something else, be it electrical circuits or
>> whatever, can see red?
> It's not absurd to say that, it's just that the same thing that allows
> it to see red is the same thing that makes it a cone cell (or really a
> neural pathway including a bunch of cone cells and a bunch of neurons
> in the brain acting as a group). If you take molecules that don't
> experience 'red' and put them into a configuration that is not a
> living cell, there is no reason to assume that red will be seen by
> whatever configuration you make.

If you directly stimulate the visual cortex the subject can see red
without there being a red object in front of him. It doesn't seem from
this that the putative red-experiencing molecules in the retina are
necessary for seeing red.

>> The cartoon would not be conscious but the animators driving the
>> cartoon would be conscious. If a computer could do the job as well as
>> human animators then it too would be conscious.
> My point is that the cartoon is an unconscious vehicle for human
> consciousness. If a computer could do the job as well as the human
> animators, then it would not be a computer. It would have a sense of
> humor and an imagination.

Then it would graduate from being just a computer to a person.

>> No, they are too simple. If the characters can interact with us in the
>> same way as a biological human then there is reason to think they have
>> a similar consciousness to us. For example, I haven't seen you but I
>> have interacted with you via email over the past few weeks, and from
>> this I deduce that you are a sentient being. If I discover that you
>> are a computer program then I would have to consider that you are
>> still a sentient being. But no computer program today could do this,
>> so I assume you are human.
> That's where we differ. If I discover that you are a computer program
> then I would realize that I had been fooled and would lose interest in
> pursuing any further communication. People have fun playing with bots
> for a while, but they don't seriously consider their feelings. They
> don't worry about them or wonder how their life is going.

But I'm not a computer program because there aren't any computer
programs which can sustain a discussion like this. You know that. When
there are computer programs that can pass as people we will have to
wonder if they actually experience the things they say they do.

>> If you've explained it before I'm still confused as to what your
>> explanation is. It seems to me and to most other people that it *is*
>> possible to believe in free will in a deterministic universe - there
>> is no contradiction in the idea and there is no inconsistency with
>> empirical observation. You have to clearly explain why you disagree.
> Free will and determinism are mutually exclusive terms by definition.
> You would have to explain on what basis you can disagree with that
> before I can tell you why it doesn't make sense.

They are not mutually exclusive by definition - look up
"compatibilism". But even if they were incompatible it could still be
that we don't have free will but are deluded. You assert that this is
not possible because to believe anything you must be conscious and if
you are conscious you must have non-deterministic free will. But you
have presented no argument to say that consciousness is logically
incompatible with determinism.

>> So even though a human was made from scratch by an alien he could have
>> free will: the argument that he only does what he was designed to do
>> by the alien does not apply. Why does it apply if the alien
>> designed an advanced computer?
> Because silicon is only capable of hosting 4D phenomena - like a black
> and white TV with no sound. A living person is a 5D phenomenon - a
> color TV show with sound. It doesn't matter if it's an alien or us
> making it, if you make a computer out of silicon it's only going to
> have 4D functionality.

You have said previously that you think every substance could have
qualia, so why exclude silicon?

>> The laws of physics are indeed just our observations but they can be
>> used to make predictions. If the predictions do not match reality then
>> the laws are shown to be wrong, and new laws may be discovered in
>> their place. By this process science gets closer and closer to a
>> description of reality.
> You are assuming that it is possible to get closer and closer to one
> description of reality without getting further and further from
> another, equally valid (and invalid) description of reality. I don't
> make that leap of faith, but I do think that if we recognize that
> descriptions of reality have a bias, we have a chance to describe
> reality in a way that not only calculates predictions but explains
> experiences.

An accurate description of reality would allow us to replicate the
function of the brain and that would replicate consciousness, even if
we can't really explain what it is.

>> The universe does not arise from "no rules at
>> all" since in that case nothing would happen. The tendency to form
>> stars, for example, is indicative of a physical law.
> It's like saying cars driving on the road is indicative of traffic
> laws. If you say that nothing can happen without rules, then you are
> saying that rules cannot happen without rules, which leads to infinite
> regress. Rules without something to rule are not the cause of
> anything. Rules are derived, after the fact by and through phenomena
> whose perception includes rule making.

If there were no gravity, matter would not clump together to form stars
and we would not be here. If there were gravity but it followed an
inverse cube rather than an inverse square law, it would fall off too
quickly over large distances to gather matter into stars, and we would
not be here. The particular laws of
physics in our universe are responsible for our existence.
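
A toy numerical comparison makes the point (both force laws are
normalised to 1 at r = 1 in arbitrary units; this illustrates the
falloff only, it is not a stellar-physics calculation):

    # Inverse-square vs inverse-cube falloff at increasing separations.
    # The ratio (1/r**2) / (1/r**3) = r, so an inverse-cube force is
    # weaker by a factor of r at distance r.
    for r in (1.0, 1e3, 1e6, 1e12):
        print(f"r={r:.0e}  1/r^2={1/r**2:.1e}  1/r^3={1/r**3:.1e}")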

>> It could be both predetermined and novel and surprising when it
>> finally occurs. This is consistent with how these terms are used by
>> most people.
> I don't think that it's consistent with how people use it at all. Were
> The Beatles predetermined? Is the 12th digit of Pi novel? Novelty
> means that it was not predetermined. Novelty and predetermination are
> mutually exclusive (but different aspects of the same thing).

If there were clear evidence tomorrow of determinism I think people
would continue to use the word "novel" as well as "free will" in
exactly the same way.

>> I can't be sure but from what you have said before it doesn't seem you
>> would accept a true AGI if it tapped you on the shoulder and started
>> talking to you. It could come across as more intelligent than any
>> human but that would not be "true" intelligence if it was not organic.
> I don't know that it has to be organic, I just suspect that may be a
> reasonable expectation. To be a true intelligence, it has to
> understand, and to understand it has to feel, and to feel it has to be
> alive - meaning it must be able to tell the difference between thriving
> and wasting and care for one more than the other. It has to have its
> own sense of self-interest and preferences for the world outside of
> itself. It is not enough that we say Yes to the Doctor, the machine
> must be able to say No to the Patient.

Why couldn't an AI made of electrical circuits have these qualities?

Stathis Papaioannou
