On Sep 21, 2:58 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Wed, Sep 21, 2011 at 6:13 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> For a long time, it was thought that the origin of species was
> >> magical, special creation by God. But evolutionary theory showed that
> >> species could originate in a manner consistent with physical law, so
> >> it became surprising rather than magical.
>
> > Yet nothing has changed except the way that we look at it. You are
> > talking about a change in human expectations, not the ability of
> > nature itself to define life, feeling, qualia, etc. through
> > statistical probability alone. It doesn't weaken the obvious
> > divergence between the evolution of organic life and that of inorganic
> > mechanism.
>
> If something happens contrary to physical laws it is magic.

Then physical laws themselves are magic.

>You claim
> that ion channels can open and neurons fire in response to thoughts
> rather than a chain of physical events.

No, I observe that ion channels do in fact open and neurons do indeed
fire, not 'in response' to thoughts but as the public physical view of
events which are subjectively experienced as thoughts. The conclusion
that you continue to jump to is that thoughts are caused by physical
events rather than being the experience of events which also have a
physical dimension.

>This would be magic by
> definition, and real magic rather than just apparent magic due to our
> ignorance, since the thoughts are not directly observable by any
> experimental technique.

Thoughts are not observed, they are experienced directly. There is
nothing magic about them, except that our identification with them
makes them hard to grasp and easy for us to take for granted.

>
> >> Human technology uses techniques many, many times faster and more
> >> efficient than mutation and natural selection; it is "intelligent
> >> design" rather than natural selection. Nature did not have that
> >> option.
>
> > What technology gains in speed, it loses in authenticity. Because it
> > is only based upon satisfying the substitution level of the minds of
> > programmers, simulations do not achieve replication.
>
> Simulations can be easily replicated, much more easily than biological
> replication, and they can be modified much more easily than biological
> evolution modifies. Whatever other things can be said in favour of
> biology over electronics, the slow and inefficient process of
> evolution isn't one of them.

You've already said that human technology is faster than natural
selection. I heard you the first time. I'm saying something else, but
you aren't listening. Simulations are faster to produce because they
lack authenticity. There is less there that needs to be produced. A
picture of the Empire State Building is faster to copy than the actual
Empire State Building. A programmer is only working from their
reductionist picture of consciousness, which is crippled by doctrines
of computational supremacy.

>
> >> A simulation of evolution would be computationally difficult but
> >> possible in theory. The program would not know what was going to
> >> happen until it happened, just as nature doesn't know what's going to
> >> happen until it happens. You seem to have difficulty with this idea,
> >> holding computer simulations to a higher standard than the thing they
> >> are simulating.
>
> > It's not a matter of knowing what's going to happen, it's about
> > knowing what can possibly happen, which nature does know but a
> > computer simulation of nature cannot know as long as it's constructed
> > according to the sense that we human beings make of the universe from
> > our (technologically extended) perspective.
>
> How does nature "know" more than a computer simulation?

Because nature has to know everything. What nature doesn't know is not
possible, by definition. A computer simulation can only report what we
have programmed it to test for; it doesn't know anything by itself. A
real cell knows what to do when it encounters any particular
condition, whereas a computer simulation of a cell will fail if it
encounters a condition that was not anticipated in the program.
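
To put the point in code: a toy sketch (an invented example of mine,
not a model of any real simulator), where the 'cell' only handles the
conditions its programmers anticipated:

    # A simulated 'cell' as a lookup table of anticipated conditions.
    RESPONSES = {
        "glucose": "metabolize",
        "toxin": "pump it out",
        "heat": "express chaperone proteins",
    }

    def simulated_cell(condition):
        # Fails on anything the programmers did not think of.
        if condition not in RESPONSES:
            raise NotImplementedError(f"no programmed response to {condition!r}")
        return RESPONSES[condition]

    print(simulated_cell("glucose"))         # -> metabolize
    print(simulated_cell("novel stimulus"))  # -> NotImplementedError

A real cell meets the second case with some response of its own; the
simulation just halts.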

>I don't know
> what I'm going to do tomorrow or all the possible inputs I might
> receive from the universe tomorrow. A simulation is no different: in
> general, you don't know what it's going to do until it does it.

A simulation is different in that it is trying to simulate something
else. The genuine subject of simulation can never be wrong because it
is not trying to be anything other than what it is. If you have a
computer program that simulates an acorn, it is always going to be
different from an acorn in that if the behavior of the two things
diverges, the simulation will always be 'wrong'.

>Even
> the simplest simulation of a brain treating neurons as switches would
> result in fantastically complex behaviour. The roundworm c. elegans
> has 302 neurons and treating them as on/off switches leads to 2^302
> (about 8*10^90) permutations.

Again, complexity does not impress me as a plausible basis for the
difference between awareness and non-awareness.
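
(As an aside, the figure is easy to check; a throwaway Python sketch,
standard library only:

    import math

    states = 2 ** 302                # on/off states of 302 neurons
    print(f"{states:.3e}")           # -> about 8.1e+90
    print(302 * math.log10(2))       # -> about 90.9, i.e. ~8*10^90

Astronomical either way, but a big number is still just a number.)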

>
> >> You would not get a conversion disorder since people with conversion
> >> disorders say and do unusual things, whereas if the neurons are making
> >> the muscles contract normally (as they must if they do their thing)
> >> then the person will be observed to behave normally.
>
> > Affirming the consequent and missing the point. A conversion disorder
> > has no neurological symptoms that we can see, therefore it must arise
> > as the result of conditions we have not yet understood. A brain made
> > from our incomplete understanding of the relation between perception
> > and neurology could, and I think would result in a neurological
> > pantomime with no corresponding qualia, if the materials used were not
> > a close match for neurological tissue.
>
> The conversion disorder is a bad example.

Haha, bad for your argument maybe. It is a great example for me.

>What you mean is a
> philosophical zombie.

No. A philosophical zombie is an intellectual construct which I think
I have correctly exposed as a HADD/prognosia projection. A conversion
disorder is an actual medical condition which proves that our
assumptions about neurological behavior do not always correlate to our
expectations about subjective experience.

> If philosophical zombies are possible then we
> are back with the fading qualia thought experiment: part of your brain
> is replaced with an analogue that behaves (third person observable
> behaviour, to be clear) like the biological equivalent but is not
> conscious, resulting in you being a partial zombie but behaving
> normally, including declaring that you have normal vision or whatever
> the replaced modality is.

No. You're stuck in a loop. If we do a video Skype, the moving image
and sound representing me is a philosophical zombie. If you ask it if
it feels and sees, it will tell you yes, but the image on your screen
is not feeling anything.

>The only way out of this conclusion (which
> you have agreed is absurd) given that the brain functions following
> the laws of physics is to say that if the replacement part behaves
> normally (third person observable behaviour, I have to keep reminding
> you) then it must also have normal consciousness.

I understand perfectly why you think this argument works, but you
seemingly don't understand that my explanations and examples refute
your false dichotomy. As a rule of thumb, anytime someone says
something like "the only way out of this conclusion" (meaning their
conclusion), my assessment is that their mind is frozen in a defensive
state and cannot accept new information.

>
> >> And the robot would say the same of you. Conceivably there are
> >> intelligent beings in the universe based on completely different
> >> physical processes to us who, if they encountered us, would claim that
> >> we were not conscious. What would you say to convince them otherwise?
>
> > It is not possible for me to doubt my own consciousness, because doubt
> > is consciousness. You have to understand what subjectivity is. It is a
> > vector of orientation. "I" is a MAC address that cannot be spoofed.
>
> > I understand your logic though - that because others may assume that
> > I'm conscious and be wrong, then I would also be wrong about not
> > giving the benefit of the doubt to anything. The problem is that it
> > tries to disqualify our ability to infer that something like a
> > doorknob is not alive by using logic which supervenes upon the more
> > fundamental categories of our perception. It's infinite regress
> > sophistry - a strong claim of knowledge that we cannot have a strong
> > claim of knowledge.
>
> You haven't answered how you can be sure that the robot lacks
> consciousness.

A robot doesn't lack consciousness. It lacks human consciousness. It
has the molecular-level consciousness of a particular class of very
reliable molecules, arranged to serve human senses and motives which it
has no capacity to interpret on its own.

>You stamp your foot and say you can't doubt your own
> consciousness - so does the robot.

What makes you think that the robot would say anything about its own
consciousness were it not programmed to do so?

>You don't believe the robot - the
> robot doesn't believe you.

The carpet doesn't believe me either. What makes you think the robot
is more believable than the carpet?

>You say the robot can't not believe you
> because it can't believe or disbelieve anything - the robot says the
> same about you.

How do you know? Do you speak robot?

>You say your cells are special - the robot says its
> circuits are special and contain the essence of consciousness, which
> organic material lacks.

The robot's circuits are special and contain the essence of (its)
consciousness, which organic material lacks. I don't try to be a robot,
and robots shouldn't try to be me. We're no good at it.

>You say a doorknob is not conscious - the
> robot agrees, and claims that you are more like a doorknob than a
> magnificent, intelligent machine.

To a robot, I could be more like a doorknob than an intelligent
machine. If I don't have a USB port, then I am not going to seem very
intelligent, if that's what the robot is using as its criterion. I'm
not arguing for an absolute universal definition of consciousness
versus non-consciousness. I'm saying the opposite (over and over) -
that the more something looks like us, acts like us, feels like us,
and thinks like us, the more we think of it as conscious.

Everything has some kind of awareness, it's just that different
configurations of different substances correspond to different
qualitative ranges of awareness. A CCD detects some of the same
optical phenomena as we do, but if you ripped out someone's eyeballs
and replaced them with a CCD attached to the optic nerve, you would
not get the full range of human vision.

>
> >> >> So you're now admitting the computer could have private experiences
> >> >> you don't know about? Why couldn't these experiences be as rich and
> >> >> complex as your own?
>
> >> > I have always maintained that the semiconductor materials likely have
> >> > some kind of private experience, but that their qualia pool is likely
> >> > to be extremely shallow - say on the order of 10^-17 magnitude in
> >> > comparison to our own. They could be as rich and complex as our own
> >> > theoretically but practically it seems to make sense that our
> >> > intuition of relative levels of significance in different phenomena
> >> > constitute some broad, but reasonably informed expectations. Not all
> >> > of us have been smart enough to realize the humanity of all other homo
> >> > sapiens through history, but most of us have reckoned, and I think
> >> > correctly, that there is a difference between a human being and a
> >> > coconut.
>
> >> Brain consciousness seems to vary roughly with size and complexity of
> >> the brain, so wouldn't computer consciousness vary with size and
> >> complexity of the computer?
>
> > That would mean that simple minded people are less human than
> > sophisticated people. Intelligence is just one aspect of
> > consciousness. Computation is intelligent form without (much) feeling.
>
> The cell is more simple-minded than the human, I'm sure you'll agree.
> It's the consciousness of many cells together that make the human. If
> you agree that an electric circuit may have a small spark of
> consciousness, then why couldn't a complex assembly of circuits scale
> up in consciousness as the cell scales up to the human?

This is at least the right type of question. To be clear, it's not
exactly an electric circuit that is conscious, it is the experience of
a semiconductor material which we know as an electric circuit. We see
the outside of the thing as objects in space, but the inside of the
thing is events experienced through time. You can ignore that though,
I'm just clarifying my hypothesis.

So yes, a complex assembly of circuits could scale up in consciousness
as cells do, if and only if the assembling of the circuits is driven,
at least in part, by internal motives rather than external
programming. People build cities, but cities do not begin to build
themselves without people. You can make a fire out of newspaper or
gasoline and a spark from some friction-reactive substance but you
can't make a substance out of 'fire' in general.

The materials we have chosen for semiconductors are selected
specifically for their ability to be precisely controlled and never to
deviate from their stable molecular patterns under electronic
stimulation. To expect them to scale up like living organisms is to
expect to start a fire with fire retardant.

>
> >> >> The whole idea of the program is that we're not smart enough to figure
> >> >> out what to expect; otherwise why run the program? A program
> >> >> simulating a bacterium will be as surprising as the actual bacterium
> >> >> is.
>
> >> > It's not that we're not smart enough, it's just that we're not patient
> >> > enough. The program could be drawn by hand and calculated on paper,
> >> > but it would be really boring and take way too long for our smart
> >> > nervous system to tolerate. We need a mindless machine to do the
> >> > idiotic repetitive work for us - not because we can't do it, but
> >> > because it's beneath us; a waste of our infinitely more precious time.
>
> >> That's right. And if we could follow the chemical reactions in a brain
> >> by hand we would know what the brain was going to do next.
>
> > No, because the chemical reactions are often a reflection of semantic
> > conditions experienced by the person as a whole whose brain it is. If
> > you can't predict who wins the Super Bowl, then you can't predict how
> > a football fan's amygdala is going to look at the moment the game is
> > over. Can. Not. Predict. There is no level of encephalic juices that
> > represents a future victory for the Packers at 2200 Hours GMT on
> > Sunday.
>
> But the amygdala doesn't know who's going to win the Super Bowl
> either! For that matter, the TV doesn't know who's going to win, it
> just follows rules which tell it how to light up the pixels when
> certain signals come down the antenna.

Right. That's why you can't model the behavior of the amygdala -
because it depends on things like the outcome of the Super Bowl. The
behavior of atoms does not depend on human events, therefore it is an
insufficient model to predict the behavior of the brain.

>
> >> >> Our brain is a finite machine, and our consciousness apparently
> >> >> supervenes on our brain states.
>
> >> > Our consciousness cannot be said to supervene on our brain states
> >> > because some of our brain states depend upon our conscious intentions.
>
> >> Only to the extent that our conscious intentions have neural
> >> correlates.
>
> > Of course.
>
> It is the neural correlate of the conscious intention that drives
> behaviour

They both drive behavior but from opposite sides. They are essentially
the same thing, but existentially distinct.

>The conscious intention is invisible to an outside
> observer. An alien scientist could look at a human brain and explain
> everything that was going on while still remaining agnostic about
> human consciousness.

Yes, but if the alien scientist's perceptual niche overlaps our own,
they could infer our conscious intention, because that's how sense
works... it imitates locally what it cannot sense directly... for us,
that indirect sense is about material objects in space and
computation.

>
> >> The observable behaviour of the brain can be entirely
> >> described without reference to consciousness.
>
> > That's what I've been telling you all this time. That's why conversion
> > disorders and p-zombies are the most probable result of a brain made
> > from only models of the observable behavior of the brain. The brain
> > can do what it appears to do perfectly well without any Disneyland
> > fantasyworld of autobiographical simulation appearing invisibly out of
> > nowhere. There is no evolutionary purpose for it and no pure mechanism
> > which can generate it. It's a primitive potential of substance.
>
> That there is no evolutionary role for consciousness is a significant
> point.

Yes, I agree. It doesn't make evolution any less relevant to shaping
the content of consciousness, but the fact of its existence at all
shows that not everything that evolves needs to be described in purely
physical terms. We can just expand our view of evolution so it's not
just a story about bodies eating and reproducing, but about evolving
perception and feeling. Think of spectacular moments like the
invention of photosynthesis coinciding with the discovery of color, as
marine organisms found different ways to feel the sun inside of
themselves.

>It leads me to the conclusion that consciousness is a necessary
> side-effect of intelligent behaviour, since intelligent behaviour is
> the only thing that could have been selected for.

That would be what you would have to conclude if you were operating on
theory alone, but we have the benefit of first-hand experience to draw
upon, which shows that intelligent behavior is a side-effect of
consciousness and not the other way around. It takes years for humans
to develop evolutionarily intelligent behavior. Also, intelligent
behavior is not necessary for evolution. Blue-green algae are still
around; they haven't had to learn any new tricks in over a billion
years to survive.

I think it makes more sense if you realize that we do things not only
out of necessity but also because we want to, or just for fun. This
need not be an invention of homo sapiens; it can be an aspect of lots
of phenomena in the universe to some extent or other. The universe
tells stories. It makes beautiful art. That's not an epiphenomenon,
it's just a part of the universe which we participate in directly, so
we can't get 'behind it', so to speak, to get an objective view of it.


>
> >> If not then we would
> >> observe neural behaviour contrary to physical law (because we can't
> >> observe the consciousness which is having the physical effect), and we
> >> don't observe this.
>
> > You've got it. Now you just have to realize that the non-existence of
> > your own consciousness is not a realistic option, and be as certain
> > about it as you are the opinions which you are invested in within that
> > consciousness.
>
> I draw the conclusion that my consciousness is a consequence of my
> intelligent behaviour and that zombies are probably impossible.

That view of consciousness would make it a redundant appendage in a
metaphysical dimension. That would be unacceptable to me. It makes the
universe serve a theory instead of making a theory that describes the
universe. Between conversion disorders, prognosia, near-death
experiences, and anesthetic awakenings
(http://www.msnbc.msn.com/id/23597612/ns/health-health_care/t/people-year-wake-during-surgery/#.TnqFnuxOV8E),
I think that we have barely scratched the surface on the possibility
of psychosomatic separation.

>
> >> > Our brain is a finite machine but only at any particular moment. Over
> >> > time it has infinite permutations of patterns sequences.
>
> >> Only if it grows infinitely large. If it does not grow then over time
> >> it will start repeating.
>
> > No, the brain size only could correlate with bandwidth. A TV screen
> > doesn't grow over time, and it will never have to start repeating. A
> > lizard is like a 70" HDTV flat screen compared to a flea's monochrome
> > monitor, or a silicon chip's single band radio.
>
> A TV screen *will* start repeating after a long enough period. If it
> has N pixels since each pixel can only be on or off the number of
> possible images it can show is 2^N.

A single screen will repeat but the sequence of screens will not. You
could make a movie where you assign one screen frame to each digit
(say an all-red screen for 1, orange for 2, yellow for 3, etc.) and
then just play out the digits of Pi as screen frames, one frame per
digit. Since the digits of Pi never end and never become periodic,
would you agree that movie would not repeat?
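
Concretely, here is a toy Python sketch of the 'Pi movie' (purely
illustrative; the digit generator is Gibbons' unbounded spigot
algorithm, and the ten-color palette is my own arbitrary choice):

    def pi_digits():
        # Gibbons' unbounded spigot algorithm: yields 3, 1, 4, 1, 5, ...
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    # One solid-color frame per digit value. Individual frames recur,
    # but the frame sequence never becomes periodic, because Pi is
    # irrational.
    PALETTE = ["white", "red", "orange", "yellow", "green",
               "blue", "indigo", "violet", "black", "grey"]

    digits = pi_digits()
    for i in range(12):
        print(f"frame {i}: {PALETTE[next(digits)]}")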

It's much more complicated than that, because even if a single frame
is an identical pattern of pixels, it depends on the thing that is
looking at it to group those pixels into patterns. Every person is
going to be able to see slightly different patterns, other animals
will see even more different patterns, etc. Each change in image has
an effect on the perceptions of the previous images and on those that
come next.

>
> >> There are only so many different atoms used in brains and these atoms
> >> can only be configured in so many ways unless the brain grows without
> >> bound.
>
> > That would be true if all brain states had to occur within a single
> > moment. There are a fixed number of pixels on a screen, but that does not
> > mean that eventually we will only be able to produce re-runs. The
> > brain states are like TV shows: patterns running through groups of
> > pixels and through the changing screen frames. Nothing needs to grow
> > without bound to ensure an infinite number of combinations of frame
> > groupings over time.
>
> We will eventually only be able to produce reruns, but it may take
> 10^1,000,000 years for this to happen.

The Pi show probably produces no re-runs ever.

>
> >> An unbounded duration of time will allow you to realise all the
> >> possible mental states, then they will start repeating unless the
> >> brain can grow indefinitely.
>
> > No, because the mental states are not of any fixed length. A single
> > mental state could (and does) last a lifetime, or many, many lifetimes
> > (through evolution and literacy).
>
> A mental state of arbitrary length will start repeating unless the
> brain can grow indefinitely.

That's like saying the internet will start repeating unless your hard
drive can grow indefinitely.

>
> >> By analogy, if you have a book with a
> >> limited number of pages and attempt to fill it with every possible
> >> combination of English characters, after a certain (very long) period
> >> you will have written every possible book, and then you will start
> >> repeating. Only if the book is unlimited in size can you have an
> >> unlimited number of distinct books.
>
> > I hear what you are saying, but the brain is not like a book that
> > fills up, it's like a PC connected to the internet. It has local
> > storage but that only augments the system, not limits it. You can use
> > your computer forever (in theory) and never run out of internet
> > without filling up your hard drive and having to start over.
>
> You will run out of Internet eventually: only a finite number of
> possible web pages can be displayed since only a finite number of
> images can be displayed by a monitor.

Only a finite number of images can be theoretically displayed but they
can be interpreted in an infinite number of ways.

>
> >> Multiple mental states can be associated with the one physical state,
> >> but the reverse cannot occur unless there is an immaterial soul.
>
> > The reverse does occur. Think about an artificial sweetener. The
> > physical state invoked by Splenda is a separate one than that which is
> > invoked by sucrose or fructose, but the mental state of tasting a
> > sweet flavor is a relatively singular mental state. Why would that
> > mean there must be an immaterial 'soul'? There are lots of overlapping
> > ways to achieve mental and physical states.
>
> Sorry, it's what you said: multiple physical states can lead to the
> same mental state. If multiple mental states are associated with the
> one physical state that would mean that you were thinking without your
> brain changing, which would imply you were thinking with something
> other than your brain.

No, it just means that mental states are multiplexed. You aren't
thinking with your brain anyhow; you think with your mind, and the
outside of your mind looks like a brain (when the mind sees it through
the body's eyes), but the two things work in very different ways. I
have no reason to insist that the psyche can operate independently of
the brain or body, but I can't completely rule that out given the
nature of energy and time versus matter and space. Locality definitely
does not apply in the same way, and our minds may be more a function
of a human lifetime than a human body. The body and the lifetime may
merely share a common sense, but in some cases that may go beyond
contemporary medical understanding.

>
> >> With any machine there may be parts that are difficult to replace.
> >> That does not change the quite modest principle that IF you could
> >> replace it with a functionally equivalent part THEN it would (third
> >> person observable) function the same,
>
> > That assumes mechanism a priori. If a fire could burn without oxygen,
> > then it would do the same things as fire. Do you see how that is
> > circular reasoning, since fire is oxidation?
>
> If a fire could burn in an atmosphere of nitrous oxide, for example,
> it would still be a fire. It wouldn't be a fire in oxygen, but it
> would perform other functions of a fire, such as allowing you to cook
> your dinner.

How is nitrous oxide not 'without oxygen'? You're just disagreeing
with me to disagree.

>
> >> and the deduction from this
> >> principle that any associated first person experiences would also be
> >> the same, otherwise we could have partial zombies.
>
> > People who have conversion disorders are partial zombies.
>
> No they're not. People with conversion disorders behave as if they
> have a neurological deficit for which there is no physiological basis,
> as evidenced by various tests.

Meaning that they experience something different than their neurology
indicates they should. A partial zombie is the same thing. Their brain
behaves normally but they experience something different than we would
expect.

> A partial zombie has no obvious
> neurological deficit. They behave normally and they believe that they
> are normal, although they lack certain qualia. This means that you
> could be a partial zombie now: you behave as if you can see, you
> believe that you can see, but in fact you are blind.

That is the case when you are dreaming.

>
> >> > The fact that human consciousness is powerfully altered by small
> >> > amounts of some substances should be a clue that substance can drive
> >> > function.
>
> >> Of course, but it's the change in *function* as a result of the
> >> substance that changes consciousness.
>
> > The function that it changes though is just the function of another
> > *substance*. The functions don't have an independent existence. You
> > can bowl a strike on different kinds of bowling alleys of different
> > sizes in different places, but the alley itself, the pins and ball,
> > can't be made of steam. Function and substance cannot be separated
> > from each other entirely.
>
> Some sort of substance is needed for the function but different types
> of substance will do.

Depends on the function. Alcohol can substitute for water in a
cocktail or for cleaning windows, but not for living organisms to
survive on.

>The argument is that the consciousness depends
> on the function, not the substance. There is no consciousness if the
> brain is frozen, only when it is active, and consciousness is
> profoundly affected by small changes in brain function when the
> substance remains the same.

What is frozen is the substance of the brain. Function is nothing but
the activity of a substance. Substances have similarities and produce
similar function, but a function which is alien to a substance cannot
be introduced. A black and white TV cannot show programs in color and
TV programs cannot be shown without a TV.

>
> >> If you have a different
> >> substance that leaves function unchanged, consciousness is unchanged.
> >> For example, Parkinson's disease can be treated with L-DOPA which is
> >> metabolised to dopamine and it can also be treated with dopamine
> >> agonists such as bromocriptine, which is chemically quite different to
> >> dopamine. The example makes the point quite nicely: the actual
> >> substance is irrelevant, only the effect it has is important.
>
> > The substance is not irrelevant. The effect is created by using
> > different substances to address different substances that make up the
> > brain. Function is just a name for what happens between substances.
>
> Yes, but any substance that performs the function will do. A change in
> the body will only make a difference to the extent that it causes a
> change in function.

That's fine if you know for a fact that the substance can perform the
same function. Biology is verrrry picky about its substances. 53
protons in a nucleus (iodine): essential for good health. 33 protons
(arsenic): death.

>
> >> So it's not the chemical elements that see red, it's a more complex
> >> construct from elements that themselves are unable to see red.
> >> Moreover, this construct has a particular third party observable
> >> response to red light. So why is it absurd to say that a more complex
> >> construct made from something else, be it electrical circuits or
> >> whatever, can see red?
>
> > It's not absurd to say that, it's just that the same thing that allows
> > it to see red is the same thing that makes it a cone cell (or really a
> > neural pathway including a bunch of cone cells and a bunch of neurons
> > in the brain acting as a group). If you take molecules that don't
> > experience 'red' and put them into a configuration that is not a
> > living cell, there is no reason to assume that red will be seen by
> > whatever configuration you make.
>
> If you directly stimulate the visual cortex the subject can see red
> without there being a red object in front of him. It doesn't seem from
> this that the putative red-experiencing molecules in the retina are
> necessary for seeing red.

You're right, the retina is not necessary for seeing red once the
visual cortex itself can be stimulated. The mind can continue to have
access to red for many years after childhood blindness sets in,
depending on how early it happens. If it's too early, eventually the
memories fade. It does seem to be necessary to have eyes that can see
red at some point though. The mind doesn't seem to be able to make it
up on its own in blind people who gain sight for the first time
through surgery.

>
> >> The cartoon would not be conscious but the animators driving the
> >> cartoon would be conscious. If a computer could do the job as well as
> >> human animators then it too would be conscious.
>
> > My point is that the cartoon is an unconscious vehicle for human
> > consciousness. If a computer could do the job as well as the human
> > animators, then it would not be a computer. It would have a sense of
> > humor and an imagination.
>
> Then it would graduate from being just a computer to a person.

It would already be a person. There is no graduation path from
computer to person.

>
> >> No, they are too simple. If the characters can interact with us in the
> >> same way as a biological human then there is reason to think they have
> >> a similar consciousness to us. For example, I haven't seen you but I
> >> have interacted with you via email over the past few weeks, and from
> >> this I deduce that you are a sentient being. If I discover that you
> >> are a computer program then I would have to consider that you are
> >> still a sentient being. But no computer program today could do this,
> >> so I assume you are human.
>
> > That's where we differ. If I discover that you are a computer program
> > then I would realize that I had been fooled and would lose interest in
> > pursuing any further communication. People have fun playing with bots
> > for a while, but they don't seriously consider their feelings. They
> > don't worry about them or wonder how their life is going.
>
> But I'm not a computer program because there aren't any computer
> programs which can sustain a discussion like this. You know that. When
> there are computer programs that can pass as people we will have to
> wonder if they actually experience the things they say they do.

It's totally context-dependent. In limited contexts, we already cannot
tell whether an email is spam or not, or whether a blogger is a bot or
not. There will always be a way to prove you're human. It will be
demanded by commercial interests and enforceable by law if it comes to
that. Identity creation is much more dangerous than identity theft.

>
> >> If you've explained it before I'm still confused as to what your
> >> explanation is. It seems to me and to most other people that it *is*
> >> possible to believe in free will in a deterministic universe - there
> >> is no contradiction in the idea and there is no inconsistency with
> >> empirical observation. You have to clearly explain why you disagree.
>
> > Free will and determinism are mutually exclusive terms by definition.
> > You would have to explain on what basis you can disagree with that
> > before I can tell you why it doesn't make sense.
>
> They are not mutually exclusive by definition - look up
> "compatibilism". But even if they were incompatible it could still be
> that we don't have free will but are deluded. You assert that this is
> not possible because to believe anything you must be conscious and if
> you are conscious you must have non-deterministic free will.

No, I say that if you think you have non-deterministic free will then
it doesn't matter whether that feeling is detectable in some way from
the outside, and that such a feeling would have no possible reason to
exist or method of arising in a deterministic world. There is a
difference. I'm not saying that feeling free means that you are free,
I'm saying that feeling free means that determinism is insufficient.
Nothing may actually be absolutely free, but that does not mean that
freedom is absolutely nothing, and if freedom is anything then it
might as well be what we feel since it's the closest that we can get
to it.

>But you
> have presented no argument to say that consciousness is logically
> incompatible with determinism.

It's not incompatible, it just serves no logical purpose; therefore
the logic that determinism is based on would not be logical (which I
think is actually the case: the way we feel determines how we think
and act, but it isn't always logical).

>
> >> So even though a human was made from scratch by an alien he could have
> >> free will: the argument that he only does what he was designed to do
> >> by the alien does not apply. Why does it apply to if the alien
> >> designed an advanced computer?
>
> > Because silicon is only capable of hosting 4D phenomena - like a black
> > and white TV with no sound. A living person is a 5D phenomenon - a
> > color TV show with sound. It doesn't matter if it's an alien or us
> > making it, if you make a computer out of silicon it's only going to
> > have 4D functionality.
>
> You have said previously that you think every substance could have
> qualia, so why exclude silicon?

Silicon has qualia, but complex organizations of silicon seem like
they have different qualia than organisms made of water, sugar, and
protein. My guess is that the very things that make silicon a good
semiconductor make it a terrible basis for a living organism.

>
> >> The laws of physics are indeed just our observations but they can be
> >> used to make predictions. If the predictions do not match reality then
> >> the laws are shown to be wrong, and new laws may be discovered in
> >> their place. By this process science gets closer and closer to a
> >> description of reality.
>
> > You are assuming that it is possible to get closer and closer to one
> > description of reality without getting further and further from
> > another, equally valid (and invalid) description of reality. I don't
> > make that leap of faith, but I do think that if we recognize that
> > descriptions of reality have a bias, we have a chance to describe
> > reality in a way that not only calculates predictions but explains
> > experiences.
>
> An accurate description of reality would allow us to replicate the
> function of the brain and that would replicate consciousness, even if
> we can't really explain what it is.

We can already do that by having kids.

>
> >>The universe does not arise from "no rules at
> >> all" since in that case nothing would happen. The tendency to form
> >> stars, for example, is indicative of a physical law.
>
> > It's like saying cars driving on the road is indicative of traffic
> > laws. If you say that nothing can happen without rules, then you are
> > saying that rules cannot happen without rules, which leads to infinite
> > regress. Rules without something to rule are not the cause of
> > anything. Rules are derived, after the fact by and through phenomena
> > whose perception includes rule making.
>
> If there were no gravity matter would not clump together to form stars
> and we would not be here.

If there were no matter that clumps together there would be no
gravity.

> If there were gravity but it followed an
> inverse cube rather than inverse square law it wouldn't be strong
> enough to form stars and we would not be here.

Neither would gravity be 'here'.

>The particular laws of
> physics in our universe are responsible for our existence.

Then they are magic and metaphysical.

>
> >> It could be both predetermined and novel and surprising when it
> >> finally occurs. This is consistent with how these terms are used by
> >> most people.
>
> > I don't think that it's consistent with how people use it at all. Were
> > The Beatles predetermined? Is the 12th digit of Pi novel? Novelty
> > means that it was not predetermined. Novelty and predetermination are
> > mutually exclusive (but different aspects of the same thing).
>
> If there were clear evidence tomorrow of determinism I think people
> would continue to use the word "novel" as well as "free will" in
> exactly the same way.

Only because people would compartmentalize the mistaken belief in such
'evidence' from the unimpeachable subjective experience of novelty and
free will.

>
> >> I can't be sure but from what you have said before it doesn't seem you
> >> would accept a true AGI if it tapped you on the shoulder and started
> >> talking to you. It could come across as more intelligent than any
> >> human but that would not be "true" intelligence if it was not organic.
>
> > I don't know that it has to be organic, I just suspect that may be a
> > reasonable expectation. To be a true intelligence, it has to
> > understand, and to understand it has to feel, and to feel it has to be
> > alive - meaning it must be able to tell the difference between thriving
> > and wasting, and care for one more than the other. It has to have its own
> > sense of self interest and preferences for the world outside of
> > itself. It is not enough that we say Yes to the Doctor, the machine
> > must be able to say No to the Patient.
>
> Why couldn't an AI made of electrical circuits have these qualities?

What we are made of can be described as electrical circuits too; it's
just a matter of what it is that is doing the circulating that
determines the meaning of the signals.

Craig
