On Sep 19, 11:46 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Sun, Sep 18, 2011 at 2:13 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> Evolution follows from the fact that DNA replication is not 100%
> >> accurate and if the resulting organism is successful the mutation will
> >> be propagated. This can lead to surprising results but not magical
> >> results.
>
> > Therein lies the rub. What is the difference between surprising and
> > magical? What is merely surprising behavior for living organisms is
> > magical for inorganic matter. If the contents of human imagination
> > were public instead of private they would be magic - but still not
> > omnipotent. It still could not invent a new color or a square circle.
> > This is why it matters what is doing the computing. If you try to make
> > a new operating system by making imperfect copies of Windows, you are
> > not going to even get a better version of Windows, let alone one that
> > flies or lays eggs.
>
> For a long time, it was thought that the origin of species was
> magical, special creation by God. But evolutionary theory showed that
> species could originate in a manner consistent with physical law, so
> it became surprising rather than magical.

Yet nothing has changed except the way that we look at it. You are
talking about a change in human expectations, not the ability of
nature itself to define life, feeling, qualia, etc. through
statistical probability alone. It doesn't weaken the obvious
divergence between the evolution of organic life and that of inorganic
mechanism.

>
> Human technology uses techniques many, many times faster and more
> efficient than mutation and natural selection; it is "intelligent
> design" rather than natural selection. Nature did not have that
> option.

What technology gains in speed, it loses in authenticity. Because
simulations are based only on satisfying the substitution level
assumed by the minds of their programmers, they do not achieve
replication.

>
> >>The potential for evolution is programmed into the organism
> >> to begin with, and if you had a good enough simulation you could run
> >> it and see the variations possible under different environments.
>
> > To make a good enough simulation may take as much time and resources
> > as the genuine process. You could see possible genome variations, but
> > so what. To translate into phenome variations you would have to
> > simulate the proteins, cells, tissues, body, and environmental effects
> > on the body (potentially the Earth). Even so, that might only give you
> > an idea of which of existing phenome expressions to expect but it is
> > not clear that you could guess what novel variations would produce at
> > all. How would you guess what a tongue was going to do if nobody had
> > ever heard of flavor before?
>
> A simulation of evolution would be computationally difficult but
> possible in theory. The program would not know what was going to
> happen until it happened, just as nature doesn't know what's going to
> happen until it happens. You seem to have difficulty with this idea,
> holding computer simulations to a higher standard than the thing they
> are simulating.

It's not a matter of knowing what's going to happen, it's about
knowing what can possibly happen, which nature does know but a
computer simulation of nature cannot know as long as it's constructed
according to the sense that we human beings make of the universe from
our (technologically extended) perspective.
>
> >> What would happen if the neurons and neurotransmitters did their thing
> >> without the awareness that you postulate? Would the observable
> >> behaviour be the same? How could it not be, if the chemical reactions
> >> remain the same?
>
> > If the neurons and neurotransmitters did their thing without any
> > subjective awareness correlated with it, you get a conversion
> > disorder, like hysterical blindness. If neurons and neurotransmitters
> > had no subjective correlations at all in the universe then there would
> > be no human observation of anything, but the human body would
> > theoretically be able to respond to its environment unconsciously
> > (like the digestive system or immune system is presumed to do under
> > substance monism). In our real universe, human beings, their bodies,
> > immune systems, digestive systems, etc all have interactive perception
> > and participation (sensori-motive phenomena).
>
> You would not get a conversion disorder since people with conversion
> disorders say and do unusual things, whereas if the neurons are making
> the muscles contract normally (as they must if they do their thing)
> then the person will be observed to behave normally.

Affirming the consequent and missing the point. A conversion disorder
has no neurological signs that we can detect, so it must arise from
conditions we do not yet understand. A brain built from our
incomplete understanding of the relation between perception and
neurology could, and I think would, result in a neurological
pantomime with no corresponding qualia, if the materials used were
not a close match for neurological tissue.

>
> >> What if the robot said that you had no understanding of red, it was
> >> just chemical reactions in your brain?
>
> > That is pretty much what substance monism does say. I would say the
> > robot should be reprogrammed to say something different.
>
> And the robot would say the same of you. Conceivably there are
> intelligent beings in the universe based on completely different
> physical processes to us who, if they encountered us, would claim that
> we were not conscious. What would you say to convince them otherwise?

It is not possible for me to doubt my own consciousness, because doubt
is consciousness. You have to understand what subjectivity is. It is a
vector of orientation. "I" is like a MAC address, except that it
cannot be spoofed.

I understand your logic though - that because others may assume that
I'm conscious and be wrong, I could likewise be wrong whenever I
withhold the benefit of the doubt from anything. The problem is that
it tries to disqualify our ability to infer that something like a
doorknob is not alive by using logic which supervenes upon the more
fundamental categories of our perception. It's infinite regress
sophistry - a strong claim of knowledge that we cannot have strong
claims of knowledge.

>
> >> So you're now admitting the computer could have private experiences
> >> you don't know about? Why couldn't these experiences be as rich and
> >> complex as your own?
>
> > I have always maintained that the semiconductor materials likely have
> > some kind of private experience, but that their qualia pool is likely
> > to be extremely shallow - say on the order of 10^-17 magnitude in
> > comparison to our own. They could be as rich and complex as our own
> > theoretically but practically it seems to make sense that our
> > intuition of relative levels of significance in different phenomena
> > constitute some broad, but reasonably informed expectations. Not all
> > of us have been smart enough to realize the humanity of all other homo
> > sapiens through history, but most of us have reckoned, and I think
> > correctly, that there is a difference between a human being and a
> > coconut.
>
> Brain consciousness seems to vary roughly with size and complexity of
> the brain, so wouldn't computer consciousness vary with size and
> complexity of the computer?

That would mean that simple-minded people are less human than
sophisticated people. Intelligence is just one aspect of
consciousness. Computation is intelligent form without (much) feeling.

>
> >> The whole idea of the program is that we're not smart enough to figure
> >> out what to expect; otherwise why run the program? A program
> >> simulating a bacterium will be as surprising as the actual bacterium
> >> is.
>
> > It's not that we're not smart enough, it's just that we're not patient
> > enough. The program could be drawn by hand and calculated on paper,
> > but it would be really boring and take way too long for our smart
> > nervous system to tolerate. We need a mindless machine to do the
> > idiotic repetitive work for us - not because we can't do it, but
> > because it's beneath us; a waste of our infinitely more precious time.
>
> That's right. And if we could follow the chemical reactions in a brain
> by hand we would know what the brain was going to do next.

No, because the chemical reactions are often a reflection of semantic
conditions experienced by the person as a whole whose brain it is. If
you can't predict who wins the Super Bowl, then you can't predict how
a football fan's amygdala is going to look at the moment the game is
over. Can. Not. Predict. There is no level of encephalic juices that
represents a future victory for the Packers at 2200 Hours GMT on
Sunday.

>
> >> Our brain is a finite machine, and our consciousness apparently
> >> supervenes on our brain states.
>
> > Our consciousness cannot be said to supervene on our brain states
> > because some of our brain states depend upon our conscious intentions.
>
> Only to the extent that our conscious intentions have neural
> correlates.

Of course.

> The observable behaviour of the brain can be entirely
> described without reference to consciousness.

That's what I've been telling you all this time. That's why conversion
disorders and p-zombies are the most probable result of a brain made
from only models of the observable behavior of the brain. The brain
can do what it appears to do perfectly well without any Disneyland
fantasyworld of autobiographical simulation appearing invisibly out of
nowhere. There is no evolutionary purpose for it and no pure mechanism
which can generate it. It's a primitive potential of substance.

> If not then we would
> observe neural behaviour contrary to physical law (because we can't
> observe the consciousness which is having the physical effect), and we
> don't observe this.

You've got it. Now you just have to realize that the non-existence of
your own consciousness is not a realistic option, and be as certain
of it as you are of the opinions you are invested in within that
consciousness.

>
> > Our brain is a finite machine but only at any particular moment. Over
> > time it has infinite permutations of patterns sequences.
>
> Only if it grows infinitely large. If it does not grow then over time
> it will start repeating.

No, brain size would only correlate with bandwidth. A TV screen
doesn't grow over time, yet it never has to start repeating. A
lizard is like a 70" HDTV flat screen compared to a flea's monochrome
monitor, or a silicon chip's single-band radio.

>
> >>Since there are a finite number of
> >> possible brain states
>
> > Not true. Brains are always evolving new possible brain states. Each
> > individual from every species has different numbers and varieties of
> > possible brain states.
>
> There are only so many different atoms used in brains and these atoms
> can only be configured in so many ways unless the brain grows without
> bound.

That would be true if all brain states had to occur within a single
moment. There are a fixed number of pixels on a screen, but that does not
mean that eventually we will only be able to produce re-runs. The
brain states are like TV shows: patterns running through groups of
pixels and through the changing screen frames. Nothing needs to grow
without bound to ensure an infinite number of combinations of frame
groupings over time.
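
To put toy numbers on it (a rough sketch in Python, with arbitrary
figures - not a claim about real screens or brains): even a screen
with a fixed, finite number of single-frame states supports a number
of distinct frame *sequences* that grows without bound as the running
time grows.

    # Toy sketch: a fixed 2x2 monochrome "screen" never grows, but the
    # number of distinct frame sequences grows without bound over time.
    pixels = 4                   # a deliberately tiny 2x2 screen
    colors = 2                   # monochrome: each pixel on or off
    states = colors ** pixels    # 16 possible single frames

    for frames in (1, 10, 100):
        print(frames, "frames allow", states ** frames, "distinct sequences")

The screen (16 states) is the bounded part; the show is not.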

>
> >> there are a finite number of possible conscious
> >> states.
>
> > Not given an unbounded duration of time.
>
> An unbounded duration of time will allow you to realise all the
> possible mental states, then they will start repeating unless the
> brain can grow indefinitely.

No, because the mental states are not of any fixed length. A single
mental state could (and does) last a lifetime, or many, many lifetimes
(through evolution and literacy).

> By analogy, if you have a book with a
> limited number of pages and attempt to fill it with every possible
> combination of English characters, after a certain (very long) period
> you will have written every possible book, and then you will start
> repeating. Only if the book is unlimited in size can you have an
> unlimited number of distinct books.

I hear what you are saying, but the brain is not like a book that
fills up, it's like a PC connected to the internet. It has local
storage, but that only augments the system rather than limiting it.
You can use your computer forever (in theory) and never run out of
internet, without filling up your hard drive and having to start
over.
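
For what it's worth, the book arithmetic itself is not in dispute (a
rough sketch, with arbitrary figures): a fixed-length book over a
finite alphabet admits only finitely many distinct texts. The
disagreement is over whether the brain is the fixed book or the
open-ended stream.

    # Rough sketch of the book-counting arithmetic (arbitrary figures):
    # a fixed-length text over a finite alphabet is one of finitely
    # many, even though "finitely many" is astronomically large.
    import math

    alphabet = 27          # 26 letters plus a space
    book_length = 500000   # characters in one fixed-size book

    # 27**500000 itself is huge; report its number of decimal digits.
    digits = math.floor(book_length * math.log10(alphabet)) + 1
    print("distinct possible books: 27^500000, a", digits, "digit number")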

>
> >>Do you claim that multiple conscious states could be
> >> associated with the one brain state? That would mean we are thinking
> >> without our brain.
>
> > I think if anything it's the other way around. There are probably
> > multiple brain states associated with one conscious state. This is
> > all but confirmed by neuroplastic regeneration. If one conscious state
> > were literally tied to one brain state, the failure of the region of
> > the brain involved in that brain state would not be compensated for by
> > the rest of the brain, but of course, it often is.
>
> Multiple mental states can be associated with the one physical state,
> but the reverse cannot occur unless there is an immaterial soul.

The reverse does occur. Think about an artificial sweetener. The
physical state invoked by Splenda is separate from the one invoked by
sucrose or fructose, but the mental state of tasting a sweet flavor
is a relatively singular mental state. Why would that mean there must
be an immaterial 'soul'? There are lots of overlapping ways to
achieve mental and physical states.

>
> >> Different substances can perform the same function.
>
> > Only for functions not linked to specific substances. In living
> > organisms almost every function is narrowly fulfilled by a single
> > substance. Water cannot be replaced. Oxygen. ATP. Nothing else can
> > perform these same functions.
>
> With any machine there may be parts that are difficult to replace.
> That does not change the quite modest principle that IF you could
> replace it with a functionally equivalent part THEN it would (third
> person observable) function the same,

That assumes mechanism a priori. If a fire could burn without oxygen,
then it would do the same things as fire. Do you see how that is
circular reasoning, since fire is oxidation?

> and the deduction from this
> principle that any associated first person experiences would also be
> the same, otherwise we could have partial zombies.

People who have conversion disorders are partial zombies.

>
> >> You claim that the
> >> consciousness is associated somehow with the substance more than the
> >> function.
>
> > I wouldn't say 'more'. Consciousness is associated with the relation
> > between substance and function.
>
> >>This is not obvious a priori - one claim is not obviously
> >> better than the other, and you need to present evidence to help decide
> >> which is correct.
>
> > The fact that human consciousness is powerfully altered by small
> > amounts of some substances should be a clue that substance can drive
> > function.
>
> Of course, but it's the change in *function* as a result of the
> substance that changes consciousness.

The function that it changes though is just the function of another
*substance*. The functions don't have an independent existence. You
can bowl a strike on different kinds of bowling alleys of different
sizes in different places, but the alley itself, the pins and ball,
can't be made of steam. Function and substance cannot be separated
from each other entirely.

> If you have a different
> substance that leaves function unchanged, consciousness is unchanged.
> For example, Parkinson's disease can be treated with L-DOPA which is
> metabolised to dopamine and it can also be treated with dopamine
> agonists such as bromocriptine, which is chemically quite different to
> dopamine. The example makes the point quite nicely: the actual
> substance is irrelevant, only the effect it has is important.

The substance is not irrelevant. The effect is created by using
different substances to address different substances that make up the
brain. Function is just a name for what happens between substances.

>
> >> > Those are challenges of a reductio ad absurdum nature. I'm hoping that
> >> > you'll see that they are silly. When you say that a group of milk
> >> > bottles can see red, you are intending for me to take you seriously,
> >> > but I don't think that you really take that position seriously
> >> > yourself, you're just making an empty, legalistic argument about it.
>
> >> Why is it not absurd to say that a handful of chemical elements can see 
> >> red?
>
> > I don't think that they can. I'd say that groups of cone cells and
> > neurons can see red. Our eyeballs basically recapitulate pre-Cambrian
> > evolution of solar photosynthesizing microorganisms in an aqueous
> > saline environment. What we see is something like chlorophyll green,
> > hemoglobin red, and hemocyanin blue
> > (http://www.applet-magic.com/lifemolecules.htm). Color that we see is
> > cellular molecular awareness shelled out to primate visual
> > consciousness.
>
> So it's not the chemical elements that see red, it's a more complex
> construct from elements that themselves are unable to see red.
> Moreover, this construct has a particular third party observable
> response to red light. So why is it absurd to say that a more complex
> construct made from something else, be it electrical circuits or
> whatever, can see red?

It's not absurd to say that; it's just that the thing that allows it
to see red is the same thing that makes it a cone cell (or really a
neural pathway including a bunch of cone cells and a bunch of neurons
in the brain acting as a group). If you take molecules that don't
experience 'red' and put them into a configuration that is not a
living cell, there is no reason to assume that red will be seen by
whatever configuration you make.

>
> >> Handling counterfactuals means the entity would behave differently if
> >> circumstances were different, which is what programs and humans but
> >> not recordings do.
>
> > A cartoon doesn't have to be a recording. You could have animators
> > drawing them in real time and responding to different circumstances
> > dynamically. It doesn't make the cartoon itself conscious, just as
> > handling counterfactuals doesn't make programs themselves conscious.
>
> The cartoon would not be conscious but the animators driving the
> cartoon would be conscious. If a computer could do the job as well as
> human animators then it too would be conscious.

My point is that the cartoon is an unconscious vehicle for human
consciousness. If a computer could do the job as well as the human
animators, then it would not be a computer. It would have a sense of
humor and an imagination.

>
> >> >> Someone creating agents in a computer so he could torture them should 
> >> >> also
> >> >> be culpable, and stopped.
>
> >> > Really? So no violent video games?
>
> >> If the violent video games caused the characters to feel distress then yes.
>
> > By the preceding claim of counterfactual relevance, are you not saying
> > that they might feel distress already?
>
> No, they are too simple. If the characters can interact with us in the
> same way as a biological human then there is reason to think they have
> a similar consciousness to us. For example, I haven't seen you but I
> have interacted with you via email over the past few weeks, and from
> this I deduce that you are a sentient being. If I discover that you
> are a computer program then I would have to consider that you are
> still a sentient being. But no computer program today could do this,
> so I assume you are human.

That's where we differ. If I discover that you are a computer program
then I would realize that I had been fooled and would lose interest in
pursuing any further communication. People have fun playing with bots
for a while, but they don't seriously consider their feelings. They
don't worry about them or wonder how their life is going.

>
> >> How do you know you're not deluded about having what you call free
> >> will (which you think is incompatible with determinism)?
>
> > I've answered this several times. Free will is a feeling. It doesn't
> > matter whether or not your feelings of free will are validated by any
> > objective criteria, because the existence of the possibility of the
> > delusion is sufficient to invalidate determinism. Such a fantasy has
> > no conceivable reason to exist or possible mechanism to arise out of
> > (how does a machine pretend to believe it's not a machine?)
>
> The existence of the possibility of a delusion is sufficient to
> invalidate determinism? So if it's possible that you are deluded
> determinism is false?

Not just 'a delusion' but *the* delusion that you have free will.

> And if you aren't deluded does that mean
> determinism is true?

Being able to tell that you are deluded would be hard to pull off if
subjectivity were solipsistic.

>Does a fantasy need a reason to exist?

Fantasy is made of reasons, and it doesn't exist, it insists.

> If a
> machine did pretend to believe it was not a machine would that mean it
> wasn't a machine?

No. A machine can't pretend or believe anything. It is we who project
our own experience of belief and pretending onto a simulated
intelligence.

>
> If you've explained it before I'm still confused as to what your
> explanation is. It seems to me and to most other people that it *is*
> possible to believe in free will in a deterministic universe - there
> is no contradiction in the idea and there is no inconsistency with
> empirical observation. You have to clearly explain why you disagree.

Free will and determinism are mutually exclusive terms by definition.
You would have to explain on what basis you can disagree with that
before I can tell you why it doesn't make sense.

>
> >> So if an advanced alien made a human using the appropriate organic
> >> material (and not those unfeeling electronic circuits) the human would
> >> lack free will, even though he would behave as if he had free will and
> >> believe he had free will.
>
> > You don't need an alien to lose your free will. Addiction,
> > brainwashing, intimidation, and torture can do that. Taking a person's
> > freedom away is one thing, giving freedom to a stone is something
> > else. The human can recover their free will though. If an alien did
> > make a human though, it would not lack free will because free will
> > would be the fifth dimension of awareness. It cannot be programmed in
> > the same way as a 4D chip, it needs to be motivated voluntarily rather
> > than scripted.
>
> So even though a human was made from scratch by an alien he could have
> free will: the argument that he only does what he was designed to do
> by the alien does not apply. Why does it apply if the alien
> designed an advanced computer?

Because silicon is only capable of hosting 4D phenomena - like a black
and white TV with no sound. A living person is a 5D phenomenon - a
color TV show with sound. It doesn't matter if it's an alien or us
making it, if you make a computer out of silicon it's only going to
have 4D functionality.

>
> >> Subatomic particles become water when they are subjected to the
> >> appropriate conditions. They have no foreknowledge of water and they
> >> don't care if they are water or something else. All they do is
> >> interact in a particular way given certain circumstances, blindly
> >> following a program if you like.
>
> > There is no reason to think that such a program exists. Water is an
> > invention/discovery of atoms. It was first initiated at a particular
> > time by specific atoms in this universe (as opposed to all possible
> > universes). Water is not an arithmetic inevitability, it is the living
> > echo of an event. The universe makes it up as it goes along, just as
> > we do (as part of the universe).
>
> The atoms follow a program in that they rigidly follow particular
> rules - more rigidly than any human-programmed computer, in fact,
> which could have bugs or hardware faults. The existence of water was
> implicit in the laws of physics even before the universe had cooled
> down enough for chemical reactions to occur. That is, it is inevitable
> that hydrogen and oxygen will combust to form water under certain
> physical conditions. It is not logically necessary, but it is
> necessary given the particular physical laws, and as far as we know in
> this universe the laws have been fixed for all time.

That's one way of looking at it, but mine, I think, is the more
accurate model. Atoms are the embodiment of the laws of physics. They
do not follow a 'program' (which is why there are no bugs or faults) -
they are
the program. We have different perceptions and observations of what
they are, and we distill those commonalities with the commonalities of
other observations and perceptions of substances and call the result
'laws of physics'. They have no independent existence.
>
> >> It is from many, many such
> >> interactions following simple rules that the complex universe arises.
>
> > I think it's more likely the other way around. Simple rules can be
> > derived by complex entities to make sense of the universe. The
> > universe arises from no rules at all. It arises as the possibility of
> > sense experience from the impossibility of non-sense non-experience.
> > Like infancy or awakening from sleep, coherent order emerges from
> > incoherent multivalent singularity. Complexity can give rise to
> > simplicity and the other way around.
>
> The laws of physics are indeed just our observations but they can be
> used to make predictions. If the predictions do not match reality then
> the laws are shown to be wrong, and new laws may be discovered in
> their place. By this process science gets closer and closer to a
> description of reality.

You are assuming that it is possible to get closer and closer to one
description of reality without getting further and further from
another, equally valid (and invalid) description of reality. I don't
make that leap of faith, but I do think that if we recognize that
descriptions of reality have a bias, we have a chance to describe
reality in a way that not only calculates predictions but explains
experiences.

>The universe does not arise from "no rules at
> all" since in that case nothing would happen. The tendency to form
> stars, for example, is indicative of a physical law.

It's like saying that cars driving on the road are indicative of
traffic laws. If you say that nothing can happen without rules, then
you are saying that rules cannot happen without rules, which leads to
infinite regress. Rules without something to rule are not the cause
of anything. Rules are derived after the fact, by and through
phenomena whose perception includes rule-making.

>
> >> >> > Where does novelty come from in a
> >> >> > universe of fixed laws?
>
> >> >> Different permutations, arrangements and organizations.
>
> >> > If H2O is water before H2O exists, then it's not novel.
>
> >> Something that didn't exist before is novel. Something that didn't
> >> exist before and we could not anticipate is novel and surprising.
>
> > At some point, H2O had to have either been novel and surprising or
> > predetermined and redundant. The possibility of its existence can't
> > be both novel and eternally predetermined.
>
> It could be both predetermined and novel and surprising when it
> finally occurs. This is consistent with how these terms are used by
> most people.

I don't think that it's consistent with how people use it at all. Were
The Beatles predetermined? Is the 12th digit of Pi novel? Novelty
means that it was not predetermined. Novelty and predetermination are
mutually exclusive (but different aspects of the same thing).

>
> >> Complexity may not impress you, but the multiple permutations your
> >> brain can be in account for the multiple thoughts you can have.
>
> > Accounting is not explaining. Which actually sums up my entire
> > position on this endless thread. Consciousness explains and counts.
> > Computers only count. Come up with an algorithm for explanation, and
> > put it into an electronic explainer, and we will have true AGI.
>
> I can't be sure but from what you have said before it doesn't seem you
> would accept a true AGI if it tapped you on the shoulder and started
> talking to you. It could come across as more intelligent than any
> human but that would not be "true" intelligence if it was not organic.

I don't know that it has to be organic, I just suspect that may be a
reasonable expectation. To be a true intelligence, it has to
understand, and to understand it has to feel, and to feel it has to be
alive - meaning it must be able to tell the difference between
thriving and wasting and care for one more than the other. It has to
have its own sense of self-interest and preferences for the world
outside of
itself. It is not enough that we say Yes to the Doctor, the machine
must be able to say No to the Patient.

Craig

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.

Reply via email to