On Sep 17, 9:53 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Sun, Sep 18, 2011 at 8:14 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> > On Sep 17, 12:44 pm, Jason Resch <jasonre...@gmail.com> wrote:
> >> Some are evolved, and not designed by the programmers mind, and even do
> >> things that surprise the programmer.  Like when I observed flocking
> >> behaviors in the smart sweepers which was a way to optimize food 
> >> collection,
> >> but not one I thought of.
>
> > I get what you mean, but it's only the visualization of the behavior
> > that is unanticipated by the observer. The smart sweepers don't evolve
> > into anything that can't be recreated with the same program, such as
> > growing ears or learning how to fly.
>
> Evolution follows from the fact that DNA replication is not 100%
> accurate and if the resulting organism is successful the mutation will
> be propagated. This can lead to surprising results but not magical
> results.

Therein lies the rub. What is the difference between surprising and
magical? What is merely surprising behavior for living organisms is
magical for inorganic matter. If the contents of human imagination
were public instead of private they would be magic - but still not
omnipotent. They still could not invent a new color or a square circle.
This is why it matters what is doing the computing. If you try to make
a new operating system by making imperfect copies of Windows, you are
not even going to get a better version of Windows, let alone one that
flies or lays eggs.
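
For concreteness, the copy-with-errors-plus-selection loop being
described can be sketched in a few lines of Python. This is only a toy
illustration, not the actual smart sweepers code; the fitness criterion
and mutation rate below are invented for the example:

    import random

    def fitness(genome):
        # Made-up criterion for the example: summing closer to 10 is better.
        return -abs(sum(genome) - 10)

    def mutate(genome, rate=0.1):
        # Imperfect copying: each gene occasionally drifts by a small amount.
        return [g + random.gauss(0, 1) if random.random() < rate else g
                for g in genome]

    # Start from 20 random genomes of 5 numbers each.
    population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]

    for generation in range(100):
        # Copy with errors, then keep only the best copies (selection).
        offspring = [mutate(random.choice(population)) for _ in range(20)]
        population = sorted(population + offspring, key=fitness, reverse=True)[:20]

    print(max(fitness(g) for g in population))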

>The potential for evolution is programmed into the organism
> to begin with, and if you had a good enough simulation you could run
> it and see the variations possible under different environments.

To make a good enough simulation may take as much time and as many
resources as the genuine process. You could see possible genome
variations, but so what? To translate them into phenome variations you
would have to simulate the proteins, cells, tissues, body, and
environmental effects on the body (potentially the whole Earth). Even
so, that might only give you an idea of which existing phenome
expressions to expect; it is not clear that you could guess what novel
variations would produce at all. How would you guess what a tongue was
going to do if nobody had ever heard of flavor before?

>
> >> A single instruction is like a single neurotransmitter release, which only
> >> tells a nearby neuron: "Consider releasing your neurotransmitters".  
> >> However
> >> when you have millions or billions of such instructions interacting
> >> with each other, very complex and novel behaviors can result.
>
> > I see that as using complexity as a crutch or veil to obscure a
> > nonsense premise behind it. It only makes sense if the neuron and
> > neurotransmitter both have some degree of awareness associated with
> > them to begin with. One neuron can understand a neurotransmitter sent
> > by another, but there is no understanding in the 'sendingness', which
> > is what you are actually saying. A book does not read itself. There is
> > no consciousness of words by themselves on a page. They need an
> > interpreter which is word-conscious.
>
> What would happen if the neurons and neurotransmitters did their thing
> without the awareness that you postulate? Would the observable
> behaviour be the same? How could it not be, if the chemical reactions
> remain the same?

If the neurons and neurotransmitters did their thing without any
subjective awareness correlated with it, you would get a conversion
disorder, like hysterical blindness. If neurons and neurotransmitters
had no subjective correlations at all in the universe then there would
be no human observation of anything, but the human body would
theoretically be able to respond to its environment unconsciously
(as the digestive system or immune system is presumed to do under
substance monism). In our real universe, human beings, their bodies,
immune systems, digestive systems, etc. all have interactive perception
and participation (sensori-motive phenomena).

>
> >> Awareness of red is in principal no different then the awareness of zero,
> >> just much more involved.
>
> > A robot probably has no awareness of zero either. Just current that is
> > flowing through circuits or not.
>
> What if the robot said that you had no understanding of red, it was
> just chemical reactions in your brain?

That is pretty much what substance monism does say. I would say the
robot should be reprogrammed to say something different.

>
> >> We can inspect the computational state, it is not private.  Any programmer
> >> who has attached a debugger to a process knows this.
>
> > If the computational state is the only interiority a computer has,
> > then we know for sure that it has no sensorimotive experience. You
> > have to understand that if it's not private, it's not sensorimotive. I
> > allow for a hypothetical proto-awareness detection of doped materials
> > semiconducting electric current, and that would be a private
> > experience, but the observation of a computational state only tells us
> > about the public consequences of electromagnetic change, not anything
> > about feeling or awareness.
>
> So you're now admitting the computer could have private experiences
> you don't know about? Why couldn't these experiences be as rich and
> complex as your own?

I have always maintained that the semiconductor materials likely have
some kind of private experience, but that their qualia pool is likely
to be extremely shallow - say on the order of 10^-17 in magnitude
compared to our own. They could theoretically be as rich and complex as
our own, but practically it makes sense that our intuition of the
relative significance of different phenomena constitutes a set of
broad but reasonably informed expectations. Not all of us have been
smart enough to realize the humanity of all other Homo sapiens through
history, but most of us have reckoned, and I think correctly, that
there is a difference between a human being and a coconut.

>
> >> Some programs do things we never expected.
>
> > Only if we haven't been smart enough or taken enough time to figure
> > out what to expect. Execution of code is just a formality of exposing
> > the inevitable consequences of it's logic. Experience is the opposite.
> > It is an essential exposure of unknowable and translogical
> > possibilities. Yellow does not logically follow from red and blue.
> > It's nothing like the herding behavior of a smart sweeper.
>
> The whole idea of the program is that we're not smart enough to figure
> out what to expect; otherwise why run the program? A program
> simulating a bacterium will be as surprising as the actual bacterium
> is.

It's not that we're not smart enough, it's just that we're not patient
enough. The program could be drawn by hand and calculated on paper,
but it would be really boring and take way too long for our smart
nervous system to tolerate. We need a mindless machine to do the
idiotic repetitive work for us - not because we can't do it, but
because it's beneath us; a waste of our infinitely more precious time.

>
> >> What you say is that computers will become conscious once we attach 3D
> >> printers to them so they can reproduce.
>
> > I have never said anything remotely like that. I'm saying that
> > computation is irrelevant in instantiating consciousness, only in
> > modulating and elaborating it's forms. Your brain computes, but it
> > also feels and experiences a human life. A PC computes, but it feels
> > only what any electronic component feels, which is probably not much
> > compared to us.
>
> You have several times said that the ability to evolve and reproduce
> is somehow relevant to consciousness, so it's reasonable to ask if a
> computer that can reproduce is closer to consciousness than one that
> can't, and whether a sterilised human is less conscious than a human
> who has not been sterilised.

Reproduction introduces evolution into systems which are fundamentally
able to benefit from reproduction. Mutation is improbable, so a system
which reproduces improbably useful improbabilities is a hedge against
entropy. It keeps the improbable in circulation for longer, eventually
making it common and probable. Any physical system that is able to
evolve independently likely has a corresponding awareness that evolves
as well. Reproduction doesn't evolve anything within the individual
that reproduces; it is only the hereditary downline that evolves in
relation to its ancestors, so of course it makes no difference to an
individual human whether they are able or willing to reproduce.

Evolution, like information, isn't a concrete phenomenon; it's more of
an analytical model of the sense that a phenomenon makes. A species
isn't a thing, it's a group of organisms we find biologically similar
enough to classify as a species. Nothing is happening to the species,
it's just that if we compare creatures to their offspring in relation
to their ecological niche, we can make sense of the changes as
adaptive over time.

>
> >> That doesn't seem plausible.
> >>  Besides, in the realm of software, self-replicating viruses and worms have
> >> existed for a long time already.
>
> > Viruses don't seem to have any more capacity to feel or generate
> > novelty than any other program. It doesn't develop it's own agenda, it
> > just executes the motives of the programmer recursively by replicating
> > itself to any instances of the target software that it can locate.
>
> Like natural viruses.

Functionally, yes. Once the virus interacts with living organisms
there may be some symbiotic exchange that goes beyond what a
programmed digital sequence could encounter in a semiconductor
environment.

>
> >> Because in the same way a record player has the ability to play any record,
> >> a silicon chip has the ability to replicate/predict the behavior of any
> >> finite machine.
>
> > Then the problem is that you assume that our consciousness is a finite
> > machine. It isn't. It has finite aspects and mechanistic aspects, but
> > it has many other senses and motives that cannot be meaningfully
> > described that way. A record player can play a record for someone to
> > listen to, but it can't itself listen to any record it plays. You need
> > a subject to experience the computation in it's own concrete
> > perceptual terms.
>
> Our brain is a finite machine, and our consciousness apparently
> supervenes on our brain states.

Our consciousness cannot be said to supervene on our brain states
because some of our brain states depend upon our conscious intentions.
Our brain is a finite machine, but only at any particular moment. Over
time it has infinite permutations of pattern sequences.

>Since there are a finite number of
> possible brain states

Not true. Brains are always evolving new possible brain states. Each
individual from every species has different numbers and varieties of
possible brain states.

> there are a finite number of possible conscious
> states.

Not given an unbounded duration of time.

>Do you claim that multiple conscious states could be
> associated with the one brain state? That would mean we are thinking
> without our brain.

I think if anything it's the other way around. There are probably
multiple brain states associated with one conscious state. This is
all but confirmed by neuroplastic regeneration. If one conscious state
were literally tied to one brain state, the failure of the brain
region involved in that state could not be compensated for by the rest
of the brain, but of course it often is.

>
> >> Ping pong balls can be arranged in a near infinite number of combinations.
> >>  Assuming a Turing machine made of ping pong balls, then you do get
> >> different characteristics from the different combinations.
>
> > You are assuming that characteristics are produced by combinations
> > rather than the character of the fundamental unit. My point is that
> > assumption is unsupported and that in fact, the entire universe is
> > based upon the principle that the same combinations of different
> > fundamental units behave differently. A group of protons is different
> > than a group of atoms or cells or stars.
>
> Different substances can perform the same function.

Only for functions not linked to specific substances. In living
organisms almost every function is narrowly fulfilled by a single
substance. Water cannot be replaced. Oxygen. ATP. Nothing else can
perform these same functions.

> You claim that the
> consciousness is associated somehow with the substance more than the
> function.

I wouldn't say 'more'. Consciousness is associated with the relation
between substance and function.

>This is not obvious a priori - one claim is not obviously
> better than the other, and you need to present evidence to help decide
> which is correct.

The fact that human consciousness is powerfully altered by small
amounts of some substances should be a clue that substance can drive
function.

>
> > Those are challenges of a reductio as absurdum nature. I'm hoping that
> > you'll see that they are silly. When you say that a group of milk
> > bottles can see red, you are intending for me to take you seriously,
> > but I don't think that you really take that position seriously
> > yourself, you're just making an empty, legalistic argument about it.
>
> Why is it not absurd to say that a handful of chemical elements can see red?

I don't think that they can. I'd say that groups of cone cells and
neurons can see red. Our eyeballs basically recapitulate the
Precambrian evolution of solar photosynthesizing microorganisms in an
aqueous saline environment. What we see is something like chlorophyll
green, hemoglobin red, and hemocyanin blue
(http://www.applet-magic.com/lifemolecules.htm). The color that we see
is cellular-molecular awareness shelled out to primate visual
consciousness.

>
> >> I said all the functions behaviors and patterns.  This includes the ones
> >> within the brain, not just external signs like the salinity of tears.
>
> > But a computer replicates none of the functions, behaviors, and
> > patterns of the brain.
>
> It can.

It depends on what is doing the computing.

>
> >> It would be a depiction of a computation, or a recording of a computation,
> >> not a computation.  There are no counterfactual conditions in the cartoon.
>
> (To Jason) But an artificial brain component that accidentally
> replicated the behaviour of the original component, without the
> ability to handle counterfactuals, would leave consciousness intact
> for the period of time it was behaving appropriately.
>
> > A computation is a depiction or recording too. You could make a
> > cartoon with counterfactual conditions in the same way that you can
> > make counterfactual conditions arithmetically in a program. If you
> > adhere to certain rules within a cartoon or computer simulation, then
> > those are the factual conditions subject to error, distortion, etc.
> > Very different from actual experience where the conditions are
> > literally factual and the possibility of counterfactuals cannot, by
> > definition exist.
>
> Handling counterfactuals means the entity would behave differently if
> circumstances were different, which is what programs and humans but
> not recordings do.

A cartoon doesn't have to be a recording. You could have animators
drawing it in real time and responding to different circumstances
dynamically. That doesn't make the cartoon itself conscious, just as
handling counterfactuals doesn't make programs themselves conscious.

>
> >> Someone creating agents in a computer so he could torture them should also
> >> be culpable, and stopped.
>
> > Really? So no violent video games?
>
> If the violent video games caused the characters to feel distress then yes.

By the preceding claim of counterfactual relevance, are you not saying
that they might feel distress already?

>
> > No, the software is just a GUI for the programmer to turn high level
> > programming language into binary code. It doesn't know anything. It's
> > like a comb. It doesn't do or know anything, it's just a tool for you
> > to extend your thoughts into a microelectronic inertial frame.
>
> The brain doesn't know anything. It's just an evolved tool to
> propagate genes, which themselves know nothing either.

You don't need a brain to propagate genes. I agree that the brain
doesn't know who we are, but it does know about neurology.

>
> >> You jump through hoops to invent a reason for humans to have undetermined
> >> free will, but you are able to see clearly that the evolution of a computer
> >> program depends on lower level factors.
>
> > There is no reason for humans to have undetermined free will, they
> > just do (to one degree or another). In another universe, it could
> > theoretically be computers that have free will and us having no choice
> > but to make them, but it isn't a reality in this universe.
>
> How do you know you're not deluded about having what you call free
> will (which you think is incompatible with determinism)?

I've answered this several times. Free will is a feeling. It doesn't
matter whether or not your feelings of free will are validated by any
objective criteria, because the very possibility of the delusion is
sufficient to invalidate determinism. Such a fantasy has no conceivable
reason to exist and no possible mechanism to arise from (how does a
machine pretend to believe it's not a machine?).

>
> >> The software makes the decision, so in this sense it has a will.  Whether 
> >> or
> >> not its software was programmed.  Our DNA was programmed by evolution, yet
> >> we can still make decisions.
>
> > The software doesn't make the decision. The programmer makes the
> > decision, the software just superimposes her model of her decision
> > process on a device. It has no will, it's a 4D reproduction of her 5D
> > will.
>
> So if an advanced alien made a human using the appropriate organic
> material (and not those unfeeling electronic circuits) the human would
> lack free will, even though he would behave as if he had free will and
> believe he had free will.

You don't need an alien to lose your free will. Addiction,
brainwashing, intimidation, and torture can do that. Taking a person's
freedom away is one thing; giving freedom to a stone is something
else. The human can recover their free will, though. If an alien did
make a human, it would not lack free will, because free will would be
the fifth dimension of awareness. It cannot be programmed in the same
way as a 4D chip; it needs to be motivated voluntarily rather than
scripted.

>
> >> If you studied computer science in more depth I think you would change your
> >> mind.
>
> > I understand why you think that, but no. I think it may actually be
> > too much computer science conditioning that is keeping your mind from
> > changing.
>
> It can't hurt to know more about something if you are going to criticise it.
>
> > If protons, electrons, and neutrons know exactly how to become water,
> > then why don't they just do that themselves without having to assemble
> > in nuclear clumps to do it? If the computations that give rise to
> > water behavior from H2O already exist before the existence of H2O,
> > then why go through with the formality of existence?
>
> Subatomic particles become water when they are subjected to the
> appropriate conditions. They have no foreknowledge of water and they
> don't care if they are water or something else. All they do is
> interact in a particular way given certain circumstances, blindly
> following a program if you like.

There is no reason to think that such a program exists. Water is an
invention/discovery of atoms. It was first initiated at a particular
time by specific atoms in this universe (as opposed to all possible
universes). Water is not an arithmetic inevitability; it is the living
echo of an event. The universe makes it up as it goes along, just as
we do (as part of the universe).


> It is from many, many such
> interactions following simple rules that the complex universe arises.

I think it's more likely the other way around. Simple rules can be
derived by complex entities to make sense of the universe. The
universe arises from no rules at all. It arises as the possibility of
sense experience from the impossibility of non-sense non-experience.
Like infancy or awakening from sleep, coherent order emerges from
incoherent multivalent singularity. Complexity can give rise to
simplicity and the other way around.

>
> >> > Where does novelty come from in a
> >> > universe of fixed laws?
>
> >> Different permutations, arrangements and organizations.
>
> > If H2O is water before H2O exists, then it's not novel.
>
> Something that didn't exist before is novel. Something that didn't
> exist before and we could not anticipate is novel and surprising.

At some point, H2O had to have been either novel and surprising or
predetermined and redundant. The possibility of its existence can't
be both novel and eternally predetermined.

>
> >> > What law allows for novelty?
>
> >> Imagine a fixed set of laws that describes how to move a card in a deck of
> >> playing cards to another position in the deck.  This allows
> >> 8,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
> >> novel deck configurations.  From just 52 cards.  Now imagine a universe 
> >> with
> >> 10^90 particles.
>
> > That's that same complexity fetish I keep mentioning. Complexity does
> > not impress me. Red impresses me. Complexity means nothing if the
> > fundamental unit can't do something different with it. You can have an
> > infinite number of permutations and an infinite number of cards but it
> > won't mean anything if the cards are all blank.
>
> Complexity may not impress you, but the multiple permutation your
> brain can be in accounts for the multiple thoughts you can have.

Accounting is not explaining. Which actually sums up my entire
position on this endless thread. Consciousness explains and counts.
Computers only count. Come up with an algorithm for explanation, and
put it into an electronic explainer, and we will have true AGI.
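
As an aside, the deck figure quoted above is just 52 factorial, roughly
8 x 10^67, which is a one-liner to check in Python:

    import math

    # Number of distinct orderings of a standard 52-card deck: 52!
    print(math.factorial(52))            # a 68-digit number
    print(f"{math.factorial(52):.3e}")   # approximately 8.066e+67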

Craig
