On Sun, Sep 18, 2011 at 8:14 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> On Sep 17, 12:44 pm, Jason Resch <jasonre...@gmail.com> wrote:

>> Some are evolved, and not designed by the programmers mind, and even do
>> things that surprise the programmer.  Like when I observed flocking
>> behaviors in the smart sweepers which was a way to optimize food collection,
>> but not one I thought of.
>
> I get what you mean, but it's only the visualization of the behavior
> that is unanticipated by the observer. The smart sweepers don't evolve
> into anything that can't be recreated with the same program, such as
> growing ears or learning how to fly.

Evolution follows from the fact that DNA replication is not 100%
accurate and if the resulting organism is successful the mutation will
be propagated. This can lead to surprising results but not magical
results. The potential for evolution is programmed into the organism
to begin with, and if you had a good enough simulation you could run
it and see the variations possible under different environments.
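
To make that concrete, here is a toy sketch in Python, along the lines
of Dawkins' "weasel" program. Everything in it is invented for the
illustration; it shows the principle, not biology:

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # stands in for the environment
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(genome):
    # how well this variant "fits" its environment
    return sum(a == b for a, b in zip(genome, TARGET))

def replicate(genome, error_rate=0.02):
    # replication is not 100% accurate: letters occasionally mutate
    return "".join(random.choice(ALPHABET) if random.random() < error_rate
                   else letter for letter in genome)

genome = "".join(random.choice(ALPHABET) for _ in TARGET)
while genome != TARGET:
    offspring = [replicate(genome) for _ in range(100)]
    genome = max([genome] + offspring, key=fitness)   # success propagates
print(genome)

Unreliable copying plus selection is the whole loop; there is no
foresight anywhere in it, yet it reliably reaches the target.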

>> A single instruction is like a single neurotransmitter release, which only
>> tells a nearby neuron: "Consider releasing your neurotransmitters".  However
>> when you have millions or billions of such instructions interacting
>> with each other, very complex and novel behaviors can result.
>
> I see that as using complexity as a crutch or veil to obscure a
> nonsense premise behind it. It only makes sense if the neuron and
> neurotransmitter both have some degree of awareness associated with
> them to begin with. One neuron can understand a neurotransmitter sent
> by another, but there is no understanding in the 'sendingness', which
> is what you are actually saying. A book does not read itself. There is
> no consciousness of words by themselves on a page. They need an
> interpreter which is word-conscious.

What would happen if the neurons and neurotransmitters did their thing
without the awareness that you postulate? Would the observable
behaviour be the same? How could it not be, if the chemical reactions
remain the same?

>> Awareness of red is in principle no different than the awareness of zero,
>> just much more involved.
>
> A robot probably has no awareness of zero either. Just current that is
> flowing through circuits or not.

What if the robot said that you had no understanding of red, that it
was just chemical reactions in your brain?

>> We can inspect the computational state, it is not private.  Any programmer
>> who has attached a debugger to a process knows this.
>
> If the computational state is the only interiority a computer has,
> then we know for sure that it has no sensorimotive experience. You
> have to understand that if it's not private, it's not sensorimotive. I
> allow for a hypothetical proto-awareness detection of doped materials
> semiconducting electric current, and that would be a private
> experience, but the observation of a computational state only tells us
> about the public consequences of electromagnetic change, not anything
> about feeling or awareness.

So you're now admitting the computer could have private experiences
you don't know about? Why couldn't these experiences be as rich and
complex as your own?
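
Incidentally, Jason's debugger point is trivial to demonstrate. A
minimal Python sketch, standard library only (the function and the
variable are invented for the example):

import inspect

def private_process():
    mood = "contemplating red"        # an invented "inner" state
    return inspect.currentframe()     # hand the frame to an inspector

frame = private_process()
print(frame.f_locals["mood"])         # -> contemplating red

Whatever state the computation is in, a debugger can read it out; there
is no interior that inspection cannot reach.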

>> Some programs do things we never expected.
>
> Only if we haven't been smart enough or taken enough time to figure
> out what to expect. Execution of code is just a formality of exposing
> the inevitable consequences of its logic. Experience is the opposite.
> It is an essential exposure of unknowable and translogical
> possibilities. Yellow does not logically follow from red and blue.
> It's nothing like the herding behavior of a smart sweeper.

The whole idea of the program is that we're not smart enough to figure
out what to expect; otherwise why run the program? A program
simulating a bacterium will be as surprising as the actual bacterium
is.

>> What you say is that computers will become conscious once we attach 3D
>> printers to them so they can reproduce.
>
> I have never said anything remotely like that. I'm saying that
> computation is irrelevant in instantiating consciousness, only in
> modulating and elaborating its forms. Your brain computes, but it
> also feels and experiences a human life. A PC computes, but it feels
> only what any electronic component feels, which is probably not much
> compared to us.

You have several times said that the ability to evolve and reproduce
is somehow relevant to consciousness, so it's reasonable to ask if a
computer that can reproduce is closer to consciousness than one that
can't, and whether a sterilised human is less conscious than a human
who has not been sterilised.

>> That doesn't seem plausible.
>>  Besides, in the realm of software, self-replicating viruses and worms have
>> existed for a long time already.
>
> Viruses don't seem to have any more capacity to feel or generate
> novelty than any other program. A virus doesn't develop its own agenda; it
> just executes the motives of the programmer recursively by replicating
> itself to any instances of the target software that it can locate.

Like natural viruses.

>> Because in the same way a record player has the ability to play any record,
>> a silicon chip has the ability to replicate/predict the behavior of any
>> finite machine.
>
> Then the problem is that you assume that our consciousness is a finite
> machine. It isn't. It has finite aspects and mechanistic aspects, but
> it has many other senses and motives that cannot be meaningfully
> described that way. A record player can play a record for someone to
> listen to, but it can't itself listen to any record it plays. You need
> a subject to experience the computation in its own concrete
> perceptual terms.

Our brain is a finite machine, and our consciousness apparently
supervenes on our brain states. Since there are a finite number of
possible brain states there are a finite number of possible conscious
states. Do you claim that multiple conscious states could be
associated with the one brain state? That would mean we are thinking
without our brain.
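
Jason's record-player analogy can be made exact: one fixed interpreter
runs any finite machine you hand it, given its transition table. A
minimal sketch, with an invented two-state machine:

def run_machine(transitions, state, inputs):
    # one fixed program replays ANY finite machine,
    # the way one record player plays any record
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

# an invented machine that tracks the parity of 1s in its input
parity = {("even", "0"): "even", ("even", "1"): "odd",
          ("odd", "0"): "odd", ("odd", "1"): "even"}
print(run_machine(parity, "even", "1101"))   # -> odd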

>> Ping pong balls can be arranged in a near infinite number of combinations.
>>  Assuming a Turing machine made of ping pong balls, then you do get
>> different characteristics from the different combinations.
>
> You are assuming that characteristics are produced by combinations
> rather than the character of the fundamental unit. My point is that
> assumption is unsupported and that in fact, the entire universe is
> based upon the principle that the same combinations of different
> fundamental units behave differently. A group of protons is different
> than a group of atoms or cells or stars.

Different substances can perform the same function. You claim that
consciousness is somehow associated with the substance more than with
the function. This is not obvious a priori - one claim is not obviously
better than the other, and you need to present evidence to help decide
which is correct.

> Those are challenges of a reductio ad absurdum nature. I'm hoping that
> you'll see that they are silly. When you say that a group of milk
> bottles can see red, you are intending for me to take you seriously,
> but I don't think that you really take that position seriously
> yourself, you're just making an empty, legalistic argument about it.

Why is it not absurd to say that a handful of chemical elements can see red?

>> I said all the functions, behaviors, and patterns.  This includes the ones
>> within the brain, not just external signs like the salinity of tears.
>
> But a computer replicates none of the functions, behaviors, and
> patterns of the brain.

It can.

>> It would be a depiction of a computation, or a recording of a computation,
>> not a computation.  There are no counterfactual conditions in the cartoon.

(To Jason) But an artificial brain component that accidentally
replicated the behaviour of the original component, without the
ability to handle counterfactuals, would leave consciousness intact
for the period of time it was behaving appropriately.

> A computation is a depiction or recording too. You could make a
> cartoon with counterfactual conditions in the same way that you can
> make counterfactual conditions arithmetically in a program. If you
> adhere to certain rules within a cartoon or computer simulation, then
> those are the factual conditions subject to error, distortion, etc.
> Very different from actual experience where the conditions are
> literally factual and the possibility of counterfactuals cannot, by
> definition, exist.

Handling counterfactuals means the entity would behave differently if
circumstances were different, which is what programs and humans do but
recordings do not.
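
In code the distinction is a one-liner. A sketch, with invented
numbers:

def thermostat(temperature):
    # a program handles counterfactuals: had the input been
    # different, the output would have been different
    return "heat on" if temperature < 18 else "heat off"

recording = ["heat on", "heat on", "heat off"]   # a recording just replays
print(thermostat(5), thermostat(25))   # responds to either circumstance
print(recording[0])                    # same output whatever the weather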

>> Someone creating agents in a computer so he could torture them should also
>> be culpable, and stopped.
>
> Really? So no violent video games?

If the violent video games caused the characters to feel distress then yes.

> No, the software is just a GUI for the programmer to turn a high-level
> programming language into binary code. It doesn't know anything. It's
> like a comb. It doesn't do or know anything, it's just a tool for you
> to extend your thoughts into a microelectronic inertial frame.

The brain doesn't know anything. It's just an evolved tool to
propagate genes, which themselves know nothing either.

>> You jump through hoops to invent a reason for humans to have undetermined
>> free will, but you are able to see clearly that the evolution of a computer
>> program depends on lower level factors.
>
> There is no reason for humans to have undetermined free will, they
> just do (to one degree or another). In another universe, it could
> theoretically be computers that have free will while we have no choice
> but to make them, but it isn't a reality in this universe.

How do you know you're not deluded about having what you call free
will (which you think is incompatible with determinism)?

>> The software makes the decision, so in this sense it has a will.  Whether or
>> not its software was programmed.  Our DNA was programmed by evolution, yet
>> we can still make decisions.
>
> The software doesn't make the decision. The programmer makes the
> decision, the software just superimposes her model of her decision
> process on a device. It has no will, it's a 4D reproduction of her 5D
> will.

So if an advanced alien made a human using the appropriate organic
material (and not those unfeeling electronic circuits), the human would
lack free will, even though he would behave as if he had free will and
believe he had free will.

>> If you studied computer science in more depth I think you would change your
>> mind.
>
> I understand why you think that, but no. I think it may actually be
> too much computer science conditioning that is keeping your mind from
> changing.

It can't hurt to know more about something if you are going to criticise it.

> If protons, electrons, and neutrons know exactly how to become water,
> then why don't they just do that themselves without having to assemble
> in nuclear clumps to do it? If the computations that give rise to
> water behavior from H2O already exist before the existence of H2O,
> then why go through with the formality of existence?

Subatomic particles become water when they are subjected to the
appropriate conditions. They have no foreknowledge of water and they
don't care if they are water or something else. All they do is
interact in a particular way given certain circumstances, blindly
following a program if you like. It is from many, many such
interactions following simple rules that the complex universe arises.
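
Conway's Game of Life is the stock demonstration: blind local rules and
nothing else, yet structure arises anyway. A minimal sketch:

from collections import Counter

def step(live):
    # live is a set of (x, y) cells; the rules below mention nothing
    # about gliders, yet gliders arise anyway
    neighbours = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(glider)   # the same shape, shifted one cell diagonally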

>> > Where does novelty come from in a
>> > universe of fixed laws?
>>
>> Different permutations, arrangements and organizations.
>
> If H2O is water before H2O exists, then it's not novel.

Something that didn't exist before is novel. Something that didn't
exist before and we could not anticipate is novel and surprising.

>> > What law allows for novelty?
>>
>> Imagine a fixed set of laws that describes how to move a card in a deck of
>> playing cards to another position in the deck.  This allows
>> 8,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
>> novel deck configurations.  From just 52 cards.  Now imagine a universe with
>> 10^90 particles.
>>
>
> That's that same complexity fetish I keep mentioning. Complexity does
> not impress me. Red impresses me. Complexity means nothing if the
> fundamental unit can't do something different with it. You can have an
> infinite number of permutations and an infinite number of cards but it
> won't mean anything if the cards are all blank.

Complexity may not impress you, but the multiple permutations your
brain can be in account for the multiple thoughts you can have.
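
For the record, Jason's figure is 52 factorial, which one line of
Python confirms:

import math
print(math.factorial(52))   # 80658175170943878571660636856403766975...
# roughly 8.07 x 10^67 distinct orderings of a single deck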


-- 
Stathis Papaioannou
