On Sat, Jul 30, 2011 at 2:55 PM, Craig Weinberg <whatsons...@gmail.com> wrote:

> On Jul 30, 1:08 am, Jason Resch <jasonre...@gmail.com> wrote:
> > On Fri, Jul 29, 2011 at 12:50 AM, Craig Weinberg <whatsons...@gmail.com>
> > wrote:
> > It may or may not, but a system which could respond to questions in the
> > same way you do must understand how you think.
> It's not responding to questions, it's just blowing digits out of an
> exhaust pipe that you interpret as answers to your questions. As far
> as you are concerned, it's useful, maybe even more useful than having
> a real person understand you since it has no choice but to process
> your input to the extent that the script circumscribes, whereas any
> entity which is capable of understanding is capable of ignoring them
> or lying as well.
So would an accurate reproduction of your brain processes, assuming you are
capable of lying and ignoring.

> > The system which responds in the same way you do can offer its opinion on
> > anything.
> It's a fantasy. No system can respond exactly the way you do unless it
> is you.

Explain what you mean by "you".  If you mean the processes, relation, and
information which define me, then I would agree.  If you mean the atoms,
molecules, and wet organic tissues I disagree.

> Any system can appear to respond the way you do under limited
> conditions, as with the YouTube simulation, without having an opinion or
> being aware of you or being a simulation of you.
> > Meaning is anything which makes a difference.
> That's an assumption. I would say that meaning is the experience of
> significance. It's a sensorimotive quality which informs your
> conscious attention. Meaning is something you feel.
> >  If the logic and information
> > dictate changes in some system, then that information is meaningful to
> > the system.
> I understand what you're saying, but it assumes consciousness in an
> unconscious system. Elmer Fudd acts as a system, but he doesn't feel
> anything when he gets slapped upside the head with a frying pan;
> there is no meaning generated within Elmer Fudd,
> even though the cartoon system has changed in a particular way. The
> meaning comes through the feelings we experience when we consciously
> pay attention to the system. Elmer Fudd is fiction. It can't feel.
> Google is fiction. Super mega Google is fiction.

In this example, the frying pan is not what causes Elmer Fudd's depiction to
appear to react.

> > That's not true.  There is not enough memory in the world to store every
> > possible game of chess.  The first 7 or so moves came directly from a
> > game database, but the rest involved the computer deciding what to do
> > based on its understanding of the game.
> It may not have to store all possible games of chess because it can
> compress the redundancies arithmetically.

The ability to form an arithmetic compression is one definition of
understanding.
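
For what it's worth, here is a toy sketch of that idea in Python (a
hypothetical example, not Deep Blue's actual method): a sequence with
regularities compresses far below its raw size, while a patternless one
does not.  Finding the regularity is the "understanding."

# Toy illustration: compressibility tracks regularity.
import zlib, random

patterned = ("e4 e5 Nf3 Nc6 Bb5 a6 " * 100).encode()  # repetitive "opening book"
random.seed(0)
patternless = bytes(random.randrange(256) for _ in range(len(patterned)))

print(len(patterned), len(zlib.compress(patterned)))      # shrinks enormously
print(len(patternless), len(zlib.compress(patternless)))  # barely shrinks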

> It still uses a strategy
> which requires no understanding. It just methodically compares the
> positions of each piece to the positions which will result from each
> scenario that can be simulated.

How do people play chess?
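
For reference, the strategy Craig describes is essentially minimax search:
simulate every line of play to some depth and assume both sides pick their
best move.  A minimal sketch in Python, using the toy game Nim (take 1-3
stones; whoever takes the last stone wins) so that it is self-contained:

def minimax(stones, maximizing):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else +1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

print(minimax(10, True))   # +1: the player to move can force a win

Nothing in this procedure knows it is playing a game, which I take to be
Craig's point; the question is whether a brain's procedures are different
in kind or only in degree.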

> It has no idea that it's playing a
> game, it experiences no difference between winning and not winning.

This is an assumption of yours.  If you saw a paralyzed person, you might
likewise conclude they have no inner experience based on what you could observe.

> A
> computer playing a game by itself is not a game. What happens when
> Deep Blue plays itself I wonder?
> > What in the human brain is not mechanical?
> The experience of the human brain, or any other physical phenomenon, is
> not mechanical.

Does experience affect how the brain works or is it an epiphenomenon?  If it
is not an epiphenomenon, how is it physically/mechanically realized such
that it can have physical effects on how the brain works?

> The notion of a machine is that of a logical abstraction
> which can be felt or felt through, but it has no feeling itself.
This is your hope.

> > > Oh, they absolutely do use some of the same methods. Our brain is
> > > mechanical as well as experiential. We can emulate the mechanical side
> > > with anything. We can't emulate experience at all.
> >
> > Movement, speech, etc. are all mechanical.  So could the mechanical
> > actions of a human be perfectly emulated?  Earlier you suggested it
> > would eventually break down.
> The actions themselves can be emulated but the way that the actions
> are strung together can reveal the holes in the fabric of the
> mechanism. When you make a phone call, how long does it take for you
> to be able to tell whether it's a voicemail system vs a live person?
> Sometimes you can hear recording noise even before the voice begins.
> Occasionally you could be fooled for a few seconds. It's not that the
> system breaks down, it's that the genuine person will sooner or later
> pick up on subtle cues about the system which give it away.

Any discrete simulation can be made 2^512 times more precise just by storing
512 more bits per value.  Even if the brain were continuous rather than
discrete, there is no limit to how accurately its behavior could be
simulated.  If you had 512 decimal places to work with, it is not clear at
all that one would ever detect these subtle cues.
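
As a concrete illustration (a sketch of arbitrary-precision arithmetic, not
a claim about how a brain would actually be simulated), Python's standard
decimal module lets you dial the working precision as high as you like, at
a cost that grows only modestly with the number of digits:

from decimal import Decimal, getcontext

getcontext().prec = 512          # 512 significant decimal digits
two = Decimal(2)
root = two.sqrt()                # sqrt(2) carried to 512 digits
print(root)
print(root * root - two)         # residual error on the order of 1E-511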

> > > If that were the case, then software would be evolving by itself. You
> > > would have to worry about Google self-driving cars teaming up to kill
> > > us. We don't though.
> >
> > The cars aren't programmed with that capability.
> Exactly. A machine can't evolve for its own purposes.

A-life can and does.  Download smart sweepers and see it happen before your
eyes.
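
The mechanism behind demos like that is a genetic algorithm: random
variation plus selection, with no individual solution programmed in.  A
minimal sketch (toy fitness function, not the smart sweepers code itself):

import random

random.seed(1)
POP, BITS, GENS = 30, 20, 60
fitness = lambda genome: sum(genome)           # toy goal: maximize the ones

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                   # selection: keep the fittest half
    children = [[bit ^ (random.random() < 0.05)   # mutation: 5% bit flips
                 for bit in random.choice(parents)]
                for _ in range(POP // 2)]
    pop = parents + children

print(fitness(pop[0]), "of", BITS)             # best genome found by evolution
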
> It has no point
> of view, it can't decide to do something new (even though it can end
> up doing something that is new to us, there is no decision to
> accomplish novelty). It's not really a thing even except in our minds.
> > > Machines don't team up by themselves.
> >
> > They could.
> You just said they can't unless they are programmed with that
> capability. That's not 'by themselves'.

Human beings did not design human brains.  They were programmed by years of
evolution.
> > Neurons are very flexible.  They self-organize, recover from partial
> > damage, reproduce, move around, etc.  But an emulation of neurons would
> > provide the same capabilities.
> You can emulate the alphabet of neuron behaviors but that doesn't mean
> it can use those behaviors in the way that live neurons can.

Why couldn't they?

> > > No, it doesn't know what a 1 or 0 is. A machine executed in
> > > microelectronics likely has the simplest possible sensorimotive
> > > experience, barring some physical environmental effect like heat or
> > > corrosion. It has a single motive: 'Go' and 'No Go'. It has a single
> > > sense: This, this, this, and this are going; that, that, and that are not
> > > going. It's the same sensorimotive experience of a simple molecule.
> > > Valence open. Valence filled.
> >
> > Why is it that biochemistry can get past its lowest levels of valence
> > electrons filled or not, but computer systems cannot get past the
> > properties of silicon?  It is unfair to ignore all the higher level
> > complexity that is possible in a computer.
> For the same reason that hydrogen can't be gold and inorganic matter
> can't be alive.

A lump of coal and a glass of water aren't alive until they are organized
appropriately; what makes them "organic" is a matter of their organization
and the relationships between them.  So long as that organization and those
relationships are preserved, the pattern and its complexity are preserved.

> The computer has no high level complexity,

I think that is an absurd statement.  Is not a computer running a
protein-folding simulation more complex than one with a newly formatted
hard drive and all zeros in its memory?

> we just use
> it to store our high level complexity. Complexity is not awareness.
> > > It's a category error. No amount of sophisticated programming is going
> > > to simulate fire either.
> >
> > Tell that to the God who is running this universe in his computer.
> This universe could no more be simulated on a computer than it could
> on a network of plumbing.

If one can build a Turing machine out of plumbing, it can simulate anything
any other computer can.
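
That is the point about substrate independence: a Turing machine is nothing
but a transition table plus a tape, and nothing about the table cares
whether it is realized in silicon, plumbing, or pencil marks.  A minimal
sketch (a toy machine that appends a 1 to a unary number, not a universal
program):

def run(table, tape, state="scan", pos=0):
    cells = dict(enumerate(tape))
    while state != "halt":
        symbol = cells.get(pos, "_")             # blank cells read as "_"
        write, move, state = table[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

table = {
    ("scan", "1"): ("1", "R", "scan"),   # skip over the existing 1s
    ("scan", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}
print(run(table, "111"))   # -> "1111"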

> Fire isn't made of plumbing.

Fire is made of atoms swapping places and moving around quickly and emitting
photons.  These are all discrete particles with understandable properties
and known relations between them.  Thus these same relations can be
replicated in any computer so that burning may take place.

Also, are you sure you have never heard of Searle?  He used the plumbing
example as well.

> > > > Did some alien intelligence or God have to look down on the first
> > > > life forms that evolved on Earth for them to be alive?
> > > > that evolved on Earth for them to be alive?
> >
> > > Nah. It would have been too boring. A billion years of blue green
> > > algae?
> >
> > So then why does simulated life need a human to interpret it?
> Because it is simulated only through our interpretation.

A tree falling has to be heard for it to produce a sound?  I can understand
that, but what if there is a process which can hear within the simulation?

> By itself it
> is a completely different phenomenon from the original.

Not necessarily.  From the perspective of inside the simulation it can
appear identical.

> A photograph
> of a fish is a simulation using a chemical emulsion on paper. Looks
> like a fish to us, but not to a blind cat.
> > > I'm sympathetic to this as an understanding of the role that qualia
> > > plays in behavior and evolution, but it doesn't address the main
> > > problem of qualia, which is why it would need to exist at all.
> >
> > If you couldn't see, you would be blind.
> That's a tautology. It's like saying 'if you couldn't see the inside of
> your stomach, you wouldn't know how to digest food'. Qualia are not
> necessary to the function of the brain.

I think you need to "see" (having qualia) in order to see.  In other words,
Zombies are not possible.  You cannot have something which performs as
though it can see if it has no visual experience.

> Indeed you can lose
> consciousness every night and your brain has no problem surviving
> without your experience of it.

You might have trouble running away from a lion while sleeping though.

> Your body can find food and reproduce
> without there being any sense of color or shape, odor, flavor, etc.

It seems it would be significantly harder to find food if we had no sense of
shape or color (is that a strawberry or a rock?).  You would expend a lot of
energy chasing phantom food.

> It
> could detect and utilize its surroundings mechanically, like Deep
> Blue, without ever having to feel that it is a thing.

I doubt you could have something with the same level of survivability as a
human that did not have experience.

> > > Why
> > > have experience when you can just have organisms responding to their
> > > environment as a machine does.
> >
> > Experiences are necessary for you to behave as you do.
> Tautological.

So you also doubt zombies?  Then you should not be worrying about processes
which behave as humans do but have no inner life.

> Why is it necessary that I behave as I do?
> > In the right context, an isomorphic organization is possible, assuming no
> > infinities are involved.
> >
> > > That's why no living thing can survive on silicon.
> >
> > Silicon doesn't replace carbon in biochemistry because it is too large,
> > and does not fit snugly between the atoms it holds together as the
> > smaller carbon atoms do.
> Exactly. It's different. Can't do the same things with it.
No, but you could use it to build a universal machine which can emulate any
definable process.

