Hi Jason,

Might I point out that making copies only applies when one has a different position in space to put the copy, which presupposes space. What if you don't have any space?

Onward!

Stephen

On 7/28/2011 3:44 PM, Craig Weinberg wrote:
On Jul 28, 11:23 am, Jason Resch <jasonre...@gmail.com> wrote:
On Thu, Jul 28, 2011 at 8:31 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
No, not at all. It's like saying that a copy machine would have to
understand Chinese to be able to copy all of the pages in all of the
books of a library in China.
No it would not.  Copying a page is like generating a recording of a page,
using paper as the recording medium.  My example is more like a copy machine
that could generate a page of responses given a page of questions.
Remember, in this example, the machine is generating behavior equivalent to
your own, based purely on the inputs of questions and statements received
from other humans, not based on what you have already done.
It's only you who is aware that they are questions. Your copy machine
is no different from any copy machine except that it generates a
recording of a different page than the one fed into it. The logic of
which input records correlate to which output records can be quite
effective (like Google) but there is no experience of intelligence
being generated within the system. Any perceived intelligence is
purely the reflection of an intelligent observer's own semantic
awareness.

Intelligence doesn't rub off on an a-signifying machine, even if that
machine is extremely robust. A glove doesn't grow a hand inside of it
just because it's shaped like a hand.
This is what I'm talking about with ACME-OMMM. The exterior of the
cosmos works in the opposite way of the interior. It's a completely
different kind of sense, but the two topologies overlap - as
sensation<>electromagnetism and through fusion><transcendence.
Please tell me where the field of AI will run into a wall.  So far we have
machines that can understand speech, drive cars, translate between
languages, beat chess grandmasters, and beat Jeopardy champions.  Do you
have a prediction for what machines will not be able to do (behavior-wise)?
What do you consider behavior? I don't think that inorganic machines
will be able to care about anything, to imagine anything, to feel, to
experience anything, etc. Machines don't understand speech, they just
imitate the understanding of speech. They don't beat people in chess,
they become chess... they have no idea what a person is. Same with
Jeopardy - they embody the game, not the player. They can beat a person
because they have the answers already and it's just a matter of
quantifying semantic similarity and probability. It's impressive, but
it actually has very little to do with machines being able to think or
feel by themselves.
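
For concreteness, here is a toy Python sketch of what "quantifying
semantic similarity" amounts to mechanically (only an illustration,
not Watson's actual pipeline; the clue and candidates are made up):

from collections import Counter
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two bag-of-words count vectors.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

clue = "this founding father flew a kite in a thunderstorm"
candidates = ["benjamin franklin flew a kite in a thunderstorm",
              "thomas edison invented the light bulb"]
# Pick the stored answer whose wording overlaps the clue the most.
print(max(candidates, key=lambda c: cosine_similarity(clue, c)))

The machine scores word overlap; nothing in it knows what a kite is.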

Think of the burning log example. I think that feeling is fairly
analogous to fire in this example.
Remember, we are ignoring the internal processes for this example and
considering only external behavior.
Remember, external behavior is influenced by internal experience
sooner or later.

Should I accept the possibility of
intelligent machines made of non-organic material being able to
someday reproduce the heat and flame of fire so well that we can toast
marshmallows over it? What a computer can 'reproduce' is 100%
dependent upon its human interface. If you build a monitor that has
hot pixels, then you can get some heat out of a picture of fire
recorded in heatvision. If not, there's just the image. With no
monitor, there's nothing.
To set aside the issue of replicating fire, we have constrained all input
and output to what a monitor can offer:
Case 1: Video generated by encoding an interview with you seen over a
monitor
Case 2: Video generated by an intelligent process seen over a monitor
Not sure what you're saying here.

If, as I'm suggesting, human feeling is a function of biochemical
interiority and not arithmetic, you would have to, at some point, use
organic-like materials to get organic feeling to drive organic
behavior.
Computers drive robots which build cars, but they don't need feelings to
move.  Remember, everything externally visible that a human does is motion.
Moving arms, legs, vocal cords, eyelids, etc.  What you are proposing is
that there are ways a human can move that are impossible for machines.
If a person cares about something, they do things differently than if
they don't. Sooner or later, the machine forgets to stop playing
Monopoly and the house burns down. A person doesn't do that as often.
It's not about ways a human can move that a machine cannot, it's about
being able to relate to having to sneeze or worrying about money.
Movement is not intelligence or consciousness. As with frog legs and
headless chickens, movement does not require sentience.

Think of it like a fractal: no matter how deep you go into
the design, it's still the same thing. A picture of water is more and
more like a picture and less and less like water the more closely you
can examine it. It's just a matter of time before any inorganic material
reveals its lack of feeling and reliance on canned subroutines rather
than sensorimotive participation at the biochemical level.
Is there a limit on what that matter of time is?  Could it be a day, a year,
a century, 100 billion years, infinitely long?
It depends entirely on the circumstances and perceptual relativity.
Luck. Intention. Like everything else, it will happen when it is time
for it to happen.

No. The aspects of intelligence which can be replicated by a
mechanical process are only superficial services, by, for, and of
human organic intelligence. On its own such a mechanism replicates
nothing. It just cycles through meaningless patterns of semiconductor
circuitry.
Again your silicon racism is showing through.  What makes the patterns in
silicon meaningless but the patterns in carbon meaningful?
Patterns in carbon are not more meaningful, it's just that carbon gets
together with oxygen, hydrogen, and nitrogen to make molecules that
team up as a cell... the cell can experience patterns of a
qualitatively more significant nature than can the molecule or atom.
If silicon could be made to group into a self-replicating molecule
that autopoiesized into a 'sill' then it too would be able to have
qualitatively more significant experiences, however not necessarily
the same as a carbon based cell.

I often feel I
am talking to John Searle himself when I converse with you.
Hah. Stephen thinks I might be channeling Leibniz. I don't know enough
about either one of them to say.

Can we see our own intelligence reflected in an inanimate
system? Sure, if we choose to. I can imagine that Watson or a talking
teddy bear is sentient if I want. Neither of them will ever be able to
imagine anything though.
There are computers with creative powers.  One computer is actually credited
with a patent: http://www.miraclesmagicinc.com/science/man-builds-inventing-machine....
Yes, reflected intelligence can be useful and creative if we choose to
consider it that way. The computer doesn't know that it has a patent
or that it invented anything though.

If a zombie can believe something then it's not a zombie.
This is why you must reject the idea that machines can have beliefs to
remain consistent.  I think Bruno has some proofs that demonstrate machines
have beliefs.  I am not certain of this though.
I do reject the idea that machines have beliefs. A belief implies that
one is choosing to care about something. A machine can't care about
anything. That's what we like about them. They will work 24 hours a
day until they break. They don't care that they are killing
themselves.

If you would answer the following questions, I think it would make your
position much clearer to me:

Do you think machines can learn?
Yes, but not in the way you're thinking of. A semiconductor learns
that there is a sexy capacitor behind door number one or a new parking
lot of RAM that was just added. A brand new engine or pair of shoes
learns to settle into an optimal performance plateau and to fall apart
eventually.

Do you think machines can adapt?
Living organisms are machines too, and they do adapt. What experiences
the adaptation however is not a machine.

Do you think machines can have information?
Not by themselves. Information is in the eye of the beholder. It has
no independent existence.

Do you think machines can have knowledge?
Humans access their own knowledge through machines. Inorganic machines
don't have their own knowledge.

Do you think machines can have internal models of external patterns?
Not really. Machines don't have an interior other than the experiences
of the material that they are made of. Our machines are external
models of our internal patterns superimposed on a willing inert
substance. Machines don't model, we model using machines.

Do you think machines can understand?
No.

Do you think machines can use their internal models to make optimal
decisions?
There isn't a 'their' there. The models are ours. We use machines to
optimize our own models. They don't know that they are doing that,
they are just trying to complete the circuit.

Do you think machines can behave intelligently?
Machines can be designed intelligently, and that design is reflected
in their behavior.

It can't have identical patterns of thought unless it is a physically
identical brain.
This means thought depends on everything going on in the brain, including
things which have apparently no relation whatsoever to how one thinks.  Such
as the number of neutrinos passing through the person's head.  Certainly
there are some physical details which do not matter toward how the brain
works.  Would you agree that two almost physically identical minds, one with
neutrinos passing through, and one with 3 times as many neutrinos passing
through, could have the same patterns of thought?
I don't necessarily believe that neutrinos exist. I doubt the entire
standard model. Anything that does exist would interact with the brain
at some level, although that level may not make the cut up to any kind
of conscious awareness. That doesn't mean that nothing in the brain is
aware of it.

If I walk down the Champs-Élysées I'm just walking in
a straight line. I can walk the identical straight line pattern
through a junkyard, but I have not replicated the Champs-Élysées.
There is a concept of two things looking very different, but in essence
being identical.  For example, two graphs may be isomorphic
(https://secure.wikimedia.org/wikipedia/en/wiki/Graph_isomorphism#Example):
they are identical in all the ways that are important.  This is what I am
proposing with an identical mechanical mind.  It might look very different,
but it preserves the same patterns, organization, and capabilities that are
relevant to the operation of that mind.  Its inputs and outputs may be
translated to yield identical behaviors given identical inputs.
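
A minimal Python sketch of the isomorphism idea (a hypothetical example,
not taken from the linked page): two triangles with different labels, and
a relabeling under which every edge is preserved.

# Same structure, different appearance: a triangle over letters and
# a triangle over numbers.
g1 = {("a", "b"), ("b", "c"), ("c", "a")}
g2 = {(1, 2), (2, 3), (3, 1)}
mapping = {"a": 1, "b": 2, "c": 3}  # the proposed correspondence

def preserves_edges(edges, other, m):
    # True if relabeling every edge via m lands on an edge of the other graph.
    return all((m[u], m[v]) in other or (m[v], m[u]) in other
               for u, v in edges)

print(preserves_edges(g1, g2, mapping))  # True: identical in structure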
They aren't identical in any way. It's your intellect that equates
them. What is important to you may not meet the minimum requirements
of what is important to building a mind.

Neurology is like the four dimensional shadow of perception, which is
a completely different four dimensions, organized not through physical
patterns but semantic patterns which can manifest throughout the
nervous system and even beyond it.
Only biology can have semantic patterns because only biology evolved?
No, there are semantic patterns happening in every physical
phenomenon, it's just not the kind of semantic patterns that we can
relate to directly, because what we are is neurology, which is
meta-zoology, which is meta-biology, which is meta-chemistry (which is
technically meta-physics).

What is your opinion on the field of a-life (in which the evolution of life
forms is simulated)?  Could these develop minds that perceive or have
semantic patterns?  Why or why not?  For an example of a-life, see:
http://www.ventrella.com/darwin/darwin.html
If we choose to see those patterns as having a mind, we can do that,
but no. For the same reason that you can't have a photograph of a
campfire warm your hands. There's no life there. No survival. No
reason for it to have to care. It's just an intellectual shadow of
survival, life, reason, and caring. It could make some really
interesting patterns though. We could definitely learn something.

I used to have the same view you had as well, that computers could not be
conscious.  Somewhere along the way of a philosophy of mind course, studying
computer science, seeing the movie "The Prestige" and much thought and
research on the matter, my opinion changed.  I think it was mainly in seeing
how dualism and epiphenomenalism fail to explain anything, and then
understanding the universality of Turing machines along with functionalism
imply that any process, including that of the mind, could be simulated.
Then my belief in the logical impossibility of zombies led to the belief
that such a process would necessarily be conscious.  I think since then I
have gone further, and developed a theory of how qualia result from such
processes.
If you realize that a computer cannot virtualize the hardware that it
runs on as faster, more powerful hardware, then it follows that a
Turing machine can only simulate that which can be simulated. First
hand experience either occurs through experience or it does not occur
at all. A machine is not capable of experience, by definition. A
physical substance can be made to act in accordance with a machine
logic, but that substance does not experience that logic unless it can
relate to it directly. If you put a giant magnet next to a computer,
it's going to have an effect on that computer's logic regardless of
the program.

What's your theory of qualia?

If we look at
what really is going on in our own experience though, rather than
trying to make sense out of it using only linear logic, we can see
that there is more to being a person than can be represented
symbolically. There is no substitute for the ontology of experience
(repeat 100,000 times).
Depends what you are repeating.  Consider in your brain:
Step 1: Update position of each of the ~10^30 particles in your body
according to the field forces
Step 2: Repeat
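
In code, that two-step loop might look like this minimal sketch (three
particles standing in for ~10^30, and a made-up spring force standing in
for the real field forces):

dt = 0.01
particles = [{"pos": 0.0, "vel": 1.0},
             {"pos": 1.0, "vel": 0.0},
             {"pos": 2.0, "vel": -0.5}]

def force(p):
    # Toy stand-in for the field forces: a pull toward the origin.
    return -p["pos"]

for step in range(100000):        # Step 2: repeat
    for p in particles:           # Step 1: update every particle
        p["vel"] += force(p) * dt
        p["pos"] += p["vel"] * dt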
What is the experience of any 10 of those particles? How many
particles does it take before an experience takes place? Will that
experience be the same whether those particles are atoms or ping pong
balls or digital vectors? Why can't we just update the experience
directly? Where is it being experienced? Why can't we just see it with
the naked eye?

Craig


