On Thu, Jul 28, 2011 at 8:31 AM, Craig Weinberg <whatsons...@gmail.com> wrote:

> On Jul 28, 12:02 am, Jason Resch <jasonre...@gmail.com> wrote:
>
> > Say we made a youtube program that could be interviewed for years by your
> > friends and family, people on this list, and so forth, and it could
> generate
> > responses similar enough to what you would do and say in the same
> > circumstance that no one could tell the difference.  Would the
> intelligence
> > of this process not have to (at a minimum) understand your intelligence
> at
> > some level, in order to replicate it?
>
> No, not at all. It's like saying that a copy machine would have to
> understand Chinese to be able to copy all of the pages in all of the
> books of a library in China.


No, it would not.  Copying a page is like generating a recording of a page,
using paper as the recording medium.  My example is more like a copy machine
that could generate a page of responses given a page of questions.
Remember, in this example, the machine is generating behavior equivalent to
your own, based purely on the inputs of questions and statements received
from other humans, not based on what you have already done.



> Intelligence doesn't rub off on an a-
> signifying machine, even if that machine is extremely robust. A glove
> doesn't grow a hand inside of it just because it's shaped like a hand.
> This is what I'm talking about with ACME-OMMM. The exterior of the
> cosmos works in the opposite way of the interior. It's a completely
> different kind of sense, but the two topologies overlap - as
> sensation<>electromagnetism and through fusion><transcendence.
>

Please tell me where the field of AI will run into a wall.  So far we have
machines that can understand speech, drive cars, translate between
languages, beat chess grandmasters, and beat Jeopardy champions.  Do you
have a prediction for what machines will not be able to do (behavior-wise)?


>
> > I think a smart process can emulate a stupid one, but I don't see any way
> > for a stupid process to emulate a smart one.  Unless you doubt the
> > possibility of strong ai (http://en.wikipedia.org/wiki/Strong_AI), then
> > you should accept the possibility of intelligent machines made of
> > non-organic material, which can reproduce the externally visible behavior
> of
> > any intelligent being.
>
> Think of the burning log example. I think that feeling is fairly
> analogous to fire in this example.


Remember, we are ignoring the internal processes for this example and
considering only external behavior.


> Should I accept the possibility of
> intelligent machines made of non-organic material being able to
> someday reproduce the heat and flame of fire so well that we can toast
> marshmallows over it? What a computer can 'reproduce' is 100%
> dependent upon its human interface. If you build a monitor that has
> hot pixels, then you can get some heat out of a picture of fire
> recorded in heatvision. If not, there's just the image. With no
> monitor, there's nothing.
>

To set aside the issue of replicating fire, we have constrained all input
and output to what a monitor can offer:
Case 1: Video generated by encoding an interview with you, seen over a
monitor
Case 2: Video generated by an intelligent process, seen over a monitor


>
> If, as I'm suggesting, human feeling is a function of biochemical
> interiority and not arithmetic, you would have to, at some point, use
> organic-like materials to get organic feeling to drive organic
> behavior.


Computers drive robots which build cars, but they don't need feelings to
move.  Remember, everything externally visible that a human does is motion.
Moving arms, legs, vocal cords, eyelids, etc.  What you are proposing is
that there are ways a human can move that are impossible for machines.


> Think of it like a fractal, no matter how deep you go into
> the design, it's still the same thing. A picture of water is more and
> more like a picture and less and less like water the more that you can
> examine it. It's just a matter of time before any inorganic material
> reveals its lack of feeling and reliance on canned subroutines rather
> than sensorimotive participation at the biochemical level.
>

Is there a limit on how long that "matter of time" could be?  Could it be a
day, a year, a century, 100 billion years, infinitely long?


>
> > If you are in agreement so far, that intelligence, even human
> intelligence
> > can be replicated by a mechanical process,
>
> No. The aspects of intelligence which can be replicated by a
> mechanical process are only superficial services, by, for, and of
> human organic intelligence. On its own such a mechanism replicates
> nothing. It just cycles through meaningless patterns of semiconductor
> circuitry.


Again your silicon racism is showing through.  What makes the patterns in
silicon meaningless but the patterns in carbon meaningful?  I often feel I
am talking to John Searle himself when I converse with you.


> Can we see our own intelligence reflected in an inanimate
> system? Sure, if we choose to. I can imagine that Watson or a talking
> teddy bear is sentient if I want. Neither of them will ever be able to
> imagine anything though.
>

There are computers with creative powers.  One computer is actually credited
with a patent:
http://www.miraclesmagicinc.com/science/man-builds-inventing-machine.html


>
> > This is closely related to the other thought experiment, which presumes
> > zombies that act like they can see, think that they can see, and believe
> > that they can see (but in truth they are blind).  If a zombie believes it
> > can see, what makes its belief false but your belief that you can see
> true?
>
> If a zombie can believe something then it's not a zombie.


This is why, to remain consistent, you must reject the idea that machines
can have beliefs.  I think Bruno has some proofs that demonstrate machines
can have beliefs, though I am not certain of this.


> You're
> answering your own question. Beliefs do not have to correspond to
> anything external to be true. Therefore nothing external necessarily
> corresponds to an internal belief, however internal beliefs can and do
> drive external behaviors. If you worship Ganesha, you might eventually
> wear something with Ganesha on it, but putting a Ganesha T-Shirt on a
> mannequin doesn't give it a belief in Hindu gods. Even a really sporty
> audio-animatronic mannequin with eyes that seem to follow you. That I
> can see is not a belief. It doesn't need any external 'truth' to
> validate it. It is a self-evident presentation. I may not be able to
> see what others see, but that's something else. Maybe they're deluded
> and only think they can see.
>

If you would answer the following questions, I think it would make your
position much clearer to me:

Do you think machines can learn?
Do you think machines can adapt?
Do you think machines can have information?
Do you think machines can have knowledge?
Do you think machines can have internal models of external patterns?
Do you think machines can understand?
Do you think machines can use their internal models to make optimal
decisions?
Do you think machines can behave intelligently?


>
> > It is wrong for reasons no one can ever prove or demonstrate.  The
> process
> > of lying in a brain is different from telling the truth, yet a mechanical
> > brain with the same neural network would have identical patterns of
> thought
> > that would not be consistent with lying.
>
> It can't have identical patterns of thought unless it is a physically
> identical brain.


This means thought depends on everything going on in the brain, including
things which have apparently no relation whatsoever to how one thinks, such
as the number of neutrinos passing through the person's head.  Certainly
there are some physical details which do not matter toward how the brain
works.  Would you agree that two almost physically identical minds, one with
neutrinos passing through, and one with 3 times as many neutrinos passing
through, could have the same patterns of thought?


> If I walk down the Champs-Élysées I'm just walking in
> a straight line. I can walk the identical straight line pattern
> through a junkyard, but I have not replicated the Champs-Élysées.
>

There is a concept of two things looking very different, but in essence
being identical.  For example, two graphs may be isomorphic (
https://secure.wikimedia.org/wikipedia/en/wiki/Graph_isomorphism#Example ):
they are identical in all the ways that are important.  This is what I am
proposing with an identical mechanical mind.  It might look very different,
but it preserves the same patterns, organization, and capabilities that are
relevant to the operation of that mind.  Its inputs and outputs may be
translated to yield identical behaviors given identical inputs.
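
To make the isomorphism point concrete, here is a minimal Python sketch
(assuming the networkx library; the vertex names and edges are invented
purely for illustration) of two graphs with entirely different labels that
are nonetheless structurally identical:

import networkx as nx

# A triangle described with neuron-like labels
brain = nx.Graph()
brain.add_edges_from([("neuron_a", "neuron_b"),
                      ("neuron_b", "neuron_c"),
                      ("neuron_c", "neuron_a")])

# The "same" triangle described with circuit-like labels
chip = nx.Graph()
chip.add_edges_from([("gate_1", "gate_2"),
                     ("gate_2", "gate_3"),
                     ("gate_3", "gate_1")])

# Different labels, different "materials", same structure
print(nx.is_isomorphic(brain, chip))  # True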


> Neurology is like the four dimensional shadow of perception, which is
> a completely different four dimensions, organized not through physical
> patterns but semantic patterns which can manifest throughout the
> nervous system and even beyond it.
>

Only biology can have semantic patterns because only biology evolved?  What
is your opinion on the field of a-life (in which the evolution of life forms
is simulated)?  Could such simulations develop minds that perceive or have
semantic patterns?  Why or why not?  For an example of a-life, see:
http://www.ventrella.com/darwin/darwin.html
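
To give a mechanical sense of what "simulated evolution" means, here is a
minimal, purely illustrative Python sketch (not Ventrella's code; the
fitness function, mutation rate, and population size are invented for the
example) in which bit-string "organisms" evolve toward a target environment
by mutation and selection:

import random

TARGET = [1] * 20  # an arbitrary "ideal" genome for this toy example

def fitness(genome):
    # Count how many bits match the target environment
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    # Each bit flips with a small probability
    return [1 - g if random.random() < rate else g for g in genome]

# Start with a random population of 50 genomes
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for generation in range(100):
    # Selection: keep the fitter half, discard the rest
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # Reproduction with mutation refills the population
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]

print(max(fitness(g) for g in population))  # typically close to 20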


>
> >To believe in zombies is to
> > believe in the rationality of these consequences.
>
> You're not seeing what I'm pointing out. I understand where you're
> coming from and once held the same view that you have.


I used to have the same view you had as well, that computers could not be
conscious.  Somewhere along the way, through a philosophy of mind course,
studying computer science, seeing the movie "The Prestige", and much thought
and research on the matter, my opinion changed.  I think it was mainly in
seeing how dualism and epiphenomenalism fail to explain anything, and then
understanding how the universality of Turing machines, together with
functionalism, implies that any process, including that of the mind, could
be simulated.
Then my belief in the logical impossibility of zombies led to the belief
that such a process would necessarily be conscious.  I think since then I
have gone further, and developed a theory of how qualia result from such
processes.


> If we look at
> what really is going on in our own experience though, rather than
> trying to make sense out of it using only linear logic, we can see
> that there is more to being a person than can be represented
> symbolically. There is no substitute for the ontology of experience
> (repeat 100,000 times).
>

It depends on what you are repeating.  Consider, in your brain:
Step 1: Update position of each of the ~10^30 particles in your body
according to the field forces
Step 2: Repeat
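
As a rough sketch of what that repetition looks like in code (a toy
Euler-integration loop with a made-up particle count, time step, and force
law, not an actual brain simulation), consider:

import numpy as np

# Toy stand-in: a thousand particles instead of ~10^30, arbitrary units
num_particles = 1000
dt = 1e-3  # arbitrary time step

positions = np.random.rand(num_particles, 3)
velocities = np.zeros((num_particles, 3))

def field_forces(pos):
    # Placeholder force law, for illustration only: pull toward the origin
    return -pos

for step in range(10_000):              # Step 2: repeat
    # Step 1: update every particle according to the field forces
    velocities += field_forces(positions) * dt
    positions += velocities * dt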

Jason
