On 19 Jan 2012, at 03:56, Jason Resch wrote:



On Tue, Jan 17, 2012 at 2:20 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
On Jan 17, 12:51 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Mon, Jan 16, 2012 at 10:29 PM, Craig Weinberg <whatsons...@gmail.com>wrote:

>
> > That's what I'm saying though. A Turing machine cannot be built in
> > liquid, gas, or vacuum. It is a logic of solid objects only. That
> > means its repertoire is not infinite, since it can't simulate a
> > Turing machine that is not made of some simulated solidity.
>
> Well you're asking for something impossible, not something impossible to
> simulate, but something that is logically impossible.

We can simulate logical impossibilities graphically though (Escher,
etc). My point is that a Turing machine is not even truly universal,
let alone infinite. It's an object oriented syntax that is limited to
particular kinds of functions, none of which include biological
awareness (which might make sense since biology is almost entirely
fluid-solution based.)

But it's not entirely free of solids. You can build a computer out of mostly fluids and solutions too.

I agree. Even with gas in some volume.




>
> Also, something can be infinite without encompassing everything. A line
> can be infinite in length without every point in existence having to lie on
> that line.

If that's what you meant though, it's not saying much of anything
about the repertoire. A player piano has an infinite repertoire too.
So what?

A piano cannot tell you how any finite process will evolve over time.

Yes. Craig argues that machines cannot think by pointing at his fridge.





>
> > > To date, there is nothing we
> > > (individually or as a race) have accomplished that could not in principle
> > > also be accomplished by an appropriately programmed Turing machine.
>
> > Even if that were true, no Turing machine has ever known what it has
> > accomplished,
>
> Assuming you and I aren't Turing machines.

It would be begging the question otherwise.

All known biological processes are Turing emulable.

Yes.




>
> > so in principle nothing can ever be accomplished by a
> > Turing machine independently of our perception.
>
> Do asteroids and planets exist "out there" even if no one perceives them?

They don't need humans to perceive them to exist, but my view is that
gravity is evidence that all physical objects perceive each other. Not
in a biological sense of feeling, seeing, or knowing, but in the most
primitive forms of collision detection, accumulation, attraction to
mass, etc.

If atoms can perceive gravitational forces, why can't computers perceive their inputs?

Indeed.




>
> > What is an
> > 'accomplishment' in computational terms?
>
> I don't know.
>
>
>
> > > > You can't build it out of uncontrollable living organisms.
> > > > There are physical constraints even on what can function as a simple
> > > > AND gate. It has no existence in a vacuum or a liquid or gas.
>
> > > > Just as basic logic functions are impossible under those ordinary
> > > > physically disorganized conditions, it may be the case that awareness
> > > > can only develop by itself under the opposite conditions. It needs a
> > > > variety of solids, liquids, and gases - very specific ones. It's not
> > > > Legos. It's alive. This means that consciousness may not be a concept
> > > > at all - not generalizable in any way. Consciousness is the opposite,
> > > > it is a specific enactment of particular events and materials. A brain
> > > > can only show us that a person is alive, but not who that person is.
> > > > The who cannot be simulated because it is an unrepeatable event in the
> > > > cosmos. A computer is not a single event. It is parts which have been
> > > > assembled together. It did not replicate itself from a single living
> > > > cell.
>
> > > > > > You can't make a machine that acts like a person without
> > > > > > it becoming a person automatically. That clearly is ridiculous to
> > > > > > me.
>
> > > > > What do you think about Strong AI, do you think it is possible?
>
> > > > The whole concept is a category error.
>
> > > Let me use a more limited example of Strong AI. Do you think there is
> > > any existing or past human profession that an appropriately built android
> > > (which is driven by a computer and a program) could not excel at?
>
> > Artist, musician, therapist, actor, talk show host, teacher,
> > caregiver, parent, comedian, diplomat, clothing designer, director,
> > movie critic, author, etc.
>
> What do you base this on? What is it about being a machine that precludes
> them from fulfilling any of these roles?

Machines have no feeling. These kinds of careers rely on sensitivity
to human feeling and meaning. They require that you care about things
that humans care about. Caring cannot be programmed.

A program model of a psychologist's biology will tell you exactly what the psychologist will do and say in any situation.

But Craig might be right. Caring, like many things, can be Turing emulable yet not programmable. If artificial machines evolve, they might indeed not care about what humans care about. Especially if we dismiss them as strangers, foreigners, or slaves.



That is the
opposite of caring, because programming requires no investment by the
programmed. There is no subject in a program, only an object
programmed to behave in a way that seems like it could be a subject in
some ways.

If there is no subject in the emulation of the psychologist's biology, then it is a zombie. The evolution of the program can be used to drive the servos and motors in the android, and it will behave indistinguishably.



>
> Also, although their abilities are limited, the below examples certainly
> show that computers are making inroads along many of these lines of work,
> and will only improve over time as computers become more powerful.

Many professions would be much better performed by a computer. Human
oversight might be desirable for something like surgery, but I would
probably go with the computer over a human surgeon.

>
> Artist and Musician: Computer generated music has been around since at
> least the 60s: http://www.youtube.com/watch?v=X4Neivqp2K4

Yep, 47 years since then and still no improvement whatsoever. Based on
that I think we cannot assume that computer generated music will
improve significantly over time as computers become more powerful.
They can just make music that sounds more realistic but is just as bad.

Until computers have greater power than the parts of the human brain involved in these skills, and we understand those mechanisms (or reverse engineer/copy them) AI will lag behind human ability.

Perhaps. It is also possible that we will transform ourselves into computers more quickly than hand-made computers will evolve enough to surpass humans.

When you look at how education investment and quality have decreased over the last 50 years, it looks like today's machines might already be cleverer than our future kids. The singularity point might get closer not through machines evolving, but through humans regressing.




> Therapist: ELIZA, the computer psychologist has been around since 1964: http://nlp-addiction.com/eliza/

Again, no improvement in almost 50 years. Does anyone use ELIZA for
psychology? No. It's utterly useless except as a novelty and
linguistics demonstration.
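
For what it's worth, ELIZA's whole mechanism is keyword-spotting plus canned reflection, which a few lines can sketch (the rules below are illustrative, not Weizenbaum's actual DOCTOR script):

```python
import re

# Illustrative ELIZA-style rules: match a keyword pattern, then echo a
# fragment of the user's input back inside a canned template.
RULES = [
    (r"\bi am (.*)", "How long have you been {0}?"),
    (r"\bi feel (.*)", "Why do you feel {0}?"),
    (r".*", "Please tell me more."),  # catch-all fallback
]

def respond(text):
    # Return the template of the first rule whose pattern matches.
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(respond("I am sad about computers"))
# -> How long have you been sad about computers?
```

No understanding is involved anywhere: the program never models what "sad" means, it only shuffles the user's own words.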

> Teacher: http://en.wikipedia.org/wiki/Rosetta_Stone_%28software%29

It's not a teacher, it's a computer assisted learning regimen. An
exercise machine is not the same thing as a personal trainer or a
coach.

> Caregiver: The Japanese are actively researching and developing caregiving
> robots to take care of their aging population:
> http://web-japan.org/trends/09_sci-tech/sci100225.html

That doesn't mean that they will excel at being caregivers.

> Comedian: "What kind of murderer has moral fiber?" — "A cereal killer."
> This joke was written by a computer. (http://www.newscientist.com/article/dn1719)
> Movie Critic: http://www.netflixprize.com/

Again, generating a sophomoric pun (in a sea of garbage jokes) is not
the same thing as 'excelling at being a comedian.' All of these
examples reveal the utter failure of computation to get past square
one in any of these areas. It is obvious to me that the failure is
rooted in precisely the failure of computation to simulate awareness
beyond a trivial form of sophistication. Limited capacities for
simulating trivial music, conversation, humor, compassion are
radically overestimated, even though there has been no sign of
progress at all since the beginning of computing.

>
> > >  Could
> > > there be a successful android surgeon, computer programmer, psychologist,
> > > lawyer, etc.
>
> > I would say there could be very successful android surgeons, less so
> > computer programmers and lawyers because there is an element of
> > creativity there,
>
> Computers have demonstrated creativity:http://www.mendeley.com/research/automated-design-previously-patented ...
>

link doesn't come up.

Sorry, it was incomplete: 
http://www.mendeley.com/research/automated-design-previously-patented-aspherical-optical-lens-system-means-genetic-programming/


> > and not so much for a psychologist, because the job
> > requires the understanding of feeling, which is not possible for a
> > computer executed in material that cannot feel like an animal feels.
>
> But a computer program will have the same output (outwardly visible
> behavior) regardless of its substrate. Clearly the material on which the
> Turing machine is executed cannot have any effect on its performance.

If that were the case then a Turing machine should be executable as a
truck load of live hamsters or a dense layer of fog.

It has to be a Turing machine before it gains all the powers of every other Turing machine. Not everything is a Turing machine, but any Turing machine, regardless of its substrate, is equally capable.

Yes.
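
A toy illustration of that substrate indifference: a Turing machine is exhausted by its transition table, so anything that realizes the table (silicon, relays, fluidics) computes the same function. A minimal sketch, using a made-up bit-flipping machine:

```python
# Minimal Turing machine interpreter (illustrative sketch).
# rules maps (state, symbol) -> (symbol to write, head move, next state).
def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    cells = dict(enumerate(tape))  # tape as a sparse dict; "_" is blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "0110"))  # -> 1001_
```

Nothing in the table refers to what the machine is made of; the same dictionary run on any correct interpreter yields the same tape.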




The fact that it
cannot work that way is evidence that the material does relate to the
ability of a Turing machine to perform even basic functions.

Art, music, comedy, compassion, etc are not 'output'.

Neither are the nerve impulses from your spinal cord art, music, comedy, compassion, etc. But the output can be used to control a body or other mechanism to express any and all of those things.

They are
experiences which can be shared. A Turing machine can't experience
anything by itself, it is only the substrate that experiences.



> If a
> Turing machine run on carbon makes a better psychologist, then that same
> program executed on a silicon Turing machine will be just as successful.

The machine exploits the common sense of object oriented substrates.
It doesn't matter whether it runs on silicon or boron or gadolinium,
because any sufficiently polite solid material will do. None of them
make a good psychologist. For that you need something that neurons run
on themselves.

You need to believe a psychologist is capable of hyper-computation for this position to be consistent.

That's what we have been telling Craig since the beginning.





>
> > Until silicon can feel proud and ashamed, it won't be any good at
> > psychology.
>
> Unless there is something about psychologists that is infinite, then there
> is no externally visible behavior a psychologist is capable of that the
> android controlled by a Turing machine could not also do.

A keyboard can be programmed to type any sentence. Does that mean it
is Shakespeare? A Turing machine can only impersonate intelligence
trivially,

Again different human skills require different levels of computational resources. Do you think Deep Blue can only play chess at a trivial level?

it can't embody it authentically.

If something behaves intelligently it is intelligent.

Yes. Note that something intelligent does not necessarily behave intelligently. In fact intelligence is a prerequisite for being stupid. Pebbles are not stupid.



It's not about matching
behaviors, it's about having the sensitivity and feeling to know when
and why the behaviors are appropriate. It's about originating new
behaviors that are significant improvements over previous approaches.

>
>
>
> > > Or do you believe there is some inherent limitation of
> > > computers that would prevent them from being capable in one of these
> > > roles?  If so please provide an example.
>
> > Computers are inherently limited by their material substrate. A
> > mechanism of electronic silicon will never know what it is to feel
> > pain, fear, pleasure, etc. Any role which emphasizes a talent for
> > feeling and understanding would fail to be fulfilled by the promise of
> > disembodied recursive enumeration.
>
> Do you think something have to feel to perfectly act as though it is
> feeling? Actors can pretend to suffer if their role is to be tortured in a
> movie, yet they feel no pain.

They aren't feeling pain at the moment, but they are capable of
experiencing pain, therefore they can fake it with feeling.

An actor's performance comes down to how they move their muscles; there is a limited number of muscles in a human body, and a finite number of ways in which an actor can move them in any performance of finite length. These movements could be replicated, just as convincingly as by any actor, even by a process which has never felt pain.


>  If you are into sci-fi, you should watch the
> recent (not 1970s) Battlestar Galactica series. Among other things, it
> explores a racism against machines who in all respects look, act, and behave
> like humans.

Yeah I have watched a lot of that BSG. I like how the cylons are
monotheistic and humans are pagan. It's a good show. I would agree,
if it were the case that AI robots were indistinguishable from us,
that it would be a valid philosophical issue. My view though is that there
are some good reasons that will never be the case.

If we scanned brain images into computers powerful enough to run them, and they always broke down or failed to function, I would consider that evidence against computationalism. The fact that the current computers of our time have not equaled or surpassed us in every respect is no more evidence against computationalism than it would have been in the 1900s, when no man-made mechanical machine could convincingly behave like a human. We know roughly the computational power of the human brain, and computers of today are somewhere between that of an insect and that of a mouse. Once we reach the computational power of a mouse brain, then in 7 years we will reach the power of a cat brain, and 7 years later the power of a human brain (assuming Moore's law of doubling computational power for the same price each year).
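
The doubling arithmetic above can be checked in a couple of lines; the capacity figures are rough order-of-magnitude assumptions (mouse ~10^12, cat ~10^14, human ~10^16 operations per second), not measurements:

```python
import math

def years_to_reach(current_ops, target_ops, doubling_period_years=1.0):
    """Years until capacity reaches the target, doubling once per period."""
    return math.ceil(math.log2(target_ops / current_ops) * doubling_period_years)

# Hypothetical brain capacities, orders of magnitude only.
MOUSE, CAT, HUMAN = 1e12, 1e14, 1e16

print(years_to_reach(MOUSE, CAT))   # -> 7
print(years_to_reach(CAT, HUMAN))   # -> 7
```

Each 100x gap takes ceil(log2(100)) = 7 doublings, which is where the 7-year figure comes from under annual doubling.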

As the AI horizon
continues to recede infinitely, even in the face of ever faster
hardware and more bloated software, we will continue to have to deal
with actual racism rather than theoretical anthropism.

20 years ago would you have been surprised to learn that a computer would beat the leading Jeopardy champions, or that we would have self-driving cars before flying cars?

Sure. I have been treated as a complete moron for having said that computers would be able to play chess and to do symbolic derivation and integration. The dogma, when I was young, was that computers are just number-crunching machines, capable of doing only numerical calculations and nothing else.



If the cylons
were genetically engineered beings instead, well, that's a different
story entirely. Living creatures matter, programs don't (except to the
living creatures that use them).

Are you afraid to burn coal in your stove out of concern that the material will sense being burned?

Yes. Craig's "theory" is a bit frightening with respect to this. But of course that is not an argument. Craig might accuse you of wishful thinking.

Bruno



http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
