The burden of proof is not on me.  I am replying to your initial claim, that
Elvis will not appear in the context of GoL.

 

Intellectual honesty implies the burden of proof is on you.

 

wrb

 

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Craig Weinberg
Sent: Tuesday, August 28, 2012 2:08 PM
To: everything-list@googlegroups.com
Subject: Re: Two reasons why computers IMHO cannot exhibit intelligence

 

It's intentional hyperbole, not a non-sequitur. I am making the comparison
between a program designed to produce simple patterns of pixels achieving a
trivial level of novelty within that constraint of design and the event of
any such program achieving an authentic transgression of its own
programmatic constraints.

There is no need to prove this claim as it is not a claim, it is a factual
description and a clarification of the implications of that description. If
you are claiming that GoL can produce something other than meaningless
iterations of quantitative pixels, then the burden of proof is on you. Where
is the Elvis?


On Tuesday, August 28, 2012 4:13:22 PM UTC-4, William R. Buckley wrote:

Proof of non-sequitur.  You assert that GoL cannot invent Elvis Presley.
You have no proof of this claim.  You simply claim it.  Further, see 

 

http://en.wikipedia.org/wiki/Non_sequitur_%28logic%29

 

Your relevant statement is:  Conway's game of life can produce a new kind of
glider, but it can't come up with the invention of Elvis Presley, regardless
of how sophisticated the game is. 

 

QED

 

 

 

From: everyth...@googlegroups.com
[mailto:everyth...@googlegroups.com] On Behalf Of Craig
Weinberg
Sent: Tuesday, August 28, 2012 12:45 PM
To: everyth...@googlegroups.com
Subject: Re: Two reasons why computers IMHO cannot exhibit intelligence

 

On Tuesday, August 28, 2012 2:55:54 PM UTC-4, William R. Buckley wrote:

No, it is not ad hominem.  It is a serious issue.

Are they mutually exclusive? Telling someone they have a bad haircut could
be a serious issue too, but it doesn't mean it isn't ad hominem.
 

 

The discussion of COMP is one of essentialism.

 

Your first argument hinges upon a non-sequitur.

I can't defend against an unsupported accusation. All I can do is say, 'no
it doesn't'.

 

Your second argument hinges upon semiotics.  You have no way to
compare your experience (conscious or otherwise) to that of any
other creature; your umwelt is not my umwelt.


Your presumption of my capacities to compare experiences depends on exactly
the same capacity that mine does. If you are right that your umwelt is not
my umwelt, then how do you know that my umwelt doesn't contain yours?

Instead, why not assume a psychic unity of mankind? I don't have to assume
that I can't compare my experience to another creature at all. I can say
that if I step on a cat's tail and it reacts, there is in fact every
reason to assume a comparable dimension of pain. It's sophistry to pretend
that we can't compare our own experience to others...we do it all the time.
Our sanity depends on it. It need not be questioned, as the questioning
itself implies a hyper-reality of sense comparison between umwelts which
would be inaccessible if your proposition were true. You cut off the limb you
are sitting on to try to hit me with it.
 

 

And, vitalism is not necessarily a call to Deity.  There are a great
many non-deist connotations to vitality.


Who said anything about a deity?

Craig
 

 

wrb

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On
Behalf Of Craig Weinberg
Sent: Tuesday, August 28, 2012 10:51 AM
To: everyth...@googlegroups.com
Subject: Re: Two reasons why computers IMHO cannot exhibit intelligence

 

I agree with what Roger is saying here (and have of course expressed that
before often) and do not think that accusations of vitalism add anything to
the issue. It's really nothing but an ad hominem attack.

I would only modify Roger's view in two ways:

1. Programs can and do produce outcomes that are not directly anticipated by
the programmer, but that these outcomes are trivial and do not transcend the
constraints of the program itself. Conway's game of life can produce a new
kind of glider, but it can't come up with the invention of Elvis Presley,
regardless of how sophisticated the game is. Blue cannot be generated by any
combination of black and white or one and zero.
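The Game of Life rules being argued over here are simple enough to state in a few lines. As a minimal sketch (plain Python; the set-of-live-cells representation and the names `step` and `glider` are my own illustration, not anything from the thread), the standard birth-on-3, survive-on-2-or-3 rule and the glider's diagonal drift look like:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; live is a set of (x, y) cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The classic glider: after four generations it reappears intact,
# shifted one cell diagonally -- novelty strictly within the rule set.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# g == {(x + 1, y + 1) for (x, y) in glider}
```

Everything the program will ever produce, gliders included, is fixed by those two lines of rule logic plus the initial pattern, which is the constraint the paragraph above is pointing at.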

2. Hardware does actually feel something, but not necessarily what we would
imagine. We use certain materials for computer chips and not hamsters or
milkshakes because reliable computation requires specific properties. We
only use materials which are subject to absolute control by outside
intervention and behave in an absolutely automatic way to sustain those
introduced controls. Living organisms are very much the opposite of that,
but that doesn't mean that inorganic matter has no experience or proto
experience on its own inertial frame of perception. It might, but we don't
know that. I would give the benefit of the doubt to all matter as having
common physical sense, but that organic chemistry, biology, zoology, and
anthropology present dramatic qualitative breakthroughs in elaboration of
sense.

This is not vitalism. There is no magic juice of life-ness, only a rough
segmentation or diffracted caste relation of participation richness and
significance intensity. A living baby is not the same thing as a spare tire
to us, but it isn't significantly different to a tsunami. Neither the
significance nor the insignificance is an 'illusion', they are just measures
of the relations of the investment of experience across eons and species and
how that investment relates to the participants on every level.

Roger and Searle are correct however in pointing out that the machine has no
stake in the outcome of the program, nor can it. I suggest that there is an
experience there, but likely very primitive - a holding and releasing which
is what we know as electric current within the semiconductors. There is no
actual current, only excited-empowered molecules. There is no program, only
a mirroring of our meticulous transcription of human motive and its
inevitable tautological products. 

Since we are multi-layered, we can become confused when we assume that who
we are must be a monolithic representation of all that we are. If we expect
that the contents of all processes of the psyche should be available to our
verbal-cognitive specialists then we will be disappointed and turn to Libet.
We will mistake the automatism which supports lower levels of what we are
for the quasi-independence of the spectrum of identity which we embody.

Craig


On Tuesday, August 28, 2012 12:13:23 AM UTC-4, William R. Buckley wrote:

Roger:

 

I suggest that at root, you have vitalist sympathies.

 

wrb

 

From: everyth...@googlegroups.com [mailto:everyth...@googlegroups.com] On
Behalf Of Roger Clough
Sent: Monday, August 27, 2012 4:07 AM
To: everything-list
Subject: Two reasons why computers IMHO cannot exhibit intelligence

 

Hi meekerdb 

 

IMHO I don't think that computers can have intelligence
because intelligence consists of at least one ability:
the ability to make autonomous choices (choices completely
of one's own). Computers can do nothing on their own,
they can only do what software and hardware tell them to do.

 

Another, closely related reason is that there must be an agent that does
the choosing, and IMHO the agent has to be separate from the system.
Godel, perhaps, I speculate.

 

 

Roger Clough, rcl...@verizon.net

8/27/2012 

Leibniz would say, "If there's no God, we'd have to invent him so everything
could function."

----- Receiving the following content ----- 

From: meekerdb 

Receiver: everything-list 

Time: 2012-08-26, 14:56:29

Subject: Re: Simple proof that our intelligence transcends that of computers

 

On 8/26/2012 10:25 AM, Bruno Marchal wrote:
>
> On 25 Aug 2012, at 12:35, Jason Resch wrote:
>
>>
>> I agree different implementations of intelligence have different
>> capabilities and roles, but I think computers are general enough to
>> replicate any intelligence (so long as infinities or true randomness
>> are not required).
>
> And now a subtle point. Perhaps.
>
> The point is that computers are general enough to replicate intelligence
> EVEN if infinities and true randomness are required for it.
>
> Imagine that our consciousness requires some ORACLE. For example, in the
> form of some incompressible sequence
> 11101000011101100011111101010110100001... (say)
>
> Being incompressible, that sequence cannot be part of my brain at my
> substitution level, because this would make it impossible for the doctor
> to copy my brain into a finite string. So such a sequence operates
> "outside my brain", and if the doctor copies me at the right comp level,
> he will reconstitute me with the right "interface" to the oracle, so I
> will survive and stay conscious, despite my consciousness depending on
> that oracle.
>
> Will the UD, just alone, or in arithmetic, be able to copy me in front
> of that oracle?
>
> Yes, as the UD dovetails on all programs, but also on all inputs, and in
> this case it will generate me successively (with large delays in
> between) in front of all finite approximations of the oracle, and (key
> point) the first person indeterminacy will have as domain, by definition
> of first person, all the UD computations where my virtual brain uses the
> relevant (for my consciousness) part of the oracle.
>
> A machine can only access finite parts of an oracle in the course of a
> computation requiring an oracle, and so everything is fine.
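The dovetailing Bruno describes can be pictured concretely. Below is a toy sketch of my own (a finite list of generator "programs" standing in for the UD's infinite enumeration, with names like `dovetail` and `squares` chosen for illustration): execution is interleaved diagonally, so no program, halting or not, can starve the others, and every program is advanced unboundedly often.

```python
def dovetail(programs, diagonals):
    """Interleave execution: on diagonal d, give one more step to
    each program i <= d. Every program is visited infinitely often
    in the limit, even though each visit does only finite work."""
    gens = [p() for p in programs]  # generators = suspended computations
    trace = []
    for d in range(diagonals):
        for i in range(min(d + 1, len(gens))):
            trace.append((i, next(gens[i])))
    return trace

def squares():
    n = 0
    while True:        # a deliberately non-halting "program"
        yield n * n
        n += 1

trace = dovetail([squares, squares], 3)
# Steps of the two programs come out interleaved:
# [(0, 0), (0, 1), (1, 0), (0, 4), (1, 1)]
```

The same diagonal trick is what lets a dovetailer also enumerate ever-longer finite prefixes of an infinite input, which is the sense in which only finite approximations of an oracle are ever consulted.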

That's how I imagine COMP instantiates the relation between the physical
world and consciousness; that the physical world acts like the oracle and
provides essential interactions with consciousness as a computational
process. Of course that doesn't require that the physical world be an
oracle - it may be computable too.

Brent

>
> Of course, if we need the whole oracular sequence in one step, then comp
> would be just false, and the brain would need an infinite interface.
>
> The UD dovetails really on all programs, with all possible inputs, even
> infinite non-computable ones.
>
> Bruno
>
> http://iridia.ulb.ac.be/~marchal/

-- 
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to
everything-list+unsub...@googlegroups.com.
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en.





