On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg <whatsons...@gmail.com> wrote:

> On Jan 14, 4:41 pm, Jason Resch <jasonre...@gmail.com> wrote:
> > On Sat, Jan 14, 2012 at 1:38 PM, Craig Weinberg <whatsons...@gmail.com
> >wrote:
> >
> > > Thought I'd throw this out there. If computationalism argues that
> > > zombies can't exist,
> >
> > I think the two ideas "zombies are impossible" and computationalism are
> > independent.  Where you might say they are related is that a disbelief in
> > zombies yields a strong argument for computationalism.
>
> I don't think that it's possible to say that any two ideas 'are'
> independent from each other.


Okay.  Perhaps 'independent' was not an ideal term, but computationalism is
at least not dependent on an argument against zombies, as far as I am aware.


> All ideas can be related through semantic
> association, however distant. As far as your point though, of course I
> see the opposite relation: admitting even the possibility of zombies
> suggests computationalism is founded on illusion, while a disbelief in
> zombies gives no more support for computationalism than it does for
> materialism or panpsychism.
>

If one accepts that zombies are impossible, then to reject computationalism
requires also rejecting the possibility of Strong AI (
https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI ).


>
> >
> > > therefore anything that we cannot distinguish
> > > from a conscious person must be conscious, that also means that it is
> > > impossible to create something that acts like a person which is not a
> > > person. Zombies are not Turing emulable.
> >
> > I think there is a subtle difference in meaning between "it is impossible
> > to create something that acts like a person which is not a person" and
> > saying "Zombies are not Turing emulable".  It is important to remember
> that
> > the non-possibility of zombies doesn't imply a particular person or thing
> > cannot be emulated, rather it means there is a particular consequence of
> > certain Turing emulations which is unavoidable, namely the
> > consciousness/mind/person.
>
> That's true, in the sense that emulable can only refer to a specific
> natural and real process being emulated rather than a fictional one.
> You have a valid point that the word emulable isn't the best term, but
> it's a red herring since the point I was making is that it would not
> be possible to avoid creating sentience in any sufficiently
> sophisticated cartoon, sculpture, or graphic representation of a
> person. Call it emulation, simulation, synthesis, whatever, the result
> is the same.


I think you and I have different mental models for what is entailed by
"emulation, simulation, synthesis".  Cartoons, sculptures, recordings,
projections, and so on, don't necessarily compute anything (or at least,
what they might depict as being computed can have little or no relation to
what is actually computed by said cartoon, sculpture, recording, or
projection).  For actual computation you need counterfactual conditions.
A cartoon depicting an AND gate is not required to behave as a genuine AND
gate would, and flashing a few frames depicting what such an AND gate might
do is not equivalent to the logical decision of an AND gate.
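
To make the counterfactual point concrete, here is a minimal Python sketch
(just an illustration): a genuine AND gate computes an output for whatever
inputs it receives, while a replayed "cartoon" of the gate can only
reproduce the frames it was drawn with.

def and_gate(a, b):
    # A real AND gate: the output depends counterfactually on the inputs.
    # Change either input and the output changes accordingly.
    return a and b

def cartoon_and_gate(recorded_frames):
    # A "cartoon" of the gate: it merely replays a fixed sequence of
    # depicted outputs, with no dependence on any actual input.
    for frame in recorded_frames:
        yield frame

# The gate answers correctly for every possible input, including inputs
# never shown in any recording:
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", and_gate(a, b))

# The cartoon only reproduces what was drawn; it cannot respond to
# circumstances that were never depicted.
for frame in cartoon_and_gate([False, False, False, True]):
    print("frame:", frame)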


> You can't make a machine that acts like a person without
> it becoming a person automatically. That clearly is ridiculous to me.
>

What do you think about Strong AI?  Do you think it is possible?  If so,
and if the program that creates a Strong AI were implemented on various
computational substrates (silicon, carbon nanotubes, pen and paper, pipes
and water), do you think any of them would yield a mind that is conscious?
If yes, do you think the content of that AI's consciousness would differ
depending on the substrate?  And finally, if you believe at least some
substrates would be conscious, are there any cases where the AI would
respond or behave differently on one substrate or another (in terms of the
Strong AI program's output) when given equivalent input?


>
> >
> >
> >
> > > If we run the zombie argument backwards then, at what substitution
> > > level of zombiehood does a (completely possible) simulated person
> > > become an (non-Turing emulable) unconscious puppet? How bad of a
> > > simulation does it have to be before becoming an impossible zombie?
> >
> > > This to me reveals an absurdity of arithmetic realism. Pinocchio the
> > > boy is possible to simulate mechanically, but Pinocchio the puppet is
> > > impossible. Doesn't that strike anyone else as an obvious deal breaker?
> >
> > Not every Turing emulable process is necessarily conscious.
>
> Why not? What makes them unconscious?


My guess would be a lack of sophistication.  For example, one program
might simply consist of a for loop iterating from 1 to 10.  Is this
program conscious?  I don't know, but it almost certainly isn't conscious
in the way you or I are.
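
Concretely, the sort of trivial program I have in mind is nothing more
than this (a Python rendering of the loop just described):

# A bare loop counting from 1 to 10 and doing nothing else.
for i in range(1, 11):
    print(i)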


> You can't draw the line in one
> direction but not the other. If you say that anything that seems to
> act alive well enough must be alive, then you also have to say that
> anything that does not seem conscious may just be poorly programmed.
>
>
When you talk about changing substitution levels, you are talking about
different programs.  Some levels may be so high-level that the important
and necessary aspects are eliminated and replaced with functions which
fundamentally alter the experience of the simulated mind.  Whether or not
this would be noticed depends on the sophistication of the Turing test.
Examination of outward appearance may not even be sufficient.  I think Ned
Block made an argument against such tests: you could have a giant state
table, effectively infinite in size, which stores an output for every
possible question.  Such a program might pass a Turing test, but internally
it is performing only a very trivial computation.  If we inspected the code
of this program we could say it has no understanding of individual words,
no complex thought processes, etc.  However, most zombies are defined to be
functionally (if not physically) identical rather than merely capable of
passing some limited test based on external appearances.
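
As a rough sketch of the kind of program Block describes (a toy stand-in,
since the real table would have to cover every possible conversation
history), each reply is just a stored entry retrieved by a single lookup:

# Toy lookup-table responder in the spirit of Block's argument.  Every
# answer is pre-stored and keyed on the conversation so far, so "replying"
# is one dictionary lookup rather than any reasoning about the words.
lookup_table = {
    ("Hello",): "Hi there.",
    ("Hello", "Hi there.", "Are you conscious?"): "Of course I am.",
}

def blockhead_reply(history):
    # The only computation performed is retrieving the pre-stored answer.
    return lookup_table.get(tuple(history), "No stored answer for that.")

history = ["Hello"]
reply = blockhead_reply(history)
print(reply)                      # "Hi there."
history += [reply, "Are you conscious?"]
print(blockhead_reply(history))   # "Of course I am."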

Jason
