On 28 March 2015 at 21:20, "meekerdb" <meeke...@verizon.net> wrote:
>
> On 3/28/2015 2:06 AM, Quentin Anciaux wrote:
>>
>> On 28 March 2015 at 00:50, "meekerdb" <meeke...@verizon.net> wrote:
>> >
>> > On 3/27/2015 3:21 PM, Quentin Anciaux wrote:
>> >>
>> >>
>> >> On 27 March 2015 at 23:09, "meekerdb" <meeke...@verizon.net> wrote:
>> >> >
>> >> > On 3/27/2015 4:06 AM, Quentin Anciaux wrote:
>> >> >>
>> >> >>
>> >> >>
>> >> >> 2015-03-27 11:44 GMT+01:00 LizR <lizj...@gmail.com>:
>> >> >>>
>> >> >>> On 27 March 2015 at 23:24, Quentin Anciaux <allco...@gmail.com>
>> >> >>> wrote:
>> >> >>>>
>> >> >>>> 2015-03-27 10:12 GMT+01:00 LizR <lizj...@gmail.com>:
>> >> >>>>>
>> >> >>>>> On 27 March 2015 at 19:28, Quentin Anciaux <allco...@gmail.com>
>> >> >>>>> wrote:
>> >> >>>>>>
>> >> >>>>>> The ab absurdo shows that computationalism is incompatible with
>> >> >>>>>> physical supervenience, not that it is true.
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> Yes, sorry, "reject" was a poor choice of words. I meant argue
>> >> >>>>> from the comp position rather than the materialist one, and know
>> >> >>>>> what I'm talking about.
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>> In the end you are forced to accept that consciousness must
>> >> >>>>>> supervene on the movie + broken gates... If you believe that,
>> >> >>>>>> then you've abandoned computationalism as a theory of mind,
>> >> >>>>>> since the movie + broken gates is not a computation... Or you
>> >> >>>>>> can keep computationalism and abandon physical
>> >> >>>>>> supervenience... QED
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> Yes, I realise that. The same applies to Maudlin. All I wanted
>> >> >>>>> to know at the moment was how the contradiction arises in the
>> >> >>>>> MGA.
>> >> >>>>>
>> >> >>>> It seems to me that's what I explained...
>> >> >>>
>> >> >>>
>> >> >>> I'm sure it does. As I said, I can't quite get my head around it,
>> >> >>> so it's unlikely a quick overview is going to help me do so. (After
>> >> >>> all, I couldn't follow Bruno's explanation, which involved smoke
>> >> >>> and mirrors, or something similar.) Maybe I'm just the wrong type
>> >> >>> of geek to be able to grok this argument, but I keep trying.
>> >> >>>
>> >> >>>>
>> >> >>>> It arises because under computationalism, it is assumed that
>> >> >>>> consciousness is supported by a computation... Under
>> >> >>>> computationalism + physical supervenience, it is assumed that the
>> >> >>>> computation is eventually supported by physical activity, and
>> >> >>>> eventually this leads to attributing consciousness to the record,
>> >> >>>> which is not a computation, contradicting the assumption of
>> >> >>>> computationalism...
>> >> >>>>
>> >> >>> Yes, I can see that if you are led to attribute consciousness to
>> >> >>> a record then that will contradict the original assumption. But I
>> >> >>> haven't yet been able to see how the MGA leads to attributing
>> >> >>> consciousness to a record. I'm sure it does show that, but for me
>> >> >>> it doesn't quite click. Maybe I'm doomed to never get an intuitive
>> >> >>> grasp of the argument.
>> >> >
>> >> >
>> >> >>
>> >> >> 1- It is assumed you have a machine/program that is conscious
>> >> >> (a real conscious AI).
>> >> >> 2- You have (for example) a conversation with it.
>> >> >> 3- During that conversation, you record all inputs fed to the
>> >> >> machine.
>> >> >> 4- You replay those inputs to the machine.
>> >> >> 5- Assuming in 3 the machine was conscious, replaying the same
>> >> >> inputs, the machine should still be conscious.
>> >> >> 6- You remove from the machine all the transistors not in use
>> >> >> during that particular run (given the recorded inputs).
>> >> >> 7- You replay those inputs to the ("crippled") machine.
>> >> >> 8- Assuming in 3 and 5 the machine was conscious, replaying the
>> >> >> same inputs, the machine should still be conscious as in 5 (because
>> >> >> what you removed wasn't in use anyway).
>> >> >> 9- You break one transistor, but you make a device (in the MGA
>> >> >> it's the projection of the record on the graph) that mimics the
>> >> >> output (even though the transistor is broken) at the exact moment
>> >> >> it would have happened if the transistor weren't broken (like the
>> >> >> lucky cosmic ray replacing the firing of a neuron).
>> >> >> 10- Assuming in 3, 5 and 8 the machine was conscious, replaying the
>> >> >> same inputs, the machine should still be conscious, as the broken
>> >> >> transistor, while not working, nonetheless gave the correct output
>> >> >> thanks to the lucky ray/device/movie projection.
>> >> >> 11- You do 9 for all the transistors, so as to leave only the
>> >> >> mimic...
>> >> >> 12- Assuming in 3, 5, 8 and 10 the machine was conscious, then the
>> >> >> machine is still conscious while no computation occurs anymore...
>> >> >> contradicting computationalism.
>> >> >>
>> >> >> From that, either computationalism is false or physical
>> >> >> supervenience is false.
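
To make the record-vs-computation distinction concrete, here is a minimal
sketch in Python. The two-gate "machine", its inputs and all the names are
hypothetical toys, not Bruno's actual filmed graph: we run the machine while
filming every gate output (step 3), then replace the gates by pure playback
of the film (steps 9-11).

```python
# Minimal sketch of the record/replay move in steps 3-12 (hypothetical toy).

def gate_and(a, b):
    return a & b  # a working gate: its output is computed

def gate_or(a, b):
    return a | b

def run_and_record(inputs):
    """Steps 2-3: run the two-gate machine and film every gate's output."""
    outputs, movie = [], []
    for a, b in inputs:
        x = gate_and(a, b)    # computation happens here
        y = gate_or(x, b)
        movie.append((x, y))  # the filmed physical activity
        outputs.append(y)
    return outputs, movie

def replay_movie(movie):
    """Steps 9-11: every gate is broken; the film supplies each output at
    the moment it would have occurred. No gate computes anything."""
    return [y for _, y in movie]

inputs = [(1, 0), (1, 1), (0, 1)]
live_outputs, movie = run_and_record(inputs)
assert replay_movie(movie) == live_outputs  # step 12: same outputs, no computation
```

The replay is indistinguishable from the live run, output for output, yet
nothing is computed any more; that is the step-12 contradiction.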
>> >> >
>> >> >
>> >> >
>> >> > A good outline, but it doesn't address the question of
>> >> > counterfactual correctness.  After step 6 the machine can no longer
>> >> > respond correctly to a different input - it, and whatever computation
>> >> > it does, is no longer counterfactually correct.
>> >>
>> >> Assuming the non-active parts are needed (negating the move of step 6)
>> >> basically means physical supervenience is false.
>> >
>> >
>> > ?? Of course the non-active parts are needed for different inputs -
>> > otherwise they're not needed for anything and need not be part of the
>> > AI.
>>
>> Not for that particular run... If you say they are needed for that run,
>> then you're saying physical supervenience is false.
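
That point is easy to see in a toy sketch (again hypothetical Python, with
one unused branch standing in for all the removed transistors): on the
recorded run the pruned machine behaves identically to the full one, and
only a counterfactual input reveals what was removed.

```python
# Step 6 in miniature: prune the parts unused on the recorded run (toy example).

def full_machine(a, b):
    if a:
        return b    # the only branch exercised by the recorded inputs
    return 1 - b    # counterfactual branch, never used on that run

def pruned_machine(a, b):
    if a:
        return b
    raise RuntimeError("that circuitry was removed in step 6")

recorded_inputs = [(1, 0), (1, 1)]  # every recorded input happens to have a == 1
for a, b in recorded_inputs:
    # For that particular run, full and pruned machines are indistinguishable.
    assert full_machine(a, b) == pruned_machine(a, b)

# Only a different (counterfactual) input exposes the pruning:
# full_machine(0, 1) == 0, while pruned_machine(0, 1) raises RuntimeError.
```

Both behaviours above are the point of contention: the recorded run is
identical, but the counterfactual structure is not.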
>>
>> >
>> >
>> >> > Of course you can expand the AI to include so much of the world
>> >> > that there are effectively no inputs; which is the same as saying it
>> >> > computes the outputs for all possible inputs.  But then it has become
>> >> > a Matrix-type world unto itself.
>> >>
>> >> Here you're talking about the level at which the emulation occurs...
>> >> Either way, the conclusion is valid for any finite level, whatever it
>> >> is.
>> >
>> >
>> > My point is that if the level has to be very large, e.g. the whole
>> > universe, then no reversal has been achieved;
>>
>> It's dubious that it would have to be so large... Meaning is internal to
>> the conscious program; it is the program itself that internalizes and
>> records its inputs... Inputs in the end are only numbers; what they mean
>> comes from the program itself (if it's conscious, as the "what it means"
>> is the 1st person POV).
>
>
> But that seems to rely on something (a soul) in addition to the numbers
> that is conscious and supplies the meaning.

No, it's like current image-recognition programs: the "meaning" built by
running the neural-network algorithm is internal to the program. There is
no ghost in the machine. We're talking about programs; they have
interfaces for inputs and that's all. If you see it as a tape, like the
Turing machine's, the inputs are on the tape alongside the program... In
the end it's only numbers, so the meaning must be in them, if
computationalism is true, by virtue of computation alone... There is in
the end no world external to computations to add any meaning.
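
A sketch of "inputs on the tape alongside the program": the numeric opcodes
and the encoding below are invented for illustration (a toy machine, not a
real Turing machine formalism), but they show that nothing outside the tape
marks which numbers are program and which are input.

```python
# Toy illustration: program and input live on one tape of plain numbers.
# ADD, PRINT, HALT and the cell layout are made-up conventions.

ADD, PRINT, HALT = 0, 1, 2          # arbitrary numeric opcodes

# One tape: program first, then its input data; a single list of integers.
tape = [ADD, 5, 6,                  # program: add the numbers at cells 5 and 6
        PRINT,                      # ...then print the result
        HALT,
        40, 2]                      # "inputs": just more numbers on the same tape

def run(tape):
    pc, acc = 0, 0
    while True:
        op = tape[pc]
        if op == ADD:
            acc = tape[tape[pc + 1]] + tape[tape[pc + 2]]
            pc += 3
        elif op == PRINT:
            print(acc)              # prints 42
            pc += 1
        elif op == HALT:
            return

run(tape)
```

The "meaning" of 40 and 2 as summands comes only from how the program
relates to them; read as opcodes, the same cells would mean something else
entirely.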

Quentin

> Bruno's idea is that the "meaning" is just a relation to other numbers.
> But what other numbers?  I think it's the other numbers that would, from
> an external viewpoint, be a simulation of the world.  Some numbers in the
> conscious program would refer to the Moon in the sense that they were
> related to a lot of other numbers that represented the Moon.
>
> Brent
>
