On 25 March 2015 at 00:11, "meekerdb" <[email protected]> wrote:
>
> On 3/24/2015 2:23 AM, Quentin Anciaux wrote:
>>
>>
>>
>> 2015-03-24 1:57 GMT+01:00 meekerdb <[email protected]>:
>>>
>>> On 3/23/2015 5:44 PM, LizR wrote:
>>>>
>>>> On 24 March 2015 at 13:07, meekerdb <[email protected]> wrote:
>>>>>
>>>>> Yes, as I understand it that's the argument.  It's consistent with
Platonism.  A computer program's execution written out on paper is just as
much a calculation as a lot of transistors switching.
>>>>
>>>>
>>>> So is the idea to show that a recording is just as conscious as the
original calculation?
>>>>>
>>>>>
>>>>> My caveat is that neither of them is conscious in THIS world because
being conscious requires being conscious OF something.  An isolated, pure
consciousness is an oxymoron.  Consciousness only exists as part of
thoughts and thoughts only have meaning by reference to an external world
and potential action in that world.
>>>>
>>>>
>>>> I am under the impression Bruno gets around that by potentially
allowing the environment to be simulated as well. Or contrariwise, can't
all the inputs to the consciousness be provided as though it were in the
world? (as for a brain in a vat, for example. I mean hypothetically, and to
simplify the argument, not as a general model of consciousness.)
>>>
>>>
>>> Yes, he casually dismisses the objection by saying we'll just include
the environment too.  But that's my point that it's then no longer a new
radical result.  It's just saying that if you simulate a world it can
include conscious beings who are conscious of that world.  But IN THAT
WORLD their substrate is not inert - even if it's inert in our world, e.g.
consider the novel "Moby Dick" being simulated in a computer.  To Ishmael
and Ahab in the computer they'd be conscious and experiencing the hunt for
the white whale.  And, according to Platonists, they are as printed on the
page too.
>>>
>>
>> If the world is a computation, conscious parts of it are subprograms
that can be isolated by definition...
>
>
> That's the point I disagree with.

If it's a program then you've no choice.

> When Bruno starts the comp argument by asking if you would say "Yes" to
the doctor, it is implicit that the doctor is going to replace some part or
all of your brain, BUT it's going to remain within the same environmental
context.

Yes... But that context could also be simulated... In the end, everything
the conscious program can know, it knows through an interface...

> I think the "consciousness subprogram" can run without the context, but I
think it gets its meaning, what it's about, from the context

The context is internal to the conscious subprogram, as it is the
subprogram, by definition, that gives meaning. The 'external' world is
only inputs received through the subprogram's interface, no more.
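
Quentin's interface point can be sketched in code. This is a hypothetical
illustration (the names RealWorld, SimulatedWorld, and Subprogram are mine,
not anything from the argument): a subprogram whose only access to its
world is an interface cannot, from the inside, tell where the inputs
originate.

```python
# Hypothetical sketch: a subprogram only ever sees inputs through its
# interface, so the provenance of those inputs (a "real" world versus a
# simulation) is invisible to it.

class RealWorld:
    def read(self):
        return "photon pattern 42"


class SimulatedWorld:
    def read(self):
        # Same bytes delivered at the interface, different provenance.
        return "photon pattern 42"


class Subprogram:
    """Everything it can know arrives via source.read()."""

    def __init__(self, source):
        self.source = source

    def perceive(self):
        return self.source.read()


# From inside the subprogram, the two runs are indistinguishable:
assert Subprogram(RealWorld()).perceive() == Subprogram(SimulatedWorld()).perceive()
```

As long as the interface delivers identical values, nothing available to
the subprogram distinguishes the two sources.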

> - and I think that context has to be very broad, including evolutionary
history, for example.
>
>
>> Now, that when they run, for their consciousness to have meaning they
must be fed inputs that have meaning to the conscious subprogram, is a
tautology...
>>
>> Also, the MGA *never* asserts that the simulated consciousness is
conscious of *our* world
>
>
> It's implied by his Alice discussion.

When rerunning the program with the recorded initial inputs, by hypothesis
the second run must be as conscious as the first, whose inputs came from
the 'real' external world... The program itself can't tell, as it receives
exactly the same inputs... not merely similar inputs but *exactly* the
same. So either the second run is as conscious as the first, or neither is.
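
The replay point rests on a simple property of deterministic programs,
which can be sketched as record/replay. The function below is a
hypothetical stand-in for "a program thought to instantiate a conscious
moment"; the point is only that its whole execution trace is fixed by its
inputs, so feeding it the *exact* recorded inputs reproduces the trace
step for step.

```python
import random


def run(inputs):
    """A deterministic program: its execution trace is fixed by its inputs."""
    trace = []
    state = 0
    for x in inputs:
        state = (state * 31 + x) % 1000
        trace.append(state)
    return trace


# First run: inputs arrive from an "external world" (here, a random source).
recorded = [random.randrange(100) for _ in range(10)]
first = run(recorded)

# Second run: replay the *exact* recorded inputs -- not similar values,
# the very same ones. The trace is necessarily identical.
second = run(recorded)
assert first == second
```

Nothing inside `run` can distinguish the live run from the replay, which
is the sense in which the program "can't tell".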

>If the computation were just some arbitrary program we would have no
reason to think it instantiated consciousness.

I never said it's an arbitrary program; I said it's a program thought to
instantiate a conscious moment... However you determine that it is a
conscious moment in the first place is irrelevant to the argument.

>  We only think that because it is a record of a conscious computation in
our world.
>
>
>> (as it is obvious it can't be, since it isn't fed inputs from our
world)... it only assumes that you're running a program that is thought to
be conscious (simulating a conscious being), and shows that if you accept
that, and you accept the supervenience thesis, and so accept that it is
conscious in virtue of running on bare matter, then you have to accept
that the same stream of consciousness supervenes on the projection +
broken gate.
>
>
> But I'm not accepting the supervenience thesis as applied to an isolated
sequence of states.  Without the context (which is implicit in the
counterfactuals) the same sequence of computations could correspond to two
different meanings, two different conscious thoughts - just as the same set
of differential equations can model two different physical systems.
>
> I'm not sure how this plays into the UD because there are infinitely
many threads of computation through the same state.  The state cannot, by
itself, instantiate a thought.  A thought must require a long sequence of
identical or similar states.  But in the UD there are no counterfactuals,
because every possibility occurs at some point and branches from the
thread.  At least that's how I understand it.
>
> Brent
>
> --
> You received this message because you are subscribed to the Google Groups
"Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
