On Fri, Aug 15, 2014 at 11:09 PM, meekerdb <[email protected]> wrote:

>  On 8/15/2014 5:30 PM, Jesse Mazer wrote:
>
>
>
> On Fri, Aug 15, 2014 at 1:27 AM, Russell Standish <[email protected]>
> wrote:
>
>> On Thu, Aug 14, 2014 at 09:41:00PM -0700, meekerdb wrote:
>> > On 8/14/2014 8:32 PM, Russell Standish wrote:
>> > >On Thu, Aug 14, 2014 at 08:12:30PM -0700, meekerdb wrote:
>> > >>That does seem strange, but I don't know that it strikes me as
>> > >>*absurd*.  Isn't it clearer that a recording is not a computation?
>> > >>And so if consciousness supervened on a recording it would prove
>> > >>that consciousness did not require computation?
>> > >>
>> > >To be precise "supervening on the playback of a recording". Playback
>> > >of a recording _is_ a computation too, just a rather simple one.
>> > >
>> > >In other words:
>> > >
>> > >#include <stdio.h>
>> > >int main()
>> > >{
>> > >   printf("hello world!\n");
>> > >   return 0;
>> > >}
>> > >
>> > >is very much a computer program (and a playback of a recording of the
>> > >words "hello world" when run). I could change "hello world" to the
>> contents of
>> > >Wikipedia, to illustrate the point more forcibly.
>> > OK.  So do you think consciousness supervenes on such a simple
>> > computation - one that's functionally identical with a recording? Or
>> > does instantiating consciousness require some degree of complexity
>> > such that CC comes into play?
>> >
>>
>>  My opinion on whether the recording is conscious or not ain't worth a
>> penny.
>>
>> Nevertheless, the definition of computational supervenience requires
>> counterfactual correctness in the class of programs being supervened
>> on.
>>
>> AFAICT, the main motivation for that is to prevent recordings being
>> conscious.
>
>
>  I think it is possible to have a different definition of when a
> computation is "instantiated" in the physical world that prevents
> recordings from being conscious, a solution which doesn't actually depend
> on counterfactuals at all. I described it in the post at
> http://www.mail-archive.com/[email protected]/msg16244.html
>  (or
> https://groups.google.com/d/msg/everything-list/GC6bwqCqsfQ/rFvg1dnKoWMJ
> on google groups). Basically the idea is that in any system following
> mathematical rules, including both abstract Turing machines and the
> physical universe, everything about its mathematical structure can be
> encoded as a (possibly infinite) set of logical propositions. So if you
> have a Turing machine running whose computations over some finite period
> are supposed to correspond to a particular "observer moment", you can take
> all the propositions dealing with the Turing machine's behavior during that
> period (propositions like "on time-increment 107234320 the read/write head
> moved to square 2398311 and changed the digit there from 0 to 1, and
> changed its internal state from M to Q"), and look at the structure of
> logical relations between them (like "proposition A and B together imply
> proposition C, proposition B and C together do not imply A", etc.). Then
> for any other computation or even any physical process, you can see if it's
> possible to find a set of propositions with a completely *isomorphic*
> logical structure.
>
>
> But physical processes don't have *logical* structure.  Theories of
> physical processes do, but I don't think that serves your purpose.
>

Propositions about physical processes have a logical structure, don't they?
And wouldn't such propositions--if properly defined using variables that
appear in whatever the correct fundamental theory turns out to be--have
objective truth-values?

Also, would you say physical processes don't have a mathematical structure?
If you would say that, what sort of "structure" would you say they *do*
have, given that we have no way of empirically measuring any properties
other than ones with mathematical values? Any talk of physical properties
beyond mathematical ones gets into the territory of some kind of
"thing-in-itself" beyond all human comprehension.



>   And even restricting the domain to Turing machines, I don't see what
> proposition A and proposition B are?
>

They could be propositions about basic "events" in the course of the
computation--state changes of the Turing machine's internal state and tape on each
time-step, like the example I gave "on time-increment 107234320 the
read/write head moved to square 2398311 and changed the digit there from 0
to 1, and changed its internal state from M to Q". There would also have to
be propositions for the general rules followed by the Turing machine, like
"if the read/write head arrives at a square with a 1 and the machine's
internal state is P, change the 1 to a 0, change the internal state to S,
and advance along the tape by 3 squares".
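To make this concrete, here's a toy sketch of what I mean (the machine, its states, and its rules are all invented purely for illustration): a minimal Turing machine simulator whose run emits one event-proposition per time-step, in the same spirit as the example propositions above.

```python
# Toy illustration only: a tiny Turing machine whose trace is recorded
# as one "proposition" per step. The rule table and states are invented.

def run_tm(rules, tape, state, steps):
    """Run a Turing machine, returning the event-propositions of its trace.

    rules: dict mapping (state, symbol) -> (new_symbol, move, new_state)
    """
    tape = dict(enumerate(tape))  # sparse tape; unvisited squares hold 0
    head = 0
    propositions = []
    for t in range(steps):
        symbol = tape.get(head, 0)
        if (state, symbol) not in rules:
            break  # halt: no rule applies
        new_symbol, move, new_state = rules[(state, symbol)]
        propositions.append(
            f"on time-increment {t} the head at square {head} "
            f"changed {symbol} to {new_symbol} and state {state} to {new_state}"
        )
        tape[head] = new_symbol
        head += move
        state = new_state
    return propositions

# A toy two-state machine that flips bits while moving right.
rules = {("M", 0): (1, +1, "Q"), ("Q", 1): (0, +1, "M")}
trace = run_tm(rules, [0, 1, 0, 1], "M", 4)
for p in trace:
    print(p)
```

The set of propositions in `trace`, together with propositions encoding the rule table itself, is the sort of collection whose logical relations I'm suggesting one could compare across systems.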




>   Aren't they just the transition diagram of the Turing machine?  So if
> the Turing machine goes through the same set of states, that set defines an
> equivalence class of computations.  But what about a different Turing
> machine that computes the same function?  It may not go through the same
> states even for the same input and output.  In fact there is one such
> Turing machine that just executes the recording.  Right?
>

What I'm imagining here is that if there is a true mathematical theory of
consciousness of the kind David Chalmers imagines, it would define distinct
observer-moments in terms of distinct logical networks, not merely in terms
of "functions" defined solely in terms of input-output relations. Obviously
I can't prove this, but the advantage is that it would preserve most of the
features that make computational theories of mind appealing--see the
discussion by Chalmers at http://consc.net/papers/qualia.html of problems
that arise when you reject computationalism and imagine that the particular
type of matter doing the computation is important for conscious experience
(as John Searle supposes), for instance--while at the same time avoiding
the types of pitfalls about 'instantiation' pointed out by the movie-graph
argument and Maudlin's Olympia.

If correct input-output relations were sufficient for consciousness, then
lookup tables--which can just be giant libraries of previously recorded
computations, showing responses of the computed being (a mind upload, say)
to all possible series of inputs, starting from a given initial
state--would have to be conscious too. But my hunch is that replaying a
recording from a lookup table doesn't count as an "instantiation" of the
observer-moment which is being replayed, and doesn't increase its measure.
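Here's a toy sketch of the lookup-table point (the "mind" function and the input alphabet are invented stand-ins): pre-record a live computation's responses to every input history up to some length, then replay them by pure retrieval. The input-output behavior is identical, yet the replay performs none of the original state transitions.

```python
# Toy illustration only: a lookup table built by pre-recording a "live"
# computation's responses to all input histories, then replaying them.

from itertools import product

def live_mind(history):
    # Stand-in for a genuine computation responding to its input history.
    return sum(history) % 3

INPUTS = (0, 1)
MAX_LEN = 4

# Pre-record every response: the "giant library" of the text above.
lookup = {
    hist: live_mind(hist)
    for n in range(MAX_LEN + 1)
    for hist in product(INPUTS, repeat=n)
}

def replay(history):
    # Pure retrieval: none of the original computation's steps occur here.
    return lookup[tuple(history)]

# The replay matches the live computation on every recorded history.
assert all(replay(h) == live_mind(h) for h in lookup)
```

On an input-output criterion the two are indistinguishable, which is exactly why I suspect the criterion for instantiation has to look at the internal logical network rather than the function computed.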

Jesse

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.