> Date: Mon, 22 Feb 2010 11:41:38 -0800
> From: jackmal...@yahoo.com
> Subject: RE: problem of size '10
> To: everything-list@googlegroups.com
> 
> Jesse, how do you access the everything list?  I ask because I have not 
> received my own posts in my inbox, nor have others such as Bruno replied.  I 
> use Yahoo email.  I may need to use a different method to prevent my posts 
> from getting lost.  They do seem to show up on Google Groups though.  There 
> was never a problem until recently, so I'll see if this one works.
I just get the messages in my email--if you want to give a link to one of the 
emails that didn't show up in your inbox, either from Google Groups or from 
http://www.mail-archive.com/everything-list@googlegroups.com/maillist.html , 
then I can check if that email showed up in my own inbox, since I haven't 
deleted any of the everything-list emails for a few days.

> 
> --- On Mon, 2/22/10, Jesse Mazer <laserma...@hotmail.com> wrote:
> > Hi Jack, to me the idea that counterfactuals would be essential to defining 
> > what counts as an "implementation" has always seemed counterintuitive for 
> > reasons separate from the Olympia or movie-graph argument. The 
> > thought-experiment I'd like to consider is one where some device is 
> > implanted in my brain that passively monitors the activity of a large group 
> > of neurons, and only if it finds them firing in some precise prespecified 
> > sequence does it activate and stimulate my brain in some way, causing a 
> > change in brain activity; otherwise it remains causally inert.
> > According to the counterfactual definition of implementations, would the 
> > mere presence of this device change my qualia from what they'd be if it 
> > wasn't present, even if the neurons required to activate it never actually 
> > fire in the correct sequence and the device remains completely inert? That 
> > would seem to divorce qualia from behavior in a pretty significant way...
> 
> The link between qualia and computations is, of course, hard to know anything 
> about.  But it seems to me quite likely that qualia would be insensitive to 
> the sort of changes in computations that you are talking about.  Such 
> modified computations could give rise to the same (or nearly the same) set of 
> qualia for the 'inert device' runs as unmodified ones would have.  I am not 
> saying that this must always be the case, since if you take it too far you 
> could run into Maudlin-type problems, but in many cases it would make sense.

OK, so you're suggesting there may not be a one-to-one relationship between 
distinct observer-moments in the sense of distinct qualia, and distinct 
computations defined in terms of counterfactuals? Distinct computations might 
be associated with identical qualia, in other words? What about the 
reverse--might a single computation be associated with multiple distinct 
observer-moments with different qualia?
> 
> > If you have time, perhaps you could take a look at my post
> > http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html
> > where I discussed a vague idea for how one might define isomorphic "causal 
> > structures" that could be used to address the implementation problem, in a 
> > way that wouldn't depend on counterfactuals at all.
> 
> You do need counterfactuals to define implementations.
> 
> Consider the computation c(t+1) = a(t) AND b(t), where a, b, and c are bits.  
> Suppose that a(t), b(t), and c(t) are all true.  Without counterfactuals, how 
> would you distinguish the above from another computation such as c(t+1) = 
> a(t)?
> 
> Even worse, suppose that c(t+1) is true no matter what, while a(t) and b(t) 
> happen to be true.  Is the above computation implemented?
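
To make the point concrete, here is a minimal Python sketch (the function names and the single observed run are hypothetical, not anything from the original post): on the one input history that actually occurred, the AND computation and the "copy a" computation agree, and only counterfactual inputs that never happened tell them apart.

def and_gate(a, b):
    # candidate 1: c(t+1) = a(t) AND b(t)
    return a and b

def copy_a(a, b):
    # candidate 2: c(t+1) = a(t), ignoring b(t)
    return a

# The one run that actually happened: a(t) and b(t) both true.
actual_run = (True, True)
print(and_gate(*actual_run) == copy_a(*actual_run))   # True: indistinguishable

# Only counterfactual runs separate the two computations:
counterfactuals = [(True, False), (False, True), (False, False)]
print([and_gate(a, b) == copy_a(a, b) for a, b in counterfactuals])
# [False, True, True] -> they disagree on the never-realized input (True, False)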

You say "Suppose that a(t),b(t),and c(t) are all true", but that's not enough 
information--the notion of causal structure I was describing involved not just 
the truth or falsity of propositions, but also the logical relationships 
between these propositions given the axioms of the system. For example, if we 
are looking at three propositions A, B, and C in the context of an axiomatic 
system, we can ask whether or not the axioms (which might represent the laws of 
physics, or the internal rules of a Turing machine) along with propositions A 
and B (which could represent specific physical facts such as initial 
conditions, or facts about particular cells on the Turing machine's tape at a 
particular time) can together be used to prove C, or whether they are 
insufficient to prove C. The causal structure for a given set of propositions 
could then be defined in terms of all possible combinations of logical 
implications for those propositions, like this:
1. Axioms + A imply B: true or false?
2. Axioms + A imply C: true or false?
3. Axioms + B imply A: true or false?
4. Axioms + B imply C: true or false?
5. Axioms + C imply A: true or false?
6. Axioms + C imply B: true or false?
7. Axioms + A + B imply C: true or false?
8. Axioms + A + C imply B: true or false?
9. Axioms + B + C imply A: true or false?
For example, one possible causal structure for three propositions would be:
1. Axioms + A imply B: false
2. Axioms + A imply C: true
3. Axioms + B imply A: false
4. Axioms + B imply C: false
5. Axioms + C imply A: false
6. Axioms + C imply B: false
7. Axioms + A + B imply C: true
8. Axioms + A + C imply B: true
9. Axioms + B + C imply A: false
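
A rough sketch of how such a table could be computed mechanically. The axiom set below is a purely hypothetical toy (two Horn-style rules chosen only to illustrate the procedure, not to reproduce the particular truth values above); in the intended application the axioms would encode the laws of physics or a Turing machine's transition rules, and the entailment check would be correspondingly harder.

from itertools import combinations

# Each hypothetical rule maps a set of premises to a conclusion it proves.
AXIOMS = [
    (frozenset({"A"}), "C"),        # toy rule: Axioms + A prove C
    (frozenset({"B", "C"}), "A"),   # toy rule: Axioms + B + C prove A
]

def entails(premises, goal, rules=AXIOMS):
    # Naive forward chaining: keep applying rules whose premises are all known
    # until nothing new is provable, then check whether the goal was reached.
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return goal in known

def causal_structure(props, rules=AXIOMS):
    # Answer every question "Axioms + (proper subset) imply (remaining prop)?"
    table = {}
    for size in range(1, len(props)):
        for subset in combinations(props, size):
            for goal in props:
                if goal not in subset:
                    table[(subset, goal)] = entails(subset, goal, rules)
    return table

for (subset, goal), holds in sorted(causal_structure(("A", "B", "C")).items()):
    print("Axioms + " + " + ".join(subset) + " imply " + goal + ":", holds)
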
Then if you had three other propositions D, E, F, they would have an isomorphic 
causal structure to A, B, C if you could map the two sets of propositions to 
one another such that all the logical implications would be the same. For 
example, suppose the following logical relations hold for D, E, F:
1. Axioms + E imply D: false
2. Axioms + E imply F: true
3. Axioms + D imply E: false
4. Axioms + D imply F: false
5. Axioms + F imply E: false
6. Axioms + F imply D: false
7. Axioms + E + D imply F: true
8. Axioms + E + F imply D: true
9. Axioms + D + F imply E: false
Then if you map D to B, E to A, and F to C, you can see that their causal 
structures are isomorphic. On the other hand, suppose the logical relations 
were:
1. Axioms + D imply E: true
2. Axioms + D imply F: true
3. Axioms + E imply D: false
4. Axioms + E imply F: false
5. Axioms + F imply D: false
6. Axioms + F imply E: false
7. Axioms + D + E imply F: true
8. Axioms + D + F imply E: true
9. Axioms + E + F imply D: false
In this case there could be no isomorphism with A, B, and C, since D can be 
used to prove either E or F, but neither A nor B nor C can be used to prove 
both the other two propositions in that group. So in this case A, B, C would 
not have the same causal structure as D, E, F.
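
To see the isomorphism check itself done mechanically, here is a small Python sketch. The dictionaries simply hard-code the nine entries from the lists above as (premise set, goal) -> truth value; the function and variable names are my own shorthand. The check tries every relabeling of D, E, F onto A, B, C and keeps the relabelings under which all nine entries agree.

from itertools import permutations

def entry(premises, goal, value):
    # One line of a causal-structure table: "Axioms + premises imply goal" is value.
    return ((frozenset(premises), goal), value)

# Causal structure of A, B, C (the first list of nine entries above).
ABC = dict([
    entry("A", "B", False), entry("A", "C", True),
    entry("B", "A", False), entry("B", "C", False),
    entry("C", "A", False), entry("C", "B", False),
    entry("AB", "C", True), entry("AC", "B", True),
    entry("BC", "A", False),
])

# First table for D, E, F (isomorphic under D -> B, E -> A, F -> C).
DEF_FIRST = dict([
    entry("E", "D", False), entry("E", "F", True),
    entry("D", "E", False), entry("D", "F", False),
    entry("F", "E", False), entry("F", "D", False),
    entry("ED", "F", True), entry("EF", "D", True),
    entry("DF", "E", False),
])

# Second table for D, E, F (not isomorphic: D alone proves both E and F).
DEF_SECOND = dict([
    entry("D", "E", True),  entry("D", "F", True),
    entry("E", "D", False), entry("E", "F", False),
    entry("F", "D", False), entry("F", "E", False),
    entry("DE", "F", True), entry("DF", "E", True),
    entry("EF", "D", False),
])

def isomorphisms(table, target, props="DEF", target_props="ABC"):
    # Try every relabeling of props onto target_props; keep those that make
    # the relabeled table identical to the target table.
    for perm in permutations(target_props):
        mapping = dict(zip(props, perm))
        relabeled = {(frozenset(mapping[p] for p in premises), mapping[goal]): v
                     for (premises, goal), v in table.items()}
        if relabeled == target:
            yield mapping

print(list(isomorphisms(DEF_FIRST, ABC)))    # [{'D': 'B', 'E': 'A', 'F': 'C'}]
print(list(isomorphisms(DEF_SECOND, ABC)))   # []  (no relabeling works)
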
So, it seems to me that identifying observer-moments with particular causal 
structures avoids the implication that any possible system can be "interpreted" 
in such a way as to instantiate any possible observer-moment, but it also 
avoids the need to consider counterfactuals, since we can restrict ourselves to 
propositions about events which actually occurred.
