Jesse Mazer wrote:



> Date: Sat, 13 Feb 2010 10:48:28 -0800
> From: jackmal...@yahoo.com
> Subject: Re: problem of size '10
> To: everything-list@googlegroups.com
>
> --- On Fri, 2/12/10, Bruno Marchal <marc...@ulb.ac.be> wrote:
> > Jack Mallah wrote:
> > > --- On Thu, 2/11/10, Bruno Marchal <marc...@ulb.ac.be>
> > > > MGA is more general (and older).
> > > > The only way to escape the conclusion would be to attribute consciousness to a movie of a computation
> > >
> > > That's not true. For partial replacement scenarios, where part of a brain has counterfactuals and the rest doesn't, see my partial brain paper: http://cogprints.org/6321/
> >
> > It is not a question of true or false, but of presenting a valid or non-valid deduction.
>
> What is false is your statement that "The only way to escape the conclusion would be to attribute consciousness to a movie of a computation". So your argument is not valid.
>
> > I don't see anything in your comment or links which prevents the conclusions from being reached from the assumptions. If you think so, tell me at which step, and provide a justification.
>
> Bruno, I don't intend to be drawn into a detailed discussion of your arguments at this time. The key idea though is that a movie could replace a computer brain. The strongest argument for that is that you could gradually replace the components of the computer (which have the standard counterfactual (if-then) functioning) with components that only play out a pre-recorded script or which behave correctly by luck. You could then invoke the 'fading qualia' argument (qualia plausibly could not vanish either suddenly or by gradually fading as the replacement proceeds) to argue that this makes no difference to the consciousness. My partial brain paper shows that the 'fading qualia' argument is invalid.
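
To make the gradual-replacement scenario above concrete, here is a minimal sketch (my own toy illustration, with made-up component functions, not Jack's or Bruno's actual setup): a chain of components with genuine input-output dispositions is swapped, one at a time, for components that merely replay a recorded script. The actual run never changes, but the if-then structure does.

# Toy sketch: a "brain" of components with real input->output dispositions,
# gradually replaced by components that just replay a recorded trace.

def live_component(state, inp):
    # counterfactually sensitive: the output depends on the input received
    return (state + inp) % 7

def make_replay_component(recorded_outputs):
    # plays back a pre-recorded script, ignoring whatever input it gets
    it = iter(recorded_outputs)
    return lambda state, inp: next(it)

def run(components, inputs):
    state, outputs = 0, []
    for comp, inp in zip(components, inputs):
        state = comp(state, inp)
        outputs.append(state)
    return outputs

actual_inputs = [3, 1, 4, 1, 5]
recorded = run([live_component] * 5, actual_inputs)

# Replace components one by one; on the ACTUAL inputs nothing ever changes...
for k in range(6):
    hybrid = ([make_replay_component([recorded[i]]) for i in range(k)]
              + [live_component] * (5 - k))
    assert run(hybrid, actual_inputs) == recorded

# ...but the counterfactual (if-then) functioning is gone: on different
# inputs the fully replayed "brain" still produces the old outputs.
replayed = [make_replay_component([recorded[i]]) for i in range(5)]
print(run(replayed, [0, 0, 0, 0, 0]))              # same as `recorded`
print(run([live_component] * 5, [0, 0, 0, 0, 0]))  # genuinely different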



Hi Jack, to me the idea that counterfactuals would be essential to defining what counts as an "implementation" has always seemed counterintuitive for reasons separate from the Olympia or movie-graph argument. The thought-experiment I'd like to consider is one where some device is implanted in my brain that passively monitors the activity of a large group of neurons, and only if it finds them firing in some precise prespecified sequence does it activate and stimulate my brain in some way, causing a change in brain activity; otherwise it remains causally inert (I suppose because of the butterfly effect, the mere presence of the device would eventually affect my brain activity, but we can imagine replacing the device with a subroutine in a deterministic program simulating my brain in a deterministic virtual environment, with the subroutine only being activated and influencing the simulation if certain simulated neurons fire in a precise sequence).
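
A minimal sketch of that closed, deterministic version (my own construction, with an arbitrary toy update rule and trigger pattern standing in for the neurons and the device): the monitor only intervenes if a prespecified firing pattern occurs, that pattern never occurs on the actual run, and the simulated history is bit-for-bit the same with or without the device.

# Toy deterministic "brain" plus a passive monitor subroutine.
TRIGGER = (7, 10)            # prespecified firing pattern; never occurs below

def step(x):
    # stand-in for the deterministic neuron dynamics
    return (3 * x) % 11

def simulate(x, steps, monitor=False):
    history = [x]
    for _ in range(steps):
        if monitor and tuple(history[-2:]) == TRIGGER:
            x = 0            # the device activates and perturbs the state
        x = step(x)
        history.append(x)
    return history

without_device = simulate(1, 40, monitor=False)
with_device = simulate(1, 40, monitor=True)

# The trigger pattern never shows up on this run, so the device stays
# causally inert and the two histories are identical step for step,
# even though the two systems differ in what they WOULD have done.
assert without_device == with_device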

It seems that these thought experiments inevitably lead to considering a digital simulation of the brain in a virtual environment. This is usually brushed over as an inessential aspect, but I'm coming to the opinion that it is essential. Once you have encapsulated the whole thought experiment in a closed virtual environment in a digital computer, you have the paradox of the rock that computes everything. How do we know what is being computed in this virtual environment? Ordinarily the answer is that we wrote the program, and so we provide the interpretation of the calculation *in this world*. But it seems that in these thought experiments we are implicitly supposing that the simulation inherently provides its own interpretation. Maybe so; but I see no reason to have confidence that this inherent interpretation is either unique or has anything to do with the interpretation we intended. I suspect that this simulated consciousness is only consciousness *in our external interpretation*.
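
For concreteness, the mapping trick behind the rock paradox can be put in a few lines (a standard Putnam/Chalmers-style construction; the target computation and the "rock" below are made up for illustration): any system that merely passes through distinct states can be paired, after the fact, with the state sequence of any computation you like.

# Target computation: the successive states of a program summing 1..5.
target_trace = []
total = 0
for i in range(1, 6):
    total += i
    target_trace.append(("i", i, "total", total))

# The "rock": its physical state at time t is just t.
rock_trace = list(range(len(target_trace)))

# An external interpretation: a lookup table pairing rock states with
# computational states.  Nothing intrinsic to the rock singles this one out.
interpretation = dict(zip(rock_trace, target_trace))

assert [interpretation[s] for s in rock_trace] == target_trace
# Under this mapping the rock "computed" the sum; under another mapping it
# "computed" something else entirely.  The content lives in the mapping.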

Brent

According to the counterfactual definition of implementations, would the mere presence of this device change my qualia from what they'd be if it weren't present, even if the neurons required to activate it never actually fire in the correct sequence and the device remains completely inert? That would seem to divorce qualia from behavior in a pretty significant way...

If you have time, perhaps you could take a look at my post at http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html where I discussed a vague idea for how one might define isomorphic "causal structures" that could be used to address the implementation problem, in a way that wouldn't depend on counterfactuals at all (there was some additional discussion in the followup posts on that thread, linked at the bottom of that mail-archive.com page). The basic idea was to treat the physical world as a formal axiomatic system, the axioms being laws of physics and initial conditions, the theorems being statements about physical events at later points in spacetime; then "causal structure" could be defined in terms of the patterns of logical relations between theorems, like "given the axioms along with theorems A and B, we can derive theorem C". Since all theorems concern events that actually did happen, counterfactuals would not be involved, but we could still perhaps avoid the type of problem Chalmers discussed where a rock can be viewed as implementing any possible computation. If you do have time to look over the idea and you see some obvious problems with it, let me know...
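
A rough sketch of the kind of bookkeeping that idea suggests (a toy reading only; the cellular-automaton rule and the data structures below are my illustrative assumptions, not the proposal in the linked post): take a deterministic rule plus initial conditions as the axioms, take facts about later cells as theorems, and record which already-derived facts each theorem follows from. Every node concerns an event that actually happened, so no counterfactuals enter, and two runs could then be compared by asking whether their dependency graphs are isomorphic.

WIDTH, STEPS = 6, 4

def rule(left, center, right):
    # toy update rule (elementary cellular automaton rule 150)
    return left ^ center ^ right

# "Axioms": the initial row of cells (plus the rule itself).
cells = {(0, x): x % 2 for x in range(WIDTH)}

# "Theorems" and their derivations: given the axioms and the facts about
# cells (t, x-1), (t, x), (t, x+1), we can derive the fact about cell (t+1, x).
derived_from = {}
for t in range(STEPS):
    for x in range(WIDTH):
        parents = [(t, (x - 1) % WIDTH), (t, x), (t, (x + 1) % WIDTH)]
        cells[(t + 1, x)] = rule(*(cells[p] for p in parents))
        derived_from[(t + 1, x)] = parents

# The candidate "causal structure" is just this pattern of
# "A, B and C (plus the axioms) yield D" relations.
print(derived_from[(1, 0)])   # -> [(0, 5), (0, 0), (0, 1)]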

Jesse