Re: problem of size '10

On Tue, Feb 23, 2010 at 10:40 AM, Jack Mallah <jackmal...@yahoo.com> wrote:
> My last post worked (I got it in my email).  I'll repost one later and then
> post on the measure thread - though it's still a very busy time for me so
> maybe not today.
>
> --- On Mon, 2/22/10, Jesse Mazer <laserma...@hotmail.com> wrote:
> > OK, so you're suggesting there may not be a one-to-one relationship
> between distinct observer-moments in the sense of distinct qualia, and
> distinct computations defined in terms of counterfactuals? Distinct
> computations might be associated with identical qualia, in other words?
>
> Sure.  Otherwise, there'd be little point in trying to simulate someone, if
> any detail could change everything.
>

If by "change everything" you mean radical differences in the qualia, that
wasn't really what I was suggesting--even if there was a one-to-one
relationship between distinct computations and distinct observer-moments
with distinct qualia, very similar computations could produce very similar
qualia, so if you produced a "good enough" simulation of anyone's brain then
the simulation's experience could be nearly identical to that of the
original brain (and of course the simulation's experience would start to
diverge from the original brain's anyway as they'd receive different sensory
input)

>
> > What about the reverse--might a single computation be associated with
> multiple distinct observer-moments with different qualia?
>
> Certainly. For example, a sufficiently detailed simulation of the Earth
> would be associated with an entire population of observers.
>

But isn't that just breaking up the computation into various
sub-computations and saying that each sub-computation has distinct
experiences? In this case you're not really saying that the Earth
computation *taken as a whole* is associated with multiple qualia. It's as
if we associated distinct qualia with distinct sets--the set {{}, {{}}}
might be associated with different qualia than the set {} which is contained
within it, but that's not the same as saying that the set {{}, {{}}} is
*itself* associated with multiple distinct qualia.
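The nesting in that analogy can be made concrete (a throwaway sketch, using Python frozensets to stand in for the sets):

```python
# Model the sets {} and {{}, {{}}} with hashable frozensets.
empty = frozenset()               # {}
single = frozenset({empty})       # {{}}
pair = frozenset({empty, single}) # {{}, {{}}}

# {} is an element of {{}, {{}}} -- a distinct set contained within it,
# not the same set carrying two sets of properties.
assert empty in pair
assert empty != pair
assert len(pair) == 2
```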

>
> > You say "Suppose that a(t),b(t),and c(t) are all true", but that's not
> enough information--the notion of causal structure I was describing involved
> not just the truth or falsity of propositions, but also the logical
> relationships between these propositions given the axioms of the system.
>
> OK, I see what you're saying, Jesse.  I don't think it's a good solution
> though.
>
> First, you are implicitly including a lot of counterfactual information
> already, which is the reason it works at all.  "B implies A" is logically
> equivalent to "Not A implies Not B".  I'll use ~ for "Not", "-->" for
> "implies", and the axiom context is assumed.  A,B are Boolean variables /
> bits.  So if you say
>
> A --> B
> B --> A
>
> that's the same as saying
>
> A --> B
> ~A --> ~B
>
> which is the same as saying B = A.  Your way is just a clumsy way to
> provide some of the counterfactual information, which is often most
> concisely expressed as equations.  So if you think you have escaped
> counterfactuals, I disagree.
>
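(The two equivalences in the quoted passage can be checked by brute force over the four truth assignments; a quick sketch:)

```python
from itertools import product

def implies(p, q):
    # Material implication: p --> q is false only when p is true and q false.
    return (not p) or q

for a, b in product([False, True], repeat=2):
    # B --> A is logically equivalent to its contrapositive ~A --> ~B.
    assert implies(b, a) == implies(not a, not b)
    # (A --> B) together with (B --> A) holds exactly when A = B.
    assert (implies(a, b) and implies(b, a)) == (a == b)
```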

Well, the idea is that to determine what causal structures are contained in
a given "universe" (whether a physical universe or a computation), we adopt
the self-imposed rule that we *only* look at a set of propositions
concerning events that actually occurred in that universe, not at other
propositions concerning events that didn't occur in that universe. Then the
only causal structures contained in this universe are the ones that can be
found in the logical relations between this restricted set of propositions.

Aside from that though, the counterfactuals you mention are of a very
limited kind, just involving negations of propositions about events that
actually occurred. Perhaps I'm misunderstanding, but I thought that the way
you (and Chalmers) wanted to define implementations of computations using
counterfactuals involved a far "richer" set of counterfactuals about
detailed alternate histories of what could have occurred if the inputs were
different. For example, if the computation were a simulation of my brain
receiving sensory input from the external world, and it so happened that
this sensory input involved me seeing my desk and computer in front of me,
then your type of proposed solution to the implementation problem would
require considering how the simulation would have responded if it had
instead been fed some very different sensory input such as the sudden
appearance of a miniature dragon flying out of my computer monitor. In your
proposal, two computers running identical brain simulations and being fed
identical sensory inputs can only be considered implementations of the same
computation if it's true that they both *would* respond the same way to
totally different inputs, in other words. Is this understanding correct or
have I got it wrong?
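To make that distinction concrete (a toy sketch; the input/output names are invented for illustration): two devices can agree on the one input that actually arrived while diverging on inputs that never did, and the counterfactual criterion is sensitive to exactly that difference.

```python
# Toy "simulations" modeled as input -> response tables (hypothetical names).
sim_a = {"desk": "see desk", "dragon": "startle"}
sim_b = {"desk": "see desk", "dragon": "ignore"}

actual_input = "desk"
all_inputs = ["desk", "dragon"]  # includes inputs that never actually occurred

# Judged only on the actual run, the two are indistinguishable:
assert sim_a[actual_input] == sim_b[actual_input]

# Judged on counterfactual inputs too, they implement different computations:
same_computation = all(sim_a[i] == sim_b[i] for i in all_inputs)
assert not same_computation
```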

>
> The next problem is that for a larger number of bits, you won't express the
> full dynamics of the system.  For example with 10 bits, there are more
> possible combinations than your system will have statements.  I guess you
> see that as a feature rather than a bug - after all, it's what allows you to
> ignore "inert machinery".  I don't like it but perhaps that's a matter of
> taste.

When you say "possible combinations", do you mean propositions about
different states of the bits besides the ones that actually occurred? If
proposition A in my previous example referred to the (true) proposition "at
time t, cell N of the Turing machine tape was in state 1" then are you
bothered by the fact that we're not also considering the separate
(counterfactual) proposition "at time t, cell N of the Turing machine tape
was in state 0"? If so, then yes, I am treating that as a feature rather
than a bug since my goal is to try to find a way to define causal structure
(and escape Chalmers' issue that any object such as a rock could be viewed
as implementing any possible computation) in a way that doesn't rely on
knowing counterfactuals about how a system *would* have behaved in scenarios
other than the ones that actually occurred.
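(The gap between statements about an actual run and statements about all possible combinations is easy to quantify; a back-of-the-envelope sketch, with the example trajectory chosen arbitrarily:)

```python
n_bits = 10
possible_states = 2 ** n_bits  # every combination the full dynamics would rule on
print(possible_states)         # 1024

# A concrete run visits only a handful of states; restricting attention to
# propositions about those states leaves most of the state space unmentioned.
trajectory = [0b0000000001, 0b0000000010, 0b0000000100]  # arbitrary example run
actual_state_propositions = len(set(trajectory))
assert actual_state_propositions < possible_states
```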

--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.
To unsubscribe from this group, send email to