Lost and not lost?

2008-11-26 Thread Ronald Held

I joined this list due to Tegmark's site, and got help on my multiverse talk.
I have tried to read other threads and do not understand most of
them. Some of this may be due to shared implicit knowledge; some of it
seems like philosophy rather than proofs or calculations I can
understand. I am not even certain what to ask first, so I will wait to
see what explanations I receive and then ask additional questions.
   Ronald




Re: join post

2008-11-26 Thread Abram Demski

Bruno,

I am glad for the opportunity to discuss these things with someone who
knows something about these issues.

> In my opinion, revision theories are useful when a machine begins to
> bet on a universal environment independent of herself. Above her
> Godel-Lob-Solovay correct self-reference logic, she will have to
> develop a nonmonotonic surface to be able to handle her errors,
> dreams, etc. It is a bit closer to practical artificial intelligence
> engineering than to machine theology, but I am OK with that.

I am interested in nonmonotonic logics as an explanation of how we can
have "concepts" that don't just reduce to first-order theories--
specifically, concepts such as "number" that fall prey to Godelian
incompleteness. In other words, I think that "we use nonmonotonic
logic" is at least a partial answer to what I called the little
puzzle.

>> First we have "true" and "false". Dealing with these in an
>> unrestricted manner, we can construct sentences such as "this sentence
>> is false".
>
> I don't think we can really do that. (And I can prove this, making
> the assumption that we are ideally sound universal machines.)

I'm not claiming that we can *consistently* construct such sentences,
just that we can try to construct them, and then run into problems
when we try to reason about them. Luckily we have what you called a
"nonmonotonic surface", so we can draw back and either give up or try
from different angles.

--Abram


Re: join post

2008-11-26 Thread Abram Demski

Russell,

I do not see why some appropriately modified version of that theorem
couldn't be proven for other settings. As a concrete example, let's
just use Schmidhuber's GTMs. There would be universal GTMs, a constant
cost for conversion, and everything else needed for a version of the
theorem, wouldn't there? (I am assuming things; I will look up some
details this afternoon... I have the book you refer to, and I'll look
at the theorem... but I suppose I should also re-read the paper about
GTMs before making bold claims...)

--Abram

On Tue, Nov 25, 2008 at 5:41 PM, Russell Standish <[EMAIL PROTECTED]> wrote:
>
> On Tue, Nov 25, 2008 at 04:58:41PM -0500, Abram Demski wrote:
>>
>> Russell,
>>
>> Can you point me to any references? I am curious to hear why the
>> universality goes away, and what "crucially depends" means, et cetera.
>>
>> -Abram Demski
>>
>
> This is sort of discussed in my book "Theory of Nothing", but not in
> technical detail. Excuse the LaTeX notation below.
>
> Basically any mapping O from the set of infinite binary strings
> {0,1}^\infty (equivalently the set of reals [0,1) ) to the integers
> induces a probability distribution relative to the uniform measure
> dy over {0,1}^\infty:
>
> P(x) = \int_{y \in O^{-1}(x)} dy
>
> In the case where O is a universal prefix machine, P(x) is just the
> usual Solomonoff-Levin universal prior, as discussed in chapter 3 of
> Li and Vitanyi. In the case where O is not universal, or perhaps
> even not a machine at all, the important Coding theorem (Thm 4.3.3 in
> Li and Vitanyi) no longer holds, so the distribution is no longer
> universal; however, it is still a probability distribution (provided
> O(y) is defined for all y in {0,1}^\infty) that depends on the choice
> of observer map O.
>
> Hope this is clear.
>
> --
>
> 
> A/Prof Russell Standish                  Phone 0425 253119 (mobile)
> Mathematics                              [EMAIL PROTECTED]
> UNSW SYDNEY 2052                         http://www.hpcoders.com.au
> Australia
> 
>
> >
>
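
A minimal numerical sketch, in Python, of the induced distribution Russell
describes (the toy observer map and the finite prefix length are assumptions
of the sketch, not anything from his post). It approximates
P(x) = \int_{y \in O^{-1}(x)} dy by giving each length-n prefix the uniform
weight 2^-n, which is exact whenever O only reads the first n bits:

from itertools import product

def induced_distribution(observer, n):
    # Approximate P(x): the total uniform measure of the prefixes that the
    # observer maps to x, assuming the observer only reads the first n bits
    # (so each length-n prefix carries measure 2**-n of {0,1}^\infty).
    p = {}
    for prefix in product((0, 1), repeat=n):
        x = observer(prefix)
        p[x] = p.get(x, 0.0) + 2.0 ** -n
    return p

def count_ones(prefix):
    # A toy, non-universal observer map: the number of 1s it sees.
    return sum(prefix)

print(induced_distribution(count_ones, 4))
# -> {0: 0.0625, 1: 0.25, 2: 0.375, 3: 0.25, 4: 0.0625}

With a universal prefix machine in place of count_ones, P(x) becomes the
Solomonoff-Levin prior m(x), and the Coding theorem gives
-log_2 m(x) = K(x) + O(1); for a non-universal O, that identity is exactly
what fails.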




Re: MGA 2

2008-11-26 Thread Brent Meeker

Stathis Papaioannou wrote:
> 2008/11/26 Kory Heath <[EMAIL PROTECTED]>:
>>
>> On Nov 24, 2008, at 5:40 PM, Stathis Papaioannou wrote:
>>> The question turns on what is a computation and why it should have
>>> magical properties. For example, if someone flips the squares on a
>>> Life board at random and accidentally duplicates the Life rules does
>>> that mean the computation is carried out?
>> I would say no. But of course, the real question is, "Why does it
>> matter?" If I'm reading you correctly, you're taking the view that
>> it's the pattern of bits that matters, not what created it (or
>> "caused" it, or "computed it", etc.)
> 
> Yes. Suppose one of the components in my computer is defective but,
> with incredible luck, is outputting the appropriate signals due to
> thermal noise. Would it then make sense to say that the computer isn't
> "really" running Firefox, but only pretending to do so, reproducing
> the Firefox behaviour but lacking the special Firefox
> qualia-equivalent?

Doesn't this antinomy arise because we equivocate on "running Firefox"?  Do we
mean a causal chain of events in the computer according to a certain program
specification, or do we mean the appearance on the screen of the same thing
that the causal chain would have produced?  We'd say "no" by the first meaning,
but "yes" by the second.  Obviously, the question is not black-and-white.  If
the computer simply dropped a bit or two and miscolored a few pixels, no one
would notice and no one would assert it wasn't running Firefox.  So really,
when we talk about "running Firefox" we are referring to a fuzzy, holistic
process that admits of degrees.

I'm developing a suspicion of arguments that say "suppose by accident...".  If 
we say that the (putative) possibility of something happening "by accident" 
destroys the relevance of it happening as part of a causal chain, we are, in a 
sense, rejecting the concept of causal chains and relations - and not just in 
consciousness, as your Firefox example illustrates.

I wrote "putative" above because this kind of thought experiment hypothesizes 
events whose probability is infinitesimal.  If you take a finitist view, there 
is a lower bound to non-zero probabilities.


> 
>> It would help me if I had a clearer idea of how you view
>> consciousness. I assume that, for you, if someone flips the squares on
>> a Life board at random and creates the expected "chaos", there's no
>> consciousness there, but that there are certain configurations that
>> could arise (randomly) that you would consider conscious. I assume
>> that these patterns would show some kind of regularity - some kind of
>> law-like behavior.
> 
> In the first instance, yes. But then the problem arises that under a
> certain interpretation, the chaotic patterns could also be seen as
> implementing any given computation. A common response to this is that
> although it may be true in a trivial sense, as it is true that a block
> of marble contains every possible statue, it is useless to define
> something as a computation unless it can process information in a way
> that interacts with its environment. This seems reasonable so far, but
> what if the putative computation is of a virtual world with conscious
> observers? The trivial sense in which such a computation can be said
> to be hiding in chaos is no longer trivial, 

It is still trivial in the sense that it could be said to instantiate all
possible conscious worlds (at least up to some size limit).  Since we don't
know what is necessary to instantiate consciousness, this seems much more
speculative than saying the block of marble instantiates all computations -
which we already agree is true only in a trivial sense.

>as I see no reason why the
> consciousness of these observers should be contingent on the
> possibility of interaction with the environment containing the
> substrate of their implementation. My conclusion from this is that
> consciousness, in general, is not dependent on the orderly physical
> activity which is essential for the computations that we observe.

Yet this is directly contradicted by those specific instances in which 
consciousness is interrupted by disrupting the physical activity.


> Rather, consciousness must be a property of the abstract computation
> itself, which leads to the conclusion that the physical world is
> probably a virtual reality generated by the big computer in Platonia,

This seems to me to be jumping to a conclusion by examining only one side
of the argument and, finding it flawed, embracing the contrary.  Abstract
computations are atemporal and don't have to be generated.  So it amounts to
saying that the physical world just IS in virtue of there being some mapping
between the world and some computation.


> since there is no basis for believing that there is a concrete
> physical world separate from the necessarily existing virtual one.
> 
>> It's not easy for me to explain why I think it matters what kind of
>> process (or in Platonia, what kind of abstract computation) generated
>> that order.

Re: MGA 1

2008-11-26 Thread Bruno Marchal


On 25 Nov 2008, at 20:16, Brent Meeker wrote:

>
> Bruno Marchal wrote:
>>

>>
>>> Brent: I don't see why the mechanist-materialists are
>>> logically disallowed from incorporating that kind of physical
>>> difference into their notion of consciousness.
>>
>>
>> Bruno: In our setting, it means that the neuron/logic gates have  
>> some form of
>> prescience.
>
> Brent: I'm not sure I agree with that.  If consciousness is a  
> process it may be
> instantiated in physical relations (causal?).  But relations are in  
> general not
> attributes of the relata.  Distance is an abstract relation but it  
> is always
> realized as the distance between two things.  The things themselves  
> don't have
> "distance".  If some neurons encode my experience of "seeing a rose"  
> might not
> the experience depend on the existence of roses, the evolution of  
> sight, and the
> causal chain as well as the immediate state of the neurons?


With *digital* mechanism, it would just mean that we have not chosen
the right level of substitution. Once the level is well chosen, we can
no longer give a role to the implementation details. They can no
longer be relevant, or else we introduce prescience in the elementary
components.


>>
>>
>>> Bostrom's views about fractional
>>> "quantities" of experience are a case in point.
>>
>> If that was true, why would you say "yes" to the doctor without
>> knowing the thickness of the artificial axons?
>> How can you be sure your consciousness will not diminish by half
>> when the doctor proposes the new, cheaper brain which uses thinner
>> fibers, or halves the number of redundant security fibers (thanks
>> to progress in security software)?
>> I would no longer dare to say "yes" to the doctor if I could lose a
>> fraction of my consciousness and become a partial zombie.
>
> But who would say "yes" to the doctor if he said that he would take  
> a movie of
> your brain states and project it?  Or if he said he would just  
> destroy you in
> this universe and you would continue your experiences in other  
> branches of the
> multiverse or in platonia?  Not many I think.


I agree with you. Not many will say yes to such a doctor! And
rightly so (with MEC). I think MGA 3 should make this clear.
The point is just that if we assume both MEC *and* MAT, then the
movie is "also" conscious, but of course (well: by MGA 3) it is not
conscious "qua computatio", so that we get the (NON COMP or NON MAT)
conclusion.

I keep COMP (as my working hypothesis, but of course I find it
plausible for many reasons), so I abandon MAT. With comp,
consciousness can still supervene on computations (in Platonia, or
more concretely in the universal deployment), but not on their
physical implementations. By UDA we now have the obligation to explain
the physical by the computational. It is the reversal I talked about.
Somehow, consciousness does not supervene on brain activity; rather,
brain activity supervenes on consciousness. In short, this is because
consciousness is now somehow related to the whole of arithmetical
truth, and things are not so simple.

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: join post

2008-11-26 Thread Bruno Marchal

Hi Abram,


On 26 Nov 2008, at 00:01, Abram Demski wrote:

>
> Bruno,
>
> Yes, I have encountered the provability logics before, but I am no  
> expert.


We will perhaps have the opportunity to talk about this.


>
>
>>> In any given
>>> generation, the entity who can represent the truth-predicate of the
>>> most other entities will dominate.
>>
>> Why?
>
> The notion of the entities adapting their logics in order to better
> reason about each other is meant to be more of an informal
> justification than an exact proof, so I'm not worried about stating my
> assumptions precisely... If I did, I might simply take this to be an
> assumption rather than a derived fact. But, here is an informal
> justification.
>
> Since the entities start out using first-order logic, it will be
> useful to solve the halting problem to reach conclusions about what a
> fellow-creature *won't* ever reach conclusions about. This means a
> "provable" predicate will be useful. To support deduction with this
> predicate, of course, the entities will gain more and more axioms over
> time; axioms that help solve instances of the halting problem will
> survive, while axioms that provide incorrect information will not.
> This means that the "provable" predicate has a moving target: more and
> more is provable over time.


All right.



> Eventually it will become useful to
> abstract away from the details with a "true" predicate.


Here, assuming the mechanist hypothesis (or some weakening), the way
the "truth predicate" is introduced is really what will decide if the
soul of the machine will fall into Hell, or get enlightened and go to
Heaven. The all-encompassing "truth" is not even nameable or
describable by the machines.





> The "true"
> predicate essentially says "provable by some sufficiently evolved
> system". This allows an entity to ignore the details of the entity it
> is currently reasoning about.


If PA (Peano Arithmetic) deduces "I can prove that I am consistent"
from "I can prove that ZF (Zermelo Fraenkel Set Theory) proves that I
am consistent", then PA goes to hell!
If an entity refers to a more powerful entity, even if "we" trust that
more powerful entity, it is just an invalid "argument from authority".
Of course if PA begins to *believe* in the axioms of ZF, then PA
becomes ZF, and can assert the consistency of PA without problem. But
then, "we" are no longer talking *about* PA, but about ZF.



> This won't always work-- sometimes it
> will need to resort to reasoning about provability again. But, it
> should be a useful concept (after all, we find it to be so).


Sure. But truth is really a question mark. We can only "search for"
it.



>
>
>>> Of course, this gives rise to an outlandish number of truth-values  
>>> (one
>>> for each ordinal number), when normally any more than 2 is  
>>> considered
>>> questionable.
>>
>>
>> Not really, because those truth values are, imo, not really truth
>> values, but they quantify a ladder toward infinite credibility,
>> assurance, or something like that. Perhaps security.
>
> I agree that the explosion of "truth-values" is acceptable because
> they are not really truth-values... but they do not go further and
> further into absolute confidence, but rather further and further into
> meaninglessness. Obviously my previous explanation was not adequate.
>
> First we have "true" and "false". Dealing with these in an
> unrestricted manner, we can construct sentences such as "this sentence
> is false".


I don't think we can really do that. (And I can prove this, making
the assumption that we are ideally sound universal machines.)





> We need to label these somehow as meaningless or
> pathological. I think either a fixed-point construction or revision
> theory is an OK option for doing this;


In my opinion, revision theories are useful when a machine begins to
bet on a universal environment independent of herself. Above her
Godel-Lob-Solovay correct self-reference logic, she will have to
develop a nonmonotonic surface to be able to handle her errors,
dreams, etc. It is a bit closer to practical artificial intelligence
engineering than to machine theology, but I am OK with that.



> perhaps one is better
> than the other, perhaps they are ultimately equivalent where it
> matters, I don't know. Anyway, now we are stuck with a new predicate:
> "meaningless". Using this in an unrestricted manner, I can say "this
> sentence is either meaningless or false". I need to rule this out, but
> I can't label it "meaningless", or I will then conclude it is true
> (assuming something like classical logic). So I need to invent a new
> predicate, 2-meaningless. Using this in an unrestricted manner again
> would lead to trouble, so I'll need 3-meaningless and 4-meaningless
> and finitely-meaningless and countably-meaningless and so on.


Indeed. It seems you make the point.

Best,


Bruno Marchal

http://iridia.ulb.ac.be/~marchal/
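
A toy sketch, in Python, of the revision-sequence idea discussed above (the
sentence encodings, the fixed starting hypothesis, and the stabilisation
test are illustrative assumptions; it also only reaches the first
"meaningless" level of Abram's hierarchy). Each sentence is re-evaluated
under the current hypothesis about all truth values; a sentence whose value
never stabilises is flagged as pathological:

def make_sentences():
    return {
        # "Snow is white": grounded, ignores the hypothesis entirely.
        "snow_is_white": lambda h: True,
        # "This sentence is false": the liar.
        "liar": lambda h: not h["liar"],
        # "This sentence is true": stabilises below, but only thanks to
        # the arbitrary starting hypothesis (it is equally stable as True).
        "truthteller": lambda h: h["truthteller"],
    }

def revise(hypothesis, sentences):
    # One revision step: re-evaluate every sentence at once under the
    # current hypothesis.
    return {name: rule(hypothesis) for name, rule in sentences.items()}

def classify(sentences, steps=50):
    # Run the revision sequence and flag any sentence whose value keeps
    # changing in the tail of the sequence.
    h = {name: False for name in sentences}  # arbitrary starting hypothesis
    history = []
    for _ in range(steps):
        h = revise(h, sentences)
        history.append(h)
    tail = history[steps // 2:]
    return {name: ("stable: %s" % tail[0][name]
                   if all(state[name] == tail[0][name] for state in tail)
                   else "pathological: never stabilises")
            for name in sentences}

for name, verdict in classify(make_sentences()).items():
    print(name, "->", verdict)

The flag itself then becomes a predicate one can diagonalise against ("this
sentence is either pathological or false"), which is exactly what forces
Abram's 2-meaningless, 3-meaningless, and so on.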





Re: MGA 3

2008-11-26 Thread Michael Rosefield
There's a quote you might like, by Korzybski: "That which makes no
difference _is_ no difference."

--
- Did you ever hear of "The Seattle Seven"?
- Mmm.
- That was me... and six other guys.



Re: MGA 1

2008-11-26 Thread Stathis Papaioannou

2008/11/25 Kory Heath <[EMAIL PROTECTED]>:

> The answer I *used* to give was that it doesn't matter, because no
> matter what "accidental order" you find in Platonia, you also find the
> "real order". In other words, if you find some portion of the digits
> of PI that "seems to be" following the rules of Conway's Life, then
> there is also (of course) a Platonic object that represents the
> "actual" computations that the digits of PI "seem to be" computing.
> This is, essentially, Bostrom's "Unification" in the context of
> Platonia. It doesn't matter whether or not "accidental order" in the
> digits of PI can be viewed as conscious, because either way, we know
> the "real order" exists in Platonia as well, and multiple
> "instantiations" of the same pain in Platonia wouldn't result in
> multiple pains.
>
> I'm uncomfortable with the philosophical vagueness of some of this. At
> the very least, I want a better handle on why Unification is correct
> and Duplication is not in the context of Platonia (or why that
> question is confused, if it is).

I'd agree with your first paragraph quoted above. It isn't possible to
introduce, eliminate or duplicate Platonic objects; they're all just
there, eternally.



-- 
Stathis Papaioannou




Re: MGA 2

2008-11-26 Thread Stathis Papaioannou

2008/11/26 Kory Heath <[EMAIL PROTECTED]>:
>
>
> On Nov 24, 2008, at 5:40 PM, Stathis Papaioannou wrote:
>> The question turns on what is a computation and why it should have
>> magical properties. For example, if someone flips the squares on a
>> Life board at random and accidentally duplicates the Life rules does
>> that mean the computation is carried out?
>
> I would say no. But of course, the real question is, "Why does it
> matter?" If I'm reading you correctly, you're taking the view that
> it's the pattern of bits that matters, not what created it (or
> "caused" it, or "computed it", etc.)

Yes. Suppose one of the components in my computer is defective but,
with incredible luck, is outputting the appropriate signals due to
thermal noise. Would it then make sense to say that the computer isn't
"really" running Firefox, but only pretending to do so, reproducing
the Firefox behaviour but lacking the special Firefox
qualia-equivalent?

> It would help me if I had a clearer idea of how you view
> consciousness. I assume that, for you, if someone flips the squares on
> a Life board at random and creates the expected "chaos", there's no
> consciousness there, but that there are certain configurations that
> could arise (randomly) that you would consider conscious. I assume
> that these patterns would show some kind of regularity - some kind of
> law-like behavior.

In the first instance, yes. But then the problem arises that under a
certain interpretation, the chaotic patterns could also be seen as
implementing any given computation. A common response to this is that
although it may be true in a trivial sense, as it is true that a block
of marble contains every possible statue, it is useless to define
something as a computation unless it can process information in a way
that interacts with its environment. This seems reasonable so far, but
what if the putative computation is of a virtual world with conscious
observers? The trivial sense in which such a computation can be said
to be hiding in chaos is no longer trivial, as I see no reason why the
consciousness of these observers should be contingent on the
possibility of interaction with the environment containing the
substrate of their implementation. My conclusion from this is that
consciousness, in general, is not dependent on the orderly physical
activity which is essential for the computations that we observe.
Rather, consciousness must be a property of the abstract computation
itself, which leads to the conclusion that the physical world is
probably a virtual reality generated by the big computer in Platonia,
since there is no basis for believing that there is a concrete
physical world separate from the necessarily existing virtual one.

> It's not easy for me to explain why I think it matters what kind of
> process (or in Platonia, what kind of abstract computation) generated
> that order. But it's also not easy for me to understand the
> alternative view. During those stretches of time when the random field
> of bits is creating a pattern that you would call conscious, what do
> you *mean* when you say it's conscious? By definition, you can't mean
> anything about how it's reacting to its environment, or that it's
> doing something "because of" something else, etc.

I know what I mean by consciousness, being intimately associated with
it myself, but I can't explain it.

>> I think there is a partial zombie problem regardless of whether
>> Unification or Duplication is accepted.
>
> Can you elaborate on this? What partial zombie problem do you see that
> Unification doesn't address?

If by "Unification" you mean the idea that two identical brains with
identical input will result in only one consciousness, I don't see how
this solves the conceptual problem of partial zombies. What would
happen if an identical part of both brains were replaced with a
non-concious but otherwise identically functioning equivalent?

> And do you think that the move away from
> "physical reality" to "mathematical reality" solves that problem? If
> so, how?

The Fading Qualia argument proves functionalism, assuming that the
physical behaviour of the brain is computable (some people, like Roger
Penrose, dispute this). Functionalism then leads to the conclusion that
consciousness isn't dependent on physical activity, as discussed in
the recent threads. So, either functionalism is wrong, or
consciousness resides in the Platonic realm.


-- 
Stathis Papaioannou
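
For concreteness, a toy sketch in Python of the Life-board question at the
top of this message (the board size, seed and flip model are illustrative
assumptions): one update follows the Life rules; the other flips cells at
random, and only by astronomical luck could it reproduce the rule-driven
successor:

import random

SIZE = 8  # small toroidal board

def life_step(board):
    # One synchronous step of Conway's Life on a toroidal SIZE x SIZE board.
    new = [[False] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            n = sum(board[(i + di) % SIZE][(j + dj) % SIZE]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            new[i][j] = n == 3 or (board[i][j] and n == 2)
    return new

def random_step(board):
    # Ignore the rules entirely: every cell becomes a fresh coin toss.
    return [[random.random() < 0.5 for _ in range(SIZE)] for _ in range(SIZE)]

random.seed(0)
board = [[random.random() < 0.3 for _ in range(SIZE)] for _ in range(SIZE)]
computed = life_step(board)

# For random flips to "accidentally duplicate the Life rules" for one step,
# all SIZE*SIZE cells must come out right: probability 2**-64 on this board.
trials = 100_000
hits = sum(random_step(board) == computed for _ in range(trials))
print("random flips matched the Life rules in %d of %d trials" % (hits, trials))

The question in dispute is whether, on the astronomically rare run where the
random process does match, the computation has thereby been carried out.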




MGA 3

2008-11-26 Thread Bruno Marchal
MGA 3

It is the last MGA!

I realize MGA is complete, as I thought it was, but I was doubting
this recently. We don't need to refer to Maudlin, and MGA 4 is not
necessary.
Maudlin 1989 is an argument independent of the 1988 Toulouse argument
(which I present here).
Note that Maudlin's very interesting "Olympization technique" can be
used to defeat a wrong form of MGA 3, that is, a wrong argument for
the assertion that the movie cannot be conscious (the argument that
the movie lacks the counterfactuals). Below is a hopefully correct
(if not very simple) argument. (I use Maudlin sometimes when people
give this incorrect form of MGA 3, and this is probably what made me
think Maudlin had to be used at some point.)



MGA 1 shows that Lucky Alice is conscious, and MGA 2 shows that the
"luckiness" feature of the MGA 1 experiment was a red herring. We can
construct, from MEC+COMP, a home-made lucky-rays generator, and use
it at will. If we accept digital mechanism, in particular Dennett's
principle that neurons have no intelligence, still less prescience,
*together with* the supervenience principle, we have to accept that
Alice's conscious dream experience supervenes on the projection of
the movie of her brain activity.

Let us now show that Alice's consciousness *cannot* supervene on that
*physical* movie projection.



I propose two (deductive) arguments.

1)

Mechanism implies the following tautological functionalist principle:
if, for some range of activity, a system does what it is supposed to
do, and this before and after a change is made in its constitution,
then the change does not change what the system is supposed to do,
for that range of activity.
Example:
- A car is supposed to break down, but only if the driver goes faster
than 90 miles/h. Pepe Pepito NEVER drives faster than 80 miles/h.
Then the car does what it is supposed to do, with respect to the
range of activity defined by Pepe Pepito.
- Claude bought a 1000-processor computer. One day he realized that
he used only 990 processors for his type of activity, so he decided
to get rid of the 10 useless processors. And indeed the machine will
satisfy Claude forever.

- Alice has (again) a math exam. Theoreticians have correctly
predicted that in this special circumstance she will never use
neurons X, Y and Z. Now Alice goes (again, again) to this exam in the
same condition, but with neurons X, Y and Z removed. Again, not only
will she behave as if she passed her exam, but her consciousness,
with both MEC *and* MAT, still continues.
The idea is that if something is not useful, for an active process to
go on, for some range of activity, then you can remove it, for that
range of activity.

OK?

Now, consider the projection of the movie of the activity of Alice's
brain, "the movie graph".
Is it necessary that someone look at that movie? Certainly not. No
more than it is necessary that someone look at your reconstitution in
Moscow for you to be conscious in Moscow after a teleportation. All
right? (With MEC assumed, of course.)
Is it necessary to have a screen? Well, the range of activity here is
just one dynamical description of one computation. Suppose we make a
hole in the screen. What goes in and out of that hole is exactly the
same, with the hole and without the hole. For that unique activity,
the hole in the screen is functionally equivalent to the subgraph
which the hole removed. Clearly we can make a hole as large as the
screen, so there is no need for a screen.
But this reasoning goes through if we make the hole in the film
itself. Reconsider the image on the screen: with a hole in the film
itself, you get a "hole" in the movie, but everything which enters
and goes out of the hole remains the same, for that (unique) range of
activity. The "hole" trivially has the same functionality as the
subgraph whose special behavior was described by the film. And this
is true for any subpart, so we can remove the entire film.

Does Alice's dream supervene (in real time and space) on the  
projection of the empty movie?

Remark.
1° Of course, this argument can be summed up by saying that the movie
lacks causality between its parts, so that it cannot really be said
that it computes anything, at least physically. The movie is just an
ordered record of computational states. This is neither a physical
computation, nor an (immaterial) computation where the steps follow
relative to some universal machine. It is just a description of a
computation, already existing in the Universal Deployment.
2° Note this: if we take into consideration the relative destiny of
Alice, and supposing one day her brain broke down completely, she has
a better chance of surviving through "holes in the screen" than
through "holes in the film". The film contains the relevant
information to reconstitute Alice from her brain description,