On 22 Jul 2012, at 21:10, Stephen P. King wrote:

On 7/22/2012 7:24 AM, Bruno Marchal wrote:

 On 21 Jul 2012, at 20:04, Stephen P. King wrote:

On 7/21/2012 7:57 AM, Bruno Marchal wrote:
Hi Stephen,

I appreciate Louis Kauffman very much, including that paper. But I don't see your point. Nothing there seems to pose any problem for comp or its consequences.

Why not read the MGA threads directly, and address the points specifically?

    I already did. My contention is that computational universality is NOT the separation of computations from physical systems; it is the independence of a given computation from any one particular physical system.

Computational universality is an arithmetical notion. You don't need UDA to separate it from physics; you need only a good intro to computer science. This criticism is wrong from the very start.

 Dear Bruno,

     Could you be more specific on what they are wrong about and how?

You might wait until I come back on this on the FOAR list. I have already explained this here, but you can also read any textbook on theoretical computer science, or any paper: the notion of universality can be defined in arithmetic and has nothing to do with physics.
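To make this concrete, here is a minimal sketch in Python of a "universal" interpreter for a toy counter-machine language (the instruction set and the encoding are invented purely for the illustration). The only point is that the relation "program e applied to input x gives y" is defined on symbols and numbers alone, with no reference to any physical substrate:

```python
def universal(program, x):
    """Run an encoded program on input x.

    A program is a tuple of instructions over registers r[0], r[1], ...:
      ('inc', i)        r[i] += 1
      ('dec', i)        r[i] -= 1 if r[i] > 0
      ('jz',  i, k)     jump to instruction k if r[i] == 0
      ('halt',)         stop; the result is r[0]
    """
    r = {0: x}                      # register 0 holds the input
    pc = 0                          # program counter
    while True:
        op = program[pc]
        if op[0] == 'inc':
            r[op[1]] = r.get(op[1], 0) + 1
            pc += 1
        elif op[0] == 'dec':
            if r.get(op[1], 0) > 0:
                r[op[1]] -= 1
            pc += 1
        elif op[0] == 'jz':
            pc = op[2] if r.get(op[1], 0) == 0 else pc + 1
        elif op[0] == 'halt':
            return r.get(0, 0)

# Example: a program that adds 3 to its input, run "inside" the universal one.
add3 = (('inc', 0), ('inc', 0), ('inc', 0), ('halt',))
assert universal(add3, 4) == 7
```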





The independence of a given computation from any particular physical system is obviously part of the comp assumption, and should not be confused with the impossibility for any physical system to capture or produce consciousness, which is related to the mathematical and the theological by comp, and which is the consequence of UDA, including MGA.

By addressing MGA, I meant that you should quote that text, and that text only, and tell me where you disagree, and for what reason.

    Let me quote some previous arguments by some others that are making the same case:

http://old.nabble.com/Re%3A-Movie-Graph-Argument-p32993663.html

You would do better to quote the answer I gave to them. You should try to make up your own mind, so as to be able to ask about a specific point you feel is mistaken, without any interpretation, and still less any philosophical beliefs, except those needed (like comp) for the sake of the argument.




Re: Movie Graph Argument
by pedro7 Dec 17, 2011; 07:08am 

On Dec 17, 4:39 pm, Russell Standish <li...@...>  wrote:

> On Fri, Dec 16, 2011 at 08:26:21PM -0800, Pierz wrote: 
> 
> ...snip... 
> 
> > The problem is even deeper than this, however. How does the system 
> > ‘know’ when two locations should be bilocated? This works OK for a 
> > single copy of Klara, since she is a static system. But if she must 
> > physically interact with all the previous editions of herself further 
> > back in the calculation chain, then she will be forced to ‘build’ 
> > pipes on the go, a ridiculously contrived procedure that totally 
> > vitiates the idea of a mindlessly proceeding, inert system. And how 
> > does Klara (or rather, Olympia) remember which path she has followed 
> > in order to know which trough to drain? New mechanisms must be devised 
> > which effectively mean retaining the activity of previous Klaras in 
> > the chain and are no different from a form of backtracking. 
> 
> My understanding is that to construct Olympia, we take n copies of 
> Klara, and run each copy to step i of the program, where i=1..n-1. Then 
> construct the sequence of water troughs such that they are equal to 
> that of K_i at step i. We also connect K_i to Olympia at that point, 
> ready to take over in the event of a counterfactual being true. 
> 
> I don't think the issue of pipes is a problem - we  can assume each 
> trough in state i is connected to the troughs of states i-1 such 
> that when the armature moves through to state i, it closes a valve 
> connecting the troughs to the previous state's troughs. 
> 
> It may seem complex, but it is mere complication, not complexity, if 
> you understand the difference. 
> 
> > If Maudlin’s argument is a foundation of the UDA, then it seems to me 
> > the UDA is on shaky ground, though I have yet to investigate the MGA 
> > in depth. People talk about the Movie Graph Argument, but the links 
> > provided refer to Alice and a distant supernova with lucky rays that 
> > substitute for functional neurons. I don’t see a connection to the 
> > idea of a recording or a filmed graph. Can someone enlighten me? 
> 
> Maudlin's argument has been compared with the MGA, which is step 8 of 
> the UDA. The previous steps are independent of Maudlin. 
> 
> Olympia can be compared with a recording of the computation. That is 
> the "filmed graph" (aka movie graph). 
> 
> -- 
> 
> ---------------------------------------------------------------------------
> Prof Russell Standish                  Phone 0425 253119 (mobile) 
> Principal, High Performance Coders 
> Visiting Professor of Mathematics      hpco...@... 
> University of New South Wales          http://www.hpcoders.com.au
> ---------------------------------------------------------------------------

> Maudlin's argument has been compared with the MGA, which is step 8 of 
> the UDA. The previous steps are independent of Maudlin. 

I understand that, but all the steps are necessary to  support the 
argument. If consciousness supervenes only on physical computation, 
then one requires a physical instantiation of the UD, not a purely 
arithmetical one. 

But the step 8 point is precisely that consciousness cannot supervene on the physical computation only.




> My understanding is that to construct Olympia, we take n copies of 
> Klara, and run each copy to step i of the program, where i=1..n-1. Then 
> construct the sequence of water troughs such that they are equal to 
> that of K_i at step i. We also connect K_i to Olympia at that point, 
> ready to take over in the event of a counterfactual being true. 

Invalid because of the infinite regress problem. How can we  run the 
program on the individual Klaras without connecting them to the 
Olympia in the first place?

?

That is supposed to have been done.




The Klaras cannot calculate anything 
without the counterfactual mechanism of all the other Klaras ensuring 
they don't go wrong.

The Klaras are built to keep the counterfactuals correct, but without any change in the physical activity of Olympia, so if you accept the 323 principle, you can already abandon the physical supervenience at this stage.



 If all the Klaras have already been run somehow 
so the troughs prior to the branch onto the active Klara contain the 
calculated values, then there is no need to run Olympia at all. The 
state of the last Klara already contains the output of the calculation 
and we can discard Olympia and just say that we already calculated the 
value in the past. This makes a mockery of the entire elaborate 
mechanism Maudlin postulates and the business about inert parts and so 
on is irrelevant. I don't think that saying that a live calculation 
can always be replaced by one that was completed in the past solves 
anything. Certainly consciousness (or a computer) may draw on the 
results of completed calculations in order to speed up its work (a 
computer doesn't need to recalculate the value of pi every time it 
needs that constant), but it cannot solve every problem that  way, 
obviously! A computer game may pre-render an explosion made  by 
computing hundreds of thousands of particles, as a shortcut, but it 
cannot pre-render every possible game and just branch into  the 
relevant branch of that movie as required. Unless you grant it 
infinite calculation resources in the past and none in the present, an 
abject sophistry. 

The results of the computations are not relevant, as physical supervenience has to associate
consciousness with a process, not with the result of a computation.



I can't find anything in Maudlin's paper that suggests the method you 
propose - pre-running every copy of Klara as if it had dealt  with all 
prior counterfactuals.

Maudlin builds the Klara from a pre-running of Olympia.


Each copy is merely another dumb Klara ready to go 
wrong the next instant. That is both essential to the argument, and 
its fatal flaw.
 ***

I don't see any flaw here. If someone else understands this, please help to convey it to Stephen.
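For what it is worth, here is a minimal toy sketch in Python of the construction Russell describes above (the names and the step function are invented for the illustration, and the water-trough mechanics are abstracted to lists of states): each K_i is pre-run to step i, Olympia merely replays the recording on the input that actually occurs, and K_i takes over only on a counterfactual. On the nominal run the activity is pure replay, identical with or without the Klaras attached, which is the point at issue.

```python
def klara_step(state, inp):
    """The 'live' machine: one genuine computation step (a stand-in transition)."""
    return state + inp

def run_klara(initial, inputs):
    """Run a Klara and record its full trajectory of states."""
    trace = [initial]
    for inp in inputs:
        trace.append(klara_step(trace[-1], inp))
    return trace

def build_olympia(initial, nominal_inputs):
    """Olympia = the recorded trajectory on the nominal input (the 'troughs'),
    plus one pre-run Klara per step, attached only to handle counterfactuals."""
    recording = run_klara(initial, nominal_inputs)
    klaras = [recording[i] for i in range(len(nominal_inputs))]   # K_i at step i
    return recording, klaras

def run_olympia(recording, klaras, nominal_inputs, inputs):
    """Replay the recording; hand over to K_i only on a counterfactual input."""
    activity = []                                   # what Olympia 'physically' does
    for i, inp in enumerate(inputs):
        if inp == nominal_inputs[i]:
            activity.append(('replay', recording[i + 1]))         # inert replay
        else:
            takeover = run_klara(klaras[i], inputs[i:])           # K_i takes over
            return activity + [('klara', s) for s in takeover[1:]]
    return activity

nominal = [1, 0, 2, 1]
rec, ks = build_olympia(0, nominal)
print(run_olympia(rec, ks, nominal, nominal))        # nominal input: pure replay
print(run_olympia(rec, ks, nominal, [1, 0, 5, 1]))   # counterfactual: K_2 takes over
```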




    Here is another: http://old.nabble.com/Re%3A-Movie-Graph-Argument-p32978306.html


Re: Movie Graph Argument
by Joseph Knight Dec 14, 2011; 05:09pm
On Wed, Dec 14, 2011 at 1:51 PM, meekerdb <meekerdb@...> wrote:

On 12/14/2011 10:40 AM, Joseph Knight wrote:


On Tue, Dec 13, 2011 at 11:32 PM, Kim Jones <kimjones@...> wrote:

Any chance someone might précis for me/us dummies out here in maybe 3 sentences what Tim Maudlin's argument is? Nothing too heavy - just a quick refresher.


I'll try, but with a few more than 3 sentences. Suppose the consciousness of a machine can be said to supervene on the running of some program X. We can instead have the machine run a constant program Y that gives the same output as X for one given input. In other words, it cannot "handle" counterfactual inputs, because it is just a constant program that does the same thing no matter what. Surely such a machine is not conscious. It would be like, if I decided "I will answer A B D B D D C A C..." in response to the Chemistry test I am about to run off and take, and happened to get them all correct, I wouldn't really know Chemistry, right?

But I think Russell has reasonably questioned this.  You say X wouldn't know chemistry.  But that's a matter of intelligence, not necessarily consciousness.  We already know that computers can be intelligent, and there's nothing mysterious about intelligence "supervening" on machines.  Intelligence includes returning appropriate outputs for many different inputs.  But does consciousness?

I was really just using my Chemistry test as an imperfect analogy to the machine running Y being conscious (or not), so it doesn't affect the rest of the argument. But I see your point. Would you argue that a constant program (giving the same output no matter the input) can be conscious in principle? Maudlin assumes that such a program cannot be conscious, in his words, "it would make a mockery of the computational theory of mind." I am agnostic. In my opinion the Filmed Graph argument is more convincing than Maudlin, because with Maudlin one can still fall back to the position "consciousness can in principle supervene on a constant program".

But this would contradict comp, and finish the reductio ad absurdum, as a program is supposed to make a sophisticated computation, and I would say no to any doctor who would use the argument above to replace my brain with a constant machine.






(For  those interested, here is the article itself)
 


Brent






So consciousness doesn't supervene on Y. But Maudlin (basically) shows that you can just add some additional parts to the machine that handle the counterfactuals as needed. These extra parts don't actually do anything, but their "presence" means the machine now could exactly emulate program X, i.e., is conscious. So a computationalist is forced to assert that the machine's consciousness supervenes on the presence of these extra parts, which in fact perform no computations at all. 
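A toy restatement of this in Python, with invented names and no claim about Maudlin's actual machinery: X is the genuine program, Y is a constant playback that happens to agree with X on the one input sequence that actually occurs, and Y_plus is Y with "extra parts" bolted on which would handle the counterfactuals but never fire on that input.

```python
def X(inputs):
    """The 'real' computation."""
    return [x * x for x in inputs]

FIXED = [2, 5, 7]                            # the one input that actually occurs
RECORDING = X(FIXED)                         # pre-computed once, off-line

def Y(inputs):
    """Constant program: plays back the recording no matter what the input is."""
    return list(RECORDING)

def Y_plus(inputs):
    """Y plus counterfactual-handling parts that stay inert on FIXED."""
    if inputs == FIXED:
        return Y(inputs)                     # identical activity to plain Y
    return X(inputs)                         # extra parts wake up only here

# On the input that actually occurs, Y and Y_plus do exactly the same thing.
assert Y(FIXED) == Y_plus(FIXED) == X(FIXED)
# They differ only on inputs that never happen.
assert Y([3, 3, 3]) != X([3, 3, 3]) and Y_plus([3, 3, 3]) == X([3, 3, 3])
```

On the one run that actually takes place nothing in the activity changes; the two machines differ only counterfactually, which is exactly where the supervenience question bites.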



I think what Russell said about this earlier (i.e., that in a multiverse the extra parts are doing things, so consciousness then appears at the scale of the multiverse) is fascinating. But I am out of time. Hope this helped. I would recommend reading the original paper for the details.
--
   The idea is that we can obtain a fully explanatory model of interaction between multiple minds that allows us to reconstruct physics *and* that this explanation need not have anything to do with physicality at all: it "all is just the dreams of numbers". It is this thesis that I am claiming is wrong, for it ignores the necessity of relatively persistent media on and in which representations of the models can occur. If you cannot write a description of a theory or model such that some other entity can read it, how do you assume that it is real? Reality requires incontrovertibility, the lack of contradiction for all involved. Thus there has to be a means by which a plurality of observers, each autonomous in thought and action (at some level), can meet in a common medium. That common medium is, I claim, our physical world, and it cannot be abstracted out of existence.

Good, but the last claim would be a problem for comp only if you mean "primitive physical world". If not, you are restating the conclusion of UDA. We have to derive that physical world from the coherence conditions on the machines' dreams in arithmetic. We do not abstract it out of existence: we justify its necessity from arithmetic (and comp).

Keep in mind that UDA reformulates the mind-body problem, and shows that if comp is true, then the solution cannot be coherent with physicalism.







The former separation is categorical, in that one has separate categories with no connection between them whatsoever. The latter is a duality between a pair of categories, in that for the class of equivalent computations there is at least one physical system that can implement it, and for a class of equivalent physical systems there is at least one computation that can simulate it. (Equivalence between physical systems is defined mathematically in terms of homologies such as diffeomorphisms.) This idea was first pointed out by Leibniz and is known as Leibniz equivalence. See http://plato.stanford.edu/entries/spacetime-holearg/Leibniz_Equivalence.html

This makes sense in Aristotelian physics, which, and that is the point, does not make sense if comp is true. Unless you can find a flaw. Maybe you have a problem with what a proof consists in. Proofs do not depend on the interpretation of the terms and formulas occurring in them. A proof in math, and in applied math, is always complete in itself. Keep in mind that comp does not presuppose any theory of physics. It assumes only that physical reality is at least Turing complete. If not, asking for an artificial digital brain would not make sense.


     The flaw is the assumption of separation = independence.

Then you have to work harder on what is wrong with step 8.



You are assuming that Platonism is an immaterial monism theory; it is actually a weak dualism theory, as it requires a plurality of entities "to whom" the meanings of the Forms obtain. Yes, there must be some ontological level where all of the "separate" entities are united and singular. The materialist monist would claim that the singularity is in the physical world and thus nothing else exists but the physical world. The immaterialist and ideal monist claim that all is in the mind of God. I claim that the Mind and the Body of God are degenerate at the ultimate singularity level; they are indistinguishable, thus the ground from which all emerges is neutral and singular.

I do the same, and it is already done (not a project), but the neutral ontology (neutral in the philosophy of mind sense) is given by any Turing complete theory, like arithmetic. This makes comp testable, etc.

You are too vague with your "neutral monism" for me to see if we are close on this. You seem unable to put it into a theory, which makes it impossible to progress, and makes your "invalidity" argument not communicable. Mere existence is not a theory.

Bruno


http://iridia.ulb.ac.be/~marchal/
