Re: MGA 3

2008-11-30 Thread Russell Standish

On Sat, Nov 29, 2008 at 10:11:30AM +0100, Bruno Marchal wrote:
 
 
 On 28 Nov 2008, at 10:46, Russell Standish wrote:
 
 
  On Wed, Nov 26, 2008 at 10:09:01AM +0100, Bruno Marchal wrote:
  MGA 3
 
  ...
 
 But this reasoning goes through if we make the hole in the film
 itself. Reconsider the image on the screen: with a hole in the film
 itself, you get a hole in the movie, but everything which enters
 and goes out of the hole remains the same, for that (unique) range of
 activity. The hole trivially has the same functionality as the
 subgraph functionality whose special behavior was described by the
 film. And this is true for any subparts, so we can remove the entire
 film itself.
 
 
  I don't think this step follows at all. Consciousness may supervene on
  the stationary unprojected film,
 
 This, I don't understand. And, btw, if that is true, then the physical  
 supervenience thesis is already wrong. The
 physical supervenience thesis asks that consciousness is associated in  
 real time and space with the activity of some machine (with MEC).

I am speaking as someone unconvinced that MGA2 implies an
absurdity. MGA2 implies that the consciousness is supervening on the
stationary film.

BTW - I don't think the film is conscious by virtue of the
counterfactuals issue, but that's a whole different story. And
Olympization doesn't work, unless we rule out the multiverse.

 
  Why does the physical supervenience require that all instantiations of
  a consciousness be dynamic? Surely, it suffices that some are?
 
 
 What do you mean by an instantiation of a dynamical process which is
 not dynamic? Even a block universe describes a dynamical process, or a
 variety of dynamical processes.
 

A block universe is nondynamic by definition. But looked at another
way (ie from the inside), it is dynamic. It neatly illustrates why
consciousness can supervene on a stationary film (because it is
stationary only when viewed from the outside). The film, however, does need
to be sufficiently rich, and also needs to handle counterfactuals
(unlike the usual sort of movie we see, which has only one plot).

 
 
 
 
  c) Eliminate the hypothesis there is a concrete deployment in the
  seventh step of the UDA. Use UDA(1...7) to define properly the
  computationalist supervenience thesis. Hint: reread the remarks  
  above.
 
  I have no problems with this conclusion. However, we cannot eliminate
  supervenience on phenomenal physics, n'est-ce pas?
 
 We cannot eliminate supervenience of consciousness on what we take as
 other persons, indeed. Of course phenomenal physics is a first person
 subjective creation, and it helps to entangle our (abstract)
 computational histories. That is the role of a brain. It does not
 create consciousness; it only raises the probability for that
 consciousness to be able to manifest itself relative to other
 consciousnesses. But consciousness can rely, with MEC, only on the
 abstract computation.
 

The problem is that eliminating the brain from phenomenal experience
makes that experience even more highly probable than without. This is
the Occam catastrophe I mention in my book. Obviously this contradicts
experience. 

Therefore I conclude that supervenience on a phenomenal physical brain
is necessary for consciousness. I speculate a bit that this may be due
to self-awareness, but don't have a good argument for it. It is the
elephant in the room with respect to pure MEC theories.

 Sorry for being a bit short, I have to go,
 
 Bruno
 
 
 
 
 http://iridia.ulb.ac.be/~marchal/
 
 
 
 
 
-- 


A/Prof Russell Standish                    Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052                           [EMAIL PROTECTED]
Australia                                  http://www.hpcoders.com.au





Re: Lost and not lost?

2008-11-30 Thread Günther Greindl

Hey,

Kim Jones wrote:

 I think this idea is so momentous that I actually wish to compose a  
 piece of music - possibly a symphony - which seeks to represent this  
 idea in music.

That would be cool!

 Et pourquoi pas? Most of the great composers attempted to represent  
 the TRANSCENDENTAL in music. 

Yes. In Bach (for instance The Art of Fugue) I can hear it most 
clearly :-)


Concerning Bruno:
I believe you, more than any human whose  
 mind I have frotté (grazed? Rubbed against?) has a representation of  
 ultimate things.

I would like to second this opinion. I think Bruno is onto something 
deep :-) And certainly he has thought more about this than most (all?) 
people I know.

Cheers,
Günther




Re: MGA 2

2008-11-30 Thread Günther Greindl



Stathis Papaioannou wrote:

 I realise this is coming close to regarding consciousness as akin to the
 religious notion of a disembodied soul. But what are the alternatives?
 As I see it, if we don't discard computationalism the only alternative
 is to deny that consciousness exists at all, which seems to me
 incoherent.

ACK

But the differences are so enormous that one is again very far from 
religion. In religion, the soul is an essence of a person interfacing 
with a material body and usually exposed to some kind of judgement in an 
afterlife.

With COMP the soul - better: mind - is all there is - no material 
world, no essence, no judgements, just COMP. And it supervenes on - 
better: is (inside/outside view) - computations (see UDA for details ;-)

Cheers,
Günther




Re: Consciousness and free will

2008-11-30 Thread John Mikes
Bruno,
I wanted to submit some reflections to M.A. but you did it better.
Two words, however, I picked out:

*1. bifurcate*
I consider it a human narrowness to expect anything *to split in TWO*
(only) - Nature (existence?) does not 'count'.
It has unlimited variants, and the choices come under the 2nd word I
picked out:

*2. (free?) WILL*
*'Free Will'* is the invention of the religious trend to invoke
responsibility and punishment.
In 'my' kind of position, even 'Will' implies some personal(?) decision
instead of a deterministic *consequence of relations all over* acting
upon the observed change of the observed item.

As for the elusive *consciousness*? In 'my' attempt to find some
generalized and widened identification for all those different things
people refer to - as is said, *'everybody knows what it is'* (but many
in *different ways* <g>) - I ended up with this ID (first published
1992):
   *Acknowledgement of and response to information (changes?)*
(considering it a process rather than only an 'awareness'). I posted
this on several lists for psych, mind, consciousness, even diverse
complexities, and did NOT get a refusal over the 15 years. No
acceptance either. So I thought: Si tacent, clamant (or dormiunt?)
I hold one thing for sure: Ccness (whatever it may be) is NOT a 'thing'
callable 'physical'.

(I feel M.A. tacitly assigns to the universe, the program, or whatever,
some god-like authoritative decision-making role.)

John M

On Sat, Nov 29, 2008 at 3:49 PM, Bruno Marchal [EMAIL PROTECTED] wrote:


  On 29 Nov 2008, at 16:45, M.A. wrote:

  *(Assuming MEC/Comp.and MWI) If the computational universe which I
 experience*



 Assuming MEC I would say *you* experience an infinity of computational
 histories. The term universe is far too ambiguous (now).



  *is a single instance of a vast array of similar universes playing out
 every possible variation of the initial axioms, then no one universe could
 depart from its predetermined program since in so doing it would alter its
 program and duplicate that of another universe thus spoiling the overall
 mission of implementing every possible variation.*


 Histories can bifurcate in such a way that you will find yourself in both
 histories (you, seen from some third person point of view). Each history
 is deterministic, but your future is uncertain.



 *It follows that each program-universe is completely deterministic *


 All right.



  *and that consciousness is merely an observing passenger inside the
 program;*



 At some point I could define consciousness as the state of
 (instinctively at first) betting on a history. This will speed you up
 relative to your current histories, and enlarge the set of your possible
 continuations. As an example, becoming aware that an asteroid is coming
 nearby makes it possible for you to envisage a set of possible decisions,
 which can themselves increase your probability of survival.



  * thus each program that contains a thinking entity is in a schizophrenic
 condition. *



 Come on!



  *This is because consciousness--which is part of the program--is capable
 of judging the actions of the program. When the program acts in a way
 approved by it, *


 by it?


  *the thinker is encouraged to believe that its free will produced the
 action. *


 ?



  *But when the program acts in a manner repugnant to it,*



 to who?


  *the conscious observer, refusing to give up the notion of free will,
 explains the lapse by rationalizations such as: God, luck, destiny,
 possession, hallucination etc. *


 As far as I understand, the program here acknowledges its ignorance. If,
 out of too much pride, it doesn't, then it raises the probability of some
 catastrophes.



  *So every consciousness, bearing burdensome memories of repugnant
 actions, must either surrender the possibility of free will (fatalism),*


 Wrongly, I would say.



  *accept the intercession of supernatural powers (theology), *



 It could just accept that it belongs to a collection of deep unknown
 histories, and many other unknown things, some even not nameable (and deadly
 if named). It can console itself by pointing to its *partial* control.

 Note also that it is not really the program or the machine that thinks, but
 the person conveyed through that machine's computation, relative to its
 most probable (and local) computational histories.



  *or theorize an inaccessible part of itself that is able to override
 its purposes (Freud). *



 That is not entirely meaningless imo.



  *All of which implies a schism between consciousness and one of the
 following: the program, the universe or itself.*



 Here I agree. Universal machines are born to experience such a schism. We
 can come back to this. In its purest form it is a consequence of
 incompleteness. Every universal machine hides a mystery from itself, and
 the more the machine learns, the bigger that mystery becomes. (This is
 related to the gap between G and G*, for those who remember previous
 explanations.)

Re: MGA 3

2008-11-30 Thread Bruno Marchal


On 30 Nov 2008, at 04:23, Brent Meeker wrote:


 Bruno Marchal wrote:

 On 29 Nov 2008, at 15:56, Abram Demski wrote:

 Bruno,

 The argument was more of the type: removal of unnecessary and
 unconscious or unintelligent parts. Those parts have just no
 perspective. If they have some perspective playing a role in Alice's
 consciousness, it would mean we have not chosen the substitution
 level well. You are reintroducing some consciousness into the
 elementary parts here, I think.

 The problem would not be with removing individual elementary parts  
 and
 replacing them with functionally equivalent pieces; this obviously
 preserves the whole. Rather with removing whole subgraphs and
 replacing them with equivalent pieces. As Alice-in-the-cave is
 supposed to show, this can remove consciousness, at least in the  
 limit
 when the entire movie is replaced...


 The limit is not relevant. I agree that if you remove Alice, you
 remove any possibility for Alice to manifest herself in your most
 probable histories. The problem is that, within the activity range of the
 projected movie, removing a part of the graph changes nothing. It
 changes only the probability of recovering Alice from her history in,
 again, your most probable history.

 Isn't this reliance on probable histories assuming some physical
 theory that is not in evidence?



Not at all. I have defined a history by a computation as seen from a
first person (plural or not).
Of course - I guess I should perhaps insist on this - by computation I
always mean the mathematical object; it makes sense only with respect
to some universal machine, and I have chosen elementary arithmetic as
the primitive one.

Although strictly speaking the notion of computable is an epistemic
notion, it happens that Church's thesis makes it equivalent to a purely
mathematical notion, and this is used to make the notion of probable
history a purely mathematical notion (once we have a mathematical notion
of first person; this is simple in the thought experiment (memory,
diary, ...), and a bit more subtle in the interview (AUDA)).

A difficulty, in these correspondences, is that I am currently reasoning
with MEC and MAT, just to get the contradiction, but in many (most)
posts I reason only with MEC (having abandoned MAT).
After UDA, you can already understand that physical has to be
equivalent to probable history for those who followed the whole
UDA+MGA: physical has to refer to the most probable (and hopefully
sharable) relative computational history.
This is already the case with just UDA, if you assume both the
existence of a physical universe and a concrete UD running in that
concrete universe. MGA is designed to eliminate the assumptions of a
physical universe and of a concrete UD.






 There is no physical causal link
 between the experience attributed to the physical computation and the
 causal history of projecting a movie.

 But there is a causal history for the creation of the movie - it's a  
 recording
 of Alice's brain functions which were causally related to her  
 physical world.



Assuming MEC+MAT you are right indeed. But the causal history of the
creation of the movie is not the same computation or causal chain as
the execution of Alice's mind and Alice's brain during her original
dream. If you abstract away that difference, it means you already
don't accept the physical supervenience thesis, or, again, you are
introducing magical knowledge into the elementary parts running the
computation.
You can only forget the difference between those two computations by
abstracting from the physical part of the story. This means you are
using exclusively the computational supervenience. MGA should make
clear (but OK, I warned MGA is subtle) that consciousness has to be
related to the genuine causality or history. But it is that very
genuineness that physics can accidentally reproduce in a non-genuine
way, as with the brain-movie projection, making the physical
supervenience absurd.

It seems to me quasi-obvious that it is ridiculous to attribute
consciousness to the physical events of projecting the movie of a
brain. That movie gives a pretty detailed description of the
computations, but there is just no computation, nor even a genuine
causal relation between the states. Even a single frame is not a
genuine physical computational state, only a relative description of
one. In a cartoon, if you see someone throwing a ball at a window, the
description of the broken glass is not caused by the description of
someone throwing a ball. And nothing changes, at the moment of the
projection of the movie, if the cartoon was made from a similar real
filmed situation.
To attribute consciousness to the stationary (non-projected) film
immediately contradicts the supervenience thesis, of course.

All this is a bit complex because we have to take properly into account
the distinction between


Re: MGA 3

2008-11-30 Thread Bruno Marchal
Abram,


 My answer would have to be, no, she lacks the necessary counterfactual
 behaviors during that time.

? The film of the graph lacks also the counterfactuals.



 And, moreover, if only part of the brain
 were being run by a recording


... which lacks the counterfactual, ...


 then she would lack only some
 counterfactuals,


I don't understand. The recording lacks all the counterfactuals. You  
can recover them from inert material, true, but this is true for the  
empty graph too (both in dream and awake situations).



 and so she would count as partially conscious.


Hmmm  Can she be conscious that she is partially conscious? I mean  
is it like after we drink alcohol or something?


Bruno


http://iridia.ulb.ac.be/~marchal/







Platonia and causality

2008-11-30 Thread Günther Greindl

Hi all,

Bruno, do you still keep a notion of causality and the likes in 
platonia? I have collected these snips from some recent posts:

Brent Meeker wrote:

 But is causality an implementation detail?  There seems to be an 
 implicit
 assumption that digitally represented states form a sequence just 
 because there
 is a rule that defines that sequence, but in fact all digital (and 
 other)
 sequences depend on causal chains.

Kory wrote:

  I have an intuition that causality
 (or its logical equivalent in Platonia) is somehow important for
 consciousness. You argue that the slide from Fully-Functional
 Alice to Lucky Alice (or Fully-Functional Firefox to Lucky Firefox)
 indicates that there's something wrong with this idea. However, you
 have an intuition that order is somehow important for consciousness.

But we must realise that causality is a concept that is deeply related 
(cognitively, in humans) to time and physical change.

But both time and space _emerge_ only from the inside view (1st person 
or 1st person shareable) in the sum over all computations.

In Platonia (viewed, for the time being, ludicrously and impossibly, 
from the outside) - there is no notion of time, space, sequentiality, 
before and after.

The very notion of causation must be one that arises only in the inside 
view, as a succession of consistent patterns.

In a sense, order (shareable histories) must arise from the Platonic 
Eternal Mess (chaos) - somehow along the lines of self-organization maybe:
http://en.wikipedia.org/wiki/Self-organization#Self-organization_in_mathematics_and_computer_science

In this sense, the computations would assemble themselves to 
consistent histories.

Bruno said:
 Even
 in Platonia consciousness does not supervene on description of the
 computation, even if those description are 100% precise and correct

Hmm, I understand the difference between description and computation in 
maths and logic, and also in real world, but I do not know if this still 
makes sense in Platonia - viewed from the acausal perspective outlined 
above. Well maybe in the sense that in some histories there will be 
platonic descriptions that are not conscious.

But in other histories those descriptions will be computations and 
conscious.

Cheers,
Günther







Re: MGA 3

2008-11-30 Thread Bruno Marchal

On 30 Nov 2008, at 11:57, Russell Standish wrote:


 On Sat, Nov 29, 2008 at 10:11:30AM +0100, Bruno Marchal wrote:


 On 28 Nov 2008, at 10:46, Russell Standish wrote:


 On Wed, Nov 26, 2008 at 10:09:01AM +0100, Bruno Marchal wrote:
 MGA 3

 ...

 But this reasoning goes through if we make the hole in the film
 itself. Reconsider the image on the screen: with a hole in the film
 itself, you get a hole in the movie, but everything which enters
 and goes out of the hole remains the same, for that (unique) range of
 activity. The hole trivially has the same functionality as the
 subgraph functionality whose special behavior was described by the
 film. And this is true for any subparts, so we can remove the entire
 film itself.


 I don't think this step follows at all. Consciousness may  
 supervene on
 the stationary unprojected film,

 This, I don't understand. And, btw, if that is true, then the  
 physical
 supervenience thesis is already wrong. The
 physical supervenience thesis asks that consciousness is associated  
 in
 real time and space with the activity of some machine (with MEC).

 I am speaking as someone unconvinced that MGA2 implies an
 absurdity. MGA2 implies that the consciousness is supervening on the
 stationary film.


? I could agree, but is this not absurd enough, given MEC and the
definition of the physical supervenience thesis?




 BTW - I don't think the film is conscious by virtue of the
 counterfactuals issue, but that's a whole different story. And
 Olympization doesn't work, unless we rule out the multiverse.


 Why does the physical supervenience require that all  
 instantiations of
 a consciousness be dynamic? Surely, it suffices that some are?


 What do you mean by an instantiation of a dynamical process which is
 not dynamic? Even a block universe describes a dynamical process, or a
 variety of dynamical processes.


 A block universe is nondynamic by definition. But looked at another
 way (ie from the inside), it is dynamic. It neatly illustrates why
 consciousness can supervene on a stationary film (because it is
 stationary only when viewed from the outside).

OK, but then you clearly change the physical supervenience thesis.


 The film, however does need
 to be sufficiently rich, and also needs to handle counterfactuals
 (unlike the usual sort of movie we see which has only one plot).


OK. Such a film could be said to be a computation. Of course you are  
not talking about a stationary thing, which, be it physical or  
immaterial, cannot handle counterfactuals.


 The problem is that eliminating the brain from phenomenal experience
 makes that experience even more highly probable than without. This is
 the Occam catastrophe I mention in my book. Obviously this contradicts
 experience.

 Therefore I conclude that supervenience on a phenomenal physical brain
 is necessary for consciousness.


It is vague enough so that I can interpret it favorably through MEC.


Bruno




 I speculate a bit that this may be due
 to self-awareness, but don't have a good argument for it. It is the
 elephant in the room with respect to pure MEC theories.

 Sorry for being a bit short, I have to go,

 Bruno




 http://iridia.ulb.ac.be/~marchal/






http://iridia.ulb.ac.be/~marchal/







Re: MGA 3

2008-11-30 Thread Günther Greindl

Hello Bruno,

I must admit you have completely lost me with MGA 3.

With MGA 1 and 2, I would say that, with MEC+MAT, the projection of the 
movie (and Lucky Alice in 1) is also conscious - because it supervenes 
on the physical activity.

MEC says: it's the computation that counts, not the substrate.

MAT says: we need some substrate to perform a computation. In MGA 1 and 
2 we have substrates (neurons, or an optical boolean graph, that perform 
the computation).

Now in MGA 3 you say:

 Now, consider the projection of the movie of the activity of Alice's 
 brain, the movie graph.
 Is it necessary that someone look at that movie? Certainly not. 

Agreed.

 Is it necessary to have a screen? Well, the range of activity here is 
 just one dynamical description of one computation. Suppose we make a 
 hole in the screen. What goes in and out of that hole is exactly the 
 same, with the hole and without the hole. For that unique activity, the 
 hole in the screen is functionally equivalent to the subgraph which the 
 hole removed. 

We can remove those optical boolean nodes which are not relevant for the 
caterpillar dream.

 Clearly we can make a hole as large as the screen, so no
 need for a screen.

but no! Then we wouldn't have a substrate anymore. You are dropping MAT 
at this step, not leading MEC+MAT to a contradiction.

 But this reasoning goes through if we make the hole in the film itself. 
 Reconsider the image on the screen: with a hole in the film itself, you 
 get a hole in the movie, but everything which enters and goes out of the 
 hole remains the same, for that (unique) range of activity. The hole 
 trivially has the same functionality as the subgraph functionality 
 whose special behavior was described by the film. And this is true for 
 any subparts, so we can remove the entire film itself.

We can talk about this part after I understand why you can drop our 
optical boolean network *grin*


Cheers,
Günther




Re: MGA 3

2008-11-30 Thread Abram Demski

Bruno,

No, she cannot be conscious that she is partially conscious in this
case, because the scenario is set up such that she does everything as
if she were fully conscious-- only the counterfactuals change. But, if
someone tested those counterfactuals by doing something that the
recording didn't account for, then she may or may not become conscious
of the fact of her partial consciousness-- in that case it would be
very much like brain damage.

Anyway, yes, I am admitting that the film of the graph lacks
counterfactuals and is therefore not conscious. My earlier splitting
of the argument into an argument about (1) and a separate argument
against (2) was perhaps a bit silly, because the objection to (2) went
far enough back that it was also an objection to (1). I split the
argument like that just because I saw an independent flaw in the
reasoning of (1)... anyway...

Basically, I am claiming that there is a version of COMP+MAT that MGA
is not able to derive a contradiction from. The version goes something
like this:

Yes, consciousness supervenes on computation, but that computation
needs to actually take place (meaning, physically). Otherwise, how
could consciousness supervene on it? Now, in order for a computation
to be physically instantiated, the physical instantiation needs to
satisfy a few properties. One of these properties is clearly some sort
of isomorphism between the computation and the physical instantiation:
the actual steps of the computation are represented in physical form.
A less obvious requirement is that the physical computation needs to
have the proper counterfactuals: if some external force were to modify
some step in the computation, the computation must progress according
to the new computational state (as translated by the isomorphism).
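Concretely, here is a toy rendering of that requirement (my own sketch: the
doubling rule, the modulus 97, and the numbers are arbitrary assumptions, not
part of the argument). A genuine instantiation recomputes each state from the
previous one, so an imposed change propagates forward; a recording merely
replays stored states, so it does not:

    def step(x: int) -> int:
        return (2 * x) % 97          # the abstract computation (an arbitrary toy rule)

    def genuine_run(x0: int, n: int, perturb=None):
        """A 'physical' instantiation that really computes: each state is
        derived from the previous one, so an externally imposed state
        propagates forward."""
        states = [x0]
        for i in range(1, n):
            states.append(step(states[-1]))
            if perturb and perturb[0] == i:      # external force overwrites state i
                states[i] = perturb[1]
        return states

    RECORDING = genuine_run(5, 10)               # a stored "movie" of the unperturbed run

    def replayed_run(n: int, perturb=None):
        """A recording: the right states appear in the right order, but a
        modified frame does not change what comes next."""
        states = list(RECORDING[:n])
        if perturb:
            states[perturb[0]] = perturb[1]      # this frame changes...
        return states                            # ...but later frames do not respond

    print(genuine_run(5, 10) == replayed_run(10))                  # True: identical traces
    print(genuine_run(5, 10, (4, 1)) == replayed_run(10, (4, 1)))  # False: counterfactuals differ

On the unperturbed history the two are indistinguishable; only the
counterfactual probe separates them, which is exactly the property the
recording is being denied here.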

--Abram

On Sun, Nov 30, 2008 at 12:51 PM, Bruno Marchal [EMAIL PROTECTED] wrote:
 Abram,

 My answer would have to be, no, she lacks the necessary counterfactual
 behaviors during that time.

 ? The film of the graph lacks also the counterfactuals.


 And, moreover, if only part of the brain
 were being run by a recording

 ... which lacks the counterfactual, ...

 then she would lack only some
 counterfactuals,

 I don't understand. The recording lacks all the counterfactuals. You can
 recover them from inert material, true, but this is true for the empty graph
 too (both in dream and awake situations).


 and so she would count as partially conscious.

 Hmmm  Can she be conscious that she is partially conscious? I mean is it
 like after we drink alcohol or something?

 Bruno

 http://iridia.ulb.ac.be/~marchal/



 





Re: MGA 3

2008-11-30 Thread Günther Greindl

Bruno,

I have reread MGA 2 and would like to add the following:

We have the

optical boolean graph (OBG) - this computes Alice's dream.
We make a movie of this computation.


Now we run it again, but in the OBG some nodes do not perform the 
computation correctly, BUT the movie _triggers_ the nodes, so in the end 
the computation is performed.

So, with MEC+MAT and ALL NODES broken, I say this:

a) If the OBG nodes MALFUNCTION, but their function is substituted by 
the movie (on/off), it is conscious.

b) If the OBG is broken in such a way that the nodes are not active 
anymore (no on/off, no signal passing), then there is no consciousness.



I think we can split the intuitions along these lines: if you assume 
that consciousness depends on activity along the edges, then Alice is 
conscious neither in a) nor in b), and then indeed I see why already 
MGA 2 leads to a problem with MEC+MAT.

But if I think that consciousness supervenes only on the correct 
lighting up of the nodes (not the edges!! - I don't need causality 
then, only the correct order), then a) would be conscious, b) not, and 
MGA 3 does not work if you take away my OBG (with the node intuition)!
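To make the two readings concrete, here is a minimal sketch (a toy of my own, 
not anything from MGA itself): a tiny boolean network run three ways - genuinely 
computed by the nodes, forced node-by-node by the recorded movie, and fully 
broken. The wiring, the node names and the NAND update rule are illustrative 
assumptions only.

    from typing import Dict, List

    GRAPH = {                       # node -> its two input nodes (a tiny fixed wiring)
        "n3": ("n1", "n2"),
        "n4": ("n2", "n3"),
        "n5": ("n3", "n4"),
    }

    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    def run_normal(state: Dict[str, bool], steps: int) -> List[Dict[str, bool]]:
        """Case 0: the nodes themselves compute - the 'genuine' run."""
        trace = [dict(state)]
        for _ in range(steps):
            state = dict(state)
            for node, (i, j) in GRAPH.items():   # sequential update, toy convention
                state[node] = nand(state[i], state[j])
            trace.append(dict(state))
        return trace

    def run_movie_driven(movie: List[Dict[str, bool]]) -> List[Dict[str, bool]]:
        """Case a): all nodes are broken, but the movie forces each node on/off
        at the right moment, so the same sequence of states appears."""
        return [dict(frame) for frame in movie]  # nodes light up, nothing is computed

    def run_broken(state: Dict[str, bool], steps: int) -> List[Dict[str, bool]]:
        """Case b): the graph is broken and nothing fires at all."""
        return [dict(state)] * (steps + 1)       # no on/off, no signal passing

    start = {"n1": True, "n2": False, "n3": False, "n4": False, "n5": False}
    genuine = run_normal(start, steps=4)         # the original computation
    movie   = run_movie_driven(genuine)          # same states, no computation
    silent  = run_broken(start, steps=4)         # no activity at all
    print(movie == genuine, silent == genuine)   # True False

With the node intuition, a) and the genuine run are indistinguishable frame by 
frame; only b) differs.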

Cheers,
Günther




Re: MGA 2

2008-11-30 Thread Bruno Marchal


On 30 Nov 2008, at 16:31, Günther Greindl wrote:




 Stathis Papaioannou wrote:

  I realise this is coming close to regarding consciousness as akin to the
 religious notion of a disembodied soul. But what are the  
 alternatives?
 As I see it, if we don't discard computationalism the only  
 alternative
 is to deny that consciousness exists at all, which seems to me
 incoherent.

 ACK

 But the differences are so enormous that one is again very far from
 religion. In religion, the soul is an essence of a person  
 interfacing
 with a material body and usually exposed to some kind of judgement  
 in an
 afterlife.


I guess you mean our occidental religions (which are about 40%  
Plato, 60% Aristotle, say).





 With COMP the soul - better: mind - is all there is - no material
 world, no essence, no judgements,


Well, to be frank, we don't know that. Open problem :)



 just COMP.


Well, mainly its consequences, IF true.

Thanks for your encouraging kind remarks in your posts,  Günther.

Bruno




http://iridia.ulb.ac.be/~marchal/







Re: Platonia and causality

2008-11-30 Thread Brent Meeker

Günther Greindl wrote:
 Hi all,
 
 Bruno, do you still keep a notion of causality and the likes in 
 platonia? I have collected these snips from some recent posts:
 
 Brent Meeker wrote:
 
  But is causality an implementation detail?  There seems to be an 
  implicit
  assumption that digitally represented states form a sequence just 
  because there
  is a rule that defines that sequence, but in fact all digital (and 
  other)
  sequences depend on causal chains.
 
 Kory wrote:
 
   I have an intuition that causality
  (or its logical equivalent in Platonia) is somehow important for
   consciousness. You argue that the slide from Fully-Functional
  Alice to Lucky Alice (or Fully-Functional Firefox to Lucky Firefox)
  indicates that there's something wrong with this idea. However, you
  have an intuition that order is somehow important for consciousness.
 
 But we must realise that causality is a concept that is deeply related 
 (cognitively, in humans) to time and physical change.
 
 But both time and space _emerge_ only from the inside view (1st person 
 or 1st person shareable) in the sum over all computations.
 
 In Platonia (viewed, for the time being, ludicrously and impossibly, 
 from the outside) - there is no notion of time, space, sequentiality, 
 before and after.
 
 The very notion of causation must be one that arises only in the inside 
 view, as a succession of consistent patterns.

I agree.  But what is it about the patterns that creates a succession as viewed 
from the inside?  And how do we know that this does not obtain in the 
projection of the MGA?

Brent

 
 In a sense, order (shareable histories) must arise from the Platonic 
 Eternal Mess (chaos) - somehow along the lines of self-organization maybe:
 http://en.wikipedia.org/wiki/Self-organization#Self-organization_in_mathematics_and_computer_science
 
 In this sense, the computations would assemble themselves to 
 consistent histories.
 
 Bruno said:
  Even
  in Platonia consciousness does not supervene on description of the
  computation, even if those description are 100% precise and correct
 
 Hmm, I understand the difference between description and computation in 
 maths and logic, and also in real world, but I do not know if this still 
 makes sense in Platonia - viewed from the acausal perspective outlined 
 above. Well maybe in the sense that in some histories there will be 
 platonic descriptions that are not conscious.
 
 But in other histories those descriptions will be computations and 
 conscious.
 
 Cheers,
 Günther
 
 
 
 
  
 





Re: MGA 2

2008-11-30 Thread Kory Heath


On Nov 30, 2008, at 3:19 AM, Stathis Papaioannou wrote:
 Yes, and I think of consciousness as an essential side-effect of the
 computation, as addition is an essential side-effect of the sum of two
 numbers.

Ok, I'm with you so far. But I'd like to get a better handle on your  
concept of a computation in Platonia. Here's one way I've been  
picturing platonic computation:

Imagine an infinite 2-dimensional grid filled with the binary digits  
of PI. Now imagine an infinite number of 2-dimensional grids on top of  
that one, with each grid containing the bits from the grid beneath it,  
as transformed by the Conway's Life rules. This is a description of a  
platonic computational object. Of course, my language is somewhat  
visual, but that's incidental. The point is, this is a precisely  
defined mathematical object. We can point at any cell in this  
infinite grid, and there is an answer to whether this bit is on or off,  
given our definitions. (More formally, we can define an  
abstract computational function that accepts any integer and returns  
the state of that bit, given all of our definitions.)
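Just to pin the picture down, here is a finite, runnable caricature of that 
object (the real thing is infinite; this sketch truncates everything, uses a 
small toroidal grid, and takes only the first 64 float-precision binary digits 
of PI as the seed - all of that is my own assumption, made purely so the 
example runs):

    import math

    WIDTH, HEIGHT, LAYERS = 8, 8, 16                 # 8x8 grid, 16 Life steps

    def pi_bits(n: int) -> list:
        """First n bits of the fractional binary expansion of PI (float precision)."""
        frac, bits = math.pi - 3, []
        for _ in range(n):
            frac *= 2
            bits.append(int(frac >= 1))
            frac -= int(frac)
        return bits

    def life_step(grid):
        """One step of Conway's Life on a toroidal WIDTH x HEIGHT grid."""
        nxt = [[0] * WIDTH for _ in range(HEIGHT)]
        for y in range(HEIGHT):
            for x in range(WIDTH):
                n = sum(grid[(y + dy) % HEIGHT][(x + dx) % WIDTH]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
                nxt[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
        return nxt

    # Build the whole "block" up front: layer 0 is the PI seed, each later
    # layer is the Life transform of the one beneath it.
    seed = pi_bits(WIDTH * HEIGHT)
    block = [[seed[y * WIDTH:(y + 1) * WIDTH] for y in range(HEIGHT)]]
    for _ in range(LAYERS - 1):
        block.append(life_step(block[-1]))

    def bit_at(layer: int, x: int, y: int) -> int:
        """The 'state of that bit' function: every cell has a definite value."""
        return block[layer][y][x]

    print(bit_at(0, 0, 0), bit_at(LAYERS - 1, 3, 3))

Nothing here is meant as the platonic object itself, of course - only as a 
pointer to the kind of well-defined function I have in mind.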

Do you find this an acceptable way (not necessarily the only way) of  
describing a computational platonic object? How would you talk about  
how consciousness relates to the conscious-seeming patterns in this  
platonic object? Would you say that consciousness supervenes on  
those portions of this platonic computation?

-- Kory






Re: Consciousness and free will

2008-11-30 Thread M.A.
Bruno,
  Thanks for the reply. I appreciate the detailed explanations. I'll 
post my responses in an interlinear manner using color to differentiate (if 
that's ok).   M.A.
  - Original Message - 
  From: Bruno Marchal 
  To: [EMAIL PROTECTED] 
  Sent: Saturday, November 29, 2008 3:49 PM
  Subject: Re: Consciousness and free will




  On 29 Nov 2008, at 16:45, M.A. wrote:


(Assuming MEC/Comp.and MWI) If the computational universe which I experience




  Assuming MEC I would say *you* experience an infinity of computational 
histories. The term universe is far too ambiguous 
(now).

  But isn't each history separated from all others by impermeable walls?
  Do you mean that the word universe is ambiguous or just my 
use of it?






is a single instance of a vast array of similar universes playing out every 
possible variation of the initial axioms, then no one universe could depart 
from its predetermined program since in so doing it would alter its program and 
duplicate that of another universe thus spoiling the overall mission of 
implementing every possible variation.


  Histories can bifurcate in such a way that you will find yourself in both 
histories (you, seen from some third person point of view). Each history is 
deterministic, but your future is uncertain.

  But what about the first person me?  I am only conscious of one history.






It follows that each program-universe is completely deterministic 


  All right.






and that consciousness is merely an observing passenger inside the program;




  At some point I could define consciousness as the state of (instinctively 
at first) betting on a history. This will speed you up relative to your 
current histories, and enlarge the set of your possible continuations. As an 
example, becoming aware that an asteroid is coming nearby makes it possible for 
you to envisage a set of possible decisions, which can themselves increase your 
probability of survival.

  It seems like the present copy of me can envisage these decisions, but be 
unable to carry them out unless they are part of his deterministic history. 






 thus each program that contains a thinking entity is in a schizophrenic 
condition. 




  Come on! You agree to the presence of a schism below. Is it the connotation of 
schizophrenic that you don't like?






This is because consciousness--which is part of the program--is capable of 
judging the actions of the program. When the program acts in a way approved by 
it, 


  by it?  Sorry. It means consciousness in this and the following paragraphs.




the thinker is encouraged to believe that his will produced the action. 


  ?






But when the program acts in a manner repugnant to it,




  to who? (The conscious observer.)




the conscious observer, refusing to give up the notion of free will, 
explains the lapse by rationalizations such as: God, luck, destiny, possession, 
hallucination etc. 


  As far as I understand, the program here acknowledges its ignorance. If, out 
of too much pride, it doesn't, then it raises the probability of some 
catastrophes. 

  But isn't his problem of pride determined in some history, namely the one I 
experience?






So every consciousness, bearing burdensome memories of repugnant actions, 
must either surrender the possibility of free will (fatalism),


  Wrongly, I would say.






accept the intercession of supernatural powers (theology), 




  It could just accept that it belongs to a collection of deep unknown histories, 
and many other unknown things, some even not nameable (and deadly if named). It 
can console itself by pointing to its *partial* control.

  Not very consoling when entangled with the intense immediacy and sensitivity 
of one's ego.


  Note also that it is not really the program or the machine that thinks, but 
the person conveyed through that machine's computation, relative to its most 
probable (and local) computational histories.

  But I think as an individual, not as a group.






or theorize an inaccessible part of itself that is able to override its 
purposes (Freud). 




  That is not entirely meaningless imo.






All of which implies a schism between consciousness and one of the 
following: the program, the universe or itself.




  Here I agree. Universal machines are born to experience such a schism. We can 
come back to this. In its purest form it is a consequence of incompleteness. Every 
universal machine hides a mystery from itself, and the more the machine learns, 
the bigger that mystery becomes. (This is related to the gap between G and G*, for 
those who remember previous explanations.)

  I find this most profound.





I'd be interested to know to what extent my thinking about this question 
agrees with or goes against the present discussion.



Re: Platonia and causality

2008-11-30 Thread Kory Heath


On Nov 30, 2008, at 9:53 AM, Günther Greindl wrote:
 Kory wrote:

 I have an intuition that causality
 (or its logical equivalent in Platonia) is somehow important for
 consciousness. You argue that the slide from Fully-Functional
 Alice to Lucky Alice (or Fully-Functional Firefox to Lucky Firefox)
 indicates that there's something wrong with this idea. However, you
 have an intuition that order is somehow important for consciousness.

 But we must realise that causality is a concept that is deeply related
 (cognitively, in humans) to time and physical change.

 But both time and space _emerge_ only from the inside view (1st person
 or 1st person shareable) in the sum over all computations.

 In Platonia (viewed, for the time being, ludicrously and impossibly,
 from the outside) - there is no notion of time, space, sequentiality,
 before and after.

 The very notion of causation must be one that arises only in the  
 inside
 view, as a succession of consistent patterns.

For what it's worth, I do think that that there's a *kind* of  
causality in Platonia. Let me once again trot out the picture of a  
platonic block universe in which the initial state is the binary  
digits of PI, and the succeeding states are determined by the rules of  
Conway's Life. This block universe exists unchangingly and eternally  
in Platonia, but the states of the bits within it are related in a  
kind of causal fashion. The state of each bit in the block is  
determined (in a sense, caused) by the pyramid of cells beneath it,  
stretching back to the initial state, which is determined by the  
algorithm for computing the binary digits of PI. In this sense,  
causality is an essential aspect of the platonic notion of computation.
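To illustrate what I mean by that "pyramid" (my own toy sketch, assuming a 
radius-1 neighbourhood like Life's; the function name and numbers are just 
illustrative):

    def causal_pyramid(x: int, y: int, layer: int):
        """Seed-layer cells that logically determine the cell (x, y) at `layer`.

        With a radius-1 neighbourhood, influence spreads one cell per step, so
        after `layer` steps the cone has Chebyshev radius `layer`."""
        return {(x + dx, y + dy)
                for dx in range(-layer, layer + 1)
                for dy in range(-layer, layer + 1)}

    # The cell at layer 3 is "caused" by a 7x7 patch of the initial PI digits:
    print(len(causal_pyramid(10, 10, 3)))   # 49

The point is only that "X determines Y" is a perfectly definite relation inside 
the block, whether or not anything is ever run.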

One might argue that this is really a misuse of the concept of  
causality - that I should just talk about the necessary logical  
relationships that are there by definition in my platonic object.  
But my point is that these logical relationships fill the exact role  
that causality is supposed to fill for the physicalist. When  
patterns of bits within this platonic block universe discuss their  
own physics, they might talk about how current configurations of  
physical matter were caused by previous states. The logical  
connections in Platonia are a good candidate for what they can  
actually be talking about.

This platonic form of causality may not always be directly related  
to the concept of time that patterns of bits in a block universe might  
have. For instance, there's a cellular automaton rule (which deserves  
to be much more widely known than it is) called Critters which is as  
simple as Conway's Life, uses only bits (on or off), is known to be  
computation universal, and is also fully reversible. This gets weird,  
because the computational structures within a Critters block universe  
will still seem to favor one direction in time - they'll store  
memories about the past and try to anticipate the future, etc. But  
in fact, our own physics seems to be reversible, so we have these same  
issues to work out regarding our own consciousness. The point is that,  
within a Critters block universe in Platonia, the states will still be  
logically related to each other in a way that precisely matches what  
physicists in the block universe (the critters within Critters!)  
would think of as causality.

-- Kory





Re: Consciousness and free will

2008-11-30 Thread Kim Jones


On 01/12/2008, at 6:21 AM, M.A. wrote:

 Is it the connotation of schizophrenic that you don't like?



The term schizophrenic is an incredibly misused/misunderstood  
adjective. It specifically DOES NOT mean multiple personality  
(disorder), which is the common-coin usage (ie outside a medico- 
diagnostic context).

Please help out by using some other word or term: perhaps split  
existence or multiple instantiation which conveys graphically what  
you mean. Perhaps there is another single word.

 From the Wikipedia article on Schizophrenia:


The word schizophrenia—which translates roughly as "splitting of the  
mind" and comes from the Greek roots schizein (σχίζειν, "to  
split") and phrēn, phren- (φρήν, φρεν-, "mind")[187]—was  
coined by Eugen Bleuler in 1908 and was intended to describe the  
separation of function between personality, thinking, memory, and  
perception. Bleuler described the main symptoms as 4 A's: flattened  
Affect, Autism, impaired Association of ideas and Ambivalence.[188]  
Bleuler realized that the illness was not a dementia as some of his  
patients improved rather than deteriorated and hence proposed the term  
schizophrenia instead.

The term schizophrenia is commonly misunderstood to mean that affected  
persons have a split personality. Although some people diagnosed  
with schizophrenia may hear voices and may experience the voices as  
distinct personalities, schizophrenia does not involve a person  
changing among distinct multiple personalities. The confusion arises  
in part due to the meaning of Bleuler's term schizophrenia (literally  
"split or shattered mind"). The first known misuse of the term to  
mean "split personality" was in an article by the poet T. S. Eliot in  
1933.[189]


Kim



Re: MGA 3

2008-11-30 Thread Kory Heath


On Nov 30, 2008, at 10:14 AM, Günther Greindl wrote:
 I must admit you have completely lost me with MGA 3.

I still find the whole thing easier to grasp when presented in terms  
of cellular automata.

Let's say we have a computer program that starts with a large but  
finite 2D grid of bits, and then iterates the rules of some CA  
(Conway's Life, Critters, whatever) on that grid a large but finite  
number of times, and stores all of the resulting computations in  
memory, so that we have a 3D block universe in memory. And let's say  
that the resulting block universe contains patterns that MECH-MAT  
would say are conscious.

If we believe that consciousness supervenes on the physical act of  
playing back the data in our block universe like a movie, then we  
have a problem. Because before we play back the movie, we can fill any  
portions of the block universe we want with zeros. So then our played  
back movie can contain conscious creatures who are walking around  
with (say) zeros where their visual cortexes should be, or their high- 
level brain functions should be, etc. In other words, we have a fading  
qualia problem (which we have also called a partial zombie problem  
in these threads).
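A sketch of the worry (my own construction, not anyone's real model; the grid 
size, the rule, the random seed and the blanked region are all illustrative 
assumptions): build the block universe in memory, blank an arbitrary region, 
and "play it back" anyway.

    import random

    W, H, T = 32, 32, 50

    def life_step(grid):
        """One step of Conway's Life on a toroidal W x H grid."""
        nxt = [[0] * W for _ in range(H)]
        for y in range(H):
            for x in range(W):
                n = sum(grid[(y + dy) % H][(x + dx) % W]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
                nxt[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
        return nxt

    random.seed(0)
    block = [[[random.randint(0, 1) for _ in range(W)] for _ in range(H)]]
    for _ in range(T - 1):
        block.append(life_step(block[-1]))        # the stored computation

    # Before playback, overwrite part of the record with zeros - the playback
    # still "runs", but whatever was computed in that region is simply gone.
    for layer in block:
        for y in range(10, 20):
            for x in range(10, 20):
                layer[y][x] = 0

    def play_back(block):
        for frame in block:                        # the "physical act" of playback
            yield frame                            # nothing recomputes the blanked cells

    for frame in play_back(block):                 # frames play fine, holes and all
        pass

Playback never notices the hole, which is exactly why supervening on the 
playback alone lands us in the fading-qualia / partial-zombie territory 
described above.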

I find the argument compelling as far as it goes. But I'm not  
convinced that all or most actual, real-world mechanist-materialists  
believe that consciousness supervenes on the physical act of playing  
back the stored computations. Bruno indicates that it must, by the  
logical definitions of MECH and MAT. This just makes me feel like I  
don't really understand the logical definitions of MECH and MAT.

-- Kory





Re: MGA 3

2008-11-30 Thread Russell Standish

On Sun, Nov 30, 2008 at 07:10:43PM +0100, Bruno Marchal wrote:
 
  I am speaking as someone unconvinced that MGA2 implies an
  absurdity. MGA2 implies that the consciousness is supervening on the
  stationary film.
 
 
  ? I could agree, but is this not absurd enough, given MEC and the  
  definition of the physical supervenience thesis?

It is, prima facie, no more absurd than consciousness supervening on a
block universe.

 
   A block universe is nondynamic by definition. But looked at another
   way (ie from the inside), it is dynamic. It neatly illustrates why
   consciousness can supervene on a stationary film (because it is
   stationary only when viewed from the outside).
 
 OK, but then you clearly change the physical supervenience thesis.
 

How so? The stationary film is a physical object, I would have thought.

 
  The film, however does need
  to be sufficiently rich, and also needs to handle counterfactuals
  (unlike the usual sort of movie we see which has only one plot).
 
 
 OK. Such a film could be said to be a computation. Of course you are  
 not talking about a stationary thing, which, be it physical or  
 immaterial, cannot handle counterfactuals.
 

If true, then a block universe could not represent the
Multiverse. Maybe so, but I think a lot of people might be surprised
at this one.

 
  The problem is that eliminating the brain from phenomenal experience
  makes that experience even more highly probable than without. This is
  the Occam catastrophe I mention in my book. Obviously this contradicts
  experience.
 
  Therefore I conclude that supervenience on a phenomenal physical brain
  is necessary for consciousness.
 
 
 It is vague enough so that I can interpret it favorably through MEC.
 

That is my point - physical supervenience (aka materialism) is not
only not contradicted by MEC (aka COMP), but in fact is necessary for
it to even work. Only what I call naive physicalism
(aka the need for a concrete instantiation of a computer running the
UD) is contradicted by MEC.

What _is_ interesting is that not all philosophers distinguish between
physicalism and materialism. David Chalmers does not, but Michael
Lockwood does, for instance. Much of this revolves around the
ontological status of emergence.

-- 


A/Prof Russell Standish                    Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052                           [EMAIL PROTECTED]
Australia                                  http://www.hpcoders.com.au

