Re: MGA 2

2009-01-12 Thread Mirek Dobsicek

Hello Bruno,

 I think you are correct, but allowing the observer to be mechanically
 described as obeying the wave equation (whose solutions obey comp),

 Hmm well if you have a basis, yes; - but naked infinite-dimensional
 Hilbert Space (the everything in QM)? 
 
 
 You put the finger on a problem I have with QM. I will make a confession:
 I don't believe QM is really Turing universal.
 The universal quantum rotation does not generate any interesting
 computations! 

Could you please elaborate a bit on the two sentences above? I am
missing more context to understand what 'really' points to. And as for
the second sentence, I simply don't understand it.

 I am open, say, to the idea that quantum universality needs measurement,
 and this could only exist internally. So the naked infinite-dimensional
 Hilbert space + the universal wave (rotation, unitary transformation) is
 a simpler ontology than arithmetical truth.
 Yet, even on the vacuum, from inside it gives all the nonlinearities
 you need to build arithmetic ... and consciousness.

Cheers,
 mirek

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-l...@googlegroups.com
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: MGA 2

2009-01-12 Thread Bruno Marchal
Hi Mirek,


On 12 Jan 2009, at 15:36, Mirek Dobsicek wrote:


 Hello Bruno,

 I think you are correct, but allowing the observer to be mechanically
 described as obeying the wave equation (whose solutions obey comp),

 Hmm well if you have a basis, yes; - but naked infinite-dimensional
 Hilbert Space (the everything in QM)?


 You put the finger on a problem I have with QM. I will make a confession:
 I don't believe QM is really Turing universal.
 The universal quantum rotation does not generate any interesting
 computations!

 Could you please elaborate a bit on the two sentences above? I am
 missing more context to understand what 'really' points to.



'really' was just for emphasis. Also I should have said instead: I
don't understand how QM can be really Turing universal.
This could be, and probably is, due to my incompetence. It is due to
the fact that I have never succeeded in programming a clear, precise
quantum universal dovetailer in a purely unitary way. The classical
universal dovetailer easily generates all the quantum computations,
but I find it hard to define even *one* unitary transformation, without
measurement, capable of generating forever greater computational
memory space. Other problems are more technical; they are related to
the very notion of universality and are rather well discussed in this
2007 paper:


Deutsch's Universal Quantum Turing Machine revisited.
http://arxiv.org/pdf/quant-ph/0701108v1




 And with
 the second sentence, I simply don't understand it.


Me too. Forget it. Let me try to remember what I had in mind.
I guess I wanted to say that a universal unitary
transformation (what I meant by 'universal quantum rotation') cannot
generate infinite complexity, although I have a good idea why a
sufficiently big or rich unitary transformation can generate any
long (but finite) simulation of any universal Turing machine. This is
again related to my lack of success in programming the universal
quantum dovetailer. If you have any idea how to do that, let me know.
I am not sure I am saying deep things here :), just that I do not have
enough practice in quantum computing to make all this clear, and when
I consult the literature on quantum universality it makes things worse
(see the paper above).

I could relate this to a technical problem with the BCI combinator
algebras, that is, those structures in which every process is
reversible and no cloning is possible (cf. the 'No Kestrel, No
Starling' summary of physics (*)). Those algebras are easily shown to
be non-Turing-universal, and pure unitarity seems to me to lead to
such algebras.
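The flavour of that claim can be sketched in a few lines of Python (an illustration for this archive, not Bruno's construction): the BCI combinators each use every argument exactly once, so BCI terms can neither duplicate data (no Starling S, no Warbler W) nor discard it (no Kestrel K) — a discrete analogue of reversible, no-cloning processes.

```python
# The three BCI combinators as curried Python lambdas.  Note that each
# body mentions each bound variable exactly once: no copying, no deleting.

I = lambda x: x                            # Identity:  I x     = x
B = lambda f: lambda g: lambda x: f(g(x))  # Bluebird:  B f g x = f (g x)
C = lambda f: lambda x: lambda y: f(y)(x)  # Cardinal:  C f x y = f y x

# Example: composing and argument-flipping without ever copying a value.
sub = lambda x: lambda y: x - y
assert B(I)(sub)(10)(3) == 7   # I (sub 10) 3 = 10 - 3
assert C(sub)(3)(10) == 7      # sub 10 3     = 10 - 3
```

By contrast, the Starling S f g x = f x (g x) mentions x twice — that duplication is what the unitary setting seems to forbid, and what Turing universality appears to need.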

This leads to the prospect that a sort of Everything-structure could
exist, yet not be Turing universal. Computers would just not exist, in
the sense that the universe, in that case, would not be able to
provide the extendable memory space without which universality does
not exist. This would not make the UDA (AUDA) reasoning false, but it
would make the ultimate physics much more constrained. Physical
reality would be essentially finite.


I was pointing at a place where I am a bit lost myself, which means
that I am the one who would like a bit more explanation.

Could you implement on a quantum computer the really infinite
counting algorithm by a purely unitary transformation? The one which
generates, without stopping: 0, 1, 2, 3, ... That would already be a
big help.

Bruno

(*) Marchal B., 2005, Theoretical computer science and the natural  
sciences, Physics of Life Reviews, Vol. 2 Issue 4 December 2005, pp.  
251-289.




 I am open, say, to the idea that quantum universality needs measurement,
 and this could only exist internally. So the naked infinite-dimensional
 Hilbert space + the universal wave (rotation, unitary transformation) is
 a simpler ontology than arithmetical truth.
 Yet, even on the vacuum, from inside it gives all the nonlinearities
 you need to build arithmetic ... and consciousness.

 Cheers,
 mirek

 

http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-12-06 Thread Kory Heath


On Dec 3, 2008, at 5:02 AM, Stathis Papaioannou wrote:
 I struggle with the question of what a platonic object actually is,
 even for something very simple. Let's say the implementation of a
 circle supports roundness in the same way that a certain computation
 supports consciousness. We can easily think of many ways a circle can
 be represented in the real world, but which of these should we think
 of when considering the platonic object? Is it possible to point to a
 platonic square and say it isn't round, or does the square support
 roundness implicitly since it could be considered a circle
 transformed? And is there any reason not to consider roundness as a
 basic platonic object in itself, perhaps with circles somehow
 supervening on roundness rather than the other way around?

I see what you mean. But I'm uncomfortable with (what I perceive as)
the resulting vagueness in the platonic view of consciousness. You've
indicated that you think of consciousness as fundamentally
computational and Platonic - that it's an essential side-effect of
platonic computations, as addition is the essential side-effect of the
sum of two numbers. But if we don't have a clear conception of
platonic computations, do we even really know what we're talking
about? I'm worried, essentially, that the move to Platonia 'solves'
the problems created by these thought experiments only by creating a
view of consciousness that's too vague to allow such problems to arise.

-- Kory





Re: MGA 2

2008-12-06 Thread Stathis Papaioannou

2008/12/7 Kory Heath [EMAIL PROTECTED]:

 I see what you mean. But I'm uncomfortable with (what I perceive as)
 the resulting vagueness in the platonic view of consciousness. You've
 indicated that you think of consciousness as fundamentally
 computational and Platonic - that it's an essential side-effect of
 platonic computations, as addition is the essential side-effect of the
 sum of two numbers. But if we don't have a clear conception of
 platonic computations, do we even really know what we're talking
 about? I'm worried, essentially, that the move to Platonia 'solves'
 the problems created by these thought experiments only by creating a
 view of consciousness that's too vague to allow such problems to arise.

I agree that it's vague, but any way you look at it consciousness is
vague, slippery and elusive. This is probably why philosophers and
scientists who like to be clear about things have sometimes come to
the conclusion that consciousness is not real at all: the only real
thing is intelligence, which manifests as intelligent behaviour. This
idea steers a course between Scylla (paradoxes) and Charybdis
(vagueness and mysticism) and is attractive... as long as you avoid
introspection.


-- 
Stathis Papaioannou




Re: MGA 2

2008-12-03 Thread Stathis Papaioannou

2008/12/1 Kory Heath [EMAIL PROTECTED]:

 Ok, I'm with you so far. But I'd like to get a better handle on your
 concept of a computation in Platonia. Here's one way I've been
 picturing platonic computation:

 Imagine an infinite 2-dimensional grid filled with the binary digits
 of PI. Now imagine an infinite number of 2-dimensional grids on top of
 that one, with each grid containing the bits from the grid beneath it,
 as transformed by Conway's Life rules. This is a description of a
 platonic computational object. Of course, my language is somewhat
 visual, but that's incidental. The point is, this is a precisely
 defined mathematical object. We can point at any cell in this
 infinite grid, and there is an answer to whether or not this bit is on
 or off, given our definitions. (More formally, we can define an
 abstract computational function that accepts any integer and returns
 the state of that bit, given all of our definitions.)

 Do you find this an acceptable way (not necessarily the only way) of
 describing a computational platonic object? How would you talk about
 how consciousness relates to the conscious-seeming patterns in this
 platonic object? Would you say that consciousness supervenes on
 those portions of this platonic computation?

I struggle with the question of what a platonic object actually is,
even for something very simple. Let's say the implementation of a
circle supports roundness in the same way that a certain computation
supports consciousness. We can easily think of many ways a circle can
be represented in the real world, but which of these should we think
of when considering the platonic object? Is it possible to point to a
platonic square and say it isn't round, or does the square support
roundness implicitly since it could be considered a circle
transformed? And is there any reason not to consider roundness as a
basic platonic object in itself, perhaps with circles somehow
supervening on roundness rather than the other way around?



-- 
Stathis Papaioannou




Re: MGA 2

2008-11-30 Thread Günther Greindl



Stathis Papaioannou wrote:

 I realise this is coming close to regarding consciousness as akin to the
 religious notion of a disembodied soul. But what are the alternatives?
 As I see it, if we don't discard computationalism the only alternative
 is to deny that consciousness exists at all, which seems to me
 incoherent.

ACK

But the differences are so enormous that one is again very far from 
religion. In religion, the soul is an essence of a person interfacing 
with a material body and usually exposed to some kind of judgement in an 
afterlife.

With COMP the soul - better: mind - is all there is - no material 
world, no essence, no judgements, just COMP. And it supervenes on - 
better: is (inside/outside view) - computations (see UDA for details ;-)

Cheers,
Günther




Re: MGA 2

2008-11-30 Thread Bruno Marchal


On 30 Nov 2008, at 16:31, Günther Greindl wrote:




 Stathis Papaioannou wrote:

 I realise this is coming close to regarding consciousness as akin to the
 religious notion of a disembodied soul. But what are the  
 alternatives?
 As I see it, if we don't discard computationalism the only  
 alternative
 is to deny that consciousness exists at all, which seems to me
 incoherent.

 ACK

 But the differences are so enormous that one is again very far from
 religion. In religion, the soul is an essence of a person  
 interfacing
 with a material body and usually exposed to some kind of judgement  
 in an
 afterlife.


I guess you mean our occidental religions (which are about 40%
Plato, 60% Aristotle, say).





 With COMP the soul - better: mind - is all there is - no material
 world, no essence, no judgements,


Well, to be frank, we don't know that. Open problem :)



 just COMP.


Well, mainly its consequences, IF true.

Thanks for your kind, encouraging remarks in your posts, Günther.

Bruno




http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-11-30 Thread Kory Heath


On Nov 30, 2008, at 3:19 AM, Stathis Papaioannou wrote:
 Yes, and I think of consciousness as an essential side-effect of the
 computation, as addition is an essential side-effect of the sum of two
 numbers.

Ok, I'm with you so far. But I'd like to get a better handle on your
concept of a computation in Platonia. Here's one way I've been
picturing platonic computation:

Imagine an infinite 2-dimensional grid filled with the binary digits  
of PI. Now imagine an infinite number of 2-dimensional grids on top of  
that one, with each grid containing the bits from the grid beneath it,  
as transformed by Conway's Life rules. This is a description of a
platonic computational object. Of course, my language is somewhat  
visual, but that's incidental. The point is, this is a precisely  
defined mathematical object. We can point at any cell in this  
infinite grid, and there is an answer to whether or not this bit is on  
or off, given our definitions. (More formally, we can define an  
abstract computational function that accepts any integer and returns  
the state of that bit, given all of our definitions.)
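A finite window into that object can be sketched directly (an illustration, with two simplifications: an 8x8 toroidal grid rather than an infinite plane, and pi's digits taken from its well-known leading fractional hex digits 243F6A88...):

```python
# Layer 0: seed a small grid with binary digits of pi (fractional part,
# pi = 3.243F6A88... in hex).  Each later layer is the previous one
# transformed by Conway's Life rules.

PI_HEX = "243F6A8885A308D313198A2E03707344"
bits = [int(b) for h in PI_HEX for b in f"{int(h, 16):04b}"]

W = 8                                                # 8x8 window, for brevity
grid = [bits[r * W:(r + 1) * W] for r in range(W)]   # layer 0

def life_step(g):
    """One layer up: apply Conway's Life rules (toroidal wrap for brevity)."""
    n = len(g)
    nxt = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            s = sum(g[(r + dr) % n][(c + dc) % n]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if s == 3 or (s == 2 and g[r][c]) else 0
    return nxt

layers = [grid]
for _ in range(4):          # the platonic object has infinitely many layers
    layers.append(life_step(layers[-1]))
```

Every cell of every layer is determined by the definitions alone, which is the sense in which the whole infinite stack is a precisely defined mathematical object.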

Do you find this an acceptable way (not necessarily the only way) of  
describing a computational platonic object? How would you talk about  
how consciousness relates to the conscious-seeming patterns in this  
platonic object? Would you say that consciousness supervenes on  
those portions of this platonic computation?

-- Kory






Re: MGA 2

2008-11-29 Thread Stathis Papaioannou

2008/11/28 Kory Heath [EMAIL PROTECTED]:

 I still feel like I don't have a handle on how you feel the move to
 Platonia solves these problems. If we imagine the mathematical
 description of filling a 3D grid with the binary digits of PI,
 somewhere within it we will find some patterns of bits that look as
 though they're following the rules of Conway's Life. If we see
 creatures in there, would they be conscious? What about the areas in
 that grid where we find the equivalent of Empty-Headed Alice, where
 most of the cells seem to be following the rules of Conway's Life,
 but the section where a creature's visual cortex ought to be is just
 filled with zeros? In other words, why doesn't the partial zombie
 problem still exist for us in Platonia?

Asking questions like this about platonic objects isn't like asking
the same questions about objects in a physical world. Abstract
threeness is not a kind of picture of what we would recognise as
threeness in the physical world: three objects, or five objects which
could be seen as two lots of two and one lot of one object, or the
Arabic numeral 3. Similarly, you can't point to a picture of a
physical computer and ask whether that is giving rise to a particular
computation in Platonia. Threeness, computations and consciousness
exist eternally and necessarily, and can't be created, destroyed or
localised.

I realise this is coming close to regarding consciousness as akin to the
religious notion of a disembodied soul. But what are the alternatives?
As I see it, if we don't discard computationalism the only alternative
is to deny that consciousness exists at all, which seems to me
incoherent.



-- 
Stathis Papaioannou




Re: MGA 2

2008-11-29 Thread Kory Heath


On Nov 29, 2008, at 7:52 PM, Stathis Papaioannou wrote:
 Threeness, computations and consciousness
 exist eternally and necessarily, and can't be created, destroyed or
 localised.

I understand (I think) how threeness and computations exist eternally  
in Platonia, but I don't understand your Platonic notion of  
consciousness. Even after the move to Platonia, I'm still viewing  
consciousness as something fundamentally computational. Are you?

-- Kory





Re: MGA 2

2008-11-27 Thread Stathis Papaioannou

2008/11/27 Brent Meeker [EMAIL PROTECTED]:

 Doesn't this antinomy arise because we equivocate on 'running Firefox'?  Do we
 mean a causal chain of events in the computer according to a certain program
 specification, or do we mean the appearance on the screen of the same thing that
 the causal chain would have produced?  We'd say 'no' by the first meaning, but
 'yes' by the second.  Obviously, the question is not black-and-white.  If
 the computer simply dropped a bit or two and miscolored a few pixels, no one
 would notice and no one would assert it wasn't running Firefox.  So really, when
 we talk about 'running Firefox' we are referring to a fuzzy, holistic process
 that admits of degrees.

A functionally equivalent copy of Firefox behaves in the same way as
the standard copy to which we are comparing it, giving the same output
for a given input. Differences which the program can't know about
are not important in this context, and the exact nature of the
hardware - whether solid state or valve, causal or random - is one
such difference. Of course, if the hardware is causal the program will
run much more reliably, but if the random hardware runs appropriately
through luck, I don't see how the program could know this.

 I'm developing a suspicion of arguments that say 'suppose by accident ...'  If
 we say that the (putative) possibility of something happening by accident
 destroys the relevance of it happening as part of a causal chain, we are, in a
 sense, rejecting the concept of causal chains and relations - and not just in
 consciousness, as your Firefox example illustrates.

I would say that the significance of the causal chain is in
reliability, not in the experience the computation has, such as it may
be.

 I wrote 'putative' above because this kind of thought experiment hypothesizes
 events whose probability is infinitesimal.  If you take a finitist view, there
 is a lower bound to non-zero probabilities.

Can't we stay finitist and say these improbable things are very likely
to happen given a very big universe, say 3^^^3 metres across in
Knuth's notation?
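For readers unfamiliar with the notation, here is a minimal sketch of Knuth's up-arrows (naive recursion, tiny inputs only): one arrow is exponentiation, and each extra arrow iterates the operator below it, so 3^^^3 = 3^^(3^^3) = a power tower of 7,625,597,484,987 threes.

```python
def up(a, b, arrows):
    """a (arrows copies of ^) b in Knuth's up-arrow notation."""
    if arrows == 1:          # one arrow is plain exponentiation
        return a ** b
    if b == 0:               # base case of the iteration
        return 1
    # n+1 arrows applied to b is n arrows iterated b times
    return up(a, up(a, b - 1, arrows), arrows - 1)

assert up(3, 3, 1) == 27            # 3^3
assert up(3, 3, 2) == 3 ** 27       # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 itself is far too large to evaluate here: a tower of ~7.6e12 threes.
```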

 It is still trivial in the sense that it could be said to instantiate all
 possible conscious worlds (at least up to some size limit).  Since we don't
 know what is necessary to instantiate consciousness, this seems much more
 speculative than saying the block of marble instantiates all computations -
 which we already agree is true only in a trivial sense.

We do know what it takes to instantiate consciousness: chemical
reactions in the brain. If these chemical reactions are computable
then an appropriate computation should also instantiate consciousness.
If we consider only the case of inputless conscious beings, I still
don't see why they won't be instantiated in randomness.

as I see no reason why the
 consciousness of these observers should be contingent on the
 possibility of interaction with the environment containing the
 substrate of their implementation. My conclusion from this is that
 consciousness, in general, is not dependent on the orderly physical
 activity which is essential for the computations that we observe.

 Yet this is directly contradicted by those specific instances in which
 consciousness is interrupted by disrupting the physical activity.

But if it's all a virtual reality, it isn't a concrete physical
disruption that affects consciousness. It's just that the program
takes a turn which manifests in the virtual world as brain and
consciousness disruption.

 Rather, consciousness must be a property of the abstract computation
 itself, which leads to the conclusion that the physical world is
 probably a virtual reality generated by the big computer in Platonia,

 This seems to me to be jumping to a conclusion by examining only one side of
 the argument and, finding it flawed, embracing the contrary.  Abstract
 computations are atemporal and don't have to be generated.  So it amounts to
 saying that the physical world just IS in virtue of there being some mapping
 between the world and some computation.

Yes. But I arrive at this conclusion because I can't think of a reason
to constrain computation so that it is only implemented by
conventional computers, and not by any and every random process.

 The Fading Qualia argument proves functionalism, assuming that the
 physical behaviour of the brain is computable (some people like Roger
 Penrose dispute this). Functionalism then leads to the conclusion that
 consciousness isn't dependent on physical activity, as discussed in
 the recent threads. So, either functionalism is wrong, or
 consciousness resides in the Platonic realm.

 Or there's something wrong with the argument that functionalism implies
 consciousness isn't dependent on physical activity.

Yes, but I find the argument convincing.


-- 
Stathis Papaioannou


Re: MGA 2

2008-11-27 Thread Kory Heath


On Nov 26, 2008, at 5:29 AM, Stathis Papaioannou wrote:
 Yes. Suppose one of the components in my computer is defective but,
 with incredible luck, is outputting the appropriate signals due to
 thermal noise. Would it then make sense to say that the computer isn't
 really running Firefox, but only pretending to do so, reproducing
 the Firefox behaviour but lacking the special Firefox
 qualia-equivalent?

It seems to me that this reasoning creates just as serious a problem  
for your perspective as it does for mine. Suppose we physically remove  
the defective component from the computer, but, with incredible luck,  
the surrounding components continue to act as though they were  
receiving the signals they would have received. Your experience of  
using Firefox remains the same, so (by your argument above) it  
shouldn't make sense to say that the computer isn't really running  
Firefox. But we can keep removing components until all that's left is  
a monitor that, with incredible luck due to thermal noise, is  
displaying the pixels that would have been displayed if your computer  
was actually functioning, doing things like displaying a mouse-pointer  
that (very improbably!) happens to move when you move your mouse, etc.

This is, of course, just a recapitulation of the argument we've  
already been considering - the slide from Fully-Functional Alice to  
Lucky Alice to Empty-Headed Alice. I have an intuition that causality  
(or its logical equivalent in Platonia) is somehow important for  
consciousness. You argue that the slide from Fully-Functional
Alice to Lucky Alice (or Fully-Functional Firefox to Lucky Firefox)
indicates that there's something wrong with this idea. However, you
have an intuition that order is somehow important for consciousness.
(Without trying to beg the question, I might use the term 'mere
order', to indicate the fact that, for you, it doesn't matter whether
the blinking bits in some hypothetical 2D array were generated by,
say, a random process; it just matters that they display the
requisite order.) But the slide from Lucky Alice to Empty-Headed Alice
is just as problematic for that view as the slide from
Fully-Functional Alice to Lucky Alice is for mine.

My point isn't that your intuition must be incorrect. My point is that  
the above argument fails to show me why your 'mere order' intuition is
more correct than my 'real order' intuition, since the argument is
equally destructive to both intuitions. Instead of giving up your
intuition, you make a move to Platonia. But in that new context, I
think it still makes sense to ask if 'mere order' (for instance, in
the binary digits of PI) is enough for consciousness, and the Alice /
Firefox thought experiments don't help me answer that question.

 If by Unification you mean the idea that two identical brains with
 identical input will result in only one consciousness, I don't see how
 this solves the conceptual problem of partial zombies. What would
 happen if an identical part of both brains were replaced with a
 non-conscious but otherwise identically functioning equivalent?

I was referring to the idea that my Conway's Life version of Bruno's  
MGA 2 may only present a problem for Duplicationists. If one believes  
that physically re-performing all of the Conway's Life computations  
would create a second experience of pain (assuming that there's a  
creature in there with that description), and if you *don't* believe  
that the act of playing the movie back creates a second experience of
pain, then you have a partial zombie problem. But if you accept
Unification, the problem might go away (although I'm unsure of this).

I still feel like I don't have a handle on how you feel the move to  
Platonia solves these problems. If we imagine the mathematical  
description of filling a 3D grid with the binary digits of PI,  
somewhere within it we will find some patterns of bits that look as  
though they're following the rules of Conway's Life. If we see
creatures in there, would they be conscious? What about the areas in  
that grid where we find the equivalent of Empty-Headed Alice, where  
most of the cells seem to be following the rules of Conway's Life,  
but the section where a creature's visual cortex ought to be is just  
filled with zeros? In other words, why doesn't the partial zombie  
problem still exist for us in Platonia?

-- Kory






Re: MGA 2

2008-11-27 Thread John Mikes
Si nisi non esset, perfectum quodlibet esset (if 'if' did not exist,
everything would be perfect).

Maybe I am a partial zombie for these things.
(Mildly said).

John M

On Thu, Nov 27, 2008 at 4:36 PM, Kory Heath [EMAIL PROTECTED] wrote:



 On Nov 26, 2008, at 5:29 AM, Stathis Papaioannou wrote:
  Yes. Suppose one of the components in my computer is defective but,
  with incredible luck, is outputting the appropriate signals due to
  thermal noise. Would it then make sense to say that the computer isn't
  really running Firefox, but only pretending to do so, reproducing
  the Firefox behaviour but lacking the special Firefox
  qualia-equivalent?

 It seems to me that this reasoning creates just as serious a problem
 for your perspective as it does for mine. Suppose we physically remove
 the defective component from the computer, but, with incredible luck,
 the surrounding components continue to act as though they were
 receiving the signals they would have received. Your experience of
 using Firefox remains the same, so (by your argument above) it
 shouldn't make sense to say that the computer isn't really running
 Firefox. But we can keep removing components until all that's left is
 a monitor that, with incredible luck due to thermal noise, is
 displaying the pixels that would have been displayed if your computer
 was actually functioning, doing things like displaying a mouse-pointer
 that (very improbably!) happens to move when you move your mouse, etc.

 This is, of course, just a recapitulation of the argument we've
 already been considering - the slide from Fully-Functional Alice to
 Lucky Alice to Empty-Headed Alice. I have an intuition that causality
 (or its logical equivalent in Platonia) is somehow important for
 consciousness. You argue that the the slide from Fully-Functional
 Alice to Lucky Alice (or Fully-Functional Firefox to Lucky Firefox)
 indicates that there's something wrong with this idea. However, you
 have an intuition that order is somehow important for consciousness.
 (Without trying to beg the question, I might use the term mere
 order, to indicate the fact that, for you, it doesn't matter whether
 the blinking bits in some hypothetical 2D array were generated by
 (say) a random process, it just matters that they display the
 requisite order.) But the slide from Lucky Alice to Empty-Headed Alice
 is just as problematic for that view as the slide from Fully-
 Functional Alice to Lucky Alice is for mine.

 My point isn't that your intuition must be incorrect. My point is that
 the above argument fails to show me why your mere order intuition is
 more correct than my real order intuition, since the argument is
 equally destructive to both intuitions. Instead of giving up your
 intuition, you make a move to Platonia. But in that new context, I
think it still makes sense to ask if "mere order" (for instance, in
the binary digits of pi) is enough for consciousness, and the Alice /
 Firefox thought experiments don't help me answer that question.

  If by Unification you mean the idea that two identical brains with
  identical input will result in only one consciousness, I don't see how
  this solves the conceptual problem of partial zombies. What would
  happen if an identical part of both brains were replaced with a
 non-conscious but otherwise identically functioning equivalent?

 I was referring to the idea that my Conway's Life version of Bruno's
 MGA 2 may only present a problem for Duplicationists. If one believes
 that physically re-performing all of the Conway's Life computations
 would create a second experience of pain (assuming that there's a
 creature in there with that description), and if you *don't* believe
 that the act of playing the move back creates a second experience of
pain, then you have a partial zombie problem. But if you accept
 Unification, the problem might go away (although I'm unsure of this).

 I still feel like I don't have a handle on how you feel the move to
 Platonia solves these problems. If we imagine the mathematical
description of filling a 3D grid with the binary digits of pi,
 somewhere within it we will find some patterns of bits that look as
though they're following the rules of Conway's Life. If we see
 creatures in there, would they be conscious? What about the areas in
 that grid where we find the equivalent of Empty-Headed Alice, where
 most of the cells seem to be following the rules of Conway's Life,
 but the section where a creature's visual cortex ought to be is just
 filled with zeros? In other words, why doesn't the partial zombie
 problem still exist for us in Platonia?
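
The grid-of-pi-digits picture can be made concrete in miniature. The
following sketch is an editorial illustration, not part of the original
argument: it recovers the first binary digits of pi's fractional part from
a float64 value (only about 52 bits are exact, a finite stand-in for the
unbounded expansion imagined above) and lays them out as a small grid.

```python
import math

# Recover binary digits of pi's fractional part from the float64 value.
# Only about 52 bits are exact -- a finite stand-in for the unbounded
# expansion imagined above.
frac = math.pi - 3
bits = []
for _ in range(48):
    frac *= 2
    bit = int(frac)
    bits.append(bit)
    frac -= bit

# Lay the digits out row-major as a small 2D grid; a 3D grid would just
# add another index. If pi is normal (conjectured, unproven), every finite
# bit pattern -- Life-like or "empty-headed" -- occurs somewhere.
grid = [bits[i * 8:(i + 1) * 8] for i in range(6)]
print(bits[:8])  # [0, 0, 1, 0, 0, 1, 0, 0], since pi = 11.00100100..._2
```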

 -- Kory



 


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---

Re: MGA 2

2008-11-26 Thread Stathis Papaioannou

2008/11/26 Kory Heath [EMAIL PROTECTED]:


 On Nov 24, 2008, at 5:40 PM, Stathis Papaioannou wrote:
 The question turns on what is a computation and why it should have
 magical properties. For example, if someone flips the squares on a
 Life board at random and accidentally duplicates the Life rules does
 that mean the computation is carried out?

 I would say no. But of course, the real question is, Why does it
 matter? If I'm reading you correctly, you're taking the view that
 it's the pattern of bits that matters, not what created it (or
 caused it, or computed it, etc.)

Yes. Suppose one of the components in my computer is defective but,
with incredible luck, is outputting the appropriate signals due to
thermal noise. Would it then make sense to say that the computer isn't
really running Firefox, but only "pretending" to do so, reproducing
the Firefox behaviour but lacking the special Firefox
qualia-equivalent?

 It would help me if I had a clearer idea of how you view
 consciousness. I assume that, for you, if someone flips the squares on
 a Life board at random and creates the expected chaos, there's no
 consciousness there, but that there are certain configurations that
 could arise (randomly) that you would consider conscious. I assume
 that these patterns would show some kind of regularity - some kind of
 law-like behavior.

In the first instance, yes. But then the problem arises that under a
certain interpretation, the chaotic patterns could also be seen as
implementing any given computation. A common response to this is that
although it may be true in a trivial sense, as it is true that a block
of marble contains every possible statue, it is useless to define
something as a computation unless it can process information in a way
that interacts with its environment. This seems reasonable so far, but
what if the putative computation is of a virtual world with conscious
observers? The trivial sense in which such a computation can be said
to be hiding in chaos is no longer trivial, as I see no reason why the
consciousness of these observers should be contingent on the
possibility of interaction with the environment containing the
substrate of their implementation. My conclusion from this is that
consciousness, in general, is not dependent on the orderly physical
activity which is essential for the computations that we observe.
Rather, consciousness must be a property of the abstract computation
itself, which leads to the conclusion that the physical world is
probably a virtual reality generated by the big computer in Platonia,
since there is no basis for believing that there is a concrete
physical world separate from the necessarily existing virtual one.
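
The "trivial sense" mentioned above can be shown in a few lines (an
editorial sketch, not from the thread; the state labels are invented):
given any sequence of distinct physical states, a mapping onto the states
of any chosen computation can always be constructed after the fact, which
is exactly why such an "implementation" carries no weight.

```python
# Arbitrary distinct "physical" states -- e.g. snapshots of chaos, or of
# a block of marble. The labels are invented for illustration.
junk_states = ["x7", "q2", "zz", "m0"]

# The state sequence of some target computation (here, a toy counter).
computation = [0, 1, 1, 2]

# The mapping is built only after both sequences are known -- it does no
# predictive or counterfactual work, which is what makes it trivial.
mapping = dict(zip(junk_states, computation))
decoded = [mapping[s] for s in junk_states]
print(decoded == computation)  # True, but only by post-hoc construction
```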

 It's not easy for me to explain why I think it matters what kind of
 process (or in Platonia, what kind of abstract computation) generated
 that order. But it's also not easy for me to understand the
 alternative view. During those stretches of time when the random field
 of bits is creating a pattern that you would call conscious, what do
 you *mean* when you say it's conscious? By definition, you can't mean
 anything about how it's reacting to its environment, or that it's
 doing something because of something else, etc.

I know what I mean by consciousness, being intimately associated with
it myself, but I can't explain it.

 I think there is a partial zombie problem regardless of whether
 Unification or Duplication is accepted.

 Can you elaborate on this? What partial zombie problem do you see that
 Unification doesn't address?

If by Unification you mean the idea that two identical brains with
identical input will result in only one consciousness, I don't see how
this solves the conceptual problem of partial zombies. What would
happen if an identical part of both brains were replaced with a
non-conscious but otherwise identically functioning equivalent?

 And do you think that the move away from
 physical reality to mathematical reality solves that problem? If
 so, how?

The Fading Qualia argument proves functionalism, assuming that the
physical behaviour of the brain is computable (some people like Roger
Penrose dispute this). Functionalism then leads to the conclusion that
consciousness isn't dependent on physical activity, as discussed in
the recent threads. So, either functionalism is wrong, or
consciousness resides in the Platonic realm.


-- 
Stathis Papaioannou




Re: MGA 2

2008-11-26 Thread Brent Meeker

Stathis Papaioannou wrote:
 2008/11/26 Kory Heath [EMAIL PROTECTED]:

 On Nov 24, 2008, at 5:40 PM, Stathis Papaioannou wrote:
 The question turns on what is a computation and why it should have
 magical properties. For example, if someone flips the squares on a
 Life board at random and accidentally duplicates the Life rules does
 that mean the computation is carried out?
 I would say no. But of course, the real question is, Why does it
 matter? If I'm reading you correctly, you're taking the view that
 it's the pattern of bits that matters, not what created it (or
 caused it, or computed it, etc.)
 
 Yes. Suppose one of the components in my computer is defective but,
 with incredible luck, is outputting the appropriate signals due to
 thermal noise. Would it then make sense to say that the computer isn't
 really running Firefox, but only pretending to do so, reproducing
 the Firefox behaviour but lacking the special Firefox
 qualia-equivalent?

Doesn't this antinomy arise because we equivocate on "running Firefox"?  Do we 
mean a causal chain of events in the computer according to a certain program 
specification, or do we mean the appearance on the screen of the same thing that 
the causal chain would have produced?  We'd say "no" by the first meaning, but 
"yes" by the second.  Obviously, then, the question is not black-and-white.  If 
the computer simply dropped a bit or two and miscolored a few pixels, no one 
would notice and no one would assert it wasn't running Firefox.  So really, when 
we talk about "running Firefox" we are referring to a fuzzy, holistic process 
that admits of degrees.

I'm developing a suspicion of arguments that say "suppose by accident ...".  If 
we say that the (putative) possibility of something happening by accident 
destroys the relevance of it happening as part of a causal chain, we are, in a 
sense, rejecting the concept of causal chains and relations - and not just in 
consciousness, as your Firefox example illustrates.

I wrote "putative" above because this kind of thought experiment hypothesizes 
events whose probability is infinitesimal.  If you take a finitist view, there 
is a lower bound to non-zero probabilities.


 
 It would help me if I had a clearer idea of how you view
 consciousness. I assume that, for you, if someone flips the squares on
 a Life board at random and creates the expected chaos, there's no
 consciousness there, but that there are certain configurations that
 could arise (randomly) that you would consider conscious. I assume
 that these patterns would show some kind of regularity - some kind of
 law-like behavior.
 
 In the first instance, yes. But then the problem arises that under a
 certain interpretation, the chaotic patterns could also be seen as
 implementing any given computation. A common response to this is that
 although it may be true in a trivial sense, as it is true that a block
 of marble contains every possible statue, it is useless to define
 something as a computation unless it can process information in a way
 that interacts with its environment. This seems reasonable so far, but
 what if the putative computation is of a virtual world with conscious
 observers? The trivial sense in which such a computation can be said
 to be hiding in chaos is no longer trivial, 

It is still trivial in the sense that it could be said to instantiate all 
possible conscious worlds (at least up to some size limit).  Since we don't know 
what is necessary to instantiate consciousness, this seems much more speculative 
than saying the block of marble instantiates all computations - which we already 
agree is true only in a trivial sense.

as I see no reason why the
 consciousness of these observers should be contingent on the
 possibility of interaction with the environment containing the
 substrate of their implementation. My conclusion from this is that
 consciousness, in general, is not dependent on the orderly physical
 activity which is essential for the computations that we observe.

Yet this is directly contradicted by those specific instances in which 
consciousness is interrupted by disrupting the physical activity.


 Rather, consciousness must be a property of the abstract computation
 itself, which leads to the conclusion that the physical world is
 probably a virtual reality generated by the big computer in Platonia,

This seems to me to be jumping to a conclusion by examining only one side of the 
argument and, finding it flawed, embracing the contrary.  Abstract computations 
are atemporal and don't have to be generated.  So it amounts to saying that the 
physical world just IS in virtue of there being some mapping between the world 
and some computation.


 since there is no basis for believing that there is a concrete
 physical world separate from the necessarily existing virtual one.
 
 It's not easy for me to explain why I think it matters what kind of
 process (or in Platonia, what kind of abstract computation) generated
 that 

Re: MGA 2

2008-11-25 Thread Russell Standish

On Mon, Nov 24, 2008 at 12:28:45PM +0100, Bruno Marchal wrote:
 
 
 Le 24-nov.-08, à 02:39, Russell Standish a écrit :
 
 
  On Sun, Nov 23, 2008 at 03:59:02PM +0100, Bruno Marchal wrote:
 
  I would side with Kory that a looked up recording of conscious
  activity is not conscious.
 
 
 
  I agree with you. The point here is just that MEC+MAT implies it.
 
 
  This I don't follow. I would have thought it implies the opposite.
 
 
 MGA 1 shows that MEC+MAT implies lucky Alice is conscious (during the 
 exam). OK?
 MGA 2 shows that MEC+MAT implies Alice is dreaming (and thus conscious) 
 when the film is projected. OK?

Right - I think we had a breakdown in communication. I thought you
were asserting the opposite

 I take the looked-up recording as identical (with respect to the 
 reasoning) to a projection of the movie.
 
 Of course I don't believe that a projection of a filmed computation is 
 conscious 'qua computatio'. It is so absurd that sometimes I end the 
 Movie Graph Argument here. I mean I consider this equivalent to false, 
 and thus as enough for showing COMP+MAT implies false.
 MGA 3 is intended for those who believe that the movie can be 
 conscious 'qua computatio'.
 
 Bruno
 

The movie, in this case, is a very precise recording of the states of
all of Alice's neurons and their interactions. Why wouldn't it be
conscious? Someone once said to you don't confuse the territory with
the map - and you very sagely asked what if the map is so detailed
it is indistinguishable from the territory.

A popular representation of the universe is a block universe, where
all events exist in a 4D static representation that is forever
timeless. A block universe contains conscious entities, who perceive
time etc., at least according to your usual die-hard materialist,
don't you think? How does a block universe differ from your movie
though?

Note it is important not to rely on our intuition here. None of us has
experience of movies with the level of resolution being discussed
here. High-definition movies are distinctly lame by comparison.

I guess I'll need MGA3!


-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australia    http://www.hpcoders.com.au





Re: MGA 2

2008-11-25 Thread Bruno Marchal


Thanks for providing me with even more motivations for MGA 3.
I will try to do it as soon as possible. It could take time because I am  
hesitating on the best way to proceed. I know that what is obvious for  
some is not for others, and vice versa ... That is why we do proofs: to  
meet universal criteria.

Bruno



Le 25-nov.-08, à 11:25, Russell Standish a écrit :


 On Mon, Nov 24, 2008 at 12:28:45PM +0100, Bruno Marchal wrote:


 Le 24-nov.-08, à 02:39, Russell Standish a écrit :


 On Sun, Nov 23, 2008 at 03:59:02PM +0100, Bruno Marchal wrote:

 I would side with Kory that a looked up recording of conscious
 activity is not conscious.



 I agree with you. The point here is just that MEC+MAT implies it.


 This I don't follow. I would have thought it implies the opposite.


 MGA 1 shows that MEC+MAT implies lucky Alice is conscious (during the
 exam). OK?
 MGA 2 shows that MEC+MAT implies Alice is dreaming (and thus  
 conscious)
 when the film is projected. OK?

 Right - I think we had a breakdown in communication. I thought you
 were asserting the opposite

 I take the looked-up recording as identical (with respect to the
 reasoning) to a projection of the movie.

 Of course I don't believe that a projection of a filmed computation is
 conscious 'qua computatio'. It is so absurd that sometimes I end the
 Movie Graph Argument here. I mean I consider this equivalent to false,
 and thus as enough for showing COMP+MAT implies false.
 MGA 3 is intended for those who believe that the movie can be
 conscious 'qua computatio'.

 Bruno


 The movie, in this case, is a very precise recording of the states of
 all of Alice's neurons and their interactions. Why wouldn't it be
 conscious? Someone once said to you don't confuse the territory with
 the map - and you very sagely asked what if the map is so detailed
 it is indistinguishable from the territory.

 A popular representation of the universe is a block universe, where
 all events exist in a 4D static representation that is forever
 timeless. A block universe contains conscious entities, who perceive
 time etc., at least according to your usual die hard materialist,
 don't you think? How does a block universe differ from your movie
 though?

 Note it is important not to rely on our intuition here. None of us has
 experience of movies with the level of resolution being discussed
 here. High-definition movies are distinctly lame by comparison.

 I guess I'll need MGA3!


 --  

 --- 
 -
 A/Prof Russell Standish  Phone 0425 253119 (mobile)
 Mathematics   
 UNSW SYDNEY 2052   [EMAIL PROTECTED]
 Australia    http://www.hpcoders.com.au
 --- 
 -

 

http://iridia.ulb.ac.be/~marchal/





Re: MGA 2

2008-11-25 Thread Kory Heath


On Nov 24, 2008, at 5:40 PM, Stathis Papaioannou wrote:
 The question turns on what is a computation and why it should have
 magical properties. For example, if someone flips the squares on a
 Life board at random and accidentally duplicates the Life rules does
 that mean the computation is carried out?

I would say no. But of course, the real question is, Why does it  
matter? If I'm reading you correctly, you're taking the view that  
it's the pattern of bits that matters, not what created it (or  
caused it, or computed it, etc.)

It would help me if I had a clearer idea of how you view  
consciousness. I assume that, for you, if someone flips the squares on  
a Life board at random and creates the expected chaos, there's no  
consciousness there, but that there are certain configurations that  
could arise (randomly) that you would consider conscious. I assume  
that these patterns would show some kind of regularity - some kind of  
law-like behavior.

It's not easy for me to explain why I think it matters what kind of  
process (or in Platonia, what kind of abstract computation) generated  
that order. But it's also not easy for me to understand the  
alternative view. During those stretches of time when the random field  
of bits is creating a pattern that you would call conscious, what do  
you *mean* when you say it's conscious? By definition, you can't mean  
anything about how it's reacting to its environment, or that it's  
doing something because of something else, etc.

 I think there is a partial zombie problem regardless of whether
 Unification or Duplication is accepted.

Can you elaborate on this? What partial zombie problem do you see that  
Unification doesn't address? And do you think that the move away from  
physical reality to mathematical reality solves that problem? If  
so, how?

-- Kory





Re: MGA 2

2008-11-24 Thread Bruno Marchal


Le 24-nov.-08, à 02:39, Russell Standish a écrit :


 On Sun, Nov 23, 2008 at 03:59:02PM +0100, Bruno Marchal wrote:

 I would side with Kory that a looked up recording of conscious
 activity is not conscious.



 I agree with you. The point here is just that MEC+MAT implies it.


 This I don't follow. I would have thought it implies the opposite.


MGA 1 shows that MEC+MAT implies lucky Alice is conscious (during the 
exam). OK?
MGA 2 shows that MEC+MAT implies Alice is dreaming (and thus conscious) 
when the film is projected. OK?
I take the looked-up recording as identical (with respect to the 
reasoning) to a projection of the movie.

Of course I don't believe that a projection of a filmed computation is 
conscious 'qua computatio'. It is so absurd that sometimes I end the 
Movie Graph Argument here. I mean I consider this equivalent to false, 
and thus as enough for showing COMP+MAT implies false.
MGA 3 is intended for those who believe that the movie can be 
conscious 'qua computatio'.

Bruno



http://iridia.ulb.ac.be/~marchal/





Re: MGA 2

2008-11-24 Thread Günther Greindl



Quentin Anciaux wrote:

 If infinities are at play... what is a MAT-history ? it can't even be
 written.

Agreed. And that is why we should be more reluctant to drop COMP than to 
drop MAT.

But IF we drop COMP, we could accept unwriteable MAT-histories.

Cheers,
Günther




Re: MGA 2

2008-11-24 Thread Bruno Marchal


On 24 Nov 2008, at 16:11, Günther Greindl wrote:




 Quentin Anciaux wrote:

 If infinities are at play... what is a MAT-history ? it can't even be
 written.

 Agreed. And that is why we should be more reluctant to drop COMP  
 than to
 drop MAT.

 But IF we drop COMP, we could accept unwriteable MAT-histories.


Yes. You could define precise mathematical unwriteable MAT-histories.  
Mathematical logicians already have the tools for managing Newtonian  
MAT-histories. You will need a logic with a non-enumerable alphabet.  
Good luck with the non-enumerable typo errors :)
But no problem. I find this implausible, but it can be done consistently.

  COMP is a bit like consistency from Peano Arithmetic's first-person  
view on its third-person description (its clothes, or its Gödel number,  
or its program): IF true, then its falsity is consistent.
COMP is the ontic truth of YES DOCTOR, and it entails (provably, with  
some vocabulary definitions) the intrinsic RIGHT, for machines, to  
say NO to the doctor, and the ethical obligation to respect those  
who say NO.

I have no problem with MAT believers, only with COMP+MAT believers.
Note also that, even with just COMP, the first-person OM lives  
unwriteable stories, so those tools will be used even within the  
framework of COMP.
And I can understand, through comp, the roots of the belief that comp  
is false. Actually there is a sense in which, from the first-person  
point of view, comp *is* false. The first person that you can (in a  
proper mathematical way) associate to a machine already does not, or  
cannot, believe in the truth of comp. This I can elaborate later, but  
it needs more technique.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-11-24 Thread Bruno Marchal

Hi  Günther,



 I think you are correct, but allowing the observer to be mechanically
 described as obeying the wave equation (which solutions obeys to  
 comp),

 Hmm well if you have a basis, yes; - but naked infinite-dimensional
 Hilbert Space (the everything in QM)?


You put your finger on a problem I have with QM. I will make a  
confession: I don't believe QM is really Turing universal.
The universal quantum rotation does not generate any interesting  
computations!
I am open, say, to the idea that quantum universality needs  
measurement, and this could only exist internally. So the naked  
infinite-dimensional Hilbert space + the universal wave (rotation,  
unitary transformation) is a simpler ontology than arithmetical truth.
Yet, even on the vacuum, from the inside it gives all the  
non-linearities you need to build arithmetic ... and consciousness.





 With MAT we do not only
 concentrate on OMs (as with COMP) but on all states (which maybe don't
 have an OM)

I have no idea what you are trying to say. With comp, we have a  
(non-denumerable) infinity of computations, going through a  
(denumerable) infinity of states, and only a few of them, I would say,  
will have a 1-OM role or a 3-OM role. An even smaller minority (a  
priori) will belong to sharable computations (physical realities).






 I mean Everett is really SWE+COMP.

 Ok I have not looked at it this way yet - how does COMP enter the
 picture automatically in the Everett interpretation?
 I am missing
 something here. Do you mean because all the solutions are computable?
 (but see objection above)

There are two ways COMP enters the picture in Everett or QM:
- When Everett showed the consistency of the intersubjective reports in  
the case of many observers doing experiments together, he used  
machine-like observers. He had to assume the observers have the  
capacity to distinguish 0 and 1, and have memories of the results of  
experiments. In his long-version paper everything is explained in some  
detail.
- The solutions of the SWE are computable. They do not go outside the  
F_i and W_i; the SWE does not refute the Church thesis.



 With MAT we haven't (except bibles, myth, etc.). There is no standard
 notion of mat histories,

 I agree - that is why I think COMP is a better guess than MAT -  
 although
 I still have some quibbles ...


Quibbles happen. Sure :)





 deployment with comp). To have MAT correct, you have to accept not  
 only
 actual infinities, but concrete actual infinities that you cannot
 approximate with Turing machine, nor with Turing Machine with oracle.
 You are a bit back to literal angels and fairies ...

 Yes, we agree.

 As I said many times, COMP is my favorite working *hypothesis*. It  
 is my
 ...
 MAT has been a wonderful methodological assumption, but it has always
 being incoherent, or eliminativist on the mind.

 Ok. But what do you think of the following: Bertrand Russell's neutral
 monism (also Feigl and others) is an interesting metaphysical  
 theory:
 one would have a basic mind-stuff - protoexperientials - which would
 follow the laws of comp.

Ontically, all we need are 0, the successor and the successor's laws,  
addition and multiplication.
With this you already have a sort of God which loses himself (again  
and again) in its creation ...

I have no problem calling the comp ontic a neutral monism, if this is  
not used to eliminate, once again, the first persons.



 It would not be a dualism, it would be mind-monism, but the objects
 being computed would not be OMs directly but some kind of basic
 mind-components - this idea is not new, in fact these objects would
 correspond to the dharmas of yogacara (and also Theravada Buddhism,
 but not so clearly there). (see
 http://en.wikipedia.org/wiki/ 
 Dharmas#Dharmas_in_Buddhist_phenomenology)


Imo, the Yogacara is excellent. I have already given the reference of  
the wonderful book by Wendy Doniger O'Flaherty, Dreams, Illusion, and  
Other Realities, which is good on that subject. There are many books  
comparing Plotinus and some Eastern conceptions of reality.




 One would lose the wonderful OM-COMP correspondence (which I think  
 is an
 important feature of your COMP)

OM, observer moment, is an expression introduced by Bostrom. With  
comp, I have made an attempt to (re)define the OMs. The original  
first-person OM of Bostrom can be recast more or less in terms of a  
first person having proved some (Sigma_1) sentence. But this does not  
work well; you have to consider fibers on their extensions, and so on  
(so it is a bit of an open problem, which I bypass by the interview of  
the machine). 3-OMs are simpler: they are just the Sigma_1 sentences.  
They correspond to the states accessible by the Universal Dovetailer.  
You can see the UD as a theorem prover proving all the true Sigma_1  
sentences. (Of course the Löbian machines generated through those  
proofs prove much more complex sentences than the UD. I will have to  
come back on this, cf. Searle's error in this 

Re: MGA 2

2008-11-24 Thread Kory Heath


On Nov 23, 2008, at 11:24 AM, Brent Meeker wrote:
 Kory Heath wrote:
 Or maybe I'm still misdiagnosing the problem. Is anyone arguing that,
 when you play back the lookup table like a movie, this counts as
 performing all of the Conway's Life computations a second time?

 Why shouldn't it?

Please see my recent response to Bruno. If we perform a complex  
computation which results in placing the integer 5 into some memory  
variable, and then later we copy the contents of that memory variable  
to some other location in memory, in what sense are we re-performing  
the original complex computation?
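
A toy sketch of the distinction (an editorial illustration; the
step-logging is invented for the example): copying a stored result
performs none of the arithmetic steps that produced it.

```python
def compute_five(trace):
    """A multi-step computation whose result happens to be 5; every
    arithmetic step is appended to trace."""
    acc = 0
    for k in (2, 3):
        acc += k
        trace.append(f"add {k} -> {acc}")
    return acc

steps = []
x = compute_five(steps)  # the computation itself: two logged steps
y = x                    # a bare copy of the stored value: nothing logged
print(x, y, steps)       # 5 5 ['add 2 -> 2', 'add 3 -> 5']
```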

-- Kory





Re: MGA 2

2008-11-24 Thread Kory Heath


On Nov 24, 2008, at 3:28 AM, Bruno Marchal wrote:
 MGA 1 shows that MEC+MAT implies lucky Alice is conscious (during the
 exam). OK?
 MGA 2 shows that MEC+MAT implies Alice is dreaming (and thus  
 conscious)
 when the film is projected. OK?

I don't mean to hold up the show, but I'm still stuck here. I don't  
understand how Lucky Alice should be viewed as conscious in the  
context of MEC+MAT.

In a different message, you said this:

 But to go in the
 detail here would confront us with the not simple task of defining
 more precisely what is a computation, or what we will count has two
 identical computations in the deployment.

As complex as that task may be, I'm beginning to think that I can't  
get past MGA 1 without tackling it.

Imagine that you have a grid of bits, and at each tick of the clock,  
each bit is randomly turned on or off using a pseudorandom number  
generator with a very long periodicity. Imagine that for some stretch  
of time, the bits in the grid act as if they were following the  
rules of Conway's Life. Are Conway's Life computations in fact being  
performed? I thought "obviously no". The majority answer here seems to  
be "obviously yes".
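
One way to sharpen the question (an editorial sketch, assuming a small
toroidal grid): whether two successive frames are consistent with the Life
rule is a property of the bits alone, and says nothing about whether the
rule was actually applied to produce them.

```python
def life_step(grid):
    """One step of Conway's Life on a toroidal (wrap-around) grid."""
    n = len(grid)
    nxt = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            live = sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            nxt[i][j] = 1 if live == 3 or (grid[i][j] == 1 and live == 2) else 0
    return nxt

def follows_life_rule(frame_a, frame_b):
    """True iff frame_b is exactly what the rule dictates after frame_a --
    a check that luck and genuine computation satisfy equally well."""
    return life_step(frame_a) == frame_b

blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
print(follows_life_rule(blinker, life_step(blinker)))  # True by construction
all_ones = [[1] * 5 for _ in range(5)]
print(follows_life_rule(all_ones, all_ones))  # False: overcrowded cells die
```

Whether a randomly generated sequence that happens to pass this check
thereby counts as performing the computation is exactly what is in dispute.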

Suppose that we perform a very complex computation, and the result is  
the integer 5. Should any computation that results in 5 be viewed  
as performing the former computation?

Chalmers' paper "Does a Rock Implement Every Finite-State Automaton?"  
seems directly relevant to all of these Lucky Alice thought  
experiments. (Is it?) I need to re-read that paper.

I have no doubt that my thinking on these topics is confused. Where  
should I begin?

-- Kory
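
Kory's pseudorandom-grid setup can be made concrete. Below is a minimal
Python sketch (the toroidal grid, the function names, and the grid size are
all illustrative assumptions, not anything from the thread): one function
applies the actual Life rule, the other just draws bits from a pseudorandom
generator, and we then ask whether a random update happened, by luck, to
coincide with the rule.

```python
import random

def life_step(grid):
    """One synchronous update of Conway's Life on a toroidal 0/1 grid."""
    n = len(grid)
    nxt = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Count the eight neighbours, wrapping at the edges.
            s = sum(grid[(i + di) % n][(j + dj) % n]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            nxt[i][j] = 1 if s == 3 or (s == 2 and grid[i][j]) else 0
    return nxt

def random_step(n, rng):
    """Set every cell on or off from a pseudorandom stream, ignoring neighbours."""
    return [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]

rng = random.Random(0)
n = 4
grid = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]

# Kory's question: if random_step happens, by sheer luck, to equal
# life_step(grid), was a Life computation "in fact performed"?
lucky = (random_step(n, rng) == life_step(grid))
```

For a 4x4 grid the coincidence has probability 2^-16 per step (assuming the
generator's bits are uniform), so "lucky" runs are rare but not impossible,
which is exactly the intuition the thought experiment probes.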





Re: MGA 2

2008-11-24 Thread Brent Meeker

Kory Heath wrote:
 
 On Nov 24, 2008, at 3:28 AM, Bruno Marchal wrote:
 MGA 1 shows that MEC+MAT implies lucky Alice is conscious (during the
 exam). OK?
 MGA 2 shows that MEC+MAT implies Alice is dreaming (and thus  
 conscious)
 when the film is projected. OK?
 
 I don't mean to hold up the show, but I'm still stuck here. I don't  
 understand how Lucky Alice should be viewed as conscious in the  
 context of MEC+MAT.
 
 In a different message, you said this:
 
 But to go in the
 detail here would confront us with the not simple task of defining
 more precisely what is a computation, or what we will count has two
 identical computations in the deployment.
 
 As complex as that task may be, I'm beginning to think that I can't  
 get past MGA 1 without tackling it.
 
 Imagine that you have a grid of bits, and at each tick of the clock,  
 each bit is randomly turned on or off using a pseudorandom number  
 generator with a very long periodicity. Imagine that for some stretch  
 of time, the bits in the grid act as if they were following the  
 rules to Conway's Life. Are Conway's Life computations in fact being  
 performed? I thought obviously no. The majority answer here seems to  
 be obviously yes.
 
 Suppose that we perform a very complex computation, and the result is  
 the integer 5. Should any computation that results in 5 be viewed  
 as performing the former computation?
 
 Chalmers's paper Does a Rock Implement Every Finite-State Automaton?  
 seems directly relevant to all of these Lucky Alice thought  
 experiments. (Is it?) I need to re-read that paper.
 
 I have no doubt that my thinking on these topics is confused. Where  
 should I begin?
 
 -- Kory

I share your reservations, Kory.  In outline, Bruno's argument so far seems to 
be (I'm sure Bruno will correct me if I get this wrong):

1. Assume that consciousness supervenes on the material realization of some 
complex computations.

2. These computations could be performed stepwise by some machine that only 
does 
arithmetic and consciousness would still supervene.

3. The order of the steps matter, but not the time interval between steps.  So 
even if the steps are discrete and separated in time consciousness will still 
supervene.

4. Since many different mechanisms can realize the sequence of steps the 
consciousness must supervene on the computation however the sequence is 
realized.

5. The sequence of steps could be realized by accident, i.e. a random number 
generator.

6. The sequence of steps could be realized by a recording of the original, 
conscious sequence.

7. 5 & 6 supra are absurd (i.e. false), therefore there is an implicit 
contradiction in 1.

But I don't find this compelling.  First, 5 & 6 are not contradictions - they 
just violate our intuitions about what consciousness should be like.  But what 
is it about them that violates our intuitions?

(a) They have divorced consciousness from its context, i.e. its potential or 
actual interaction with an environment.

(b) They eliminate the temporal continuity, so that the consciousness is sliced 
into discrete observer moments which are regarded as states in a state 
machine.

(c) They eliminate causal connections within the process that is supposed to 
realize consciousness.

The causal connections are broken by imagining coincidences that are so 
improbable that their probability of happening within the lifetime of the 
universe is infinitesimal - in other words at a level where we have no way to 
distinguish improbable from impossible.

Having shown there is something counter-intuitive implicit in 1 thru 7 supra, 
we're invited to conclude that consciousness supervenes on pure, abstract 
computation which takes place in an arithmetical Platonia.  But that also 
violates a lot of intuitions.  Of course I'm not against violating intuitions, 
but I expect some predictive power in exchange.

Brent




Re: MGA 2

2008-11-24 Thread Brent Meeker

Kory Heath wrote:
 
 On Nov 23, 2008, at 11:24 AM, Brent Meeker wrote:
 Kory Heath wrote:
 Or maybe I'm still misdiagnosing the problem. Is anyone arguing that,
 when you play back the lookup table like a movie, this counts as
 performing all of the Conway's Life computations a second time?
 Why shouldn't it?
 
 Please see my recent response to Bruno. If we perform a complex  
 computation which results in placing the integer 5 into some memory  
 variable, and then later we copy the contents of that memory variable  
 to some other location in memory, in what sense are we re-performing  
 the original complex computation?

That's different since, ex hypothesi, the original calculation was complex.  So 
we can say just putting the answer, 5, in a register is not repeating the 
calculation based on some complexity measure of the process.

Brent




Re: MGA 2

2008-11-24 Thread Stathis Papaioannou

2008/11/24 Kory Heath [EMAIL PROTECTED]:


 On Nov 22, 2008, at 6:52 PM, Stathis Papaioannou wrote:
 Which leads again to the problem of partial zombies. What is your
 objection to saying that the looked up computation is also conscious?
 How would that be inconsistent with observation, or lead to logical
 contradiction?

 I can only answer this in the context of Bostrom's Duplication or
 Unification question. Let's say that within our Conway's Life
 universe, one particular creature feels a lot of pain. After the run
 is over, if we load the Initial State back into the array and iterate
 the rules again, is another experience of pain occurring? If you think
 yes, you accept Duplication by Bostrom's definition. If you say
 no, you accept Unification.

I accept Unification, though for different reasons to those discussed
in these threads.

 Duplication is more intuitive to me, and you might say that my thought
 experiment is aimed at Duplicationists. In that context, I don't
 understand why playing back the lookup table as a movie should create
 another experience of pain. None of the actual Conway's Life
 computations are being performed. We could just print them out on
 (very large) pieces of paper and flip them like a book. Is this
 supposed to generate an experience of pain? What if we just lay out
 all the pages in a row and move our eyes across them? What if we lay
 them out randomly and move our eyes across them? And so on.

If the GOL results in consciousness, then I don't see how you could
consistently claim that such activities don't generate consciousness.
The question turns on what is a computation and why it should have
magical properties. For example, if someone flips the squares on a
Life board at random and accidentally duplicates the Life rules does
that mean the computation is carried out? How would you know by
observation if this was happening just by luck? You could argue that
after a short period of observation the Life board would become
completely disorganised, but what about the case of a competent
square-flipper who has a condition that might render him amnesic at
any moment? What about the case of having a vast army of random
square-flippers operating multiple boards, so that at least one of
them necessarily follows the correct rules?

 I argue
 that if running the original computation a second time would create a
 second experience of pain, we can generate a partial zombie.

 Stathis, Brent, and Bruno have all suggested that there is no partial
 zombie problem in my argument. Is that because you all accept
 Unification? Or am I missing something else?

I think there is a partial zombie problem regardless of whether
Unification or Duplication is accepted. Interestingly, Nick Bostrom
doesn't seem to have a problem with the idea of partial zombies:

http://www.nickbostrom.com/papers/experience.pdf



-- 
Stathis Papaioannou




Re: MGA 2

2008-11-24 Thread Kory Heath


On Nov 24, 2008, at 5:26 PM, Brent Meeker wrote:
 Kory Heath wrote:
 On Nov 23, 2008, at 11:24 AM, Brent Meeker wrote:
 Kory Heath wrote:
 Or maybe I'm still misdiagnosing the problem. Is anyone arguing  
 that,
 when you play back the lookup table like a movie, this counts as
 performing all of the Conway's Life computations a second time?
 Why shouldn't it?

 Please see my recent response to Bruno. If we perform a complex
 computation which results in placing the integer 5 into some memory
 variable, and then later we copy the contents of that memory variable
 to some other location in memory, in what sense are we re-performing
 the original complex computation?

 That's different since, ex hypothesi, the original calculation was  
 complex.  So
 we can say just putting the answer, 5, in a register is not  
 repeating the
 calculation based on some complexity measure of the process.

But the Conway's Life calculations are complex in the sense that I  
meant the term. If we have a grid of cells filled with a pattern of  
bits, and we point at one particular cell and ask, If we iterate the  
Conway's Life rule on this grid a trillion times, will this bit be on  
or off?, we have to perform a bunch of computations to answer the  
question. If we store the results of those computations, and then  
later someone points at that same cell and asks the same question, and  
I just look up the answer, I don't see how we can say that that act of  
looking up the answer counts as re-performing the original  
computation. Are you arguing that it does?

-- Kory
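
Kory's distinction between computing an answer and looking it up can be
counted explicitly. Here is a small Python sketch (the operation counter,
the grid size, and all names are illustrative assumptions): the first query
pays for every rule application; asking the same question again only reads
the stored table and performs zero Life updates.

```python
import random

def life_step(grid, ops):
    """One Life step on a toroidal 0/1 grid; ops[0] counts cell updates computed."""
    n = len(grid)
    nxt = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = sum(grid[(i + di) % n][(j + dj) % n]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            nxt[i][j] = 1 if s == 3 or (s == 2 and grid[i][j]) else 0
            ops[0] += 1
    return nxt

rng = random.Random(2)
n, steps = 5, 20
initial = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]

# First query: iterate the rule and record every state (the lookup table).
ops = [0]
history = [initial]
for _ in range(steps):
    history.append(life_step(history[-1], ops))
first_answer = history[steps][1][1]
ops_during_run = ops[0]             # n * n * steps rule applications

# Later query of the same cell: a pure read, no rule applied at all.
second_answer = history[steps][1][1]
assert second_answer == first_answer
assert ops[0] == ops_during_run     # zero Life computations re-performed
```

The lookup reproduces the answer while the counter stays frozen, which is
the sense in which "looking up the answer" is not a re-performance of the
original computation.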





Re: MGA 2

2008-11-24 Thread Brent Meeker

Kory Heath wrote:
 On Nov 24, 2008, at 5:26 PM, Brent Meeker wrote:
   
 Kory Heath wrote:
 
 On Nov 23, 2008, at 11:24 AM, Brent Meeker wrote:
   
 Kory Heath wrote:
 
 Or maybe I'm still misdiagnosing the problem. Is anyone arguing  
 that,
 when you play back the lookup table like a movie, this counts as
 performing all of the Conway's Life computations a second time?
   
 Why shouldn't it?
 
 Please see my recent response to Bruno. If we perform a complex
 computation which results in placing the integer 5 into some memory
 variable, and then later we copy the contents of that memory variable
 to some other location in memory, in what sense are we re-performing
 the original complex computation?
   
 That's different since, ex hypothesi, the original calculation was  
 complex.  So
 we can say just putting the answer, 5, in a register is not  
 repeating the
 calculation based on some complexity measure of the process.
 

 But the Conway's Life calculations are complex in the sense that I  
 meant the term. If we have a grid of cells filled with a pattern of  
 bits, and we point at one particular cell and ask, If we iterate the  
 Conway's Life rule on this grid a trillion times, will this bit be on  
 or off?, we have to perform a bunch of computations to answer the  
 question. If we store the results of those computations, and then  
 later someone points at that same cell and asks the same question, and  
 I just look up the answer, I don't see how we can say that that act of  
 looking up the answer counts as re-performing the original  
 computation. Are you arguing that it does?

 -- Kory
   
No, I'm saying it doesn't. 

Brent




Re: MGA 2

2008-11-23 Thread Stathis Papaioannou

2008/11/23 Jason Resch [EMAIL PROTECTED]:

 I would side with Kory that a looked up recording of conscious activity is
 not conscious.  My argument being that static information has no implicit
 meaning because there are an infinite number of ways a bit string can be
 interpreted.  However in a running program the values of the bits do have
 implicit meaning according to the rules of the state machine.

One part of the system has meaning relative to another part. However,
what if we consider the whole system? We could then say that the left
half, computer A, has meaning relative to the right half, computer B.
It doesn't matter that an outside observer could come up with
infinitely many meanings, any more than it matters that an alien could
come up with infinitely many interpretations of an English sentence.


-- 
Stathis Papaioannou




Re: MGA 2

2008-11-23 Thread Bruno Marchal


On 22 Nov 2008, at 17:27, Kory Heath wrote:



 On Nov 22, 2008, at 7:26 AM, Telmo Menezes wrote:
 Ok, but what if consciousness is a computational process that
 potentially depends on the entire state of the universe? Let's  
 suppose
 for example that quantum particles are the fundamental building
 blocks, i.e. the hardware, and that consciousness is a computational
 process that emerges from their interactions. We still have MEC+MAT,
 and due to quantum entanglement, any quantum particle in the universe
 can potentially interfere in the consciousness computation. How can
 you store Bruno's film in such a universe?

 This is why I prefer to cast these thought experiments in terms of
 finite cellular automata. All of the issues you mention go away. (One
 can argue that finite cellular automata can't contain conscious
 beings, but that's just a rejection of MEC, which we're supposed to be
 keeping.)

 I'm not entirely sure I understand the details of Bruno's Movie-Graph
 (yet), so I don't know if it's equivalent to the following thought
 experiment:



It seems to me equivalent indeed, in the case where you project a part of  
the movie on the broken part of the optical Boolean graph.




 Let's say that we run a computer program that allocates a very large
 two-dimensional array, fills it with a special Initial State (which is
 hard-coded into the program), and then executes the rules of Conway's
 Life on the array for a certain number of iterations. Let's say that
 the resulting universe contains creatures that any garden-variety
 mechanist would agree are fully conscious. Let's say that we run the
 universe for at least enough iterations to allow the creatures to move
 around, say a few things, experience a few things, etc. Finally, let's
 say that we store the results of all of our calculations in a (much
 larger) area of memory, so that we can look up what each bit did at
 each tick of the clock.

 Now let's say that we play back the stored results of our
 calculations, like a movie. At each tick of the clock t, we just copy
 the bits from time t of our our stored memory into our two-dimensional
 array. There are no Conway's Life calculations going on here. We're
 just copying bits, one time-slice at a time, from our stored memory
 into our original grid. It is difficult for a mechanist to argue that
 any consciousness is happening here. It's functionally equivalent to
 just printing out each time-slice onto a (huge) piece of paper, and
 flipping through those pages like a picture book and watching the
 animated playback. It's hard for a mechanist to argue that this
 style of flipping pages in a picture book can create consciousness.

 Now let's imagine that we compute the Conway's Life universe again -
 we load the Initial State into the grid, and then iteratively apply
 the Conway's Life rule to the grid. However, for some percentage of
 the cells in the grid, instead of looking at the neighboring cells and
 updating according to the Conway's Life rule, we instead just pull the
 data from the lookup table that we created in the previous run.

 If we apply the Conway's Life rule to all the cells, it seems like the
 creatures in the grid ought to be conscious. If we don't apply the
 Life rule to any of the cells, but just pull the data from our
 previously-created lookup table, it seems like the creatures in the
 grid are not conscious. But if we apply the Life rule to half of the
 cells and pull the other half from the lookup table, there will
 (probably) be some creature in the grid who has half of the cells in
 its brain being computed by the Life rule, and half being pulled from
 the lookup table. What's the status of this creature's consciousness?

 -- Kory



 

http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-11-23 Thread Bruno Marchal


On 22 Nov 2008, at 21:45, Brent Meeker wrote:


 Telmo Menezes wrote:
 Quentin,

 Ok, but what if consciousness is a computational process that
 potentially depends on the entire state of the universe? Let's  
 suppose
 for example that quantum particles are the fundamental building
 blocks, i.e. the hardware, and that consciousness is a computational
 process that emerges from their interactions. We still have MEC+MAT,
 and due to quantum entanglement, any quantum particle in the universe
 can potentially interfere in the consciousness computation. How can
 you store Bruno's film in such a universe?

 Telmo.


 But brain functions are essentially classical (see Tegmark's  
 paper).  Thought
 would be impossible if quantum entanglement was more than a  
 perturbation.  From
 a classical viewpoint, your brain can only be causally affected by a  
 finite
 portion of the universe.

Right. And , even if the brain is a quantum computer, the argument  
will go through, if only because a quantum computer can be simulated  
by a classical computer (albeit very slowly: but this is not relevant,  
the UD is very slow but first person cannot be aware of that). As  
Quentin suggested you have to identify yourself completely with the  
entire quantum multiverse to prevent the conclusion, and even in that  
case, this has to be extracted from the MEC part of the MEC+MAT  
hypothesis, which is the point. But yes in that case you can postulate  
a sort of primitive matter having some relevance with your  
consciousness. (Making them both very mysterious, and making their  
link also rather mysterious, btw).
MGA 1 and MGA 2 are sometimes confronted with super ad hoc moves,  
which, from a logical point of view, have to be taken into account. I  
expect I will have to go up to MGA 4, but I can imagine adding some  
MGA 5 to make such moves invalid, relative to some explicit principle  
of inductive rationality. Sort of a vaccine against such super ad hoc  
moves. They appear also against many-worlds, against experiments  
testing Bell's inequality, etc. Also, if you want to use entanglement  
throughout the whole universe (or multiverse), you will have  
difficulties in relating measurements to conscious memories of  
experiences (though of course this is not yet solved in the pure comp  
view either), I think.

So Tegmark's work is not really relevant here. A good thing for me,  
because although I tend to believe that Tegmark is accurate, I don't  
have the personal knowledge of practical quantum mechanics to be  
assured personally of the meaningfulness of the chosen units.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-11-23 Thread Bruno Marchal


On 22 Nov 2008, at 22:10, Brent Meeker wrote:

 If we apply the Conway's Life rule to all the cells, it seems like  
 the
 creatures in the grid ought to be conscious. If we don't apply the
 Life rule to any of the cells, but just pull the data from our
 previously-created lookup table, it seems like the creatures in the
 grid are not conscious. But if we apply the Life rule to half of the
 cells and pull the other half from the lookup table, there will
 (probably) be some creature in the grid who has half of the cells in
 its brain being computed by the Life rule, and half being pulled from
 the lookup table. What's the status of this creature's consciousness?

 I don't think it's a relevant distinction.  Even when the game-of- 
 life is
 running on the computer the adjacent cells are not physically  
 causing the
 changes from on to off and vice versa - that function is via the  
 program
 implemented in the computer memory and cpu.  So why should it make a  
 difference
 whether those state changes are decided by gates in the cpu or a  
 huge look-up table?


I agree.

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-11-23 Thread Kory Heath


On Nov 22, 2008, at 1:56 PM, Brent Meeker wrote:
 But how would they agree on this?  If we knew the answer to that we  
 wouldn't
 need to be considering these (nomologically) impossible thought  
 experiments.

They would use the same criteria that they use to decide that humans  
are conscious in our own world, which would be a combination of  
observing outward behavior (Turing-Test), and observing brain states.  
In one sense, that would be harder, because the conscious beings in  
the Life universe will look very different than us. In another sense  
it would be easier, because they'd have access to every bit of the  
Life universe.

Am I confusing mechanism with something else? Functionalism?  
Computationalism?

-- Kory





Re: MGA 2

2008-11-23 Thread Kory Heath


On Nov 22, 2008, at 6:52 PM, Stathis Papaioannou wrote:
 Which leads again to the problem of partial zombies. What is your
 objection to saying that the looked up computation is also conscious?
 How would that be inconsistent with observation, or lead to logical
 contradiction?

I can only answer this in the context of Bostrom's Duplication or  
Unification question. Let's say that within our Conway's Life  
universe, one particular creature feels a lot of pain. After the run  
is over, if we load the Initial State back into the array and iterate  
the rules again, is another experience of pain occurring? If you think  
yes, you accept Duplication by Bostrom's definition. If you say  
no, you accept Unification.

Duplication is more intuitive to me, and you might say that my thought  
experiment is aimed at Duplicationists. In that context, I don't  
understand why playing back the lookup table as a movie should create  
another experience of pain. None of the actual Conway's Life  
computations are being performed. We could just print them out on  
(very large) pieces of paper and flip them like a book. Is this  
supposed to generate an experience of pain? What if we just lay out  
all the pages in a row and move our eyes across them? What if we lay  
them out randomly and move our eyes across them? And so on. I argue  
that if running the original computation a second time would create a  
second experience of pain, we can generate a partial zombie.

Stathis, Brent, and Bruno have all suggested that there is no partial  
zombie problem in my argument. Is that because you all accept  
Unification? Or am I missing something else?

-- Kory





Re: MGA 2

2008-11-23 Thread Bruno Marchal

On 23 Nov 2008, at 04:46, Jason Resch wrote:



 On Sat, Nov 22, 2008 at 8:52 PM, Stathis Papaioannou [EMAIL PROTECTED] 
  wrote:

 2008/11/23 Kory Heath [EMAIL PROTECTED]:

  If we apply the Conway's Life rule to all the cells, it seems like  
 the
  creatures in the grid ought to be conscious. If we don't apply the
  Life rule to any of the cells, but just pull the data from our
  previously-created lookup table, it seems like the creatures in the
  grid are not conscious. But if we apply the Life rule to half of the
  cells and pull the other half from the lookup table, there will
  (probably) be some creature in the grid who has half of the cells in
  its brain being computed by the Life rule, and half being pulled  
 from
  the lookup table. What's the status of this creature's  
 consciousness?

 Which leads again to the problem of partial zombies. What is your
 objection to saying that the looked up computation is also conscious?
 How would that be inconsistent with observation, or lead to logical
 contradiction?


 I would side with Kory that a looked up recording of conscious  
 activity is not conscious.



I agree with you. The point here is just that MEC+MAT implies it.



  My argument being that static information has no implicit meaning  
 because there are an infinite number of ways a bit string can be  
 interpreted.


The point is, or will be, that insofar as the string is complex enough  
to be part of some history, and to bet on some continuation of that  
history, she will not feel static at all, from her point of view.



  However in a running program the values of the bits do have  
 implicit meaning according to the rules of the state machine.


Relatively to you, and relatively to most common probable history/ 
computation that you share with that running program.




 What makes this weird is that in one respect our universe might be  
 considered a 4-d recording, containing a record of computations  
 performed by neurons and brains across one of its dimensions.   
 Perhaps this is further evidence in support of Bruno's theory: mind  
 cannot exist in a physical universe because it is just a recording  
 of a computation, and only the actual computation itself can create  
 consciousness.



I would say that all computations exist (already in arithmetical  
truth), and the actual is a possible computation as seen from inside.  
Actuality lasts as long as consistency. Consciousness differentiates  
along the path, as seen in the path. This is more related to the first  
steps of the UDA than to the MGA.

If we abandon physical supervenience, we have to define a sufficiently  
good notion of computational supervenience. But the UD and its  
deployment gives not much choice.

We have to go from

   consciousness at (dx,dt)  =  physical state at (dx,dt)     (sup-phys)

to

   consciousness of (dx,dt)  =  computational state           (sup-comp)

And we have to explain the appearance of both consciousness at (dx,dt)  
and physical state at (dx,dt) from sup-comp. With a naïve view on  
computations, there are too many white rabbits, but computer science  
and logic can be used to show this issue is far from simple.

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-11-23 Thread A. Wolf

 We have to go from consciousness at (dx,dt)

Since when can consciousness be an instantaneous event?

Anna





Re: MGA 2

2008-11-23 Thread Kory Heath


On Nov 22, 2008, at 1:10 PM, Brent Meeker wrote:
 So why should it make a difference
 whether those state changes are decided by gates in the cpu or a  
 huge look-up table?

The difference is in the number of times that the relevant computation  
was physically implemented. When you query the lookup table to get a  
bit, you are not performing the computation again. You're just viewing  
the result of the computation you did earlier. It seems to me that  
this matters for Duplicationists, but maybe not for Unificationists.

Or maybe I'm still misdiagnosing the problem. Is anyone arguing that,  
when you play back the lookup table like a movie, this counts as  
performing all of the Conway's Life computations a second time? In  
that case there would be nothing problematic about this thought  
experiment for Duplicationists or Unificationists. But I don't see how  
playing back the lookup table can count as implementing the Conway's  
Life computations.

-- Kory
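
The half-and-half case Kory raises elsewhere in the thread (half the cells
updated by the Life rule, half copied from a recording of an earlier run)
can also be sketched in Python. Assuming a deterministic rule, the hybrid
run reproduces exactly the same sequence of states as the pure run; only
the process producing them differs, which is what makes the creature's
status so hard to pin down. All names and the checkerboard mask are
illustrative assumptions.

```python
import random

def life_step(grid):
    """One Life step on a toroidal 0/1 grid."""
    n = len(grid)
    nxt = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = sum(grid[(i + di) % n][(j + dj) % n]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            nxt[i][j] = 1 if s == 3 or (s == 2 and grid[i][j]) else 0
    return nxt

def hybrid_step(grid, recorded_next, computed_mask):
    """Compute the rule everywhere, keep it only where the mask is set,
    and copy the recorded next state everywhere else."""
    n = len(grid)
    full = life_step(grid)
    return [[full[i][j] if computed_mask[i][j] else recorded_next[i][j]
             for j in range(n)] for i in range(n)]

rng = random.Random(1)
n = 6
grid = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]

# First run: pure computation, every state recorded (the lookup table).
history = [grid]
for _ in range(10):
    history.append(life_step(history[-1]))

# Second run: half the cells follow the rule, half are copied from the tape.
mask = [[(i + j) % 2 == 0 for j in range(n)] for i in range(n)]
replay = history[0]
for t in range(10):
    replay = hybrid_step(replay, history[t + 1], mask)

# The sequence of states is identical either way; only the process differs.
assert replay == history[10]
```

Since the rule is deterministic and the recording is of the same initial
state, any mixture of rule-following and tape-copying yields bit-identical
states, so no observation of the grid alone can distinguish the two runs.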





Re: MGA 2

2008-11-23 Thread Bruno Marchal


On 23 Nov 2008, at 16:06, A. Wolf wrote:


 We have to go from consciousness at (dx,dt)

 Since when can consciousness be an instantaneous event?


Oops! replace with (Dx,Dt).  I have no deltas.

Bruno




 Anna


 

http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-11-23 Thread Bruno Marchal


On 23 Nov 2008, at 15:48, Kory Heath wrote:



 On Nov 22, 2008, at 6:52 PM, Stathis Papaioannou wrote:
 Which leads again to the problem of partial zombies. What is your
 objection to saying that the looked up computation is also conscious?
 How would that be inconsistent with observation, or lead to logical
 contradiction?

 I can only answer this in the context of Bostrom's Duplication or
 Unification question. Let's say that within our Conway's Life
 universe, one particular creature feels a lot of pain. After the run
 is over, if we load the Initial State back into the array and iterate
 the rules again, is another experience of pain occurring? If you think
 yes, you accept Duplication by Bostrom's definition. If you say
 no, you accept Unification.

 Duplication is more intuitive to me, and you might say that my thought
 experiment is aimed at Duplicationists. In that context, I don't
 understand why playing back the lookup table as a movie should create
 another experience of pain. None of the actual Conway's Life
 computations are being performed. We could just print them out on
 (very large) pieces of paper and flip them like a book. Is this
 supposed to generate an experience of pain? What if we just lay out
 all the pages in a row and move our eyes across them? What if we lay
 them out randomly and move our eyes across them? And so on. I argue
 that if running the original computation a second time would create a
 second experience of pain, we can generate a partial zombie.

 Stathis, Brent, and Bruno have all suggested that there is no partial
 zombie problem in my argument. Is that because you all accept
 Unification? Or am I missing something else?


Unification, I would say. But we have to be careful: unification  
becomes duplication or n-plication if the computations diverge. This  
does not change the content of the experience of the person, which  
remains unique, but it can change the relative personal probabilities  
of such content. I wrote once: Y = || (multiplication of the future  
secures the past). Third person bifurcation of histories/computations  
= first person differentiation of consciousness. But to go into the  
details here would confront us with the not-simple task of defining  
more precisely what a computation is, or what we will count as two  
identical computations in the deployment. Eventually I bypass this  
hard question by asking directly what sound Löbian machines can  
think about that ... leading to AUDA (arithmetical UDA). But  
unification, in Bostrom's sense, is at play, from the first person  
experience. Alice dreamed of the Mushroom only once. But if we wake  
her up by projecting the end of the movie on an operational optical  
boolean graph, simultaneously (or not) in Washington and in Moscow,  
then, although the experience of the dream remains unique, the  
experience of remembering the dream will be multiplied by two: once  
in Moscow, once in Washington.

Bruno




 -- Kory


 

http://iridia.ulb.ac.be/~marchal/




--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: MGA 2

2008-11-23 Thread A. Wolf

 Since when can consciousness be an instantaneous event?

 Oops! replace with (Dx,Dt).  I have no deltas.

Yeah, but still.  I don't think consciousness can be freeze-framed 
mathematically like this.  I haven't been reading the conversation, 
though...I should probably try to catch up.

Anna





Re: MGA 2

2008-11-23 Thread Bruno Marchal


On 23 Nov 2008, at 17:23, A. Wolf wrote:


 Since when can consciousness be an instantaneous event?

 Oops! replace with (Dx,Dt).  I have no deltas.

 Yeah, but still.  I don't think consciousness can be freeze-framed
 mathematically like this.  I haven't been reading the conversation,
 though...I should probably try to catch up.


You are welcome.

You seem to know a bit of logic, so you could read the UDA + AUDA  
paper here:

http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html

Well, you arrive at the end (of the first part?) of a more-than-ten-  
year conversation, but it is NEVER too late :)

I am currently explaining the Movie Graph Argument, which is the 8th  
step of the Universal Dovetailer Argument. The UDA is supposed to  
show, or shows, that mechanism and physicalism (or materialism,  
naturalism) are incompatible. It shows that if mechanism is true,  
physics has to be derived from numbers and logic.
The AUDA is the same thing explained to, or by, a Löbian machine,  
which is a universal machine knowing she is universal (or, if you  
know logic: a Sigma_1 theorem prover which can prove all sentences of  
the shape S → Bew('S'), with S Sigma_1). Peano Arithmetic, the formal  
theory, can readily be transformed into such a finitely presentable  
machine.
From this we can extract a logic of the observable propositions and  
compare it with empirical quantum logic, making comp testable, and  
indeed already tested on its most weird consequences, retrospectively.

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-11-23 Thread Brent Meeker

Kory Heath wrote:
 
 On Nov 22, 2008, at 1:10 PM, Brent Meeker wrote:
 So why should it make a difference
 whether those state changes are decided by gates in the cpu or a  
 huge look-up table?
 
 The difference is in the number of times that the relevant computation  
 was physically implemented. When you query the lookup table to get a  
 bit, you are not performing the computation again. You're just viewing  
 the result of the computation you did earlier. It seems to me that  
 this matters for Duplicationists, but maybe not for Unificationists.
 
 Or maybe I'm still misdiagnosing the problem. Is anyone arguing that,  
 when you play back the lookup table like a movie, this counts as  
 performing all of the Conway's Life computations a second time? 

Why shouldn't it?  Suppose your recording device uses a compression algorithm 
and suppose the compression algorithm is so efficient the compressed recording 
is no bigger than the Conway's Life program plus the initial state information.
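Brent's point can be sketched concretely: for a deterministic system, a recording of the whole run can be "compressed" down to just the rule plus the initial state, and decompressing that recording is nothing other than re-running the computation. The update rule below is an arbitrary stand-in, not anything specific from the thread.

```python
# Sketch: a maximally compressed recording of a deterministic run is
# (initial state, length), and playback of it IS re-computation.

def step(x):
    # Any fixed deterministic update rule (illustrative choice).
    return (5 * x + 3) % 2 ** 16

def record(x0, n):
    """Run the computation for n steps, keeping every state."""
    trace = [x0]
    for _ in range(n):
        trace.append(step(trace[-1]))
    return trace

full_recording = record(7, 1000)   # 1001 explicitly stored states
compressed = (7, 1000)             # just initial state + run length

def decompress(compressed):
    x0, n = compressed
    return record(x0, n)           # 'playback' = re-running the rule

assert decompress(compressed) == full_recording
```

Under this kind of compression the line between "replaying a recording" and "performing the computation a second time" disappears, which is the force of the objection.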

Brent

In  
 that case there would be nothing problematic about this thought  
 experiment for Duplicationists or Unificationists. But I don't see how  
 playing back the lookup table can count as implementing the Conway's  
 Life computations.
 
 -- Kory
 
 
  
 





Re: MGA 2

2008-11-23 Thread Brent Meeker

Bruno Marchal wrote:
 
 On 23 Nov 2008, at 15:48, Kory Heath wrote:
 

 On Nov 22, 2008, at 6:52 PM, Stathis Papaioannou wrote:
 Which leads again to the problem of partial zombies. What is your
 objection to saying that the looked up computation is also conscious?
 How would that be inconsistent with observation, or lead to logical
 contradiction?
 I can only answer this in the context of Bostrom's Duplication or
 Unification question. Let's say that within our Conway's Life
 universe, one particular creature feels a lot of pain. After the run
 is over, if we load the Initial State back into the array and iterate
 the rules again, is another experience of pain occurring? If you think
 yes, you accept Duplication by Bostrom's definition. If you say
 no, you accept Unification.

 Duplication is more intuitive to me, and you might say that my thought
 experiment is aimed at Duplicationists. In that context, I don't
 understand why playing back the lookup table as a movie should create
 another experience of pain. None of the actual Conway's Life
 computations are being performed. We could just print them out on
 (very large) pieces of paper and flip them like a book. Is this
 supposed to generate an experience of pain? What if we just lay out
 all the pages in a row and move our eyes across them? What if we lay
 them out randomly and move our eyes across them? And so on. I argue
 that if running the original computation a second time would create a
 second experience of pain, we can generate a partial zombie.

 Stathis, Brent, and Bruno have all suggested that there is no partial
 zombie problem in my argument. Is that because you all accept
 Unification? Or am I missing something else?
 
 
 Unification, I would say. But we have to be careful, unification  
 becomes duplication or n-plication if the computations diverge. This  
 does not change the content of the experience of the person, which  
 remains unique, but it can change the relative personal probabilities  
 of such content. I wrote once: Y = || (multiplication of the future  
 secures the past).  Third person bifurcation of histories/computations  
 =  first person differentiation of consciousness. But to go in the  
 detail here would confront us with the not simple task of defining  
 more precisely what is a computation, or what we will count has two  
 identical computations in the deployment. Eventually I bypass this  
 hard question by asking directly what sound Lobian machines can  
 think about that ... leading to AUDA (arithmetical uda). But  
 unification, in Bostrom's sense, is at play, from the first person  
 experience. Alice dreamed of the Mushroom only once. But if we wake up  
 by projecting the end of the movie on an operational optical boolean  
 graph, simultaneously (or not) in Washington and in Moscow, then,  
 although the experience of the dreams remains unique, the experience  
 of remembering the dream will be multiplied by two. Indeed one in  
 Moscow, once in Washington.

Why do they count as two instances?  Because they supervene on physical 
processes that are spatially distinct?  That would assume that spacetime is 
fundamental.  Or is it because you assume that remembering the dream isn't a 
distinct process but must be mixed with other experiences related to the 
location?

Brent




Re: MGA 2

2008-11-23 Thread Bruno Marchal


On 23 Nov 2008, at 21:21, Brent Meeker wrote:


 Bruno Marchal wrote:

 On 23 Nov 2008, at 15:48, Kory Heath wrote:


 On Nov 22, 2008, at 6:52 PM, Stathis Papaioannou wrote:
 Which leads again to the problem of partial zombies. What is your
 objection to saying that the looked up computation is also  
 conscious?
 How would that be inconsistent with observation, or lead to logical
 contradiction?
 I can only answer this in the context of Bostrom's Duplication or
 Unification question. Let's say that within our Conway's Life
 universe, one particular creature feels a lot of pain. After the run
 is over, if we load the Initial State back into the array and  
 iterate
 the rules again, is another experience of pain occurring? If you  
 think
 yes, you accept Duplication by Bostrom's definition. If you say
 no, you accept Unification.

 Duplication is more intuitive to me, and you might say that my  
 thought
 experiment is aimed at Duplicationists. In that context, I don't
 understand why playing back the lookup table as a movie should  
 create
 another experience of pain. None of the actual Conway's Life
 computations are being performed. We could just print them out on
 (very large) pieces of paper and flip them like a book. Is this
 supposed to generate an experience of pain? What if we just lay out
 all the pages in a row and move our eyes across them? What if we lay
 them out randomly and move our eyes across them? And so on. I argue
 that if running the original computation a second time would  
 create a
 second experience of pain, we can generate a partial zombie.

 Stathis, Brent, and Bruno have all suggested that there is no  
 partial
 zombie problem in my argument. Is that because you all accept
 Unification? Or am I missing something else?


 Unification, I would say. But we have to be careful, unification
 becomes duplication or n-plication if the computations diverge. This
 does not change the content of the experience of the person, which
 remains unique, but it can change the relative personal probabilities
 of such content. I wrote once: Y = || (multiplication of the future
 secures the past).  Third person bifurcation of histories/ 
 computations
 =  first person differentiation of consciousness. But to go in the
 detail here would confront us with the not simple task of defining
 more precisely what is a computation, or what we will count has two
 identical computations in the deployment. Eventually I bypass this
 hard question by asking directly what sound Lobian machines can
 think about that ... leading to AUDA (arithmetical uda). But
 unification, in Bostrom's sense, is at play, from the first person
 experience. Alice dreamed of the Mushroom only once. But if we wake  
 up
 by projecting the end of the movie on an operational optical boolean
 graph, simultaneously (or not) in Washington and in Moscow, then,
 although the experience of the dreams remains unique, the experience
 of remembering the dream will be multiplied by two. Indeed one in
 Moscow, once in Washington.

 Why do they count as two instances?  Because they supervene on  
 physical
 processes that are spacially distinct?  That would assume that  
 spacetime is
 fundamental.  Or is it because you assume that remembering the dream  
 isn't
 distinct process but must be mixed with other experiences related to  
 the location?

The last one. Once the person has opened the door of the  
reconstitution box, the experience of remembering the dream in  
Washington is a different experience from that of remembering the  
dream in Moscow: for reasons of climate, of the people to whom one  
relates the dream, etc. The two computations executed by Alice's  
brain diverge because they have different inputs.

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-11-23 Thread Günther Greindl

Bruno,

  From this we can extract a logic of the observable proposition and  
 compare with the empirical quantum logic, making comp testable, and  
 already tested on its most weird consequences, retrospectively.

you could refute COMP (MEC) if it contradicted empirical QM, but QM 
(and especially many-worlds) is also compatible with MAT (and NOT COMP).

These would be Tegmark's Level I and II universes: infinite physical 
(or mathematical physicalist, as defined by Kory) universes with matter 
permuting in all possible ways. If you then let consciousness supervene 
on matter (but not in a COMP way (see MGA), maybe because of local 
infinities or whatever), then with UNIFICATION you would also get a many 
worlds scenario (also in the sense that, for a 1st person, one would have 
to look at the MAT-histories running through every OM).

In your posts you do seem to have a preference for COMP (although you 
say you don't have a position ;-), but I think you definitely lean more 
to COMP than to MAT. Are there reasons for this, or is it only a 
personal predilection?

Cheers,
Günther

p.s.: I am looking forward to your further MGA posts (how far will they 
go, you have hinted up to MGA 5?) and the ensuing discussion, I have 
very much enjoyed reading all this stuff.






Re: MGA 2

2008-11-23 Thread Quentin Anciaux

Hi,

On Sunday, 23 November 2008 at 22:09 +0100, Günther Greindl wrote:
 Bruno,
 
   From this we can extract a logic of the observable proposition and  
  compare with the empirical quantum logic, making comp testable, and  
  already tested on its most weird consequences, retrospectively.
 
 you could refute COMP (MEC) if it would contradict empirical QM, but QM 
 (and especially many worlds) is also compatible with MAT (and NOT COMP).

It is.

 These would be Tegmark's Level I and II universes - infinite physical 
 (or mathematical physicalist as defined by Kory) universes with matter 
 permuting in all possible ways. 

That was my point about a finite block of universe... even if the
universe is infinite, every finite block of it contains a finite amount
of matter, hence a finite number (however big it is) of possible
permutations of the matter within it (even if you take the maximal
permutations of a fully filled block of matter). That's what I call the
divx argument :) What you see (or what any human could see), however
high the resolution of the picture, is still finite data. Example: imagine
that our eyes' resolution is 10⁵x10⁵ and we are able to see 10³
pictures per second... then a human lifetime of seeing is encodable in a
string of 10⁵x10⁵x3x10³x60x60x24x365x~100 bits (3 for 3 bytes per pixel,
for 16.5 million colours not even discernible by us; 100 for 100 years
of lifetime), not taking compression into account. It's (very⁵) big (and
I did take a very⁵ high resolution), but finite, and all human seeing
will be encodable with all permutations available on a string of this
length. Which is even bigger, but still finite.
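The arithmetic in that estimate can be checked directly, taking the factors at face value as they appear in the message (treating the "3" as the bytes-per-pixel factor). The point is only that the result, however enormous, is a finite string length.

```python
# Checking the deliberately generous estimate above. Factor names are
# just labels for the terms of the product quoted in the message.

pixels_per_frame = 10**5 * 10**5             # 10^5 x 10^5 resolution
bytes_per_pixel  = 3                         # ~16.7 million colours
frames_per_sec   = 10**3
seconds          = 60 * 60 * 24 * 365 * 100  # roughly 100 years

total = pixels_per_frame * bytes_per_pixel * frames_per_sec * seconds
print(total)           # about 9.5 * 10**22
assert total < 10**23  # astronomically large, but finite
```

So a whole human lifetime of seeing, at this over-generous resolution, fits in a string of fewer than 10²³ units, and the set of all permutations of such strings, while vastly bigger, is still finite.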

 If you then let consciousness supervene 
 on matter (but not in a COMP way (see MGA) - maybe because of local 
 infinities or whatever) and with UNIFICATION you would also get a many 
 worlds scenario (also in the sense that for a 1st person one would have 
 to look at the MAT-histories running through every OM)

If infinities are at play... what is a MAT-history? It can't even be
written.

 In your posts you do seem to have a preference for COMP (although you 
 say you don't have a position ;-) but I think you definitely lean more 
 to COMP than to MAT - are there reasons for this or is it only a 
 personal predilection?
 
 Cheers,
 Günther
 
 p.s.: I am looking forward to your further MGA posts (how far will they 
 go, you have hinted up to MGA 5?) and the ensuing discussion, I have 
 very much enjoyed reading all this stuff.
 
 

Regards,
Quentin
-- 
All those moments will be lost in time, like tears in rain.





Re: MGA 2

2008-11-23 Thread Bruno Marchal

On 23 Nov 2008, at 22:09, Günther Greindl wrote:


 Bruno,

 From this we can extract a logic of the observable proposition and
 compare with the empirical quantum logic, making comp testable, and
 already tested on its most weird consequences, retrospectively.

 you could refute COMP (MEC) if it would contradict empirical QM, but  
 QM
 (and especially many worlds) is also compatible with MAT (and NOT  
 COMP).

I think you are correct, but allowing the observer to be mechanically  
described as obeying the wave equation (whose solutions obey comp) is  
the main motivation for the Many-Worlds. Why not say directly that the  
mind collapses the wave, then?
I mean Everett is really SWE + COMP (or a weakening of COMP). I think  
that the wave collapse has been invented both for keeping the physical  
universe unique and for putting the observer beyond science.



 These would be Tegmark's Level I and II universes - infinite physical
 (or mathematical physicalist as defined by Kory) universes with matter
 permuting in all possible ways. If you then let consciousness  
 supervene
 on matter (but not in a COMP way (see MGA) - maybe because of local
 infinities or whatever) and with UNIFICATION you would also get a many
 worlds scenario (also in the sense that for a 1st person one would  
 have
 to look at the MAT-histories running through every OM)

 In your posts you do seem to have a preference for COMP (although you
 say you don't have a position ;-) but I think you definitely lean more
 to COMP than to MAT - are there reasons for this or is it only a
 personal predilection?

It is for the same reason that someone in the dark may search for his  
key only under the lamp: elsewhere there is no chance he finds it. With  
comp we do have a theory of mind.
With MAT we haven't (except bibles, myths, etc.). There is no standard  
notion of mat histories, no satisfying notion of wholeness (like the  
deployment with comp). To have MAT correct, you have to accept not  
only actual infinities, but concrete actual infinities that you cannot  
approximate with a Turing machine, nor with a Turing machine with an  
oracle. You are a bit back to literal angels and fairies ...
Of course MAT + not COMP is consistent. Many Catholic theological  
readings of Aristotelian-based matter theory propose similar ideas,  
making the soul material at some point. To my knowledge, Penrose is  
the only scientist who endorses this kind of view, allowing  
gravitation to play a role in the collapse. His motivation from  
Godel's theorem is not correct, but his main NON COMP or NOT MAT  
starting intuition is valid with respect to MGA-UDA.

As I said many times, COMP is my favorite working *hypothesis*. It is  
my bread (or should be ...). I like it because it makes a part of  
philosophy or theology a science. We can doubt it, discuss it, and  
even refute it, with some chance, or confirm it.
MAT has been a wonderful methodological assumption, but it has always  
been incoherent, or eliminativist, about the mind.


 p.s.: I am looking forward to your further MGA posts (how far will  
 they
 go, you have hinted up to MGA 5?) and the ensuing discussion, I have
 very much enjoyed reading all this stuff.


Thanks. And so you believe that MAT+MEC makes Alice conscious through  
the projection of her brain movie! You really want me to show this is  
absurd. It is not so easy, and few people find this necessary, but I  
will do it asap (MGA 3). MGA 4 is for those who make a special sort of  
objection which has not yet appeared, or those who will make a special  
objection to MGA 3, so ..., well, I will do it because it sheds more  
light on the meaning of the computational supervenience thesis. But  
MGA 4 is really ... Maudlin. And MGA 5 should be just a form of  
OCCAM's razor, but I don't think this will be necessary, except  
perhaps for some last devil's advocates and theoreticians of the  
Conspiracies :)

I will do this, hopefully this week. Thanks for the patience.

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: MGA 2

2008-11-23 Thread Russell Standish

On Sun, Nov 23, 2008 at 03:59:02PM +0100, Bruno Marchal wrote:
 
  I would side with Kory that a looked up recording of conscious  
  activity is not conscious.
 
 
 
 I agree with you. The point here is just that MEC+MAT implies it.
 

This I don't follow. I would have thought it implies the opposite.

-- 


A/Prof Russell Standish  Phone 0425 253119 (mobile)
Mathematics  
UNSW SYDNEY 2052 [EMAIL PROTECTED]
Australiahttp://www.hpcoders.com.au





Re: MGA 2

2008-11-22 Thread Telmo Menezes

Bruno,

Conserving MEC+MAT, one could argue that no isolation from the
environment is possible, even while dreaming. Even if you put Alice in
a sensory isolation tank, there is still the possibility that
interactions with the entire environment are an essential part of the
process that produces consciousness. For example, through quantum
entanglement. In the limit, there is the possibility that the entire
universe is necessary for her consciousness to arise, and the film
experiment becomes impossible because you would have to film the
entire sequence of states of the universe during her dream and play
them back. Obviously, the same universe where the dream takes place
cannot also contain the film (you get infinite recursion). I can't see
a way out of this in a single universe. What do you think?

Cheers,
Telmo Menezes.

On Fri, Nov 21, 2008 at 6:33 PM, Bruno Marchal [EMAIL PROTECTED] wrote:

 MGA 2


 The second step of the MGA consists in making a change to MGA 1 so
 that we don't have to introduce that unreasonable amount of cosmic
 luck, or of apparent randomness. It shows that the lucky aspect of the
 incoming information is not relevant. Jason thought of this sequel.


 Let us consider again Alice who, as you know, has an artificial
 brain made of logic gates.
 Now Alice is sleeping and having a dream---like Carroll's original
 Alice.

 Today we know that a REM dream is a conscious experience, or better,
 an experience of consciousness, thanks to the work of Hearne,
 LaBerge, Dement, etc.
 Malcolm's theory of dreams, in which dreams are not conscious, has
 been properly refuted by Hearne's and LaBerge's experiments. (All
 references can be found in the bibliography of my long thesis. Ask me
 if you have trouble finding them.)

 I am using a dream experience instead of a waking experience to have
 fewer technical problems and to be shorter on the relevant points. I
 let you do the change as an exercise if you want. If you have
 understood UDA up to the sixth step, such changes are easy to do.
 To convince Brent Meeker, you will have to put the environment, or
 actually its digital functional part, in the generalized brain,
 making the general setting much longer to describe. (If the part of
 the environment needed for consciousness to proceed is not Turing
 emulable, then you already negate MEC, of course.)

 The dream will facilitate the experiment. It is known that in a REM
 dream we are paralyzed (no outputs) and cut off from the environment
 (no inputs; well, not completely, because otherwise you would not
 hear the alarm clock, but let us not care about this, or do the
 exercise above) ... and we are hallucinating: the dream is a natural
 sort of video game. It shows that the brain is at least a natural
 virtual-reality generator. OK?

 Alice already has an artificial digital brain. This consists of a
 three-dimensional boolean graph, with nodes being NOR gates and
 edges being wires. For the MEC+MAT believer, the dream is produced
 by the physical activity of the circular digital information
 processing done by that boolean graph.

 With MEC, obviously all that matters is that the boolean graph
 processes the right computation, and we don't have to take into
 account the precise positions of the gates in space. They are not
 relevant for the computation (if things like that were relevant, we
 would already have said no to the doctor). So we can topologically
 deform Alice's boolean-graph brain and project it onto a plane so
 that no gates overlap. Some wires will cross, but (exercise) the
 wire-crossing function can itself be implemented with NOR gates. (A
 solution of that problem, posed by Dewdney, has been given in
 Scientific American, and is displayed in Conscience et Mécanisme
 with the reference.)

 So Alice's brain can be made into a plane boolean graph.
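The construction quoted above relies on two checkable facts: NOR is a universal gate, and a wire crossing can itself be implemented with gates. Both can be verified by truth table. The XOR-based crossover below is a standard textbook construction chosen for brevity; it is not necessarily the circuit from Dewdney's column.

```python
# NOR universality plus a gate-only wire crossing, checked exhaustively.
# Only the single primitive nor() touches boolean operators directly;
# everything else is composed from it.

def nor(a, b):
    return int(not (a or b))

def not_(a):    return nor(a, a)
def or_(a, b):  return not_(nor(a, b))
def and_(a, b): return nor(not_(a), not_(b))
def xor(a, b):  return or_(and_(a, not_(b)), and_(not_(a), b))

def crossover(a, b):
    """Swap two signals using gates only: (a, b) -> (b, a)."""
    m = xor(a, b)
    return xor(m, a), xor(m, b)   # xor(m, a) == b and xor(m, b) == a

for a in (0, 1):
    for b in (0, 1):
        assert xor(a, b) == (a + b) % 2
        assert crossover(a, b) == (b, a)
```

Since every gate above bottoms out in `nor`, the crossover really is "wires crossing implemented with NOR gates", which is what licenses flattening the brain graph onto a plane.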

 Also, a MEC+MAT believer should not insist on the electrical nature
 of the communication by wires, nor on the electrical nature of the
 processing of the information by the gates, so we can use optical
 information instead. Laser beams play the role of the wires, and some
 destructive interference can be used for the NOR. The details are not
 relevant, given that I am not presenting a realistic experiment.
 (Below, or later, if people harass me with too many engineering
 questions, I will propose a completely different representation of
 the same situation (with respect to the relevance of the reasoning),
 by using the even less realistic Ned Block Chinese People Computer:
 it can be used to make clear that no magic is used in what follows,
 at the price that its overall implementation is very unrealistic,
 given that the neurons are Chinese people willingly playing that
 role.)

 So, now, we put Alice's brain, which has become a two-dimensional
 optical boolean graph, in between two plates of transparent solid
 material, glass, and we add a sort of clever liquid crystal together
 with the graph, in between the glass plates. The liquid crystal is
 

Re: MGA 2

2008-11-22 Thread Quentin Anciaux

Hi,

if you conserve MEC+MAT... then you conserve MEC, which means
consciousness is a computational process (running on real hardware,
per MAT). But since it is a computational process, it cannot rely on
the entire universe: if it did, MEC would obviously be false, unless
the entire universe is also a computational process, which would then
render MAT useless. Don't you think?

Regards,
Quentin

On Saturday, 22 November 2008 at 11:54 +0000, Telmo Menezes wrote:
 Bruno,
 
 Conserving MEC+MAT, one could argue that no isolation from the
 environment is possible, even while dreaming. Even if you put Alice in
 a sensory isolation tank, there is still the possibility that
 interactions with the entire environment are an essential part of the
 process that produces consciousness. For example, through quantum
 entanglement. In the limit, there is the possibility that the entire
 universe is necessary for her consciousness to arise, and the film
 experiment becomes impossible because you would have to film the
 entire sequence of states of the universe during her dream and play
 them back. Obviously, the same universe where the dream takes place
 cannot also contain the film (you get infinite recursion). I can't see
 a way out of this in a single universe. What do you think?
 
 Cheers,
 Telmo Menezes.
 
 On Fri, Nov 21, 2008 at 6:33 PM, Bruno Marchal [EMAIL PROTECTED] wrote:
 
  MGA 2
 
 
  The second step of the MGA, consists in making a change to MGA 1 so
  that we don't have to introduce that unreasonable amount of cosmic
  luck, or of apparent randomness. It shows the lucky aspect of the
  coming information is not relevant. Jason thought on this sequel.
 
 
  Let us consider again Alice, which, as you know as an artificial
  brain, made of logic gates.
  Now Alice is sleeping, and doing a dream---like Carroll's original
  Alice.
 
  Today we know that a REM dream is a conscious experience, or better an
  experience of consciousness, thanks to the work of Hearne Laberge,
  Dement, etc.
  Malcolm's theory of dream, where dream are not conscious, has been
  properly refuted by Hearne and Laberge experiences. (All reference can
  be found in the bibliography of my long thesis. Ask me if you have
  problem to find them.
 
  I am using a dream experience instead of an experience of awakeness
  for having less technical problems and being shorter on the relevant
  points. I let you do the change as an exercise if you want. If you
  have understood UDA up to the sixth step, such change are easy to do.
  To convince Brent Meeker, you will have to put the environment,
  actually its digital functional part in the generalized brain,
  making the general setting much longer to describe. (If the part of
  the environment needed for consciousness to proceed is not Turing
  emulable, then you already negate MEC of course).
 
  The dream will facilitate the experience. It is known that in a REM
  dream we are paralyzed (no outputs), we are cut out from the
  environment: (no inputs, well not completely because you would not
  hear the awakening clock, but let us not care about this, or do the
  exercise above), ... and we are hallucinating: the dream is a natural
  sort of video game. It shows that the brain is at least a natural
  virtual reality generator. OK?
 
  Alice has already an artificial digital brain. This consists in a
  boolean tridimensional  graph with nodes being NOR gates, and vertex
  being wires. For the MEC+MAT believer, the dream is produced by the
  physical activity of the circular digital information processing
  done by that boolean graph.
 
  With MEC, obviously all what matter is that the boolean graph
  processes the right computation, and we don't have to take into
  account the precise  position of the gates in space. They are not
  relevant for the computation (if things like that were relevant we
  would already have said no to the doctor. So we can topologically
  deform Alice boolean graph brain and project it on a plane so that no
  gates overlap. Some wires will cross, but (exercise) the crossing of
  the wires function can itself be implemented with NOR gates. (A
  solution of that problem, posed by Dewdney, has been given in the
  Scientific American Journal (and is displayed in Conscience et
  Mécanisme with the reference).
 
  So Alice's brain can be made into a plane boolean graph.
 
  Also, a MEC+MAT believer should not insist on the electrical nature
  of the communication by wires, nor on the electrical nature of the
  processing of the information by the gates, so that we can use optical
  information instead. Laser beams play the role of the wires, and some
  destructive interference can be used for the NOR. The details are not
  relevant, given that I am not presenting a realist experiment (below,
  or later, if people harass me with too much engineering question,  I
  will propose a completely different representation of the same (with
  respect to the relevance 

Re: MGA 2

2008-11-22 Thread Telmo Menezes

Quentin,

Ok, but what if consciousness is a computational process that
potentially depends on the entire state of the universe? Let's suppose
for example that quantum particles are the fundamental building
blocks, i.e. the hardware, and that consciousness is a computational
process that emerges from their interactions. We still have MEC+MAT,
and due to quantum entanglement, any quantum particle in the universe
can potentially interfere in the consciousness computation. How can
you store Bruno's film in such a universe?

Telmo.

On Sat, Nov 22, 2008 at 12:38 PM, Quentin Anciaux [EMAIL PROTECTED] wrote:

 Hi,

 if you conserve MEC+MAT... then you conserve MEC, which means
 consciousness is a computational process (running on real hardware per
 MAT) but it is a computational process hence the process cannot rely on
 the entire universe because if it is then MEC should obviously be false
 unless the entire universe is also a computational process which then
 would render MAT useless. Don't you think ?

 Regards,
 Quentin

 On Saturday 22 November 2008 at 11:54 +, Telmo Menezes wrote:
 Bruno,

 Conserving MEC+MAT, one could argue that no isolation from the
 environment is possible, even while dreaming. Even if you put Alice in
 a sensory isolation tank, there is still the possibility that
 interactions with the entire environment are an essential part of the
 process that produces consciousness. For example, through quantum
 entanglement. In the limit, there is the possibility that the entire
 universe is necessary for her consciousness to arise, and the film
 experiment becomes impossible because you would have to film the
 entire sequence of states of the universe during her dream and play
 them back. Obviously, the same universe where the dream takes place
 cannot also contain the film (you get infinite recursion). I can't see
 a way out of this in a single universe. What do you think?

 Cheers,
 Telmo Menezes.

 On Fri, Nov 21, 2008 at 6:33 PM, Bruno Marchal [EMAIL PROTECTED] wrote:
 
  MGA 2
 
 
  The second step of the MGA, consists in making a change to MGA 1 so
  that we don't have to introduce that unreasonable amount of cosmic
  luck, or of apparent randomness. It shows the lucky aspect of the
  coming information is not relevant. Jason thought on this sequel.
 
 
  Let us consider again Alice, which, as you know as an artificial
  brain, made of logic gates.
  Now Alice is sleeping, and doing a dream---like Carroll's original
  Alice.
 
  Today we know that a REM dream is a conscious experience, or better an
  experience of consciousness, thanks to the work of Hearne Laberge,
  Dement, etc.
  Malcolm's theory of dream, where dream are not conscious, has been
  properly refuted by Hearne and Laberge experiences. (All reference can
  be found in the bibliography of my long thesis. Ask me if you have
  problem to find them.
 
  I am using a dream experience instead of an experience of awakeness
  for having less technical problems and being shorter on the relevant
  points. I let you do the change as an exercise if you want. If you
  have understood UDA up to the sixth step, such change are easy to do.
  To convince Brent Meeker, you will have to put the environment,
  actually its digital functional part in the generalized brain,
  making the general setting much longer to describe. (If the part of
  the environment needed for consciousness to proceed is not Turing
  emulable, then you already negate MEC of course).
 
  The dream will facilitate the experience. It is known that in a REM
  dream we are paralyzed (no outputs), we are cut out from the
  environment: (no inputs, well not completely because you would not
  hear the awakening clock, but let us not care about this, or do the
  exercise above), ... and we are hallucinating: the dream is a natural
  sort of video game. It shows that the brain is at least a natural
  virtual reality generator. OK?
 
  Alice has already an artificial digital brain. This consists in a
  boolean tridimensional  graph with nodes being NOR gates, and vertex
  being wires. For the MEC+MAT believer, the dream is produced by the
  physical activity of the circular digital information processing
  done by that boolean graph.
 
  With MEC, obviously all what matter is that the boolean graph
  processes the right computation, and we don't have to take into
  account the precise  position of the gates in space. They are not
  relevant for the computation (if things like that were relevant we
  would already have said no to the doctor. So we can topologically
  deform Alice boolean graph brain and project it on a plane so that no
  gates overlap. Some wires will cross, but (exercise) the crossing of
  the wires function can itself be implemented with NOR gates. (A
  solution of that problem, posed by Dewdney, has been given in the
  Scientific American Journal (and is displayed in Conscience et
  Mécanisme with the reference).
 
  So Alice's brain can be made into a 

Re: MGA 2

2008-11-22 Thread Quentin Anciaux

Well, what is the entire state of the universe? If it is an infinite
string, then it cannot be computational; it is not simulable.

Also, if my consciousness depends on the entire universe, then it also
depends on yours (and on everything else), whether I know you or not... I
find this highly improbable.

If MEC is true, I am a finite string; 'I' is a finite set of information,
simply because it is a computation... If it is not finite, it is simply
not a computation, and MEC is false.

But what you describe is, to me, rejecting MEC while keeping MAT, not both.

Regards,
Quentin


On Saturday 22 November 2008 at 15:26 +, Telmo Menezes wrote:
 Quentin,
 
 Ok, but what if consciousness is a computational process that
 potentially depends on the entire state of the universe? Let's suppose
 for example that quantum particles are the fundamental building
 blocks, i.e. the hardware, and that consciousness is a computational
 process that emerges from their interactions. We still have MEC+MAT,
 and due to quantum entanglement, any quantum particle in the universe
 can potentially interfere in the consciousness computation. How can
 you store Bruno's film in such a universe?
 
 Telmo.
 
 On Sat, Nov 22, 2008 at 12:38 PM, Quentin Anciaux [EMAIL PROTECTED] wrote:
 
  Hi,
 
  if you conserve MEC+MAT... then you conserve MEC, which means
  consciousness is a computational process (running on real hardware per
  MAT) but it is a computational process hence the process cannot rely on
  the entire universe because if it is then MEC should obviously be false
  unless the entire universe is also a computational process which then
  would render MAT useless. Don't you think ?
 
  Regards,
  Quentin
 
  On Saturday 22 November 2008 at 11:54 +, Telmo Menezes wrote:
  Bruno,
 
  Conserving MEC+MAT, one could argue that no isolation from the
  environment is possible, even while dreaming. Even if you put Alice in
  a sensory isolation tank, there is still the possibility that
  interactions with the entire environment are an essential part of the
  process that produces consciousness. For example, through quantum
  entanglement. In the limit, there is the possibility that the entire
  universe is necessary for her consciousness to arise, and the film
  experiment becomes impossible because you would have to film the
  entire sequence of states of the universe during her dream and play
  them back. Obviously, the same universe where the dream takes place
  cannot also contain the film (you get infinite recursion). I can't see
  a way out of this in a single universe. What do you think?
 
  Cheers,
  Telmo Menezes.
 
  On Fri, Nov 21, 2008 at 6:33 PM, Bruno Marchal [EMAIL PROTECTED] wrote:
  
   MGA 2
  
  
   The second step of the MGA, consists in making a change to MGA 1 so
   that we don't have to introduce that unreasonable amount of cosmic
   luck, or of apparent randomness. It shows the lucky aspect of the
   coming information is not relevant. Jason thought on this sequel.
  
  
   Let us consider again Alice, which, as you know as an artificial
   brain, made of logic gates.
   Now Alice is sleeping, and doing a dream---like Carroll's original
   Alice.
  
   Today we know that a REM dream is a conscious experience, or better an
   experience of consciousness, thanks to the work of Hearne Laberge,
   Dement, etc.
   Malcolm's theory of dream, where dream are not conscious, has been
   properly refuted by Hearne and Laberge experiences. (All reference can
   be found in the bibliography of my long thesis. Ask me if you have
   problem to find them.
  
   I am using a dream experience instead of an experience of awakeness
   for having less technical problems and being shorter on the relevant
   points. I let you do the change as an exercise if you want. If you
   have understood UDA up to the sixth step, such change are easy to do.
   To convince Brent Meeker, you will have to put the environment,
   actually its digital functional part in the generalized brain,
   making the general setting much longer to describe. (If the part of
   the environment needed for consciousness to proceed is not Turing
   emulable, then you already negate MEC of course).
  
   The dream will facilitate the experience. It is known that in a REM
   dream we are paralyzed (no outputs), we are cut out from the
   environment: (no inputs, well not completely because you would not
   hear the awakening clock, but let us not care about this, or do the
   exercise above), ... and we are hallucinating: the dream is a natural
   sort of video game. It shows that the brain is at least a natural
   virtual reality generator. OK?
  
   Alice has already an artificial digital brain. This consists in a
   boolean tridimensional  graph with nodes being NOR gates, and vertex
   being wires. For the MEC+MAT believer, the dream is produced by the
   physical activity of the circular digital information processing
   done by that boolean graph.
  
   With MEC, obviously 

Re: MGA 2

2008-11-22 Thread Kory Heath


On Nov 21, 2008, at 10:33 AM, Bruno Marchal wrote:
 So let us suppose that poor Alice got, again, a not very good optical
 plane graph, so that some (1 to many to all, again) NOR gates break
 down, in that precise computation corresponding to her dream
 experience. And let us project, in real time, with the correct
 scaling, the movie we have made, on the graph, playing its role of a
 repeatable lucky rays generator.

Is the movie causally interacting with the gates? In other words, is  
the light from the movie projector stimulating gates when the lasers  
fail to?

 In the ALL gates broken case, we have really, *only a movie* of
 Alice's brain activity. Does consciousness arise from the projection
 of that movie?

Once again, is the movie supposed to be triggering any working  
machinery in the graph? Or could you just as easily project it  
somewhere else at that point?

-- Kory


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: MGA 2

2008-11-22 Thread Telmo Menezes

 Well what is the entire state of the universe ? if it is an infinite
 string then it cannot be computational, it is not simulable.

I tend to think our universe is finite. The multiverse, that's another
story... But even in an infinite universe, we could have a finite
consciousness computation that could potentially depend on any part of
the state of the universe, without it being possible to know which part a
priori. Someone could always argue that playing the film failed to
recreate consciousness because you left a certain part of the universe
out. I don't actually believe any of this to be the case, I'm just
playing devil's advocate...

 Also if all my consciousness depends on all the universe, then it
 depends also on yours (and everything else) that I know you or not... I
 believe this a lot improbable.

I wouldn't know how to measure the probability of such a thing being
true, but I think at least you agree that it is possible, and that's
enough to cause us problems.

T.




Re: MGA 2

2008-11-22 Thread Kory Heath


On Nov 22, 2008, at 7:26 AM, Telmo Menezes wrote:
 Ok, but what if consciousness is a computational process that
 potentially depends on the entire state of the universe? Let's suppose
 for example that quantum particles are the fundamental building
 blocks, i.e. the hardware, and that consciousness is a computational
 process that emerges from their interactions. We still have MEC+MAT,
 and due to quantum entanglement, any quantum particle in the universe
 can potentially interfere in the consciousness computation. How can
 you store Bruno's film in such a universe?

This is why I prefer to cast these thought experiments in terms of  
finite cellular automata. All of the issues you mention go away. (One  
can argue that finite cellular automata can't contain conscious  
beings, but that's just a rejection of MEC, which we're supposed to be  
keeping.)

I'm not entirely sure I understand the details of Bruno's Movie-Graph  
(yet), so I don't know if it's equivalent to the following thought  
experiment:

Let's say that we run a computer program that allocates a very large  
two-dimensional array, fills it with a special Initial State (which is  
hard-coded into the program), and then executes the rules of Conway's  
Life on the array for a certain number of iterations. Let's say that  
the resulting universe contains creatures that any garden-variety  
mechanist would agree are fully conscious. Let's say that we run the  
universe for at least enough iterations to allow the creatures to move  
around, say a few things, experience a few things, etc. Finally, let's  
say that we store the results of all of our calculations in a (much  
larger) area of memory, so that we can look up what each bit did at  
each tick of the clock.

Now let's say that we play back the stored results of our  
calculations, like a movie. At each tick of the clock t, we just copy  
the bits from time t of our stored memory into our two-dimensional  
array. There are no Conway's Life calculations going on here. We're  
just copying bits, one time-slice at a time, from our stored memory  
into our original grid. It is difficult for a mechanist to argue that  
any consciousness is happening here. It's functionally equivalent to  
just printing out each time-slice onto a (huge) piece of paper, and  
flipping through those pages like a picture book and watching the  
animated playback. It's hard for a mechanist to argue that this  
style of flipping pages in a picture book can create consciousness.

Now let's imagine that we compute the Conway's Life universe again -  
we load the Initial State into the grid, and then iteratively apply  
the Conway's Life rule to the grid. However, for some percentage of  
the cells in the grid, instead of looking at the neighboring cells and  
updating according to the Conway's Life rule, we instead just pull the  
data from the lookup table that we created in the previous run.
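The three update modes described above can be sketched in a few lines of Python. The grid size, initial pattern, and which cells go in the computed mask are illustrative choices of mine, not anything from the post:

```python
# Sketch of the three update modes on a toy Conway's Life grid.

def life_step(grid):
    """One tick of Conway's Life on a toroidal (wrap-around) grid."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            live = sum(grid[(i + di) % n][(j + dj) % n]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)
                       if (di, dj) != (0, 0))
            new[i][j] = 1 if live == 3 or (grid[i][j] and live == 2) else 0
    return new

def record_run(initial, ticks):
    """Mode 1: compute the rule and store every time-slice (the lookup table)."""
    history = [initial]
    for _ in range(ticks):
        history.append(life_step(history[-1]))
    return history

def playback(history, t):
    """Mode 2: no Life computation at all, just copy slice t from storage."""
    return [row[:] for row in history[t]]

def hybrid_step(grid, history, t, computed_mask):
    """Mode 3: per cell, either apply the Life rule or pull from the recording."""
    ruled = life_step(grid)
    looked_up = history[t + 1]
    n = len(grid)
    return [[ruled[i][j] if computed_mask[i][j] else looked_up[i][j]
             for j in range(n)] for i in range(n)]
```

With an all-ones mask, `hybrid_step` reduces to the pure Life rule; with an all-zeros mask it reduces to pure playback; any intermediate mask produces the half-computed, half-looked-up creature the question below is about.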

If we apply the Conway's Life rule to all the cells, it seems like the  
creatures in the grid ought to be conscious. If we don't apply the  
Life rule to any of the cells, but just pull the data from our  
previously-created lookup table, it seems like the creatures in the  
grid are not conscious. But if we apply the Life rule to half of the  
cells and pull the other half from the lookup table, there will  
(probably) be some creature in the grid who has half of the cells in  
its brain being computed by the Life rule, and half being pulled from  
the lookup table. What's the status of this creature's consciousness?

-- Kory






Re: MGA 2

2008-11-22 Thread Brent Meeker

Telmo Menezes wrote:
 Quentin,
 
 Ok, but what if consciousness is a computational process that
 potentially depends on the entire state of the universe? Let's suppose
 for example that quantum particles are the fundamental building
 blocks, i.e. the hardware, and that consciousness is a computational
 process that emerges from their interactions. We still have MEC+MAT,
 and due to quantum entanglement, any quantum particle in the universe
 can potentially interfere in the consciousness computation. How can
 you store Bruno's film in such a universe?
 
 Telmo.


But brain functions are essentially classical (see Tegmark's paper).  Thought 
would be impossible if quantum entanglement were more than a perturbation.  From 
a classical viewpoint, your brain can only be causally affected by a finite 
portion of the universe.

Brent




Re: MGA 2

2008-11-22 Thread Brent Meeker

Kory Heath wrote:
 
 On Nov 22, 2008, at 7:26 AM, Telmo Menezes wrote:
 Ok, but what if consciousness is a computational process that
 potentially depends on the entire state of the universe? Let's suppose
 for example that quantum particles are the fundamental building
 blocks, i.e. the hardware, and that consciousness is a computational
 process that emerges from their interactions. We still have MEC+MAT,
 and due to quantum entanglement, any quantum particle in the universe
 can potentially interfere in the consciousness computation. How can
 you store Bruno's film in such a universe?
 
 This is why I prefer to cast these thought experiments in terms of  
 finite cellular automata. All of the issues you mention go away. (One  
 can argue that finite cellular automata can't contain conscious  
 beings, but that's just a rejection of MEC, which we're supposed to be  
 keeping.)
 
 I'm not entirely sure I understand the details of Bruno's Movie-Graph  
 (yet), so I don't know if it's equivalent to the following thought  
 experiment:
 
 Let's say that we run a computer program that allocates a very large  
 two-dimensional array, fills it with a special Initial State (which is  
 hard-coded into the program), and then executes the rules of Conway's  
 Life on the array for a certain number of iterations. Let's say that  
 the resulting universe contains creatures that any garden-variety  
 mechanist would agree are fully conscious. Let's say that we run the  
 universe for at least enough iterations to allow the creatures to move  
 around, say a few things, experience a few things, etc. Finally, let's  
 say that we store the results of all of our calculations in a (much  
 larger) area of memory, so that we can look up what each bit did at  
 each tick of the clock.
 
 Now let's say that we play back the stored results of our  
 calculations, like a movie. At each tick of the clock t, we just copy  
 the bits from time t of our our stored memory into our two-dimensional  
 array. There are no Conway's Life calculations going on here. We're  
 just copying bits, one time-slice at a time, from our stored memory  
 into our original grid. It is difficult for a mechanist to argue that  
 any consciousness is happening here. It's functionally equivalent to  
 just printing out each time-slice onto a (huge) piece of paper, and  
 flipping through those pages like a picture book and watching the  
 animated playback. It's hard for a mechanist to argue that this  
 style of flipping pages in a picture book can create consciousness.
 
 Now let's imagine that we compute the Conway's Life universe again -  
 we load the Initial State into the grid, and then iteratively apply  
 the Conway's Life rule to the grid. However, for some percentage of  
 the cells in the grid, instead of looking at the neighboring cells and  
 updating according to the Conway's Life rule, we instead just pull the  
 data from the lookup table that we created in the previous run.
 
 If we apply the Conway's Life rule to all the cells, it seems like the  
 creatures in the grid ought to be conscious. If we don't apply the  
 Life rule to any of the cells, but just pull the data from our  
 previously-created lookup table, it seems like the creatures in the  
 grid are not conscious. But if we apply the Life rule to half of the  
 cells and pull the other half from the lookup table, there will  
 (probably) be some creature in the grid who has half of the cells in  
 its brain being computed by the Life rule, and half being pulled from  
 the lookup table. What's the status of this creature's consciousness?

I don't think it's a relevant distinction.  Even when the game-of-life is 
running on the computer, the adjacent cells are not physically causing the 
changes from on to off and vice versa - that function is carried out by the 
program implemented in the computer memory and CPU.  So why should it make a 
difference whether those state changes are decided by gates in the CPU or a 
huge look-up table?

Brent




Re: MGA 2

2008-11-22 Thread Brent Meeker

Kory Heath wrote:
 
 On Nov 22, 2008, at 7:26 AM, Telmo Menezes wrote:
 Ok, but what if consciousness is a computational process that
 potentially depends on the entire state of the universe? Let's suppose
 for example that quantum particles are the fundamental building
 blocks, i.e. the hardware, and that consciousness is a computational
 process that emerges from their interactions. We still have MEC+MAT,
 and due to quantum entanglement, any quantum particle in the universe
 can potentially interfere in the consciousness computation. How can
 you store Bruno's film in such a universe?
 
 This is why I prefer to cast these thought experiments in terms of  
 finite cellular automata. All of the issues you mention go away. (One  
 can argue that finite cellular automata can't contain conscious  
 beings, but that's just a rejection of MEC, which we're supposed to be  
 keeping.)
 
 I'm not entirely sure I understand the details of Bruno's Movie-Graph  
 (yet), so I don't know if it's equivalent to the following thought  
 experiment:
 
 Let's say that we run a computer program that allocates a very large  
 two-dimensional array, fills it with a special Initial State (which is  
 hard-coded into the program), and then executes the rules of Conway's  
 Life on the array for a certain number of iterations. Let's say that  
 the resulting universe contains creatures that any garden-variety  
 mechanist would agree are fully conscious. 

But how would they agree on this?  If we knew the answer to that we wouldn't 
need to be considering these (nomologically) impossible thought experiments.  I 
don't think we would judge purely by their behavior.  That might suffice if we 
could observe for a very long time and if we could manipulate the environment, 
but more practically I think we would look at how their sensory organs and 
memory interacted to influence behavior.

Brent




Re: MGA 2

2008-11-22 Thread Stathis Papaioannou

2008/11/23 Kory Heath [EMAIL PROTECTED]:

 If we apply the Conway's Life rule to all the cells, it seems like the
 creatures in the grid ought to be conscious. If we don't apply the
 Life rule to any of the cells, but just pull the data from our
 previously-created lookup table, it seems like the creatures in the
 grid are not conscious. But if we apply the Life rule to half of the
 cells and pull the other half from the lookup table, there will
 (probably) be some creature in the grid who has half of the cells in
 its brain being computed by the Life rule, and half being pulled from
 the lookup table. What's the status of this creature's consciousness?

Which leads again to the problem of partial zombies. What is your
objection to saying that the looked up computation is also conscious?
How would that be inconsistent with observation, or lead to logical
contradiction?



-- 
Stathis Papaioannou




Re: MGA 2

2008-11-22 Thread Jason Resch
On Sat, Nov 22, 2008 at 8:52 PM, Stathis Papaioannou [EMAIL PROTECTED]wrote:


 2008/11/23 Kory Heath [EMAIL PROTECTED]:

  If we apply the Conway's Life rule to all the cells, it seems like the
  creatures in the grid ought to be conscious. If we don't apply the
  Life rule to any of the cells, but just pull the data from our
  previously-created lookup table, it seems like the creatures in the
  grid are not conscious. But if we apply the Life rule to half of the
  cells and pull the other half from the lookup table, there will
  (probably) be some creature in the grid who has half of the cells in
  its brain being computed by the Life rule, and half being pulled from
  the lookup table. What's the status of this creature's consciousness?

 Which leads again to the problem of partial zombies. What is your
 objection to saying that the looked up computation is also conscious?
 How would that be inconsistent with observation, or lead to logical
 contradiction?


I would side with Kory that a looked-up recording of conscious activity is
not conscious.  My argument is that static information has no implicit
meaning, because there are an infinite number of ways a bit string can be
interpreted.  However, in a running program the values of the bits do have
implicit meaning, according to the rules of the state machine.
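The "no implicit meaning" point can be made concrete: the same 32 bits read as an integer, a float, or text yield entirely different "contents". The byte values below are my own example, not anything from the thread:

```python
import struct

# The same 32 bits under four different interpretations, an illustration
# of a bit string having no intrinsic meaning.
raw = bytes([0x42, 0x28, 0x00, 0x00])

as_be_int = struct.unpack('>I', raw)[0]    # big-endian unsigned int: 1109917696
as_le_int = struct.unpack('<I', raw)[0]    # little-endian unsigned int: 10306
as_be_float = struct.unpack('>f', raw)[0]  # big-endian IEEE-754 float: 42.0
as_text = raw.decode('latin-1')            # 'B', '(', and two NUL characters
```

Nothing in the four bytes picks out one of these readings; the interpretation lives entirely in the reader (here, the format string), which is the role the rules of the state machine play in a running program.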

What makes this weird is that in one respect our universe might be
considered a 4-d recording, containing a record of computations performed by
neurons and brains across one of its dimensions.  Perhaps this is further
evidence in support of Bruno's theory: mind cannot exist in a physical
universe because it is just a recording of a computation, and only the
actual computation itself can create consciousness.

Jason




Re: MGA 2

2008-11-22 Thread Brent Meeker

Jason Resch wrote:
 
 
 On Sat, Nov 22, 2008 at 8:52 PM, Stathis Papaioannou [EMAIL PROTECTED] wrote:
 
 
 2008/11/23 Kory Heath [EMAIL PROTECTED]:
 
   If we apply the Conway's Life rule to all the cells, it seems like the
   creatures in the grid ought to be conscious. If we don't apply the
   Life rule to any of the cells, but just pull the data from our
   previously-created lookup table, it seems like the creatures in the
   grid are not conscious. But if we apply the Life rule to half of the
   cells and pull the other half from the lookup table, there will
   (probably) be some creature in the grid who has half of the cells in
   its brain being computed by the Life rule, and half being pulled from
   the lookup table. What's the status of this creature's consciousness?
 
 Which leads again to the problem of partial zombies. What is your
 objection to saying that the looked up computation is also conscious?
 How would that be inconsistent with observation, or lead to logical
 contradiction?
 
 
 I would side with Kory that a looked up recording of conscious activity 
 is not conscious.  My argument being that static information has no 
 implicit meaning because there are an infinite number of ways a bit 
 string can be interpreted.  However in a running program the values of 
 the bits do have implicit meaning according to the rules of the state 
 machine.

But this static information is produced by a dynamic computation - so it can be 
regarded as deriving its meaning from that computation.  I don't see why that 
implicit meaning shouldn't count.

Brent

 
 What makes this weird is that in one respect our universe might be 
 considered a 4-d recording, containing a record of computations 
 performed by neurons and brains across one of its dimensions.  Perhaps 
 this is further evidence in support of Bruno's theory: mind cannot exist 
 in a physical universe because it is just a recording of a computation, 
 and only the actual computation itself can create consciousness.
 
 Jason




Re: MGA 2

2008-11-21 Thread Brent Meeker

Bruno Marchal wrote:
 MGA 2
 
 
 The second step of the MGA consists in making a change to MGA 1 so
 that we don't have to introduce that unreasonable amount of cosmic
 luck, or of apparent randomness. It shows that the lucky aspect of the
 incoming information is not relevant. Jason already thought of this sequel.
 
 
 Let us consider Alice again, who, as you know, has an artificial
 brain made of logic gates.
 Now Alice is sleeping and having a dream, like Carroll's original
 Alice.
 
 Today we know that a REM dream is a conscious experience, or better, an
 experience of consciousness, thanks to the work of Hearne, Laberge,
 Dement, etc.
 Malcolm's theory of dreams, in which dreams are not conscious, has been
 properly refuted by the experiments of Hearne and Laberge. (All
 references can be found in the bibliography of my long thesis. Ask me
 if you have trouble finding them.)
 
 I am using a dream experience instead of a waking experience so as to
 have fewer technical problems and to be shorter on the relevant
 points. I let you do the change as an exercise if you want. If you
 have understood the UDA up to the sixth step, such changes are easy
 to do. To convince Brent Meeker, you will have to put the
 environment, or actually its digital functional part, in the
 generalized brain, making the general setting much longer to
 describe. (If the part of the environment needed for consciousness to
 proceed is not Turing emulable, then you already negate MEC, of
 course.)
 
 The dream will facilitate the experiment. It is known that in a REM
 dream we are paralyzed (no outputs) and cut off from the environment
 (no inputs; well, not completely, or you would not hear the alarm
 clock, but let us not care about this, or do the exercise above) ...
 and we are hallucinating: the dream is a natural sort of video game.
 It shows that the brain is at least a natural virtual-reality
 generator. OK?
 
 Alice already has an artificial digital brain. It consists of a
 three-dimensional boolean graph whose nodes are NOR gates and whose
 edges are wires. For the MEC+MAT believer, the dream is produced by
 the physical activity of the circular digital information
 processing done by that boolean graph.
 
 With MEC, obviously all that matters is that the boolean graph
 processes the right computation, and we don't have to take into
 account the precise positions of the gates in space. They are not
 relevant for the computation (if things like that were relevant, we
 would already have said no to the doctor). So we can topologically
 deform Alice's boolean-graph brain and project it onto a plane so
 that no gates overlap. Some wires will cross, but (exercise) the
 wire-crossing function can itself be implemented with NOR gates. (A
 solution of that problem, posed by Dewdney, has been given in
 Scientific American, and is displayed in Conscience et Mécanisme
 with the reference.)
 
 So Alice's brain can be made into a planar boolean graph.
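The wire-crossing exercise can be checked mechanically. The sketch below is my own illustration, not Dewdney's published construction (which may differ in detail); it uses the classic trick of building a planar crossover from three XORs, each XOR itself made of five NOR gates:

```python
# Sketch: a planar wire crossing built from NOR gates alone.

def NOR(a, b):
    """The single primitive gate of Alice's brain."""
    return int(not (a or b))

def XOR(a, b):
    """XOR built from five NOR gates."""
    n1 = NOR(a, b)
    n2 = NOR(a, n1)
    n3 = NOR(b, n1)
    n4 = NOR(n2, n3)    # n4 is XNOR(a, b)
    return NOR(n4, n4)  # invert it

def crossover(a, b):
    """Swap two signals in the plane using three XORs (15 NORs total):
    XOR(a^b, a) recovers b, and XOR(a^b, b) recovers a."""
    c = XOR(a, b)
    return XOR(c, a), XOR(c, b)   # = (b, a)

# Exhaustive check over all four input combinations.
for a in (0, 1):
    for b in (0, 1):
        assert crossover(a, b) == (b, a)
```

Since the crossover is itself just a small NOR subgraph, replacing every wire crossing with it leaves the projected brain a purely planar NOR network, which is all the argument needs.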
 
 Also, a MEC+MAT believer should not insist on the electrical nature
 of the communication by wires, nor on the electrical nature of the
 processing of the information by the gates, so we can use optical
 information instead. Laser beams play the role of the wires, and some
 destructive interference can be used for the NOR. The details are not
 relevant, given that I am not presenting a realistic experiment.
 (Below, or later, if people harass me with too many engineering
 questions, I will propose a completely different representation of
 the same situation (the same with respect to the relevance of the
 reasoning), using the even less realistic Ned Block Chinese People
 Computer: it can be used to make clear that no magic is involved in
 what follows, at the price that its overall implementation is very
 unrealistic, given that the neurons are Chinese people willingly
 playing that role.)
 
 So now we put Alice's brain, which has become a two-dimensional
 optical boolean graph, in between two planes of transparent solid
 material (glass), and we add a sort of clever liquid crystal together
 with the graph, in between the glass plates. The liquid crystal is
 supposed to have the following peculiar property (which is certainly
 hard to implement concretely, but which is possible in principle):
 each time a beam of light triggers a line between two nodes, it
 triggers a laser beam in the right direction between the two optical
 gates, with the correct frequency/color (to keep the NOR functioning
 correctly).
 
 This works well, and we can let that brain work from time t1 to t2,
 during which Alice dreams, specifically (to fix the matter) that she
 is in front of a mushroom, talking with a caterpillar who sits on the
 mushroom (all right?). We have beforehand saved the instantaneous
 state corresponding to the beginning of that dream, so as to be able
 to repeat that precise graph activity.
 
 Each