Re: What does the MGA accomplish?

2015-05-26 Thread Bruno Marchal


On 26 May 2015, at 02:54, Bruce Kellett wrote:


Bruno Marchal wrote:

On 25 May 2015, at 08:58, Bruce Kellett wrote:
Part of my problem is that the UD does not execute any actual  
program sequentially: after each step in a program it executes the  
next step of the next program and so on, until it reaches the  
first step of some program, at which point it loops back to the  
start.
The UD does execute sequentially *each* specific program, but the  
UD adds delays, due to its dovetailing duties, which we already  
know (step 2) do not change the first person experience  
of the entity supported by that execution.
So if the conscious moment is evinced by a logical sequence of  
steps by the dovetailer, it does not correspond to any particular  
program, but a rather arbitrary assortment of steps from many  
programs.

?
Each execution of the programs is well individuated in the UD*. You  
can describe them by sequences

phi_i^k(j), k = 0, 1, 2, ... (with i and j fixed).
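As an illustration of the point at issue (a toy sketch; the "programs" below are invented for the example, they are not the real UD), a dovetailer can interleave ever more programs, adding growing delays, while the trace of each program i, read on its own, is still exactly its sequential run phi_i^0, phi_i^1, phi_i^2, ...:

```python
def make_program(i):
    """Toy 'program' i: its k-th step just reports the pair (i, k)."""
    k = 0
    while True:
        yield (i, k)
        k += 1

def dovetail(rounds):
    """Round n starts program n, then runs one step of programs 0..n."""
    programs = {}
    trace = []
    for n in range(rounds):
        programs[n] = make_program(n)
        for i in range(n + 1):
            trace.append(next(programs[i]))
    return trace

trace = dovetail(4)
print(trace)
# Program 0's steps, extracted from the interleaved global trace,
# occur in their original sequential order, only with delays between them:
print([s for s in trace if s[0] == 0])   # [(0, 0), (0, 1), (0, 2), (0, 3)]
```

This is the sense in which the UD executes each program sequentially even though consecutive steps of one program may be separated by arbitrarily many steps of others.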


But each step of the dovetailer is just a single application of the  
axioms of the Turing machine in question: one of the

  Kxy  gives x,
  Sxyz gives xz(yz),
for example. These single steps are all that there is in the  
dovetailer. But such steps lack a context -- they make no sense on  
their own. You could simply claim that the two basic steps are all  
that is needed -- consciousness self-assembles by taking as many of  
these in whatever order is needed.
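The two rules just quoted can be run mechanically; here is a minimal sketch (the term encoding is invented for the illustration, not anything from the thread) that reduces S K K a to a, showing that S K K acts as the identity:

```python
# Terms: ('K',) and ('S',) are the combinators; (f, a) is the
# application of f to a. app() builds left-associated applications.
K = ('K',)
S = ('S',)

def app(*ts):
    t = ts[0]
    for u in ts[1:]:
        t = (t, u)
    return t

def step(t):
    """One head reduction: Kxy -> x, or Sxyz -> xz(yz); else unchanged."""
    if isinstance(t, tuple) and len(t) == 2:
        f, y = t
        # ((K, x), y)  ->  x
        if isinstance(f, tuple) and len(f) == 2 and f[0] == K:
            return f[1]
        # (((S, x), y'), z)  ->  ((x, z), (y', z))
        if (isinstance(f, tuple) and len(f) == 2
                and isinstance(f[0], tuple) and len(f[0]) == 2
                and f[0][0] == S):
            x, y2, z = f[0][1], f[1], y
            return ((x, z), (y2, z))
    return t

a = ('a',)
t = app(S, K, K, a)          # S K K a
while True:
    t2 = step(t)
    if t2 == t:
        break
    t = t2
print(t)                     # -> ('a',): S K K a reduces to a
```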


?

The context will be given by the combinators. To dovetail universally  
with the combinators, you need to generate them all: K, S, KK, KS, SK,  
SS, KKK, K(KK), KKS, K(KS), ...
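That enumeration can be generated systematically, e.g. by leaf count; a sketch (the ordering within each size is one convention, and the list above uses a slightly different one, but the same terms all appear):

```python
def terms(n):
    """All combinator terms built from K and S with exactly n leaves."""
    if n == 1:
        return ['K', 'S']
    out = []
    for i in range(1, n):               # split the n leaves: left i, right n-i
        for left in terms(i):
            for right in terms(n - i):
                # parenthesise compound right arguments; application
                # associates to the left, so 'KK' + 'K' is just 'KKK'
                r = right if len(right) == 1 else '(' + right + ')'
                out.append(left + r)
    return out

print(terms(1))        # ['K', 'S']
print(terms(2))        # ['KK', 'KS', 'SK', 'SS']
print(len(terms(3)))   # 16 terms, among them 'KKK', 'K(KK)', 'KKS', 'K(KS)'
```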


If comp is true, the combinators running your current brain states  
will be executed, and probably with some rich context in most of them  
(if not, and we can prove it, comp is refuted).




If the next step of program phi_i is some 10^50 or so dovetailer  
steps away, the only thing that could possibly link these is the  
program phi_i itself -- the actual execution of the steps is  
entirely secondary. In which case, one would say that consciousness  
resides in the program phi_i itself - execution on the dovetailer is  
not required. I do not think you would want to go down this path, so  
you need something to give each step a context, something to link  
the separate steps that are required for consciousness.


Consciousness is associated with the execution, not with the programs.  
Associating it with the programs would not make sense, even if the  
existence of the program entails the existence of its execution in  
arithmetic. The relative probabilities depend on the execution and the  
mathematical structure which exists on the set of continuations  
(structured by the first, third, ... person points of view).





The teleportation arguments of Steps 1-7 are insufficient for this,  
since in that argument you are teleporting complete conscious  
entities, not just single steps of the underlying program.


?






Of course, given that all programs are executed, this sequence of  
steps does correspond to some program, somewhere, but not  
necessarily any of the ones partially executed for generating that  
conscious moment.

Yes. So what?


I was trying to give you a way out of the problems that I have  
raised above. If you don't see that the sequential steps of the  
actual dovetailer program give the required connectivity, then what  
does?


?
I do see that the sequential steps of the *many* computations give the  
required statistical connectivity.



You did, some time ago, claim that the dovetailer steps gave an  
effective time parameter for the  system.


An infinity of them.



But even that requires a contextual link between the steps --


The UD brought them all.


something that would be given by the underlying stepping -- which is  
not the stepping of each individual program phi_i.


It depends on the level at which you describe the happenings. The FPI  
makes your subjective future statistically defined on all of UD*, by the  
first person's non-awareness of the underlying stepping of the UD itself.


Bruno





Bruce

--
You received this message because you are subscribed to the Google  
Groups Everything List group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.

To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


http://iridia.ulb.ac.be/~marchal/




Re: What does the MGA accomplish?

2015-05-26 Thread Bruno Marchal


On 26 May 2015, at 05:02, Bruce Kellett wrote:


meekerdb wrote:

On 5/25/2015 5:54 PM, Bruce Kellett wrote:

Bruno Marchal wrote:

On 25 May 2015, at 08:58, Bruce Kellett wrote:

Part of my problem is that the UD does not execute any actual  
program sequentially: after each step in a program it executes  
the next step of the next program and so on, until it reaches  
the first step of some program, at which point it loops back to  
the start.


The UD does execute sequentially *each* specific program, but  
the UD adds delays, due to its dovetailing duties, which we  
already know (step 2) do not change the first person  
experience of the entity supported by that execution.


So if the conscious moment is evinced by a logical sequence of  
steps by the dovetailer, it does not correspond to any  
particular program, but a rather arbitrary assortment of steps  
from many programs.


?
Each execution of the programs is well individuated in the UD*.  
You can describe them by sequences

phi_i^k(j), k = 0, 1, 2, ... (with i and j fixed).


But each step of the dovetailer is just a single application of  
the axioms of the Turing machine in question: one of the

  Kxy  gives x,
  Sxyz gives xz(yz),
for example. These single steps are all that there is in the  
dovetailer. But such steps lack a context -- they make no sense on  
their own. You could simply claim that the two basic steps are all  
that is needed -- consciousness self-assembles by taking as many  
of these in whatever order is needed.


If the next step of program phi_i is some 10^50 or so dovetailer  
steps away, the only thing that could possibly link these is the  
program phi_i itself -- the actual execution of the steps is  
entirely secondary. In which case, one would say that  
consciousness resides in the program phi_i itself - execution on  
the dovetailer is not required. I do not think you would want to  
go down this path, so you need something to give each step a  
context, something to link the separate steps that are required  
for consciousness.


The teleportation arguments of Steps 1-7 are insufficient for  
this, since in that argument you are teleporting complete  
conscious entities, not just single steps of the underlying program.


Of course, given that all programs are executed, this sequence  
of steps does correspond to some program, somewhere, but not  
necessarily any of the ones partially executed for generating  
that conscious moment.


Yes. So what?


I was trying to give you a way out of the problems that I have  
raised above. If you don't see that the sequential steps of the  
actual dovetailer program give the required connectivity, then  
what does? You did, some time ago, claim that the dovetailer steps  
gave an effective time parameter for the  system. But even that  
requires a contextual link between the steps -- something that  
would be given by the underlying stepping -- which is not the  
stepping of each individual program phi_i.
I think what it boils down to is that steps in phi_{i}, where {i}  
is a set indexing programs supporting a particular consciousness,  
must be linked by representing consciousness of the same thing, the  
same thought.  But I think that requires some outside reference  
whereby they can be about the same thing.  So it is not enough to  
just link the phi_{i} of the single consciousness; they must also  
be linked to an environment.  I think this is part of what Pierz is  
saying.  He says the linkage cannot merge different physics, so  
effectively the thread of computations instantiating Bruce's  
consciousness implies the computation of a whole world (with physics)  
for Bruce's consciousness to exist in.


My original question here concerned the connectivity in Platonia for  
the computational steps of an individual consciousness. But I do  
agree that we have to go beyond this because consciousness is  
conscious of *something*, viz., an external world, so that has to be  
part of the computation -- so that when I hit you hard on the head,  
your self in Platonia loses consciousness. There is endless  
connectivity between the self and the world external to the self --  
and this covers all space and time, because my consciousness can be  
changed by a CMB photon. Hence my thinking that the whole universe  
(multiverse) may well have to be included in the same connected  
simulation in Platonia.


Bruno does not seem to have thought along these lines.


I am not sure why you say this. I very often mention that possibility.  
The point is only that, whatever the case is, it has to be justified  
from computer science/arithmetic.


Bruno





Bruce


Re: What does the MGA accomplish?

2015-05-25 Thread Bruce Kellett

Bruno Marchal wrote:

On 18 May 2015, at 02:31, Bruce Kellett wrote:


LizR wrote:
On 17 May 2015 at 11:44, Bruce Kellett bhkell...@optusnet.com.au wrote:
   I can see that computationalism might well have difficulties

   accommodating a gradual evolutionary understanding of almost
   anything -- after all, the dovetailer is there in Platonia
   before anything physical ever appears. So how can consciousness
   evolve gradually?
This is the tired old misunderstanding of the concept of a block 
universe. It's as though Minkowski never existed.


OK. Explain to me exactly how the block universe ideas work out in 
Platonia.


I thought I saw an answer by Liz, but don't find it.


No, Liz only snipes from the sidelines. She does not answer 
substantive questions.


I am not sure that the block physical universe ideas work out in 
Platonia, although the block physical multiverse appearance might be 
explainable by the rather canonical "all computations", which is offered 
once we agree that 2+2 = 4, or any theorem of RA, is true independently 
of him/her/it.


The block multiverse could well be a different concept from the block 
universe of the Minkowskian understanding of special relativity.


The question arose in a discussion of the possibility of an evolutionary 
understanding of consciousness. This does not, on the face of it, appear 
to sit terribly easily in comp, since comp starts from the individual 
conscious moment or moments, and seeks to understand physics as somehow 
emergent from the statistics of all such instantiations of this set of 
computations in the UD. This does not appear to relate easily to an 
account of times before and after the existence of that particular 
consciousness.


Part of my problem is that the UD does not execute any actual program 
sequentially: after each step in a program it executes the next step of 
the next program and so on, until it reaches the first step of some 
program, at which point it loops back to the start. So if the conscious 
moment is evinced by a logical sequence of steps by the dovetailer, it 
does not correspond to any particular program, but a rather arbitrary 
assortment of steps from many programs. Of course, given that all 
programs are executed, this sequence of steps does correspond to some 
program, somewhere, but not necessarily any of the ones partially 
executed for generating that conscious moment.


There is also a question as to whether this sequence of computational 
steps generates one conscious moment -- of some shorter or longer 
duration (duration being in experienced time, since the computations are 
timeless) -- or whether a whole conscious life is generated by a 
continuous sequence of steps, or whether the whole history of the world 
that contains that consciousness (and all other conscious beings, past, 
present, and future) are generated by the same (extraordinarily long) 
continuous sequence of computational steps.


If the idea is something along the lines of the latter possibility, then 
the block universe might well be the result. The problem then, of 
course, is that any particular consciousness will be generated an 
indefinitely great number of separate times for each time this whole 
universe is generated. This, of course, is the Boltzmann brain problem, 
and I do not think you have adequately addressed this.


Of course, it is a poisonous gift, as it leads the computationalist to 
the necessary search for a measure on the border of the sigma_1 
reality.


It would take long to explain, but you might appreciate shortcuts: the 
sigma_1 arithmetical reality emulates all rational approximations of 
physical equations, and so, abstracting temporarily from the (comp) 
measure problem, you can make sense of a relative local block universe 
in that reality, as that part of the arithmetical reality mimics the 
physicists' block universe or universes (perhaps only locally too).


Generating all rational approximations of physical equations is not 
going to get you a block universe -- or any sort of universe, for that 
matter. The equations of physics describe the behaviour of the physical 
world, they are not that physical world -- map and territory again.



Of course such shortcuts might not have the right measure, and so we 
need to use a vaster net.


My point is that if our brains or bodies are Turing emulable then they 
are Turing emulated in a small part of the arithmetical reality. The 
first person point of view gives an internal perspective which is much 
more complex, in fact of unboundable complexity, but with important 
invariants too.


In the technical parts I exploit important relations between sigma_1 
truth, sigma_1 provability, and (with CT) the intuitively computable.


I can explain, if you want, but my feeling is that you don't like the 
idea (that the Aristotelian materialist dogma can be doubted), nor does 
it seem you are ready to engage with more computer science.


But if you don't study 

Re: What does the MGA accomplish?

2015-05-25 Thread Pierz


On Saturday, May 9, 2015 at 8:24:51 AM UTC+10, Russell Standish wrote:

 On Fri, May 08, 2015 at 08:47:22AM +0200, Bruno Marchal wrote: 
  
  
 It is only a new recent fashion on this list to take seriously that 
 a recording can be conscious, because for a logician, that error is 
 the (common) confusion between the finger and the moon, or between 
 "2+2=4" and 2+2=4. 
  

 It is only recently that we began seriously discussing the MGA at all 
 (about the last 3 years). 

 Why do you say conscious recordings (playbacks) are the same as the 
 confusion between "2+2=4" and 2+2=4? I don't even know what you mean 
 by "confusion between the finger and the moon"... 

 I don't know where that originally comes from, but most trippers know it! 
:) The finger that points to the moon is not the moon itself. The 
representation is not the thing. 
 

 Cheers 
 -- 

  

 Prof Russell Standish  Phone 0425 253119 (mobile) 
 Principal, High Performance Coders 
 Visiting Professor of Mathematics  hpc...@hpcoders.com.au 
 University of New South Wales  http://www.hpcoders.com.au 
  





Re: What does the MGA accomplish?

2015-05-25 Thread Bruno Marchal


On 25 May 2015, at 08:58, Bruce Kellett wrote:


Bruno Marchal wrote:

On 18 May 2015, at 02:31, Bruce Kellett wrote:

LizR wrote:
On 17 May 2015 at 11:44, Bruce Kellett bhkell...@optusnet.com.au wrote:
  I can see that computationalism might well have difficulties
  accommodating a gradual evolutionary understanding of almost
  anything -- after all, the dovetailer is there in Platonia
  before anything physical ever appears. So how can consciousness
  evolve gradually?
This is the tired old misunderstanding of the concept of a block  
universe. It's as though Minkowski never existed.


OK. Explain to me exactly how the block universe ideas work out in  
Platonia.

I thought I saw an answer by Liz, but don't find it.


No, Liz only snipes from the sidelines. She does not answer  
substantive questions.


I am not sure that the block physical universe ideas work out in  
Platonia, although the block physical multiverse appearance might be  
explainable by the rather canonical "all computations", which is  
offered once we agree that 2+2 = 4, or any theorem of RA, is true  
independently of him/her/it.


The block multiverse could well be a different concept from the  
block universe of the Minkowskian understanding of special relativity.


The question arose in a discussion of the possibility of an  
evolutionary understanding of consciousness. This does not, on the  
face of it, appear to sit terribly easily in comp, since comp starts  
from the individual conscious moment or moments, and seeks to  
understand physics as somehow emergent from the statistics of all  
such instantiations of this set of computations in the UD.


Comp just assumes the invariance of consciousness, or first person  
experience, for some digital substitution. It is an assumption of  
non-magic, or of no actual infinities playing a role, and it is the  
default assumption of many materialists.


Then UDA is an argument showing that this leads to the necessity of  
deriving physics from the math of the machine's dreams and their  
theoretical computer science important redundancies.


Then a theory of consciousness is suggested, as the first person view  
of consistency, as it will corroborate both the comp discourse of the  
machine and some common conscious experience (if you agree it is  
undoubtable, unjustifiable, inexpressible in 3p discourses, etc.).






This does not appear to relate easily to an account of times before  
and after the existence of that particular consciousness.


UDA explains the problem, and AUDA is UDA made so simple and  
elementary that we can explain it to any (Löbian) universal machine;  
indeed, it is the machine's answer that I give.






Part of my problem is that the UD does not execute any actual  
program sequentially: after each step in a program it executes the  
next step of the next program and so on, until it reaches the first  
step of some program, at which point it loops back to the start.


The UD does execute sequentially *each* specific program, but the UD  
adds delays, due to its dovetailing duties, which we already know  
(step 2) do not change the first person experience of the  
entity supported by that execution.





So if the conscious moment is evinced by a logical sequence of steps  
by the dovetailer, it does not correspond to any particular program,  
but a rather arbitrary assortment of steps from many programs.


?
Each execution of the programs is well individuated in the UD*. You  
can describe them by sequences

 phi_i^k(j), k = 0, 1, 2, ... (with i and j fixed).





Of course, given that all programs are executed, this sequence of  
steps does correspond to some program, somewhere, but not  
necessarily any of the ones partially executed for generating that  
conscious moment.


Yes. So what?




There is also a question as to whether this sequence of  
computational steps generates one conscious moment



To be sure, I do not believe in "one conscious moment". I believe that  
a person can be conscious of a moment. But the consciousness of a moment  
is not associated with a moment, but with an infinity of instantiations  
of some relative computational state in (sigma_1) arithmetic.




-- of some shorter or longer duration (duration being in experienced  
time, since the computations are timeless) -- or whether a whole  
conscious life is generated by a continuous sequence of steps, or  
whether the whole history of the world that contains that  
consciousness (and all other conscious beings, past, present, and  
future) are generated by the same (extraordinarily long) continuous  
sequence of computational steps.


Good, you begin to see the problem.




If the idea is something along the lines of the latter possibility,  
then the block universe might well be the result. The problem then,  
of course, is that any particular consciousness will be generated an  
indefinitely great number of separate times for each 

Re: What does the MGA accomplish?

2015-05-25 Thread Bruce Kellett

Bruno Marchal wrote:

On 25 May 2015, at 08:58, Bruce Kellett wrote:

Part of my problem is that the UD does not execute any actual program 
sequentially: after each step in a program it executes the next step 
of the next program and so on, until it reaches the first step of some 
program, at which point it loops back to the start.


The UD does execute sequentially *each* specific program, but the UD 
adds delays, due to its dovetailing duties, which we already know (step 
2) do not change the first person experience of the entity 
supported by that execution.


So if the conscious moment is evinced by a logical sequence of steps 
by the dovetailer, it does not correspond to any particular program, 
but a rather arbitrary assortment of steps from many programs.


?
Each execution of the programs is well individuated in the UD*. You can 
describe them by sequences

 phi_i^k(j), k = 0, 1, 2, ... (with i and j fixed).


But each step of the dovetailer is just a single application of the 
axioms of the Turing machine in question: one of the

   Kxy  gives x,
   Sxyz gives xz(yz),
for example. These single steps are all that there is in the dovetailer. 
But such steps lack a context -- they make no sense on their own. You 
could simply claim that the two basic steps are all that is needed -- 
consciousness self-assembles by taking as many of these in whatever 
order is needed.


If the next step of program phi_i is some 10^50 or so dovetailer steps 
away, the only thing that could possibly link these is the program phi_i 
itself -- the actual execution of the steps is entirely secondary. In 
which case, one would say that consciousness resides in the program 
phi_i itself - execution on the dovetailer is not required. I do not 
think you would want to go down this path, so you need something to give 
each step a context, something to link the separate steps that are 
required for consciousness.


The teleportation arguments of Steps 1-7 are insufficient for this, 
since in that argument you are teleporting complete conscious entities, 
not just single steps of the underlying program.


Of course, given that all programs are executed, this sequence of 
steps does correspond to some program, somewhere, but not necessarily 
any of the ones partially executed for generating that conscious moment.


Yes. So what?


I was trying to give you a way out of the problems that I have raised 
above. If you don't see that the sequential steps of the actual 
dovetailer program give the required connectivity, then what does? You 
did, some time ago, claim that the dovetailer steps gave an effective 
time parameter for the  system. But even that requires a contextual link 
between the steps -- something that would be given by the underlying 
stepping -- which is not the stepping of each individual program phi_i.


Bruce



Re: What does the MGA accomplish?

2015-05-25 Thread Bruce Kellett

meekerdb wrote:

On 5/25/2015 5:54 PM, Bruce Kellett wrote:

Bruno Marchal wrote:

On 25 May 2015, at 08:58, Bruce Kellett wrote:

Part of my problem is that the UD does not execute any actual 
program sequentially: after each step in a program it executes the 
next step of the next program and so on, until it reaches the first 
step of some program, at which point it loops back to the start.


The UD does execute sequentially *each* specific program, but the UD 
adds delays, due to its dovetailing duties, which we already know 
(step 2) do not change the first person experience of the 
entity supported by that execution.


So if the conscious moment is evinced by a logical sequence of steps 
by the dovetailer, it does not correspond to any particular program, 
but a rather arbitrary assortment of steps from many programs.


?
Each execution of the programs is well individuated in the UD*. You 
can describe them by sequences

 phi_i^k(j), k = 0, 1, 2, ... (with i and j fixed).


But each step of the dovetailer is just a single application of the 
axioms of the Turing machine in question: one of the

   Kxy  gives x,
   Sxyz gives xz(yz),
for example. These single steps are all that there is in the 
dovetailer. But such steps lack a context -- they make no sense on 
their own. You could simply claim that the two basic steps are all 
that is needed -- consciousness self-assembles by taking as many of 
these in whatever order is needed.


If the next step of program phi_i is some 10^50 or so dovetailer steps 
away, the only thing that could possibly link these is the program 
phi_i itself -- the actual execution of the steps is entirely 
secondary. In which case, one would say that consciousness resides in 
the program phi_i itself - execution on the dovetailer is not 
required. I do not think you would want to go down this path, so you 
need something to give each step a context, something to link the 
separate steps that are required for consciousness.


The teleportation arguments of Steps 1-7 are insufficient for this, 
since in that argument you are teleporting complete conscious 
entities, not just single steps of the underlying program.


Of course, given that all programs are executed, this sequence of 
steps does correspond to some program, somewhere, but not 
necessarily any of the ones partially executed for generating that 
conscious moment.


Yes. So what?


I was trying to give you a way out of the problems that I have raised 
above. If you don't see that the sequential steps of the actual 
dovetailer program give the required connectivity, then what does? You 
did, some time ago, claim that the dovetailer steps gave an effective 
time parameter for the  system. But even that requires a contextual 
link between the steps -- something that would be given by the 
underlying stepping -- which is not the stepping of each individual 
program phi_i.


I think what it boils down to is that steps in phi_{i}, where {i} is a 
set indexing programs supporting a particular consciousness, must be 
linked by representing consciousness of the same thing, the same 
thought.  But I think that requires some outside reference whereby they 
can be about the same thing.  So it is not enough to just link the 
phi_{i} of the single consciousness; they must also be linked to an 
environment.  I think this is part of what Pierz is saying.  He says the 
linkage cannot merge different physics, so effectively the thread of 
computations instantiating Bruce's consciousness implies the computation 
of a whole world (with physics) for Bruce's consciousness to exist in.


My original question here concerned the connectivity in Platonia for the 
computational steps of an individual consciousness. But I do agree that 
we have to go beyond this because consciousness is conscious of 
*something*, viz., an external world, so that has to be part of the 
computation -- so that when I hit you hard on the head, your self in 
Platonia loses consciousness. There is endless connectivity between the 
self and the world external to the self -- and this covers all space and 
time, because my consciousness can be changed by a CMB photon. Hence my 
thinking that the whole universe (multiverse) may well have to be 
included in the same connected simulation in Platonia.


Bruno does not seem to have thought along these lines.

Bruce



Re: What does the MGA accomplish?

2015-05-25 Thread meekerdb

On 5/25/2015 5:54 PM, Bruce Kellett wrote:

Bruno Marchal wrote:

On 25 May 2015, at 08:58, Bruce Kellett wrote:

Part of my problem is that the UD does not execute any actual program sequentially: 
after each step in a program it executes the next step of the next program and so on, 
until it reaches the first step of some program, at which point it loops back to the 
start.


The UD does execute sequentially *each* specific program, but the UD adds delays, due 
to its dovetailing duties, which we already know (step 2) do not change the 
first person experience of the entity supported by that execution.


So if the conscious moment is evinced by a logical sequence of steps by the 
dovetailer, it does not correspond to any particular program, but a rather arbitrary 
assortment of steps from many programs.


?
Each execution of the programs is well individuated in the UD*. You can describe them 
by sequences

 phi_i^k(j), k = 0, 1, 2, ... (with i and j fixed).


But each step of the dovetailer is just a single application of the axioms of the Turing 
machine in question: one of the

   Kxy  gives x,
   Sxyz gives xz(yz),
for example. These single steps are all that there is in the dovetailer. But such steps 
lack a context -- they make no sense on their own. You could simply claim that the two 
basic steps are all that is needed -- consciousness self-assembles by taking as many of 
these in whatever order is needed.


If the next step of program phi_i is some 10^50 or so dovetailer steps away, the only 
thing that could possibly link these is the program phi_i itself -- the actual execution 
of the steps is entirely secondary. In which case, one would say that consciousness 
resides in the program phi_i itself - execution on the dovetailer is not required. I do 
not think you would want to go down this path, so you need something to give each step a 
context, something to link the separate steps that are required for consciousness.


The teleportation arguments of Steps 1-7 are insufficient for this, since in that 
argument you are teleporting complete conscious entities, not just single steps of the 
underlying program.


Of course, given that all programs are executed, this sequence of steps does 
correspond to some program, somewhere, but not necessarily any of the ones partially 
executed for generating that conscious moment.


Yes. So what?


I was trying to give you a way out of the problems that I have raised above. If you 
don't see that the sequential steps of the actual dovetailer program give the required 
connectivity, then what does? You did, some time ago, claim that the dovetailer steps 
gave an effective time parameter for the  system. But even that requires a contextual 
link between the steps -- something that would be given by the underlying stepping -- 
which is not the stepping of each individual program phi_i.


I think what it boils down to is that steps in phi_{i}, where {i} is a set indexing 
programs supporting a particular consciousness, must be linked by representing 
consciousness of the same thing, the same thought.  But I think that requires some outside 
reference whereby they can be about the same thing.  So it is not enough to just link the 
phi_{i} of the single consciousness; they must also be linked to an environment.  I think 
this is part of what Pierz is saying.  He says the linkage cannot merge different physics, so 
effectively the thread of computations instantiating Bruce's consciousness imply the 
computation of a whole world (with physics) for Bruce's consciousness to exist in.


My apologies if I'm mistaking your or Pierz's ideas.

Brent

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: What does the MGA accomplish?

2015-05-21 Thread Bruno Marchal


On 18 May 2015, at 02:31, Bruce Kellett wrote:


LizR wrote:
On 17 May 2015 at 11:44, Bruce Kellett bhkell...@optusnet.com.au wrote:

   I can see that computationalism might well have difficulties
   accommodating a gradual evolutionary understanding of almost
   anything -- after all, the dovetailer is there in Platonia
   before anything physical ever appears. So how can consciousness
   evolve gradually?

This is the tired old misunderstanding of the concept of a block  
universe. It's as though Minkowski never existed.


OK. Explain to me exactly how the block universe ideas work out in  
Platonia.


I thought I saw an answer by Liz, but don't find it.

I am not sure that the block physical universe ideas work out in  
Platonia, although the appearance of a block physical multiverse might  
be explainable by the rather canonical "all computations", which is  
offered once we agree that 2+2 = 4, or any theorem of RA, is true  
independently of anyone.


Of course, it is a poisonous gift, as it leads to the necessary  
search, for the computationalist, of a measure on the border of the  
sigma_1 reality.


It would take long to explain, but you might appreciate a shortcut: the  
sigma_1 arithmetical reality emulates all rational approximations of  
physical equations, and so, abstracting from the (comp) measure  
problem temporarily, you can make sense of a relative local block  
universe in that reality, as that part of the arithmetical reality  
mimics the physicists' block universe or universes (perhaps only  
locally too).


Of course such shortcuts might not have the right measure, and so we  
need to use a vaster net.


My point is that if our brains or bodies are Turing emulable then they  
are Turing emulated in a small part of the arithmetical reality. The  
first person point of view gives an internal perspective which is  
much more complex, in fact of unbounded complexity, but with important  
invariants too.


In the technical parts I exploit important relations between the sigma_1  
truth, the sigma_1 provable, and (with CT) the intuitively computable.


I can explain, if you want, but my feeling is that you don't like the  
idea (that the aristotelian materialist dogma can be doubted), nor  
does it seem you are ready to engage with more computer science.


But if you don't study the work, you should try not to criticize it  
from personal taste alone. I can't pretend to like all consequences of  
comp, but that is another topic. Science is NOT wishful thinking, a  
priori.


Bruno





Bruce



http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-18 Thread Bruno Marchal


On 17 May 2015, at 18:34, Telmo Menezes wrote:




On Sun, May 17, 2015 at 10:44 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


On 16 May 2015, at 15:47, Telmo Menezes wrote:




On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett bhkell...@optusnet.com.au 
 wrote:

Telmo Menezes wrote:
On Sat, May 16, 2015 at 11:50 AM, Bruce Kellett bhkell...@optusnet.com.au wrote:


Telmo Menezes wrote:
On Sat, May 16, 2015 at 10:22 AM, Bruce Kellett

Are you seriously going to argue that homo sapiens did *not*
arise by a process of natural selection, aka evolution?

No, Darwinian evolution is my favourite scientific theory.

What I am arguing is that we don't know if consciousness is an
evolved trait. It is perfectly possible to imagine darwinian
evolution working without consciousness, even to the human
intelligence level (producing philosophical zombies).

For example, if consciousness is more fundamental than matter,
then evolution is something that happens within consciousness,
not a generator of it.


 That is probably the strongest argument against computationalism to
 date.

How so?

So you think that Darwinian evolution produced intelligent zombies,  
and then computationalism infused consciousness?


No. What I am saying is that consciousness is not a plausible  
target for gradual evolution for the following reasons:


1) There is no evolutionary advantage to it, intelligent zombies  
could do equally well. Every single behaviour that each one of us  
has, as seen from the outside, could be performed by intelligent  
zombies;


2) There is no known mechanism of conscious generation that can be  
climbed. For example, we understand how neurons are computational  
units, how connecting neurons creates a computer, how more neurons  
and more connections create a more powerful computer and so on.  
Evolution can climb this stuff. There is no equivalent known  
mechanism for consciousness.


I don't know if intelligent zombies are possible. Maybe  
consciousness necessarily supervenes on the stuff necessary for  
that level of intelligence. But who knows where consciousness stops  
supervening? Maybe stuff that is not biologically evolved is  
already conscious. Maybe stars are conscious. Who knows? How could  
we know?


I think we can say that universal numbers are conscious, but they  
are self-conscious only when they become Löbian.


So, in a sense, I agree with you, consciousness is already there, in  
arithmetic, seen in some global way.
Then it can differentiate on the different computations which will  
relatively incarnate/implement those universal numbers.







I think you are going to have to do better than that if you want  
comp to be believed by anyone with any scientific knowledge.


Anyone with any scientific knowledge will be agnostic on comp.  
There is no basis to believe it or disbelieve it. Maybe it is  
unknowable. What we can do is investigate the consequences of  
assuming comp.


If the classical-comp-physics is different from the empirical  
physics, we will have clues that the classical comp is false.


Ok. Do you have any intuition on the level of effort necessary to  
extract classical physics from comp?



I guess that you mean the approximate classical physics.
In fact classical comp entails the falsity of classical physics. Comp  
quickly entails the non-booleanity of all points of view, and the main  
features of the quantum.


So we just derive the quantum from comp, and then will derive the  
classical limit from the quantum, and it will be more difficult, and  
perhaps impossible, which would make the apparent classicalness of  
physics geographical. I doubt this, as I think that space and time  
arise from the necessary part of the necessary quantum feature, but I  
am less sure for the Hamiltonian.












You really are calling on dualism to explain consciousness -- the  
homunculus in the machine.


I am not trying to explain consciousness. I don't know what  
consciousness is


I think you know what consciousness is (you just cannot define it).

True.




or how it originates. What I am claiming is that current science  
has nothing to say about it either.


Hmm...
Would you be willing to accept, if only for the sake of a  
discussion,  the following consciousness of P axioms (P for a  
person):



1) P knows that P is conscious,
2) P is conscious entails that P cannot doubt that P is conscious,
3) P, like any of its consistent extensions, cannot justify that P  
(resp. the consistent extensions) is (are) conscious,
4) P cannot define consciousness in any 3p way (but might with some  
good definition of 1p),
5) comp: there is a level of description of P's body such that P's  
consciousness is invariant for a digital substitution made at that  
level.


I have no problem with any of these axioms. They feel like a natural  
expansion on cogito ergo sum + the definition of comp.



Re: What does the MGA accomplish?

2015-05-18 Thread Bruno Marchal


On 17 May 2015, at 20:24, meekerdb wrote:


On 5/17/2015 1:44 AM, Bruno Marchal wrote:
Roughly speaking, consciousness originates from the fact that p -> []p  
(the sigma_1 truth gets represented in the body/brain of the  
machine), and the fact that []p -> p is true, but not justifiable  
by the machine.


What does []p mean?


An abbreviation of beweisbar('p'). It is Gödel's arithmetical  
provability predicate, and we have an equivalent for any machine  
talking correctly about itself, seen in the 3p picture.





p doesn't entail that p is provable or necessary.


It does, when p is restricted to sigma_1 sentences. Sigma_1 means  
equivalent to a sentence with the shape ExA(x, ...), with A(x, ...)  
decidable.
You can understand that if A(x, ...) is decidable, you will find the x  
verifying it, by testing


A(0, ...)
A(1, ...)
A(2, ...)
etc. Until you find it.

When ExA(x, ...) is true (p), then it is provable (by any  
sigma_1-complete machine, like RA, and this is equivalent to Turing  
universality). So p -> []p.
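The search procedure described above (test A(0), A(1), A(2), ... until a witness turns up) can be sketched directly; a hedged toy illustration, with an arbitrarily chosen decidable A:

```python
def sigma1_witness(A, bound=None):
    """Search for the least x with A(x), where A is decidable.

    If Ex A(x) is true, the loop halts, and the successful finite
    search is in effect the proof -- which is why a true sigma_1
    sentence is provable by any sigma_1-complete machine.  If the
    sentence is false, the search runs forever; `bound` exists only
    to keep this toy version total.
    """
    x = 0
    while bound is None or x < bound:
        if A(x):
            return x
        x += 1
    return None

# Example sigma_1 sentence: "Ex (x*x > 50)", whose matrix is decidable.
w = sigma1_witness(lambda x: x * x > 50)
```

Note the asymmetry the thread relies on: success is verifiable in finite time, but the false case gives no such certificate.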


You might have just skipped that we limit ourselves to the sigma_1  
truth. All such sigma_1 truths are provable by the machine.


Löbian machines are not only sigma_1 complete (and thus p -> []p is  
true for them, with p sigma_1), but they can prove p -> []p, for any  
p sigma_1. In particular []p is sigma_1, so we get the self-awareness  
principle: []p -> [][]p.
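Schematically, the step from sigma_1 completeness to the self-awareness principle is the standard provability-logic derivation (a textbook sketch, not specific to this thread):

```latex
% For p a Sigma_1 sentence, formalized Sigma_1-completeness gives:
\[
  \vdash p \rightarrow \Box p
  \qquad \text{($p$ is $\Sigma_1$)}
\]
% But \Box p ("there exists a code of a proof of p") is itself a
% Sigma_1 sentence, so the same schema, instantiated at \Box p:
\[
  \vdash \Box p \rightarrow \Box\Box p
  \qquad \text{($\Box p$ is itself $\Sigma_1$)}
\]
% This is exactly the transitivity axiom 4 of the modal logic GL.
```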


Ask any question if this is not clear enough.

Bruno




Brent



http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-17 Thread Bruno Marchal


On 16 May 2015, at 05:41, meekerdb wrote:


On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 6:18 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net wrote:


   I'm trying to understand what counterfactual  
correctness means in

   the physical thought experiments.

You and me both.


Yes. When you think about it, 'counterfactual' means that the  
antecedent is false. So Bruno's reference to the branching  
'if A then B else C' construction of a program is not really  
a counterfactual at all, since to be a counterfactual A  
*must* be false. So the counterfactual construction is 'if A  
then C', where A happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's  
never made explicit.  In the beginning when one is asked to  
accept a digital prosthesis for a brain part, Bruno says  
almost everyone agrees that consciousness is realized by a  
certain class of computations.  The alternative, as suggested  
by Searle for example, that consciousness depends not only of  
the activity of the brain but also what the physical material  
is, seems like invoking magic.  So we agree that consciousness  
depends on the program that's running, not the hardware it's  
running on.  And implicit in this is that this program  
implements intelligence, the ability to respond differently to  
different externals signals/environment. Bruno says that's  
what is meant by computation, but whether that's entailed by  
the word or not seems like a semantic quibble.  Whatever you  
call it, it's implicit in the idea of digital brain prosthesis  
and in the idea of strong AI that the program instantiating  
consciousness must be able to respond differently to different  
inputs.


But it doesn't have to respond differently to every different  
input or to all logically possible inputs.  It only needs to  
be able to respond to inputs within some range as might occur  
in its environment - whether that environment is a whole world  
or just the other parts of the brain.  So the digital  
prosthesis needs to do this with that same functionality over  
the same domain as the brain parts it replaced.  In which case  
it is counterfactually correct. Right?  It's a concept  
relative to a limited domain.


That is probably right. But that just means that the prosthesis  
is functionally equivalent over the required domain. To call  
this 'counterfactual correctness' seems to me to be just  
confused.


What makes the consciousness, in Bruno's view, is that it's the  
right kind of program being run - which seems fairly  
uncontroversial.  And part of being the right kind is that it is  
counterfactually correct = functionally equivalent at the  
software level.  Of course this also means it correctly  
interfaces physically with the rest of the world of which it is  
conscious.  But Bruno minimizes this by two moves. First, he  
considers the brain as dreaming so it is not interacting via  
perceptions.  I objected to this as missing the essential fact  
that the processes in the brain refer to perceptions and other  
concepts learned in its waking state and this is what gives them  
meaning.  Second, Bruno notes that one can just expand the  
digital prosthesis to include a digital artificial world,  
including even a simulation of a whole universe. To which my  
attitude is that this makes the concept of prosthesis and  
artificial moot.


I don't think you would consider just *any* piece of software  
running to be conscious, and I do think you would consider some  
sufficiently intelligently behaving software, plus perhaps certain  
I/O, to be conscious.  So what would be the crucial difference  
between these two software packages? I'd say having the ability  
to produce intelligent looking responses to a large range of  
inputs would be a minimum.


Quite probably. But the argument was made that the detailed  
recording of the sequence of brain states of a conscious person  
could not be conscious because it was not counterfactually  
correct. This charge has always seemed to me to be misguided,  
since the recording does not pretend to be functionally  
equivalent to the original in all circumstances -- just in the  
particular circumstance in which the recording was made. It has  
never been proposed that the film could be used as a prosthesis  
for all situations. So this argument against the replayed  
recording recreating the original conscious moments must fail --  
on the basis of total irrelevance.


But you could turn this around and pick some arbitrary sequence/ 
recording and say, Well it would be the right program to be  
conscious in SOME circumstance, therefore it's conscious.


I think it goes without saying that it is a 

Re: What does the MGA accomplish?

2015-05-17 Thread Bruno Marchal


On 16 May 2015, at 06:31, Bruce Kellett wrote:


meekerdb wrote:

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:


But you could turn this around and pick some arbitrary sequence/ 
recording and say, Well it would be the right program to be  
conscious in SOME circumstance, therefore it's conscious.


I think it goes without saying that it is a recording of brain  
activity of a conscious person -- not a film of your dog chasing a  
ball. We have to assume a modicum of common sense.
Fine.  But then what is it about the recording of the brain  
activity of a conscious person that makes it conscious?  Why is it  
a property of just that sequence, when in general we would  
attribute consciousness only to an entity that responded  
intelligently/differently to different circumstances.  We wouldn't  
attribute consciousness based on just a short sequence of behavior  
such as might be evinced by one of Disney's animatronics.


What is it about the brain activity of a conscious person that makes  
him conscious? Whatever made the person conscious in the first  
instance is what makes the recording recreate that conscious moment.  
The point here is that consciousness supervenes on the brain  
activity. This makes no ontological claims -- simply an  
epistemological claim. This brain activity is associated with the  
phenomenon we call consciousness.


Yes, but the relevant brain activity, assuming comp, is also emulated  
(infinitely often) in the sigma_1 reality. We can associate  
consciousness to appearance of brain/computation, but we have to  
associate infinities of brain/computation to consciousness. The  
identity thesis is not one-one.


Bruno





How we determine whether a person is conscious in the first place is  
a different matter.


Bruce

I think Bruno is right that it makes more sense to attribute  
consciousness, like intelligence, to a program that can respond  
differently and effectively to a wide range of inputs.  And, maybe  
unlike Bruno, I think intelligence and consciousness is only  
possible relative to an environment, one with extent in time as  
well as space.

Brent




http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-17 Thread Bruce Kellett

Telmo Menezes wrote:

On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett
So you think that Darwinian evolution produced intelligent
zombies,
and then computationalism infused consciousness?

No. What I am saying is that consciousness is not a plausible
target for gradual evolution for the following reasons:

1) There is no evolutionary advantage to it, intelligent zombies
could do equally well. Every single behaviour that each one of
us has, as seen from the outside, could be performed by
intelligent zombies;

Do you find it an advantage to be conscious in your everyday life?

From a biological perspective, I don't know. It seems to me that my 
genes could survive and be propagated without consciousness. There have 
been some weak attempts at demonstrating the evolutionary value of 
consciousness, but they always seem to equate consciousness with 
self-modelling.


Tell me if you think this thing is conscious:
http://thefutureofthings.com/5320-self-modeling-robot/


Not particularly. But consciousness would appear to be consciousness of 
self in an environment, so this may be some steps along the way, 
although rather simple at the moment.



Do you really think that your partner and/or children are zombies?

No, my personal bet is that intelligent zombies are not possible. I use 
intelligent zombies as a thought experiment. I bet that consciousness 
necessarily supervenes on the computations that allow for human 
intelligence. But this is a personal belief, not a scientific position.


I cannot make a scientific claim because I don't know how to design an 
experiment to test this hypothesis.


Yes, one would have a problem knowing just what computations were involved.


2) There is no known mechanism of conscious generation that can
be climbed. For example, we understand how neurons are
computational units, how connecting neurons creates a computer,
how more neurons and more connections create a more powerful
computer and so on. Evolution can climb this stuff. There is no
equivalent known mechanism for consciousness.

This is the tired old creationist crap saying that the eye is too
complicated to be explained by evolution; the bacterium's flagellum
is too complicated...; the .. is too complicated .

You are referring to the argument of irreducible complexity. In that 
case, the claim is that there is too much a priori needed complexity for 
an eye to function for it to be generated by iterative improvement. This 
claim counts as a scientific theory because it can be tested. It has 
been falsified by the fossil record, examples of earlier stages of eye 
evolution in simpler animals, demonstrations on how a single-cell eye 
could work, etc.


That was what I was referring to.

This is not the argument I am making at all. All I am saying is that we 
don't know what consciousness is and, even worse, we have no way to 
measure or detect it. If consciousness somehow emerges from brain 
activity, then it surely originated from evolution, even if only as a 
spandrel. But to make that claim you have to show a mechanism by which 
brain activity generates consciousness. I don't think anyone has been 
able to do that, or even propose a falsifiable hypothesis.


I don't think you have to demonstrate the mechanism -- just that the 
association exists. And this has surely been done through many 
experiments studying brain activity in conscious and unconscious 
individuals. Absence of relevant brain activity is a widely used 
criterion for brain death and the subsequent turning off of life support 
systems. I think one can make too much of a mystery of consciousness. 
Sure, one individual's consciousness is not available for study in the 
way that his facial features or social behaviours are available. But 
that is not an insuperable barrier to scientific study.



Having absolute belief that such a mechanism must exist is the same type 
of mistake that the creationists make.


No. Creationists make the opposite assumption -- they claim that no such 
mechanism is possible. This is not so far from hinting that the 
mechanism doesn't exist when we don't know what it is.




At bottom, it is just an argument from ignorance. You do not happen
to know a mechanism whereby consciousness could develop from simpler
forms. But that does not in any way mean that such is not possible.

I never claimed it wasn't possible. I simply claimed it is not known.


I read what you said differently.


Creationist anti-intellectualism yet again

Can't you argue without insulting me?


You seemed to be going down that path...



I don't know if intelligent zombies are possible. Maybe
consciousness necessarily supervenes on the stuff necessary for
that level of intelligence. But who knows where consciousness
stops supervening? Maybe stuff 

Re: What does the MGA accomplish?

2015-05-17 Thread Bruno Marchal


On 16 May 2015, at 15:47, Telmo Menezes wrote:




On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett bhkell...@optusnet.com.au 
 wrote:

Telmo Menezes wrote:
On Sat, May 16, 2015 at 11:50 AM, Bruce Kellett bhkell...@optusnet.com.au wrote:


Telmo Menezes wrote:
On Sat, May 16, 2015 at 10:22 AM, Bruce Kellett

Are you seriously going to argue that homo sapiens did *not*
arise by a process of natural selection, aka evolution?

No, Darwinian evolution is my favourite scientific theory.

What I am arguing is that we don't know if consciousness is an
evolved trait. It is perfectly possible to imagine darwinian
evolution working without consciousness, even to the human
intelligence level (producing philosophical zombies).

For example, if consciousness is more fundamental than matter,
then evolution is something that happens within consciousness,
not a generator of it.


 That is probably the strongest argument against computationalism to
 date.

How so?

So you think that Darwinian evolution produced intelligent zombies,  
and then computationalism infused consciousness?


No. What I am saying is that consciousness is not a plausible target  
for gradual evolution for the following reasons:


1) There is no evolutionary advantage to it, intelligent zombies  
could do equally well. Every single behaviour that each one of us  
has, as seen from the outside, could be performed by intelligent  
zombies;


2) There is no known mechanism of conscious generation that can be  
climbed. For example, we understand how neurons are computational  
units, how connecting neurons creates a computer, how more neurons  
and more connections create a more powerful computer and so on.  
Evolution can climb this stuff. There is no equivalent known  
mechanism for consciousness.


I don't know if intelligent zombies are possible. Maybe  
consciousness necessarily supervenes on the stuff necessary for that  
level of intelligence. But who knows where consciousness stops  
supervening? Maybe stuff that is not biologically evolved is already  
conscious. Maybe stars are conscious. Who knows? How could we know?


I think we can say that universal numbers are conscious, but they are  
self-conscious only when they become Löbian.


So, in a sense, I agree with you, consciousness is already there, in  
arithmetic, seen in some global way.
Then it can differentiate on the different computation which will  
relatively incarnate/implement those universal numbers.







I think you are going to have to do better than that if you want  
comp to be believed by anyone with any scientific knowledge.


Anyone with any scientific knowledge will be agnostic on comp. There  
is no basis to believe it or disbelieve it. Maybe it is unknowable.  
What we can do is investigate the consequences of assuming comp.


If the classical-comp-physics is different from the empirical physics,  
we will have clues that the classical comp is false.






You really are calling on dualism to explain consciousness -- the  
homunculus in the machine.


I am not trying to explain consciousness. I don't know what  
consciousness is


I think you know what consciousness is (you just cannot define it).



or how it originates. What I am claiming is that current science has  
nothing to say about it either.


Hmm...
Would you be willing to accept, if only for the sake of a discussion,   
the following consciousness of P axioms (P for a person):



1) P knows that P is conscious,
2) P is conscious entails that P cannot doubt that P is conscious,
3) P, like any of its consistent extensions, cannot justify that P  
(resp. the consistent extensions) is (are) conscious,
4) P cannot define consciousness in any 3p way (but might with some  
good definition of 1p),
5) comp: there is a level of description of P's body such that P's  
consciousness is invariant for a digital substitution made at that  
level.


If yes, then current computer science can already explains why  
universal machine, or Löbian machine, are already conscious (even just  
in arithmetic).


Roughly speaking, consciousness originates from the fact that p -> []p  
(the sigma_1 truth gets represented in the body/brain of the machine),  
and the fact that []p -> p is true, but not justifiable by the  
machine. That makes the machines which are developing knowledge more  
and more aware of their possible relative ignorance, and above some  
threshold even wise, as they understand that the augmentation of  
knowledge leads to the augmentation of ignorance. The more powerful  
the lantern in the cavern, the more we see that the cavern is big.


Also, that (axiomatic) notion of consciousness has many roles, from  
speeding up the relative abilities of the machine, and augmenting the  
degrees of freedom, to distinguishing efficaciously the bad (like  
being eaten) from the good (like eating). It also makes possible to  

Re: What does the MGA accomplish?

2015-05-17 Thread Telmo Menezes
On Sun, May 17, 2015 at 10:04 AM, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Telmo Menezes wrote:

 On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett
 So you think that Darwinian evolution produced intelligent
 zombies,
 and then computationalism infused consciousness?

 No. What I am saying is that consciousness is not a plausible
 target for gradual evolution for the following reasons:

 1) There is no evolutionary advantage to it, intelligent zombies
 could do equally well. Every single behaviour that each one of
 us has, as seen from the outside, could be performed by
 intelligent zombies;

 Do you find it an advantage to be conscious in your everyday life?

 From a biological perspective, I don't know. It seems to me that my genes
 could survive and be propagated without consciousness. There have been some
 weak attempts at demonstrating the evolutionary value of consciousness, but
 they always seem to equate consciousness with self-modelling.

 Tell me if you think this thing is conscious:
 http://thefutureofthings.com/5320-self-modeling-robot/


 Not particularly. But consciousness would appear to be consciousness of
 self in an environment, so this may be some steps along the way, although
 rather simple at the moment.


Ok. My intuition is that consciousness is an all-or-nothing proposition.
What can be more or less complex is what is experienced consciously -- and
this includes brain power: I see intelligence as a generator of richer
content. I don't think my cat is less conscious than me, just less
intelligent (and perhaps more wise).




  Do you really think that your partner and/or children are zombies?

 No, my personal bet is that intelligent zombies are not possible. I use
 intelligent zombies as a thought experiment. I bet that consciousness
 necessarily supervenes on the computations that allow for human
 intelligence. But this is a personal belief, not a scientific position.

 I cannot make a scientific claim because I don't know how to design an
 experiment to test this hypothesis.


 Yes, one would have a problem knowing just what computations were involved.


Ok.




  2) There is no known mechanism of conscious generation that can
 be climbed. For example, we understand how neurons are
 computational units, how connecting neurons creates a computer,
 how more neurons and more connections create a more powerful
 computer and so on. Evolution can climb this stuff. There is no
 equivalent known mechanism for consciousness.

 This is the tired old creationist crap saying that the eye is too
 complicated to be explained by evolution; the bacterium's flagellum
 is too complicated...; the .. is too complicated .

 You are referring to the argument of irreducible complexity. In that
 case, the claim is that there is too much a priori needed complexity for an
 eye to function for it to be generated by iterative improvement. This claim
 counts as a scientific theory because it can be tested. It has been
 falsified by the fossil record, examples of earlier stages of eye evolution
 in simpler animals, demonstrations on how a single-cell eye could work, etc.


 That was what I was referring to.


Ok.




  This is not the argument I am making at all. All I am saying is that we
 don't know what consciousness is and, even worse, we have no way to measure
 or detect it. If consciousness somehow emerges from brain activity, then it
surely originated from evolution, even if only as a spandrel. But to make that
 claim you have to show a mechanism by which brain activity generates
 consciousness. I don't think anyone has been able to do that, or even
 propose a falsifiable hypothesis.


 I don't think you have to demonstrate the mechanism -- just that the
 association exists. And this has surely been done through many experiments
 studying brain activity in conscious and unconscious individuals. Absence
 of relevant brain activity is a widely used criterion for brain death and
 the subsequent turning off of life support systems. I think one can make
 too much of a mystery of consciousness. Sure, one individual's
 consciousness is not available for study in the way that his facial
 features or social behaviours are available. But that is not an insuperable
 barrier to scientific study.


This is Brent's position, I had this debate with him several times. I will
try to better explain my problem with this.

I think there is a very narrow case for which such empiricism applies.  I
think it's fair to say that we can use MRI machines to check if a person is
conscious AND able to form memories. But down the rabbit hole:

- It is possible that states of consciousness exist where one is not able
to form memories. Dali famously would hold a spoon above a ceramic plate
while falling asleep, in an attempt to recover images from the hypnagogic
state. 

Re: What does the MGA accomplish?

2015-05-17 Thread Telmo Menezes
On Sun, May 17, 2015 at 10:44 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 16 May 2015, at 15:47, Telmo Menezes wrote:



 On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett bhkell...@optusnet.com.au
 wrote:

 Telmo Menezes wrote:

 On Sat, May 16, 2015 at 11:50 AM, Bruce Kellett 
 bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au wrote:

 Telmo Menezes wrote:
 On Sat, May 16, 2015 at 10:22 AM, Bruce Kellett

 Are you seriously going to argue that homo sapiens did *not*
 arise by a process of natural selection, aka evolution?

 No, Darwinian evolution is my favourite scientific theory.

 What I am arguing is that we don't know if consciousness is an
 evolved trait. It is perfectly possible to imagine darwinian
 evolution working without consciousness, even to the human
 intelligence level (producing philosophical zombies).

 For example, if consciousness is more fundamental than matter,
 then evolution is something that happens within consciousness,
 not a generator of it.


  That is probably the strongest argument against computationalism to
  date.

 How so?


 So you think that Darwinian evolution produced intelligent zombies, and
 then computationalism infused consciousness?


 No. What I am saying is that consciousness is not a plausible target for
 gradual evolution for the following reasons:

 1) There is no evolutionary advantage to it, intelligent zombies could do
 equally well. Every single behaviour that each one of us has, as seen from
 the outside, could be performed by intelligent zombies;

 2) There is no known mechanism of conscious generation that can be
 climbed. For example, we understand how neurons are computational units,
 how connecting neurons creates a computer, how more neurons and more
 connections create a more powerful computer and so on. Evolution can climb
 this stuff. There is no equivalent known mechanism for consciousness.

 I don't know if intelligent zombies are possible. Maybe consciousness
 necessarily supervenes on the stuff necessary for that level of
 intelligence. But who knows where consciousness stops supervening? Maybe
 stuff that is not biologically evolved is already conscious. Maybe stars
 are conscious. Who knows? How could we know?


 I think we can say that universal numbers are conscious, but they are
 self-conscious only when they become Löbian.

 So, in a sense, I agree with you, consciousness is already there, in
 arithmetic, seen in some global way.
 Then it can differentiate on the different computations which will
 relatively incarnate/implement those universal numbers.






 I think you are going to have to do better than that if you want comp to
 be believed by anyone with any scientific knowledge.


 Anyone with any scientific knowledge will be agnostic on comp. There is no
 basis to believe it or disbelieve it. Maybe it is unknowable. What we can
 do is investigate the consequences of assuming comp.


 If the classical-comp-physics is different from the empirical physics, we
 will have clues that the classical comp is false.


Ok. Do you have any intuition on the level of effort necessary to extract
classical physics from comp?







 You really are calling on dualism to explain consciousness -- the
 homunculus in the machine.


 I am not trying to explain consciousness. I don't know what consciousness
 is


 I think you know what consciousness is (you just cannot define it).


True.





 or how it originates. What I am claiming is that current science has
 nothing to say about it either.


 Hmm...
 Would you be willing to accept, if only for the sake of a discussion,  the
 following consciousness of P axioms (P for a person):


 1) P knows that P is conscious,
 2) P is conscious entails that P cannot doubt that P is conscious,
 3) P, like any of its consistent extensions, cannot justify that P (resp
 the consistent extensions) is (are) conscious
 4) P cannot define consciousness in any 3p way. (But might with some good
 definition of 1p.)
 5) comp: there is a level of description of P's body such that P's
 consciousness is invariant for a digital substitution made at that level.


I have no problem with any of these axioms. They feel like a natural
expansion on cogito ergo sum + the definition of comp.



 If yes, then current computer science can already explain why universal
 machines, or Löbian machines, are already conscious (even just in arithmetic).

 Roughly speaking, consciousness originates from the fact that p → []p
 (the sigma_1 truth gets represented in the body/brain of the machine), and
 the fact that []p → p is true, but not justifiable by the machine. That
 makes the machines which are developing knowledge more and more aware of
 their possible relative ignorance, and above some threshold, even wise, as
 they understand that the augmentation of knowledge leads to the
 augmentation of ignorance. The more powerful the lantern is in the cavern,
 the more we see 

Re: What does the MGA accomplish?

2015-05-17 Thread LizR
On 18 May 2015 at 06:14, meekerdb meeke...@verizon.net wrote:

  At a very low level, yes.  It's more conscious than my computer or a
 rock. Maybe less conscious than an amoeba, since the amoeba not only
 understands how to move, it also understands food and reproduction.


You think amoebas are conscious? Do you have an answer to Russell's
argument which I believe is entitled "Ants are not conscious"? (in TON)

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: What does the MGA accomplish?

2015-05-17 Thread Bruce Kellett

LizR wrote:
On 17 May 2015 at 11:44, Bruce Kellett bhkell...@optusnet.com.au 


I can see that computationalism might well have difficulties
accommodating a gradual evolutionary understanding of almost
anything -- after all, the dovetailer is there in Platonia
before anything physical ever appears. So how can consciousness
evolve gradually?

This is the tired old misunderstanding of the concept of a block 
universe. It's as though Minkowski never existed.


OK. Explain to me exactly how the block universe ideas work out in Platonia.

Bruce



Re: What does the MGA accomplish?

2015-05-17 Thread LizR
On 17 May 2015 at 11:44, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Telmo Menezes wrote:

 On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett
 So you think that Darwinian evolution produced intelligent zombies,
 and then computationalism infused consciousness?

 No. What I am saying is that consciousness is not a plausible target for
 gradual evolution for the following reasons:

 1) There is no evolutionary advantage to it, intelligent zombies could do
 equally well. Every single behaviour that each one of us has, as seen from
 the outside, could be performed by intelligent zombies;


 Do you find it an advantage to be conscious in your everyday life? Do you
 really think that your partner and/or children are zombies?


I'm not particularly taking sides here but it would be nice if you actually
answered the point that was made, rather than this rhetorical/emotional
stuff.


 I can see that computationalism might well have difficulties accommodating
 a gradual evolutionary understanding of almost anything -- after all, the
 dovetailer is there in Platonia before anything physical ever appears. So
 how can consciousness evolve gradually?


This is the tired old misunderstanding of the concept of a block universe.
It's as though Minkowski never existed.





Re: What does the MGA accomplish?

2015-05-17 Thread meekerdb

On 5/17/2015 4:44 PM, LizR wrote:
On 18 May 2015 at 06:14, meekerdb meeke...@verizon.net mailto:meeke...@verizon.net 
wrote:


At a very low level, yes.  It's more conscious than my computer or a rock. 
Maybe
less conscious than an amoeba, since the amoeba not only understands how to 
move, it also understands food and reproduction.


You think amoebas are conscious? Do you have an answer to Russell's argument which I 
believe is entitled "Ants are not conscious"? (in TON)


I haven't read it, but I suspect it's the same as my answer as to why I'm not 
Chinese.

Brent



Re: What does the MGA accomplish?

2015-05-17 Thread meekerdb

On 5/17/2015 1:44 AM, Bruno Marchal wrote:
Roughly speaking, consciousness originates from the fact that p → []p (the sigma_1 
truth gets represented in the body/brain of the machine), and the fact that []p → p is 
true, but not justifiable by the machine.


What does []p mean?  p doesn't entail that p is provable or necessary.

Brent



Re: What does the MGA accomplish?

2015-05-17 Thread meekerdb

On 5/17/2015 12:16 AM, Telmo Menezes wrote:



On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett
So you think that Darwinian evolution produced intelligent zombies,
and then computationalism infused consciousness?

No. What I am saying is that consciousness is not a plausible target 
for gradual
evolution for the following reasons:

1) There is no evolutionary advantage to it, intelligent zombies could 
do
equally well. Every single behaviour that each one of us has, as seen from
the outside, could be performed by intelligent zombies;


Do you find it an advantage to be conscious in your everyday life?


From a biological perspective, I don't know. It seems to me that my genes could survive 
and be propagated without consciousness. There have been some weak attempts at 
demonstrating the evolutionary value of consciousness, but they always seem to equate 
consciousness with self-modelling.


Do you think there's something more to it?



Tell me if you think this thing is conscious:
http://thefutureofthings.com/5320-self-modeling-robot/


At a very low level, yes.  It's more conscious than my computer or a rock. Maybe less 
conscious than an amoeba, since the amoeba not only understands how to move, it also 
understands food and reproduction.




Do you really think that your partner and/or children are zombies?


No, my personal bet is that intelligent zombies are not possible. I use intelligent 
zombies as a thought experiment. I bet that consciousness necessarily supervenes on the 
computations that allow for human intelligence. But this is a personal belief, not a 
scientific position.


I cannot make a scientific claim because I don't know how to design an experiment to 
test this hypothesis.




2) There is no known mechanism of conscious generation that can be 
climbed. For
example, we understand how neurons are computational units, how 
connecting
neurons creates a computer, how more neurons and more connections 
create a more
powerful computer and so on. Evolution can climb this stuff. There is no
equivalent known mechanism for consciousness.


This is the tired old creationist crap saying that the eye is too 
complicated to be
explained by evolution; the bacterium's flagellum is too complicated...; 
the ..
is too complicated .


You are referring to the argument of irreducible complexity. In that case, the claim is 
that there is too much a priori needed complexity for an eye to function for it to be 
generated by iterative improvement. This claim counts as a scientific theory because it 
can be tested. It has been falsified by the fossil record, examples of earlier stages of 
eye evolution in simpler animals, demonstrations on how a single-cell eye could work, etc.


This is not the argument I am making at all. All I am saying is that we don't know what 
consciousness is and, even worse, we have no way to measure or detect it. If 
consciousness somehow emerges from brain activity, then it surely originated from 
evolution, even if only as a spandrel. But to make that claim you have to show a mechanism 
by which brain activity generates consciousness. I don't think anyone has been able to 
do that, or even propose a falsifiable hypothesis.


Having absolute belief that such a mechanism must exist is the same type of mistake that 
the creationists make.



At bottom, it is just an argument from ignorance. You do not happen to know 
a
mechanism whereby consciousness could develop from simpler forms. But that 
does not
in any way mean that such is not possible.


I never claimed it wasn't possible. I simply claimed it is not known.

Creationist anti-intellectualism yet again


Can't you argue without insulting me?




I don't know if intelligent zombies are possible. Maybe consciousness
necessarily supervenes on the stuff necessary for that level of 
intelligence.
But who knows where consciousness stops supervening? Maybe stuff that 
is not
biologically evolved is already conscious. Maybe stars are conscious. 
Who knows?
How could we know?


What we can know, by scientific investigation, is that all known life forms 
evolved
from simpler forms by processes generally described under the heading of 
Darwinian
evolution. Consciousness is a feature of many living creatures. If you want 
to argue
that consciousness is something outside the normal evolutionary process, 
then you
have embraced an irreducibly dualist position.


That is a fallacy. Consider:

All life forms were created by evolutionary processes
All life forms have mass
Mass was created by evolutionary processes


I can see that computationalism might well have difficulties accommodating 
a gradual
evolutionary understanding of almost anything -- after all, the dovetailer 
is there
in Platonia before anything physical 

Re: What does the MGA accomplish?

2015-05-17 Thread Telmo Menezes


  On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett
 So you think that Darwinian evolution produced intelligent zombies,
 and then computationalism infused consciousness?

 No. What I am saying is that consciousness is not a plausible target for
 gradual evolution for the following reasons:

 1) There is no evolutionary advantage to it, intelligent zombies could do
 equally well. Every single behaviour that each one of us has, as seen from
 the outside, could be performed by intelligent zombies;


 Do you find it an advantage to be conscious in your everyday life?


From a biological perspective, I don't know. It seems to me that my genes
could survive and be propagated without consciousness. There have been some
weak attempts at demonstrating the evolutionary value of consciousness, but
they always seem to equate consciousness with self-modelling.

Tell me if you think this thing is conscious:
http://thefutureofthings.com/5320-self-modeling-robot/



 Do you really think that your partner and/or children are zombies?


No, my personal bet is that intelligent zombies are not possible. I use
intelligent zombies as a thought experiment. I bet that consciousness
necessarily supervenes on the computations that allow for human
intelligence. But this is a personal belief, not a scientific position.

I cannot make a scientific claim because I don't know how to design an
experiment to test this hypothesis.




  2) There is no known mechanism of conscious generation that can be
 climbed. For example, we understand how neurons are computational units,
 how connecting neurons creates a computer, how more neurons and more
 connections create a more powerful computer and so on. Evolution can climb
 this stuff. There is no equivalent known mechanism for consciousness.


 This is the tired old creationist crap saying that the eye is too
 complicated to be explained by evolution; the bacterium's flagellum is too
 complicated...; the .. is too complicated .


You are referring to the argument of irreducible complexity. In that case,
the claim is that there is too much a priori needed complexity for an eye
to function for it to be generated by iterative improvement. This claim
counts as a scientific theory because it can be tested. It has been
falsified by the fossil record, examples of earlier stages of eye evolution
in simpler animals, demonstrations on how a single-cell eye could work, etc.

This is not the argument I am making at all. All I am saying is that we
don't know what consciousness is and, even worse, we have no way to measure
or detect it. If consciousness somehow emerges from brain activity, then it
surely originated from evolution, even if only as a spandrel. But to make that
claim you have to show a mechanism by which brain activity generates
consciousness. I don't think anyone has been able to do that, or even
propose a falsifiable hypothesis.

Having absolute belief that such a mechanism must exist is the same type of
mistake that the creationists make.



 At bottom, it is just an argument from ignorance. You do not happen to
 know a mechanism whereby consciousness could develop from simpler forms.
 But that does not in any way mean that such is not possible.


I never claimed it wasn't possible. I simply claimed it is not known.


 Creationist anti-intellectualism yet again


Can't you argue without insulting me?





  I don't know if intelligent zombies are possible. Maybe consciousness
 necessarily supervenes on the stuff necessary for that level of
 intelligence. But who knows where consciousness stops supervening? Maybe
 stuff that is not biologically evolved is already conscious. Maybe stars
 are conscious. Who knows? How could we know?


 What we can know, by scientific investigation, is that all known life
 forms evolved from simpler forms by processes generally described under the
 heading of Darwinian evolution. Consciousness is a feature of many living
 creatures. If you want to argue that consciousness is something outside the
 normal evolutionary process, then you have embraced an irreducibly dualist
 position.


That is a fallacy. Consider:

All life forms were created by evolutionary processes
All life forms have mass
Mass was created by evolutionary processes



 I can see that computationalism might well have difficulties accommodating
 a gradual evolutionary understanding of almost anything -- after all, the
 dovetailer is there in Platonia before anything physical ever appears. So
 how can consciousness evolve gradually?

 This, it seems to me, is just another strike against comp -- it does not
 fit with the scientific data.


It is not hard to imagine evolutionary processes within computations. In
fact, that's what I did for a living for some years.
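For what it's worth, the idea of evolution running entirely inside a computation can be made concrete with a toy sketch (my own illustration, not Telmo's actual work; the bit-string genome, the mutation rate, and all parameters are arbitrary choices):

```python
# Toy evolutionary process inside a computation: evolve bit-strings
# toward the all-ones target by mutation plus survival of the fitter.
import random

random.seed(0)            # fixed seed for reproducibility
TARGET_LEN = 16

def fitness(genome):
    return sum(genome)    # fitness = number of 1-bits

def mutate(genome, rate=0.1):
    # flip each bit independently with probability `rate`
    return [b ^ 1 if random.random() < rate else b for b in genome]

# Random starting population; each generation, each individual is
# replaced by the fitter of itself and a mutated copy.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(20)]
for _ in range(200):
    population = [max(g, mutate(g), key=fitness) for g in population]

best = max(population, key=fitness)
print(fitness(best))  # typically converges to 16 (all ones)
```

Nothing here is physical: selection, variation, and heredity are all just operations on data, which is the point being made about evolution within computations.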

What is hard is to explain consciousness, comp or no comp. Pretending to
have answers is not a valid argument.

Telmo.




 Bruce


Re: What does the MGA accomplish?

2015-05-16 Thread Telmo Menezes
On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Telmo Menezes wrote:

 On Sat, May 16, 2015 at 11:50 AM, Bruce Kellett 
 bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au wrote:

 Telmo Menezes wrote:
 On Sat, May 16, 2015 at 10:22 AM, Bruce Kellett

 Are you seriously going to argue that homo sapiens did *not*
 arise by a process of natural selection, aka evolution?

 No, Darwinian evolution is my favourite scientific theory.

 What I am arguing is that we don't know if consciousness is an
 evolved trait. It is perfectly possible to imagine darwinian
 evolution working without consciousness, even to the human
 intelligence level (producing philosophical zombies).

 For example, if consciousness is more fundamental than matter,
 then evolution is something that happens within consciousness,
 not a generator of it.


  That is probably the strongest argument against computationalism to
  date.

 How so?


 So you think that Darwinian evolution produced intelligent zombies, and
 then computationalism infused consciousness?


No. What I am saying is that consciousness is not a plausible target for
gradual evolution for the following reasons:

1) There is no evolutionary advantage to it, intelligent zombies could do
equally well. Every single behaviour that each one of us has, as seen from
the outside, could be performed by intelligent zombies;

2) There is no known mechanism of conscious generation that can be climbed.
For example, we understand how neurons are computational units, how
connecting neurons creates a computer, how more neurons and more
connections create a more powerful computer and so on. Evolution can climb
this stuff. There is no equivalent known mechanism for consciousness.
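The "neurons are computational units" point admits a minimal sketch (my own toy example; the weights are hand-picked): a single threshold neuron computes AND, and connecting a few such neurons yields XOR, which no single threshold unit can compute. That is exactly the kind of incremental "more neurons, more power" ladder that evolution can climb.

```python
# A single McCulloch-Pitts threshold neuron is a computational unit;
# wiring a few together computes strictly more (XOR is not linearly
# separable, so no single such neuron can compute it).

def neuron(weights, bias, inputs):
    # fire (output 1) iff the weighted sum plus bias is positive
    return int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

def AND(x, y):
    return neuron([1, 1], -1.5, [x, y])

def OR(x, y):
    return neuron([1, 1], -0.5, [x, y])

def XOR(x, y):
    # two-layer circuit: XOR(x, y) = OR(x, y) AND NOT AND(x, y)
    return neuron([1, -1], -0.5, [OR(x, y), AND(x, y)])

for x in (0, 1):
    for y in (0, 1):
        print(x, y, XOR(x, y))
# prints the XOR truth table: 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```

Each added neuron or connection is a small, selectable increment in computational power, which is the contrast being drawn with consciousness, where no analogous incremental mechanism is known.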

I don't know if intelligent zombies are possible. Maybe consciousness
necessarily supervenes on the stuff necessary for that level of
intelligence. But who knows where consciousness stops supervening? Maybe
stuff that is not biologically evolved is already conscious. Maybe stars
are conscious. Who knows? How could we know?


 I think you are going to have to do better than that if you want comp to
 be believed by anyone with any scientific knowledge.


Anyone with any scientific knowledge will be agnostic on comp. There is no
basis to believe it or disbelieve it. Maybe it is unknowable. What we can
do is investigate the consequences of assuming comp.


 You really are calling on dualism to explain consciousness -- the
 homunculus in the machine.


I am not trying to explain consciousness. I don't know what consciousness
is or how it originates. What I am claiming is that current science has
nothing to say about it either.

Telmo.




 Bruce





Re: What does the MGA accomplish?

2015-05-16 Thread Bruce Kellett

Telmo Menezes wrote:
On Sat, May 16, 2015 at 11:50 AM, Bruce Kellett 
bhkell...@optusnet.com.au mailto:bhkell...@optusnet.com.au wrote:


Telmo Menezes wrote:
On Sat, May 16, 2015 at 10:22 AM, Bruce Kellett

Are you seriously going to argue that homo sapiens did *not*
arise by a process of natural selection, aka evolution?

No, Darwinian evolution is my favourite scientific theory.

What I am arguing is that we don't know if consciousness is an
evolved trait. It is perfectly possible to imagine darwinian
evolution working without consciousness, even to the human
intelligence level (producing philosophical zombies).

For example, if consciousness is more fundamental than matter,
then evolution is something that happens within consciousness,
not a generator of it.


 That is probably the strongest argument against computationalism to
 date.

How so?


So you think that Darwinian evolution produced intelligent zombies, and 
then computationalism infused consciousness? I think you are going to 
have to do better than that if you want comp to be believed by anyone 
with any scientific knowledge. You really are calling on dualism to 
explain consciousness -- the homunculus in the machine.


Bruce



Re: What does the MGA accomplish?

2015-05-16 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 10:29 PM, Bruce Kellett wrote:


The AI that I envisage will probably be based on a learning program of 
some sort, that will have to learn in much the same way as an infant 
human learns. I doubt that we will ever be able to create an AI that 
is essentially an intelligent adult human when it is first turned on.


I agree with that, but once an AI is realized it will be possible to 
copy it.  And if it's digital it will be possible to implement it using 
different hardware.  If it's not digital, it will (in principle) be possible 
to implement it arbitrarily closely with a digital device.  And we will 
have the same question - what is it that makes that hardware device 
conscious?  I don't see any plausible answer except Running the program 
it instantiates.


But that does not imply that consciousness is itself a computation. 
There is not some subroutine in your AI that is labelled "this subroutine 
computes consciousness". Consciousness is a function of the whole 
functioning system, not of some particular feature. That is why I think 
identifying consciousness with computation is in fact adding some 
additional magic to the machine.


Consciousness arose in nature by a process of natural evolution. 
Proto-consciousness gave some evolutionary advantage, so it grew and 
developed. Nature did not at some point add the fact that it was a 
computation, and then it suddenly become conscious. Consciousness is a 
computation only in the trivial sense that any physical process can be 
regarded as a computation, or mapping taking some input to some output. 
There is not some special, magical class of computations that are unique 
to consciousness. Consciousness is an evolved bulk property, not just 
one specific feature of that bulk.


Bruce



Re: What does the MGA accomplish?

2015-05-16 Thread meekerdb

On 5/15/2015 11:14 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 10:29 PM, Bruce Kellett wrote:


The AI that I envisage will probably be based on a learning program of some sort, that 
will have to learn in much the same way as an infant human learns. I doubt that we 
will ever be able to create an AI that is essentially an intelligent adult human when 
it is first turned on.


I agree with that, but once an AI is realized it will be possible to copy it.  And if 
it's digital it will be possible to implement it using different hardware.  If it's not 
digital, it will (in principle) be possible to implement it arbitrarily closely with a 
digital device.  And we will have the same question - what is it that makes that hardware 
device conscious?  I don't see any plausible answer except "Running the program it 
instantiates."


But that does not imply that consciousness is itself a computation. 


I didn't draw that conclusion.  That's the conclusion Bruno wants to draw - or close to it 
(he talks about consciousness supervening on an infinite number of computational threads).


There is not some subroutine in your AI that is labelled "this subroutine computes 
consciousness." Consciousness is a function of the whole functioning system, not of some 
particular feature. 


Right.  And the system includes the environment that the consciousness refers 
to.

That is why I think identifying consciousness with computation is in fact adding some 
additional magic to the machine.


I don't see that it's adding anything, magic or otherwise.  As you say, any process can 
be regarded as a computation.




Consciousness arose in nature by a process of natural evolution. Proto-consciousness 
gave some evolutionary advantage, so it grew and developed. 


Yes, it seems like a natural extension of modeling one's surroundings as part of decision 
making, to add yourself into the model in order to imagine yourself making different choices.


Nature did not at some point add the fact that it was a computation, and then it 
suddenly became conscious. Consciousness is a computation only in the trivial sense that 
any physical process can be regarded as a computation, or mapping taking some input to 
some output. There is not some special, magical class of computations that are unique to 
consciousness. Consciousness is an evolved bulk property, not just one specific feature 
of that bulk.


But computation is also not just one specific feature of a process, it's a holistic 
concept.  So I disagree that there is not some special class of programs that implement 
consciousness; specifically those that model the device as part of its own decision 
processes.


Brent



Bruce





Re: What does the MGA accomplish?

2015-05-16 Thread Bruce Kellett

Telmo Menezes wrote:
On Sat, May 16, 2015 at 10:22 AM, Bruce Kellett bhkell...@optusnet.com.au wrote:


Telmo Menezes wrote:

On Sat, May 16, 2015 at 8:14 AM, Bruce Kellett bhkell...@optusnet.com.au wrote:

meekerdb wrote:

On 5/15/2015 10:29 PM, Bruce Kellett wrote:


The AI that I envisage will probably be based on a learning program of
some sort, that will have to learn in much the same way as an infant
human learns. I doubt that we will ever be able to create an AI that is
essentially an intelligent adult human when it is first turned on.


I agree with that, but once an AI is realized it will be possible to
copy it.  And if it's digital it will be possible to implement it using
different hardware.  If it's not digital, it will (in principle) be
possible to implement it arbitrarily closely with a digital device.  And
we will have the same question - what is it that makes that hardware
device conscious?  I don't see any plausible answer except "Running the
program it instantiates."


But that does not imply that consciousness is itself a computation.
There is not some subroutine in your AI that is labelled "this
subroutine computes consciousness." Consciousness is a function of the
whole functioning system, not of some particular feature. That is why I
think identifying consciousness with computation is in fact adding some
additional magic to the machine.


So you don't believe that performing the same computations that your
brain does in another substrate will produce a copy of your mind? If you
don't believe that, then you must believe in some unknown property of
matter (magic?). If you do, then you believe that consciousness
supervenes on computation.

Consciousness arose in nature by a process of natural evolution.

How do you know that?

Proto-consciousness gave some evolutionary advantage, so it grew and
developed.

How do you know that?

Nature did not at some point add the fact that it was a computation,
and then it suddenly became conscious.

Of course not. Nobody claims that.

Consciousness is a computation only in the trivial sense that any
physical process can be regarded as a computation, or mapping taking
some input to some output. There is not some special, magical class of
computations that are unique to consciousness. Consciousness is an
evolved bulk property, not just one specific feature of that bulk.

How do you know it's evolved?


Are you seriously going to argue that Homo sapiens did *not* arise
by a process of natural selection, aka evolution?


No, Darwinian evolution is my favourite scientific theory.

What I am arguing is that we don't know if consciousness is an evolved 
trait. It is perfectly possible to imagine Darwinian evolution working 
without consciousness, even to the human intelligence level (producing 
philosophical zombies).


For example, if consciousness is more fundamental than matter, then 
evolution is something that happens within consciousness, not a 
generator of it.


That is probably the strongest argument against computationalism to date.

Bruce



Re: What does the MGA accomplish?

2015-05-16 Thread Telmo Menezes
On Sat, May 16, 2015 at 8:14 AM, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 meekerdb wrote:

 On 5/15/2015 10:29 PM, Bruce Kellett wrote:


 The AI that I envisage will probably be based on a learning program of
 some sort, that will have to learn in much the same way as an infant human
 learns. I doubt that we will ever be able to create an AI that is
 essentially an intelligent adult human when it is first turned on.


 I agree with that, but once an AI is realized it will be possible to copy
 it.  And if it's digital it will be possible to implement it using
 different hardware.  If it's not digital, it will (in principle) be possible
 to implement it arbitrarily closely with a digital device.  And we will have
 the same question - what is it that makes that hardware device conscious?  I
 don't see any plausible answer except "Running the program it instantiates."


 But that does not imply that consciousness is itself a computation. There
 is not some subroutine in your AI that is labelled "this subroutine computes
 consciousness." Consciousness is a function of the whole functioning
 system, not of some particular feature. That is why I think identifying
 consciousness with computation is in fact adding some additional magic to
 the machine.


So you don't believe that performing the same computations that your brain
does in another substrate will produce a copy of your mind? If you don't
believe that, then you must believe in some unknown property of matter
(magic?). If you do, then you believe that consciousness supervenes on
computation.



 Consciousness arose in nature by a process of natural evolution.


How do you know that?


 Proto-consciousness gave some evolutionary advantage, so it grew and
 developed.


How do you know that?


 Nature did not at some point add the fact that it was a computation, and
 then it suddenly became conscious.


Of course not. Nobody claims that.


 Consciousness is a computation only in the trivial sense that any physical
 process can be regarded as a computation, or mapping taking some input to
 some output. There is not some special, magical class of computations that
 are unique to consciousness. Consciousness is an evolved bulk property, not
 just one specific feature of that bulk.


How do you know it's evolved?

Telmo.




 Bruce






Re: What does the MGA accomplish?

2015-05-16 Thread Bruce Kellett

Telmo Menezes wrote:



On Sat, May 16, 2015 at 8:14 AM, Bruce Kellett bhkell...@optusnet.com.au wrote:


meekerdb wrote:

On 5/15/2015 10:29 PM, Bruce Kellett wrote:


The AI that I envisage will probably be based on a learning
program of some sort, that will have to learn in much the
same way as an infant human learns. I doubt that we will
ever be able to create an AI that is essentially an
intelligent adult human when it is first turned on.


I agree with that, but once an AI is realized it will be
possible to copy it.  And if it's digital it will be possible to
implement it using different hardware.  If it's not digital, it
will (in principle) be possible to implement it arbitrarily closely
with a digital device.  And we will have the same question -
what is it that makes that hardware device conscious?  I don't see
any plausible answer except "Running the program it instantiates."


But that does not imply that consciousness is itself a computation.
There is not some subroutine in your AI that is labelled "this
subroutine computes consciousness." Consciousness is a function of
the whole functioning system, not of some particular feature. That
is why I think identifying consciousness with computation is in fact
adding some additional magic to the machine.


So you don't believe that performing the same computations that your 
brain does in another substrate will produce a copy of your mind? If you 
don't believe that, then you must believe in some unknown property of 
matter (magic?). If you do, then you believe that consciousness 
supervenes on computation. 
 
Consciousness arose in nature by a process of natural evolution.


How do you know that?
 
Proto-consciousness gave some evolutionary advantage, so it grew and

developed.

How do you know that?
 
Nature did not at some point add the fact that it was a computation,
and then it suddenly became conscious. 


Of course not. Nobody claims that.
 
Consciousness is a computation only in the trivial sense that any

physical process can be regarded as a computation, or mapping taking
some input to some output. There is not some special, magical class
of computations that are unique to consciousness. Consciousness is
an evolved bulk property, not just one specific feature of that bulk.

How do you know it's evolved?


Are you seriously going to argue that Homo sapiens did *not* arise by a 
process of natural selection, aka evolution?


Bruce



Re: What does the MGA accomplish?

2015-05-16 Thread Telmo Menezes
On Sat, May 16, 2015 at 10:22 AM, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Telmo Menezes wrote:



 On Sat, May 16, 2015 at 8:14 AM, Bruce Kellett bhkell...@optusnet.com.au wrote:

 meekerdb wrote:

 On 5/15/2015 10:29 PM, Bruce Kellett wrote:


 The AI that I envisage will probably be based on a learning
 program of some sort, that will have to learn in much the
 same way as an infant human learns. I doubt that we will
 ever be able to create an AI that is essentially an
 intelligent adult human when it is first turned on.


 I agree with that, but once an AI is realized it will be
 possible to copy it.  And if it's digital it will be possible to
 implement it using different hardware.  If it's not digital, it
 will (in principle) be possible to implement it arbitrarily closely
 with a digital device.  And we will have the same question -
 what is it that makes that hardware device conscious?  I don't see
 any plausible answer except "Running the program it instantiates."


 But that does not imply that consciousness is itself a computation.
 There is not some subroutine in your AI that is labelled "this
 subroutine computes consciousness." Consciousness is a function of
 the whole functioning system, not of some particular feature. That
 is why I think identifying consciousness with computation is in fact
 adding some additional magic to the machine.


 So you don't believe that performing the same computations that your
 brain does in another substrate will produce a copy of your mind? If you
 don't believe that, then you must believe in some unknown property of
 matter (magic?). If you do, then you believe that consciousness supervenes
 on computation.  Consciousness arose in nature by a process of natural
 evolution.

 How do you know that?
  Proto-consciousness gave some evolutionary advantage, so it grew and
 developed.

 How do you know that?
  Nature did not at some point add the fact that it was a computation,
 and then it suddenly became conscious.
 Of course not. Nobody claims that.
  Consciousness is a computation only in the trivial sense that any
 physical process can be regarded as a computation, or mapping taking
 some input to some output. There is not some special, magical class
 of computations that are unique to consciousness. Consciousness is
 an evolved bulk property, not just one specific feature of that bulk.

 How do you know it's evolved?


 Are you seriously going to argue that Homo sapiens did *not* arise by a
 process of natural selection, aka evolution?


No, Darwinian evolution is my favourite scientific theory.

What I am arguing is that we don't know if consciousness is an evolved
trait. It is perfectly possible to imagine Darwinian evolution working
without consciousness, even to the human intelligence level (producing
philosophical zombies).

For example, if consciousness is more fundamental than matter, then
evolution is something that happens within consciousness, not a generator
of it.

Telmo.




 Bruce





Re: What does the MGA accomplish?

2015-05-16 Thread Telmo Menezes
On Sat, May 16, 2015 at 11:50 AM, Bruce Kellett bhkell...@optusnet.com.au
wrote:

 Telmo Menezes wrote:

 On Sat, May 16, 2015 at 10:22 AM, Bruce Kellett bhkell...@optusnet.com.au wrote:

 Telmo Menezes wrote:

 On Sat, May 16, 2015 at 8:14 AM, Bruce Kellett bhkell...@optusnet.com.au wrote:

 meekerdb wrote:

 On 5/15/2015 10:29 PM, Bruce Kellett wrote:


 The AI that I envisage will probably be based on a learning
 program of some sort, that will have to learn in much the same
 way as an infant human learns. I doubt that we will ever be able
 to create an AI that is essentially an intelligent adult human
 when it is first turned on.


 I agree with that, but once an AI is realized it will be possible
 to copy it.  And if it's digital it will be possible to implement
 it using different hardware.  If it's not digital, it will (in
 principle) be possible to implement it arbitrarily closely with a
 digital device.  And we will have the same question - what is it
 that makes that hardware device conscious?  I don't see any
 plausible answer except "Running the program it instantiates."


 But that does not imply that consciousness is itself a
 computation. There is not some subroutine in your AI that is
 labelled "this subroutine computes consciousness." Consciousness
 is a function of the whole functioning system, not of some
 particular feature. That is why I think identifying consciousness
 with computation is in fact adding some additional magic to the
 machine.


 So you don't believe that performing the same computations that
 your brain does in another substrate will produce a copy of your
 mind? If you don't believe that, then you must believe in some
 unknown property of matter (magic?). If you do, then you believe
 that consciousness supervenes on computation.

 Consciousness arose in nature by a process of natural evolution.

 How do you know that?

 Proto-consciousness gave some evolutionary advantage, so it grew
 and developed.

 How do you know that?

 Nature did not at some point add the fact that it was a
 computation, and then it suddenly became conscious.

 Of course not. Nobody claims that.

 Consciousness is a computation only in the trivial sense that any
 physical process can be regarded as a computation, or mapping
 taking some input to some output. There is not some special,
 magical class of computations that are unique to consciousness.
 Consciousness is an evolved bulk property, not just one specific
 feature of that bulk.

 How do you know it's evolved?


 Are you seriously going to argue that Homo sapiens did *not* arise
 by a process of natural selection, aka evolution?


 No, Darwinian evolution is my favourite scientific theory.

 What I am arguing is that we don't know if consciousness is an evolved
 trait. It is perfectly possible to imagine Darwinian evolution working
 without consciousness, even to the human intelligence level (producing
 philosophical zombies).

 For example, if consciousness is more fundamental than matter, then
 evolution is something that happens within consciousness, not a generator
 of it.


 That is probably the strongest argument against computationalism to date.


How so?

Telmo.




 Bruce





Re: What does the MGA accomplish?

2015-05-16 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 11:14 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 10:29 PM, Bruce Kellett wrote:


The AI that I envisage will probably be based on a learning program 
of some sort, that will have to learn in much the same way as an 
infant human learns. I doubt that we will ever be able to create an 
AI that is essentially an intelligent adult human when it is first 
turned on.


I agree with that, but once an AI is realized it will be possible to 
copy it.  And if it's digital it will be possible to implement it 
using different hardware.  If it's not digital, it will (in 
principle) be possible to implement it arbitrarily closely with a digital 
device.  And we will have the same question - what is it that makes that 
hardware device conscious?  I don't see any plausible answer except 
"Running the program it instantiates."


But that does not imply that consciousness is itself a computation. 


I didn't draw that conclusion.  That's the conclusion Bruno wants to 
draw - or close to it (he talks about consciousness supervening on an 
infinite number of computational threads).


There is not some subroutine in your AI that is labelled "this 
subroutine computes consciousness." Consciousness is a function of the 
whole functioning system, not of some particular feature. 


Right.  And the system includes the environment that the consciousness 
refers to.


That is why I think identifying consciousness with computation is in 
fact adding some additional magic to the machine.


I don't see that it's adding anything, magic or otherwise.  As you say, 
any process can be regarded as a computation.




Consciousness arose in nature by a process of natural evolution. 
Proto-consciousness gave some evolutionary advantage, so it grew and 
developed. 


Yes, it seems like a natural extension of modeling one's surroundings as 
part of decision making, to add yourself into the model in order to 
imagine yourself making different choices.


Nature did not at some point add the fact that it was a computation, 
and then it suddenly became conscious. Consciousness is a computation 
only in the trivial sense that any physical process can be regarded as 
a computation, or mapping taking some input to some output. There is 
not some special, magical class of computations that are unique to 
consciousness. Consciousness is an evolved bulk property, not just one 
specific feature of that bulk.


But computation is also not just one specific feature of a process, it's 
a holistic concept.  So I disagree that there is not some special class 
of programs that implement consciousness; specifically those that model 
the device as part of its own decision processes.


The dovetailer does not implement consciousness in any particular 
program. The way the dovetailer runs, it never executes any single 
program sequentially; it is always looping back to compute the next 
steps of all preceding programs. So the computation that implements 
consciousness, if that is what it is, is not a particular program -- it 
is found in a sequence of states that arise by chance indefinitely 
often. My question is: what is the magic that makes that set of 
computational steps conscious, whereas all others are not conscious? I 
don't think that computationalism even begins to explain consciousness.
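[The interleaving being described can be sketched as a toy dovetailer -- a minimal illustration in Python with invented names (`dovetail`, `states`), not Bruno's actual UD: in phase n it admits one more program and then runs a single step of every program admitted so far, so consecutive steps of any one program are separated by ever longer delays.]

```python
from itertools import count

def dovetail(programs, phases):
    """Toy dovetailer: each phase admits one more program, then runs
    one step of every program admitted so far.  No program is run to
    completion, yet each one eventually gets arbitrarily many steps."""
    admitted = []
    trace = []
    progs = iter(programs)
    for _ in range(phases):
        try:
            admitted.append(next(progs))   # bring one more program in
        except StopIteration:
            pass                           # no programs left to admit
        for i, prog in enumerate(admitted):
            trace.append((i, next(prog)))  # one step of program i
    return trace

# Two toy "programs", modelled as generators of successive states.
def states(name):
    for k in count():
        yield f"{name}{k}"

trace = dovetail([states("a"), states("b")], phases=3)
# Consecutive steps of the same program are separated by steps of all
# the others -- the dovetailing delays under discussion here.
```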


Whereas an evolutionary account does all that is necessary -- 
proto-consciousness develops over evolutionary time to become more 
efficient, and eventually to develop a self image. As you say, the 
proto-consciousness added itself to its model of the surroundings, and 
with that became a lot more efficient at finding food and evading 
predators. Language and an internal narrative added further layers of 
efficiency and effectiveness.


I find such a naturalist account, which sees consciousness as a 
whole-of-brain function, a lot more convincing than the computationalist 
account -- which doesn't seem to me to even get off first base.


Bruce



Re: What does the MGA accomplish?

2015-05-16 Thread spudboy100 via Everything List
Very, well. It's usually vague when science writers talk about wave-particle 
duality. It always seems they use the photon, and never the electron in such 
tests. Thanks.

-Original Message-
From: Bruce Kellett bhkell...@optusnet.com.au
To: everything-list everything-list@googlegroups.com
Sent: Thu, May 14, 2015 10:26 pm
Subject: Re: What does the MGA accomplish?


spudboy100 via Everything List wrote:
 Photons can re-combine? So they are unlike electrons or positrons,
 which, like a magnet, repel like charges.

Electrons can recombine too. Just think of the two-slit experiment with
electrons -- we see only one spot on the screen. It is all part of the
meaning of a superposition in quantum mechanics.

Bruce


 



Re: What does the MGA accomplish?

2015-05-16 Thread LizR
On 16 May 2015 at 08:56, meekerdb meeke...@verizon.net wrote:

 On 5/14/2015 7:24 PM, Bruce Kellett wrote:

 LizR wrote:

 On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net wrote:

 I'm trying to understand what counterfactual correctness means in
 the physical thought experiments.

 You and me both.


 Yes. When you think about it, 'counterfactual' means that the antecedent
 is false. So Bruno's referring to the branching 'if A then B else C'
 construction of a program is not really a counterfactual at all, since to
 be a counterfactual A *must* be false. So the counterfactual construction
 is 'A then C', where A happens to be false.

 The role of this in consciousness escapes me too.


 It comes in at the very beginning of his argument, but it's never made
 explicit.  In the beginning when one is asked to accept a digital
 prosthesis for a brain part, Bruno says almost everyone agrees that
 consciousness is realized by a certain class of computations.  The
 alternative, as suggested by Searle for example, that consciousness depends
 not only of the activity of the brain but also what the physical material
 is, seems like invoking magic.  So we agree that consciousness depends on
 the program that's running, not the hardware it's running on.  And implicit
 in this is that this program implements intelligence, the ability to
 respond differently to different externals signals/environment.  Bruno says
 that's what is meant by computation, but whether that's entailed by the
 word or not seems like a semantic quibble.  Whatever you call it, it's
 implicit in the idea of digital brain prosthesis and in the idea of strong
 AI that the program instantiating consciousness must be able to respond
 differently to different inputs.

 But it doesn't have respond differently to every different input or to all
 logically possible inputs.  It only needs to be able to respond to inputs
 within some range as might occur in its environment - whether that
 environment is a whole world or just the other parts of the brain.  So the
 digital prosthesis needs to do this with that same functionality over the
 same domain as the brain parts it replaced.  In which case it is
 counterfactually correct. Right?  It's a concept relative to a limited
 domain.


Thanks, I see the point now - that the programme must be capable of
responding to a certain range of inputs seems fair enough - consciousness
responds to its surroundings, but has difficulty with novel inputs.

(I don't see how this affects the MGA, however, which limits the
computation in question to a re-run with the same inputs. Under those very
specific, very limited circumstances, the computation can only follow the
same path that it did previously.)
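[For what it's worth, the distinction can be sketched in a few lines -- a toy illustration in Python, where the names and the two-stimulus "brain part" are invented: a counterfactually correct component maps every input in its domain to the right response, while a recording of one run only reproduces the outputs that actually occurred.]

```python
def component(stimulus):
    """A counterfactually correct (toy) brain part: it maps every
    stimulus in its domain to a response, including stimuli that
    never actually arrive on a given run."""
    return "withdraw" if stimulus == "pain" else "proceed"

# A film/recording of one particular run replays outputs by position;
# it ignores the input entirely.
recorded = ["proceed", "proceed", "withdraw"]

def replay(step, stimulus):
    return recorded[step]

actual_inputs = ["touch", "touch", "pain"]

# On the recorded run the two agree step by step...
assert all(component(s) == replay(i, s)
           for i, s in enumerate(actual_inputs))

# ...but only the component responds correctly to a counterfactual input.
assert component("pain") == "withdraw"
assert replay(0, "pain") == "proceed"   # the recording gets it wrong
```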



Re: What does the MGA accomplish?

2015-05-16 Thread Bruce Kellett

Telmo Menezes wrote:
On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett 


So you think that Darwinian evolution produced intelligent zombies,
and then computationalism infused consciousness?

No. What I am saying is that consciousness is not a plausible target for 
gradual evolution for the following reasons:


1) There is no evolutionary advantage to it, intelligent zombies could 
do equally well. Every single behaviour that each one of us has, as seen 
from the outside, could be performed by intelligent zombies;


Do you find it an advantage to be conscious in your everyday life? Do 
you really think that your partner and/or children are zombies?


2) There is no known mechanism of consciousness generation that can be 
climbed. For example, we understand how neurons are computational units, 
how connecting neurons creates a computer, how more neurons and more 
connections create a more powerful computer and so on. Evolution can 
climb this stuff. There is no equivalent known mechanism for consciousness.


This is the tired old creationist crap saying that the eye is too 
complicated to be explained by evolution; the bacterium's flagellum is 
too complicated...; the ... is too complicated ...


At bottom, it is just an argument from ignorance. You do not happen to 
know a mechanism whereby consciousness could develop from simpler forms. 
But that does not in any way mean that such is not possible. Creationist 
anti-intellectualism yet again



I don't know if intelligent zombies are possible. Maybe consciousness 
necessarily supervenes on the stuff necessary for that level of 
intelligence. But who knows where consciousness stops supervening? Maybe 
stuff that is not biologically evolved is already conscious. Maybe stars 
are conscious. Who knows? How could we know?


What we can know, by scientific investigation, is that all known life 
forms evolved from simpler forms by processes generally described under 
the heading of Darwinian evolution. Consciousness is a feature of many 
living creatures. If you want to argue that consciousness is something 
outside the normal evolutionary process, then you have embraced an 
irreducibly dualist position.


I can see that computationalism might well have difficulties 
accommodating a gradual evolutionary understanding of almost anything -- 
after all, the dovetailer is there in Platonia before anything physical 
ever appears. So how can consciousness evolve gradually?


This, it seems to me, is just another strike against comp -- it does not 
fit with the scientific data.


Bruce



Re: What does the MGA accomplish?

2015-05-16 Thread meekerdb

On 5/16/2015 4:44 PM, Bruce Kellett wrote:

Telmo Menezes wrote:

On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett
So you think that Darwinian evolution produced intelligent zombies,
and then computationalism infused consciousness?

No. What I am saying is that consciousness is not a plausible target for gradual 
evolution for the following reasons:


1) There is no evolutionary advantage to it; intelligent zombies could do equally well. 
Every single behaviour that each one of us has, as seen from the outside, could be 
performed by intelligent zombies;


Do you find it an advantage to be conscious in your everyday life? Do you really think 
that your partner and/or children are zombies?


I think the obvious inference is that one way to gain intelligence is to become 
conscious.  But being conscious isn't just one all-or-nothing ability.  There's being 
aware of the environment...being aware of oneself...being able to model oneself in an 
environment, including a social environment in which you model other beings as aware and 
similar to yourself.  I think there can be different kinds of awareness just as there are 
different modes of perception.  Whether intelligence at the human level could be realized 
without the kind of self-modeling implicit in consciousness seems doubtful, but it might 
be possible in some radically different kind of AI.  That wouldn't necessarily mean it 
wasn't conscious, but only that it might be conscious=self-aware in such a different way 
we'd have a hard time recognizing it.  For example, most of our memories are 
reconstructions as opposed to faithful recordings.  If there were an AI that had 
enormously greater memory capacity, it might make a lot more intelligent decisions just by 
repeating some remembered action (Deep Blue played chess sort of like this as compared to a 
human player).




2) There is no known mechanism of conscious generation that can be climbed. 


If consciousness is just a matter of self-modeling, then it is an advantage that would 
show up in intelligence of decisions and natural selection could act on it.


For example, we understand how neurons are computational units, how connecting neurons 
creates a computer, how more neurons and more connections create a more powerful 
computer and so on. Evolution can climb this stuff. There is no equivalent known 
mechanism for consciousness.


This is the tired old creationist crap saying that the eye is too complicated to be 
explained by evolution; the bacterium's flagellum is too complicated...; the .. is 
too complicated .


At bottom, it is just an argument from ignorance. You do not happen to know a mechanism 
whereby consciousness could develop from simpler forms. But that does not in any way 
mean that such is not possible. Creationist anti-intellectualism yet again



I don't know if intelligent zombies are possible. Maybe consciousness necessarily 
supervenes on the stuff necessary for that level of intelligence. But who knows where 
consciousness stops supervening? Maybe stuff that is not biologically evolved is 
already conscious. Maybe stars are conscious. Who knows? How could we know?


I see it as an engineering question.  When we can reliably create AIs that act 
intelligently and appear as conscious as other human beings and we can create AIs that 
have more or less humor, guilt, ego,... Then we will have understood it.


Brent




What we can know, by scientific investigation, is that all known life forms evolved from 
simpler forms by processes generally described under the heading of Darwinian evolution. 
Consciousness is a feature of many living creatures. If you want to argue that 
consciousness is something outside the normal evolutionary process, then you have 
embraced an irreducibly dualist position.


I can see that computationalism might well have difficulties accommodating a gradual 
evolutionary understanding of almost anything -- after all, the dovetailer is there in 
Platonia before anything physical ever appears. So how can consciousness evolve gradually?


This, it seems to me, is just another strike against comp -- it does not fit with the 
scientific data.


Bruce





Re: What does the MGA accomplish?

2015-05-16 Thread Bruce Kellett

meekerdb wrote:

On 5/16/2015 4:44 PM, Bruce Kellett wrote:

Telmo Menezes wrote:

On Sat, May 16, 2015 at 2:48 PM, Bruce Kellett
So you think that Darwinian evolution produced intelligent zombies,
and then computationalism infused consciousness?

No. What I am saying is that consciousness is not a plausible target 
for gradual evolution for the following reasons:


1) There is no evolutionary advantage to it; intelligent zombies 
could do equally well. Every single behaviour that each one of us 
has, as seen from the outside, could be performed by intelligent zombies;


Do you find it an advantage to be conscious in your everyday life? Do 
you really think that your partner and/or children are zombies?


I think the obvious inference is that one way to gain intelligence is to 
become conscious.  But being conscious isn't just one all-or-nothing 
ability.  There's being aware of the environment...being aware of 
oneself...being able to model oneself in an environment, including a 
social environment in which you model other beings as aware and similar 
to yourself.  I think there can be different kinds of awareness just as 
there are different modes of perception.  Whether intelligence at the 
human level could be realized without the kind of self-modeling implicit 
in consciousness seems doubtful, but it might be possible in some 
radically different kind of AI.  That wouldn't necessarily mean it 
wasn't conscious, but only that it might be conscious=self-aware in such 
a different way we'd have a hard time recognizing it.  For example, most 
of our memories are reconstructions as opposed to faithful recordings.  
If there were an AI that had enormously greater memory capacity, it 
might make a lot more intelligent decisions just by repeating some 
remembered action (Deep Blue played chess sort of like this as compared 
to a human player).


Yes. If you take an evolutionary perspective there are clearly degrees 
of consciousness and degrees of intelligence. I think you are right that 
full human intelligence without consciousness, the inner narrative, 
and subconscious activity, is essentially impossible. I don't find the 
idea of philosophical zombies very likely. Also, things that make for 
'intelligent' behaviour in humans are often no more than heuristics, 
rough rules of thumb. But if one is to reflect on these heuristics, and 
improve them according to experience, one must be conscious, and have 
that inner narrative, emotions, values, goals, etc. Zombies wouldn't 
have any of this, so would not appear adaptable or intelligent to us.



2) There is no known mechanism of conscious generation that can be 
climbed. 


If consciousness is just a matter of self-modeling, then it is an 
advantage that would show up in intelligence of decisions and natural 
selection could act on it.


For example, we understand how neurons are computational units, how 
connecting neurons creates a computer, how more neurons and more 
connections create a more powerful computer and so on. Evolution can 
climb this stuff. There is no equivalent known mechanism for 
consciousness.


This is the tired old creationist crap saying that the eye is too 
complicated to be explained by evolution; the bacterium's flagellum is 
too complicated...; the .. is too complicated .


At bottom, it is just an argument from ignorance. You do not happen to 
know a mechanism whereby consciousness could develop from simpler 
forms. But that does not in any way mean that such is not possible. 
Creationist anti-intellectualism yet again



I don't know if intelligent zombies are possible. Maybe consciousness 
necessarily supervenes on the stuff necessary for that level of 
intelligence. But who knows where consciousness stops supervening? 
Maybe stuff that is not biologically evolved is already conscious. 
Maybe stars are conscious. Who knows? How could we know?


I see it as an engineering question.  When we can reliably create AIs 
that act intelligently and appear as conscious as other human beings and 
we can create AIs that have more or less humor, guilt, ego,... Then we 
will have understood it.


I agree. The only way we will ever finally understand intelligence and 
consciousness is to build it -- the engineering problem.


Bruce



Re: What does the MGA accomplish?

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 20:22, meekerdb wrote:


On 5/14/2015 9:19 AM, Bruno Marchal wrote:


On 14 May 2015, at 02:50, Russell Standish wrote:


On Wed, May 13, 2015 at 02:33:06PM +0200, Bruno Marchal wrote:





3. A recording of (2) supra being played back.


Nobody would call that a computation, except to evade comp's
consequences.



I do, because it is a computation, albeit a rather trivial one.


Yes, like a rock, in some theory of rock. It is not relevant for  
the argument.





It is
not to evade comp's consequences, however, which I already accept  
from

UDA1-7.


OK.





I insist on the point, because the MGA is about deriving an
inconsistency between computational and physical supervenience,  
which

requires care and rigour to demonstrate, not careless mislabelling.


If you agree with the consequences from UDA1-7, then you don't need  
step 8 (MGA) to understand the epistemological inconsistency  
between computational supervenience and the primitive-physical  
supervenience (often assumed implicitly by the Aristotelians).


So, I see where your problem comes from: you might believe that  
step 8 shows that physical supervenience is wrong (not just the  
primitive one). But that would be astonishing, because physical  
supervenience seems to me to be contained in the definition of  
comp, which refers to a doctor with a physical body, who will  
reinstall my mind in a digital and physical machine.


Step 8 just shows an epistemological contradiction between comp and  
a primitive or physicalist notion of matter.


How can it do that when it never mentions a physicalist notion of  
matter?  It only invokes ordinary experience and ideas of matter,  
without assuming anything about whether they are fundamental.




The contradiction is epistemological. It dissociates what we  
observe from that primitive matter, unless you introduce a magical  
clairvoyance ability in Olympia (feeling the inactive Klara nearby).









Whether (3) preserving consciousness is absurd or not (and I agree
with Russell that's too much of a stretch of intuition to judge);


There is no stretch of intuition; the MGA shows that you need to put
magic in the primitive matter to make it play a role in the
consciousness of physical events.



Where does the MGA show this? I don't believe you use the word  
magic

in any of your papers on the MGA.


Good point (if true; no time to verify, but it seems the idea  
is there). I will use magic in some future publication. At some  
point, when we apply logic to reality, we have to invoke the magic,  
as with magic you can always suggest a theory is wrong: the Earth is  
flat, it is just the photons which have very weird trajectories ...


I agree that I take for granted that in science we don't make any  
ontological commitment, so there is no proof at all about reality.


Then why do you say that supervenience of consciousness on physics  
has something to do with assuming physics is based on ur-stuff?


Just that comp1 -> comp2; that is, physics, assuming comp1, is not  
the fundamental science, as it makes consciousness supervene on  
all computations in the sigma_1 reality, but with the FPI, not  
through particular possible emulations, although they have to be  
justified above the substitution level.


It was also clearly intended that primitive-physical supervenience  
entails that the movie supports the same conscious experience  
as the one supported by the boolean graph. Indeed the  
point of Maudlin is that we can eliminate almost all physical  
activity while keeping the counterfactual correctness (by the inert  
Klara), making the primitive-supervenience thesis (aristotelianism,  
physicalism) more absurd.






Sorry, but this does seem a rhetorical comment.


Who would have thought that?

I think you might underestimate the easiness of step 8, which  
addresses only people who believe that there is a substantially real  
*primitive* physical universe (whose existence we have to assume as  
an axiom in the fundamental TOE), and that it is the explanation  
of why we exist, have minds and are conscious.


That consciousness supervenes on the physical that we might extract  
from comp is indeed what would follow if the physical does what  
it has to do: give the right measure on the relative  
computational histories.


It is for those who, like Peter Jones, and perhaps Brent and Bruce,  
at step 7 say that the UD needs to be executed in a primitive  
physical universe (to get the measure problem), with the intent to  
save physicalism.


I don't see anything in the MGA that makes it specific to a  
*primitive* physics.  It just refers to ordinary physical  
realizations of computations, and so whatever it concludes applies  
to ordinary physics.  And ordinary physics doesn't depend on some  
assumption of primitive materialism - as evidenced by physicists like  
Wheeler and Tegmark who speculate about what makes the equations work.



There is no 

Re: What does the MGA accomplish?

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 20:44, meekerdb wrote:


On 5/14/2015 10:33 AM, Bruno Marchal wrote:
Then the math confirms this, as proved by the Löbian universal  
machine itself: on the sigma_1 p, the first person variants of  
the 3p G ([]p), that is []p & p, []p & <>t, and []p & <>t & p,  
provide a quantization (namely [i]<i>p, with [i] being the  
corresponding modality of the variants).


Went over my head.  Can you expand on that?


What can an ideally correct machine prove about itself, at the right  
level by construction? This can be handled by using the self  
provided by the second recursion theorem, or by the little song: if D'x'  
gives 'x'x'', then D'D' gives 'D'D''. The song is sung by some universal  
system understanding the notation.
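The diagonal "song" can be sketched in a few lines of code (my own toy illustration, not Bruno's formal D): define D so that, applied to the quotation of an expression, it yields that expression applied to its own quotation; fed its own name, it reproduces itself.

```python
# A toy version of the diagonal "song" (illustrative sketch, not a formal D):
# D applied to the quotation of an expression yields that expression applied
# to its own quotation, so D fed its own name reproduces itself.

def D(x: str) -> str:
    """Return the text of expression x applied to its own quotation."""
    return f"{x}({x!r})"

expr = D("D")   # the text  D('D')
print(expr)     # D('D')

# Evaluating the expression yields the expression itself: a syntactic
# fixed point, the mechanism behind the second recursion theorem.
assert eval(expr) == expr
```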


The math here gives the Beweisbar predicate of Gödel, for the finite  
entities believing in enough induction axioms, which I call the Löbian  
numbers or the Löbian combinators (depending on whether there is an r  
in the month).


If you interpret the propositional variables (p, q, r, ...) by  
arithmetical propositions, and the modal box []p by beweisbar(code of  
p in the language of the machine), Gödel's and Löb's theorems prove the  
soundness of the formulas ~[]f -> ~[]~[]f and []([]A->A)->[]A, etc.  
Indeed Solovay's theorem proves that G characterizes what  
the machine can prove about its provability and consistency abilities,  
and G* describes what is true about them.


If you generously define a mystic as any entity interested in its  
self, then G is the abstract mystical science, and G* is the abstract  
mystical truth.


Gödel already saw that G is not a logic of knowledge, like T  
([]p -> p), or S4 ([]p -> p, []p -> [][]p).


This means that, contrarily to the intuition of some mathematicians  
and scientists, formal provability is of the type belief, not  
knowledge. But this gives the opportunity to define knowledge by using  
Theaetetus' idea: [k]p = [g]p & p, with [g] being the usual beweisbar []  
of Gödel. This does fit with Tarski's mathematical analysis of truth,  
where "il pleut" ("it is raining") is true when it rains.


This leads to a (meta) definition of a knower, indeed axiomatized  
soundly and completely by the modal logic S4Grz. Grz is for Grzegorczyk,  
who discovered an equivalent formula in the context of the topological  
interpretation of intuitionistic logic. Indeed S4Grz provides an  
arithmetical (self-referential) interpretation of intuitionistic logic  
(which the first person will be, from her own perspective).


But we want a probability measure of those things accessible by the UD.

G has a Kripke semantics. In particular, there are cul-de-sac  
worlds everywhere, and they do contain sorts of white rabbits, as []f  
is true in those cul-de-sac worlds. To get a probability, we need to  
have the D axiom ([]p -> <>p).
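The cul-de-sac point can be checked mechanically. Below is a minimal Kripke-model evaluator (an illustrative sketch of my own, with made-up world names): in a world with no accessible successors, []f holds vacuously and <>t fails, so the D axiom []p -> <>p fails there.

```python
# Minimal Kripke-model evaluator (illustrative sketch, made-up world names).
# A formula is represented as a function from worlds to truth values;
# []f quantifies over all R-successors, <>f over some R-successor.
from typing import Callable, Dict, Set

Formula = Callable[[str], bool]

def box(R: Dict[str, Set[str]], f: Formula) -> Formula:
    # []f holds at w iff f holds in every world accessible from w
    return lambda w: all(f(v) for v in R.get(w, set()))

def dia(R: Dict[str, Set[str]], f: Formula) -> Formula:
    # <>f holds at w iff f holds in some world accessible from w
    return lambda w: any(f(v) for v in R.get(w, set()))

falsum: Formula = lambda w: False  # f
verum: Formula = lambda w: True    # t

# world "a" sees "b"; "b" is a cul-de-sac (no successors)
R = {"a": {"b"}, "b": set()}

print(box(R, falsum)("b"))  # True:  []f holds vacuously in the dead end
print(dia(R, verum)("b"))   # False: <>t fails there, so []p -> <>p fails
print(dia(R, verum)("a"))   # True:  "a" does have an accessible world
```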


What about the measure one? It is simpler to extract it than a measure  
different from one. Recall what I asked John Clark ten times: you  
are in Helsinki (so you are PA + "I am in Helsinki", say), and you will  
be duplicated and reconstituted in Washington and Moscow, and you are  
told that both reconstitutions will be offered a coffee cup. We want  
to say that []A would do, as by completeness it entails truth in all  
models, in particular truth in all consistent extensions (PA + "I was  
in Helsinki" + "I am in Moscow" + "I got a cup of coffee"), and (PA + "I  
was in Helsinki" + "I am in Washington" + "I got a cup of coffee").


But [] does not intensionally act like that, and D is false, so to get  
it you have to add explicitly that there is a consistent extension:  
[b]p = []p & <>t. (In Kripke semantics, <>t means that there is a  
world in which t is true; as t is true in all worlds, this amounts to  
saying that there is an accessible world. It is a default hypothesis  
(an instinct) made explicit.) The b of [b]p is for bet.


To get physics, through comp, you have to restrict the local  
continuations to the UD's work, or to the sigma_1 reality.


So the propositional logic of physics (the logic of yes-no experiments)  
must be given by the logic of [b]p, with p a sigma_1 sentence.


Then it happens that quantum logicians already have a nice  
representation of quantum logic in modal logic: roughly, a modal  
logic known as B (with main axioms []p -> p and p -> []<>p) interprets  
quantum logic through a quantization of the classical propositions  
([]<>A; that is, B proves []<>p for an atomic proposition p when and  
only when quantum logic proves p).
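The tie between the B axiom and symmetric accessibility can be verified by brute force (an illustrative sketch of my own, not part of the original discussion): on every frame over three worlds whose accessibility relation is symmetric, p -> []<>p holds at every world under every valuation.

```python
# Brute-force check (illustrative sketch): on every frame with a symmetric
# accessibility relation over three worlds, the B axiom p -> []<>p holds
# at every world under every valuation of p.
from itertools import product

worlds = [0, 1, 2]
pairs = [(i, j) for i in worlds for j in worlds]

def b_axiom_holds(R, val):
    # p -> []<>p: wherever p is true, every successor can see some p-world
    return all(
        all(any(val[u] for u in R[v]) for v in R[w])
        for w in worlds if val[w]
    )

checked = 0
for bits in product([False, True], repeat=len(pairs)):
    edges = {e for e, b in zip(pairs, bits) if b}
    if any((j, i) not in edges for (i, j) in edges):
        continue  # keep only symmetric relations
    R = {w: {v for v in worlds if (w, v) in edges} for w in worlds}
    for vb in product([False, True], repeat=len(worlds)):
        val = dict(zip(worlds, vb))
        assert b_axiom_holds(R, val)
        checked += 1

print(f"B axiom verified on {checked} frame/valuation pairs")
```

The check goes through because symmetry puts w itself among the worlds each successor of w can see, so a true p at w is always witnessed.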


Now, the three logics S4Grz1 (= S4Grz restricted to the sigma_1  
sentences), Z1* (the part of Z restricted to sigma_1 and proved by G*,  
when translated), and X1* provide such a quantization, and  
correspondingly different quantum logics.


Nobody complained that I got three quantum logics. But then, as van  
Fraassen said, there is a labyrinth of quantum logics, and here comp  
provides a sort of étalon (a reference standard).


All these logics can be proved to be emulated/represented by the  
decidable logic G, so they are all decidable, and, with the exception  
of the X logics ([]p  

Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net wrote:


I'm trying to understand what counterfactual correctness means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the 
antecedent is false. So Bruno's referring to the branching 'if A then 
B else C' construction of a program is not really a counterfactual at 
all, since to be a counterfactual A *must* be false. So the 
counterfactual construction is 'A then C', where A happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never made 
explicit.  In the beginning when one is asked to accept a digital 
prosthesis for a brain part, Bruno says almost everyone agrees that 
consciousness is realized by a certain class of computations.  The 
alternative, as suggested by Searle for example, that consciousness 
depends not only on the activity of the brain but also on what the 
physical material is, seems like invoking magic.  So we agree that 
consciousness depends on the program that's running, not the hardware 
it's running on.  And implicit in this is that this program implements 
intelligence, the ability to respond differently to different external 
signals/environment.  Bruno says that's what is meant by computation, 
but whether that's entailed by the word or not seems like a semantic 
quibble.  Whatever you call it, it's implicit in the idea of digital 
brain prosthesis and in the idea of strong AI that the program 
instantiating consciousness must be able to respond differently to 
different inputs.


But it doesn't have to respond differently to every different input or to 
all logically possible inputs.  It only needs to be able to respond to 
inputs within some range as might occur in its environment - whether 
that environment is a whole world or just the other parts of the brain.  
So the digital prosthesis needs to do this with that same functionality 
over the same domain as the brain parts it replaced.  In which case it 
is counterfactually correct. Right?  It's a concept relative to a 
limited domain.


That is probably right. But that just means that the prosthesis is 
functionally equivalent over the required domain. To call this 
'counterfactual correctness' seems to me to be just confused.


Bruce







Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net wrote:


I'm trying to understand what counterfactual correctness 
means in

the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the 
antecedent is false. So Bruno's referring to the branching 'if A 
then B else C' construction of a program is not really a 
counterfactual at all, since to be a counterfactual A *must* be 
false. So the counterfactual construction is 'A then C', where A 
happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never 
made explicit.  In the beginning when one is asked to accept a 
digital prosthesis for a brain part, Bruno says almost everyone 
agrees that consciousness is realized by a certain class of 
computations.  The alternative, as suggested by Searle for example, 
that consciousness depends not only on the activity of the brain but 
also on what the physical material is, seems like invoking magic.  So we 
agree that consciousness depends on the program that's running, not 
the hardware it's running on.  And implicit in this is that this 
program implements intelligence, the ability to respond differently 
to different external signals/environment.  Bruno says that's what 
is meant by computation, but whether that's entailed by the word or 
not seems like a semantic quibble.  Whatever you call it, it's 
implicit in the idea of digital brain prosthesis and in the idea of 
strong AI that the program instantiating consciousness must be able 
to respond differently to different inputs.


But it doesn't have to respond differently to every different input or 
to all logically possible inputs.  It only needs to be able to 
respond to inputs within some range as might occur in its environment 
- whether that environment is a whole world or just the other parts 
of the brain.  So the digital prosthesis needs to do this with that 
same functionality over the same domain as the brain parts it 
replaced.  In which case it is counterfactually correct. Right?  
It's a concept relative to a limited domain.


That is probably right. But that just means that the prosthesis is 
functionally equivalent over the required domain. To call this 
'counterfactual correctness' seems to me to be just confused.


What makes the consciousness, in Bruno's view, is that it's the right 
kind of program being run - which seems fairly uncontroversial.  And 
part of being the right kind is that it is counterfactually correct = 
functionally equivalent at the software level.  Of course this also 
means it correctly interfaces physically with the rest of the world of 
which it is conscious.  But Bruno minimizes this by two moves. First, he 
considers the brain as dreaming so it is not interacting via 
perceptions.  I objected to this as missing the essential fact that the 
processes in the brain refer to perceptions and other concepts learned 
in its waking state and this is what gives them meaning.  Second, Bruno 
notes that one can just expand the digital prosthesis to include a 
digital artificial world, including even a simulation of a whole 
universe. To which my attitude is that this makes the concept of 
prosthesis and artificial moot.


I don't think you would consider just *any* piece of software running to 
be conscious, and I do think you would consider some sufficiently 
intelligently behaving software, plus perhaps certain I/O, to be 
conscious.  So what would be the crucial difference between these two 
software packages?  I'd say having the ability to produce intelligent 
looking responses to a large range of inputs would be a minimum.


Quite probably. But the argument was made that the detailed recording of 
the sequence of brain states of a conscious person could not be 
conscious because it was not counterfactually correct. This charge has 
always seemed to me to be misguided, since the recording does not 
pretend to be functionally equivalent to the original in all 
circumstances -- just in the particular circumstance in which the 
recording was made. It has never been proposed that the film could be 
used as a prosthesis for all situations. So this argument against the 
replayed recording recreating the original conscious moments must fail 
-- on the basis of total irrelevance.
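The distinction at issue can be made concrete with a toy sketch (hypothetical functions of my own, not anyone's actual proposal): a replayed recording reproduces one fixed trace whatever the input, while a counterfactually correct prosthesis computes the same function over the whole input domain; they agree only on the circumstance that was actually recorded.

```python
# Toy contrast (hypothetical functions): a replayed recording versus a
# counterfactually correct prosthesis. Both agree on the input that actually
# occurred; only the prosthesis also handles the inputs that did not.

def brain(stimulus: int) -> int:
    return stimulus * 2 + 1            # stand-in for the original computation

recorded_input = 3
recording = [brain(recorded_input)]    # the film: one fixed trace, no branching

def replay(_stimulus: int) -> int:
    return recording[0]                # ignores its input entirely

def prosthesis(stimulus: int) -> int:
    return stimulus * 2 + 1            # functionally equivalent over the domain

assert replay(recorded_input) == brain(recorded_input)  # matches the actual run
assert prosthesis(5) == brain(5)       # also correct on counterfactual inputs
assert replay(5) != brain(5)           # the recording is not
```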


Bruce



Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net wrote:


I'm trying to understand what counterfactual correctness means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the antecedent is false. So 
Bruno's referring to the branching 'if A then B else C' construction of a program is 
not really a counterfactual at all, since to be a counterfactual A *must* be false. So 
the counterfactual construction is 'A then C', where A happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never made explicit.  In 
the beginning when one is asked to accept a digital prosthesis for a brain part, Bruno 
says almost everyone agrees that consciousness is realized by a certain class of 
computations.  The alternative, as suggested by Searle for example, that consciousness 
depends not only on the activity of the brain but also on what the physical material is, 
seems like invoking magic.  So we agree that consciousness depends on the program 
that's running, not the hardware it's running on.  And implicit in this is that this 
program implements intelligence, the ability to respond differently to different 
external signals/environment.  Bruno says that's what is meant by computation, but 
whether that's entailed by the word or not seems like a semantic quibble.  Whatever you 
call it, it's implicit in the idea of digital brain prosthesis and in the idea of 
strong AI that the program instantiating consciousness must be able to respond 
differently to different inputs.


But it doesn't have to respond differently to every different input or to all logically 
possible inputs.  It only needs to be able to respond to inputs within some range as 
might occur in its environment - whether that environment is a whole world or just the 
other parts of the brain.  So the digital prosthesis needs to do this with that same 
functionality over the same domain as the brain parts it replaced.  In which case it is 
counterfactually correct. Right?  It's a concept relative to a limited domain.


That is probably right. But that just means that the prosthesis is functionally 
equivalent over the required domain. To call this 'counterfactual correctness' seems to 
me to be just confused.


What makes the consciousness, in Bruno's view, is that it's the right kind of program 
being run - which seems fairly uncontroversial.  And part of being the right kind is that 
it is counterfactually correct = functionally equivalent at the software level.  Of 
course this also means it correctly interfaces physically with the rest of the world of 
which it is conscious.  But Bruno minimizes this by two moves. First, he considers the 
brain as dreaming so it is not interacting via perceptions.  I objected to this as missing 
the essential fact that the processes in the brain refer to perceptions and other concepts 
learned in its waking state and this is what gives them meaning.  Second, Bruno notes that 
one can just expand the digital prosthesis to include a digital artificial world, 
including even a simulation of a whole universe. To which my attitude is that this makes 
the concept of prosthesis and artificial moot.


I don't think you would consider just *any* piece of software running to be conscious and 
I do think you would consider some, sufficiently intelligent behaving software, plus 
perhaps certain I/O, to be conscious.  So what would be the crucial difference between 
these two software packages?  I'd say having the ability to produce intelligent looking 
responses to a large range of inputs would be a minimum.


Brent

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: What does the MGA accomplish?

2015-05-15 Thread Bruno Marchal


On 14 May 2015, at 20:34, meekerdb wrote:


On 5/14/2015 9:41 AM, Bruno Marchal wrote:


On 14 May 2015, at 07:13, meekerdb wrote:


On 5/13/2015 5:32 PM, Russell Standish wrote:

On Thu, May 14, 2015 at 11:26:17AM +1200, LizR wrote:
On 13 May 2015 at 18:20, Russell Standish  
li...@hpcoders.com.au wrote:


For a robust ontology, counterfactuals are physically instantiated, therefore the MGA is invalid.

Can you elaborate on this? ISTM that counterfactuals aren't, and indeed can't, be physically instantiated. (Isn't that what being counterfactual means?!)
No - counterfactual just means not in this universe. If it's not in any universe, then it's not just counterfactual, but actually illogical, or impossible, or something.


If "not in any universe" is meant in the Kripke sense, then something not in any universe is something that is logically impossible.  But if "not in any universe" is meant in the MWI sense, then counterfactuals are only those outcomes consistent with QM but which don't happen.


OK.


I think it is only the latter kind of counterfactual that need be  
considered in computations.


Not OK. You beg the question of justifying why the physical  
computation wins. You then miss the comp promise of explaining the  
physical form from something simpler like the combinatorial, or the  
arithmetical, or the sigma_1 complete set, etc.


OK, I appreciate that.  But then what does it mean that the brain  
prosthesis the doctor installs must be counterfactually correct?


It means that, after the substitution is done, in the case where you are *not* hungry, it still holds that if you were hungry, you would eat.






Is there no restriction except consistency of the possible inputs?


?
"Consistent" applies to any system of beliefs producing (believed) propositions. In classical systems, the consistent ones are those not believing some proposition A together with ~A, or which do not believe the propositional constant falsity f: they satisfy ~[]f, or equivalently believe t.
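As an illustrative aside (my toy encoding, not Bruno's formalism): at the purely syntactic level, a classical belief set is inconsistent if it contains the falsity constant "f", or some proposition "A" together with its negation "~A".

```python
# Toy consistency check on a set of belief strings. This only detects
# explicit contradictions, not ones derivable by inference.

def consistent(beliefs):
    if "f" in beliefs:                      # believes the constant falsum
        return False
    # believes both some A and ~A
    return not any("~" + b in beliefs for b in beliefs)

print(consistent({"A", "B"}))    # True
print(consistent({"A", "~A"}))   # False
print(consistent({"f"}))         # False
```

A real consistency check would of course have to close the set under the system's inference rules first; this sketch only shows the shape of the definition.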








Unless you talk as if UDA is understood, and suggest a way to explain physical counterfactualness in terms of the physics extracted from comp, which you assume is QM. In that case, I can make sense of your sentence.


I'm trying to understand what counterfactual correctness means in  
the physical thought experiments.


It means that in the physically correct mimic of the computation, like the MOVIE, we would have the right output or the relevant circuit behavior in case we made some change to the system.


Maudlin, in MGA terms, adds the Klaras: physically inactive devices which would be triggered, and would restore the counterfactual correctness, only in case a change is introduced.
But, of course, restoring the counterfactualness at the right moment makes the system counterfactually correct, by definition, so if we accept the physical supervenience (of consciousness on the physical activity of the computation) then we have to accept the consciousness on MOVIE + KLARA, which are identical during the experience, as the Klaras are inactive.


So physical supervenience makes computationalism spurious, and it is simpler to NOT assume a physical reality at the start, and to relate consciousness to the semantics of the abstract program/person, which is supported, accidentally or not, by this or that universal system.


This extends Everett's formulation of the Universal Wave/Multiverse to the Sigma_1 arithmetic. That is better seen as a web of dream emulations, as the obligatory exercise now should consist in justifying a probability measure on them (cf. the FPI).


I say more on this in an answer to another post.

Bruno





Brent


Are you defending physicalism? Or are you trying to justify the  
appearance of physicalism in comp?


Sometimes, out of context, those two things can't avoid looking similar. At some point people should perhaps make clear all that they assume.


Bruno





Brent



As I mentioned, a simple example is my decision between tea and coffee. In the MWI (or an infinite universe) there are separate branches (or locations) in which I have both - but in the branch where I had tea, I didn't have coffee, and vice versa. And because those branches can't communicate, the road not taken remains counterfactual and non-physical within each branch. Isn't that enough for the MGA to not need to worry about counterfactuals, even in the MWI/Level whatever multiverse?


Why is communication needed?






http://iridia.ulb.ac.be/~marchal/






Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 6:18 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net wrote:


I'm trying to understand what counterfactual correctness means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the antecedent is false. So the branching 'if A then B else C' construction of a program that Bruno refers to is not really a counterfactual at all, since to be a counterfactual A *must* be false. So the counterfactual construction is 'A then C', where A happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never made explicit.  In 
the beginning when one is asked to accept a digital prosthesis for a brain part, 
Bruno says almost everyone agrees that consciousness is realized by a certain class 
of computations.  The alternative, as suggested by Searle for example, that 
consciousness depends not only on the activity of the brain but also on what the 
physical material is, seems like invoking magic.  So we agree that consciousness 
depends on the program that's running, not the hardware it's running on.  And 
implicit in this is that this program implements intelligence, the ability to respond 
differently to different external signals/environment.  Bruno says that's what is 
meant by computation, but whether that's entailed by the word or not seems like a 
semantic quibble.  Whatever you call it, it's implicit in the idea of digital brain 
prosthesis and in the idea of strong AI that the program instantiating consciousness 
must be able to respond differently to different inputs.


But it doesn't have to respond differently to every different input or to all logically 
possible inputs.  It only needs to be able to respond to inputs within some range as 
might occur in its environment - whether that environment is a whole world or just 
the other parts of the brain.  So the digital prosthesis needs to do this with that 
same functionality over the same domain as the brain parts it replaced.  In which 
case it is counterfactually correct. Right?  It's a concept relative to a limited 
domain.


That is probably right. But that just means that the prosthesis is functionally 
equivalent over the required domain. To call this 'counterfactual correctness' seems 
to me to be just confused.


What makes the consciousness, in Bruno's view, is that it's the right kind of program 
being run - which seems fairly uncontroversial.  And part of being the right kind is 
that it is counterfactually correct = functionally equivalent at the software 
level.  Of course this also means it correctly interfaces physically with the rest of 
the world of which it is conscious.  But Bruno minimizes this by two moves. First, he 
considers the brain as dreaming so it is not interacting via perceptions.  I objected 
to this as missing the essential fact that the processes in the brain refer to 
perceptions and other concepts learned in its waking state and this is what gives them 
meaning.  Second, Bruno notes that one can just expand the digital prosthesis to 
include a digital artificial world, including even a simulation of a whole universe. To 
which my attitude is that this makes the concept of prosthesis and artificial moot.


I don't think you would consider just *any* piece of software running to be conscious 
and I do think you would consider some, sufficiently intelligent behaving software, 
plus perhaps certain I/O, to be conscious.  So what would be the crucial difference 
between these two software packages?  I'd say having the ability to produce intelligent 
looking responses to a large range of inputs would be a minimum.


Quite probably. But the argument was made that the detailed recording of the sequence of 
brain states of a conscious person could not be conscious because it was not 
counterfactually correct. This charge has always seemed to me to be misguided, since the 
recording does not pretend to be functionally equivalent to the original in all 
circumstances -- just in the particular circumstance in which the recording was made. It 
has never been proposed that the film could be used as a prosthesis for all situations. 
So this argument against the replayed recording recreating the original conscious 
moments must fail -- on the basis of total irrelevance.
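The contrast being argued can be sketched concretely (a toy illustration; the function names and state strings are my assumptions, not anything proposed in the thread): a recording replays one fixed state sequence whatever the input, while a counterfactually correct program branches on its input.

```python
# A recording vs. a counterfactually correct program.

def recording(_input):
    # Ignores its input entirely; replays the states captured in one run.
    return ["s1", "s2", "s3"]

def program(stimulus):
    # Responds differently to different inputs over its domain.
    if stimulus == "original stimulus":
        return ["s1", "s2", "s3"]
    return ["s1", "alt"]

# In the circumstance of the original run the two are indistinguishable...
print(recording("original stimulus") == program("original stimulus"))  # True
# ...they come apart only on an input that never actually occurred.
print(recording("novel stimulus") == program("novel stimulus"))        # False
```

In the circumstance actually recorded, nothing distinguishes the two; the disagreement is confined to inputs that never occur during the replay, which is the "total irrelevance" point above.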


But you could turn this around and pick some arbitrary sequence/recording and say, "Well, it would be the right program to be conscious in SOME circumstance, therefore it's conscious."


Brent


Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 9:31 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:


But you could turn this around and pick some arbitrary sequence/recording and say, 
Well it would be the right program to be conscious in SOME circumstance, therefore 
it's conscious.


I think it goes without saying that it is a recording of brain activity of a conscious 
person -- not a film of your dog chasing a ball. We have to assume a modicum of common 
sense.


Fine.  But then what is it about the recording of the brain activity of a conscious 
person that makes it conscious?  Why is it a property of just that sequence, when in 
general we would attribute consciousness only to an entity that responds 
intelligently/differently to different circumstances?  We wouldn't attribute 
consciousness based on just a short sequence of behavior such as might be evinced by 
one of Disney's animatronics.


What is it about the brain activity of a conscious person that makes him conscious? 
Whatever made the person conscious in the first instance is what makes the recording 
recreate that conscious moment. 


Unless we know what it is about the brain processes that make it conscious we can't know 
what it is necessary to record.


The point here is that consciousness supervenes on the brain activity. This makes no 
ontological claims -- simply an epistemological claim. This brain activity is associated 
with the phenomenon we call consciousness.


So are you assuming that only a brain can instantiate consciousness? Do you not believe 
that consciousness is a matter of what information processing the brain is doing, but that 
it requires wetware?  Bruno's idea is that he may solve the mind-body problem; but you 
seem not to see any problem.  Of course consciousness supervenes on brain activity - but 
maybe not just any brain activity (c.f. anesthesia).  The question is whether it can 
supervene on something else and if so, what?




How we determine whether a person is conscious in the first place is a 
different matter.


But that completely avoids the question of creating a conscious AI program, whether it's 
possible, and whether it's identical with making an intelligent AI program.


Brent



Bruce

I think Bruno is right that it makes more sense to attribute consciousness, like 
intelligence, to a program that can respond differently and effectively to a wide range 
of inputs.  And, maybe unlike Bruno, I think intelligence and consciousness is only 
possible relative to an environment, one with extent in time as well as space.


Brent






Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:


But you could turn this around and pick some arbitrary 
sequence/recording and say, Well it would be the right program to be 
conscious in SOME circumstance, therefore it's conscious.


I think it goes without saying that it is a recording of brain 
activity of a conscious person -- not a film of your dog chasing a 
ball. We have to assume a modicum of common sense.


Fine.  But then what is it about the recording of the brain activity of 
a conscious person that makes it conscious?  Why is it a property of 
just that sequence, when in general we would attribute consciousness 
only to an entity that responded intelligently/differently to different 
circumstances.  We wouldn't attribute consciousness based on just a 
short sequence of behavior such as might be evinced by one of Disney's 
animatronics.


What is it about the brain activity of a conscious person that makes him 
conscious? Whatever made the person conscious in the first instance is 
what makes the recording recreate that conscious moment. The point here 
is that consciousness supervenes on the brain activity. This makes no 
ontological claims -- simply an epistemological claim. This brain 
activity is associated with the phenomenon we call consciousness.


How we determine whether a person is conscious in the first place is a 
different matter.


Bruce

I think Bruno is right that it makes more sense to attribute 
consciousness, like intelligence, to a program that can respond 
differently and effectively to a wide range of inputs.  And, maybe 
unlike Bruno, I think intelligence and consciousness is only possible 
relative to an environment, one with extent in time as well as space.


Brent




Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 9:31 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:


But you could turn this around and pick some arbitrary 
sequence/recording and say, Well it would be the right program to 
be conscious in SOME circumstance, therefore it's conscious.


I think it goes without saying that it is a recording of brain 
activity of a conscious person -- not a film of your dog chasing a 
ball. We have to assume a modicum of common sense.


Fine.  But then what is it about the recording of the brain activity 
of a conscious person that makes it conscious?  Why is it a property 
of just that sequence, when in general we would attribute 
consciousness only to an entity that responded 
intelligently/differently to different circumstances.  We wouldn't 
attribute consciousness based on just a short sequence of behavior 
such as might be evinced by one of Disney's animatronics.


What is it about the brain activity of a conscious person that makes 
him conscious? Whatever made the person conscious in the first 
instance is what makes the recording recreate that conscious moment. 


Unless we know what it is about the brain processes that make it 
conscious we can't know what it is necessary to record.


I thought the idea was that we recorded everything that was going on.


The point here is that consciousness supervenes on the brain activity. 
This makes no ontological claims -- simply an epistemological claim. 
This brain activity is associated with the phenomenon we call 
consciousness.


So are you assuming that only a brain can instantiate consciousness?


No. "All functional brains are conscious" does not entail that all 
consciousness comes with functional goo in a skull.



Do you not believe that consciousness is a matter of what information 
processing the brain is doing, but that it requires wetware?  Bruno's 
idea is that he may solve the mind-body problem; but you seem not to see 
any problem.


No, I don't see any particular problem. In fact, if there is a 
difference between brain activity and consciousness, you are introducing 
some weird dualist Cartesian theatre -- the brain activity is only 
conscious when it is enlivened by some extra computational magic stuff.



 Of course consciousness supervenes on brain activity - but 
maybe not just any brain activity (c.f. anesthesia).  The question is 
whether it can supervene on something else and if so, what?


I don't see any problem here -- see above: brain goo activity -> consciousness does not 
mean that consciousness -> brain goo activity.



How we determine whether a person is conscious in the first place is a 
different matter.


But that completely avoids the question of creating a conscious AI 
program, whether it's possible, and whether it's identical with making 
an intelligent AI program.


I didn't think we were trying to create a conscious AI in this 
discussion. I think this is probably possible, and that the means by 
which it is done will probably be quite different from programs written 
to control robots. I suspect that the difference might well be in the 
provision of language skills -- so that an internal narrative can be 
developed.


The AI that I envisage will probably be based on a learning program of 
some sort, that will have to learn in much the same way as an infant 
human learns. I doubt that we will ever be able to create an AI that is 
essentially an intelligent adult human when it is first turned on.


Bruce



Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 10:29 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 9:31 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:


But you could turn this around and pick some arbitrary sequence/recording and say, 
Well it would be the right program to be conscious in SOME circumstance, therefore 
it's conscious.


I think it goes without saying that it is a recording of brain activity of a 
conscious person -- not a film of your dog chasing a ball. We have to assume a 
modicum of common sense.


Fine.  But then what is it about the recording of the brain activity of a conscious 
person that makes it conscious?  Why is it a property of just that sequence, when in 
general we would attribute consciousness only to an entity that responded 
intelligently/differently to different circumstances.  We wouldn't attribute 
consciousness based on just a short sequence of behavior such as might be evinced by 
one of Disney's animatronics.


What is it about the brain activity of a conscious person that makes him conscious? 
Whatever made the person conscious in the first instance is what makes the recording 
recreate that conscious moment. 


Unless we know what it is about the brain processes that make it conscious we can't 
know what it is necessary to record.


I thought the idea was that we recorded everything that was going on.


The point here is that consciousness supervenes on the brain activity. This makes no 
ontological claims -- simply an epistemological claim. This brain activity is 
associated with the phenomenon we call consciousness.


So are you assuming that only a brain can instantiate consciousness?


No. "All functional brains are conscious" does not entail that all consciousness comes 
with functional goo in a skull.



Do you not believe that consciousness is a matter of what information processing the 
brain is doing, but that it requires wetware?  Bruno's idea is that he may solve the 
mind-body problem; but you seem not to see any problem.


No, I don't see any particular problem. In fact, if there is a difference between brain 
activity and consciousness, you are introducing some weird dualist Cartesian theatre -- 
the brain activity is only conscious when it is enlivened by some extra computational 
magic stuff.



 Of course consciousness supervenes on brain activity - but maybe not just any brain 
activity (c.f. anesthesia).  The question is whether it can supervene on something else 
and if so, what?


I don't see any problem here -- see above: brain goo activity -> consciousness does not 
mean that consciousness -> brain goo activity.




How we determine whether a person is conscious in the first place is a 
different matter.


But that completely avoids the question of creating a conscious AI program, whether 
it's possible, and whether it's identical with making an intelligent AI program.


I didn't think we were trying to create a conscious AI in this discussion. I think this 
is probably possible, and that the means by which it is done will probably be quite 
different from programs written to control robots. I suspect that the difference might 
well be in the provision of language skills -- so that an internal narrative can be 
developed.


The AI that I envisage will probably be based on a learning program of some sort, that 
will have to learn in much the same way as an infant human learns. I doubt that we will 
ever be able to create an AI that is essentially an intelligent adult human when it is 
first turned on.


I agree with that, but once an AI is realized it will be possible to copy it.  And if it's 
digital it will be possible to implement it using different hardware.  If it's not 
digital, it will (in principle) be possible to implement it arbitrarily closely with a 
digital device.  And we will have the same question - what is it that makes that hardware 
device conscious?  I don't see any plausible answer except "running the program it 
instantiates."


Brent



Re: What does the MGA accomplish?

2015-05-15 Thread Bruce Kellett

meekerdb wrote:

On 5/15/2015 6:18 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net wrote:


I'm trying to understand what counterfactual correctness 
means in

the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the 
antecedent is false. So Bruno's referring to the branching 'if A 
then B else C' construction of a program is not really a 
counterfactual at all, since to be a counterfactual A *must* be 
false. So the counterfactual construction is 'A then C', where A 
happens to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never 
made explicit.  In the beginning when one is asked to accept a 
digital prosthesis for a brain part, Bruno says almost everyone 
agrees that consciousness is realized by a certain class of 
computations.  The alternative, as suggested by Searle for example, 
that consciousness depends not only on the activity of the brain 
but also on what the physical material is, seems like invoking magic.  
So we agree that consciousness depends on the program that's 
running, not the hardware it's running on.  And implicit in this is 
that this program implements intelligence, the ability to respond 
differently to different external signals/environment.  Bruno says 
that's what is meant by computation, but whether that's entailed 
by the word or not seems like a semantic quibble.  Whatever you 
call it, it's implicit in the idea of digital brain prosthesis and 
in the idea of strong AI that the program instantiating 
consciousness must be able to respond differently to different inputs.


But it doesn't have to respond differently to every different input or 
to all logically possible inputs.  It only needs to be able to 
respond to inputs within some range as might occur in its 
environment - whether that environment is a whole world or just the 
other parts of the brain.  So the digital prosthesis needs to do 
this with that same functionality over the same domain as the brain 
parts it replaced.  In which case it is counterfactually correct. 
Right?  It's a concept relative to a limited domain.


That is probably right. But that just means that the prosthesis is 
functionally equivalent over the required domain. To call this 
'counterfactual correctness' seems to me to be just confused.


What makes the consciousness, in Bruno's view, is that it's the right 
kind of program being run - which seems fairly uncontroversial.  And 
part of being the right kind is that it is counterfactually correct 
= functionally equivalent at the software level.  Of course this 
also means it correctly interfaces physically with the rest of the 
world of which it is conscious.  But Bruno minimizes this by two 
moves. First, he considers the brain as dreaming so it is not 
interacting via perceptions.  I objected to this as missing the 
essential fact that the processes in the brain refer to perceptions 
and other concepts learned in its waking state and this is what gives 
them meaning.  Second, Bruno notes that one can just expand the 
digital prosthesis to include a digital artificial world, including 
even a simulation of a whole universe. To which my attitude is that 
this makes the concept of prosthesis and artificial moot.


I don't think you would consider just *any* piece of software running 
to be conscious and I do think you would consider some, sufficiently 
intelligent behaving software, plus perhaps certain I/O, to be 
conscious.  So what would be the crucial difference between these two 
software packages?  I'd say having the ability to produce intelligent 
looking responses to a large range of inputs would be a minimum.


Quite probably. But the argument was made that the detailed recording 
of the sequence of brain states of a conscious person could not be 
conscious because it was not counterfactually correct. This charge has 
always seemed to me to be misguided, since the recording does not 
pretend to be functionally equivalent to the original in all 
circumstances -- just in the particular circumstance in which the 
recording was made. It has never been proposed that the film could be 
used as a prosthesis for all situations. So this argument against the 
replayed recording recreating the original conscious moments must fail 
-- on the basis of total irrelevance.


But you could turn this around and pick some arbitrary 
sequence/recording and say, Well it would be the right program to be 
conscious in SOME circumstance, therefore it's conscious.


I think it goes without saying that it is a recording of brain activity 
of a conscious person -- not a film of your dog chasing a ball. We have 
to assume a modicum of common sense.


Bruce


Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/15/2015 7:37 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 6:18 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/15/2015 4:40 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net wrote:


I'm trying to understand what counterfactual correctness means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the antecedent is false. 
So Bruno's referring to the branching 'if A then B else C' construction of a 
program is not really a counterfactual at all, since to be a counterfactual A 
*must* be false. So the counterfactual construction is 'A then C', where A happens 
to be false.


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never made explicit.  
In the beginning, when one is asked to accept a digital prosthesis for a brain part, 
Bruno says almost everyone agrees that consciousness is realized by a certain class 
of computations.  The alternative, as suggested by Searle for example, that 
consciousness depends not only on the activity of the brain but also on what the 
physical material is, seems like invoking magic.  So we agree that consciousness 
depends on the program that's running, not the hardware it's running on.  And 
implicit in this is that this program implements intelligence, the ability to 
respond differently to different external signals/environments. Bruno says that's 
what is meant by computation, but whether that's entailed by the word or not 
seems like a semantic quibble.  Whatever you call it, it's implicit in the idea of 
digital brain prosthesis and in the idea of strong AI that the program 
instantiating consciousness must be able to respond differently to different inputs.


But it doesn't have to respond differently to every different input or to all 
logically possible inputs.  It only needs to be able to respond to inputs within 
some range, as might occur in its environment - whether that environment is a whole 
world or just the other parts of the brain.  So the digital prosthesis needs to 
provide that same functionality over the same domain as the brain parts it 
replaced.  In which case it is counterfactually correct. Right?  It's a concept 
relative to a limited domain.
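Brent's limited-domain reading can be sketched in code. A minimal illustration (not from the thread; `original_neuron`, `prosthesis`, and the domain are all invented for the sketch): two differently built functions that agree on every input the environment can actually produce.

```python
# Sketch: "counterfactual correctness" read as functional equivalence
# over a limited domain of inputs (all names invented for illustration).

def original_neuron(signal: int) -> int:
    """Stand-in for the brain part being replaced."""
    return 1 if signal > 10 else 0

def prosthesis(signal: int) -> int:
    """Digital replacement, implemented differently."""
    return int(signal >= 11)  # agrees with the original on integer signals

# Suppose the environment only ever produces signals in this range.
domain = range(0, 101)

equivalent = all(original_neuron(s) == prosthesis(s) for s in domain)
print(equivalent)  # True: counterfactually correct *relative to this domain*
```

The two implementations differ in construction but coincide on the whole tested domain, which is all the limited-domain notion asks for.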


That is probably right. But that just means that the prosthesis is functionally 
equivalent over the required domain. To call this 'counterfactual correctness' seems 
to me to be just confused.


What makes the consciousness, in Bruno's view, is that it's the right kind of program 
being run - which seems fairly uncontroversial.  And part of being the right kind is 
that it is counterfactually correct = functionally equivalent at the software 
level.  Of course this also means it correctly interfaces physically with the rest 
of the world of which it is conscious.  But Bruno minimizes this by two moves. First, 
he considers the brain as dreaming so it is not interacting via perceptions.  I 
objected to this as missing the essential fact that the processes in the brain refer 
to perceptions and other concepts learned in its waking state and this is what gives 
them meaning.  Second, Bruno notes that one can just expand the digital prosthesis to 
include a digital artificial world, including even a simulation of a whole universe. 
To which my attitude is that this makes the concept of prosthesis and artificial 
moot.


I don't think you would consider just *any* piece of software running to be conscious 
and I do think you would consider some, sufficiently intelligent behaving software, 
plus perhaps certain I/O, to be conscious.  So what would be the crucial difference 
between these two software packages? I'd say having the ability to produce 
intelligent looking responses to a large range of inputs would be a minimum.


Quite probably. But the argument was made that the detailed recording of the sequence 
of brain states of a conscious person could not be conscious because it was not 
counterfactually correct. This charge has always seemed to me to be misguided, since 
the recording does not pretend to be functionally equivalent to the original in all 
circumstances -- just in the particular circumstance in which the recording was made. 
It has never been proposed that the film could be used as a prosthesis for all 
situations. So this argument against the replayed recording recreating the original 
conscious moments must fail -- on the basis of total irrelevance.


But you could turn this around and pick some arbitrary sequence/recording and say, 
"Well, it would be the right program to be conscious in SOME circumstance, therefore 
it's conscious."


I think it goes without saying that it is a recording of brain activity of a conscious 
person -- not a film of your dog chasing a ball. We have to assume a modicum of common 
sense.

Re: What does the MGA accomplish?

2015-05-15 Thread meekerdb

On 5/14/2015 7:24 PM, Bruce Kellett wrote:

LizR wrote:
On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net wrote:


I'm trying to understand what counterfactual correctness means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the antecedent is false. So 
Bruno's referring to the branching 'if A then B else C' construction of a program is not 
really a counterfactual at all, since to be a counterfactual A *must* be false. So the 
counterfactual construction is 'A then C', where A happens to be false.
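The 'if A then B else C' point can be made concrete. A small sketch (invented for illustration, not from the thread): the program defines behaviour for both branches, but on a run where A happens to be false, branch B is never executed -- it is the counterfactual road not taken.

```python
# Sketch: in "if A then B else C", when A is false only C runs;
# branch B exists in the code but is counterfactual on this run.

executed = []

def step(name: str) -> str:
    executed.append(name)
    return name

def run(a: bool) -> str:
    if a:
        return step("B")   # taken only when A is true
    else:
        return step("C")   # taken when A is false

result = run(False)        # A happens to be false on this run
print(result, executed)    # C ['C'] -- B was never executed
```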


The role of this in consciousness escapes me too.


It comes in at the very beginning of his argument, but it's never made explicit.  In the 
beginning, when one is asked to accept a digital prosthesis for a brain part, Bruno says 
almost everyone agrees that consciousness is realized by a certain class of computations.  
The alternative, as suggested by Searle for example, that consciousness depends not only 
on the activity of the brain but also on what the physical material is, seems like invoking 
magic.  So we agree that consciousness depends on the program that's running, not the 
hardware it's running on.  And implicit in this is that this program implements 
intelligence, the ability to respond differently to different external 
signals/environments.  Bruno says that's what is meant by computation, but whether that's 
entailed by the word or not seems like a semantic quibble.  Whatever you call it, it's 
implicit in the idea of digital brain prosthesis and in the idea of strong AI that the 
program instantiating consciousness must be able to respond differently to different inputs.


But it doesn't have to respond differently to every different input or to all logically 
possible inputs.  It only needs to be able to respond to inputs within some range, as might 
occur in its environment - whether that environment is a whole world or just the other 
parts of the brain.  So the digital prosthesis needs to provide that same 
functionality over the same domain as the brain parts it replaced.  In which case it is 
counterfactually correct. Right?  It's a concept relative to a limited domain.


Brent

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: What does the MGA accomplish?

2015-05-14 Thread Bruno Marchal


On 13 May 2015, at 21:04, meekerdb wrote:


On 5/13/2015 5:33 AM, Bruno Marchal wrote:


On 12 May 2015, at 22:27, meekerdb wrote:


On 5/12/2015 3:25 AM, Bruno Marchal wrote:

On 12 May 2015, at 02:33, Bruce Kellett wrote: The fact that  
projecting the film isn't a general purpose computer seems to me  
to be a red herring. It was never claimed that projecting the  
film of the brain substrate instantiated general consciousness  
-- the only claim ever made here is that this projection  
recreates the conscious moment that was originally filmed. That  
is all that is required. General purpose computing and  
counterfactual correctness are all beside the point. If the  
original conscious moment is recreated, then the film is a  
computation in any sense that is necessary to produce a  
conscious moment. This is sufficient to undermine the claim that  
consciousness does not supervene on the physical body.


The matter of whether the physical is primitive or not is also a  
red herring. No such assumption is required in order to show  
that the MGA fails to prove its point.


It is a reductio ad absurdum. If consciousness requires the  
physical activity and only the physical activity, then the  
recording is conscious. But anyone knowing what a computation is  
should understand that the recording does not compute more than a  
trivial sequence of projection, which is not similar to the  
computation of the boolean graph.


I think there are five concepts of computation in play here:

1. An abstract deterministic computer (TM) or program running with  
some given external input.  This program is assumed to have well  
defined behavior over a whole class of inputs, not just the one  
considered.


OK. That is the standard concept (although the computation does not  
have to be deterministic, but that is a detail here).






2. A classical (deterministic) physical computer realizing (1)  
supra.  This is what the doctor proposes to replace part or all of  
your brain.


Yes, and this involves physics. But this is no longer a computation  
in the sense of Church-Turing, which does not refer to physics at  
all.






3. A recording of (2) supra being played back.


Nobody would call that a computation, except to evade comp's  
consequences.






4. An execution of (1) with a classical (deterministic) computer  
that has all the branching points disabled so that it realizes (1)  
but is not counterfactually equivalent to (1) or (2).


This computes one epsilon more than the movie. That is, not a lot.



5. A physical (quantum) computer realizing (1) supra, in its  
classical limit.


That is the solution we hope for (as it would make comp and QM ally  
and very plausible).





Bruno takes (1) to define computation and takes the hypothesis  
that consciousness is realized by a certain kind of computation,  
an instance of (1).  So he says that if you believe this you will  
say yes to the doctor who proposes (2) as a prosthesis. This  
substitution of a physical deterministic computer will preserve  
your consciousness.  Then he proceeds to argue via the MGA that  
this implies your consciousness will not be affected by using (4)  
instead of (2)  and further that (4) is equivalent to (3) and (3)  
is absurd.


Having found a reductio, he wants to reject the assumption that  
your consciousness is realized by the physics of a deterministic  
computer as in (2).


Whether (3) preserving consciousness is absurd or not (and I agree  
with Russell that's too much of a stretch of intuition to judge);


There is no stretch of intuition, the MGA shows that you need to  
put magic in the primitive matter to make it play a role in the  
consciousness of physical events.





this is not the reversal of physics claimed.  The Democritan  
physicist (nothing but atoms and the void) will point out that (2)  
is not what the doctor can implement.


?


What is possible is realizing a prosthetic computation by (5).   
And (5) cannot be truncated like (4); quantum mechanical systems  
can only be approximately classical and only when they are  
interacting with an environment.  The classical deterministic  
computer (TM) is a platonic ideal which, as far as we know, cannot  
be realized.


But then comp is false, as comp is a bet on surviving some digital  
truncation.


That's why you need to distinguish comp1 from comp2.  Comp1, which  
almost everyone agrees to, assumes the doctor will implant a real  
quantum mechanical, approximately digital device.  But the reasoning  
leading to the MGA assumes an ideal, abstract digital device  
which has no interaction with its environment except the TM I/O.


I use a dream to make it simpler, but you can do the MGA with an  
interactive environment. There are two ways of doing that: either you  
put the digital approximation of the environment in the graph,  
or ... you don't. You will need a different kind of magic for preventing  
the physical supervenience (assumed 

Re: What does the MGA accomplish?

2015-05-14 Thread Bruno Marchal


On 13 May 2015, at 21:49, meekerdb wrote:


On 5/13/2015 8:49 AM, David Nyman wrote:

On 13 May 2015 at 14:53, Bruno Marchal marc...@ulb.ac.be wrote:

I think it is a good summary, yes. Thanks!

Building on that then, would you say that bodies and brains  
(including of course our own) fall within the class of embedded  
features of the machine's generalised 'physical environment'? Their  
particular role would be the relation between the 'knower' in platonia  
and the environment in general. At a 'low' level, the comp  
assumption is that the FPI  results in a 'measure battle' yielding  
a range of observable transformations (or continuations) consistent  
with the Born probabilities (else comp is false). A physics  
consistent with QM, in other words. But the expectation is also  
that the knower itself maintains its capacity for physical  
manifestation in relation to the transformed environment, in each  
continuation, in order for the observations to occur.


BTW, Bruce made the point that the expected measure of the class of  
such physically-consistent observations, against the background of  
UD*, must be very close to zero. ISTM that this isn't really the  
point (e.g. the expected measure of readable books in the Library  
of Babel must also be close to zero). What seems more relevant is  
the presumed lack of 'un-physical' observer-environment relations  
(i.e. not only 'why no white rabbits?' but 'why physics?'). From  
this perspective, the obvious difference between the Library of  
Babel and UD* is that the former must be 'observed' externally  
whereas the latter is conceived as yielding a view 'from within'.  
Hence what must be justified is why our particular species of  
internal observer - i.e. the kind capable of self-manifesting  
within consistently 'physical' environments, should predominate.


As they say on TV, This just in!

Why Boltzmann Brains Don't Fluctuate Into Existence From the De  
Sitter Vacuum

Kimberly K. Boddy, Sean M. Carroll, Jason Pollack
(Submitted on 11 May 2015)

Many modern cosmological scenarios feature large volumes of  
spacetime in a de Sitter vacuum phase. Such models are said to be  
faced with a Boltzmann Brain problem - the overwhelming majority  
of observers with fixed local conditions are random fluctuations in  
the de Sitter vacuum, rather than arising via thermodynamically  
sensible evolution from a low-entropy past. We argue that this worry  
can be straightforwardly avoided in the Many-Worlds (Everett)  
approach to quantum mechanics, as long as the underlying Hilbert  
space is infinite-dimensional. In that case, de Sitter settles into  
a truly stationary quantum vacuum state. While there would be a  
nonzero probability for observing Boltzmann-Brain-like fluctuations  
in such a state, observation refers to a specific kind of  
dynamical process that does not occur in the vacuum (which is, after  
all, time-independent). Observers are necessarily out-of-equilibrium  
physical systems, which are absent in the vacuum. Hence, the fact  
that projection operators corresponding to states with observers in  
them do not annihilate the vacuum does not imply that such observers  
actually come into existence. The Boltzmann Brain problem is  
therefore much less generic than has been supposed.



Good opportunity to recall the answer of the question asked in the  
title of the thread:


What the MGA explains is that the computationalist does not have that option,  
even if the winner turns out to be that infinite but non-robust physical  
universe (weird, but not yet shown impossible under comp).
In the sigma_1 complete reality, you don't need to fluctuate to get  
all brains, in infinitely many exemplars. Perhaps too many actually,  
but that remains to be evaluated.


Note that the quantum logics Z1*, X1*, and S4Grz1 suggest infinite  
dimensional (quasi-) Hilbert Space, technically.


For a very nice (but a bit technical) intro to quantum logic, see:
http://plato.stanford.edu/entries/qt-quantlog/

Bruno






arXiv:1505.02780v1 [hep-th]

Brent



David





http://iridia.ulb.ac.be/~marchal/




Re: What does the MGA accomplish?

2015-05-14 Thread Bruno Marchal


On 14 May 2015, at 07:13, meekerdb wrote:


On 5/13/2015 5:32 PM, Russell Standish wrote:

On Thu, May 14, 2015 at 11:26:17AM +1200, LizR wrote:
On 13 May 2015 at 18:20, Russell Standish li...@hpcoders.com.au  
wrote:



For a robust ontology, counterfactuals are physically instantiated,
therefore the MGA is invalid.

Can you elaborate on this? ISTM that counterfactuals aren't, and  
indeed
can't, be physically instantiated. (Isn't that what being  
counterfactual

means?!)
No - counterfactual just means not in this universe. If it's not in  
any universe, then it's not just counterfactual, but actually  
illogical, or impossible, or something.


If not in any universe is meant in the Kripke sense, then  
something not in any universe is something that is logically  
impossible.  But if not in any universe is meant in the MWI sense,  
then counterfactuals are only those outcomes consistent with QM but  
which don't happen.


OK.


 I think it is only the latter kind of counterfactual that need be  
considered in computations.


Not OK. You beg the question of justifying why the physical  
computation wins. You then miss the comp promise of explaining the  
physical form from something simpler like the combinatorial, or the  
arithmetical, or the sigma_1 complete set, etc.


Unless you talk as if the UDA were understood, and suggest a way to  
explain physical counterfactualness in terms of the physics extracted  
from comp, which you assume is QM. In that case, I can make sense of  
your sentence.


Are you defending physicalism? Or are you trying to justify the  
appearance of physicalism in comp?


Sometimes, out of context, those two things can't help looking  
similar. At some point people should perhaps make clear all that they  
assume.


Bruno





Brent



As I mentioned, a simple example is my decision between tea and  
coffee. In

the MWI (or an infinite universe) there are separate branches (or
locations) in which I have both - but in the branch where I had  
tea, I

didn't have coffee, and vice versa. And because those branches can't
communicate, the road not taken remains counterfactual and non- 
physical
within each branch. Isn't that enough for the MGA to not need to  
worry

about counterfactuals, even in the MWI/Level whatever multiverse?


Why is communication needed?






http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-14 Thread meekerdb

On 5/14/2015 9:41 AM, Bruno Marchal wrote:


On 14 May 2015, at 07:13, meekerdb wrote:


On 5/13/2015 5:32 PM, Russell Standish wrote:

On Thu, May 14, 2015 at 11:26:17AM +1200, LizR wrote:

On 13 May 2015 at 18:20, Russell Standish li...@hpcoders.com.au wrote:


For a robust ontology, counterfactuals are physically instantiated,
therefore the MGA is invalid.


Can you elaborate on this? ISTM that counterfactuals aren't, and indeed
can't, be physically instantiated. (Isn't that what being counterfactual
means?!)

No - counterfactual just means not in this universe. If it's not in any
universe, then it's not just counterfactual, but actually illogical, or
impossible, or something.


If not in any universe is meant in the Kripke sense, then something not in any 
universe is something that is logically impossible.  But if not in any universe is 
meant in the MWI sense, then counterfactuals are only those outcomes consistent with QM 
but which don't happen.


OK.


 I think it is only the latter kind of counterfactual that need be considered in 
computations.


Not OK. You beg the question of justifying why the physical computation wins. You then 
miss the comp promise of explaining the physical form from something simpler like the 
combinatorial, or the arithmetical, or the sigma_1 complete set, etc.


OK, I appreciate that.  But then what does it mean that the brain prosthesis the doctor 
installs must be counterfactually correct?  Is there no restriction except consistency of 
the possible inputs?





Unless you talk as if the UDA were understood, and suggest a way to explain physical 
counterfactualness in terms of the physics extracted from comp, which you assume is QM. 
In that case, I can make sense of your sentence.


I'm trying to understand what counterfactual correctness means in the physical thought 
experiments.


Brent


Are you defending physicalism? Or are you trying to justify the appearance of 
physicalism in comp?


Sometimes, out of context, those two things can't help looking similar. At some point 
people should perhaps make clear all that they assume.


Bruno





Brent




As I mentioned, a simple example is my decision between tea and coffee. In
the MWI (or an infinite universe) there are separate branches (or
locations) in which I have both - but in the branch where I had tea, I
didn't have coffee, and vice versa. And because those branches can't
communicate, the road not taken remains counterfactual and non-physical
within each branch. Isn't that enough for the MGA to not need to worry
about counterfactuals, even in the MWI/Level whatever multiverse?


Why is communication needed?






http://iridia.ulb.ac.be/~marchal/







Re: What does the MGA accomplish?

2015-05-14 Thread meekerdb

On 5/14/2015 9:19 AM, Bruno Marchal wrote:


On 14 May 2015, at 02:50, Russell Standish wrote:


On Wed, May 13, 2015 at 02:33:06PM +0200, Bruno Marchal wrote:





3. A recording of (2) supra being played back.


Nobody would call that a computation, except to evade comp's
consequences.



I do, because it is a computation, albeit a rather trivial one.
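Russell's sense of a recording as a "rather trivial" computation can be sketched as follows (an invented illustration, not from the thread; the Collatz-style `program` is an arbitrary stand-in): the original program has defined behaviour for many inputs, while the replayed recording is a straight-line emission of stored states, correct only for the one run that was filmed.

```python
# Sketch: a replayed recording is a trivial, branch-free computation,
# valid only for the single input on which it was made.

def program(x: int) -> list[int]:
    """The original computation: behaviour defined for many inputs."""
    trace = []
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        trace.append(x)
    return trace

recording = program(6)          # "film" the run on input 6

def playback() -> list[int]:
    """Replaying the film: no conditionals, just emit the stored states."""
    return list(recording)

print(playback() == program(6))   # True -- but only for input 6
```

The playback reproduces the filmed run exactly, yet it would diverge from `program` on any other input, which is the counterfactual gap the thread is arguing about.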


Yes, like a rock, in some theory of rock. It is not relevant for the argument.




It is
not to evade comp's consequences, however, which I already accept from
UDA1-7.


OK.





I insist on the point, because the MGA is about deriving an
inconsistency between computational and physical supervenience, which
requires care and rigour to demonstrate, not careless mislabelling.


If you agree with the consequences from UDA1-7, then you don't need step 8 (MGA) to 
understand the epistemological inconsistency between computational supervenience and 
primitive-physical supervenience (often assumed implicitly by the Aristotelians).


So, I see where your problem comes from: you might believe that step 8 shows that 
physical supervenience is wrong (not just the primitive one). But that is astonishing, 
because that physical supervenience seems to me to be contained in the definition of 
comp, which refers to a doctor with a physical body, who will reinstall my mind in 
a digital and physical machine.


Step 8 just shows an epistemological contradiction between comp and a primitive or 
physicalist notion of matter.


How can it do that when it never mentions a physicalist notion of matter?  It only 
invokes ordinary experience and ideas of matter - without assuming anything about whether 
they are fundamental.




The contradiction is epistemological. It dissociates what we observe from that primitive 
matter. Unless you attribute a magic clairvoyance ability to Olympia (feeling the 
inactive Klara nearby).









Whether (3) preserving consciousness is absurd or not (and I agree
with Russell that's too much of a stretch of intuition to judge);


There is no stretch of intuition, the MGA shows that you need to put
magic in the primitive matter to make it play a role in the
consciousness of physical events.



Where does the MGA show this? I don't believe you use the word magic
in any of your papers on the MGA.


Good point (if true; no time to verify, but it seems the idea is there). I will use 
"magic" in some next publication. At some point, when we apply logic to reality, we have 
to invoke magic, as with magic you can always defend a theory: the Earth is 
flat, it is just the photons that have very weird trajectories ...


I agree that I take for granted that in science we don't make any ontological commitment, 
so there is no proof at all about reality.


Then why do you say that supervenience of consciousness on physics has something to do 
with assuming physics is based on ur-stuff?


Just that comp1 -> comp2; that is, physics, assuming comp1, is not the fundamental 
science, as it makes consciousness supervene on all computations in the sigma_1 
reality, but with the FPI, not through particular possible emulations, although they 
have to be justified above the substitution level.


It was also clearly intended that primitive-physical supervenience entails that the 
movie will support the same conscious experience as the one supported by the 
boolean graph. Indeed the point of Maudlin is that we can eliminate almost all physical 
activity while keeping the counterfactual correctness (by the inert Klara), making the 
primitive-supervenience thesis (Aristotelianism, physicalism) more absurd.






Sorry, but this does seem a rhetorical comment.


Who would have thought that?

I think you might underestimate the easiness of step 8, which addresses only persons 
believing that there is a substantially real *primitive* physical universe (whose 
existence we have to assume as an axiom in the fundamental TOE), and that it is the 
explanation of why we exist, have minds and are conscious.


That consciousness supervenes on the physical that we might extract from comp, that is 
indeed what would follow if the physical does what it has to do: give the right measure 
on the relative computational histories.


It is for those who, like Peter Jones, and perhaps Brent and Bruce, at step 7 say that 
the UD needs to be executed in a primitive physical universe (to get the measure 
problem), with the intent to save physicalism.


I don't see anything in the MGA that makes it specific to a *primitive* physics.  It just 
refers to ordinary physical realizations of computations, and so whatever it concludes 
applies to ordinary physics.  And ordinary physics doesn't depend on some assumption of 
primitive materialism - as evidenced by physicists like Wheeler and Tegmark, who speculate 
about what makes the equations work.


Brent



If you get the UDA1-7 problem, the MGA makes no point. It only shows that physicalism or 
matter per se does not provide 

Re: What does the MGA accomplish?

2015-05-14 Thread Bruno Marchal


On 14 May 2015, at 08:08, Bruce Kellett wrote:


Bruno Marchal wrote:

On 13 May 2015, at 14:08, Bruce Kellett wrote:
So you claim that there is a contradiction between physical  
supervenience and comp.
Yes. Between primitive-physical supervenience (as this what is at  
stake).


I cannot allow that this move is legitimate. The MGA does not at any  
point refer to the basic ontology,


Read my papers and long texts. I talk only about that. I search for the  
simplest ontology (realm) to explain both matter and mind, in a  
way which makes it possible to formulate the mind-body  
problem mathematically (that is: the body-appearance problem in sigma_1 arithmetic)


Why take this negative outlook all the time? I am not criticizing  
physics, only the statement that physics is the fundamental science  
(physicalism). You need to grasp why.


nor does it at any point make a distinction between what is true for  
a matter ontology as opposed to a Platonic ontology. The argument  
works in exactly the same way for both, so if you dismiss a physical  
ontology on the basis of this argument, you must, in logic, also  
dismiss the arithmetical ontology.



Where is the flaw? I answered your question.





Comp is either false, or it is incoherent.



I guess you mean comp2.

But where is the flaw in the comp1 -> comp2 argument?

Your previous arguments were based on a misunderstanding of what  
computations are, it seemed to me.
Just try to ask questions where you don't understand. You often made  
correct points, but not ones relevant for invalidating the argument.


Bruno





Bruce



http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-14 Thread meekerdb

On 5/14/2015 10:33 AM, Bruno Marchal wrote:
Then the math confirms this, as proved by the Löbian universal machine itself: on the 
sigma_1 p, the first-person variants of the 3p G ([]p), that is []p & p, []p & <>t, and 
[]p & <>t & p, provide a quantization (namely [i]<i>p, with [i] being the corresponding 
modality of the variants). 


Went over my head.  Can you expand on that?

Brent



Re: What does the MGA accomplish?

2015-05-14 Thread Bruno Marchal


On 14 May 2015, at 07:25, Bruce Kellett wrote:


meekerdb wrote:

On 5/13/2015 5:32 PM, Russell Standish wrote:

On Thu, May 14, 2015 at 11:26:17AM +1200, LizR wrote:
On 13 May 2015 at 18:20, Russell Standish li...@hpcoders.com.au  
wrote:


For a robust ontology, counterfactuals are physically  
instantiated,

therefore the MGA is invalid.

Can you elaborate on this? ISTM that counterfactuals aren't, and  
indeed
can't, be physically instantiated. (Isn't that what being  
counterfactual

means?!)
No - counterfactual just means 'not in this universe'. If it's not in  
any universe, then it's not just counterfactual, but actually  
illogical, or impossible, or something.
If 'not in any universe' is meant in the Kripke sense, then  
something not in any universe is something that is logically  
impossible. But if 'not in any universe' is meant in the MWI  
sense, then counterfactuals are only those outcomes consistent with  
QM but which don't happen. I think it is only the latter kind of  
counterfactual that need be considered in computations.


No. The counterfactuals that Bruno refers to in comp seem to come  
from the If A the B else C construction of computer programming.  
This puts no restriction on the worlds containg B and C. So it  
actually has nothing whatsoever to do with MWI. As you say, the  
possible alternative worlds in MWI come from the eigenfunctions of  
an eigenselected basis, and those are by no means arbitrary.


Right, but the UDA explains that it is not just an affair of counting  
or weighting computations; it is also an affair of living them, or  
observing them, or betting on them, according to the first or  
third or other points of view, which eventually are self-referential.  
Then the logic of self-reference, non trivial (by Gödel, Löb, Solovay  
and others), puts a mathematical structure on the set of computations.



Then it is interesting that the definition of the 'knower' by  
Theaetetus, in arithmetic, and in this comp setting, gives an  
intuitionistic, solipsistic, anti-mechanist, and unnameable first  
person, obeying the ancient soul theory in its development from  
Pythagoras, Parmenides, Plato, Plotinus, Porphyry and Proclus.


Then on p sigma_1, restricting ourselves to the UD, or to the sigma_1  
arithmetical reality, we get a quantum logic on those first person  
(sometimes plural) points of view.


Yes, the FPI has nothing to do a priori with the global comp FPI (on  
the UD, or on the sigma_1 arithmetical reality).
But if both are true, comp explains how and why to derive QM (and  
perhaps QM + space-time-gravitation) from arithmetic. The 'why' is  
that in that (re)discovery you have the tool to separate, as far as  
it is possible, the communicable/justifiable truths from the non  
communicable/non justifiable ones.


Bruno




Bruce



http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-14 Thread Bruno Marchal


On 14 May 2015, at 08:12, Bruce Kellett wrote:


meekerdb wrote:

On 5/13/2015 10:25 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/13/2015 5:32 PM, Russell Standish wrote:

On Thu, May 14, 2015 at 11:26:17AM +1200, LizR wrote:
On 13 May 2015 at 18:20, Russell Standish  
li...@hpcoders.com.au wrote:


For a robust ontology, counterfactuals are physically  
instantiated,

therefore the MGA is invalid.

Can you elaborate on this? ISTM that counterfactuals aren't,  
and indeed
can't, be physically instantiated. (Isn't that what being  
counterfactual

means?!)
No - counterfactual just means 'not in this universe'. If it's not  
in any universe, then it's not just counterfactual, but actually  
illogical, or impossible, or something.


If 'not in any universe' is meant in the Kripke sense, then  
something not in any universe is something that is logically  
impossible. But if 'not in any universe' is meant in the MWI  
sense, then counterfactuals are only those outcomes consistent  
with QM but which don't happen. I think it is only the latter  
kind of counterfactual that need be considered in computations.


No. The counterfactuals that Bruno refers to in comp seem to come  
from the If A the B else C construction of computer programming.  
This puts no restriction on the worlds containg B and C.
That would seem to create conundrums.  The counterfactual is A  
taking a value other than the one it actually did.  A is one of the  
inputs to the prosthetic brain part, so in practice the doctor  
would only consider a finite number of values of A that could be  
realized by the sense organ or other brain parts that realize it.  
But if A can be anything from platonia it could be If this program  
X halts... or The smallest even integer not the sum of two primes.


If you read around the typos in my post above, the counterfactual  
if...then...else... construction branches on the input A, which is  
not necessarily *anything* at all, just what the program encounters.  
It is B and C that can be anything that the programmer wants them to  
be. We are talking computing here, not physics.
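To make the computing notion concrete, here is a minimal sketch (the function name and the values "B" and "C" are purely illustrative, not from the thread): a counterfactually correct component branches on its input and would have produced the other output had the input differed, while a recording of one run only replays the branch that actually occurred.

```python
def component(a):
    # Counterfactually correct: branches on its input, so it also
    # handles the input that did not in fact occur.
    if a:
        return "B"
    else:
        return "C"

# A "recording" of one actual run stores only the branch taken;
# it has no answer for the counterfactual input.
actual_input = True
recording = {actual_input: component(actual_input)}

print(component(True))      # the factual branch: B
print(component(False))     # the counterfactual branch: C
print(False in recording)   # the recording lacks it: False
```

The point the sketch tries to capture is that counterfactual correctness is a property of the program text, not of which worlds physically exist.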


Indeed.


That is why the other worlds of MWI have nothing to do with  
counterfactual correctness.



Yes and no, as it is a very difficult subject. The problem is mainly  
with past false antecedents: does a sentence like 'if Hitler had not  
been born, the Nazis would have got the atomic bomb before the  
others, by 1960' have any meaning?


Now, with the MWI, plus an ultra-quantum computer, we might imagine a  
day when a detailed quantum emulation of that part of the earth might  
suggest that in 80% of the realities where Hitler was not born, the  
Nazis (sometimes under another name) got the atomic bomb before the  
others, and in 30% of those situations used it.


The normal reaction, before such technology exists, is in the French  
saying: 'avec des si et des mais on met Paris en bouteille' (with ifs  
and buts one can put Paris in a bottle).


So the MWI might still bear relations to some types of  
counterfactuals. Like comp itself: if you can know that you would  
have done something differently, you would most plausibly belong to  
different computations.


Note the ethical problem: can we simulate earth at the substitution  
level of its inhabitants?


Bruno












Bruce



http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-14 Thread Bruno Marchal


On 14 May 2015, at 07:57, Bruce Kellett wrote:


Russell Standish wrote:

On Thu, May 14, 2015 at 02:51:00PM +1000, Bruce Kellett wrote:

But we are always going to have difficulty assigning a truth value
to a counterfactual like: The present king of France has a beard.


I would expect that somewhere in the Multiverse, France still has a
king in 2015, and has a beard, and somewhere else where he doesn't.


If you are talking of a separate universe outside our Hubble volume  
then you are going to have difficulty defining exactly what you mean  
by 'now' or '2015'. The other worlds of the MWI are not even in the  
same universe, so you are going to have even more difficulty in  
assigning a truth value to the proposition.


Philosophical 'possible worlds' are quite distinct from multiverse  
ideas.


Absolutely, but that is why it is interesting if some philosophical  
hypothesis (comp) can force a relation between them, like making QM  
the measure of the computation (in the classical-Church-Turing sense).


Then the math confirms this, as proved by the Löbian universal machine  
itself: on the p sigma_1, the first person variants of the 3p G  
([]p), that is []p & p, []p & <>t, and []p & <>t & p, provide a  
quantization (namely [i]<i>p, with [i] being the corresponding  
modality of the variants).
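Spelled out in standard modal notation (my transcription, following Marchal's usual presentation of the Theaetetus variants; treat it as a sketch rather than the thread's own wording), with \Box{} for the G provability modality ([]) and \Diamond{} for its dual:

```latex
% Theaetetus variants of provability, for p restricted to \Sigma_1 sentences:
% knower, observer, feeler respectively.
\[
  [1]p \;\equiv\; \Box p \land p, \qquad
  [2]p \;\equiv\; \Box p \land \Diamond\top, \qquad
  [3]p \;\equiv\; \Box p \land \Diamond\top \land p
\]
% The "quantization" is the modality [i]\langle i\rangle p, where [i] is
% the variant modality and \langle i\rangle its dual.
```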


I think that with comp, Heisenberg-Pauli-Fuchs are as correct as  
Everett. The quantum, and also thermodynamics, are already bridges  
to a theory of mind, and with comp, eventually to a theory of numbers  
and other CT-universal systems.


Bruno



Bruce



http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-14 Thread Bruno Marchal


On 14 May 2015, at 06:51, Bruce Kellett wrote:


Russell Standish wrote:

On Wed, May 13, 2015 at 10:33:42PM +1000, Bruce Kellett wrote:

Bruno Marchal wrote:

On 13 May 2015, at 08:20, Russell Standish wrote:


For a
robust ontology, counterfactuals are physically instantiated,
therefore the MGA is invalid.

I don't see this. The  if A then B else C can be realized in a
newtonian universe, indeed in the game of life or c++.
And the duplication of universe, one where A is realized, and B  
is realized

+ one in which A is not realized and C is realized,
might NOT makes a non counterfactually correct version of that
if-then-else suddenly counterfactually correct.

Counterfactuals and MWI (and robustnesse) are a priori  
independent notions.

For once I agree with Bruno. I think this is right and that
counterfactuals are not, as Russell suggests, instantiated in the
other worlds of the MWI in any useful sense.

How does that work? Are you saying the counterfactual situations  
never appear anywhere in the Multiverse? What principle prevents  
their occurrence?


By the meaning of the term: *counter*factual, i.e., contrary to the  
facts of the situation. As far as I know there is a philosophical  
theory of counterfactuals based on possible worlds.


Yes. The study of counterfactuals often uses modal logic, and  
counterfactuality is often considered a modal notion. The problem  
here lies partly in the fact that logicians, physicists, and  
philosophers use the same words with different meanings, and here we  
need to unify the ideas, so it is easy to get confused.



But these are generally thought to be imaginary. And my feeling is  
that the 'other worlds' of the MWI, or other Hubble volumes, etc.,  
are just philosophical possible other worlds. We can say anything  
about them that we like because it can never be checked -- they are  
physically inaccessible in principle.


But with comp, that is not enough. Even if a physically existing but  
non-accessible physical reality realizes a computation similar to a  
continuation of you, it might change the measure, in principle.






But we are always going to have difficulty assigning a truth value  
to a counterfactual like: The present king of France has a beard.


It will indeed depend on the interpretation. But if you say it is a  
counterfactual, it should have the form: 'if there is a king of  
France, then he has a beard'. That is trivially true in classical  
logic (assuming France is really a republic), but classical logic is  
not suited for counterfactuals: by its lights they are all true,  
since a false antecedent implies everything ('if pigs can fly, I am  
Napoleon!', for example). For counterfactuals you need special  
classical modal logic(s), or some sort of relevance logic.
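The 'false implies everything' behaviour of classical material implication can be checked mechanically; a small sketch (illustrative only, not from the thread):

```python
def implies(p, q):
    # Classical material implication: p -> q is false only when
    # p is true and q is false.
    return (not p) or q

# A false antecedent ("pigs can fly") makes the implication true
# regardless of the consequent ("I am Napoleon"):
print(implies(False, True))    # True
print(implies(False, False))   # True
# Only a true antecedent with a false consequent falsifies it:
print(implies(True, False))    # False
```

This is exactly why classical logic trivializes counterfactuals and a modal or relevance treatment is needed instead.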


Bruno




Bruce

--
You received this message because you are subscribed to the Google  
Groups Everything List group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.

To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-14 Thread Bruno Marchal


On 14 May 2015, at 02:50, Russell Standish wrote:


On Wed, May 13, 2015 at 02:33:06PM +0200, Bruno Marchal wrote:





3. A recording of (2) supra being played back.


Nobody would call that a computation, except to evade comp's
consequences.



I do, because it is a computation, albeit a rather trivial one.


Yes, like a rock, in some theory of rock. It is not relevant for the  
argument.





It is
not to evade comp's consequences, however, which I already accept from
UDA1-7.


OK.





I insist on the point, because the MGA is about driving an
inconsistency between computational and physical supervenience, which
requires care and rigour to demonstrate, not careless mislabelling.


If you agree with the consequences of UDA1-7, then you don't need  
step 8 (MGA) to understand the epistemological inconsistency between  
computational supervenience and primitive-physical supervenience  
(assumed, often implicitly, by the Aristotelians).


So I see where your problem comes from: you might believe that step 8  
shows that physical supervenience is wrong (not just the primitive  
one). But that would be astonishing, because physical supervenience  
seems to me to be contained in the definition of comp, which refers  
to a doctor with a physical body, who will reinstall my mind in a  
digital and physical machine.


Step 8 just shows an epistemological contradiction between comp and a  
primitive or physicalist notion of matter.


The contradiction is epistemological. It dissociates what we observe  
from that primitive matter, unless you attribute a magical  
clairvoyance ability to Olympia (feeling the inactive Klara nearby).









Whether (3) preserving consciousness is absurd or not (and I agree  
with Russell that's too much of a stretch of intuition to judge);


There is no stretch of intuition: the MGA shows that you need to put  
magic in the primitive matter to make it play a role in consciousness  
supervening on physical events.



Where does the MGA show this? I don't believe you use the word magic
in any of your papers on the MGA.


Good point (if true; no time to verify, but it seems the idea is  
there). I will use 'magic' in some next publication. At some point,  
when we apply logic to reality, we have to invoke the magic: with  
magic, you can always suggest a theory is wrong. The earth is flat;  
it is just the photons that have very weird trajectories ...


I agree that I take for granted that in science we make no  
ontological commitment, so there is no proof at all about reality.  
Just that comp1 -> comp2: physics, assuming comp1, is not the  
fundamental science, as comp makes consciousness supervene on all  
computations in the sigma_1 reality, through the FPI, not on  
particular emulations, although those have to be justified above the  
substitution level.


It was also clearly intended that primitive-physical supervenience  
entails that the movie supports the same conscious experience as the  
one supported by the boolean graph. Indeed Maudlin's point is that we  
can eliminate almost all physical activity while keeping the  
counterfactual correctness (by the inert Klara), making the  
primitive-supervenience thesis (Aristotelianism, physicalism) more  
absurd.






Sorry, but this does seem a rhetorical comment.


Who would have thought that?

I think you might underestimate the easiness of step 8, which  
addresses only those who believe that there is a substantially real  
*primitive* physical universe (whose existence we would have to  
assume as an axiom in the fundamental TOE), and that it is the  
explanation of why we exist, have minds and are conscious.


That consciousness supervenes on the physical that we might extract  
from comp: that is indeed what would follow if the physical does what  
it has to do, namely give the right measure on the relative  
computational histories.


It is for those, like Peter Jones, perhaps Brent and Bruce, who at  
step 7 say that the UD needs to be executed in a primitive physical  
universe (to get the measure problem), with the intent of saving  
physicalism.


If you get the UDA1-7 problem, the MGA adds nothing new. It only  
shows that physicalism, or matter per se, does not provide a solution  
(without adding non-Turing-emulable and non-FPI-recoverable magic).



Bruno






--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au



Re: What does the MGA accomplish?

2015-05-14 Thread LizR
On 14 May 2015 at 17:13, meekerdb meeke...@verizon.net wrote:

 On 5/13/2015 5:32 PM, Russell Standish wrote:

 But if 'not in any universe' is meant in the MWI sense, then
 counterfactuals are only those outcomes consistent with QM but which don't
 happen.  I think it is only the latter kind of counterfactual that need be
 considered in computations.


What is an outcome consistent with QM which doesn't happen?



Re: What does the MGA accomplish?

2015-05-14 Thread LizR
On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net wrote:

 I'm trying to understand what counterfactual correctness means in the
 physical thought experiments.


You and me both.



Re: What does the MGA accomplish?

2015-05-14 Thread spudboy100 via Everything List
Photons can re-combine? So they are unlike electrons or positrons, which, like a 
magnet, repel like charges. Based on your description, Liz, then somewhere in 
the universe are glowing soft-white piles of Photonium? Ah! The creamy nougat 
center of every barred spiral galaxy!  

Sent from AOL Mobile Mail


-Original Message-
From: LizR lizj...@gmail.com
To: everything-list everything-list@googlegroups.com
Sent: Thu, May 14, 2015 08:39 PM
Subject: Re: What does the MGA accomplish?






Re: What does the MGA accomplish?

2015-05-14 Thread LizR
On 14 May 2015 at 14:40, Russell Standish li...@hpcoders.com.au wrote:


 The physical system refers to all parallel instantiations of an
 object ISTM.

 If I refer to a photon travelling through a ZM apparatus (to fix
 things - you know two half silvered mirrors, so the photons are split
 and travel over two spatially disinct paths before being recombined),
 we don't have two different physical systems in play. Its just
 the one physical system, even though it occupies two distinct universes.


The difference is that the photons *can* recombine, hence their states in
different branches have a physical influence no the final outcome. But the
processes leading to conciousness can't, normally, reconmbine, because
brain-stuff decoheres far more rapidly than the timescales on which
consciousness takes place. Hence counterfactuals aren't involved in
consciousness, at least not ifwe assume consciousness supervenes on a
physical system obeying QM.



Re: What does the MGA accomplish?

2015-05-14 Thread LizR
Oops - I meant on the final outcome, of course - my fingers insist on
reversing the order of letters, and sometimes I don't notice.

On 15 May 2015 at 12:39, LizR lizj...@gmail.com wrote:

 On 14 May 2015 at 14:40, Russell Standish li...@hpcoders.com.au wrote:


 The physical system refers to all parallel instantiations of an
 object ISTM.

 If I refer to a photon travelling through a ZM apparatus (to fix
 things - you know two half silvered mirrors, so the photons are split
 and travel over two spatially disinct paths before being recombined),
 we don't have two different physical systems in play. Its just
 the one physical system, even though it occupies two distinct universes.


 The difference is that the photons *can* recombine, hence their states in
 different branches have a physical influence no the final outcome. But the
 processes leading to conciousness can't, normally, reconmbine, because
 brain-stuff decoheres far more rapidly than the timescales on which
 consciousness takes place. Hence counterfactuals aren't involved in
 consciousness, at least not ifwe assume consciousness supervenes on a
 physical system obeying QM.





Re: What does the MGA accomplish?

2015-05-14 Thread Bruce Kellett

LizR wrote:
On 15 May 2015 at 06:34, meekerdb meeke...@verizon.net 
mailto:meeke...@verizon.net wrote:


I'm trying to understand what counterfactual correctness means in
the physical thought experiments.

You and me both. 


Yes. When you think about it, 'counterfactual' means that the antecedent 
is false. So Bruno's referring to the branching 'if A then B else C' 
construction of a program is not really a counterfactual at all, since 
to be a counterfactual A *must* be false. So the counterfactual 
construction is 'A then C', where A happens to be false.


The role of this in consciousness escapes me too.

Bruce



Re: What does the MGA accomplish?

2015-05-14 Thread Bruce Kellett

spudboy100 via Everything List wrote:
Photons can re-combine? So they are unlike electrons or positrons, which 
like a magnet, repel like charges.


Electrons can recombine too. Just think of the two-slit experiment with 
electrons -- we see only one spot on the screen. It is all part of the 
meaning of a superposition in quantum mechanics.
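Numerically, recombination just means adding amplitudes before squaring; a minimal sketch (the amplitude values are illustrative, not normalized to any particular apparatus):

```python
# Complex amplitudes contributed by the two paths (illustrative values).
a1 = 0.5 + 0j
a2 = 0.5 + 0j

# Branches recombine: add amplitudes, then square the magnitude.
p_bright = abs(a1 + a2) ** 2               # constructive interference
# No recombination: add the probabilities of each branch separately.
p_classical = abs(a1) ** 2 + abs(a2) ** 2
# Opposite phase: the branches cancel.
p_dark = abs(a1 - a2) ** 2                 # destructive interference

print(p_bright, p_classical, p_dark)       # 1.0 0.5 0.0
```

The gap between p_bright (or p_dark) and p_classical is exactly what distinguishes recombining branches from branches that have decohered.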


Bruce



Re: What does the MGA accomplish?

2015-05-14 Thread Bruce Kellett

Bruno Marchal wrote:

On 13 May 2015, at 14:08, Bruce Kellett wrote:

So you claim that there is a contradiction between physical 
supervenience and comp.


Yes. Between primitive-physical supervenience (as that is what is at stake).


I cannot allow that this move is legitimate. The MGA does not at any 
point refer to the basic ontology, nor does it at any point make a 
distinction between what is true for a matter ontology as opposed to a 
Platonic ontology. The argument works in exactly the same way for both, 
so if you dismiss a physical ontology on the basis of this argument, you 
must, in logic, also dismiss the arithmetical ontology.


Comp is either false, or it is incoherent.

Bruce



Re: What does the MGA accomplish?

2015-05-14 Thread Bruce Kellett

meekerdb wrote:

On 5/13/2015 10:25 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/13/2015 5:32 PM, Russell Standish wrote:

On Thu, May 14, 2015 at 11:26:17AM +1200, LizR wrote:
On 13 May 2015 at 18:20, Russell Standish li...@hpcoders.com.au 
wrote:



For a robust ontology, counterfactuals are physically instantiated,
therefore the MGA is invalid.

Can you elaborate on this? ISTM that counterfactuals aren't, and 
indeed
can't, be physically instantiated. (Isn't that what being 
counterfactual

means?!)

No - counterfactual just means 'not in this universe'. If it's not in any
universe, then it's not just counterfactual, but actually illogical, or
impossible, or something.


If 'not in any universe' is meant in the Kripke sense, then something 
not in any universe is something that is logically impossible.  But 
if 'not in any universe' is meant in the MWI sense, then 
counterfactuals are only those outcomes consistent with QM but which 
don't happen.  I think it is only the latter kind of counterfactual 
that need be considered in computations.


No. The counterfactuals that Bruno refers to in comp seem to come from 
the If A the B else C construction of computer programming. This 
puts no restriction on the worlds containg B and C. 


That would seem to create conundrums.  The counterfactual is A taking a 
value other than the one it actually did.  A is one of the inputs to the 
prosthetic brain part, so in practice the doctor would only consider a 
finite number of values of A that could be realized by the sense organ 
or other brain parts that realize it. But if A can be anything from 
platonia it could be "If this program X halts..." or "The smallest even 
integer not the sum of two primes".


If you read around the typos in my post above, the counterfactual 
if...then...else... construction branches on the input A, which is not 
necessarily *anything* at all, just what the program encounters. It is B 
and C that can be anything that the programmer wants them to be. We are 
talking computing here, not physics. That is why the other worlds of MWI 
have nothing to do with counterfactual correctness.
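The branching construction at issue can be made concrete with a toy sketch 
(Python; purely illustrative, nothing from the thread): counterfactual 
correctness is just the property that the program would also have handled 
the input that did not occur.

```python
def branching_program(a):
    # A genuine computation: it is defined on inputs it may never receive.
    # Whichever branch is not taken on a given run is the counterfactual one.
    if a:
        return "B"  # B can be anything the programmer wants
    else:
        return "C"  # and so can C: A itself puts no restriction on them

# A single run exercises only one branch...
assert branching_program(True) == "B"
# ...but the program is also correct on the input that did not occur:
assert branching_program(False) == "C"
```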


Bruce



Re: What does the MGA accomplish?

2015-05-14 Thread LizR
On 15 May 2015 at 14:19, spudboy100 via Everything List 
everything-list@googlegroups.com wrote:

 Photons can re-combine? So they are unlike electrons or positrons, which
 like a magnet, repel like charges. Based on your description, Liz, then
 somewhere in the universe, are glowing soft-white piles of Photonium? Ah!
 The creamy nougat center of every barred spiral galaxy!

 In the two-slit experiment, yes.



Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 06:28, Russell Standish wrote:


On Wed, May 13, 2015 at 02:15:59PM +1000, Stathis Papaioannou wrote:


Not necessarily. Simulated beings could be conscious with their  
simulated

brains.



In which case their consciousness supervenes on their simulated
physics.


Simulated beings could be conscious with their simulated brains in  
arithmetic.





This is still physical supervenience,


yes, even when the brains are simulated in arithmetic, as to get the  
right measure, that simulation will have to have the right relative  
measure.





of the sort Bruce was
talking about.


I think he was using primitive-physical supervenience.

bruno






--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 01:26, Bruce Kellett wrote:


meekerdb wrote:

On 5/11/2015 11:14 PM, Bruce Kellett wrote:


[BM] Why?  Have you proven that consciousness supervenes on a  
record?


Have you proven that it does not?
No, but I have a lot of evidence it supervenes on brain /*processes*/. 
Reducing that to /*states*/ is a further assumption.


That is the pedant's reply. :-)
A process reduces to a sequence of states -- you simply lower the  
substitution level (step rate) to whatever value is necessary to  
reproduce the process FAPP.


The assumption of the argument was that consciousness supervenes  
on the brain state.
That's not the same as saying yes to the doctor.  It's your added  
interpretation that consciousness supervenes on a brain state as  
opposed to a brain process that constitutes a computation.  Bruno,  
who made the argument, I think is relying on the latter.


Yes, that seems to be the case. The original claim of absurdity for  
the idea that consciousness could supervene on a recording has been  
replaced by the claim that the recording is not a computation of the  
required kind. This also begs the question of course -- where is it  
proved that that particular type of computation is both necessary  
and sufficient for consciousness?


However, I think one can approach this in a different way. The  
overwhelming evidence from neuroscience, and all related  
experimentation, is that consciousness supervenes on the physical  
brain -- the goo in our skulls. Damage the goo, stimulate the goo,  
do anything to the goo, and our qualia or consciousness are altered.  
Alter our consciousness/thinking/processing and there are associated  
changes in the brain activity/states. (Pet scans and the like.)


The MGA argues that the natural sequence of brain states and a  
recording of that sequence are not equivalent in that one is  
conscious and the other is not.


You still miss the point. MGA just shows that physical supervenience  
makes them equivalent, and as they are not equivalent (from the  
computer science point of view which is relevant with comp), physical  
supervenience has to be abandoned if we keep comp.




It is concluded from this that consciousness does not supervene on  
the brain states/processes,


A relief, because this is what has blocked progress toward a solution of  
the mind-body problem for 1500 years.



which conclusion is contradicted by the overwhelming bulk of  
experimental evidence.


Proof?

The fact that coffee can change my mind, and that my mind can change  
my brain, is part of the evidence for comp, not for the primitive  
physical supervenience thesis, whose main weakness at the start is that  
it assumes physicalism, primary matter, which are metaphysical concepts  
for which no real scientific evidence has ever been given. It is a  
strong assumption in theology. There is no evidence that there is a  
*primitive* physical universe, or that some laws of physics have to be  
assumed.





This is science. When your theory is contradicted by overwhelming  
experimental evidence,


One piece of evidence is enough.

But there is none. You can try to give one, but, as above, you will beg  
the question, as it seems you take the existence of a *primitive*  
physical universe for granted.



it is conventionally taken as evidence that your theory has been  
falsified. The MGA puts Bruno's theory in this category: it has been  
falsified by the experimental results.


Do you mean that comp has been falsified? My work shows that to  
falsify a classical version of comp, you need to find a difference of  
prediction between quantum logic (QL) and the logics S4Grz1, X1*, Z1*,  
or variants.


Or do you mean that there is a flaw in MGA?

Bruno





Bruce








Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 05:03, Bruce Kellett wrote:


meekerdb wrote:

On 5/12/2015 4:26 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/11/2015 11:14 PM, Bruce Kellett wrote:


[BM] Why?  Have you proven that consciousness supervenes on a  
record?


Have you proven that it does not?


No, but I have a lot of evidence it supervenes on brain /*processes*/. 
Reducing that to /*states*/ is a further assumption.


That is the pedant's reply. :-)
A process reduces to a sequence of states -- you simply lower the  
substitution level (step rate) to whatever value is necessary to  
reproduce the process FAPP.
No, a sequence of states is not the same as a process.  In a  
process the states in the sequence are causally related.


Need I quote Hume at you, Brent? That which we know as causality is  
nothing more than the constant conjunction of events. You make  
'causality' into a sort of dualist magic.


Well, thanks Bruce!





In playing back a *digitized* recording of states the causal  
relation is broken.  But, as I pointed out to Bruno, causal is a  
nomological, not logical, relation.  He, of course, disagreed.


The assumption of the argument was that consciousness supervenes  
on the brain state.


That's not the same as saying yes to the doctor.  It's your added  
interpretation that consciousness supervenes on a brain state as  
opposed to a brain process that constitutes a computation.   
Bruno, who made the argument, I think is relying on the latter.


Yes, that seems to be the case. The original claim of absurdity  
for the idea that consciousness could supervene on a recording has  
been replaced by the claim that the recording is not a computation  
of the required kind. This also begs the question of course --  
where is it proved that that particular type of computation is  
both necessary and sufficient for consciousness?
It's just hypothesized as implicit in saying yes to the doctor; one  
would only say yes if it were a counterfactually correct AI.


However, I think one can approach this in a different way. The  
overwhelming evidence from neuroscience, and all related  
experimentation, is that consciousness supervenes on the physical  
brain -- the goo in our skulls. Damage the goo, stimulate the goo,  
do anything to the goo, and our qualia or consciousness are  
altered. Alter our consciousness/thinking/processing and there are  
associated changes in the brain activity/states. (Pet scans and  
the like.)


The MGA argues that the natural sequence of brain states and a  
recording of that sequence are not equivalent in that one is  
conscious and the other is not. It is concluded from this that  
consciousness does not supervene on the brain states/processes,  
which conclusion is contradicted by the overwhelming bulk of  
experimental evidence.
I agree with you and Russell that it is not obvious that  
consciousness can't supervene on a playback of a recording.  But, I  
don't think there's any empirical evidence regarding recordings of  
brains.  In fact one of Russell's points is that the fact that such  
a recording would be so large and detailed is a reason not to trust  
intuitions about whether it could be conscious.


C'mon, Brent. It's a thought experiment. The fact that we don't have  
experimental evidence of conscious recordings is irrelevant to this  
particular thought experiment.


Again! OK. Good.





This is science. When your theory is contradicted by overwhelming  
experimental evidence, it is conventionally taken as evidence that  
your theory has been falsified. The MGA puts Bruno's theory in  
this category: it has been falsified by the experimental results.
Would that it were so.  But so far as I can see Bruno's theory  
doesn't make any definite predictions that can be empirically  
tested.  It explains a few things: quantum randomness=FPI and you  
can't know what program you are.  But these things also have other  
possible explanations and they were already known.


Bruno does make a prediction that can be empirically tested. He  
predicts that consciousness does not supervene on physical brains  
but on computations. The MGA purports to show that the assumption of  
physical supervenience leads to a contradiction. But supervenience  
of consciousness on brains is an indisputable empirical result, so  
the MGA works against comp.


Consciousness is not testable, ever. But the UDA+MGA can be translated  
into arithmetic, by using mainly Gödel's technique, and this leads to  
the extraction of physics. Just accepting a very classical account of  
knowledge (by Theaetetus), we can, and have, already derived the  
propositional physics. We found quantum logic, up to now.
So UDA predicts and explains the appearance of the MWI, for almost all  
universal machines, and
AUDA makes it possible to verify this mathematically; it predicts and  
explains the quantum logic from just the Peano axioms of arithmetic.


MGA would work against comp, if Gödel's and Everett's 

Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 07:03, meekerdb wrote:


On 5/12/2015 8:03 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/12/2015 4:26 PM, Bruce Kellett wrote:

meekerdb wrote:

On 5/11/2015 11:14 PM, Bruce Kellett wrote:


[BM] Why?  Have you proven that consciousness supervenes on a  
record?


Have you proven that it does not?


No, but I have a lot of evidence it supervenes on brain /*processes*/. 
Reducing that to /*states*/ is a further assumption.


That is the pedant's reply. :-)
A process reduces to a sequence of states -- you simply lower the  
substitution level (step rate) to whatever value is necessary to  
reproduce the process FAPP.


No, a sequence of states is not the same as a process.  In a  
process the states in the sequence are causally related.


Need I quote Hume at you, Brent? That which we know as causality is  
nothing more than the constant conjunction of events. You make  
'causality' into a sort of dualist magic.


Whatever it is, it's what Bruno introduces to distinguish  
computation from a playback of computation.  I find the idea of  
states of an extended body like the brain problematic.  The speed of  
light is finite and the speed of neurons is slow; so to model the  
state as you propose means modeling it down to microseconds or finer  
in order to capture the signaling relation between different neurons  
as their axons transmit pulses across several cm.  This is way below  
anything that might be considered a 'thought' or a 'conscious  
moment', so the latter have spatial extent and temporal overlap. To  
conceive them as separate discrete states is already to concede that  
consciousness is in platonia.


This means that if you grasped what a computation is, in the  
Church-Turing sense, you would, like some computer scientists,  
disregard the necessity of the MGA. With comp, the ontology is  
discrete. The continuum is recovered only in the mind of the numbers,  
like, eventually, the physical laws.


But the reason why I distinguish a computation from a playback is  
that a playback computes only trivial projections, or arbitrary  
computations, while the boolean graph computes quite complex and  
specific relations.


This entails that consciousness is related to the (immaterial) number  
relations, and *all* their relative implementations, not just one  
specific, still less based on the dubious (never defined) primitive  
matter.


Bruno





Brent




In playing back a *digitized* recording of states the causal  
relation is broken.  But, as I pointed out to Bruno, causal is a  
nomological, not logical, relation.  He, of course, disagreed.




The assumption of the argument was that consciousness  
supervenes on the brain state.


That's not the same as saying yes to the doctor.  It's your  
added interpretation that consciousness supervenes on a brain  
state as opposed to a brain process that constitutes a  
computation.  Bruno, who made the argument, I think is relying  
on the latter.


Yes, that seems to be the case. The original claim of absurdity  
for the idea that consciousness could supervene on a recording  
has been replaced by the claim that the recording is not a  
computation of the required kind. This also begs the question of  
course -- where is it proved that that particular type of  
computation is both necessary and sufficient for consciousness?


It's just hypothesized as implicit in saying yes to the doctor;  
one would only say yes if it were a counterfactually correct AI.




However, I think one can approach this in a different way. The  
overwhelming evidence from neuroscience, and all related  
experimentation, is that consciousness supervenes on the physical  
brain -- the goo in our skulls. Damage the goo, stimulate the  
goo, do anything to the goo, and our qualia or consciousness are  
altered. Alter our consciousness/thinking/processing and there  
are associated changes in the brain activity/states. (Pet scans  
and the like.)


The MGA argues that the natural sequence of brain states and a  
recording of that sequence are not equivalent in that one is  
conscious and the other is not. It is concluded from this that  
consciousness does not supervene on the brain states/processes,  
which conclusion is contradicted by the overwhelming bulk of  
experimental evidence.


I agree with you and Russell that it is not obvious that  
consciousness can't supervene on a playback of a recording. But, I  
don't think there's any empirical evidence regarding recordings of  
brains.  In fact one of Russell's points is that the fact that  
such a recording would be so large and detailed is a reason not to  
trust intuitions about whether it could be conscious.


C'mon, Brent. It's a thought experiment. The fact that we don't  
have experimental evidence of conscious recordings is irrelevant to  
this particular thought experiment.


But I think it's jumping to a conclusion to say the supervenience on  
brain activity is overwhelming 

Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 06:24, Russell Standish wrote:


On Wed, May 13, 2015 at 09:26:02AM +1000, Bruce Kellett wrote:


This is science. When your theory is contradicted by overwhelming
experimental evidence, it is conventionally taken as evidence that
your theory has been falsified. The MGA puts Bruno's theory in this
category: it has been falsified by the experimental results.



I don't see that, because AFAICT, the MGA only works for a non-robust
ontology. So the only valid conclusion to draw is that COMP +
non-robustness has been falsified by the experimental results. Which
is what I state in my paper.


COMP assumes of course at least a robust reality (N, +, *).

MGA is just used for people believing that the UD needs to be  
executed *physically* (i.e. they need a robust physical universe). MGA  
does not show that to be illogical, but it shows that physicalism  
and/or primitive matter invokes a god-of-the-gaps.


Bruno















Re: What does the MGA accomplish?

2015-05-13 Thread David Nyman
On 13 May 2015 at 12:19, Bruno Marchal marc...@ulb.ac.be wrote:

The fact that coffee can change my mind, and that my mind can change my
 brain, is part of the evidence for comp, not for the primitive physical
 supervenience thesis, whose main weakness at the start is that it assumes
 physicalism, primary matter, which are metaphysical concepts for which no
 real scientific evidence has ever been given. It is a strong assumption
 in theology. There is no evidence that there is a *primitive* physical
 universe, or that some laws of physics have to be assumed.


So IIUC, in your terminology, 'primitive physicalism' just stands for the
assumption that some definite 'laws of physics' are more basic than
anything else. If so, on that assumption, such laws would of
necessity be the ultimate basis of any effective computation (i.e. in some
physical approximation). The MGA then points out that in principle we can
always devise ways to preserve the purely physical dispositions of any
given approximate realisation (by fortuitous or deliberate one-time
interventions) even in circumstances where any or all of its original
computational characteristics have been grossly disrupted.

MGA then argues that, if conscious experience fundamentally depends on
preservation of such physical dispositions, we should thereby conclude that
it should be unaffected in such scenarios. But the problem is that the
interventions cannot be guaranteed to preserve the original 'computational'
architecture (in particular, its counter-factual capabilities). Hence it
would seem that, on the one hand, that if consciousness supervenes on
particular physical dispositions of the brain it should be preserved, but
on the other, if it depends on the particular *computational*
characteristics of such dispositions, it could not be (since these can
always be disrupted or simplified). It is the incompatibility of these two
views that forces a choice between the principles of physical and
computational supervenience.

It is argued in opposition to the rejection of physical supervenience that
it appears everywhere to be supported by observation. However, if two
observed phenomena (e.g. brain function and conscious experience) are found
to be in constant conjunction, an alternative to one or the other having a
 'primary' role would be that they both emanate from some common underlying
progenitor. Under computationalism, that role is subsumed by the entire
spectrum of computations below the substitution level of either (i.e. the
'computational everything').

Is that more or less your view?

David



Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 12 May 2015, at 22:27, meekerdb wrote:


On 5/12/2015 3:25 AM, Bruno Marchal wrote:

On 12 May 2015, at 02:33, Bruce Kellett wrote:

The fact that  
projecting the film isn't a general purpose computer seems to me  
to be a red herring. It was never claimed that projecting the film  
of the brain substrate instantiated general consciousness -- the  
only claim ever made here is that this projection recreates the  
conscious moment that was originally filmed. That is all that is  
required. General purpose computing and counterfactual correctness  
are all beside the point. If the original conscious moment is  
recreated, then the film is a computation in any sense that is  
necessary to produce a conscious moment. This is sufficient to  
undermine the claim that consciousness does not supervene on the  
physical body.


The matter of whether the physical is primitive or not is also a  
red herring. No such assumption is required in order to show that  
the MGA fails to prove its point.


It is a reductio ad absurdum. If consciousness requires the  
physical activity and only the physical activity, then the  
recording is conscious. But anyone who knows what a computation is  
should understand that the recording does not compute more than a  
trivial sequence of projections, which is not similar to the  
computation of the boolean graph.
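The distinction drawn here can be sketched in a few lines (Python; a toy 
illustration only, with invented names): a counterfactually correct 
computation versus a playback of one recorded run.

```python
def compute(x):
    # A (trivial) counterfactually correct computation:
    # defined, and correct, on every input in its domain.
    return x * x

# Film one particular run: the "movie" is just the recorded outputs.
trace = [compute(3)]          # the recording of the run on input 3

def playback(step):
    # The recording computes only a trivial projection: step -> trace[step].
    # It says nothing about any input other than the one that was filmed.
    return trace[step]

assert compute(3) == playback(0) == 9   # identical on the filmed run
assert compute(4) == 16                 # the computation handles the counterfactual
# playback has no step corresponding to input 4: the branching structure is gone.
```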


I think there are five concepts of computation in play here:

1. An abstract deterministic computer (TM) or program running with  
some given external input.  This program is assumed to have well  
defined behavior over a whole class of inputs, not just the one  
considered.


OK. That is the standard concept (although the computation does not  
have to be deterministic, but that is a detail here).






2. A classical (deterministic) physical computer realizing (1)  
supra.  This is what the doctor proposes to replace part or all of  
your brain.


Yes, and this involves physics. But this is no longer a computation in  
the sense of Church-Turing, which does not refer to physics at all.






3. A recording of (2) supra being played back.


Nobody would call that a computation, except to evade comp's  
consequences.






4. An execution of (1) with a classical (deterministic) computer  
that has all the branching points disabled so that it realizes (1)  
but is not counterfactually equivalent to (1) or (2).


This computes one epsilon more than the movie. That is, not a lot.



5. A physical (quantum) computer realizing (1) supra, in its  
classical limit.


That is the solution we hope for (as it would make comp and QM ally  
and very plausible).





Bruno takes (1) to define computation and takes the hypothesis that  
consciousness is realized by a certain kind of computation, an  
instance of (1).  So he says that if you believe this you will say  
yes to the doctor who proposes (2) as a prosthesis.  This  
substitution of a physical deterministic computer will  preserve  
your consciousness.  Then he proceeds to argue via the MGA that this  
implies your consciousness will not be affected by using (4) instead  
of (2)  and further that (4) is equivalent to (3) and (3) is absurd.


Having found a reductio, he wants to reject the assumption that your  
consciousness is realized by the physics of a deterministic computer  
as in (2).


Whether (3) preserving consciousness is absurd or not (and I agree  
with Russell that's too much of a stretch of intuition to judge);


There is no stretch of intuition; the MGA shows that you need to put  
magic in the primitive matter to make it play a role in the  
consciousness of physical events.





this is not the reversal of physics claimed.  The Democritan  
physicist (nothing but atoms and the void) will point out that (2)  
is not what the doctor can implement.


?


What is possible is realizing a prosthetic computation by (5).  And  
(5) cannot be truncated like (4); quantum mechanical systems can  
only be approximately classical and only when they are interacting  
with an environment.  The classical deterministic computer (TM) is a  
platonic ideal which, as far as we know, cannot be realized.


But then comp is false, as comp is a bet on surviving some digital  
truncation.





Now that doesn't invalidate Bruno just developing his theory of the  
UD and showing that it realizes QM and the wholistic quasi-classical  
physical behavior of macroscopic systems in some limit.


Right. In the original thesis, UDA and MGA are used only to explain the  
mind-body problem: the AUDA theory is explained as the main thing  
before them, and then used to solve the UD and MG paradoxes.


But I do think that they are strong arguments, and easier than AUDA;  
that's why I like to argue about this.



 But I don't think he can just help himself to the conclusion that  
there MUST BE some measure or some way of looking at the UD in which  
this is so because the MGA has refuted Democritus.


Only Democritus + (CT+YD).

Bruno



Brent


Re: What does the MGA accomplish?

2015-05-13 Thread Bruce Kellett

Bruno Marchal wrote:

On 13 May 2015, at 06:28, Russell Standish wrote:


In which case their consciousness supervenes on their simulated
physics.


Simulated beings could be conscious with their simulated brains in 
arithmetic.



This is still physical supervenience,


yes, even when the brains are simulated in arithmetic, as to get the 
right measure, that simulation will have to have the right relative 
measure.



of the sort Bruce was
talking about.


I think he was using primitive-physical supervenience.


I think this is where you misunderstand me, Bruno. You are ascribing to 
me a particular metaphysical position to which I do not necessarily 
subscribe. As has been said a few times, the basic ontology of physics 
is whatever our best physical theories tell us it is. This is not 
generally primitive matter, whatever that is.


In my criticism of the MGA, I am not committed to any particular 
ontology. I am simply pointing to the fact that the physical world 
exists independently of you or me, just as 2+2=4 exists independently of 
you or me. Our physical brains are part of this physical world, whether 
the basic ontology be quarks and electrons, quantum fields, or 
computations in Platonia. And our consciousness supervenes on these 
physical brains, however constituted -- the overwhelming weight of 
neurophysiological and other scientific evidence shows this.


As published, the MGA shows that *any* physical supervenience entails 
that replacing the brain by a recording of its activity will recreate 
the original conscious state. This is claimed to be absurd, since a 
recording does not consist of a computation of the kind required by 
comp, which says that a recording cannot be conscious. So you claim that 
there is a contradiction between physical supervenience and comp.


But the physical brain on which consciousness supervenes might well be 
itself a product of comp (and is, if you take the robust UD seriously). 
So you have shown that, either your whole theory is internally 
inconsistent, or else you have to abandon the supervenience of 
consciousness on brain goo, in contradiction to the empirical evidence.


If you allow that the recording can be conscious, then the MGA is 
toothless -- it does not accomplish anything. But in allowing a 
recording to be conscious, you have contradicted what I take to be one 
of your basic tenets of comp.


So comp is either false or it is incoherent.

Bruce



Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 01:03, Russell Standish wrote:


On Tue, May 12, 2015 at 12:19:02PM +0200, Bruno Marchal wrote:


Exactly. Regardless of truth, it is an interesting model that could
well inform us about the truth. Provided it is tractable, of course,
which so far it has tended not to be  (John Clark's criticism).


No, the UD does not need to be tractable, because the first persons
are not aware of the delays.

John simply cannot understand this, because it needs steps 3, 4, 5,
6, 7.
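Bruno's point about delays can be illustrated with a toy sketch (hypothetical Python of my own, not the actual UD): a diagonal interleaving puts ever-longer global delays between consecutive steps of any fixed program, yet each program's own steps still arrive in their proper order, which is all the first person could register.

```python
from itertools import count

def program(i):
    """Toy 'program' i: an infinite generator of its own step numbers."""
    for step in count():
        yield (i, step)

def dovetail(n_steps):
    """Diagonal interleaving: at phase k, run one more step of each of
    programs 0..k, so every program gets infinitely many steps."""
    progs = {}
    trace = []
    for k in count():
        for i in range(k + 1):
            if i not in progs:
                progs[i] = program(i)
            trace.append(next(progs[i]))
            if len(trace) == n_steps:
                return trace

trace = dovetail(50)
# Globally, consecutive steps of program 0 are separated by growing
# delays; restricted to program 0, however, the steps come out in order.
steps_of_0 = [s for (i, s) in trace if i == 0]
assert steps_of_0 == list(range(len(steps_of_0)))
```

The delays grow without bound as the dovetailer takes on more programs, but no reordering of any single program's steps ever occurs.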



Sorry - you misunderstood me. In this case, I was referring to the
consequences of the AUDA, i.e. the programme of extracting physics
from COMP.


But that is tractable. The current algorithm that I provided makes it
intractable for complex propositions, but that is contingent (CP is
NP-complete too).
I would worry more if someone found a simple efficacious algorithm, as
this would cast doubt on whether such logics incarnate quantum computing.


Bruno




--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




http://iridia.ulb.ac.be/~marchal/





Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 07:45, Bruce Kellett wrote:


LizR wrote:
On 13 May 2015 at 15:03, Bruce Kellett bhkell...@optusnet.com.au wrote:

   Bruno does make a prediction that can be empirically tested. He
   predicts that consciousness does not supervene on physical brains
   but on computations. The MGA purports to show that the  
assumption of

   physical supervenience leads to a contradiction. But supervenience
   of consciousness on brains is an indisputable empirical result, so
   the MGA works against comp.
I'm not sure that's what Bruno is trying to show, because he knows  
any TOE must explain all observations to date, at least in  
principle, so he would hardly be making a claim that is obviously  
refutable (or not for longer than it took him to notice that it was  
refutable, I hope).
I think Bruno's argument isn't attempting to refute supervenience of  
the mind on the brain, but primary materialism - but I'm sure he  
will correct me if I'm wrong.


That might be the idea. It is difficult to get to this, though,  
since the notion of primary materialism doesn't really feature in  
the argument.


It does, as 'supervenience' in philosophy of mind usually means  
primitively-physical supervenience, and it should be clear that this  
is what is at stake.
Steps 0 and 1 make clear that we do agree that comp, if true, is  
realized through some physical supervenience (at that stage, we are  
neutral on the primitiveness of that physical aspect).




Before we get to the MGA, the dovetailer has been introduced, and  
this is supposed to emulate the generalized brain (even if the  
generalized brain is the whole galaxy or even the entire universe)  
infinitely often, and the laws of physics emerge from the statistics  
of all UD-computations passing through my actual state.


The argument might then be that since the reconstruction of the  
brain states from the filmed recording is not a computation to be  
found in the dovetailer, it does not pass through my actual state,  
so is not part of what sustains my consciousness. Or something like  
that.


Yes. In the worst case of some consciousness supervening on the  
movie, it might be the consciousness of a mosquito (but frankly, I  
think that an amoeba is more conscious than such a movie).








But I don't think that this move succeeds. Whether the physical  
universe and its laws come out of the dovetailer or not, I can set  
up the situation in which the sequence of brain states is reproduced  
from a recording *in the universe I inhabit*, whatever its ultimate  
origin. So talk about primitive materialism and computational  
dovetailer states are both equally irrelevant to the actual MGA. The  
thought experiment can be carried out, whatever substrate underlies  
the physical world.


Are you claiming that the movie is not only conscious, but that it is  
the same consciousness (at a different time) as the original Boolean  
graph?






The claim that the sequence of brain states reconstructed from the  
recording is not conscious contradicts the physical supervenience  
hypothesis, whether the 'physical brain' in this case is made of  
primitive matter (whatever that is) or extracted from the infinite  
computations of the dovetailer. And physical supervenience in the  
world we inhabit has overwhelming empirical support.


For an Aristotelian who believes a priori in a primitive physical  
universe. But there is no evidence at all for a primitive physical  
supervenience, which is the only thing at stake.


Bruno





Bruce





Re: What does the MGA accomplish?

2015-05-13 Thread David Nyman
On 13 May 2015 at 13:08, Bruce Kellett bhkell...@optusnet.com.au wrote:

But the physical brain on which consciousness supervenes might well be
 itself a product of comp (and is, if you take the robust UD seriously). So
 you have shown that, either your whole theory is internally inconsistent,
 or else you have to abandon the supervenience of consciousness on brain
 goo, in contradiction to the empirical evidence.


The observed co-variance of physical brain activity and conscious
experience, assuming comp, would presumably be the net result of FPI over
the entire spectrum of computation underlying both (or else comp is false).
If this were indeed the case, I don't see why we would expect consciousness
to survive the kind of disruption described in the MGA, despite the
preservation of gross physical outcomes on a one-time basis. IOW, the
device, after disruption and intervention, has merely degenerated to a
one-time simulacrum, the consciousness of the original having depended on
*computational* characteristics no longer capable of physical realisation.
This doesn't strike me as being particularly counter-intuitive.

David



Re: What does the MGA accomplish?

2015-05-13 Thread Bruce Kellett

Bruno Marchal wrote:

On 13 May 2015, at 08:20, Russell Standish wrote:


For a
robust ontology, counterfactuals are physically instantiated,
therefore the MGA is invalid.


I don't see this. The 'if A then B else C' can be realized in a 
newtonian universe, indeed in the game of life or C++.

And the duplication of universes, one in which A is realized and B is
realized, plus one in which A is not realized and C is realized,
might NOT make a non-counterfactually-correct version of that 
if-then-else suddenly counterfactually correct.


Counterfactuals and MWI (and robustness) are a priori independent notions.


For once I agree with Bruno. I think this is right and that 
counterfactuals are not, as Russell suggests, instantiated in the other 
worlds of the MWI in any useful sense.


Bruce



Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 08:20, Russell Standish wrote:


On Wed, May 13, 2015 at 03:45:09PM +1000, Bruce Kellett wrote:


That might be the idea. It is difficult to get to this, though,
since the notion of primary materialism doesn't really feature in
the argument. Before we get to the MGA, the dovetailer has been
introduced, and this is supposed to emulate the generalized brain
(even if the generalized brain is the whole galaxy or even the
entire universe) infinitely often, and the laws of physics emerge
from the statistics of all UD-computations passing through my actual
state.



I can get it, but by an indirect route.

Basically, the MGA shows a contradiction between computationalism and
physical supervenience. But only for non-robust ontologies.


But it is the only place where we need it. In robust ontologies,  
UDA1-7 is enough (to get the problem, not its solution!).





For a
robust ontology, counterfactuals are physically instantiated,
therefore the MGA is invalid.


I don't see this. The 'if A then B else C' can be realized in a  
newtonian universe, indeed in the game of life or C++.
And the duplication of universes, one in which A is realized and B is  
realized,

plus one in which A is not realized and C is realized,
might NOT make a non-counterfactually-correct version of that
if-then-else suddenly counterfactually correct.


Counterfactuals and MWI (and robustness) are a priori independent  
notions.
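Bruno's distinction can be put in a short sketch (hypothetical Python of my own, not anything from the thread): a counterfactually correct program answers for either input, while a recording of one run, even if a duplicate copy is fed the other input in a second 'universe', still only replays the branch it stored.

```python
def genuine(a):
    """Counterfactually correct: computes the right branch for ANY input."""
    return "B" if a else "C"

def replay(recording, a):
    """A recording reproduces only the history it stored."""
    return recording.get(a)

# Film one run in which A happened to be true.
recording = {True: genuine(True)}

# The genuine if-then-else answers for both inputs...
assert genuine(True) == "B" and genuine(False) == "C"
# ...but the recording has no answer on the counterfactual branch,
# and duplicating it does not change that.
assert replay(recording, True) == "B"
assert replay(recording, False) is None
```

Putting two such recordings in two worlds with different inputs gives two one-time playbacks, not one counterfactually correct machine.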





Now physical supervenience has been demonstrated to a high level of
empirical satisfaction.


I don't think so, unless you mean physical in some non-aristotelian  
sense, in which case you are right, but that does not falsify comp in
that case.





So we can conclude either that
computationalism is falsified, or that our ontology is robust. But if
the ontology is robust, UDA 1-7 demonstrates the reversal - physics
depends only on the properties of the universal machine, not on any
other ontological property of primitive reality. Therefore we can
excise the physicalness of ontology - anything capable of universal
computation will do, such as arithmetic.

But this chain of argument is not the usual one, so clearly it needs
to be examined critically. Bruno has not given his imprimatur to it,
for example. Also, the MGA itself needs shoring up, particularly
with respect to the requirement of counterfactual correctness, and
also that other issue I just raised about the recording player
machinery changing the physical arrangement, perhaps by just enough to
render physical supervenience toothless too. In which case the whole
thing falls apart.


This is a bit unclear to me. You might decompose your thought into  
some steps, with what is assumed and what is derived, as I am lost here.


Bruno












Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 15:22, David Nyman wrote:


On 13 May 2015 at 12:19, Bruno Marchal marc...@ulb.ac.be wrote:

The fact that coffee can change my mind, and that my mind can change  
my brain is part of evidence for comp, not for the primitive  
physical supervenience thesis, whose main weakness at the start is  
that it assumes physicalism, primary matter, which are metaphysical  
concept, and no real scientific evidences have ever been given to  
them. It is a strong assumption in theology. There are no evidence  
that there is a *primitive* physical universe, or that some laws of  
physics has to be assumed.


So IIUC, in your terminology, 'primitive physicalism' just stands  
for the assumption that some definite 'laws of physics' are more  
basic than anything else. If so, on that assumption, such  
laws would of necessity be the ultimate basis of any effective  
computation (i.e. in some physical approximation). The MGA then  
points out that in principle we can always devise ways to preserve  
the purely physical dispositions of any given approximate  
realisation (by fortuitous or deliberate one-time interventions)  
even in circumstances where any or all of its original computational  
characteristics have been grossly disrupted.


MGA then argues that, if conscious experience fundamentally depends  
on preservation of such physical dispositions, we should thereby  
conclude that it should be unaffected in such scenarios. But the  
problem is that the interventions cannot be guaranteed to preserve  
the original 'computational' architecture (in particular, its  
counter-factual capabilities). Hence it would seem, on the one  
hand, that if consciousness supervenes on particular physical  
dispositions of the brain it should be preserved, but on the other,  
if it depends on the particular *computational* characteristics of  
such dispositions, it could not be (since these can always be  
disrupted or simplified). It is the incompatibility of these two  
views that forces a choice between the principles of physical and  
computational supervenience.


It is argued in opposition to the rejection of physical  
supervenience that it appears everywhere to be supported by  
observation. However, if two observed phenomena (e.g. brain function  
and conscious experience) are found to be in constant conjunction,  
an alternative to one or the other having a  'primary' role would be  
that they both emanate from some common underlying progenitor. Under  
computationalism, that role is subsumed by the entire spectrum of  
computations below the substitution level of either (i.e. the  
'computational everything').


Is that more or less your view?


I think it is a good summary, yes. Thanks!

Bruno




David





Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 14:08, Bruce Kellett wrote:


Bruno Marchal wrote:

On 13 May 2015, at 06:28, Russell Standish wrote:


In which case their consciousness supervenes on their simulated
physics.
Simulated beings could be conscious with their simulated brains in  
arithmetic.

This is still physical supervenience,
Yes, even when the brains are simulated in arithmetic; to get the  
right measure, that simulation will have to have the right  
relative measure.

of the sort Bruce was
talking about.

I think he was using primitive-physical supervenience.


I think this is where you misunderstand me, Bruno. You are ascribing  
to me a particular metaphysical position to which I do not  
necessarily subscribe.


Apology if I did.



As has been said a few times, the basic ontology of physics is  
whatever our best physical theories tell us it is. This is not  
generally primitive matter, whatever that is.


Primitive matter is, by definition, whatever physical entities you  
assume in the fundamental theory. For example, the standard model (in  
physics) assumes some particles, having relations through some other particles.


But you can do physics without the metaphysical assumption that those  
particles are real, or that, if they are not fundamental, they are  
made of physical things that we still need to assume.


I don't want to classify diverse degrees of naivety in the concept of  
primitive matter, and we can stay at the level of the assumptions  
needed. It also assumes a fundamental physical reality at the ground  
of all other realities (chemical, biological, psychological,  
sociological, etc.).







In my criticism of the MGA, I am not committed to any particular  
ontology. I am simply pointing to the fact that the physical world  
exists independently of you or me, just as 2+2=4 exists  
independently of you or me.


But this is ambiguous. If you use 'physical world' in the aristotelian  
sense, I have no evidence that it is true. If you define the physical  
by the (stable) appearance to us, then you already slip into self-
reference, and into the Platonic idea that we might dream that physical  
reality. It is less demanding in assumptions, given that those dreams  
exist in virtue of the minimal amount of math we need to talk about  
the physical reality.


If not, you beg the question.




Our physical brains are part of this physical world, whether the  
basic ontology be quarks and electrons, quantum fields, or  
computations in Platonia. And our consciousness supervenes on these  
physical brains, however constituted -- the overwhelming weight of  
neurophysiological and other scientific evidence shows this.


Yes. Comp starts from this observation.

But we just beg the question of how the physical world, whatever it  
is, succeeds in selecting this or that comp history in arithmetic.  
Solution: we take them all. And do the math to see if that works, and  
the thing is that it works, even if modestly.





As published, the MGA shows that *any* physical supervenience  
entails that replacing the brain by a recording of its activity will  
recreate the original conscious state.


In real time, yes.


This is claimed to be absurd, since a recording does not consist of  
a computation of the kind required by comp,


Well, required by the guy who was hoping to survive.




which says that a recording cannot be conscious.


Then all real numbers are conscious, and you leave computer science  
entirely. Your TOE is just the counting algorithm, and you can  
predict nothing.


You dismiss that we say yes to the doctor because the artificial  
brain will do the right computation, which means, by definition, be  
counterfactually correct.
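A minimal illustration of that definition (my own toy Python sketch; the Boolean-graph rule here is invented for the example): a small state-machine 'brain' computes a successor for every possible state, while the movie of one run merely stores the states that happened to occur, and so has nothing to say about states the run never visited.

```python
def boolean_graph(state):
    """A tiny 'brain': the next state is a fixed Boolean function of
    the current one (a 3-bit rule chosen arbitrarily for illustration)."""
    a, b, c = state
    return (b ^ c, a and c, not a)

def run(state, n):
    """Execute the graph for n steps, returning the full trace."""
    trace = [state]
    for _ in range(n):
        state = boolean_graph(state)
        trace.append(state)
    return trace

# The 'movie': just the stored sequence of states from one run.
movie = run((True, False, True), 5)

def play(movie, t):
    """Playback consults no gates; it can only reproduce stored frames."""
    return movie[t]

# The graph answers for *every* state (counterfactual correctness)...
assert boolean_graph((False, True, True)) == (False, False, True)
# ...but this state never occurred in the filmed run, so the movie has
# no corresponding frame to play back.
assert (False, True, True) not in movie
```

The movie and the graph agree on the one history that was filmed, which is exactly why physical supervenience cannot tell them apart, while the computational notion of counterfactual correctness can.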




So you claim that there is a contradiction between physical  
supervenience and comp.


Yes. Between primitive-physical supervenience and comp (as this is  
what is at stake).






But the physical brain on which consciousness supervenes might well  
be itself a product of comp (and is, if you take the robust UD  
seriously).


With the FPI, yes.



So you have shown that, either your whole theory is internally  
inconsistent, or else you have to abandon the supervenience of  
consciousness on brain goo, in contradiction to the empirical  
evidence.


Not with empirical evidence, just with the usual mind-brain identity,  
which has been doubted for a long time, and is related to a difficult  
problem since ancient times.






If you allow that the recording can be conscious, then the MGA is  
toothless -- it does not accomplish anything. But in allowing a  
recording to be conscious, you have contradicted what I take to be  
one of your basic tenets of comp.


So comp is either false or it is incoherent.


Lol
Well tried :)

I think that if you understand what a computation is, in the Turing-
Church sense, you can't believe that the movie is a computation,  
except in an ad hoc, a posteriori sense in which everything can compute  
everything.


But then I have to retract that 

Re: What does the MGA accomplish?

2015-05-13 Thread Bruno Marchal


On 13 May 2015, at 18:31, David Nyman wrote:


On 13 May 2015 at 17:14, Quentin Anciaux allco...@gmail.com wrote:

Why should they predominate? They should only have higher  
probability relative to you... you're in that class of observers,  
which certainly constrains what you can observe... there are many  
more insects than humans, yet you're human... and you should not expect  
to be a mosquito the next second. We could be absolutely rare, only  
a geographical incident in the whole, and yet, if the whole is... such  
observers as ourselves, observing a consistent physical environment,  
must be.


Well, if I were a mosquito, I wouldn't of course be participating in  
this conversation. So ideally I would want to be able to justify why  
the kind of observer capable of this class of interaction might be  
restricted to 'physical' environments of the sort we observe. I  
think this may be related to Bruno's idea that our being embedded in  
an observably 'physical' environment is more than merely  
geographical - i.e. that we are somehow the beneficiaries of some  
'absolute' measure battle for the emergence of observably 'lawlike'  
phenomena.


Quentin is right that the predominance is not absolute, but only  
relative to us. Now, what we can find below our same and sharable  
substitution level has to obey the same law everywhere, as it is defined by  
the same sum on all computations everywhere. The quantum laws are a  
very good candidate for that universal physics, but the hamiltonian  
might be more variable; yet it would still obey conditional laws, etc.
Computationalism offers a criterion to distinguish geography from  
physics, but it might not be, according to the facts, that the real  
physics is given by S4Grz1, Z1*, or X1* ([]p & p, etc.).


Bruno





David





Re: What does the MGA accomplish?

2015-05-13 Thread Bruce Kellett

Russell Standish wrote:

On Wed, May 13, 2015 at 10:33:42PM +1000, Bruce Kellett wrote:

Bruno Marchal wrote:

On 13 May 2015, at 08:20, Russell Standish wrote:


For a
robust ontology, counterfactuals are physically instantiated,
therefore the MGA is invalid.

I don't see this. The 'if A then B else C' can be realized in a
newtonian universe, indeed in the game of life or C++.
And the duplication of universes, one in which A is realized and B is realized,
plus one in which A is not realized and C is realized,
might NOT make a non-counterfactually-correct version of that
if-then-else suddenly counterfactually correct.

Counterfactuals and MWI (and robustness) are a priori independent notions.

For once I agree with Bruno. I think this is right and that
counterfactuals are not, as Russell suggests, instantiated in the
other worlds of the MWI in any useful sense.



How does that work? Are you saying the counterfactual situations never
appear anywhere in the Multiverse? What principle prevents their occurrence?


By the meaning of the term: *counter*factual, i.e., contrary to the 
facts of the situation. As far as I know there is a philosophical theory 
of counterfactuals based on possible worlds. But these are generally 
thought to be imaginary. And my feeling is that the 'other worlds' of the 
MWI, or other Hubble volumes, etc., are just philosophical possible other 
worlds. We can say anything about them that we like because it can never 
be checked -- they are physically inaccessible in principle.


But we are always going to have difficulty assigning a truth value to a 
counterfactual like: 'The present king of France has a beard.'


Bruce



Re: What does the MGA accomplish?

2015-05-13 Thread Bruce Kellett

Russell Standish wrote:

On Thu, May 14, 2015 at 01:04:09PM +1200, LizR wrote:

On 14 May 2015 at 12:32, Russell Standish li...@hpcoders.com.au wrote:


On Thu, May 14, 2015 at 11:26:17AM +1200, LizR wrote:

On 13 May 2015 at 18:20, Russell Standish li...@hpcoders.com.au wrote:


For a robust ontology, counterfactuals are physically instantiated,
therefore the MGA is invalid.


Can you elaborate on this? ISTM that counterfactuals aren't, and indeed
can't, be physically instantiated. (Isn't that what being counterfactual
means?!)

No - counterfactual just means not in this universe. If it's not in any
universe, then it's not just counterfactual, but actually illogical, or
impossible, or something.


As I mentioned, a simple example is my decision between tea and coffee.

In

the MWI (or an infinite universe) there are separate branches (or
locations) in which I have both - but in the branch where I had tea, I
didn't have coffee, and vice versa. And because those branches can't
communicate, the road not taken remains counterfactual and non-physical
within each branch. Isn't that enough for the MGA to not need to worry
about counterfactuals, even in the MWI/Level whatever multiverse?


Why is communication needed?


Because otherwise there can be no physical influence, and - within the
branch(es) in which the MGA is being carried out - the recorded system is
identical to the non-recorded one. Without any physical communication /
interference there is no difference from a single universe version. Well,
ISTM, at least.



The physical system refers to all parallel instantiations of an
object ISTM.

If I refer to a photon travelling through an MZ (Mach-Zehnder) apparatus (to fix
things - you know, two half-silvered mirrors, so the photons are split
and travel over two spatially distinct paths before being recombined),
we don't have two different physical systems in play. It's just
the one physical system, even though it occupies two distinct universes.
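The interferometer Russell appeals to can be sketched numerically with complex path amplitudes (a hypothetical illustration of my own, not from the thread): with both arms open the amplitudes recombine so that one detector fires with certainty, which is why the two paths are naturally read as a single physical system.

```python
import cmath

def beam_splitter(amp):
    """50/50 beam splitter on two path amplitudes (a0, a1):
    (a0, a1) -> ((a0 + i*a1)/sqrt(2), (i*a0 + a1)/sqrt(2))."""
    a0, a1 = amp
    s = 1 / cmath.sqrt(2)
    return (s * (a0 + 1j * a1), s * (1j * a0 + a1))

# Photon enters port 0; both arms of the interferometer open.
after_first = beam_splitter((1, 0))
after_second = beam_splitter(after_first)
probs = tuple(abs(a) ** 2 for a in after_second)
# Interference: the photon exits port 1 with certainty, even though the
# amplitude traversed two spatially distinct paths between the splitters.
assert abs(probs[0]) < 1e-12 and abs(probs[1] - 1) < 1e-12

# Block arm 1 between the splitters: interference is destroyed and each
# detector fires with probability 1/4 (the remaining 1/2 is absorbed).
blocked = (after_first[0], 0)
probs_blocked = tuple(abs(a) ** 2 for a in beam_splitter(blocked))
assert all(abs(p - 0.25) < 1e-12 for p in probs_blocked)
```

Removing either arm changes the detection statistics, so neither path alone accounts for the behaviour of the whole apparatus.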


That is not the way the term 'worlds' or 'universes' is used in modern 
quantum physics. The term 'world' is reserved for (related) systems that 
have totally decohered, so that there is no possibility of 
recombination. Or, in the cosmological setting, two regions of 
space-time outside each other's Hubble volume.


The small number of people who still think that every possible path in 
QM is a separate world form a fast-vanishing rump.


Bruce



Re: What does the MGA accomplish?

2015-05-13 Thread meekerdb

On 5/13/2015 8:49 AM, David Nyman wrote:

On 13 May 2015 at 14:53, Bruno Marchal marc...@ulb.ac.be wrote:

I think it is a good summary, yes. Thanks!


Building on that then, would you say that bodies and brains (including of course our 
own) fall within the class of embedded features of the machine's generalised 'physical 
environment'? Their particular role being the relation between the 'knower' in platonia 
and the environment in general. At a 'low' level, the comp assumption is that the FPI 
 results in a 'measure battle' yielding a range of observable transformations (or 
continuations) consistent with the Born probabilities (else comp is false). A physics 
consistent with QM, in other words. But the expectation is also that the knower itself 
maintains its capacity for physical manifestation in relation to the transformed 
environment, in each continuation, in order for the observations to occur.


BTW, Bruce made the point that the expected measure of the class of such 
physically-consistent observations, against the background of UD*, must be very close to 
zero. ISTM that this isn't really the point (e.g. the expected measure of readable books 
in the Library of Babel must also be close to zero). What seems more relevant is the 
presumed lack of 'un-physical' observer-environment relations (i.e. not only 'why no 
white rabbits?' but 'why physics?'). From this perspective, the obvious difference 
between the Library of Babel and UD* is that the former must be 'observed' externally 
whereas the latter is conceived as yielding a view 'from within'. Hence what must be 
justified is why our particular species of internal observer - i.e. the kind capable of 
self-manifesting within consistently 'physical' environments, should predominate.


As they say on TV, "This just in!"

Why Boltzmann Brains Don't Fluctuate Into Existence From the De Sitter Vacuum
Kimberly K. Boddy, Sean M. Carroll, Jason Pollack
(Submitted on 11 May 2015)

Many modern cosmological scenarios feature large volumes of spacetime in a de Sitter 
vacuum phase. Such models are said to be faced with a Boltzmann Brain problem - the 
overwhelming majority of observers with fixed local conditions are random fluctuations in 
the de Sitter vacuum, rather than arising via thermodynamically sensible evolution from a 
low-entropy past. We argue that this worry can be straightforwardly avoided in the 
Many-Worlds (Everett) approach to quantum mechanics, as long as the underlying Hilbert 
space is infinite-dimensional. In that case, de Sitter settles into a truly stationary 
quantum vacuum state. While there would be a nonzero probability for observing 
Boltzmann-Brain-like fluctuations in such a state, observation refers to a specific kind 
of dynamical process that does not occur in the vacuum (which is, after all, 
time-independent). Observers are necessarily out-of-equilibrium physical systems, which 
are absent in the vacuum. Hence, the fact that projection operators corresponding to 
states with observers in them do not annihilate the vacuum does not imply that such 
observers actually come into existence. The Boltzmann Brain problem is therefore much less 
generic than has been supposed.



arXiv:1505.02780v1 [hep-th]

Brent



David
--
You received this message because you are subscribed to the Google Groups Everything 
List group.
To unsubscribe from this group and stop receiving emails from it, send an email to
everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.

Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.




Re: What does the MGA accomplish?

2015-05-13 Thread LizR
On 13 May 2015 at 18:20, Russell Standish li...@hpcoders.com.au wrote:

 For a robust ontology, counterfactuals are physically instantiated,
 therefore the MGA is invalid.


Can you elaborate on this? ISTM that counterfactuals aren't, and indeed
can't be, physically instantiated. (Isn't that what being counterfactual
means?!)

As I mentioned, a simple example is my decision between tea and coffee. In
the MWI (or an infinite universe) there are separate branches (or
locations) in which I have both - but in the branch where I had tea, I
didn't have coffee, and vice versa. And because those branches can't
communicate, the road not taken remains counterfactual and non-physical
within each branch. Isn't that enough for the MGA to not need to worry
about counterfactuals, even in the MWI/Level whatever multiverse?



Re: What does the MGA accomplish?

2015-05-13 Thread Russell Standish
On Wed, May 13, 2015 at 02:07:52PM +0200, Bruno Marchal wrote:
 
 On 13 May 2015, at 08:20, Russell Standish wrote:
 
 On Wed, May 13, 2015 at 03:45:09PM +1000, Bruce Kellett wrote:
 
 That might be the idea. It is difficult to get to this, though,
 since the notion of primary materialism doesn't really feature in
 the argument. Before we get to the MGA, the dovetailer has been
 introduced, and this is supposed to emulate the generalized brain
 (even if the generalized brain is the whole galaxy or even the
 entire universe) infinitely often, and the laws of physics emerge
 from the statistics of all UD-computations passing through my actual
 state.
 
 
 I can get it, but by an indirect route.
 
 Basically, the MGA shows a contradiction between computationalism and
 physical supervenience. But only for non-robust ontologies.
 
 But it is the only place where we need it. In robust ontologies,
 UDA1-7 is enough (to get the problem, not its solution!).
 
 
 
 For a
 robust ontology, counterfactuals are physically instantiated,
 therefore the MGA is invalid.
 
 I don't see this. The "if A then B else C" can be realized in a
 newtonian universe, indeed in the Game of Life or C++.
 And the duplication of universes, one where A is realized and B is
 realized,
 + one in which A is not realized and C is realized,
 might NOT make a non-counterfactually-correct version of that
 if-then-else suddenly counterfactually correct.


It makes the non-counterfactually-correct version _physically_ different
from the counterfactually correct version. So one cannot derive the MGA
conclusion, which relies on the versions being physically indistinguishable.
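Russell's distinction can be sketched in code (a hypothetical illustration, not from the thread; the function names are invented): a counterfactually correct machine really implements both branches of the if-then-else, while a replayed recording of one run agrees with it on the history that actually occurred but diverges on the branch not taken, so the two are functionally (and hence physically) distinct.

```python
def live_machine(a: bool) -> str:
    """Counterfactually correct: both branches are really implemented."""
    if a:
        return "B"  # the branch taken when A is realized
    else:
        return "C"  # the branch taken when A is not realized

def make_recording(a: bool):
    """Record one actual run; the untaken branch is never stored."""
    recorded_output = live_machine(a)
    def replay(_a: bool) -> str:
        # Ignores its input: it can only repeat what actually happened.
        return recorded_output
    return replay

replayed = make_recording(True)
# Indistinguishable on the recorded history...
assert live_machine(True) == replayed(True)
# ...but they differ on the counterfactual input, so the recording is
# not a counterfactually correct version of the if-then-else.
assert live_machine(False) != replayed(False)
```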

 
 Counterfactuals and MWI (and robustness) are a priori independent
 notions.
 

Sure - but the MGA (if valid) connects them.

 
 
 Now physical supervenience has been demonstrated to a high level of
 empirical satisfaction.
 
 I don't think so, unless you mean physical in some non-Aristotelian
 sense, in which case you are right, but that does not falsify
 comp, in that case.

I mean in the usual sense of physical - atoms, electrons and so on.

 
 
 
 So we can conclude either that
 computationalism is falsified, or that our ontology is robust. But if
 the ontology is robust, UDA 1-7 demonstrates the reversal - physics
 depends only on the properties of the universal machine, not on any
 other ontological property of primitive reality. Therefore we can
 excise the physicalness of ontology - anything capable of universal
 computation will do, such as arithmetic.
 
 But this chain of argument is not the usual one, so clearly it needs
 to be examined critically. Bruno has not given his imprimatur to it,
 for example. Also, the MGA itself needs shoring up, particularly
 with respect to the requirement of counterfactual correctness, and
 also that other issue I just raised about the recording player
 machinery changing the physical arrangement, perhaps by just enough to
 render physical supervenience toothless too. In which case the whole
 thing falls apart.
 
 This is a bit unclear to me. You might decompose your thought into
 some steps, with what is assumed and what is derived, as I am lost
 here.
 

Did you mean the first paragraph or the second? The first paragraph is
my argument, which I am asking you to focus on in the first sentence of
the second para. The latter portion of the second paragraph is just
referring to all the niggling issues we've been discussing in this
thread - the role of intuition and absurdity, whether counterfactual
correctness is required for consciousness and the issue of whether a
replayed recording really is physically identical in a non-robust
setting (I suspect that it can be made to be, but the usual
formulations such as the MGA or Maudlin's are not so clear cut, as the
machinery required to implement the replaying is usually ignored).

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au




Re: What does the MGA accomplish?

2015-05-13 Thread David Nyman
On 13 May 2015 at 17:14, Quentin Anciaux allco...@gmail.com wrote:

why should they predominate? They should only have a higher probability
 relative to you... you're in that class of observers, which certainly
 constrains what you can observe... there are many more insects than humans,
 yet you're human... and you should not expect to be a mosquito the next
 second. We could be absolutely rare, only a geographical incident in the
 whole, and yet if the whole is... such observers as ourselves, observing a
 consistent physical environment, must be.


Well, if I were a mosquito, I wouldn't of course be participating in this
conversation. So ideally I would want to be able to justify why the kind of
observer capable of this class of interaction might be restricted to
'physical' environments of the sort we observe. I think this may be related
to Bruno's idea that our being embedded in an observably 'physical'
environment is more than merely geographical - i.e. that we are somehow the
beneficiaries of some 'absolute' measure battle for the emergence of
observably 'lawlike' phenomena.

David



Re: What does the MGA accomplish?

2015-05-13 Thread Quentin Anciaux
2015-05-13 17:49 GMT+02:00 David Nyman da...@davidnyman.com:

 On 13 May 2015 at 14:53, Bruno Marchal marc...@ulb.ac.be wrote:

 I think it is a good summary, yes. Thanks!


 Building on that then, would you say that bodies and brains (including of
 course our own) fall within the class of embedded features of the machine's
 generalised 'physical environment'? Their particular role being the
 relation between the 'knower' in platonia and the environment in general.
 At a 'low' level, the comp assumption is that the FPI results in a
 'measure battle' yielding a range of observable transformations (or
 continuations) consistent with the Born probabilities (else comp is false).
 A physics consistent with QM, in other words. But the expectation is also
 that the knower itself maintains its capacity for physical manifestation in
 relation to the transformed environment, in each continuation, in order for
 the observations to occur.

 BTW, Bruce made the point that the expected measure of the class of such
 physically-consistent observations, against the background of UD*, must be
 very close to zero. ISTM that this isn't really the point (e.g. the
 expected measure of readable books in the Library of Babel must also be
 close to zero). What seems more relevant is the presumed lack of
 'un-physical' observer-environment relations (i.e. not only 'why no white
 rabbits?' but 'why physics?'). From this perspective, the obvious
 difference between the Library of Babel and UD* is that the former must be
 'observed' externally whereas the latter is conceived as yielding a view
 'from within'. Hence what must be justified is why our particular species
 of internal observer - i.e. the kind capable of self-manifesting within
 consistently 'physical' environments, should predominate.


Hi,

why should they predominate? They should only have a higher probability
relative to you... you're in that class of observers, which certainly
constrains what you can observe... there are many more insects than humans,
yet you're human... and you should not expect to be a mosquito the next
second. We could be absolutely rare, only a geographical incident in the
whole, and yet if the whole is... such observers as ourselves, observing a
consistent physical environment, must be.

Quentin





 David





-- 
All those moments will be lost in time, like tears in rain. (Roy
Batty/Rutger Hauer)



Re: What does the MGA accomplish?

2015-05-13 Thread Russell Standish
On Wed, May 13, 2015 at 02:17:34PM +0200, Bruno Marchal wrote:
 
 On 13 May 2015, at 07:45, Bruce Kellett wrote:
 
 
 That might be the idea. It is difficult to get to this, though,
 since the notion of primary materialism doesn't really feature
 in the argument.
 
 It does, as usually supervenience in philosophy of mind means
 primitively-physical supervenience, and it should be clear that
 this is what is at stake.

That's never been made clear in the usual discussions of supervenience
- e.g. the plato.stanford article. Even Maudlin's article doesn't refer
to primitiveness. He is still talking about regular physical
supervenience.






Re: What does the MGA accomplish?

2015-05-13 Thread Russell Standish
On Wed, May 13, 2015 at 02:33:06PM +0200, Bruno Marchal wrote:
 
 
 
 3. A recording of (2) supra being played back.
 
 Nobody would call that a computation, except to evade comp's
 consequences.
 

I do, because it is a computation, albeit a rather trivial one. It is
not to evade comp's consequences, however, which I already accept from
UDA1-7. I insist on the point, because the MGA is about driving an
inconsistency between computational and physical supervenience, which
requires care and rigour to demonstrate, not careless mislabelling.
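The sense in which a replay still counts as a computation, albeit a trivial one, can be sketched as follows (a hypothetical illustration with invented names, not from the thread): the playback computes a constant lookup function over a fixed tape, with no conditional structure at all.

```python
# Recorded sequence of machine states from one actual run.
tape = ["s0", "s1", "s2", "s3"]

def playback(t: int) -> str:
    """A trivial computation: pure table lookup, no branching."""
    return tape[t]

# It is a computation in the formal sense (a total function on its
# domain), but it supports no counterfactuals: there is no input on
# which it does anything but read the pre-recorded tape.
assert [playback(t) for t in range(len(tape))] == tape
```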

 
 Whether (3) preserving consciousness is absurd or not (and I agree
 with Russell that's too much of a stretch of intuition to judge);
 
 There is no stretch of intuition, the MGA shows that you need to put
 magic in the primitive matter to make it play a role in the
 consciousness of physical events.
 

Where does the MGA show this? I don't believe you use the word "magic"
in any of your papers on the MGA.

Sorry, but this does seem a rhetorical comment.






  1   2   3   >