Re: Code length = probability distribution

2012-10-31 Thread Bruno Marchal


On 31 Oct 2012, at 08:21, Russell Standish wrote:


On Sun, Oct 28, 2012 at 01:14:47PM -0700, meekerdb wrote:

On 10/28/2012 10:42 AM, Bruno Marchal wrote:

How do you answer the person who gets points 1-7, and concludes
(as he *believes* in a primary material world, and in comp) that
this proves that a physical universe, to precede consciousness,
has to be "little" (never running a big portion of the UD, so that it
maintains the brain-consciousness identity thesis)?

I understand that a familiarity with digital machines and computer
science can make us feel that this is really an ad hoc move,
almost inventing the physical universe to prevent its possible
explanation and origin in dream interference.

But, still, logically, he is consistent. He can say yes to
the doctor, and believe he is the "unique" owner of a, perhaps in the
weaker quantum sense, primitively material machine/body.

The Movie-Graph is just a way to show more precisely that such a
move is *very* ad hoc, and will require elements in the computation
that are neither Turing emulable nor recoverable from the 1-person
comp-indeterminacy. They can only be missed by the digitalist doctor,
and so it would contradict the "yes doctor" assumption.


I agree with all that. It's just that I don't see how the small universe
enters into the UDA step 8 argument.



Do you agree that in step seven, a physicalist can still believe in
comp, yet disbelieve that the laws of physics emerge from the
computation, by assuming that we are in a primitive physical universe
which is too little to sustain any reasonable part of the
universal dovetailing?
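For concreteness, the dovetailing trick referred to throughout the thread can be sketched in a few lines: a dovetailer interleaves the executions of countably many programs, running one more step of each on every pass, so no single non-halting computation can block the rest. This is only an illustrative sketch, not the UD itself; the `program` generator is a hypothetical stand-in for a real enumeration of programs.

```python
from itertools import count, islice

def dovetail(programs):
    """Interleave countably many programs: on pass n, admit program n
    and run one more step of each program admitted so far."""
    active = []
    source = iter(programs)
    for _ in count():
        try:
            active.append(iter(next(source)))  # admit the next program
        except StopIteration:
            pass
        for prog in active:
            try:
                yield next(prog)               # one more step of each
            except StopIteration:
                pass                           # halted programs are skipped

def program(i):
    """Toy stand-in: 'computation i' emits an unending trace of steps."""
    for t in count():
        yield f"prog{i}-step{t}"

# First 10 steps produced by dovetailing over all (infinitely many) programs:
trace = list(islice(dovetail(program(i) for i in count()), 10))
```

Even though every toy program runs forever, any given step of any given program is reached after finitely many yields, which is the point being made about "running a big portion of the UD".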


If you are OK with this, the movie-graph argument just shows directly
that such a move will need to attach the conscious mind to
non-Turing-emulable, if not non-understandable, properties of matter.


A
counterfactually used register will still be used by one of my
differentiated copies, and ISTM that these alternate differentiated
minds are essential to my consciousness,


What trans-world, or trans-terms-of-a-superposition, interaction
would make this meaningful?

I mean, is the consciousness of the one in Washington dependent on
the consciousness of the one in Moscow?

It *might* be the case, if the brain was a quantum computer. In
that case we could put ourselves in the W+M superposition state,
do some different task, get some result, and then operate a
Fourier rotation on our resulting W'+M' state and extract some
consciousness relevant information.


The brain is a quantum object.  It doesn't do quantum computations
in the sense of existing in superpositions of what we regard as
different conscious classical propositions.  But it does quantum
computations at the microscopic level that maintain its identity as
an (approximately) classical object, i.e. it must be entangled with
the environment to maintain classicality. So the experience of being
in Washington may, because of the way the transporter is constructed
to send to both places, depend on the *possibility* of an experience
of being in Moscow.


Brent picked up on exactly what I was suggesting.


Read my answer to Brent, and maybe comment on it. With the quantum
duplication, or superposition, the *possibility* is realized, and it
is a case of first-person indeterminacy OR a case of not having chosen
the right level.





I was suggesting
that the possibility of ending up in either Washington or Moscow is
essential to the operation of consciousness.


I can agree with this.




Once differentiation
has occurred, I do agree that the one in Washington is a different
mind to the one in Moscow.


That's all we need for the reasoning to proceed.







But if this is what you mean, it would just mean that we need to
emulate the brain at a lower level. A simulation of that quantum
brain can be done classically, and we can reiterate the 323
question at that level.


But then I think your simulation needs to include the environment
with which the brain interacts to produce its quasi-classical
character.

Brent


I'm not sure that is quite true, but it would need to include both
branches (ie dovetail on them) in the calculation.


It depends on the level. Certainly so with the quantum brain. But this
is already the case with comp. Consciousness is attached to the infinity
of instantiations of the states in arithmetic.





In such a case, I
think this poses serious problems for the concept of supervenience on
the physical implementation of the classical computation, as I
described in the discussion earlier about supervenience of two minds
on the same physical structure of classroom + students.


I commented on it, but I don't remember if you commented on my comment. I
don't think this was a threat to the validity of step 8. Maybe you can
elaborate, or refer to the link.
Are you saying that if a classical computer, itself emulated by the
local quantum reality, emulates a human brain at the quantum level, it
would be like a zombie (no consciousness)?

Re: Code length = probability distribution

2012-10-31 Thread Russell Standish
On Sun, Oct 28, 2012 at 01:14:47PM -0700, meekerdb wrote:
> On 10/28/2012 10:42 AM, Bruno Marchal wrote:
> >How do you answer the person who gets points 1-7, and concludes
> >(as he *believes* in a primary material world, and in comp) that
> >this proves that a physical universe, to precede consciousness,
> >has to be "little" (never running a big portion of the UD, so that it
> >maintains the brain-consciousness identity thesis)?
> >
> >I understand that a familiarity with digital machines and computer
> >science can make us feel that this is really an ad hoc move,
> >almost inventing the physical universe to prevent its possible
> >explanation and origin in dream interference.
> >
> >But, still, logically, he is consistent. He can say yes to
> >the doctor, and believe he is the "unique" owner of a, perhaps in the
> >weaker quantum sense, primitively material machine/body.
> >
> >The Movie-Graph is just a way to show more precisely that such a
> >move is *very* ad hoc, and will require elements in the computation
> >that are neither Turing emulable nor recoverable from the 1-person
> >comp-indeterminacy. They can only be missed by the digitalist doctor,
> >and so it would contradict the "yes doctor" assumption.

I agree with all that. It's just that I don't see how the small universe
enters into the UDA step 8 argument.

> >>A
> >>counterfactually used register will still be used by one of my
> >>differentiated copies, and ISTM that these alternate differentiated
> >>minds are essential to my consciousness,
> >
> >What trans-world, or trans-terms-of-a-superposition, interaction would make
> >this meaningful?
> >I mean, is the consciousness of the one in Washington dependent on
> >the consciousness of the one in Moscow?
> >
> >It *might* be the case, if the brain was a quantum computer. In
> >that case we could put ourselves in the W+M superposition state,
> >do some different task, get some result, and then operate a
> >Fourier rotation on our resulting W'+M' state and extract some
> >consciousness relevant information.
> 
> The brain is a quantum object.  It doesn't do quantum computations
> in the sense of existing in superpositions of what we regard as
> different conscious classical propositions.  But it does quantum
> computations at the microscopic level that maintain its identity as
> an (approximately) classical object, i.e. it must be entangled with
> the environment to maintain classicality. So the experience of being
> in Washington may, because of the way the transporter is constructed
> to send to both places, depend on the *possibility* of an experience
> of being in Moscow.

Brent picked up on exactly what I was suggesting. I was suggesting
that the possibility of ending up in either Washington or Moscow is
essential to the operation of consciousness. Once differentiation
has occurred, I do agree that the one in Washington is a different
mind to the one in Moscow. 

> 
> >But if this is what you mean, it would just mean that we need to
> >emulate the brain at a lower level. A simulation of that quantum
> >brain can be done classically, and we can reiterate the 323
> >question at that level.
> 
> But then I think your simulation needs to include the environment
> with which the brain interacts to produce its quasi-classical
> character.
> 
> Brent

I'm not sure that is quite true, but it would need to include both
branches (ie dovetail on them) in the calculation. In such a case, I
think this poses serious problems for the concept of supervenience on
the physical implementation of the classical computation, as I
described in the discussion earlier about supervenience of two minds
on the same physical structure of classroom + students.



-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Code length = probability distribution

2012-10-29 Thread Bruno Marchal


On 28 Oct 2012, at 21:14, meekerdb wrote:


On 10/28/2012 10:42 AM, Bruno Marchal wrote:


On 28 Oct 2012, at 00:19, Russell Standish wrote:


On Thu, Oct 25, 2012 at 05:13:50PM +0200, Bruno Marchal wrote:


Oh yes, I remember that you did agree once with the 323 principle,
but I forget what your problem with the movie-graph/step-8 is,
then.
If you find the time, I would be pleased if you could elaborate. I think
Russell too is not yet entirely convinced.

Bruno



Indeed I still have problems with step 8, and want to get back to
that. But I want to do it when you're not exhausted arguing
other points...


Ah well, that's nice.



Part of the problem is that I already agree with the
reversal at step 7, so in some sense step 8 is redundant for me.


That is interesting. You are not alone. I have made attempts to make
that precise, and it leads to some use of a stronger form of Occam's
razor.


How do you answer the person who gets points 1-7, and concludes
(as he *believes* in a primary material world, and in comp) that
this proves that a physical universe, to precede consciousness, has
to be "little" (never running a big portion of the UD, so that it
maintains the brain-consciousness identity thesis)?


I understand that a familiarity with digital machines and computer
science can make us feel that this is really an ad hoc move, almost
inventing the physical universe to prevent its possible
explanation and origin in dream interference.


But, still, logically, he is consistent. He can say yes to
the doctor, and believe he is the "unique" owner of a, perhaps in the
weaker quantum sense, primitively material machine/body.


The Movie-Graph is just a way to show more precisely that such a
move is *very* ad hoc, and will require elements in the computation
that are neither Turing emulable nor recoverable from the 1-person
comp-indeterminacy. They can only be missed by the digitalist doctor,
and so it would contradict the "yes doctor" assumption.






There may be an issue with the interpretation of the 323  
principle. I

have no problems with the removal of a register that is never
physically used in the calculation of a conscious computation.


OK.
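Russell's reading of the 323 principle here (a register that is never physically used during a run can be removed without altering the computation) can be illustrated with a toy register machine. The instruction set and register names, including "r323", are hypothetical, purely for illustration.

```python
def run(program, registers):
    """Tiny register machine: each instruction is (op, dst, src).
    Returns the trace of register states after each instruction."""
    trace = []
    for op, dst, src in program:
        if op == "ADD":
            registers[dst] += registers[src]
        elif op == "MOV":
            registers[dst] = registers[src]
        trace.append(dict(registers))
    return trace

prog = [("ADD", "a", "b"), ("MOV", "b", "a"), ("ADD", "a", "b")]

# Register "r323" is present but never read or written in this run.
with_unused = run(prog, {"a": 1, "b": 2, "r323": 99})
without     = run(prog, {"a": 1, "b": 2})

# The physically active part of the two traces is identical, so the
# inert register can be removed without changing the computation.
projected = [{k: v for k, v in step.items() if k != "r323"}
             for step in with_unused]
assert projected == without
```

The nuance the thread turns on is exactly whether "never physically used" still holds once counterfactual branches are counted as used.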





The
nuances arise when we consider Everett's many-minds picture.


Do you mean the Albert-Loewer many-minds theory? I guess you mean it
in a more general sense.





A
counterfactually used register will still be used by one of my
differentiated copies, and ISTM that these alternate differentiated
minds are essential to my consciousness,


What trans-world, or trans-terms-of-a-superposition, interaction
would make this meaningful?
I mean, is the consciousness of the one in Washington dependent on
the consciousness of the one in Moscow?


It *might* be the case, if the brain was a quantum computer. In  
that case we could put ourselves in the W+M superposition state, do  
some different task, get some result, and then operate a Fourier  
rotation on our resulting W'+M' state and extract some  
consciousness relevant information.


The brain is a quantum object.  It doesn't do quantum computations  
in the sense of existing in superpositions of what we regard as  
different conscious classical propositions.  But it does quantum  
computations at the microscopic level that maintain it's identity as  
an (approximately) classical object, i.e. it must be entangled with  
the environment to maintain classicality. So the experience


Not the experience, or it just means the level has not been chosen  
correctly.




of being in Washington may, because of the way the transporter is  
constructed to send to both places, depend on the *possibility* of  
an experience of being in Moscow.


Only if the level is not right. In that case you can't say "yes" to a
doctor for a digital (classical) brain, as it might easily overlook some
entanglement.






But if this is what you mean, it would just mean that we need to  
emulate the brain at a lower level. A simulation of that quantum  
brain can be done classically, and we can reiterate the 323  
question at that level.


But then I think your simulation needs to include the environment  
with which the brain interacts to produce its quasi-classical  
character.


As you wish, but the bill for the duplication will be rather steep. No
conceptual problem here, though.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Code length = probability distribution

2012-10-29 Thread Bruno Marchal


On 28 Oct 2012, at 20:41, meekerdb wrote:


On 10/28/2012 8:28 AM, Bruno Marchal wrote:



On 27 Oct 2012, at 21:35, meekerdb wrote:


On 10/27/2012 7:56 AM, Bruno Marchal wrote:



On 26 Oct 2012, at 21:30, meekerdb wrote:


On 10/26/2012 6:57 AM, Bruno Marchal wrote:


Oh yes, I remember that you did agree once with the 323
principle, but I forget what your problem with the
movie-graph/step-8 is, then. If you find the time, I would be
pleased if you could elaborate. I think Russell too is not yet
entirely convinced.


What bothers me about it is that counterfactuals are virtually  
infinite.  So to make the argument go through I think it  
implicitly requires a whole 'world';


Not really; here you can use Maudlin, who showed that the
counterfactuals do not require physical activity. In MGA, if
you give a role to the counterfactual, you violate the 323
principle, in that you attribute a functional role in a
particular computation to objects having no physical activity
in the actual computation.


But I'm not sure about the 323 principle in a QM world.


Even QM worlds, with QM observers, even having Q brains, are
emulated in the UD, or in arithmetic. If the 323 principle does
not hold for them, it might mean that QM is the winning
computation, but then you have to explain this from arithmetic.


Or you are meaning that you need a *primary* QM world and brain?   
In that case, my consciousness would not be invariant when the Q  
brain is entirely simulated by a classical machine, and comp is  
made false.


The latter.  But why the restriction to "my consciousness"?  Only  
a small fraction of thought is conscious.


Consciousness is what will select the arithmetical or
computer-science-theoretical branches in the arithmetical reality. It is
not a fraction of thought which is conscious; it is a person, supported
by infinities of "unconscious" computations.
If you opt for the latter, you can't accept a digital brain, not
even a quantum one, per computatio.


That doesn't follow.  My new digital brain will be entangled with  
this QM world, just as my biological one was.  It may not be exactly  
the same consciousness but I think it will be similar; just as I  
think general intelligence will always be accompanied by some kind  
of consciousness. Supposing this entanglement is necessary is why I  
think a simulation must simulate a whole world in order to  
instantiate human-like consciousness.


But this does not change the UDA. That simulation is still classically
emulable, and emulated in arithmetic, or by a concrete UD. The 323
principle is correct at that level. It is just a case of a very low
substitution level.






You negate comp, by putting something magical, needed for your  
consciousness, in the quantum material reality.


Not magical.  As you often point out, QM is computable.  You are  
making an assumption that the substitution can be done at the  
classical level where 'classical' is taken not as an approximation  
but to be fundamental.


But it can, as QM is computable, and classically emulable. So I don't  
see the objection here.
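The claim that QM is computable and classically emulable can be made concrete with a state-vector sketch: a classical program tracks the amplitudes over the {W, M} basis and applies the one-qubit Fourier rotation (the Hadamard) mentioned earlier in the thread, recovering the initial state by interference. A minimal sketch, assuming an idealized noiseless two-state system:

```python
import math

def hadamard(state):
    """One-qubit Fourier rotation on classical amplitudes (w, m)."""
    w, m = state
    s = 1 / math.sqrt(2)
    return (s * (w + m), s * (w - m))

state = (1.0, 0.0)        # start in |W>
state = hadamard(state)   # superposition (|W> + |M>)/sqrt(2)
state = hadamard(state)   # rotate back: interference recovers |W>

# Born-rule probabilities, computed classically.
probs = tuple(abs(a) ** 2 for a in state)
```

The whole evolution, including the interference that "extracts information from the counterfactual branch", runs on an ordinary classical machine, which is the sense in which the substitution could simply be made at a lower (quantum) level.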


If you assume you need the QM at the basic ontological level, then
comp is false, as comp, by definition, assumes that any universal system
can do the work, if it simulates the right level.






In that non-comp reality you are back with all questions unsolved:
where does that QM reality come from, how do you singularize
actual conscious experiences in it, etc.


Good questions, but just because I don't know the answers it doesn't  
follow that I should accept your answer.  Your theory also has  
unanswered questions.


Only math problems, besides what is explained to be absolutely
non-explainable with comp (the 1% of consciousness, as I often refer to
it). That was the goal: transforming the mind-body problem into a math
problem.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Code length = probability distribution

2012-10-28 Thread meekerdb

On 10/28/2012 10:42 AM, Bruno Marchal wrote:


On 28 Oct 2012, at 00:19, Russell Standish wrote:


On Thu, Oct 25, 2012 at 05:13:50PM +0200, Bruno Marchal wrote:


Oh yes, I remember that you did agree once with the 323 principle,
but I forget what your problem with the movie-graph/step-8 is, then.
If you find the time, I would be pleased if you could elaborate. I think
Russell too is not yet entirely convinced.

Bruno



Indeed I still have problems with step 8, and want to get back to
that. But I want to do it when you're not exhausted arguing
other points...


Ah well, that's nice.



Part of the problem is that I already agree with the
reversal at step 7, so in some sense step 8 is redundant for me.


That is interesting. You are not alone. I have made attempts to make that precise, and it 
leads to some use of a stronger form of Occam's razor.


How do you answer the person who gets points 1-7, and concludes (as he *believes* in 
a primary material world, and in comp) that this proves that a physical universe, to 
precede consciousness, has to be "little" (never running a big portion of the UD, so that it 
maintains the brain-consciousness identity thesis)?


I understand that a familiarity with digital machines and computer science can make us 
feel that this is really an ad hoc move, almost inventing the physical universe to 
prevent its possible explanation and origin in dream interference.


But, still, logically, he is consistent. He can say yes to the doctor, and 
believe he is the "unique" owner of a, perhaps in the weaker quantum sense, primitively 
material machine/body.


The Movie-Graph is just a way to show more precisely that such a move is *very* ad hoc, 
and will require elements in the computation that are neither Turing emulable nor 
recoverable from the 1-person comp-indeterminacy. They can only be missed by the 
digitalist doctor, and so it would contradict the "yes doctor" assumption.






There may be an issue with the interpretation of the 323 principle. I
have no problems with the removal of a register that is never
physically used in the calculation of a conscious computation.


OK.





The
nuances arise when we consider Everett's many-minds picture.


Do you mean the Albert-Loewer many-minds theory? I guess you mean it in a more general 
sense.





A
counterfactually used register will still be used by one of my
differentiated copies, and ISTM that these alternate differentiated
minds are essential to my consciousness,


What trans-world, or trans-terms-of-a-superposition, interaction would make 
this meaningful?
I mean, is the consciousness of the one in Washington dependent on the consciousness of 
the one in Moscow?


It *might* be the case, if the brain was a quantum computer. In that case we could put 
ourselves in the W+M superposition state, do some different task, get some result, and 
then operate a Fourier rotation on our resulting W'+M' state and extract some 
consciousness relevant information.


The brain is a quantum object.  It doesn't do quantum computations in the sense of 
existing in superpositions of what we regard as different conscious classical 
propositions.  But it does quantum computations at the microscopic level that maintain 
its identity as an (approximately) classical object, i.e. it must be entangled with the 
environment to maintain classicality. So the experience of being in Washington may, 
because of the way the transporter is constructed to send to both places, depend on the 
*possibility* of an experience of being in Moscow.


But if this is what you mean, it would just mean that we need to emulate the brain at a 
lower level. A simulation of that quantum brain can be done classically, and we can 
reiterate the 323 question at that level.


But then I think your simulation needs to include the environment with which the brain 
interacts to produce its quasi-classical character.


Brent




Re: Code length = probability distribution

2012-10-28 Thread meekerdb

On 10/28/2012 8:28 AM, Bruno Marchal wrote:


On 27 Oct 2012, at 21:35, meekerdb wrote:


On 10/27/2012 7:56 AM, Bruno Marchal wrote:


On 26 Oct 2012, at 21:30, meekerdb wrote:


On 10/26/2012 6:57 AM, Bruno Marchal wrote:
Oh yes, I remember that you did agree once with the 323 principle, but I forget 
what your problem with the movie-graph/step-8 is, then. If you find the time, I would 
be pleased if you could elaborate. I think Russell too is not yet entirely convinced.


What bothers me about it is that counterfactuals are virtually infinite.  So to 
make the argument go through I think it implicitly requires a whole 'world';


Not really; here you can use Maudlin, who showed that the counterfactuals do 
not require physical activity. In MGA, if you give a role to the counterfactual, you 
violate the 323 principle, in that you attribute a functional role in a particular 
computation to objects having no physical activity in the actual computation. 


But I'm not sure about the 323 principle in a QM world.


Even QM worlds, with QM observers, even having Q brains, are emulated in the UD, or in 
arithmetic. If the 323 principle does not hold for them, it might mean that QM is the 
winning computation, but then you have to explain this from arithmetic.


Or you are meaning that you need a *primary* QM world and brain?  In that case, my 
consciousness would not be invariant when the Q brain is entirely simulated by a 
classical machine, and comp is made false.


The latter.  But why the restriction to "my consciousness"?  Only a small fraction of 
thought is conscious.


Consciousness is what will select the arithmetical or computer-science-theoretical 
branches in the arithmetical reality. It is not a fraction of thought which is 
conscious; it is a person, supported by infinities of "unconscious" computations.
If you opt for the latter, you can't accept a digital brain, not even a quantum one, per 
computatio.


That doesn't follow.  My new digital brain will be entangled with this QM world, just as 
my biological one was.  It may not be exactly the same consciousness but I think it will 
be similar; just as I think general intelligence will always be accompanied by some kind 
of consciousness. Supposing this entanglement is necessary is why I think a simulation 
must simulate a whole world in order to instantiate human-like consciousness.


You negate comp, by putting something magical, needed for your consciousness, in the 
quantum material reality.


Not magical.  As you often point out, QM is computable.  You are making an assumption that 
the substitution can be done at the classical level where 'classical' is taken not as an 
approximation but to be fundamental.


In that non-comp reality you are back with all questions unsolved: where does that QM 
reality come from, how do you singularize actual conscious experiences in it, etc.


Good questions, but just because I don't know the answers it doesn't follow that I should 
accept your answer.  Your theory also has unanswered questions.


Brent




Re: Code length = probability distribution

2012-10-28 Thread Bruno Marchal


On 28 Oct 2012, at 00:19, Russell Standish wrote:


On Thu, Oct 25, 2012 at 05:13:50PM +0200, Bruno Marchal wrote:


Oh yes, I remember that you did agree once with the 323 principle,
but I forget what your problem with the movie-graph/step-8 is, then.
If you find the time, I would be pleased if you could elaborate. I think
Russell too is not yet entirely convinced.

Bruno



Indeed I still have problems with step 8, and want to get back to
that. But I want to do it when you're not exhausted arguing
other points...


Ah well, that's nice.



Part of the problem is that I already agree with the
reversal at step 7, so in some sense step 8 is redundant for me.


That is interesting. You are not alone. I have made attempts to make
that precise, and it leads to some use of a stronger form of Occam's razor.


How do you answer the person who gets points 1-7, and concludes (as
he *believes* in a primary material world, and in comp) that this
proves that a physical universe, to precede consciousness, has to be
"little" (never running a big portion of the UD, so that it maintains the
brain-consciousness identity thesis)?


I understand that a familiarity with digital machines and computer
science can make us feel that this is really an ad hoc move, almost
inventing the physical universe to prevent its possible explanation
and origin in dream interference.


But, still, logically, he is consistent. He can say yes to the
doctor, and believe he is the "unique" owner of a, perhaps in the weaker
quantum sense, primitively material machine/body.


The Movie-Graph is just a way to show more precisely that such a move
is *very* ad hoc, and will require elements in the computation that are
neither Turing emulable nor recoverable from the 1-person
comp-indeterminacy. They can only be missed by the digitalist doctor,
and so it would contradict the "yes doctor" assumption.






There may be an issue with the interpretation of the 323 principle. I
have no problems with the removal of a register that is never
physically used in the calculation of a conscious computation.


OK.





The
nuances arise when we consider Everett's many-minds picture.


Do you mean the Albert-Loewer many-minds theory? I guess you mean it in
a more general sense.





A
counterfactually used register will still be used by one of my
differentiated copies, and ISTM that these alternate differentiated
minds are essential to my consciousness,


What trans-world, or trans-terms-of-a-superposition, interaction would
make this meaningful?
I mean, is the consciousness of the one in Washington dependent on the
consciousness of the one in Moscow?


It *might* be the case, if the brain was a quantum computer. In that  
case we could put ourselves in the W+M superposition state, do some  
different task, get some result, and then operate a Fourier rotation  
on our resulting W'+M' state and extract some consciousness relevant  
information.
But if this is what you mean, it would just mean that we need to  
emulate the brain at a lower level. A simulation of that quantum brain  
can be done classically, and we can reiterate the 323 question at that  
level.








and that removing the
counterfactually-used register in this case may well prevent my
consciousness.

To sum up, a counterfactually-used register is being physically used
if many-worlds is accepted, so the 323 principle isn't
applicable.


In what sense is it more used than the person in Washington and his
doppelganger in Moscow? They both handle just a different part of the
initial person's counterfactuals.


Again, that would play a genuine comp role only if there were a
mechanism to extract information from the counterfactuals, but this
means the substitution level is the quantum level, which is still
emulable by, and actually even emulated by, elementary arithmetic.


All right?

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Code length = probability distribution

2012-10-28 Thread Bruno Marchal


On 27 Oct 2012, at 21:35, meekerdb wrote:


On 10/27/2012 7:56 AM, Bruno Marchal wrote:



On 26 Oct 2012, at 21:30, meekerdb wrote:


On 10/26/2012 6:57 AM, Bruno Marchal wrote:


Oh yes, I remember that you did agree once with the 323
principle, but I forget what your problem with the
movie-graph/step-8 is, then. If you find the time, I would be pleased
if you could elaborate. I think Russell too is not yet entirely convinced.


What bothers me about it is that counterfactuals are virtually  
infinite.  So to make the argument go through I think it  
implicitly requires a whole 'world';


Not really; here you can use Maudlin, who showed that the
counterfactuals do not require physical activity. In MGA, if
you give a role to the counterfactual, you violate the 323
principle, in that you attribute a functional role in a
particular computation to objects having no physical activity in
the actual computation.


But I'm not sure about the 323 principle in a QM world.


Even QM worlds, with QM observers, even having Q brains, are
emulated in the UD, or in arithmetic. If the 323 principle does
not hold for them, it might mean that QM is the winning
computation, but then you have to explain this from arithmetic.


Or do you mean that you need a *primary* QM world and brain?
In that case, my consciousness would not be invariant when the quantum
brain is entirely simulated by a classical machine, and comp is
made false.


The latter.  But why the restriction to "my consciousness"?  Only a  
small fraction of thought is conscious.


Consciousness is what will select the arithmetical or computer-science-
theoretical branches in the arithmetical reality. It is not a fraction
of thought which is conscious; it is a person, supported by infinities
of "unconscious" computations.
If you opt for the latter, you can't accept a digital brain, not even
a quantum one, per computatio. You negate comp by putting something
magical, needed for your consciousness, in the quantum material reality.
In that non-comp reality you are back with all questions unsolved:
where does that QM reality come from, how do you singularize actual
conscious experiences in it, etc.


Bruno




Brent



http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Code length = probability distribution

2012-10-27 Thread Russell Standish
On Thu, Oct 25, 2012 at 05:13:50PM +0200, Bruno Marchal wrote:
> 
> Oh yes, I remember that you did agree once with the 323 principle,
> but I forget what is your problem with the movie-graph/step-8, then.
> If you find the time, I am please if you can elaborate. I think
> Russell too is not yet entirely convinced.
> 
> Bruno
> 

Indeed I still have problems with step 8, and want to get back to
that. But I want to do it when you're not exhausted arguing
other points... Part of the problem is that I already agree with the
reversal at step 7, so in some sense step 8 is redundant for me.

There may be an issue with the interpretation of the 323 principle. I
have no problem with the removal of a register that is never
physically used in the calculation of a conscious computation. The
nuances arise when we consider Everett's many-minds picture. A
counterfactually used register will still be used by one of my
differentiated copies, and ISTM that these alternate differentiated
minds are essential to my consciousness, and that removing the
counterfactually-used register in this case may well prevent my
consciousness.

To sum up, a counterfactually-used register is being physically used
if many-worlds is accepted, and therefore the 323 principle isn't
applicable.

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: Code length = probability distribution

2012-10-27 Thread meekerdb

On 10/27/2012 7:56 AM, Bruno Marchal wrote:


On 26 Oct 2012, at 21:30, meekerdb wrote:


On 10/26/2012 6:57 AM, Bruno Marchal wrote:
Oh yes, I remember that you did agree once with the 323 principle, but I forget what 
your problem with the movie-graph/step-8 is, then. If you find the time, I would be pleased 
if you could elaborate. I think Russell too is not yet entirely convinced.


What bothers me about it is that counterfactuals are virtually infinite.  So to make 
the argument go through I think it implicitly requires a whole 'world';


Not really, as here you can use Maudlin, who showed that the counterfactuals do not 
require physical activity. In MGA, if you give a role to the counterfactuals, you 
violate the 323 principle, so that you attribute a functional role in a particular 
computation to objects having no physical activity in the actual computation. 


But I'm not sure about the 323 principle in a QM world.


Even QM worlds, with QM observers, even having quantum brains, are emulated in the UD, or in 
arithmetic. If the 323 principle does not hold for them, it might mean that QM is the 
winning computation, but then you have to explain this from arithmetic.


Or do you mean that you need a *primary* QM world and brain?  In that case, my 
consciousness would not be invariant when the quantum brain is entirely simulated by a 
classical machine, and comp is made false.


The latter.  But why the restriction to "my consciousness"?  Only a small fraction of 
thought is conscious.


Brent




Re: Code length = probability distribution

2012-10-27 Thread Bruno Marchal


On 26 Oct 2012, at 21:30, meekerdb wrote:


On 10/26/2012 6:57 AM, Bruno Marchal wrote:


Oh yes, I remember that you did agree once with the 323
principle, but I forget what your problem with the movie-graph/step-8
is, then. If you find the time, I would be pleased if you can
elaborate. I think Russell too is not yet entirely convinced.


What bothers me about it is that counterfactuals are virtually  
infinite.  So to make the argument go through I think it  
implicitly requires a whole 'world';


Not really, as here you can use Maudlin, who showed that the
counterfactuals do not require physical activity. In MGA, if you
give a role to the counterfactuals, you violate the 323 principle,
so that you attribute a functional role in a particular computation
to objects having no physical activity in the actual computation.


But I'm not sure about the 323 principle in a QM world.


Even QM worlds, with QM observers, even having quantum brains, are emulated
in the UD, or in arithmetic. If the 323 principle does not hold for
them, it might mean that QM is the winning computation, but then you
have to explain this from arithmetic.


Or do you mean that you need a *primary* QM world and brain?  In
that case, my consciousness would not be invariant when the quantum brain is
entirely simulated by a classical machine, and comp is made false.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Code length = probability distribution

2012-10-26 Thread meekerdb

On 10/26/2012 6:57 AM, Bruno Marchal wrote:
Oh yes, I remember that you did agree once with the 323 principle, but I forget what 
your problem with the movie-graph/step-8 is, then. If you find the time, I would be pleased 
if you could elaborate. I think Russell too is not yet entirely convinced.


What bothers me about it is that counterfactuals are virtually infinite.  So to make 
the argument go through I think it implicitly requires a whole 'world';


Not really, as here you can use Maudlin, who showed that the counterfactuals do not 
require physical activity. In MGA, if you give a role to the counterfactuals, you violate 
the 323 principle, so that you attribute a functional role in a particular computation 
to objects having no physical activity in the actual computation. 


But I'm not sure about the 323 principle in a QM world.

Brent




Re: Code length = probability distribution

2012-10-26 Thread Bruno Marchal


On 25 Oct 2012, at 19:49, meekerdb wrote:


On 10/25/2012 8:13 AM, Bruno Marchal wrote:



Brent wrote:
If you're going to explain purpose, meaning, qualia,  
thoughts,...you need to start from something simpler that does not  
assume those things.  Bruno proposes to explain  matter as well,  
so he has to start without matter.


Actually I deduce the absence of matter from comp. If we bet on  
comp, we have no other choice than to explain matter from dream  
coherence notions. We can add matter, but it would be like  
invisible horses, and vision is a first person experience and it  
relies on the infinities of computation in arithmetic.


If you are with John Clark, and me, on comp, then you have to show  
a flaw in UDA if you disagree with this. At least Clark tells us  
where he stops in UDA (step 3, too bad nobody understands his  
point, which seems an obvious  confusion of 1 and 3-views).


I think you did follow the UDA up to step seven. Is it really step 8
which still poses a problem? It is a bit more subtle; some
people have some difficulty there. Let us discuss them, or at least
find where we disagree.


Oh yes, I remember that you did agree once with the 323 principle,
but I forget what your problem with the movie-graph/step-8 is,
then. If you find the time, I would be pleased if you could elaborate. I
think Russell too is not yet entirely convinced.


What bothers me about it is that counterfactuals are virtually  
infinite.  So to make the argument go through I think it implicitly  
requires a whole 'world';


Not really, as here you can use Maudlin, who showed that the
counterfactuals do not require physical activity. In MGA, if you
give a role to the counterfactuals, you violate the 323 principle, so
that you attribute a functional role in a particular computation to
objects having no physical activity in the actual computation.




which is why I suspect people, consciousness, etc. can only exist in  
a world of matter (note that I'm not saying *primitive* ur-stuff)  
that can embody the computation.


But then the consequences can follow. The UD computes all possible
couples subject-environment, including infinite environments (the
first person cannot distinguish an infinite environment from a
sequence of computations going through his state in bigger and bigger
environments). This is already used in step 7.
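(As a side illustration, the dovetailing idea itself is a simple algorithm: interleave the execution of every program so each eventually gets unboundedly many steps, with none blocking the others. The sketch below is my own minimal stand-in in Python, with generators playing the role of "programs"; it is not the actual UD, just the interleaving pattern.)

```python
def program(n):
    """Stand-in for the n-th program: yields its successive states forever."""
    state = 0
    while True:
        state += n + 1
        yield state

def dovetail(max_phase):
    """Phase k starts program k, then advances every started program one step.

    As max_phase grows, every program index appears infinitely often,
    so no single non-halting program starves the others.
    """
    runs = []    # generators started so far
    trace = []   # (program index, new state) in execution order
    for k in range(max_phase):
        runs.append(program(k))
        for i, p in enumerate(runs):
            trace.append((i, next(p)))
    return trace

# Four phases: 1 + 2 + 3 + 4 = 10 interleaved steps over programs 0..3.
steps = dovetail(4)
```

Running `dovetail(4)` gives 10 steps touching all four started programs; increasing the phase count extends every program's run.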



That you use it to conclude that no matter (not even secondary  
matter) is needed is misleading.


Secondary matter can have a role, and certainly has a role, as the
"real" computations are using it. Our brains remain material, with
comp. The requirement is that such materiality is secondary to all the
immaterial computations existing in arithmetic.




But I need to read it again.


OK.
MGA is not easy, and I am sure it can be improved.

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Code length = probability distribution

2012-10-26 Thread Bruno Marchal


On 25 Oct 2012, at 19:10, Stephen P. King wrote:


On 10/25/2012 11:13 AM, Bruno Marchal wrote:
If you're going to explain purpose, meaning, qualia,  
thoughts,...you need to start from something simpler that does not  
assume those things.  Bruno proposes to explain  matter as well,  
so he has to start without matter.


Actually I deduce the absence of matter from comp. If we bet on  
comp, we have no other choice than to explain matter from dream  
coherence notions. We can add matter, but it would be like  
invisible horses, and vision is a first person experience and it  
relies on the infinities of computation in arithmetic.

Dear Bruno,

   No, matter would not be "like invisible horses". It quantifies  
the "resources" required for a computation to run. You safely  
neglect this in your explanations because you set COMP in the  
platonic realm of ideas, where time, space and limitations do not  
exist. I am trying to get your result to work in conditions that  
cannot ignore resource requirements.



I was talking about primitive matter. Arithmetic is full of local
relative resources. No problem.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Code length = probability distribution

2012-10-25 Thread meekerdb

On 10/25/2012 8:13 AM, Bruno Marchal wrote:


On 24 Oct 2012, at 22:20, meekerdb wrote:


On 10/24/2012 11:58 AM, Alberto G. Corona wrote:



2012/10/23 Bruno Marchal <marc...@ulb.ac.be>


On 22 Oct 2012, at 21:50, Alberto G. Corona wrote:




2012/10/22 Stephen P. King <stephe...@charter.net>

On 10/22/2012 2:38 AM, Alberto G. Corona wrote:



2012/10/22 Russell Standish <li...@hpcoders.com.au>

On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
> Hi Rusell,
>
> How does Schmidhuber consider the physicality of resources?
>
> --
> Onward!
>
> Stephen

No. The concept doesn't enter consideration. What he considers is that
the Great Programmer has finite (or perhaps bounded) resources, which
gives an additional boost to algorithms that run efficiently.
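(The thread title's identification of code length with probability, and the resource bias just mentioned, can be sketched as toy weight functions. The `2**-(L + log2 t)` form below is a simplified stand-in for Schmidhuber's speed prior, not its exact definition, and the numbers are illustrative only.)

```python
from math import log2

def length_prior(length_bits):
    """Solomonoff-style weight: a program of length L contributes 2**-L,
    so shorter programs are exponentially more probable."""
    return 2.0 ** (-length_bits)

def speed_prior(length_bits, runtime_steps):
    """Resource-bounded variant: also discount by runtime, so among
    equally short programs the fast ones dominate."""
    return 2.0 ** (-(length_bits + log2(runtime_steps)))

# A short, fast program outweighs an equally short but slow one:
fast_short = speed_prior(10, 2)     # weight 2**-11
slow_short = speed_prior(10, 1024)  # weight 2**-20
```

The point being debated is whether this runtime discount must come from a physical substrate, or can be recovered inside the everything-style picture.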

that's the problem that, I insist, has a natural solution considering the
computational needs of living beings under natural selection, without
resorting to an everything-theory of reality based on a UD algorithm, like
the Schmidhuber one.

--


Dear Alberto,

My suspicion is that there does *not* exist a single global 
computation
of the behavior of living (or other) beings and that "natural 
selection" is a
local computation between each being and its environment. We end up 
with a
model where there are many computations occurring concurrently and 
there is
no single computation that can dovetail all of them together such that a
picture of the universe can be considered as a single simulation 
running on a
single computer except for a very trivial case (where the total 
universe is
in a bound state and at maximum equilibrium).

Yes, that's also what I think. These computations are material, in the sense
that they are subject to limitation of resources (nervous signal speeds, chemical
equilibria, diffusion of hormones, etc.). So the bias toward a low Kolmogorov
complexity of a habitable universe can be naturally deduced from that.

Natural selection is the mechanism for making discoveries; individual lives
incorporate these discoveries, called adaptations. A cat that jumps to catch a
fish has not discovered the laws of Newton. Instead, evolution has found a
way to modulate the force exerted by the muscles according to how long the jump
must be, and depending on the weight of the cat (which is calibrated by playing
at an early age).

But this technique depends on the linearity and continuity of the law of Newton
over short distances. If the law of Newton were more complicated, that would not
be possible. So a low complexity of the macroscopic laws permits a low
complexity, and a low use of resources, in the living computers that deal with
them, and a faster discovery of adaptations by natural selection. But that
complexity has an upper limit; linearity seems to be a requirement for the
operation of natural selection in the search for adaptations.


http://ilevolucionista.blogspot.com.es/2008/06/ockham-razor-and-genetic-algoritms-life.html




I kind of agree with all that you say here, and with the basic philosophy. But I
think that what you describe admits a more general description, in which the laws
of physics are themselves selected by a process similar to, but more general than,
evolution. It makes me think that life (and brains, at some different level) is
what happens when a universal system mirrors itself. A universal machine is a
dynamical mirror, and life can develop once you put the dynamical mirror in front
of itself (again a case of diagonalization). I think I follow your philosophy, but
apply it in arithmetic and/or computer science.


I also envision a kind of selection of the mind over matter, since the most basic 
notion of existence implies an observer, that is, a mind; and a mind in a universe 
where history has a meaning (which discards Boltzmann brains), holding a kind of 
intelligence (since intelligence permits making use of experience), imposes very strong 
anthropic restrictions not only on the nature of the physical laws, as I said, but on 
their mathematicity. By mathematicity I mean a reuse of the same simple 
structures at different levels of reality. I mean that the simplest mathematical 
structures are more represented in the structure of reality than complicated ones, to 
minimize complexity.


But aren't those all the same conclusions that would arise from assuming that 
mathematics and physical laws are our inventions for describing and reasoning about the 
world, and they are simple because that makes them understandable; they reflect our 
limited cognitive ability to think about only a few things at a time.

Re: Code length = probability distribution

2012-10-25 Thread Stephen P. King

On 10/25/2012 11:13 AM, Bruno Marchal wrote:
If you're going to explain purpose, meaning, qualia, thoughts,...you 
need to start from something simpler that does not assume those 
things.  Bruno proposes to explain  matter as well, so he has to 
start without matter.


Actually I deduce the absence of matter from comp. If we bet on comp, 
we have no other choice than to explain matter from dream coherence 
notions. We can add matter, but it would be like invisible horses, and 
vision is a first person experience and it relies on the infinities of 
computation in arithmetic.

Dear Bruno,

No, matter would not be "like invisible horses". It quantifies the 
"resources" required for a computation to run. You safely neglect this 
in your explanations because you set COMP in the platonic realm of 
ideas, where time, space and limitations do not exist. I am trying to 
get your result to work in conditions that cannot ignore resource 
requirements.


--
Onward!

Stephen





Re: Code length = probability distribution

2012-10-25 Thread Bruno Marchal


On 24 Oct 2012, at 22:20, meekerdb wrote:


On 10/24/2012 11:58 AM, Alberto G. Corona wrote:




2012/10/23 Bruno Marchal 

On 22 Oct 2012, at 21:50, Alberto G. Corona wrote:




2012/10/22 Stephen P. King 
On 10/22/2012 2:38 AM, Alberto G. Corona wrote:



2012/10/22 Russell Standish 
On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
> Hi Rusell,
>
> How does Schmidhuber consider the physicality of resources?
>
> --
> Onward!
>
> Stephen

No. The concept doesn't enter consideration. What he considers is  
that
the Great Programmer has finite (or perhaps bounded resources),  
which

gives an additional boost to algorithms that run efficiently.

that's the problem that, I insist, has a natural solution
considering the computational needs of living beings under
natural selection, without resorting to an everything-theory of
reality based on a UD algorithm, like the Schmidhuber one.

--

Dear Alberto,

My suspicion is that there does not exist a single global  
computation of the behavior of living (or other) beings and that  
"natural selection" is a local computation between each being and  
its environment. We end up with a model where there are many  
computations occurring concurrently and there is no single  
computation that can dovetail all of them together such that a  
picture of the universe can be considered as a single simulation  
running on a single computer except for a very trivial case (where  
the total universe is in a bound state and at maximum equilibrium).


Yes, that's also what I think. These computations are material,
in the sense that they are subject to limitation of resources
(nervous signal speeds, chemical equilibria, diffusion of
hormones, etc.). So the bias toward a low Kolmogorov complexity of a
habitable universe can be naturally deduced from that.


Natural selection is the mechanism for making discoveries;
individual lives incorporate these discoveries, called adaptations.
A cat that jumps to catch a fish has not discovered the laws of
Newton. Instead, evolution has found a way to modulate the
force exerted by the muscles according to how long the jump must
be, and depending on the weight of the cat (which is calibrated by
playing at an early age).


But this technique depends on the linearity and continuity of the
law of Newton over short distances. If the law of Newton were more
complicated, that would not be possible. So a low complexity of
the macroscopic laws permits a low complexity, and a low use of
resources, in the living computers that deal with them, and a
faster discovery of adaptations by natural selection. But that
complexity has an upper limit; linearity seems to be a requirement
for the operation of natural selection in the search for
adaptations.


 
http://ilevolucionista.blogspot.com.es/2008/06/ockham-razor-and-genetic-algoritms-life.html




I kind of agree with all that you say here, and with the basic
philosophy. But I think that what you describe admits a more
general description, in which the laws of physics are themselves
selected by a process similar to, but more general than, evolution. It
makes me think that life (and brains, at some different level) is
what happens when a universal system mirrors itself. A universal
machine is a dynamical mirror, and life can develop once you put
the dynamical mirror in front of itself (again a case of
diagonalization). I think I follow your philosophy, but apply it in
arithmetic and/or computer science.


I also envision a kind of selection of the mind over matter,
since the most basic notion of existence implies an observer, that
is, a mind; and a mind in a universe where history has a meaning
(which discards Boltzmann brains), holding a kind of intelligence
(since intelligence permits making use of experience), imposes very
strong anthropic restrictions not only on the nature of the physical
laws, as I said, but on their mathematicity. By mathematicity
I mean a reuse of the same simple structures at different levels of
reality. I mean that the simplest mathematical structures are
more represented in the structure of reality than
complicated ones, to minimize complexity.


But aren't those all the same conclusions that would arise from  
assuming that mathematics and physical laws are our inventions for  
describing and reasoning about the world and they are simple because  
that makes them understandable; they reflect our limited cognitive  
ability to think about only a few things at a time.  Notice that  
physics, as it has become more mathematical and abstract, has left  
more and more to contingency and the randomness of QM.  So  
physicists no longer propose to answer, "Why are there just eight  
planets?" or "Why is there a Moon?"




Now I am just afraid, to talk frankly, that it looks like you have
a reductionist conception of numbers and machines, which does not
take into account the discovery of the universal machine (by the
Post-Church-Kleene-Turing thesis), which makes you miss that your
philosophy might be the natural philosophy of all universal numbers.

Re: Code length = probability distribution

2012-10-25 Thread Bruno Marchal


On 24 Oct 2012, at 20:58, Alberto G. Corona wrote:




2012/10/23 Bruno Marchal 

On 22 Oct 2012, at 21:50, Alberto G. Corona wrote:




2012/10/22 Stephen P. King 
On 10/22/2012 2:38 AM, Alberto G. Corona wrote:



2012/10/22 Russell Standish 
On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
> Hi Rusell,
>
> How does Schmidhuber consider the physicality of resources?
>
> --
> Onward!
>
> Stephen

No. The concept doesn't enter consideration. What he considers is  
that
the Great Programmer has finite (or perhaps bounded resources),  
which

gives an additional boost to algorithms that run efficiently.

that's the problem that, I insist, has a natural solution
considering the computational needs of living beings under natural
selection, without resorting to an everything-theory of reality
based on a UD algorithm, like the Schmidhuber one.

--

Dear Alberto,

My suspicion is that there does not exist a single global  
computation of the behavior of living (or other) beings and that  
"natural selection" is a local computation between each being and  
its environment. We end up with a model where there are many  
computations occurring concurrently and there is no single  
computation that can dovetail all of them together such that a  
picture of the universe can be considered as a single simulation  
running on a single computer except for a very trivial case (where  
the total universe is in a bound state and at maximum equilibrium).


Yes, that's also what I think. These computations are material, in
the sense that they are subject to limitation of resources (nervous
signal speeds, chemical equilibria, diffusion of hormones, etc.). So
the bias toward a low Kolmogorov complexity of a habitable
universe can be naturally deduced from that.


Natural selection is the mechanism for making discoveries;
individual lives incorporate these discoveries, called adaptations.
A cat that jumps to catch a fish has not discovered the laws of
Newton. Instead, evolution has found a way to modulate the
force exerted by the muscles according to how long the jump must
be, and depending on the weight of the cat (which is calibrated by
playing at an early age).


But this technique depends on the linearity and continuity of the
law of Newton over short distances. If the law of Newton were more
complicated, that would not be possible. So a low complexity of the
macroscopic laws permits a low complexity, and a low use of
resources, in the living computers that deal with them, and a faster
discovery of adaptations by natural selection. But that complexity
has an upper limit; linearity seems to be a requirement for the
operation of natural selection in the search for adaptations.


 
http://ilevolucionista.blogspot.com.es/2008/06/ockham-razor-and-genetic-algoritms-life.html




I kind of agree with all that you say here, and with the basic
philosophy. But I think that what you describe admits a more general
description, in which the laws of physics are themselves selected by
a process similar to, but more general than, evolution. It makes me think
that life (and brains, at some different level) is what happens when a
universal system mirrors itself. A universal machine is a dynamical
mirror, and life can develop once you put the dynamical mirror in
front of itself (again a case of diagonalization). I think I follow
your philosophy, but apply it in arithmetic and/or computer science.


I also envision a kind of selection of the mind over matter,
since the most basic notion of existence implies an observer, that
is, a mind; and a mind in a universe where history has a meaning
(which discards Boltzmann brains), holding a kind of intelligence
(since intelligence permits making use of experience), imposes very
strong anthropic restrictions not only on the nature of the physical
laws, as I said, but on their mathematicity. By mathematicity
I mean a reuse of the same simple structures at different levels of
reality. I mean that the simplest mathematical structures are
more represented in the structure of reality than complicated ones,
to minimize complexity.


Now I am just afraid, to talk frankly, that it looks like you have a
reductionist conception of numbers and machines, which does not take
into account the discovery of the universal machine (by the Post-
Church-Kleene-Turing thesis), which makes you miss that your
philosophy might be the natural philosophy of all universal numbers.
(I probably exaggerate my point in the attempt to be short.)


I do not discard your point of view. The difference is that I go the
easy path, from inside to outside, in a Cartesian process, one may call
it. So my interest is centered not on a simple production
principle, explaining the human experience from it, but on going from
consciousness (with some leaps of faith) out to ascertain the nature
of what is known with the aid of som

Re: Code length = probability distribution

2012-10-24 Thread Bruno Marchal


On 22 Oct 2012, at 20:13, Stephen P. King wrote:


On 10/22/2012 2:38 AM, Alberto G. Corona wrote:



2012/10/22 Russell Standish 
On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
> Hi Rusell,
>
> How does Schmidhuber consider the physicality of resources?
>
> --
> Onward!
>
> Stephen

No. The concept doesn't enter consideration. What he considers is  
that

the Great Programmer has finite (or perhaps bounded resources), which
gives an additional boost to algorithms that run efficiently.

that's the problem that, I insist, has a natural solution
considering the computational needs of living beings under natural
selection, without resorting to an everything-theory of reality
based on a UD algorithm, like the Schmidhuber one.

--

Dear Alberto,

My suspicion is that there does not exist a single global  
computation of the behavior of living (or other) beings and that  
"natural selection" is a local computation between each being and  
its environment. We end up with a model where there are many  
computations occurring concurrently and there is no single  
computation that can dovetail all of them together such that a  
picture of the universe can be considered as a single simulation  
running on a single computer except for a very trivial case (where  
the total universe is in a bound state and at maximum equilibrium).



I agree. But a UD, or just arithmetic, defines a superstructure
containing this, and consciousness, or the first person, is what will
select it. This explains why the world will look computational, and
still never be entirely computational. And indeed, we cannot simulate
with a computer even just a quantum bit observation without
simulating the observer looking at that qubit.
Note that this does NOT make QM, nor the comp-physics, violate the Church
thesis (at least assuming comp), as we can simulate exactly a qubit's
behavior coupled to an observer (but we have to include the observer
in the simulation).


Bruno



http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Code length = probability distribution

2012-10-24 Thread meekerdb

On 10/24/2012 11:58 AM, Alberto G. Corona wrote:



2012/10/23 Bruno Marchal mailto:marc...@ulb.ac.be>>


On 22 Oct 2012, at 21:50, Alberto G. Corona wrote:




2012/10/22 Stephen P. King mailto:stephe...@charter.net>>

On 10/22/2012 2:38 AM, Alberto G. Corona wrote:



2012/10/22 Russell Standish mailto:li...@hpcoders.com.au>>

On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
> Hi Rusell,
>
> How does Schmidhuber consider the physicality of resources?
>
> --
> Onward!
>
> Stephen

No. The concept doesn't enter consideration. What he considers is 
that
the Great Programmer has finite (or perhaps bounded resources), 
which
gives an additional boost to algorithms that run efficiently.

that's the problem that, I insist, has a natural solution considering
the computational needs of living beings under natural selection, without
resorting to an everything-theory of reality based on a UD algorithm,
like Schmidhuber's.

--


Dear Alberto,

My suspicion is that there does *not* exist a single global 
computation of
the behavior of living (or other) beings and that "natural selection" 
is a
local computation between each being and its environment. We end up 
with a
model where there are many computations occurring concurrently and 
there is no
single computation that can dovetail all of them together such that a 
picture
of the universe can be considered as a single simulation running on a 
single
computer except for a very trivial case (where the total universe is in 
a bound
state and at maximum equilibrium).

Yes, that's also what I think. These computations are material, in the sense that
they are subject to limitations of resources (nervous signal speeds, chemical
equilibria, diffusion of hormones, etc.). So the bias toward a low Kolmogorov
complexity of a habitable universe can be naturally deduced from that.

Natural selection is the mechanism for making discoveries; individual lives
incorporate these discoveries, called adaptations. A cat that jumps to catch
a fish has not discovered the laws of Newton. Instead, evolution has found a
way to modulate the force exerted by the muscles according to how long the
jump must be, and depending on the weight of the cat (which is calibrated by
playing at an early age).

But this technique depends on the linearity and continuity of the laws of
Newton over short distances. If the laws of Newton were more complicated, that
would not be possible. So a low complexity of the macroscopic laws permits a
low complexity and a low use of resources in the living computers that deal
with them, and a faster discovery of adaptations by natural selection. But
that complexity has an upper limit; linearity seems to be a requirement for
the operation of natural selection in the search for adaptations.


http://ilevolucionista.blogspot.com.es/2008/06/ockham-razor-and-genetic-algoritms-life.html




I kind of agree with all what you say here, and on the basic philosophy. 
But I think
that what you describe admits a more general description, in which the laws 
of
physics are themselves selected by a process similar but more general than
evolution. It makes me think that life (and brains at some different level) 
is what
happen when a universal system mirrors itself. A universal machine is a 
dynamical
mirror, and life can develop once you put the dynamical mirror in front of 
itself
(again a case of diagonalization). I think I follow your philosophy, but 
apply it in
arithmetic and/or computer science.


I also envision a kind of selection of the mind over matter, since the most basic 
notion of existence implies an observer, that is, a mind. And a mind in a universe 
where history has a meaning (which discards Boltzmann brains), and which holds a kind 
of intelligence (since intelligence permits making use of experience), imposes very 
strong anthropic restrictions not only on the nature of the physical laws, as I said, 
but on their mathematicity. By mathematicity I mean a reuse of the same simple 
structures at different levels of reality: the simplest mathematical structures are 
more represented in the structure of reality than complicated ones, to minimize 
complexity.


But aren't those all the same conclusions that would arise from assuming that mathematics 
and physical laws are our inventions for describing and reasoning about the world and they 
are simple because that makes them understandable; they reflect our limited cognitive 
ability to think about only a few things at a time.  Notice that physics, as it has become 
more math

Re: Code length = probability distribution

2012-10-23 Thread Bruno Marchal


On 22 Oct 2012, at 21:50, Alberto G. Corona wrote:




2012/10/22 Stephen P. King 
On 10/22/2012 2:38 AM, Alberto G. Corona wrote:



2012/10/22 Russell Standish 
On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
> Hi Rusell,
>
> How does Schmidhuber consider the physicality of resources?
>
> --
> Onward!
>
> Stephen

No. The concept doesn't enter consideration. What he considers is that
the Great Programmer has finite (or perhaps bounded) resources, which
gives an additional boost to algorithms that run efficiently.

that's the problem that, I insist, has a natural solution
considering the computational needs of living beings under natural
selection, without resorting to an everything-theory of reality
based on a UD algorithm, like Schmidhuber's.

--

Dear Alberto,

My suspicion is that there does not exist a single global  
computation of the behavior of living (or other) beings and that  
"natural selection" is a local computation between each being and  
its environment. We end up with a model where there are many  
computations occurring concurrently and there is no single  
computation that can dovetail all of them together such that a  
picture of the universe can be considered as a single simulation  
running on a single computer except for a very trivial case (where  
the total universe is in a bound state and at maximum equilibrium).


Yes, that's also what I think. These computations are material, in
the sense that they are subject to limitations of resources (nervous
signal speeds, chemical equilibria, diffusion of hormones, etc.). So
the bias toward a low Kolmogorov complexity of a habitable universe
can be naturally deduced from that.


Natural selection is the mechanism for making discoveries;
individual lives incorporate these discoveries, called adaptations. A
cat that jumps to catch a fish has not discovered the laws of Newton.
Instead, evolution has found a way to modulate the force exerted
by the muscles according to how long the jump must be, and
depending on the weight of the cat (which is calibrated by playing
at an early age).


But this technique depends on the linearity and continuity of the
laws of Newton over short distances. If the laws of Newton were more
complicated, that would not be possible. So a low complexity of the
macroscopic laws permits a low complexity and a low use of
resources in the living computers that deal with them, and a faster
discovery of adaptations by natural selection. But that complexity
has an upper limit; linearity seems to be a requirement for the
operation of natural selection in the search for adaptations.


 
http://ilevolucionista.blogspot.com.es/2008/06/ockham-razor-and-genetic-algoritms-life.html




I kind of agree with all what you say here, and on the basic  
philosophy. But I think that what you describe admits a more general  
description, in which the laws of physics are themselves selected by a  
process similar but more general than evolution. It makes me think  
that life (and brains at some different level) is what happen when a  
universal system mirrors itself. A universal machine is a dynamical  
mirror, and life can develop once you put the dynamical mirror in  
front of itself (again a case of diagonalization). I think I follow  
your philosophy, but apply it in arithmetic and/or computer science.


Now I am just afraid, to talk frankly, that it looks like you have a
reductionist conception of numbers and machines, which does not take
into account the discovery of the universal machine (by the
Post-Church-Kleene-Turing thesis), and this makes you miss that your
philosophy might be the natural philosophy of all universal numbers.
(I probably exaggerate my point in an attempt to be short.)


We can already talk with the "Löbian numbers". I already recognize
myself. I already don't take them as zombies. It does not matter that
the talk admits a local atemporal description. Arithmetic is full of
life and dreams.


And if we limit ourselves, non constructively (it is the price) to the  
*arithmetically sound* Löbian numbers, we get an arithmetical  
interpretation of a platonist conception of reality. Decidable on its  
propositional parts.


In that conception physics is the border of the universal mind, which  
by assuming comp, might be the mind of the universal machine.


Can that philosophy helps to solve the 1p measure problems, or guide  
us in the "human" interpretation of the arithmetical interpretation?  
Hard to say. Plausible. There will be different methods, and insight.



Bruno

http://iridia.ulb.ac.be/~marchal/




Re: Code length = probability distribution

2012-10-22 Thread Stephen P. King

On 10/22/2012 5:50 PM, Russell Standish wrote:
Schmidhuber does not consider ontology at all. He merely asks the 
question "What if we're living inside a universal dovetailer?".


Hi Russell,

That is an ontological question in my thinking, but I will not 
quibble this point.


He doesn't ask what the machine running the dovetailer is made of, nor 
what the programmer that sets the machine in motion is made of. These 
can be taken as literally or figuratively as one likes, as they have no 
impact on the conclusions.


OK. I am reading his ftp://ftp.idsia.ch/pub/juergen/coltspeed.pdf now.


In his second paper, 


Which one is that? http://www.idsia.ch/~juergen/everything/ ?


he considers the question, what if the great programmer has limited 
resources? I'm not sure I really follow him there - a dovetailer 
running on a finitely resourced machine is no longer universal.


I disagree; it is still capable of universality, but it is a bounded 
version of universality. I accept physical functional equivalence within 
bounds of equal quantities of resources, but to take this to the limit 
of no resources, or to ignore physical resources altogether, is going too 
far into metaphysics for some. I am OK with it, but I demand that if we 
are going to neglect the physical then we must be consistent: we cannot 
carry into Platonia anything that requires supervenience on physical 
processes. We simply cannot talk about cake and have the cake not exist!


Also, computational runtimes should be invisible to the denizens of 
the computation, as Bruno points out in his UDA.


Sure, but to us entities that are asking questions about the 
general properties of computations, runtimes do matter! For example, if 
we ask whether a computational simulation of A and a computational 
simulation of B are capable of having an arbitrarily long string of 
bisimulations between individual actions within their respective 
simulations, then there is an issue of synchrony between them that is 
sensitive to runtimes. Try modeling the interactions of banking customers 
and ATMs such that the account totals are always up to date and correct. 
The single-computer model fails miserably! This is the problem of 
concurrency, which most "theoretical" computer scientists fail to 
recognize even as existing.
    Independence of runtime properties only follows if we are 
considering the goings-on inside a single computational simulation 
that is generating all aspects. I am trying to distinguish between these 
two possibilities, single vs. multiple and separable, as I see the 
singular computation hypothesis (which Bruno's UDA seems to assume) as 
deeply problematic: it implies inescapable solipsism for the 1p of such. 
For example, what does a "plurality of minds" mean in a universe where 
there is a single computation "running" everything?
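The lost-update problem behind the ATM example can be sketched in a few lines of Python. This is a toy illustration of my own, not anything from the thread: the shared balance, the thread count, and the deposit loop are all made up. Two threads share one account; the unsynchronized read-modify-write can lose updates, while a lock serializes them:

```python
import threading

balance = 0
lock = threading.Lock()

def deposit_unsafely(n):
    """n deposits via a read-modify-write that is NOT atomic."""
    global balance
    for _ in range(n):
        tmp = balance      # read
        tmp += 1           # modify
        balance = tmp      # write: another thread may have written in between

def deposit_safely(n):
    """n deposits with the read-modify-write serialized by a lock."""
    global balance
    for _ in range(n):
        with lock:
            balance += 1

def run(worker, n=100_000):
    """Run two concurrent 'ATMs' and return the final balance."""
    global balance
    balance = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return balance

print("unsafe:", run(deposit_unsafely))  # can be < 200000: updates lost
print("safe:  ", run(deposit_safely))    # always 200000
```

The unsafe run may happen to reach 200000 on a given execution (thread scheduling is nondeterministic), but nothing guarantees it; only the locked version is correct by construction.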



--
Onward!

Stephen




Re: Code length = probability distribution

2012-10-22 Thread Russell Standish
On Mon, Oct 22, 2012 at 01:45:11PM -0400, Stephen P. King wrote:
> On 10/22/2012 2:32 AM, Russell Standish wrote:
> >On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
> >>Hi Rusell,
> >>
> >> How does Schmidhuber consider the physicality of resources?
> >>
> >>-- 
> >>Onward!
> >>
> >>Stephen
> >No. The concept doesn't enter consideration. What he considers is that
> >the Great Programmer has finite (or perhaps bounded resources), which
> >gives an additional boost to algorithms that run efficiently.
> >
> Hi Russell,
> 
> OK, so does Schmidhuber advocate an immaterialist ontology, as
> Bruno does? I need to read the paper again, it has been a long time...
> 
> -- 
> Onward!
> 
> Stephen

Schmidhuber does not consider ontology at all. He merely asks the
question "What if we're living inside a universal dovetailer?". He
doesn't ask what the machine running the dovetailer is made of, nor
what the programmer that sets the machine in motion is made of. These
can be taken as literally or figuratively as one likes, as they have no
impact on the conclusions.

In his second paper, he considers the question, what if the great
programmer has limited resources?

I'm not sure I really follow him there - a dovetailer running on a
finitely resourced machine is no longer universal. Also, computational
runtimes should be invisible to the denizens of the computation, as
Bruno points out in his UDA.

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: Code length = probability distribution

2012-10-22 Thread Alberto G. Corona
2012/10/22 Stephen P. King 

>  On 10/22/2012 2:38 AM, Alberto G. Corona wrote:
>
>
>
> 2012/10/22 Russell Standish 
>
>> On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
>> > Hi Rusell,
>> >
>> > How does Schmidhuber consider the physicality of resources?
>> >
>> > --
>> > Onward!
>> >
>> > Stephen
>>
>>  No. The concept doesn't enter consideration. What he considers is that
>> the Great Programmer has finite (or perhaps bounded resources), which
>> gives an additional boost to algorithms that run efficiently.
>>
> that's the problem that, I insist, has a natural solution considering
> the computational needs of living beings under natural selection, without
> resorting to an everything-theory of reality based on a UD algorithm, like
> Schmidhuber's.
>
>>  --
>>
>  Dear Alberto,
>
> My suspicion is that there does *not* exist a single global
> computation of the behavior of living (or other) beings and that "natural
> selection" is a local computation between each being and its environment.
> We end up with a model where there are many computations occurring
> concurrently and there is no single computation that can dovetail all of
> them together such that a picture of the universe can be considered as a
> single simulation running on a single computer except for a very trivial
> case (where the total universe is in a bound state and at maximum
> equilibrium).
Yes, that's also what I think. These computations are material, in the
sense that they are subject to limitations of resources (nervous signal
speeds, chemical equilibria, diffusion of hormones, etc.). So the bias toward
a low Kolmogorov complexity of a habitable universe can be naturally
deduced from that.

Natural selection is the mechanism for making discoveries; individual lives
incorporate these discoveries, called adaptations. A cat that jumps to catch
a fish has not discovered the laws of Newton. Instead, evolution has
found a way to modulate the force exerted by the muscles according to how
long the jump must be, and depending on the weight of the cat (which is
calibrated by playing at an early age).

But this technique depends on the linearity and continuity of the laws of
Newton over short distances. If the laws of Newton were more complicated,
that would not be possible. So a low complexity of the macroscopic laws
permits a low complexity and a low use of resources in the living computers
that deal with them, and a faster discovery of adaptations by natural
selection. But that complexity has an upper limit; linearity seems to be a
requirement for the operation of natural selection in the search for
adaptations.


http://ilevolucionista.blogspot.com.es/2008/06/ockham-razor-and-genetic-algoritms-life.html

Onward!
>
> Stephen
>



-- 
Alberto.




Re: Code length = probability distribution

2012-10-22 Thread Stephen P. King

On 10/22/2012 2:38 AM, Alberto G. Corona wrote:



2012/10/22 Russell Standish >


On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
> Hi Rusell,
>
> How does Schmidhuber consider the physicality of resources?
>
> --
> Onward!
>
> Stephen

No. The concept doesn't enter consideration. What he considers is that
the Great Programmer has finite (or perhaps bounded resources), which
gives an additional boost to algorithms that run efficiently.

that's the problem that, I insist, has a natural solution considering 
the computational needs of living beings under natural selection, 
without resorting to an everything-theory of reality based on a UD 
algorithm, like Schmidhuber's.


--


Dear Alberto,

My suspicion is that there does *not* exist a single global 
computation of the behavior of living (or other) beings and that 
"natural selection" is a local computation between each being and its 
environment. We end up with a model where there are many computations 
occurring concurrently and there is no single computation that can 
dovetail all of them together such that a picture of the universe can be 
considered as a single simulation running on a single computer except 
for a very trivial case (where the total universe is in a bound state 
and at maximum equilibrium).


--
Onward!

Stephen




Re: Code length = probability distribution

2012-10-22 Thread Stephen P. King

On 10/22/2012 2:32 AM, Russell Standish wrote:

On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:

Hi Rusell,

 How does Schmidhuber consider the physicality of resources?

--
Onward!

Stephen

No. The concept doesn't enter consideration. What he considers is that
the Great Programmer has finite (or perhaps bounded resources), which
gives an additional boost to algorithms that run efficiently.


Hi Russell,

OK, so does Schmidhuber advocate an immaterialist ontology, as 
Bruno does? I need to read the paper again, it has been a long time...


--
Onward!

Stephen





Re: Code length = probability distribution

2012-10-22 Thread Bruno Marchal


On 21 Oct 2012, at 22:03, Alberto G. Corona wrote:

This does not imply a reality created by a UD algorithm. It may  
be a mathematical universe, that is, a superset of the computable  
universes.


The computable universe is a subset of the mathematical universe.

Just compare: the computable universe corresponds to the Sigma_1  
sentences. It is already only semi- or partially computable, as the  
negation of a Sigma_1 sentence can already be non-computable (that's  
the Pi_1 sentences); then you have the Sigma_2, still more non-  
computable, and then the Sigma_3, etc. And that is only a tiny part  
of arithmetical truth, which is just *much* vaster than the  
computable, but can still be considered a tiny part of  
mathematical truth.





The measure problem in the UD algorithm translates to the problem of  
the effectiveness of the Occam razor, or the problem of the apparent  
simplicity of the physical laws, or, in other words, their low  
Kolmogorov complexity, which Solomonoff translates in his theory of  
inductive inference.


This can solve the 3p rabbit problems, but not the 1p rabbit problems.  
You will re-awake older discussions.


Kolmogorov complexity can play some role here, but does not solve the  
1p-problem, which is transformed into a justification of the stability  
of dreams, with still a possibility to define a notion of physical  
realm, perhaps by changing some definition.


Complexity exploits the simple/immune complementarity in the W_i.
Beliefs, knowledge, even observation, exploit the creativity/ 
productivity complementarity in the W_i.


Bennett notion of depth should play a role also, to justify a notion  
of cosmological history.


The whole problem of the 1p indeterminacy is that it does give a role  
to big programs. The little programs cannot get rid of them so easily  
(by a mere matter of complexity). We are ourselves already relatively  
rare *big* relative numbers.


Bruno






2012/10/21 Alberto G. Corona 
Ok

I don't remember the reason why Solomonoff reduces the probability of  
the programs according to their length in his theory of inductive  
inference. I read it some time ago. Solomonoff describes in his paper  
about inductive inference a more clear and direct solution for the  
measure problem, but I thought that it was somehow ad hoc.


I thought some time ago about the Solomonoff solution to the induction  
problem, and I reasoned as follows: living beings have to find, by  
evolution, at least partial and approximate inductive solutions in  
order to survive in their environment. This imposes a restriction on  
the laws of a local universe with life: it demands a low Kolmogorov  
complexity for the macroscopic laws. Otherwise these laws would  
not be discoverable, there would be no induction possible, so the  
living beings could not anticipate outcomes and they would not survive.


Solomonoff is a living being in a local universe, so shorter  
programs are more probable and add more weight for induction.
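The "shorter programs are more probable" idea in the thread title can be stated numerically. This is a toy sketch of my own with made-up lengths, not Solomonoff's actual construction (which sums weights over all programs on a universal machine): a program of length L bits contributes weight 2**-L, so code length directly fixes a probability.

```python
# Toy "code length = probability" weighting: a program of length L bits
# gets prior weight 2**-L, so shorter programs dominate the prior.
def prior_weight(program_length_bits):
    return 2.0 ** -program_length_bits

short = prior_weight(10)   # a 10-bit program
long_ = prior_weight(20)   # a 20-bit program
print(short / long_)       # 1024.0: 10 bits shorter => 2**10 times the weight
```

In other words, every extra bit of code halves the prior probability, which is why induction under such a prior favors the most compressed description of the data.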


I'm just thinking aloud. I will look again at Solomonoff's  
inductive inference. It was a great moment when I read it the first  
time.



2012/10/20 Russell Standish 
On Sat, Oct 20, 2012 at 09:16:54PM +0200, Alberto G. Corona  wrote:
> This is not a consequence of the Shannon optimal coding, in which the
> coding size of a symbol is inversely proportional to the logarithm of
> the frequency of the symbol?

Not quite. Traditional Shannon entropy uses the probability of a symbol,
whereas algorithmic complexity uses the probability of the whole
sequence. Only if the symbols are independently distributed are the
two the same. Usually, in most messages, the symbols are not i.i.d.
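That distinction can be illustrated with a small Python sketch (my own toy example, not from the thread): per-symbol coding charges -log2 p(symbol) per symbol, and this total matches -log2 P(whole message) only when the symbols really are independently distributed.

```python
import math

def symbol_code_length(msg, p):
    """Total bits if each symbol is coded independently with -log2 p bits."""
    return sum(-math.log2(p[c]) for c in msg)

def sequence_code_length(prob_of_message):
    """Bits implied by the probability of the whole sequence."""
    return -math.log2(prob_of_message)

p = {'a': 0.5, 'b': 0.5}
msg = "abab"

# If the symbols are i.i.d., the two code lengths agree:
iid_prob = 1.0
for c in msg:
    iid_prob *= p[c]                       # 0.5**4 = 0.0625
print(symbol_code_length(msg, p))          # 4.0 bits
print(sequence_code_length(iid_prob))      # 4.0 bits

# If 'a' and 'b' in fact strictly alternate, the whole sequence is far more
# probable (say only the first symbol is uncertain), so its code is shorter:
print(sequence_code_length(0.5))           # 1.0 bit
```

The gap between the per-symbol total (4 bits) and the whole-sequence length (1 bit) is exactly the regularity that symbol-wise Shannon coding cannot see but algorithmic complexity can.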

>
> What is exactly the comp measure problem?

A UD generates and executes all programs, many of which are
equivalent. So some programs are represented more than others. The
COMP measure is a function over all programs that captures this
variation in program representation.

Why should this be unique, independent of UD, or the universal Turing
machine it runs on? Because the UD executes every other UD, as well as
itself, the measure will be a limit over contributions from all UDs.
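The multiplicity being described can be shown with a toy enumeration (purely illustrative; the "interpreter" below is a made-up stand-in, not a universal machine): enumerate all bit-string programs up to some length, and give each output weight proportional to how many programs produce it. That counting-by-representation is the flavor of the COMP measure, stripped of everything that makes the real problem hard.

```python
from itertools import product

def interpret(bits):
    # Hypothetical trivial interpreter: the output is the parity of the
    # program. Many distinct programs map to the same output.
    return sum(bits) % 2

def measure(n):
    """Weight each output by the fraction of programs (length <= n) producing it."""
    counts, total = {}, 0
    for length in range(1, n + 1):
        for prog in product([0, 1], repeat=length):
            out = interpret(prog)
            counts[out] = counts.get(out, 0) + 1
            total += 1
    return {out: c / total for out, c in counts.items()}

print(measure(3))  # both outputs get weight 0.5 under this toy interpreter
```

A real UD measure would replace `interpret` with a universal machine, run programs by dovetailing (since some never halt), and take a limit over program lengths, which is why it remains an open computational question.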

Cheers
--


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au







--
Alberto.




Re: Code length = probability distribution

2012-10-21 Thread Alberto G. Corona
2012/10/22 Russell Standish 

> On Sun, Oct 21, 2012 at 10:03:48PM +0200, Alberto G. Corona  wrote:
> > This does not imply a reality created by a UD algorithm. It may be a
> > mathematical universe, that is, a superset of the computable universes.
> > The measure problem in the UD algorithm translates to the problem of
> > the effectiveness of the Occam razor, or the problem of the apparent
> > simplicity of the physical laws, or, in other words, their low
> > Kolmogorov complexity, which Solomonoff translates in his theory of
> > inductive inference.
> >
>
> I don't know. Around here, that problem is called the "White Rabbit
> problem", and IMHO is solved by Solomonoff's theory of inductive
> inference (with appropriate modern nuances). Not everybody agrees,
> however.
>
> The UD measure problem (as I understand it), is a computational
> question of what the measure actually is. Whatever it is, it will
> satisfy the properties of Solomonoff's universal prior, so will solve
> the WR problem. However, it is conjectured that the UD measure will
> differ in some measurable way from a universal prior obtained by
> postulating a uniform measure over the set of all infinite length
> binary strings treated as binary expansions of real numbers (see my
> paper Why Occams Razor for as an example work doing exactly that). If
> measurable, we have a means of selecting between different ensemble
> everything theories by experiment.
>

Thanks. I'll read your paper. Of all the mathematical considerations,
IMHO the Kolmogorov-Solomonoff research is the closest one to solving the
conundrum of the discovery of the macroscopic laws by computing living
beings in a mathematical universe (which may have an underlying
computational nature, but I think that this is a limitation that is not
justified).

>
> It is still very much an open problem.
>
>
> > 2012/10/21 Alberto G. Corona 
> >
> > > Ok
> > >
> > > I don't remember the reason why Solomonoff reduces the probability of the
> > > programs according with the length in is theory of inductive
> inference. I
> > > read it time ago. Solomonoff describes in his paper about inductive
> > > inference a more clear and direct solution for the measure problem.
> but I
> > > though that it was somehow ad hoc.
> > >
> > > I tough time ago about the Solomonof  solution to the induction
> problem,
> > > and I though  as such: living beings have to find, by evolution, at
> least
> > > partial and approximate inductive solutions in order to survive in
> their
> > > environment. This imposes a restriction on the laws of a local universe
> > > with life: It demand a low kolmogorov complexity for the
> *macroscopical* laws. Otherwise these laws would not be discoverable, there
> would be no
> > > induction possible, so the living beings could not anticipate outcomes
> and
> > > they woul not survive.
> > >
> > > Solomonoff is a living being in a local universe, so shorther programs
> are
> > > more probable and add more weight for induction.
> > >
> > > I'm just thinking aloud. I will look again to the solomonof inductive
> > > inference. I was a great moment when I read it the first time.
> > >
> > >
> > > 2012/10/20 Russell Standish 
> > >
> > >> On Sat, Oct 20, 2012 at 09:16:54PM +0200, Alberto G. Corona  wrote:
> > >> > This is not a consequence of the shannon optimum coding , in which
> the
> > >> > coding size of a symbol is inversely proportional  to the logarithm
> of
> > >> the
> > >> > frequency of the symbol?.
> > >>
> > >> Not quite. Traditional shannon entropy uses probability of a symbol,
> > >> whereas algorithmic complexity uses the probability of the whole
> > >> sequence. Only if the symbols are independently distributed are the
> > >> two the same. Usually, in most messages, the symbols are not id.
> > >>
> > >> >
> > >> > What is exactly the comp measure problem?
> > >>
> > >> A UD generates and executes all programs, many of which are
> > >> equivalent. So some programs are represented more than others. The
> > >> COMP measure is a function over all programs that captures this
> > >> variation in program representation.
> > >>
> > >> Why should this be unique, independent of UD, or the universal Turing
> > >> machine it runs on? Because the UD executes every other UD, as well as
> > >> itself, the measure will be a limit over contributions from all UDs.
> > >>
> > >> Cheers
> > >> --
> > >>
> > >>
> > >>
> 
> > >> Prof Russell Standish  Phone 0425 253119 (mobile)
> > >> Principal, High Performance Coders
> > >> Visiting Professor of Mathematics  hpco...@hpcoders.com.au
> > >> University of New South Wales  http://www.hpcoders.com.au
> > >>
> > >>
> 
> > >>

Re: Code length = probability distribution

2012-10-21 Thread Alberto G. Corona
2012/10/22 Russell Standish 

> On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
> > Hi Russell,
> >
> > How does Schmidhuber consider the physicality of resources?
> >
> > --
> > Onward!
> >
> > Stephen
>
> No. The concept doesn't enter consideration. What he considers is that
> the Great Programmer has finite (or perhaps bounded resources), which
> gives an additional boost to algorithms that run efficiently.
>
That's the problem that, I insist, has a natural solution: considering the
computational needs of living beings under natural selection, without
resorting to an everything-theory of reality based on a UD algorithm, like
Schmidhuber's.

> --
>
>
> 
> Prof Russell Standish  Phone 0425 253119 (mobile)
> Principal, High Performance Coders
> Visiting Professor of Mathematics  hpco...@hpcoders.com.au
> University of New South Wales  http://www.hpcoders.com.au
>
> 
>


-- 
Alberto.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Code length = probability distribution

2012-10-21 Thread Russell Standish
On Sun, Oct 21, 2012 at 11:38:46PM -0400, Stephen P. King wrote:
> Hi Russell,
> 
> How does Schmidhuber consider the physicality of resources?
> 
> -- 
> Onward!
> 
> Stephen

He doesn't; the concept doesn't enter into consideration. What he considers is that
the Great Programmer has finite (or perhaps bounded) resources, which
gives an additional boost to algorithms that run efficiently.

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: Code length = probability distribution

2012-10-21 Thread Stephen P. King

On 10/21/2012 3:48 AM, Russell Standish wrote:

>  I worry a bit about the use of the word "all" in your remark.
>"All" is too big, usually, to have a single constructable measure!
>Why not consider some large enough but finite collections of
>programs, such as what would be captured by the idea of an
>equivalence class of programs that satisfy some arbitrary parameters
>(such as solving a finite NP-hard problem) given some large but
>finite quantity of resources?
>  Of course this goes against the grain of Bruno's theology, but
>maybe that is what is required to solve the measure problem. :-) I
>find myself being won over by the finitists, such as Norman J.
>Wildberger!

This may well turn out to be the case. Also Juergen Schmidhuber has
investigated this under the rubric of the "speed prior".

Hi Russell,

How does Schmidhuber consider the physicality of resources?

--
Onward!

Stephen





Re: Code length = probability distribution

2012-10-21 Thread Russell Standish
On Sun, Oct 21, 2012 at 10:03:48PM +0200, Alberto G. Corona  wrote:
> This does not imply a reality created by a UD algorithm. It may be a
> mathematical universe, which is a superset of the computable universes. The
> measure problem in the UD algorithm translates to the problem of the
> effectiveness of Occam's Razor, or the problem of the apparent simplicity
> of the physical laws, or, in other words, their low Kolmogorov complexity,
> which Solomonoff captures in his theory of inductive inference.
> 

I don't know. Around here, that problem is called the "White Rabbit
problem", and IMHO is solved by Solomonoff's theory of inductive
inference (with appropriate modern nuances). Not everybody agrees,
however.

The UD measure problem (as I understand it), is a computational
question of what the measure actually is. Whatever it is, it will
satisfy the properties of Solomonoff's universal prior, so will solve
the WR problem. However, it is conjectured that the UD measure will
differ in some measurable way from a universal prior obtained by
postulating a uniform measure over the set of all infinite length
binary strings treated as binary expansions of real numbers (see my
paper 'Why Occam's Razor' as an example of work doing exactly that). If
measurable, we have a means of selecting between different ensemble
everything theories by experiment.

It is still very much an open problem.


> 2012/10/21 Alberto G. Corona 
> 
> > Ok
> >
> > I don't remember the reason why Solomonoff reduces the probability of the
> > programs according to their length in his theory of inductive inference. I
> > read it some time ago. Solomonoff describes in his paper on inductive
> > inference a clearer and more direct solution for the measure problem, but
> > I thought that it was somehow ad hoc.
> >
> > I thought some time ago about the Solomonoff solution to the induction
> > problem, along these lines: living beings have to find, by evolution, at
> > least partial and approximate inductive solutions in order to survive in
> > their environment. This imposes a restriction on the laws of a local
> > universe with life: it demands a low Kolmogorov complexity for the
> > *macroscopical* laws. Otherwise these laws would not be discoverable, no
> > induction would be possible, so the living beings could not anticipate
> > outcomes and they would not survive.
> >
> > Solomonoff is a living being in a local universe, so shorter programs are
> > more probable and add more weight for induction.
> >
> > I'm just thinking aloud. I will look again at Solomonoff's inductive
> > inference. It was a great moment when I read it the first time.
> >
> >
> > 2012/10/20 Russell Standish 
> >
> >> On Sat, Oct 20, 2012 at 09:16:54PM +0200, Alberto G. Corona  wrote:
> >> > This is not a consequence of the shannon optimum coding , in which the
> >> > coding size of a symbol is inversely proportional  to the logaritm of
> >> the
> >> > frequency of the symbol?.
> >>
> >> Not quite. Traditional shannon entropy uses probability of a symbol,
> >> whereas algorithmic complexity uses the probability of the whole
> >> sequence. Only if the symbols are independently distributed are the
> >> two the same. Usually, in most messages, the symbols are not id.
> >>
> >> >
> >> > What is exactly the comp measure problem?
> >>
> >> A UD generates and executes all programs, many of which are
> >> equivalent. So some programs are represented more than others. The
> >> COMP measure is a function over all programs that captures this
> >> variation in program representation.
> >>
> >> Why should this be unique, independent of UD, or the universal Turing
> >> machine it runs on? Because the UD executes every other UD, as well as
> >> itself, the measure will be a limit over contributions from all UDs.
> >>
> >> Cheers
> >> --
> >>
> >>
> >> 
> >> Prof Russell Standish  Phone 0425 253119 (mobile)
> >> Principal, High Performance Coders
> >> Visiting Professor of Mathematics  hpco...@hpcoders.com.au
> >> University of New South Wales  http://www.hpcoders.com.au
> >>
> >> 
> >>
> >>
> >>
> >
> >
> > --
> > Alberto.
> >
> 
> 
> 
> -- 
> Alberto.
> 

Re: Code length = probability distribution

2012-10-21 Thread Alberto G. Corona
This does not imply a reality created by a UD algorithm. It may be a
mathematical universe, which is a superset of the computable universes. The
measure problem in the UD algorithm translates to the problem of the
effectiveness of Occam's Razor, or the problem of the apparent simplicity
of the physical laws, or, in other words, their low Kolmogorov complexity,
which Solomonoff captures in his theory of inductive inference.

2012/10/21 Alberto G. Corona 

> Ok
>
> I don't remember the reason why Solomonoff reduces the probability of the
> programs according to their length in his theory of inductive inference. I
> read it some time ago. Solomonoff describes in his paper on inductive
> inference a clearer and more direct solution for the measure problem, but I
> thought that it was somehow ad hoc.
>
> I thought some time ago about the Solomonoff solution to the induction
> problem, along these lines: living beings have to find, by evolution, at
> least partial and approximate inductive solutions in order to survive in
> their environment. This imposes a restriction on the laws of a local
> universe with life: it demands a low Kolmogorov complexity for the
> *macroscopical* laws. Otherwise these laws would not be discoverable, no
> induction would be possible, so the living beings could not anticipate
> outcomes and they would not survive.
>
> Solomonoff is a living being in a local universe, so shorter programs are
> more probable and add more weight for induction.
>
> I'm just thinking aloud. I will look again at Solomonoff's inductive
> inference. It was a great moment when I read it the first time.
>
>
> 2012/10/20 Russell Standish 
>
>> On Sat, Oct 20, 2012 at 09:16:54PM +0200, Alberto G. Corona  wrote:
>> > This is not a consequence of the shannon optimum coding , in which the
>> > coding size of a symbol is inversely proportional  to the logaritm of
>> the
>> > frequency of the symbol?.
>>
>> Not quite. Traditional shannon entropy uses probability of a symbol,
>> whereas algorithmic complexity uses the probability of the whole
>> sequence. Only if the symbols are independently distributed are the
>> two the same. Usually, in most messages, the symbols are not id.
>>
>> >
>> > What is exactly the comp measure problem?
>>
>> A UD generates and executes all programs, many of which are
>> equivalent. So some programs are represented more than others. The
>> COMP measure is a function over all programs that captures this
>> variation in program representation.
>>
>> Why should this be unique, independent of UD, or the universal Turing
>> machine it runs on? Because the UD executes every other UD, as well as
>> itself, the measure will be a limit over contributions from all UDs.
>>
>> Cheers
>> --
>>
>>
>> 
>> Prof Russell Standish  Phone 0425 253119 (mobile)
>> Principal, High Performance Coders
>> Visiting Professor of Mathematics  hpco...@hpcoders.com.au
>> University of New South Wales  http://www.hpcoders.com.au
>>
>> 
>>
>>
>>
>
>
> --
> Alberto.
>



-- 
Alberto.




Re: Code length = probability distribution

2012-10-21 Thread Stephen P. King

On 10/21/2012 3:48 AM, Russell Standish wrote:

On Sat, Oct 20, 2012 at 07:07:14PM -0400, Stephen P. King wrote:

On 10/20/2012 5:45 PM, Russell Standish wrote:

A UD generates and executes all programs, many of which are
equivalent. So some programs are represented more than others. The
COMP measure is a function over all programs that captures this
variation in program representation.

Why should this be unique, independent of UD, or the universal Turing
machine it runs on? Because the UD executes every other UD, as well as
itself, the measure will be a limit over contributions from all UDs.

Hi Russell,

 I worry a bit about the use of the word "all" in your remark.
"All" is too big, usually, to have a single constructable measure!
Why not consider some large enough but finite collections of
programs, such as what would be captured by the idea of an
equivalence class of programs that satisfy some arbitrary parameters
(such as solving a finite NP-hard problem) given some large but
finite quantity of resources?
 Of course this goes against the grain of Bruno's theology, but
maybe that is what is required to solve the measure problem. :-) I
find myself being won over by the finitists, such as Norman J.
Wildberger!

This may well turn out to be the case. Also Juergen Schmidhuber has
investigated this under the rubric of the "speed prior".

I should have a chat with Norm about that sometime. Maybe if I see him
at a Christmas party. I didn't realise he was a finitist. I knew he
has an interesting take on how trigonometry should be done.

Cheers


Hi Russell,

I will look at Juergen's stuff again. ;-)

--
Onward!

Stephen





Re: Code length = probability distribution

2012-10-21 Thread Russell Standish
On Sat, Oct 20, 2012 at 07:07:14PM -0400, Stephen P. King wrote:
> On 10/20/2012 5:45 PM, Russell Standish wrote:
> >A UD generates and executes all programs, many of which are
> >equivalent. So some programs are represented more than others. The
> >COMP measure is a function over all programs that captures this
> >variation in program representation.
> >
> >Why should this be unique, independent of UD, or the universal Turing
> >machine it runs on? Because the UD executes every other UD, as well as
> >itself, the measure will be a limit over contributions from all UDs.
> Hi Russell,
> 
> I worry a bit about the use of the word "all" in your remark.
> "All" is too big, usually, to have a single constructable measure!
> Why not consider some large enough but finite collections of
> programs, such as what would be captured by the idea of an
> equivalence class of programs that satisfy some arbitrary parameters
> (such as solving a finite NP-hard problem) given some large but
> finite quantity of resources?
> Of course this goes against the grain of Bruno's theology, but
> maybe that is what is required to solve the measure problem. :-) I
> find myself being won over by the finitists, such as Norman J.
> Wildberger!

This may well turn out to be the case. Also Juergen Schmidhuber has
investigated this under the rubric of the "speed prior".

I should have a chat with Norm about that sometime. Maybe if I see him
at a Christmas party. I didn't realise he was a finitist. I knew he
has an interesting take on how trigonometry should be done.

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: Code length = probability distribution

2012-10-20 Thread Stephen P. King

On 10/20/2012 5:45 PM, Russell Standish wrote:

A UD generates and executes all programs, many of which are
equivalent. So some programs are represented more than others. The
COMP measure is a function over all programs that captures this
variation in program representation.

Why should this be unique, independent of UD, or the universal Turing
machine it runs on? Because the UD executes every other UD, as well as
itself, the measure will be a limit over contributions from all UDs.

Hi Russell,

I worry a bit about the use of the word "all" in your remark. "All" 
is too big, usually, to have a single constructable measure! Why not 
consider some large enough but finite collections of programs, such as 
what would be captured by the idea of an equivalence class of programs 
that satisfy some arbitrary parameters (such as solving a finite NP-hard 
problem) given some large but finite quantity of resources?
Of course this goes against the grain of Bruno's theology, but 
maybe that is what is required to solve the measure problem. :-) I find 
myself being won over by the finitists, such as Norman J. Wildberger!


--
Onward!

Stephen





Re: Code length = probability distribution

2012-10-20 Thread Russell Standish
On Sat, Oct 20, 2012 at 09:16:54PM +0200, Alberto G. Corona  wrote:
> Is this not a consequence of the Shannon optimum coding, in which the
> coding size of a symbol is proportional to the logarithm of the inverse
> of the frequency of the symbol?

Not quite. Traditional Shannon entropy uses the probability of a symbol,
whereas algorithmic complexity uses the probability of the whole
sequence. Only if the symbols are independently distributed are the
two the same. Usually, in most messages, the symbols are not.
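A minimal sketch of this distinction (illustrative only: zlib is used as a crude, upper-bound stand-in for the uncomputable algorithmic complexity). A string whose symbols are 50/50 has maximal per-symbol Shannon entropy, yet a whole-sequence code can be far shorter when the symbols are not independent:

```python
import math
import zlib
from collections import Counter

def symbol_entropy_bits(s):
    """Empirical Shannon entropy (bits/symbol) times length: the cost of
    an optimal code that treats symbols as independent draws."""
    n = len(s)
    counts = Counter(s)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h * n

def compressed_bits(s):
    """Crude stand-in for algorithmic complexity: bit-length of a zlib
    compression of the whole sequence (an upper bound only)."""
    return 8 * len(zlib.compress(s.encode()))

structured = "HT" * 500  # symbols are exactly 50/50, but the sequence is trivial
print(symbol_entropy_bits(structured))  # 1000.0 bits: per-symbol coding can't see the pattern
print(compressed_bits(structured))      # far fewer bits: whole-sequence regularity is captured
```

The gap between the two numbers is exactly the correlation structure that per-symbol Shannon coding throws away.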

> 
> What exactly is the COMP measure problem?

A UD generates and executes all programs, many of which are
equivalent. So some programs are represented more than others. The
COMP measure is a function over all programs that captures this
variation in program representation.

Why should this be unique, independent of UD, or the universal Turing
machine it runs on? Because the UD executes every other UD, as well as
itself, the measure will be a limit over contributions from all UDs.
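The dovetailing scheme described above can be sketched in a few lines. This is a toy model only: integer "programs" and a trivial step function stand in for a real enumeration of programs and a universal machine.

```python
def step(prog, state):
    """Toy one-step interpreter: program `prog` just increments its state.
    In a real UD this would be one step of a universal Turing machine
    applied to the prog-th program."""
    return state + 1

def dovetail(rounds):
    """Round r: introduce program r, then advance every program seen so far
    by one step. Every program thus receives unboundedly many steps, and no
    non-halting program ever blocks the others."""
    states = []                       # states[i] = current state of program i
    trace = []                        # (program, state-after-step) log
    for r in range(rounds):
        states.append(0)              # schedule the newly generated program r
        for i in range(len(states)):  # one step for each scheduled program
            states[i] = step(i, states[i])
            trace.append((i, states[i]))
    return trace

print(dovetail(3))  # [(0, 1), (0, 2), (1, 1), (0, 3), (1, 2), (2, 1)]
```

Note how earlier programs accumulate steps faster than later ones; with equivalent programs appearing at many points in the enumeration, counting their occurrences in such a trace is one intuitive picture of where a measure over programs comes from.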

Cheers
-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: Code length = probability distribution

2012-10-20 Thread Alberto G. Corona
Is this not a consequence of the Shannon optimum coding, in which the
coding size of a symbol is proportional to the logarithm of the inverse
of the frequency of the symbol?

What exactly is the COMP measure problem?

2012/10/19 Stephen P. King 

> Hi,
>
> I was looking up a definition and found the following:
> http://en.wikipedia.org/wiki/Minimum_description_length
> "Central to MDL theory is the one-to-one correspondence between code
> length functions and probability distributions. (This follows from the
> Kraft-McMillan inequality.) For any probability distribution P, it is
> possible to construct a code C such that the length (in bits) of C(x) is
> equal to -log2(P(x)); this code minimizes the expected code length. Vice
> versa, given a code C, one can construct a probability distribution P such
> that the same holds. (Rounding issues are ignored here.) In other words,
> searching for an efficient code reduces to searching for a good probability
> distribution, and vice versa."
>
> Is this true? Would it be an approach to the measure problem of COMP?
>
> --
> Onward!
>
> Stephen
>
>
>


-- 
Alberto.




Re: Code length = probability distribution

2012-10-19 Thread Russell Standish
On Fri, Oct 19, 2012 at 02:03:27PM -0700, meekerdb wrote:
> On 10/19/2012 10:54 AM, Stephen P. King wrote:
> >Hi,
> >
> >I was looking up a definition and found the following:
> >http://en.wikipedia.org/wiki/Minimum_description_length
> >"Central to MDL theory is the one-to-one correspondence between
> >code length functions and probability distributions. (This follows
> >from the Kraft-McMillan inequality.) For any probability
> >distribution , it is possible to construct a code  such that the
> >length (in bits) of  is equal to ; this code minimizes the
> >expected code length. Vice versa, given a code , one can construct
> >a probability distribution such that the same holds. (Rounding
> >issues are ignored here.) In other words, searching for an
> >efficient code reduces to searching for a good probability
> >distribution, and vice versa."
> >
> >Is this true? Would it be an approach to the measure problem of COMP?
> >
> 
> Although the display math didn't show up above, I looked at the
> site.  I'm not clear on what x refers to in P(x) and C(x).  Is it
> the data being fitted, so in the coin tossing example x is a
> sequence, e.g. x={HHTTHHTHTT} and P(x) is the probability of various
> possible sequences, i.e. ignoring order but not length, it's the
> binomial distribution?  Then C(x) would be the length of the code to
> produce/describe a particular sequence x?  So a sequence
> {HHH} would have a shorter code than {TTHHTHTHTHTHH} - but
> ex hypothesi they both have the same probability?  What am I
> missing?
> 
> Brent
> 

Assuming the notation of Li & Vitanyi,

C(x) would be prefix-free algorithmic complexity, which is closely
related to Kolmogorov complexity K(x) (length of shortest program):

  K(x) + K_1 \leq C(x) \leq K(x) + K_2

P(x) would be the universal prior probability, discovered by
Solomonoff, and revised by Levin. It is defined by

P(x) = 2 ^ {-C(x)}

where we've assumed that C(x) is being measured in bits.
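The weighting P(x) = 2^{-C(x)} can be illustrated numerically if one accepts a real compressor as a stand-in for C(x). This is only a sketch: zlib gives a computable upper bound on the complexity, so the resulting weights are merely lower bounds on the flavor of the universal prior, not the prior itself.

```python
import zlib

def approx_complexity_bits(x: bytes) -> int:
    """Computable upper bound on C(x), in bits, via a real compressor.
    The true prefix complexity is uncomputable; zlib only bounds it
    from above."""
    return 8 * len(zlib.compress(x))

def approx_prior(x: bytes) -> float:
    """Illustrative weight 2**(-C(x)), with the compressor bound standing
    in for C(x)."""
    return 2.0 ** (-approx_complexity_bits(x))

regular = b"ab" * 100                                       # highly compressible
irregular = bytes((i * 89 + 41) % 251 for i in range(200))  # no short pattern for zlib

print(approx_complexity_bits(regular) < approx_complexity_bits(irregular))  # True
print(approx_prior(regular) > approx_prior(irregular))  # True: simpler strings weigh more
```

The qualitative point survives the crudeness of the compressor: shorter descriptions translate into exponentially larger prior weight.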

I haven't read the MDL stuff in Li & Vitanyi (too many other things,
and L&V is a "fierce book" (one of my friends' description), requiring
much study).

But I think the connection between probability distribution and
shortest length is made clear above.

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: Code length = probability distribution

2012-10-19 Thread meekerdb

On 10/19/2012 10:54 AM, Stephen P. King wrote:

Hi,

I was looking up a definition and found the following:
http://en.wikipedia.org/wiki/Minimum_description_length
"Central to MDL theory is the one-to-one correspondence between code length functions 
and probability distributions. (This follows from the Kraft-McMillan inequality.) For 
any probability distribution , it is possible to construct a code  such that the length 
(in bits) of  is equal to ; this code minimizes the expected code length. Vice versa, 
given a code , one can construct a probability distribution such that the same holds. 
(Rounding issues are ignored here.) In other words, searching for an efficient code 
reduces to searching for a good probability distribution, and vice versa."


Is this true? Would it be an approach to the measure problem of COMP?



Although the display math didn't show up above, I looked at the site.  I'm not clear on 
what x refers to in P(x) and C(x).  Is it the data being fitted, so in the coin tossing 
example x is a sequence, e.g. x={HHTTHHTHTT} and P(x) is the probability of various 
possible sequences, i.e. ignoring order but not length, it's the binomial distribution?  
Then C(x) would be the length of the code to produce/describe a particular sequence x?  So 
a sequence {HHH} would have a shorter code than {TTHHTHTHTHTHH} - but ex hypothesi 
they both have the same probability?  What am I missing?


Brent
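Brent's puzzle can be made concrete with a toy check (zlib again as a crude stand-in for the ideal whole-sequence code): under the iid fair-coin model every particular length-n sequence has the same probability 2^-n and hence the same n-bit Shannon code length; it is only a whole-sequence (algorithmic) code, answering to a different probability model, that separates the regular sequence from the typical one.

```python
import random
import zlib

random.seed(0)
n = 1000
s1 = "H" * n                                         # all heads
s2 = "".join(random.choice("HT") for _ in range(n))  # a "typical" sequence

# Under the iid fair-coin model both sequences have probability 2**-n,
# so the matched Shannon code assigns both the same n-bit code length.
iid_code_bits = n

# A whole-sequence code can exploit s1's regularity but not s2's:
c1 = len(zlib.compress(s1.encode()))
c2 = len(zlib.compress(s2.encode()))
print(c1 < c2)  # True: the regular sequence gets a much shorter description
```

So both statements hold at once: equal probability (and equal code length) under the binomial model, unequal code length under the algorithmic model; the MDL correspondence ties each code to its own distribution.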
