Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-06 Thread Jim Bromer

 Jim: So, did Solomonoff's original idea involve randomizing whether the
 next bit would be a 1 or a 0 in the program?

Abram: Yep.
I meant: did Solomonoff's original idea involve randomizing whether the next
bit would be a 1 or a 0 in the programs that are originally used to produce
the *prior probabilities*?  I have not been able to find any evidence that it
did.
I thought that my question was clear but on second thought I guess it
wasn't. I think that the part about the coin flips was only a method to
express that he was interested in the probability that a particular string
would be produced from all possible programs, so that when actually testing
the prior probability of a particular string the program that was to be run
would have to be randomly generated.
Jim Bromer




On Wed, Aug 4, 2010 at 10:27 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

  Your function may be convergent but it is not a probability.


 True! All the possibilities sum to less than 1. There are ways of
 addressing this (ie, multiply by a normalizing constant which must also be
 approximated in a convergent manner), but for the most part adherents of
 Solomonoff induction don't worry about this too much. What we care about,
 mostly, is comparing different hypotheses to decide which to favor. The
 normalizing constant doesn't help us here, so it usually isn't mentioned.


 You said that Solomonoff's original construction involved flipping a coin
 for the next bit.  What good does that do?


 Your intuition is that running totally random programs to get predictions
 will just produce garbage, and that is fine. The idea of Solomonoff
 induction, though, is that it will produce systematically less garbage than
 just flipping coins to get predictions. Most of the garbage programs will be
 knocked out of the running by the data itself. This is supposed to be the
 least garbage we can manage without domain-specific knowledge.

 This is backed up with the proof of dominance, which we haven't talked
 about yet, but which is really the key argument for the optimality of
 Solomonoff induction.


 And how does that prove that his original idea was convergent?


 The proofs of equivalence between all the different formulations of
 Solomonoff induction are something I haven't cared to look into too deeply.

 Since his idea is incomputable, there are no algorithms that can be run to
 demonstrate what he was talking about so the basic idea is papered with all
 sorts of unverifiable approximations.


 I gave you a proof of convergence for one such approximation, and if you
 wish I can modify it to include a normalizing constant to ensure that it is
 a probability measure. It would be helpful to me if your criticisms were
 more specific to that proof.

 So, did Solomonoff's original idea involve randomizing whether the next bit
 would be a 1 or a 0 in the program?


 Yep.

 Even ignoring the halting problem what kind of result would that give?


 Well, the general idea is this. An even distribution intuitively represents
 lack of knowledge. An even distribution over possible data fails horribly,
 however: it amounts to predicting white noise. We want to represent the idea
 that we are
 very ignorant of what the data might be, but not *that* ignorant. To capture
 the idea of regularity, ie, similarity between past and future, we instead
 take an even distribution over *descriptions* of the data. (The distribution
 in the 2nd version of solomonoff induction that I gave, the one in which the
 space of possible programs is represented as a continuum, is an even
 distribution.) This appears to provide a good amount of regularity without
 too much.

 --Abram

 On Wed, Aug 4, 2010 at 8:10 PM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
 Thanks for the explanation.  I still don't get it.  Your function may be
 convergent but it is not a probability.  You said that Solomonoff's original
 construction involved flipping a coin for the next bit.  What good does that
 do?  And how does that prove that his original idea was convergent?  The
 thing that is wrong with these explanations is that they are not coherent.
 Since his idea is incomputable, there are no algorithms that can be run to
 demonstrate what he was talking about so the basic idea is papered with all
 sorts of unverifiable approximations.

 So, did Solomonoff's original idea involve randomizing whether the next
 bit would be a 1 or a 0 in the program?  Even ignoring the halting
 problem what kind of result would that give?  Have you ever solved the
 problem for some strings and have you ever tested the solutions using a
 simulation?

 Jim Bromer

 On Mon, Aug 2, 2010 at 5:12 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Interestingly, the formalization of Solomonoff induction I'm most
 familiar with uses a construction that relates the space of programs with
 the real numbers just as you say. This formulation may be due to Solomonoff,
 or 

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-06 Thread Jim Bromer
I meant:
Did Solomonoff's original idea use randomization to determine the bits of
the programs that are used to produce the *prior probabilities*?  I think
that the answer to that is obviously no.  The randomization of the next bit
would be used in the test of the prior probabilities, as is done with random
sampling.  He probably found that students who had some familiarity with
statistics would initially assume that the prior probability was based on
some subset of possible programs as would be expected from a typical sample,
so he gave this statistics type of definition to emphasize the extent of
what he had in mind.

I asked this question just to make sure that I understood what Solomonoff
Induction was, because Abram had made some statement indicating that I
really didn't know.  Remember, this particular branch of the discussion was
originally centered around the question of whether Solomonoff
Induction would be convergent, even given a way around the incomputability
of finding only those programs that halted.  So while the random testing of
the prior probabilities is of interest to me, I wanted to make sure that
there is no evidence that Solomonoff Induction is convergent. I am not being
petty about this, but I also needed to make sure that I understood what
Solomonoff Induction is.

I am interested in hearing your ideas about your variation of
Solomonoff Induction because your convergent series, in this context, was
interesting.
Jim Bromer

On Fri, Aug 6, 2010 at 6:50 AM, Jim Bromer jimbro...@gmail.com wrote:

 Jim: So, did Solomonoff's original idea involve randomizing whether the
 next bit would be a 1 or a 0 in the program?

 Abram: Yep.
 I meant: did Solomonoff's original idea involve randomizing whether the
 next bit would be a 1 or a 0 in the programs that are originally used to
 produce the *prior probabilities*?  I have not been able to find any
 evidence that it did.
 I thought that my question was clear but on second thought I guess it
 wasn't. I think that the part about the coin flips was only a method to
 express that he was interested in the probability that a particular string
 would be produced from all possible programs, so that when actually testing
 the prior probability of a particular string the program that was to be run
 would have to be randomly generated.
 Jim Bromer




 On Wed, Aug 4, 2010 at 10:27 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

  Your function may be convergent but it is not a probability.


 True! All the possibilities sum to less than 1. There are ways of
 addressing this (ie, multiply by a normalizing constant which must also be
 approximated in a convergent manner), but for the most part adherents of
 Solomonoff induction don't worry about this too much. What we care about,
 mostly, is comparing different hypotheses to decide which to favor. The
 normalizing constant doesn't help us here, so it usually isn't mentioned.


 You said that Solomonoff's original construction involved flipping a coin
 for the next bit.  What good does that do?


 Your intuition is that running totally random programs to get predictions
 will just produce garbage, and that is fine. The idea of Solomonoff
 induction, though, is that it will produce systematically less garbage than
 just flipping coins to get predictions. Most of the garbage programs will be
 knocked out of the running by the data itself. This is supposed to be the
 least garbage we can manage without domain-specific knowledge.

 This is backed up with the proof of dominance, which we haven't talked
 about yet, but which is really the key argument for the optimality of
 Solomonoff induction.


 And how does that prove that his original idea was convergent?


 The proofs of equivalence between all the different formulations of
 Solomonoff induction are something I haven't cared to look into too deeply.

 Since his idea is incomputable, there are no algorithms that can be run to
 demonstrate what he was talking about so the basic idea is papered with all
 sorts of unverifiable approximations.


 I gave you a proof of convergence for one such approximation, and if you
 wish I can modify it to include a normalizing constant to ensure that it is
 a probability measure. It would be helpful to me if your criticisms were
 more specific to that proof.

 So, did Solomonoff's original idea involve randomizing whether the next
 bit would be a 1 or a 0 in the program?


 Yep.

 Even ignoring the halting problem what kind of result would that give?


 Well, the general idea is this. An even distribution intuitively
 represents lack of knowledge. An even distribution over possible data fails
 horribly, however: it amounts to predicting white noise. We want to represent the idea
 that we are very ignorant of what the data might be, but not *that*
 ignorant. To capture the idea of regularity, ie, similarity between past and
 future, we instead take an even distribution over *descriptions* 

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-06 Thread Matt Mahoney
Jim, see http://www.scholarpedia.org/article/Algorithmic_probability
I think this answers your questions.

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Fri, August 6, 2010 2:18:09 PM
Subject: Re: [agi] Comments On My Skepticism of Solomonoff Induction


I meant:
Did Solomonoff's original idea use randomization to determine the bits of the 
programs that are used to produce the prior probabilities?  I think that the 
answer to that is obviously no.  The randomization of the next bit would be
used in the test of the prior probabilities, as is done with random sampling.  He
probably found that students who had some familiarity with statistics would 
initially assume that the prior probability was based on some subset of 
possible 
programs as would be expected from a typical sample, so he gave this statistics 
type of definition to emphasize the extent of what he had in mind.
 
I asked this question just to make sure that I understood what Solomonoff 
Induction was, because Abram had made some statement indicating that I really 
didn't know.  Remember, this particular branch of the discussion was originally 
centered around the question of whether Solomonoff Induction would 
be convergent, even given a way around the incomputability of finding only 
those 
programs that halted.  So while the random testing of the prior probabilities 
is 
of interest to me, I wanted to make sure that there is no evidence that 
Solomonoff Induction is convergent. I am not being petty about this, but I also 
needed to make sure that I understood what Solomonoff Induction is.
 
I am interested in hearing your ideas about your variation of 
Solomonoff Induction because your convergent series, in this context, was 
interesting.
Jim Bromer


On Fri, Aug 6, 2010 at 6:50 AM, Jim Bromer jimbro...@gmail.com wrote:

Jim: So, did Solomonoff's original idea involve randomizing whether the next 
bit 
would be a 1 or a 0 in the program? 

Abram: Yep. 

I meant: did Solomonoff's original idea involve randomizing whether the next
bit would be a 1 or a 0 in the programs that are originally used to produce
the prior probabilities?  I have not been able to find any evidence that it
did.  I thought that my question was
clear but on second thought I guess it wasn't. I think that the part about the 
coin flips was only a method to express that he was interested in the 
probability that a particular string would be produced from all possible 
programs, so that when actually testing the prior probability of a particular 
string the program that was to be run would have to be randomly generated.
Jim Bromer
 
 

 
On Wed, Aug 4, 2010 at 10:27 PM, Abram Demski abramdem...@gmail.com wrote:

Jim,


Your function may be convergent but it is not a probability. 


True! All the possibilities sum to less than 1. There are ways of addressing 
this (ie, multiply by a normalizing constant which must also be approximated 
in 
a convergent manner), but for the most part adherents of Solomonoff induction 
don't worry about this too much. What we care about, mostly, is comparing 
different hypotheses to decide which to favor. The normalizing constant doesn't 
help us here, so it usually isn't mentioned. 




You said that Solomonoff's original construction involved flipping a coin for 
the next bit.  What good does that do?

Your intuition is that running totally random programs to get predictions will 
just produce garbage, and that is fine. The idea of Solomonoff induction, 
though, is that it will produce systematically less garbage than just flipping 
coins to get predictions. Most of the garbage programs will be knocked out of 
the running by the data itself. This is supposed to be the least garbage we 
can 
manage without domain-specific knowledge.

This is backed up with the proof of dominance, which we haven't talked about 
yet, but which is really the key argument for the optimality of Solomonoff 
induction. 




And how does that prove that his original idea was convergent?

The proofs of equivalence between all the different formulations of Solomonoff 
induction are something I haven't cared to look into too deeply. 




Since his idea is incomputable, there are no algorithms that can be run to 
demonstrate what he was talking about so the basic idea is papered with all 
sorts of unverifiable approximations.

I gave you a proof of convergence for one such approximation, and if you wish 
I 
can modify it to include a normalizing constant to ensure that it is a 
probability measure. It would be helpful to me if your criticisms were more 
specific to that proof.



So, did Solomonoff's original idea involve randomizing whether the next bit 
would be a 1 or a 0 in the program? 



Yep. 



Even ignoring the halting problem what kind of result would that give?


Well, the general idea is this. An even

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-06 Thread Abram Demski
Jim,

From the article Matt linked to, specifically see the line:

As p is itself a binary string, we can define the discrete universal a priori
probability, m(x), to be the probability that the output of a universal prefix
Turing machine U is x when provided with fair coin flips on the input tape.
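
In the article's notation this is the standard algorithmic-probability sum
(written out here for reference; l(p) denotes the length of program p):

    m(x) = \sum_{p : U(p) = x} 2^{-l(p)}

Each halting program p on which U outputs x contributes exactly 2^{-l(p)},
the probability of obtaining p by fair coin flips.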

--Abram

On Fri, Aug 6, 2010 at 3:38 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Jim, see http://www.scholarpedia.org/article/Algorithmic_probability
 I think this answers your questions.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Fri, August 6, 2010 2:18:09 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I meant:
 Did Solomonoff's original idea use randomization to determine the bits of
 the programs that are used to produce the *prior probabilities*?  I think
 that the answer to that is obviously no.  The randomization of the next bit
 would be used in the test of the prior probabilities, as is done with random
 sampling.  He probably found that students who had some familiarity with
 statistics would initially assume that the prior probability was based on
 some subset of possible programs as would be expected from a typical sample,
 so he gave this statistics type of definition to emphasize the extent of
 what he had in mind.

 I asked this question just to make sure that I understood what Solomonoff
 Induction was, because Abram had made some statement indicating that I
 really didn't know.  Remember, this particular branch of the discussion was
 originally centered around the question of whether Solomonoff
 Induction would be convergent, even given a way around the incomputability
 of finding only those programs that halted.  So while the random testing of
 the prior probabilities is of interest to me, I wanted to make sure that
 there is no evidence that Solomonoff Induction is convergent. I am not being
 petty about this, but I also needed to make sure that I understood what
 Solomonoff Induction is.

 I am interested in hearing your ideas about your variation of
 Solomonoff Induction because your convergent series, in this context, was
 interesting.
 Jim Bromer

 On Fri, Aug 6, 2010 at 6:50 AM, Jim Bromer jimbro...@gmail.com wrote:

 Jim: So, did Solomonoff's original idea involve randomizing whether the
 next bit would be a 1 or a 0 in the program?

 Abram: Yep.
 I meant: did Solomonoff's original idea involve randomizing whether the
 next bit would be a 1 or a 0 in the programs that are originally used to
 produce the *prior probabilities*?  I have not been able to find any
 evidence that it did.
 I thought that my question was clear but on second thought I guess it
 wasn't. I think that the part about the coin flips was only a method to
 express that he was interested in the probability that a particular string
 would be produced from all possible programs, so that when actually testing
 the prior probability of a particular string the program that was to be run
 would have to be randomly generated.
 Jim Bromer




 On Wed, Aug 4, 2010 at 10:27 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

  Your function may be convergent but it is not a probability.


 True! All the possibilities sum to less than 1. There are ways of
 addressing this (ie, multiply by a normalizing constant which must also be
 approximated in a convergent manner), but for the most part adherents of
 Solomonoff induction don't worry about this too much. What we care about,
 mostly, is comparing different hypotheses to decide which to favor. The
 normalizing constant doesn't help us here, so it usually isn't mentioned.


 You said that Solomonoff's original construction involved flipping a coin
 for the next bit.  What good does that do?


 Your intuition is that running totally random programs to get predictions
 will just produce garbage, and that is fine. The idea of Solomonoff
 induction, though, is that it will produce systematically less garbage than
 just flipping coins to get predictions. Most of the garbage programs will be
 knocked out of the running by the data itself. This is supposed to be the
 least garbage we can manage without domain-specific knowledge.

 This is backed up with the proof of dominance, which we haven't talked
 about yet, but which is really the key argument for the optimality of
 Solomonoff induction.


 And how does that prove that his original idea was convergent?


 The proofs of equivalence between all the different formulations of
 Solomonoff induction are something I haven't cared to look into too deeply.

 Since his idea is incomputable, there are no algorithms that can be run
 to demonstrate what he was talking about so the basic idea is papered with
 all sorts of unverifiable approximations.


 I gave you

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-04 Thread Jim Bromer
Abram,
Thanks for the explanation.  I still don't get it.  Your function may be
convergent but it is not a probability.  You said that Solomonoff's original
construction involved flipping a coin for the next bit.  What good does that
do?  And how does that prove that his original idea was convergent?  The
thing that is wrong with these explanations is that they are not coherent.
Since his idea is incomputable, there are no algorithms that can be run to
demonstrate what he was talking about so the basic idea is papered with all
sorts of unverifiable approximations.

So, did Solomonoff's original idea involve randomizing whether the next bit
would be a 1 or a 0 in the program?  Even ignoring the halting problem what
kind of result would that give?  Have you ever solved the problem for
some strings and have you ever tested the solutions using a simulation?

Jim Bromer

On Mon, Aug 2, 2010 at 5:12 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Interestingly, the formalization of Solomonoff induction I'm most familiar
 with uses a construction that relates the space of programs with the real
 numbers just as you say. This formulation may be due to Solomonoff, or
 perhaps Hutter... not sure. I re-formulated it to gloss over that in order
 to make it simpler; I'm pretty sure the version I gave is equivalent in the
 relevant aspects. However, some notes on the original construction.

 Programs are created by flipping coins to come up with the 1s and 0s. We
 are to think of it like this: whenever the computer reaches the end of the
 program and tries to continue on, we flip a coin to decide what the next bit
 of the program will be. We keep doing this for as long as the computer wants
 more bits of instruction.

 This framework makes room for infinitely long programs, but makes them
 infinitely improbable-- formally, they have probability 0. (We could alter
 the setup to allow them an infinitesimal probability.) Intuitively, this
 means that if we keep flipping a coin to tell the computer what to do,
 eventually we will either create an infinite loop-back (so the computer
 keeps executing the already-written parts of the program and never asks for
 more) or write out the HALT command. Avoiding doing one or the other
 forever is just too improbable.

 This also means all real numbers are output by some program! It just may be
 one which is infinitely long.

 However, all the programs that slip past my time bound as T increases to
 infinity will have measure 0, meaning they don't add anything to the sum.
 This means the convergence is unaffected.

 Note: in this construction, program space is *still* a well-defined entity.

 --Abram

 On Sun, Aug 1, 2010 at 9:05 PM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,

 This is a very interesting function.  I have spent a lot of time thinking
 about it.  However, I do not believe that does, in any way, prove or
 indicate that Solomonoff Induction is convergent.  I want to discuss the
 function but I need to take more time to study some stuff and to work
 various details out.  (Although I have thought a lot about it, I am writing
 this under a sense of deadline, so it may not be well composed.)



 My argument was that Solomonoff's conjecture, which was based (as far as I
 can tell) on 'all possible programs', was fundamentally flawed because the
 idea of 'all possible programs' is not a programmable definition.  All
 possible programs is a domain, not a class of programs that can be feasibly
 defined in the form of an algorithm that could 'reach' all the programs.



 The domain of all possible programs is trans-infinite just as the domain
 of irrational numbers are.  Why do I believe this?  Because if we imagine
 that infinite algorithms are computable, then we could compute irrational
 numbers.  That is, there are programs that, given infinite resources,
 could compute irrational numbers.  We can use the binomial theorem, for
 example to compute the square root of 2.  And we can use trial and error
 methods to compute the nth root of any number.  So that proves that there
 are infinite irrational numbers that can be computed by algorithms that run
 for infinity.



 So what does this have to do with Solomonoff's conjecture of all possible
 programs?  Well, if I could prove that any individual irrational number
 could be computed (with programs that ran through infinity) then I might be
 able to prove that there are trans-infinite programs.  If I could prove
 that some trans-infinite subset of irrational numbers could be computed then
 I might be able to prove that 'all possible programs' is a trans-infinite
 class.


 Now Abram said that since his sum, based on runtime and program length, is
 convergent it can prove that Solomonoff Induction is convergent.  Even
 assuming that his convergent sum method could be fixed up a little, I
 suspect that this time-length bound is misleading.  Since a Turing
 Machine allows for erasures this means that a program could 

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-04 Thread Abram Demski
Jim,

Your function may be convergent but it is not a probability.


True! All the possibilities sum to less than 1. There are ways of addressing
this (ie, multiply by a normalizing constant which must also be approximated
in a convergent manner), but for the most part adherents of Solomonoff
induction don't worry about this too much. What we care about, mostly, is
comparing different hypotheses to decide which to favor. The normalizing
constant doesn't help us here, so it usually isn't mentioned.
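
(To see why in symbols: if Z is the normalizing constant, then for any two
hypotheses h1 and h2 the ratio (P(h1)/Z) / (P(h2)/Z) = P(h1)/P(h2), so Z
cancels and never affects which hypothesis comes out ahead.)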

You said that Solomonoff's original construction involved flipping a coin
 for the next bit.  What good does that do?


Your intuition is that running totally random programs to get predictions
will just produce garbage, and that is fine. The idea of Solomonoff
induction, though, is that it will produce systematically less garbage than
just flipping coins to get predictions. Most of the garbage programs will be
knocked out of the running by the data itself. This is supposed to be the
least garbage we can manage without domain-specific knowledge.

This is backed up with the proof of dominance, which we haven't talked about
yet, but which is really the key argument for the optimality of Solomonoff
induction.

And how does that prove that his original idea was convergent?


The proofs of equivalence between all the different formulations of
Solomonoff induction are something I haven't cared to look into too deeply.

Since his idea is incomputable, there are no algorithms that can be run to
 demonstrate what he was talking about so the basic idea is papered with all
 sorts of unverifiable approximations.


I gave you a proof of convergence for one such approximation, and if you
wish I can modify it to include a normalizing constant to ensure that it is
a probability measure. It would be helpful to me if your criticisms were
more specific to that proof.

So, did Solomonoff's original idea involve randomizing whether the next bit
 would be a 1 or a 0 in the program?


Yep.

Even ignoring the halting problem what kind of result would that give?


Well, the general idea is this. An even distribution intuitively represents
lack of knowledge. An even distribution over possible data fails horribly,
however: it amounts to predicting white noise. We want to represent the idea
that we are
very ignorant of what the data might be, but not *that* ignorant. To capture
the idea of regularity, ie, similarity between past and future, we instead
take an even distribution over *descriptions* of the data. (The distribution
in the 2nd version of solomonoff induction that I gave, the one in which the
space of possible programs is represented as a continuum, is an even
distribution.) This appears to provide a good amount of regularity without
too much.

--Abram

On Wed, Aug 4, 2010 at 8:10 PM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
 Thanks for the explanation.  I still don't get it.  Your function may be
 convergent but it is not a probability.  You said that Solomonoff's original
 construction involved flipping a coin for the next bit.  What good does that
 do?  And how does that prove that his original idea was convergent?  The
 thing that is wrong with these explanations is that they are not coherent.
 Since his idea is incomputable, there are no algorithms that can be run to
 demonstrate what he was talking about so the basic idea is papered with all
 sorts of unverifiable approximations.

 So, did Solomonoff's original idea involve randomizing whether the next bit
 would be a 1 or a 0 in the program?  Even ignoring the halting problem what
 kind of result would that give?  Have you ever solved the problem for
 some strings and have you ever tested the solutions using a simulation?

 Jim Bromer

 On Mon, Aug 2, 2010 at 5:12 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Interestingly, the formalization of Solomonoff induction I'm most familiar
 with uses a construction that relates the space of programs with the real
 numbers just as you say. This formulation may be due to Solomonoff, or
 perhaps Hutter... not sure. I re-formulated it to gloss over that in order
 to make it simpler; I'm pretty sure the version I gave is equivalent in the
 relevant aspects. However, some notes on the original construction.

 Programs are created by flipping coins to come up with the 1s and 0s. We
 are to think of it like this: whenever the computer reaches the end of the
 program and tries to continue on, we flip a coin to decide what the next bit
 of the program will be. We keep doing this for as long as the computer wants
 more bits of instruction.

 This framework makes room for infinitely long programs, but makes them
 infinitely improbable-- formally, they have probability 0. (We could alter
 the setup to allow them an infinitesimal probability.) Intuitively, this
 means that if we keep flipping a coin to tell the computer what to do,
 eventually we will either create an infinite loop-back (so the computer
 keeps executing the already-written 

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Jim Bromer
I guess the trans-infinite is computable, given infinite resources.  It
doesn't make sense to me except that the infinite does not exist as a
number-like object, it is an active process of incrementation or something
like that.  End of Count.





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Jim Bromer
I see that erasure is from an alternative definition for a Turing Machine.
I am not sure if a four state Turing Machine could be used to
make Solomonoff Induction convergent.  If all programs that required working
memory greater than the length of the output string could be eliminated then
that would have an impact on convergent feasibility.





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Jim Bromer
On Mon, Aug 2, 2010 at 7:21 AM, Jim Bromer jimbro...@gmail.com wrote:

 I see that erasure is from an alternative definition for a Turing Machine.
 I am not sure if a four state Turing Machine could be used to
 make Solomonoff Induction convergent.  If all programs that required working
 memory greater than the length of the output string could be eliminated then
 that would have an impact on convergent feasibility.

But then again this is getting back to my whole thesis.  By constraining the
definition of all possible programs sufficiently, we should be left with a
definable subset of programs that could be used in actual computations.

I want to study more to try to better understand Abram's definition of a
convergent derivation of Solomonoff Induction.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-02 Thread Abram Demski
Jim,

Interestingly, the formalization of Solomonoff induction I'm most familiar
with uses a construction that relates the space of programs with the real
numbers just as you say. This formulation may be due to Solomonoff, or
perhaps Hutter... not sure. I re-formulated it to gloss over that in order
to make it simpler; I'm pretty sure the version I gave is equivalent in the
relevant aspects. However, some notes on the original construction.

Programs are created by flipping coins to come up with the 1s and 0s. We are
to think of it like this: whenever the computer reaches the end of the
program and tries to continue on, we flip a coin to decide what the next bit
of the program will be. We keep doing this for as long as the computer wants
more bits of instruction.

This framework makes room for infinitely long programs, but makes them
infinitely improbable-- formally, they have probability 0. (We could alter
the setup to allow them an infinitesimal probability.) Intuitively, this
means that if we keep flipping a coin to tell the computer what to do,
eventually we will either create an infinite loop-back (so the computer
keeps executing the already-written parts of the program and never asks for
more) or write out the HALT command. Avoiding doing one or the other
forever is just too improbable.

This also means all real numbers are output by some program! It just may be
one which is infinitely long.

However, all the programs that slip past my time bound as T increases to
infinity will have measure 0, meaning they don't add anything to the sum.
This means the convergence is unaffected.

Note: in this construction, program space is *still* a well-defined entity.
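
To make the weights concrete, here is a tiny Python simulation (illustrative
only; the 3-bit "program" is an arbitrary example): a machine that asks for
exactly n bits and then halts receives any particular n-bit program with
probability 2^-n under the coin-flip construction.

    import random

    def flip_prefix_probability(program_bits, trials=200_000):
        # Estimate the chance that fair coin flips reproduce these program
        # bits, in order, as the machine asks for them.  Analytically this
        # is 2 ** -len(program_bits).
        hits = sum(
            all(random.getrandbits(1) == b for b in program_bits)
            for _ in range(trials)
        )
        return hits / trials

    print(flip_prefix_probability([1, 0, 1]))  # close to 2 ** -3 = 0.125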

--Abram

On Sun, Aug 1, 2010 at 9:05 PM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,

 This is a very interesting function.  I have spent a lot of time thinking
 about it.  However, I do not believe that does, in any way, prove or
 indicate that Solomonoff Induction is convergent.  I want to discuss the
 function but I need to take more time to study some stuff and to work
 various details out.  (Although I have thought a lot about it, I am writing
 this under a sense of deadline, so it may not be well composed.)



 My argument was that Solomonoff's conjecture, which was based (as far as I
 can tell) on 'all possible programs', was fundamentally flawed because the
 idea of 'all possible programs' is not a programmable definition.  All
 possible programs is a domain, not a class of programs that can be feasibly
 defined in the form of an algorithm that could 'reach' all the programs.



 The domain of all possible programs is trans-infinite just as the domain of
 irrational numbers are.  Why do I believe this?  Because if we imagine
 that infinite algorithms are computable, then we could compute irrational
 numbers.  That is, there are programs that, given infinite resources,
 could compute irrational numbers.  We can use the binomial theorem, for
 example to compute the square root of 2.  And we can use trial and error
 methods to compute the nth root of any number.  So that proves that there
 are infinite irrational numbers that can be computed by algorithms that run
 for infinity.



 So what does this have to do with Solomonoff's conjecture of all possible
 programs?  Well, if I could prove that any individual irrational number
 could be computed (with programs that ran through infinity) then I might be
 able to prove that there are trans-infinite programs.  If I could prove
 that some trans-infinite subset of irrational numbers could be computed then
 I might be able to prove that 'all possible programs' is a trans-infinite
 class.


 Now Abram said that since his sum, based on runtime and program length, is
 convergent it can prove that Solomonoff Induction is convergent.  Even
 assuming that his convergent sum method could be fixed up a little, I
 suspect that this time-length bound is misleading.  Since a Turing Machine
 allows for erasures this means that a program could last longer than his
 time parameter and still produce an output string that matches the given
 string.  And if 'all possible programs' is a trans-infinite class then
 there are programs that you are going to miss.  Your encoding method will
 miss trans-infinite programs (unless you have trans-cended the
 trans-infinite.)

 However, I want to study the function and some other ideas related to this
 kind of thing a little more.  It is an interesting function.
 Unfortunately I also have to get back to other-worldly things.

 Jim Bromer


 On Mon, Jul 26, 2010 at 2:54 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 I'll argue that solomonoff probabilities are in fact like Pi, that is,
 computable in the limit.

 I still do not understand why you think these combinations are necessary.
 It is not necessary to make some sort of ordering of the sum to get it to
 converge: ordering only matters for infinite sums which include negative
 numbers. (Perhaps that's where 

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-08-01 Thread Jim Bromer
Abram,

This is a very interesting function.  I have spent a lot of time thinking
about it.  However, I do not believe that does, in any way, prove or
indicate that Solomonoff Induction is convergent.  I want to discuss the
function but I need to take more time to study some stuff and to work
various details out.  (Although I have thought a lot about it, I am writing
this under a sense of deadline, so it may not be well composed.)



My argument was that Solomonoff's conjecture, which was based (as far as I
can tell) on 'all possible programs', was fundamentally flawed because the
idea of 'all possible programs' is not a programmable definition.  All
possible programs is a domain, not a class of programs that can be feasibly
defined in the form of an algorithm that could 'reach' all the programs.



The domain of all possible programs is trans-infinite just as the domain of
irrational numbers are.  Why do I believe this?  Because if we imagine that
infinite algorithms are computable, then we could compute irrational
numbers.  That is, there are programs that, given infinite resources, could
compute irrational numbers.  We can use the binomial theorem, for example to
compute the square root of 2.  And we can use trial and error methods to
compute the nth root of any number.  So that proves that there are infinite
irrational numbers that can be computed by algorithms that run for infinity.
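
For instance, here is a rough Python sketch of such a trial-and-error
computation (an illustration only, not anything of Solomonoff's): a
digit-by-digit nth-root routine that, left running forever, enumerates the
full infinite expansion of an irrational such as the square root of 2.

    from itertools import islice

    def nth_root_digits(x, n, base=10):
        # Trial-and-error, digit-by-digit expansion of the n-th root of the
        # integer x.  Each pass finds the largest next digit d such that the
        # enlarged approximation, raised to the power n, still fits under x.
        # Without a bound on iterations it never halts; it just keeps
        # emitting digits of the (generally irrational) expansion.
        a = 0
        while (a + 1) ** n <= x:          # integer part, by trial and error
            a += 1
        yield a
        k = 0
        while True:
            k += 1
            a *= base
            target = x * base ** (n * k)
            d = 0
            while (a + d + 1) ** n <= target:
                d += 1
            a += d
            yield d

    print(list(islice(nth_root_digits(2, 2), 6)))  # [1, 4, 1, 4, 2, 1]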



So what does this have to do with Solomonoff's conjecture of all possible
programs?  Well, if I could prove that any individual irrational number
could be computed (with programs that ran through infinity) then I might be
able to prove that there are trans-infinite programs.  If I could prove that
some trans-infinite subset of irrational numbers could be computed then
I might be able to prove that 'all possible programs' is a trans-infinite
class.


Now Abram said that since his sum, based on runtime and program length, is
convergent it can prove that Solomonoff Induction is convergent.  Even
assuming that his convergent sum method could be fixed up a little, I
suspect that this time-length bound is misleading.  Since a Turing Machine
allows for erasures this means that a program could last longer than his
time parameter and still produce an output string that matches the given
string.  And if 'all possible programs' is a trans-infinite class then there
are programs that you are going to miss.  Your encoding method will miss
trans-infinite programs (unless you have trans-cended the trans-infinite.)

However, I want to study the function and some other ideas related to this
kind of thing a little more.  It is an interesting function.  Unfortunately
I also have to get back to other-worldly things.

Jim Bromer


On Mon, Jul 26, 2010 at 2:54 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 I'll argue that solomonoff probabilities are in fact like Pi, that is,
 computable in the limit.

 I still do not understand why you think these combinations are necessary.
 It is not necessary to make some sort of ordering of the sum to get it to
 converge: ordering only matters for infinite sums which include negative
 numbers. (Perhaps that's where you're getting the idea?)

 Here's my proof, rewritten from an earlier post, using the properties of
 infinite sums of non-negative numbers.

 (preliminaries)

 Define the computation as follows: we start with a string S which we want
 to know the Solomonoff probability of. We are given a time-limit T. We start
 with P=0, where P is a real number with precision 2*log_4(T) or more. We use
 some binary encoding for programs which (unlike normal programming
 languages) does not contain syntactically invalid programs, but still will
 (of course) contain infinite loops and so on. We run program 0 and 1 for
 T/4 each, 00, 01, 10 and 11 for T/16 each, and in general run each
 program of length N for floor[T/(4^N)] until T/(4^N) is less than 1. Each
 time we run a program and the result is S, we add 1/(4^N) to P.

 (assertion)

 P converges to some value as T is increased.

 (proof)

 If every single program were to output S, then P would converge to 1/4 +
 1/4 + 1/16 + 1/16 + 1/16 + 1/16 + ... that is, 2*(1/(4^1)) + 4*(1/(4^2)) +
 8*(1/(4^3)) + ... which comes to 1/2 + 1/4 + 1/8 + 1/16 + .. i.e. 1/(2^1) +
 1/(2^2) + 1/(2^3) + ... ; it is well-known that this sequence converges to
 1. Thus 1 is an upper bound for P: it could only get that high if every
 single program were to output S.

 0 is a lower bound, since we start there and never subtract anything. In
 fact, every time we add more to P, we have a new lower bound: P will never
 go below a number once it reaches it. The sum can only increase. Infinite
 sums with this property must either converge to a finite number, or go to
 infinity. However, we already know that 1 is an upper bound for P; so, it
 cannot go to infinity. Hence, it must converge.

 --Abram

   On Mon, Jul 26, 2010 at 9:14 AM, Jim Bromer jimbro...@gmail.com wrote:

   As 

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-26 Thread Jim Bromer
As far as I can tell right now, my theories that Solomonoff Induction is
trans-infinite were wrong.  Now that I realize that the mathematics do not
support these conjectures, I have to acknowledge that I would not be able to
prove or even offer a sketch of a proof of my theories.  Although I did not
use rigorous mathematics while I have tried to make an assessment of the
Solomonoff method, the first principle of rigorous mathematics is to
acknowledge that the mathematics does not support your supposition when it
doesn't.

Solomonoff Induction isn't a mathematical theory because the desired results
are not computable.  As I mentioned before, there are a great many functions
that are not computable but which are useful and important because they tend
toward a limit which can be seen with a reasonable number of calculations
using the methods available.  Pi is one such function.  (I am presuming that
pi would require an infinite expansion which seems right.)

I have explained, and I think it is a correct explanation, that there is no
way that you could make an a priori computation of all possible
combinations taken from infinite values.  So you could not even come up with
a theoretical construct that could take account of that level of
complexity.  It is true that you could come up with a theoretical
computational method that could take account of any finite number of values,
and that is what we are talking about when we talk about the infinite, but
in this context it only points to a theoretical paradox.  Your theoretical
solution could not take the final step of computing a probability for a
string until it had run through the infinite combinations and this is
impossible.  The same problem does not occur for pi because the function
that produces it tends toward a limit.

The reason I thought Solomonoff Induction was trans-infinite was because I
thought it used the Bayesian notion to compute the probability using all
possible programs that produced a particular substring following a given
prefix.  Now I understand that the desired function is the computation of
only the probability of a particular string (not all possible substrings
that are identical to the string) following a given prefix.  I want to study
the method a little further during the next few weeks, but I just wanted to
make it clear that, as far as I now understand the program, I do not
think that it is trans-infinite.

Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-26 Thread Abram Demski
Jim,

I'll argue that solomonoff probabilities are in fact like Pi, that is,
computable in the limit.

I still do not understand why you think these combinations are necessary. It
is not necessary to make some sort of ordering of the sum to get it to
converge: ordering only matters for infinite sums which include negative
numbers. (Perhaps that's where you're getting the idea?)

Here's my proof, rewritten from an earlier post, using the properties of
infinite sums of non-negative numbers.

(preliminaries)

Define the computation as follows: we start with a string S which we want to
know the Solomonoff probability of. We are given a time-limit T. We start
with P=0, where P is a real number with precision 2*log_4(T) or more. We use
some binary encoding for programs which (unlike normal programming
languages) does not contain syntactically invalid programs, but still will
(of course) contain infinite loops and so on. We run program 0 and 1 for
T/4 each, 00, 01, 10 and 11 for T/16 each, and in general run each
program of length N for floor[T/(4^N)] until T/(4^N) is less than 1. Each
time we run a program and the result is S, we add 1/(4^N) to P.

(assertion)

P converges to some value as T is increased.

(proof)

If every single program were to output S, then P would converge to 1/4 + 1/4
+ 1/16 + 1/16 + 1/16 + 1/16 + ... that is, 2*(1/(4^1)) + 4*(1/(4^2)) +
8*(1/(4^3)) + ... which comes to 1/2 + 1/4 + 1/8 + 1/16 + .. i.e. 1/(2^1) +
1/(2^2) + 1/(2^3) + ... ; it is well-known that this sequence converges to
1. Thus 1 is an upper bound for P: it could only get that high if every
single program were to output S.

0 is a lower bound, since we start there and never subtract anything. In
fact, every time we add more to P, we have a new lower bound: P will never
go below a number once it reaches it. The sum can only increase. Infinite
sums with this property must either converge to a finite number, or go to
infinity. However, we already know that 1 is an upper bound for P; so, it
cannot go to infinity. Hence, it must converge.
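
For concreteness, a minimal Python sketch of that computation (the
run_program(bits, steps) interpreter is a hypothetical stand-in for executing
an encoded program under a step budget; it is not specified here):

    from itertools import product

    def approximate_P(S, T, run_program):
        # Each program of length N gets floor(T / 4**N) steps, and
        # contributes 1 / 4**N to P whenever it halts within that budget
        # with output S.
        P = 0.0
        N = 1
        while T // 4 ** N >= 1:
            budget = T // 4 ** N
            for bits in product("01", repeat=N):
                if run_program("".join(bits), budget) == S:
                    P += 1.0 / 4 ** N
            N += 1
        return P

As T grows no contribution is ever lost (a program that produced S within a
small budget still does so within a larger one), so P is non-decreasing in T
and bounded above by 1, which is the convergence argued above.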

--Abram

On Mon, Jul 26, 2010 at 9:14 AM, Jim Bromer jimbro...@gmail.com wrote:

 As far as I can tell right now, my theories that Solomonoff Induction is
 trans-infinite were wrong.  Now that I realize that the mathematics do not
 support these conjectures, I have to acknowledge that I would not be able to
 prove or even offer a sketch of a proof of my theories.  Although I did not
 use rigorous mathematics while I have tried to make an assessment of the
 Solomonoff method, the first principle of rigorous mathematics is to
 acknowledge that the mathematics does not support your supposition when it
 doesn't.

 Solomonoff Induction isn't a mathematical theory because the desired
 results are not computable.  As I mentioned before, there are a great many
 functions that are not computable but which are useful and important because
 they tend toward a limit which can be seen with a reasonable number of
 calculations using the methods available.  Pi is one such function.  (I am
 presuming that pi would require an infinite expansion which seems right.)

 I have explained, and I think it is a correct explanation, that there is no
 way that you could make an a priori computation of all possible
 combinations taken from infinite values.  So you could not even come up with
 a theoretical construct that could take account of that level of
 complexity.  It is true that you could come up with a theoretical
 computational method that could take account of any finite number of values,
 and that is what we are talking about when we talk about the infinite, but
 in this context it only points to a theoretical paradox.  Your theoretical
 solution could not take the final step of computing a probability for a
 string until it had run through the infinite combinations and this is
 impossible.  The same problem does not occur for pi because the function
 that produces it tends toward a limit.

 The reason I thought Solomonoff Induction was trans-infinite was because I
 thought it used the Bayesian notion to compute the probability using all
 possible programs that produced a particular substring following a given
 prefix.  Now I understand that the desired function is the computation of
 only the probability of a particular string (not all possible substrings
 that are identical to the string) following a given prefix.  I want to study
 the method a little further during the next few weeks, but I just wanted to
 make it clear that, as far as I now understand the program, I do not
 think that it is trans-infinite.

 Jim Bromer




-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic




Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-26 Thread Abram Demski
Jim,

Fair enough.

Oh, and Matt: kudos for being better at patiently explaining details than
me.

--Abram

On Mon, Jul 26, 2010 at 4:11 PM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
 We all have some misconceptions about this, and about related issues.  Let
 me think about this more carefully and I will answer during the next week -
 or two, (I have already reached my quota of egregious errors in this
 thread).
 Jim

 On Mon, Jul 26, 2010 at 2:54 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 I'll argue that solomonoff probabilities are in fact like Pi, that is,
 computable in the limit.

 I still do not understand why you think these combinations are necessary.
 It is not necessary to make some sort of ordering of the sum to get it to
 converge: ordering only matters for infinite sums which include negative
 numbers. (Perhaps that's where you're getting the idea?)

 Here's my proof, rewritten from an earlier post, using the properties of
 infinite sums of non-negative numbers.

 (preliminaries)

 Define the computation as follows: we start with a string S which we want
 to know the Solomonoff probability of. We are given a time-limit T. We start
 with P=0, where P is a real number with precision 2*log_4(T) or more. We use
 some binary encoding for programs which (unlike normal programming
 languages) does not contain syntactically invalid programs, but still will
 (of course) contain infinite loops and so on. We run program 0 and 1 for
 T/4 each, 00, 01, 10 and 11 for T/16 each, and in general run each
 program of length N for floor[T/(4^N)] until T/(4^N) is less than 1. Each
 time we run a program and the result is S, we add 1/(4^N) to P.

 (assertion)

 P converges to some value as T is increased.

 (proof)

 If every single program were to output S, then P would converge to 1/4 +
 1/4 + 1/16 + 1/16 + 1/16 + 1/16 + ... that is, 2*(1/(4^1)) + 4*(1/(4^2)) +
 8*(1/(4^3)) + ... which comes to 1/2 + 1/4 + 1/8 + 1/16 + .. i.e. 1/(2^1) +
 1/(2^2) + 1/(2^3) + ... ; it is well-known that this sequence converges to
 1. Thus 1 is an upper bound for P: it could only get that high if every
 single program were to output S.

 0 is a lower bound, since we start there and never subtract anything. In
 fact, every time we add more to P, we have a new lower bound: P will never
 go below a number once it reaches it. The sum can only increase. Infinite
 sums with this property must either converge to a finite number, or go to
 infinity. However, we already know that 1 is an upper bound for P; so, it
 cannot go to infinity. Hence, it must converge.

 --Abram

   On Mon, Jul 26, 2010 at 9:14 AM, Jim Bromer jimbro...@gmail.com wrote:

   As far as I can tell right now, my theories that Solomonoff Induction
 is trans-infinite were wrong.  Now that I realize that the mathematics do
 not support these conjectures, I have to acknowledge that I would not be
 able to prove or even offer a sketch of a proof of my theories.  Although I
 did not use rigorous mathematics while I have tried to make an assessment
 of the Solomonoff method, the first principle of rigorous mathematics is to
 acknowledge that the mathematics does not support your supposition when it
 doesn't.

 Solomonoff Induction isn't a mathematical theory because the desired
 results are not computable.  As I mentioned before, there are a great many
 functions that are not computable but which are useful and important because
they tend toward a limit which can be seen with a reasonable number of 
 calculations using the methods available.  Pi is one such function.  (I am
 presuming that pi would require an infinite expansion which seems right.)

 I have explained, and I think it is a correct explanation, that there is
 no way that you could make an a priori computation of all possible
 combinations taken from infinite values.  So you could not even come up with
 a theoretical construct that could take account of that level of
 complexity.  It is true that you could come up with a theoretical
 computational method that could take account of any finite number of values,
 and that is what we are talking about when we talk about the infinite, but
 in this context it only points to a theoretical paradox.  Your theoretical
 solution could not take the final step of computing a probability for a
 string until it had run through the infinite combinations and this is
 impossible.  The same problem does not occur for pi because the function
 that produces it tends toward a limit.

 The reason I thought Solomonoff Induction was trans-infinite was because
 I thought it used the Bayesian notion to compute the probability using all
 possible programs that produced a particular substring following a given
 prefix.  Now I understand that the desired function is the computation of
 only the probability of a particular string (not all possible substrings
 that are identical to the string) following a given prefix.  I want to study
 the method a little further during the 

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-25 Thread Jim Bromer
I believe that trans-infinite would mean that there is no recursively
enumerable algorithm that could 'reach' every possible item in the
trans-infinite group.



Since each program in Solomonoff Induction, written for a Universal Turing
Machine could be written on a single role of tape, that means that every
possible combination of programs could also be written on the tape.  They
would therefore be recursively enumerable just as they could be enumerated
on a one to one basis with aleph null (counting numbers).



But, unfortunately for my criticism, there are algorithms that could reach
any finite combination of things, which means that even though you could not
determine the ordering of programs that would be necessary to show that the
probabilities of each string approach a limit, it would be possible to write
an algorithm that could show trends, given infinite resources.  This doesn't
mean that the probabilities would approach the limit; it just means that if
they did, there would be an infinite algorithm that could make the best
determination given the information that the programs had already produced.
This would be a necessary step of a theoretical (but still
non-constructivist) proof.



So I can't prove that Solomonoff Induction is inherently trans-infinite.



I am going to take a few weeks to see if I can determine if the idea of
Solomonoff Induction makes hypothetical sense to me.
Jim Bromer



On Sat, Jul 24, 2010 at 6:04 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  Solomonoff Induction may require a trans-infinite level of complexity
 just to run each program.

 Trans-infinite is not a mathematically defined term as far as I can tell.
 Maybe you mean larger than infinity, as in the infinite set of real numbers
 is larger than the infinite set of natural numbers (which is true).

 But it is not true that Solomonoff induction requires more than aleph-null
 operations. (Aleph-null is the size of the set of natural numbers, the
 smallest infinity). An exact calculation requires that you test aleph-null
 programs for aleph-null time steps each. There are aleph-null programs
 because each program is a finite length string, and there is a 1 to 1
 correspondence between the set of finite strings and N, the set of natural
 numbers. Also, each program requires aleph-null computation in the case that
 it runs forever, because each step in the infinite computation can be
 numbered 1, 2, 3...

 However, the total amount of computation is still aleph-null because each
 step of each program can be described by an ordered pair (m,n) in N^2,
 meaning the n'th step of the m'th program, where m and n are natural
 numbers. The cardinality of N^2 is the same as the cardinality of N because
 there is a 1 to 1 correspondence between the sets. You can order the ordered
 pairs as (1,1), (1,2), (2,1), (1,3), (2,2), (3,1), (1,4), (2,3), (3,2),
 (4,1), (1,5), etc. See
 http://en.wikipedia.org/wiki/Countable_set#More_formal_introduction
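
For reference, a short sketch of that diagonal ordering of N^2 (a standard
construction, written here only as an illustration):

    from itertools import count, islice

    def diagonal_pairs():
        # Walk the anti-diagonals m + n = 2, 3, 4, ... so every pair (m, n)
        # of natural numbers is reached after finitely many steps.
        for s in count(2):
            for m in range(1, s):
                yield (m, s - m)

    print(list(islice(diagonal_pairs(), 11)))
    # [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1), (1, 5)]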

 Furthermore you may approximate Solomonoff induction to any desired
 precision with finite computation. Simply interleave the execution of all
 programs as indicated in the ordering of ordered pairs that I just gave,
 where the programs are ordered from shortest to longest. Take the shortest
 program found so far that outputs your string, x. It is guaranteed that this
 algorithm will approach and eventually find the shortest program that
 outputs x given sufficient time, because this program exists and it halts.
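
A sketch of that interleaving under a simplifying assumption: instead of a
real universal Turing machine, "program m" below just prints the decimal
digits of m, one per simulated step, and then halts.  Only the bookkeeping is
being illustrated, i.e. run step n of program m when the pair (m, n) comes up
and keep the program found so far that halts with output x:

    from itertools import count

    def diagonal_pairs():
        # (m, n) read as "the n'th step of the m'th program".
        for s in count(2):
            for m in range(1, s):
                yield (m, s - m)

    def toy_program(m):
        # Hypothetical stand-in for a universal machine: "program m" prints
        # the decimal digits of m, one digit per step, then halts.
        for ch in str(m):
            yield ch

    def shortest_toy_program(x, step_budget=1_000_000):
        state = {}                      # m -> [generator, output so far, halted?]
        for steps, (m, _n) in enumerate(diagonal_pairs()):
            if steps >= step_budget:
                return None             # give up; this is only an approximation
            if m not in state:
                state[m] = [toy_program(m), "", False]
            gen, out, halted = state[m]
            if halted:
                continue
            try:
                out = out + next(gen)   # run one more step of program m
            except StopIteration:
                halted = True
            state[m][1], state[m][2] = out, halted
            if halted and out == x:
                return m                # in this toy encoding the match is unique

    print(shortest_toy_program("42"))   # -> 42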

 In case you are wondering how Solomonoff induction is not computable, the
 problem is that after this algorithm finds the true shortest program that
 outputs x, it will keep running forever and you might still be wondering if
 a shorter program is forthcoming. In general you won't know.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, July 24, 2010 3:59:18 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 Solomonoff Induction may require a trans-infinite level of complexity just
 to run each program.  Suppose each program is iterated through the
 enumeration of its instructions.  Then, not only do the infinity of
 possible programs need to be run, many combinations of the infinite programs
 from each simulated Turing Machine also have to be tried.  All the
 possible combinations of (accepted) programs, one from any two or more of
 the (accepted) programs produced by each simulated Turing Machine, have to
 be tried.  Although these combinations of programs from each of the
 simulated Turing Machine may not all be unique, they all have to be tried.
 Since each simulated Turing Machine would produce infinite programs, I am
 pretty sure that this means that Solomonoff Induction is, *by definition,*
 trans-infinite.
 Jim Bromer


 On Thu, Jul 22, 2010 at 2:06 PM, Jim Bromer jimbro...@gmail.com wrote:

 I have to retract my

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-25 Thread Jim Bromer

 Solomonoff Induction may require a trans-infinite level of complexity just
 to run each program.  Suppose each program is iterated through the
 enumeration of its instructions.  Then, not only do the infinity of
 possible programs need to be run, many combinations of the infinite programs
 from each simulated Turing Machine also have to be tried.  All the
 possible combinations of (accepted) programs, one from any two or more of
 the (accepted) programs produced by each simulated Turing Machine, have to
 be tried.  Although these combinations of programs from each of the
 simulated Turing Machine may not all be unique, they all have to be tried.
 Since each simulated Turing Machine would produce infinite programs, I am
 pretty sure that this means that Solomonoff Induction is, *by definition,*
 trans-infinite.
 Jim Bromer


 On Thu, Jul 22, 2010 at 2:06 PM, Jim Bromer jimbro...@gmail.com wrote:

 I have to retract my claim that the programs of Solomonoff Induction
 would be trans-infinite.  Each of the infinite individual programs could be
 enumerated by their individual instructions so some combination of unique
 individual programs would not correspond to a unique program but to the
 enumerated program that corresponds to the string of their individual
 instructions.  So I got that one wrong.
 Jim Bromer




Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-25 Thread Jim Bromer
I got confused with the two kinds of combinations that I was thinking about.
Sorry.  However, while the reordering of the partial accumulation of a
finite number of probabilities, where each probability is taken just once,
can be done with a recursively enumerable (r.e.) algorithm, there is no r.e.
algorithm that can consider all possible combinations for an infinite set of
probabilities.  I believe that this means that the probability of a
particular string cannot be proven to attain a stable value using general
mathematical methods, but that the partial ordering of probabilities after
any finite number of programs had been run could be made, both with actual
computed values and through the use of a priori methods based on general
mathematical methods - if someone (like a twenty-second-century AI program)
were capable of dealing with the extraordinary complexity of the problem.


So I haven't proven that there is a theoretical disconnect between the
desired function and the method.  Right now, no one has, as far as I can
tell, been able to prove that the method would actually produce the desired
function for all cases, but I haven't been able to sketch a proof that the
claimed relation between the method and the desired function is completely
unsound.

Jim Bromer


On Sun, Jul 25, 2010 at 9:36 AM, Jim Bromer jimbro...@gmail.com wrote:

 No, I might have been wrong about the feasibility of writing an algorithm
 that can produce all the possible combinations of items when I wrote my last
 message.  It is because the word combination is associated with more than
 one mathematical method.  I am skeptical of the possibility that there is an
 r.e. algorithm that can write out every possible combination of items taken
 from more than one group of *types* when strings of infinite length are
 possible.

 So yes, I may have proved that there is no r.e. algorithm, even if given
 infinite resources, that can order the computation of Solomonoff Induction
 in such a way as to show that the probability (or probabilities) tend toward
 a limit.  If my theory is correct, and right now I would say that there is a
 real chance that it is, then I have proved that Solomonoff Induction is
 theoretically infeasible, illogical, and therefore refuted.

 Jim Bromer






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Jim Bromer
Solomonoff Induction may require a trans-infinite level of complexity just
to run each program.  Suppose each program is iterated through the
enumeration of its instructions.  Then, not only do the infinity of possible
programs need to be run, many combinations of the infinite programs from
each simulated Turing Machine also have to be tried.  All the possible
combinations of (accepted) programs, one from any two or more of the
(accepted) programs produced by each simulated Turing Machine, have to be
tried.  Although these combinations of programs from each of the simulated
Turing Machine may not all be unique, they all have to be tried.  Since each
simulated Turing Machine would produce infinite programs, I am pretty sure
that this means that Solomonoff Induction is, *by definition,* trans-infinite.
Jim Bromer


On Thu, Jul 22, 2010 at 2:06 PM, Jim Bromer jimbro...@gmail.com wrote:

 I have to retract my claim that the programs of Solomonoff Induction would
 be trans-infinite.  Each of the infinite individual programs could be
 enumerated by their individual instructions so some combination of unique
 individual programs would not correspond to a unique program but to the
 enumerated program that corresponds to the string of their individual
 instructions.  So I got that one wrong.
 Jim Bromer






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Jim Bromer
On Sat, Jul 24, 2010 at 3:59 PM, Jim Bromer jimbro...@gmail.com wrote:

 Solomonoff Induction may require a trans-infinite level of complexity just
 to run each program.  Suppose each program is iterated through the
 enumeration of its instructions.  Then, not only do the infinity of
 possible programs need to be run, many combinations of the infinite programs
 from each simulated Turing Machine also have to be tried.  All the
 possible combinations of (accepted) programs, one from any two or more of
 the (accepted) programs produced by each simulated Turing Machine, have to
 be tried.  Although these combinations of programs from each of the
 simulated Turing Machine may not all be unique, they all have to be tried.
 Since each simulated Turing Machine would produce infinite programs, I am
 pretty sure that this means that Solomonoff Induction is, *by definition,*
 trans-infinite.
 Jim Bromer



All the possible combinations of (accepted) programs, one program taken from
any two or more simulated Turing Machines, have to be tried. Since each
simulated Turing Machine would produce infinite programs and there are
infinite simulated Turing Machines, I am pretty sure that this means
that Solomonoff Induction is, *by definition,* trans-infinite.





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Matt Mahoney
Jim Bromer wrote:
 Solomonoff Induction may require a trans-infinite level of complexity just to 
run each program. 

Trans-infinite is not a mathematically defined term as far as I can tell. 
Maybe you mean larger than infinity, as in the infinite set of real numbers is 
larger than the infinite set of natural numbers (which is true).

But it is not true that Solomonoff induction requires more than aleph-null 
operations. (Aleph-null is the size of the set of natural numbers, the 
smallest 
infinity). An exact calculation requires that you test aleph-null programs for 
aleph-null time steps each. There are aleph-null programs because each program 
is a finite length string, and there is a 1 to 1 correspondence between the set 
of finite strings and N, the set of natural numbers. Also, each program 
requires 
aleph-null computation in the case that it runs forever, because each step in 
the infinite computation can be numbered 1, 2, 3...

However, the total amount of computation is still aleph-null because each step 
of each program can be described by an ordered pair (m,n) in N^2, meaning the 
n'th step of the m'th program, where m and n are natural numbers. The 
cardinality of N^2 is the same as the cardinality of N because there is a 1 to 
1 
correspondence between the sets. You can order the ordered pairs as (1,1), 
(1,2), (2,1), (1,3), (2,2), (3,1), (1,4), (2,3), (3,2), (4,1), (1,5), etc. 
See http://en.wikipedia.org/wiki/Countable_set#More_formal_introduction

Furthermore you may approximate Solomonoff induction to any desired precision 
with finite computation. Simply interleave the execution of all programs as 
indicated in the ordering of ordered pairs that I just gave, where the programs 
are ordered from shortest to longest. Take the shortest program found so far 
that outputs your string, x. It is guaranteed that this algorithm will approach 
and eventually find the shortest program that outputs x given sufficient time, 
because this program exists and it halts.

In case you are wondering how Solomonoff induction is not computable, the 
problem is that after this algorithm finds the true shortest program that 
outputs x, it will keep running forever and you might still be wondering if a 
shorter program is forthcoming. In general you won't know.

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sat, July 24, 2010 3:59:18 PM
Subject: Re: [agi] Comments On My Skepticism of Solomonoff Induction


Solomonoff Induction may require a trans-infinite level of complexity just to 
run each program.  Suppose each program is iterated through the enumeration of 
its instructions.  Then, not only do the infinity of possible programs need to 
be run, many combinations of the infinite programs from each simulated Turing 
Machine also have to be tried.  All the possible combinations of (accepted) 
programs, one from any two or more of the (accepted) programs produced by each 
simulated Turing Machine, have to be tried.  Although these combinations of 
programs from each of the simulated Turing Machine may not all be unique, they 
all have to be tried.  Since each simulated Turing Machine would produce 
infinite programs, I am pretty sure that this means that Solomonoff Induction
is, by definition, trans-infinite.
Jim Bromer


On Thu, Jul 22, 2010 at 2:06 PM, Jim Bromer jimbro...@gmail.com wrote:

I have to retract my claim that the programs of Solomonoff Induction would be 
trans-infinite.  Each of the infinite individual programs could be enumerated 
by 
their individual instructions so some combination of unique individual programs 
would not correspond to a unique program but to the enumerated program that 
corresponds to the string of their individual instructions.  So I got that one 
wrong.
Jim Bromer



Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-24 Thread Jim Bromer
Abram,
I use constructivist and intuitionist (and, for that matter, finitist)
methods when they seem useful to me.  I often make mistakes when I am not
wary of constructivist issues.  Constructivist criticisms are interesting
because they can be turned against any presumptive method even though they
might seem to contradict a constructivist criticism taken from a different
presumption.

I misused the term computable a few times because I have seen it used in
different ways.  But it turns out that it can legitimately be used in
different ways.  For example, pi is not computable because it is infinite,
but a limiting approximation to pi is computable.  So I would say that pi is
computable - given infinite resources.  One of my claims here is that I
believe there are programs that will run Solomonoff Induction, so the method
would therefore be computable given infinite resources.  However, my other
claim is that the much desired function or result, where one may compute the
probability that a string will be produced given a particular prefix, is
incomputable.

If I lived 500 years ago I might have said that a function that wasn't
computable wasn't well-defined.  (I might well have been somewhat
pompous about such things in 1510).  However, because of the efficacy of the
theory of limits and other methods of finding bounds on functions, I would
not say that now.  Pi is well defined, but I don't think that Solomonoff
Induction is completely well-defined.  But we can still talk about certain
aspects of it (using mathematics that are well grounded relative to those
aspects of the method that are computable) even though the entire function
is not completely well-defined.

One way to do this is by using conditional statements.  So if it turns out
that one or some of my assumptions are wrong, I can see how to revise my
theory about the aspect of the function that is computable (or seems
computable).

Jim Bromer

On Thu, Jul 22, 2010 at 10:50 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 Aha! So you *are* a constructivist or intuitionist or finitist of some
 variety? This would explain the miscommunication... you appear to hold the
 belief that a structure needs to be computable in order to be well-defined.
 Is that right?

 If that's the case, then you're not really just arguing against Solomonoff
 induction in particular, you're arguing against the entrenched framework of
 thinking which allows it to be defined-- the so-called classical
 mathematics.

 If this is the case, then you aren't alone.

 --Abram


 On Thu, Jul 22, 2010 at 5:06 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:
 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way.
  Also, please point me to this mathematical community that you claim
 rejects Solomonoff induction. Can you find even one paper that refutes it?

 You give a precise statement of the probability in general terms, but then
 say that it is uncomputable.  Then you ask if there is a paper that refutes
 it.  Well, why would any serious mathematician bother to refute it since you
 yourself acknowledge that it is uncomputable and therefore unverifiable and
 therefore not a mathematical theorem that can be proven true or false?  It
 isn't like you claimed that the mathematical statement is verifiable. It is
 as if you are making a statement and then ducking any responsibility for it
 by denying that it is even an evaluation.  You honestly don't see the
 irregularity?

 My point is that the general mathematical community doesn't accept
 Solomonoff Induction, not that I have a paper that *refutes it,*whatever 
 that would mean.

 Please give me a little more explanation why you say the fundamental
 method is that the probability of a string x is proportional to the sum of
 all programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?


 On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  The fundamental method of Solmonoff Induction is trans-infinite.

 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way. How does this approximation invalidate Solomonoff
 induction?

 Also, please point me to this mathematical community that you claim
 rejects Solomonoff induction. Can you find even one paper that refutes it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 21, 2010 3:08:13 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I should have said, It would be unwise

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Jim Bromer
I have to retract my claim that the programs of Solomonoff Induction would
be trans-infinite.  Each of the infinite individual programs could be
enumerated by their individual instructions so some combination of unique
individual programs would not correspond to a unique program but to the
enumerated program that corresponds to the string of their individual
instructions.  So I got that one wrong.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Jim Bromer
On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:
The fundamental method is that the probability of a string x is proportional
to the sum of all programs M that output x weighted by 2^-|M|. That
probability is dominated by the shortest program, but it is equally
uncomputable either way.
Also, please point me to this mathematical community that you claim rejects
Solomonoff induction. Can you find even one paper that refutes it?

You give a precise statement of the probability in general terms, but then
say that it is uncomputable.  Then you ask if there is a paper that refutes
it.  Well, why would any serious mathematician bother to refute it since you
yourself acknowledge that it is uncomputable and therefore unverifiable and
therefore not a mathematical theorem that can be proven true or false?  It
isn't like you claimed that the mathematical statement is verifiable. It is
as if you are making a statement and then ducking any responsibility for it
by denying that it is even an evaluation.  You honestly don't see the
irregularity?

My point is that the general mathematical community doesn't accept
Solomonoff Induction, not that I have a paper that *refutes it,* whatever
that would mean.

Please give me a little more explanation why you say the fundamental method
is that the probability of a string x is proportional to the sum of all
programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?


On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  The fundamental method of Solmonoff Induction is trans-infinite.

 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way. How does this approximation invalidate Solomonoff
 induction?

 Also, please point me to this mathematical community that you claim rejects
 Solomonoff induction. Can you find even one paper that refutes it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 21, 2010 3:08:13 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I should have said, It would be unwise to claim that this method could
 stand as an ideal for some valid and feasible application of probability.
 Jim Bromer

 On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solmonoff Induction is trans-infinite.  Suppose
 you iterate through all possible programs, combining different programs as
 you go.  Then you have an infinite number of possible programs which have a
 trans-infinite number of combinations, because each tier of combinations can
 then be recombined to produce a second, third, fourth,... tier of
 recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Matt Mahoney
Jim Bromer wrote:
 Please give me a little more explanation why you say the fundamental method
is that the probability of a string x is proportional to the sum of all
programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?

By |M| I mean the length of the program M in bits. Why 2^-|M|? Because each bit 
means you can have twice as many programs, so they should count half as much.
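
A one-function sketch of that weighting; the two bit strings below are made
up purely for illustration and do not come from any real machine:

    def weight(program_bits: str) -> float:
        # 2**-|M|: every extra bit halves the prior weight of program M.
        return 2.0 ** -len(program_bits)

    short_program = "1101"          # hypothetical 4-bit program
    long_program = "110100101"      # hypothetical 9-bit program

    print(weight(short_program))                          # 0.0625
    print(weight(long_program))                           # 0.001953125
    print(weight(short_program) / weight(long_program))   # 32.0, i.e. 2**(9 - 4)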

Being uncomputable doesn't make it wrong. The fact that there is no general 
procedure for finding the shortest program that outputs a string doesn't mean 
that you can never find it, or that for many cases you can't approximate it.

You apply Solomonoff induction all the time. What is the next bit in these 
sequences?

1. 0101010101010101010101010101010

2. 11001001110110101010001

In sequence 1 there is an obvious pattern with a short description. You can 
find 
a short program that outputs 0 and 1 alternately forever, so you predict the 
next bit will be 1. It might not be the shortest program, but it is enough that 
"alternate 0 and 1 forever" is shorter than "alternate 0 and 1 15 times
followed by 00" that you can confidently predict the first hypothesis is more
likely.

The second sequence is not so obvious. It looks like random bits. With enough 
intelligence (or computation) you might discover that the sequence is a binary 
representation of pi, and therefore the next bit is 0. But the fact that you 
might not discover the shortest description does not invalidate the principle. 
It just says that you can't always apply Solomonoff induction and get the 
number 
you want.
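
A toy version of that comparison, assuming a small hand-written hypothesis
list with made-up description lengths in place of an enumeration of all
programs (the padding after the "00" is likewise an arbitrary choice):

    def alternate(start_bit):
        # "Alternate 0 and 1 forever", starting from start_bit.
        bit = start_bit
        while True:
            yield str(bit)
            bit ^= 1

    def fifteen_pairs_then_00():
        # "Alternate 0 and 1 fifteen times, then 00", padded with zeros.
        return iter("01" * 15 + "00" + "0" * 100)

    def take(gen, k):
        return "".join(next(gen) for _ in range(k))

    observed = "0101010101010101010101010101010"

    # (hypothetical description length in bits, program); the lengths are
    # invented, chosen only so the second rule costs more bits than the first.
    hypotheses = [
        (10, lambda: alternate(0)),
        (24, fifteen_pairs_then_00),
    ]

    consistent = [(bits, prog) for bits, prog in hypotheses
                  if take(prog(), len(observed)) == observed]
    bits, best = min(consistent, key=lambda pair: pair[0])
    print("predicted next bit:", take(best(), len(observed) + 1)[-1])   # -> 1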

Perhaps http://en.wikipedia.org/wiki/Kolmogorov_complexity will make this clear.

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Thu, July 22, 2010 5:06:12 PM
Subject: Re: [agi] Comments On My Skepticism of Solomonoff Induction


On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:
The fundamental method is that the probability of a string x is proportional to 
the sum of all programs M that output x weighted by 2^-|M|. That probability is 
dominated by the shortest program, but it is equally uncomputable either way.
Also, please point me to this mathematical community that you claim rejects 
Solomonoff induction. Can you find even one paper that refutes it?
 
You give a precise statement of the probability in general terms, but then say 
that it is uncomputable.  Then you ask if there is a paper that refutes it.  
Well, why would any serious mathematician bother to refute it since you 
yourself 
acknowledge that it is uncomputable and therefore unverifiable and therefore 
not 
a mathematical theorem that can be proven true or false?  It isn't like you 
claimed that the mathematical statement is verifiable. It is as if you are 
making a statement and then ducking any responsibility for it by denying that 
it 
is even an evaluation.  You honestly don't see the irregularity?
 
My point is that the general mathematical community doesn't accept Solomonoff 
Induction, not that I have a paper that refutes it, whatever that would mean.
 
Please give me a little more explanation why you say the fundamental method is 
that the probability of a string x is proportional to the sum of all programs M 
that output x weighted by 2^-|M|.  Why is the M in a bracket?

 
On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.com wrote:

Jim Bromer wrote:
 The fundamental method of Solmonoff Induction is trans-infinite.


The fundamental method is that the probability of a string x is proportional 
to 
the sum of all programs M that output x weighted by 2^-|M|. That probability 
is 
dominated by the shortest program, but it is equally uncomputable either way. 
How does this approximation invalidate Solomonoff induction?


Also, please point me to this mathematical community that you claim rejects 
Solomonoff induction. Can you find even one paper that refutes it?

 -- Matt Mahoney, matmaho...@yahoo.com 






 From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Wed, July 21, 2010 3:08:13 PM 

Subject: Re: [agi] Comments On My Skepticism of Solomonoff Induction
 


I should have said, It would be unwise to claim that this method could stand 
as 
an ideal for some valid and feasible application of probability.
Jim Bromer


On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

The fundamental method of Solmonoff Induction is trans-infinite.  Suppose you 
iterate through all possible programs, combining different programs as you go. 
 
Then you have an infinite number of possible programs which have a 
trans-infinite number of combinations, because each tier of combinations can 
then be recombined to produce a second, third, fourth,... tier of 
recombinations.
 
Anyone who claims that this method is the ideal for a method of applied 
probability is unwise

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Jim Bromer
Thanks for the explanation.  I want to learn more about statistical
modelling and compression but I will need to take my time on it.  But no, I
don't apply Solomonoff Induction all the time, I never apply it.  I am not
being petty, it's just that you have taken a coincidence and interpreted it
the way you want to.

On Thu, Jul 22, 2010 at 9:33 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  Please give me a little more explanation why you say the fundamental
 method is that the probability of a string x is proportional to the sum of
 all programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?

 By |M| I mean the length of the program M in bits. Why 2^-|M|? Because each
 bit means you can have twice as many programs, so they should count half as
 much.

 Being uncomputable doesn't make it wrong. The fact that there is no general
 procedure for finding the shortest program that outputs a string doesn't
 mean that you can never find it, or that for many cases you can't
 approximate it.

 You apply Solomonoff induction all the time. What is the next bit in these
 sequences?

 1. 0101010101010101010101010101010

 2. 11001001110110101010001

 In sequence 1 there is an obvious pattern with a short description. You can
 find a short program that outputs 0 and 1 alternately forever, so you
 predict the next bit will be 1. It might not be the shortest program, but it
 is enough that alternate 0 and 1 forever is shorter than alternate 0 and
 1 15 times followed by 00 that you can confidently predict the first
 hypothesis is more likely.

 The second sequence is not so obvious. It looks like random bits. With
 enough intelligence (or computation) you might discover that the sequence is
 a binary representation of pi, and therefore the next bit is 0. But the fact
 that you might not discover the shortest description does not invalidate the
 principle. It just says that you can't always apply Solomonoff induction and
 get the number you want.

 Perhaps http://en.wikipedia.org/wiki/Kolmogorov_complexity will make this
 clear.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Thu, July 22, 2010 5:06:12 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.comwrote:
 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way.
 Also, please point me to this mathematical community that you claim rejects
 Solomonoff induction. Can you find even one paper that refutes it?

 You give a precise statement of the probability in general terms, but then
 say that it is uncomputable.  Then you ask if there is a paper that refutes
 it.  Well, why would any serious mathematician bother to refute it since you
 yourself acknowledge that it is uncomputable and therefore unverifiable and
 therefore not a mathematical theorem that can be proven true or false?  It
 isn't like you claimed that the mathematical statement is verifiable. It is
 as if you are making a statement and then ducking any responsibility for it
 by denying that it is even an evaluation.  You honestly don't see the
 irregularity?

 My point is that the general mathematical community doesn't accept
 Solomonoff Induction, not that I have a paper that *refutes it,*whatever 
 that would mean.

 Please give me a little more explanation why you say the fundamental method
 is that the probability of a string x is proportional to the sum of all
 programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?


 On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.comwrote:

   Jim Bromer wrote:
  The fundamental method of Solmonoff Induction is trans-infinite.

 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way. How does this approximation invalidate Solomonoff
 induction?

 Also, please point me to this mathematical community that you claim
 rejects Solomonoff induction. Can you find even one paper that refutes it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 21, 2010 3:08:13 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I should have said, It would be unwise to claim that this method could
 stand as an ideal for some valid and feasible application of probability.
 Jim Bromer

 On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solmonoff Induction is trans-infinite

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Abram Demski
Jim,

Sorry for the short quip... I should have thought about how it would sound
before sending.

--Abram

On Wed, Jul 21, 2010 at 4:36 PM, Jim Bromer jimbro...@gmail.com wrote:

 You claim that I have not checked how Solomonoff Induction is actually
 defined, but then don't bother mentioning how it is defined as if it would
 be too much of an ordeal to even begin to try.  It is this kind of evasive
 response, along with the fact that these functions are incomputable, that
 make your replies so absurd.

 On Wed, Jul 21, 2010 at 4:01 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,

 This argument that you've got to consider recombinations *in addition to*
 just the programs displays the lack of mathematical understanding that I am
 referring to... you appear to be arguing against what you *think* solomonoff
 induction is, without checking how it is actually defined...

 --Abram

   On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.comwrote:

   The fundamental method of Solmonoff Induction is trans-infinite.
 Suppose you iterate through all possible programs, combining different
 programs as you go.  Then you have an infinite number of possible programs
 which have a trans-infinite number of combinations, because each tier of
 combinations can then be recombined to produce a second, third, fourth,...
 tier of recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic




-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-22 Thread Abram Demski
Jim,

Aha! So you *are* a constructivist or intuitionist or finitist of some
variety? This would explain the miscommunication... you appear to hold the
belief that a structure needs to be computable in order to be well-defined.
Is that right?

If that's the case, then you're not really just arguing against Solomonoff
induction in particular, you're arguing against the entrenched framework of
thinking which allows it to be defined-- the so-called classical
mathematics.

If this is the case, then you aren't alone.

--Abram

On Thu, Jul 22, 2010 at 5:06 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.comwrote:
 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way.
 Also, please point me to this mathematical community that you claim rejects
 Solomonoff induction. Can you find even one paper that refutes it?

 You give a precise statement of the probability in general terms, but then
 say that it is uncomputable.  Then you ask if there is a paper that refutes
 it.  Well, why would any serious mathematician bother to refute it since you
 yourself acknowledge that it is uncomputable and therefore unverifiable and
 therefore not a mathematical theorem that can be proven true or false?  It
 isn't like you claimed that the mathematical statement is verifiable. It is
 as if you are making a statement and then ducking any responsibility for it
 by denying that it is even an evaluation.  You honestly don't see the
 irregularity?

 My point is that the general mathematical community doesn't accept
 Solomonoff Induction, not that I have a paper that *refutes it,*whatever 
 that would mean.

 Please give me a little more explanation why you say the fundamental method
 is that the probability of a string x is proportional to the sum of all
 programs M that output x weighted by 2^-|M|.  Why is the M in a bracket?


 On Wed, Jul 21, 2010 at 8:47 PM, Matt Mahoney matmaho...@yahoo.comwrote:

   Jim Bromer wrote:
  The fundamental method of Solmonoff Induction is trans-infinite.

 The fundamental method is that the probability of a string x is
 proportional to the sum of all programs M that output x weighted by 2^-|M|.
 That probability is dominated by the shortest program, but it is equally
 uncomputable either way. How does this approximation invalidate Solomonoff
 induction?

 Also, please point me to this mathematical community that you claim
 rejects Solomonoff induction. Can you find even one paper that refutes it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Wed, July 21, 2010 3:08:13 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 I should have said, It would be unwise to claim that this method could
 stand as an ideal for some valid and feasible application of probability.
 Jim Bromer

 On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solmonoff Induction is trans-infinite.  Suppose
 you iterate through all possible programs, combining different programs as
 you go.  Then you have an infinite number of possible programs which have a
 trans-infinite number of combinations, because each tier of combinations can
 then be recombined to produce a second, third, fourth,... tier of
 recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer






-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Matt Mahoney
Jim Bromer wrote:
 The question was asked whether, given infinite resources could Solmonoff 
Induction work.  I made the assumption that it was computable and found that 
it 
wouldn't work.  

On what infinitely powerful computer did you do your experiment?

 My conclusion suggests, that the use of Solmonoff Induction as an ideal for 
compression or something like MDL is not only unsubstantiated but based on a 
massive inability to comprehend the idea of a program that runs every possible 
program. 

It is sufficient to find the shortest program consistent with past results, not 
all programs. The difference is no more than the language-dependent constant. 
Legg proved this in the paper that Ben and I both pointed you to. Do you 
dispute 
his proof? I guess you don't, because you didn't respond the last 3 times this 
was pointed out to you.
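
For reference, the standard bound being appealed to here, stated for
prefix-free programs on a universal machine U (a textbook statement, not a
quotation from Legg's paper): the full Solomonoff sum and its single largest
term agree to within a multiplicative constant that does not depend on x,

    2^-K(x)  <=  sum over programs M with U(M) = x of 2^-|M|  <=  c * 2^-K(x)

where K(x) is the length of the shortest program for x, so scoring a string
by its shortest program and scoring it by the full sum differ by at most a
machine-dependent constant number of bits.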

 I am comfortable with the conclusion that the claim that Solomonoff Induction 
is an ideal for compression or induction or anything else is pretty shallow 
and not based on careful consideration.

I am comfortable with the conclusion that the world is flat because I have a 
gut 
feeling about it and I ignore overwhelming evidence to the contrary.

 There is a chance that I am wrong

So why don't you drop it?

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Tue, July 20, 2010 3:10:40 PM
Subject: Re: [agi] Comments On My Skepticism of Solomonoff Induction


The question was asked whether, given infinite resources, Solomonoff
Induction could work.  I made the assumption that it was computable and found
that it wouldn't work.  It is not computable, even with infinite resources,
for the kind of thing that was claimed it would do (I believe that with a
governance program it might actually be programmable), but it could not be
used to predict (or compute the probability of) a subsequent string given
some prefix string.  Not only is the method impractical, it is theoretically
inane.  My conclusion suggests that the use of Solomonoff Induction as an
ideal for compression or something like MDL is not only unsubstantiated but
based on a massive inability to comprehend the idea of a program that runs
every possible program.
to comprehend the idea of a program that runs every possible program.  

 
I am comfortable with the conclusion that the claim that Solomonoff Induction 
is 
an ideal for compression or induction or anything else is pretty shallow and 
not based on careful consideration.
 
There is a chance that I am wrong, but I am confident that there is nothing in 
the definition of Solomonoff Induction that could be used to prove it.
Jim Bromer


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
Matt,
I never said that I did not accept the application of the method of
probability, it is just that it has to be applied using logic.  Solomonoff
Induction does not meet this standard.  From this conclusion, and from other
sources of information, including the acknowledgement of incomputability and
the lack of acceptance in the general mathematical community, I feel
comfortable with rejecting the theory of Kolmogorov Complexity as well.

What I said was: My conclusion suggests that the use of Solomonoff Induction
as an ideal for compression or something like MDL...
What you said was: It is sufficient to find the shortest program consistent
with past results, not all programs. The difference is no more than the
language-dependent constant...
This is an equivocation based on the line you were responding to.  You are
presenting a related comment as if it were a valid response to what I
actually said.  That is one reason why I am starting to ignore you.

Jim Bromer

On Wed, Jul 21, 2010 at 1:15 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  The question was asked whether, given infinite resources could Solmonoff
 Induction work.  I made the assumption that it was computable and found that
 it wouldn't work.

 On what infinitely powerful computer did you do your experiment?

  My conclusion suggests, that the use of Solmonoff Induction as an ideal
 for compression or something like MDL is not only unsubstantiated but based
 on a massive inability to comprehend the idea of a program that runs every
 possible program.

 It is sufficient to find the shortest program consistent with past results,
 not all programs. The difference is no more than the language-dependent
 constant. Legg proved this in the paper that Ben and I both pointed you to.
 Do you dispute his proof? I guess you don't, because you didn't respond the
 last 3 times this was pointed out to you.

  I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 I am comfortable with the conclusion that the world is flat because I have
 a gut feeling about it and I ignore overwhelming evidence to the contrary.

  There is a chance that I am wrong

 So why don't you drop it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, July 20, 2010 3:10:40 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 The question was asked whether, given infinite resources could Solmonoff
 Induction work.  I made the assumption that it was computable and found that
 it wouldn't work.  It is not computable, even with infinite resources, for
 the kind of thing that was claimed it would do. (I believe that with a
 governance program it might actually be programmable) but it could not be
 used to predict (or compute the probability of) a subsequent string
 given some prefix string.  Not only is the method impractical it is
 theoretically inane.  My conclusion suggests, that the use of Solmonoff
 Induction as an ideal for compression or something like MDL is not only
 unsubstantiated but based on a massive inability to comprehend the idea of a
 program that runs every possible program.

 I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 There is a chance that I am wrong, but I am confident that there is nothing
 in the definition of Solmonoff Induction that could be used to prove it.
 Jim Bromer


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
I meant that what I said was: My conclusion suggests that the use of
Solomonoff Induction as an ideal for compression or something like MDL is not
only unsubstantiated but based on a massive inability to comprehend the idea
of a program that runs every possible program.
What Matt said was: It is sufficient to find the shortest program consistent
with past results, not all programs. The difference is no more than the
language-dependent constant...
This is an equivocation based on the line you were responding to.  You are
presenting a related comment as if it were a valid response to what I
actually said.  That is one reason why I am starting to ignore you.
Jim Bromer



On Wed, Jul 21, 2010 at 2:36 PM, Jim Bromer jimbro...@gmail.com wrote:

 Matt,
 I never said that I did not accept the application of the method of
 probability, it is just that is has to be applied using logic.  Solomonoff
 Induction does not meet this standard.  From this conclusion, and from other
 sources of information, including the acknowledgement of incomputability and
 the lack of acceptance in the general mathematical community I feel
 comfortable with rejecting the theory of Kolomogrov Complexity as well.

 What I said was: My conclusion suggests, that the use of Solmonoff
 Induction as an ideal for compression or something like MDL...
 What you said was: It is sufficient to find the shortest program consistent
 with past results, not all programs. The difference is no more than the
 language-dependent constant...
 This is an equivocation based on the line you were responding to.  You are
 presenting a related comment as if it were a valid response to what I
 actually said.  That is one reason why I am starting to ignore you.

 Jim Bromer

 On Wed, Jul 21, 2010 at 1:15 PM, Matt Mahoney matmaho...@yahoo.comwrote:

   Jim Bromer wrote:
  The question was asked whether, given infinite resources could Solmonoff
 Induction work.  I made the assumption that it was computable and found that
 it wouldn't work.

 On what infinitely powerful computer did you do your experiment?

  My conclusion suggests, that the use of Solmonoff Induction as an ideal
 for compression or something like MDL is not only unsubstantiated but based
 on a massive inability to comprehend the idea of a program that runs every
 possible program.

 It is sufficient to find the shortest program consistent with past
 results, not all programs. The difference is no more than the
 language-dependent constant. Legg proved this in the paper that Ben and I
 both pointed you to. Do you dispute his proof? I guess you don't, because
 you didn't respond the last 3 times this was pointed out to you.

  I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 I am comfortable with the conclusion that the world is flat because I have
 a gut feeling about it and I ignore overwhelming evidence to the contrary.

  There is a chance that I am wrong

 So why don't you drop it?


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Tue, July 20, 2010 3:10:40 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 The question was asked whether, given infinite resources could Solmonoff
 Induction work.  I made the assumption that it was computable and found that
 it wouldn't work.  It is not computable, even with infinite resources, for
 the kind of thing that was claimed it would do. (I believe that with a
 governance program it might actually be programmable) but it could not be
 used to predict (or compute the probability of) a subsequent string
 given some prefix string.  Not only is the method impractical it is
 theoretically inane.  My conclusion suggests, that the use of Solmonoff
 Induction as an ideal for compression or something like MDL is not only
 unsubstantiated but based on a massive inability to comprehend the idea of a
 program that runs every possible program.

 I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 There is a chance that I am wrong, but I am confident that there is
 nothing in the definition of Solmonoff Induction that could be used to prove
 it.
 Jim Bromer

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
The fundamental method of Solomonoff Induction is trans-infinite.  Suppose
you iterate through all possible programs, combining different programs as
you go.  Then you have an infinite number of possible programs which have a
trans-infinite number of combinations, because each tier of combinations can
then be recombined to produce a second, third, fourth,... tier of
recombinations.

Anyone who claims that this method is the ideal for a method of applied
probability is unwise.

Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
I should have said, "It would be unwise to claim that this method could stand
as an ideal for some valid and feasible application of probability."
Jim Bromer

On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

 The fundamental method of Solomonoff Induction is trans-infinite.  Suppose
 you iterate through all possible programs, combining different programs as
 you go.  Then you have an infinite number of possible programs which have a
 trans-infinite number of combinations, because each tier of combinations can
 then be recombined to produce a second, third, fourth,... tier of
 recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Abram Demski
Jim,

This argument that you've got to consider recombinations *in addition to*
just the programs displays the lack of mathematical understanding that I am
referring to... you appear to be arguing against what you *think* Solomonoff
induction is, without checking how it is actually defined...
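
To make that concrete with a toy illustration (the encoding below is an
arbitrary convention invented for this example, nothing canonical): any
"combination" of two programs is itself just another finite bit string, so it
already sits somewhere in the ordinary enumeration of programs, and iterating
the construction never escapes that countable set.

    # Toy illustration: "combining" two programs yields another finite bit
    # string, which the plain enumeration of bit strings already contains.
    # The header convention (|p| in unary, then a '0') is made up for this
    # example.

    def combine(p: str, q: str) -> str:
        """Pack two bit strings into one, reversibly."""
        return '1' * len(p) + '0' + p + q

    def split(c: str) -> tuple:
        """Recover the original pair from the combined string."""
        n = c.index('0')              # length of p, read from the unary header
        return c[n + 1:n + 1 + n], c[n + 1 + n:]

    p, q = '101', '0011'
    c = combine(p, q)
    print(c)            # 11101010011 -- just another finite bit string
    print(split(c))     # ('101', '0011')

There are no separate "tiers" to sum over: the definition already ranges over
every program once.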

--Abram

On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

  The fundamental method of Solomonoff Induction is trans-infinite.  Suppose
 you iterate through all possible programs, combining different programs as
 you go.  Then you have an infinite number of possible programs which have a
 trans-infinite number of combinations, because each tier of combinations can
 then be recombined to produce a second, third, fourth,... tier of
 recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
You claim that I have not checked how Solomonoff Induction is actually
defined, but then you don't bother mentioning how it is defined, as if it
would be too much of an ordeal to even begin to try.  It is this kind of
evasive response, along with the fact that these functions are incomputable,
that makes your replies so absurd.

On Wed, Jul 21, 2010 at 4:01 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 This argument that you've got to consider recombinations *in addition to*
 just the programs displays the lack of mathematical understanding that I am
 referring to... you appear to be arguing against what you *think* solomonoff
 induction is, without checking how it is actually defined...

 --Abram

   On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

   The fundamental method of Solomonoff Induction is trans-infinite.
 Suppose you iterate through all possible programs, combining different
 programs as you go.  Then you have an infinite number of possible programs
 which have a trans-infinite number of combinations, because each tier of
 combinations can then be recombined to produce a second, third, fourth,...
 tier of recombinations.

 Anyone who claims that this method is the ideal for a method of applied
 probability is unwise.

 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
On Wed, Jul 21, 2010 at 4:01 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 This argument that you've got to consider recombinations *in addition to*
 just the programs displays the lack of mathematical understanding that I am
 referring to... you appear to be arguing against what you *think* solomonoff
 induction is, without checking how it is actually defined...

 --Abram


I mean this in a friendly way.  (When I started to write in a fiendly way,
it was only a typo and nothing more.)

Is it possible that it is Abram who doesn't understand how Solomonoff
Induction is actually defined?  Is it possible that it is Abram who has
missed an implication of the definition because it didn't fit in with his
ideal of a convenient and reasonable application of Bayesian mathematics?

I am just saying that you should ask yourself this: is it possible that
Abram doesn't see the obvious flaws because they obviously wouldn't make any
sense vis-a-vis a reasonable and sound application of probability theory?
For example, would you be willing to write to a real expert in probability
theory to ask him for his opinions on Solomonoff Induction?  Because I would
be.

Jim Bromer




Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
I can't say where you are going wrong because I really have no idea.
However, my guess is that you are ignoring certain contingencies that would
be necessary to make your claims valid.  I tried to use a reference to the
theory of limits to explain this but it seemed to fall on deaf ears.

If I were to write everything I knew about Bernoulli without looking it up,
it would be a page of only a few facts.  I have read some things about him; I
just don't recall much of it.  So when I dare say that Abram couldn't write
much about Cauchy without looking it up, it is not a pretentious put-down, but
more like a last-ditch effort to teach him some basic humility.

Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Jim Bromer
If someone had a profound knowledge of Solomonoff Induction and the *science
of probability* he could at the very least talk to me in a way that I knew
he knew what I was talking about and I knew he knew what he was talking
about.  He might be slightly obnoxious, or he might be casual, or (more
likely) he would try to be patient.  But it is unlikely that he would use a
hit-and-run attack and denounce my conjectures without taking the
opportunity to talk to me about what I was saying.  That is one of the ways
that I know that you don't know as much as you think you know.  You rarely
get angry about being totally right.

A true expert would be able to talk to me and also take advantage of my
thinking about the subject to weave some new information into the
conversation so that I could leave it with a greater insight about the
problem than I did before.  That is not a skill that only good teachers
have; it is a skill that almost any expert can develop if he is willing to
use it.





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-21 Thread Matt Mahoney
Jim Bromer wrote:
 The fundamental method of Solomonoff Induction is trans-infinite.

The fundamental method is that the probability of a string x is proportional to 
the sum, over all programs M that output x, of the weight 2^-|M|. That probability 
is dominated by the shortest program, but it is equally uncomputable either way. 
How does this approximation invalidate Solomonoff induction?
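
For anyone who wants to see the weighting in miniature, here is a toy sketch
(an invented two-opcode machine, chosen only so the sum is easy to check by
hand; it is not Solomonoff induction itself, which needs a universal machine
and is uncomputable):

    # Toy machine: program '0'+s prints s; program '1'+s prints s twice.
    # For a target string x, sum 2^-|p| over the programs p (up to a length
    # bound) that print exactly x, and compare the total with the weight of
    # the shortest such program.

    def run(program: str) -> str:
        op, payload = program[0], program[1:]
        return payload if op == '0' else payload + payload

    def programs_printing(x: str, max_len: int):
        for n in range(1, max_len + 1):
            for i in range(2 ** n):
                p = format(i, '0{}b'.format(n))
                if run(p) == x:
                    yield p

    x = '0101'
    progs = list(programs_printing(x, max_len=10))    # ['101', '00101']
    total = sum(2.0 ** -len(p) for p in progs)        # 0.125 + 0.03125
    shortest = min(progs, key=len)
    print(total, 2.0 ** -len(shortest) / total)       # shortest carries ~80%

Even in this toy setting the shortest program carries most of the weight,
which is the sense in which the shortest-program approximation is "close
enough"; the computability question is about the real construction, not the
toy.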

Also, please point me to this mathematical community that you claim rejects 
Solomonoff induction. Can you find even one paper that refutes it?

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Wed, July 21, 2010 3:08:13 PM
Subject: Re: [agi] Comments On My Skepticism of Solomonoff Induction


I should have said, "It would be unwise to claim that this method could stand as 
an ideal for some valid and feasible application of probability."
Jim Bromer


On Wed, Jul 21, 2010 at 2:47 PM, Jim Bromer jimbro...@gmail.com wrote:

The fundamental method of Solomonoff Induction is trans-infinite.  Suppose you 
iterate through all possible programs, combining different programs as you go.  
Then you have an infinite number of possible programs which have a 
trans-infinite number of combinations, because each tier of combinations can 
then be recombined to produce a second, third, fourth,... tier of 
recombinations.
 
Anyone who claims that this method is the ideal for a method of applied 
probability is unwise.
 Jim Bromer



Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-20 Thread Abram Demski
Jim,

An example reference on the theory of computability is Computability and
Logic by Boolos, Burgess and Jeffrey. For those who accept the
Church-Turing thesis, this mathematical theory provides a sufficient account
of the notion of computability, including the space of possible programs
(which is formalized as the set of Turing machines).
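
(For concreteness: under the usual convention that programs are finite bit
strings, "the space of possible programs" is simply the set of all finite bit
strings, which can be listed in length-then-lexicographic order.  The sketch
below only illustrates that countability; it is not Solomonoff induction.)

    from itertools import count, product

    def all_programs():
        """Every finite bit string, in length-then-lexicographic order:
        '', '0', '1', '00', '01', '10', '11', '000', ..."""
        yield ''
        for n in count(1):
            for bits in product('01', repeat=n):
                yield ''.join(bits)

    # Any particular program shows up at some finite position in this list,
    # which is all that a well-defined (countable) program space requires.
    for index, p in zip(range(8), all_programs()):
        print(index, repr(p))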

--Abram

On Mon, Jul 19, 2010 at 6:44 AM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
 I feel a responsibility to make an effort to explain myself when someone
 doesn't understand what I am saying, but once I have gone over the material
 sufficiently, if the person is still arguing with me about it I will just
 say that I have already explained myself in the previous messages.  For
 example if you can point to some authoritative source outside the
 Solomonoff-Kolmogorov crowd that agrees that full program space, as it
 pertains to definitions like "all possible programs," or my example
 of "all possible mathematical functions," represents a comprehensible
 concept that is open to mathematical analysis then tell me about it.  We use
 concepts like the set containing sets that are not members of themselves
 as a philosophical tool that can lead to the discovery of errors in our
 assumptions, and in this way such contradictions are of tremendous value.
 The ability to use critical skills to find flaws in one's own presumptions
 is critical in comprehension, and if that kind of critical thinking has
 been turned off for some reason, then the consequences will be predictable.
 I think compression is a useful field but the idea of universal induction
 aka Solomonoff Induction is garbage science.  It was a good effort on
 Solomonoff's part, but it didn't work and it is time to move on, as the
 majority of theorists have.
 Jim Bromer

 On Sun, Jul 18, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,

 I'm still not sure what your point even is, which is probably why my
 responses seem so strange to you. It still seems to me as if you are jumping
 back and forth between different positions, like I said at the start of this
 discussion.

 You didn't answer why you think program space does not represent a
 comprehensible concept. (I will drop the full if it helps...)

 My only conclusion can be that you are (at least implicitly) rejecting
 some classical mathematical principles and using your own very different
 notion of which proofs are valid, which concepts are well-defined, et
 cetera.

 (Or perhaps you just don't have a background in the formal theory of
 computation?)

 Also, not sure what difference you mean to say I'm papering over.

 Perhaps it *is* best that we drop it, since neither one of us is getting
 through to the other; but, I am genuinely trying to figure out what you are
 saying...

 --Abram

   On Sun, Jul 18, 2010 at 9:09 PM, Jim Bromer jimbro...@gmail.comwrote:

   Abram,
 I was going to drop the discussion, but then I thought I figured out why
 you kept trying to paper over the difference.  Of course, our personal
 disagreement is trivial; it isn't that important.  But the problem with
 Solomonoff Induction is that not only is the output hopelessly tangled and
 seriously infinite, but the input is as well.  The definition of all
 possible programs, like the definition of all possible mathematical
 functions, is not a proper mathematical problem that can be comprehended in
 an analytical way.  I think that is the part you haven't totally figured out
 yet (if you will excuse the pun).  Total program space, does not represent
 a comprehensible computational concept.  When you try to find a way to work out
 feasible computable examples it is not enough to limit the output string
 space, you HAVE to limit the program space in the same way.  That second
 limitation makes the entire concept of total program space, much too
 weak for our purposes.  You seem to know this at an intuitive operational
 level, but it seems to me that you haven't truly grasped the implications.

 I say that Solomonoff Induction is computational but I have to use a
 trick to justify that remark.  I think the trick may be acceptable, but I am
 not sure.  But the possibility that the concept of all possible programs,
 might be computational doesn't mean that it is a sound mathematical
 concept.  This underlies the reason that I intuitively came to the
 conclusion that Solomonoff Induction was transfinite.  However, I wasn't
 able to prove it because the hypothetical concept of all possible program
 space, is so pretentious that it does not lend itself to mathematical
 analysis.

 I just wanted to point this detail out because your implied view that you
 agreed with me but total program space was mathematically well-defined
 did not make any sense.
 Jim Bromer

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-20 Thread Jim Bromer
The question was asked whether, given infinite resources, Solomonoff
Induction could work.  I made the assumption that it was computable and found
that it wouldn't work.  It is not computable, even with infinite resources,
for the kind of thing it was claimed to do (I believe that with a governance
program it might actually be programmable), but it could not be used to
predict (or compute the probability of) a subsequent string given some prefix
string.  Not only is the method impractical, it is theoretically inane.  My
conclusion suggests that the use of Solomonoff Induction as an ideal for
compression or something like MDL is not only unsubstantiated but based on a
massive inability to comprehend the idea of a program that runs every
possible program.

I am comfortable with the conclusion that the claim that Solomonoff
Induction is an ideal for compression or induction or anything else is
pretty shallow and not based on careful consideration.

There is a chance that I am wrong, but I am confident that there is nothing
in the definition of Solomonoff Induction that could be used to prove it.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-20 Thread Jim Bromer
I am not going in circles.  I probably should not express myself in
replies.  I made a lot of mistakes getting to the conclusion that I got to,
and I am a little uncertain as to whether the construction of the diagonal
set actually means that there would be uncountable sets for this
particular example, but that, for example, has nothing to do with anything
that you said.
Jim Bromer

On Tue, Jul 20, 2010 at 5:07 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 *sigh* My response to that would just be to repeat certain questions I
 already asked you... I guess we should give it up after all. The best I can
 understand you is to assume that you simply don't understand the relevant
 mathematical constructions, and you've reached pretty much the same point
 with me. I'd continue in private if you're interested, but we should
 probably stop going in circles on a public list.

 --Abram

   On Tue, Jul 20, 2010 at 3:10 PM, Jim Bromer jimbro...@gmail.com wrote:

  The question was asked whether, given infinite resources, Solomonoff
  Induction could work.  I made the assumption that it was computable and
  found that it wouldn't work.  It is not computable, even with infinite
  resources, for the kind of thing it was claimed to do (I believe that with a
  governance program it might actually be programmable), but it could not be
  used to predict (or compute the probability of) a subsequent string given
  some prefix string.  Not only is the method impractical, it is theoretically
  inane.  My conclusion suggests that the use of Solomonoff Induction as an
  ideal for compression or something like MDL is not only unsubstantiated but
  based on a massive inability to comprehend the idea of a program that runs
  every possible program.

 I am comfortable with the conclusion that the claim that Solomonoff
 Induction is an ideal for compression or induction or anything else is
 pretty shallow and not based on careful consideration.

 There is a chance that I am wrong, but I am confident that there is
  nothing in the definition of Solomonoff Induction that could be used to prove
 it.
 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-19 Thread Jim Bromer
Abram,
I feel a responsibility to make an effort to explain myself when someone
doesn't understand what I am saying, but once I have gone over the material
sufficiently, if the person is still arguing with me about it I will just
say that I have already explained myself in the previous messages.  For
example if you can point to some authoritative source outside the
Solomonoff-Kolmogorov crowd that agrees that full program space, as it
pertains to definitions like "all possible programs," or my example
of "all possible mathematical functions," represents a comprehensible
concept that is open to mathematical analysis then tell me about it.  We use
concepts like the set containing sets that are not members of themselves
as a philosophical tool that can lead to the discovery of errors in our
assumptions, and in this way such contradictions are of tremendous value.
The ability to use critical skills to find flaws in one's own presumptions
is critical in comprehension, and if that kind of critical thinking has
been turned off for some reason, then the consequences will be predictable.
I think compression is a useful field but the idea of universal induction
aka Solomonoff Induction is garbage science.  It was a good effort on
Solomonoff's part, but it didn't work and it is time to move on, as the
majority of theorists have.
Jim Bromer

On Sun, Jul 18, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,

 I'm still not sure what your point even is, which is probably why my
 responses seem so strange to you. It still seems to me as if you are jumping
 back and forth between different positions, like I said at the start of this
 discussion.

 You didn't answer why you think program space does not represent a
 comprehensible concept. (I will drop the full if it helps...)

 My only conclusion can be that you are (at least implicitly) rejecting some
 classical mathematical principles and using your own very different notion
 of which proofs are valid, which concepts are well-defined, et cetera.

 (Or perhaps you just don't have a background in the formal theory of
 computation?)

 Also, not sure what difference you mean to say I'm papering over.

 Perhaps it *is* best that we drop it, since neither one of us is getting
 through to the other; but, I am genuinely trying to figure out what you are
 saying...

 --Abram

   On Sun, Jul 18, 2010 at 9:09 PM, Jim Bromer jimbro...@gmail.com wrote:

   Abram,
 I was going to drop the discussion, but then I thought I figured out why
 you kept trying to paper over the difference.  Of course, our personal
 disagreement is trivial; it isn't that important.  But the problem with
 Solomonoff Induction is that not only is the output hopelessly tangled and
 seriously infinite, but the input is as well.  The definition of all
 possible programs, like the definition of all possible mathematical
 functions, is not a proper mathematical problem that can be comprehended in
 an analytical way.  I think that is the part you haven't totally figured out
 yet (if you will excuse the pun).  Total program space, does not represent
 a comprehensible computational concept.  When you try to find a way to work out
 feasible computable examples it is not enough to limit the output string
 space, you HAVE to limit the program space in the same way.  That second
 limitation makes the entire concept of total program space, much too
 weak for our purposes.  You seem to know this at an intuitive operational
 level, but it seems to me that you haven't truly grasped the implications.

 I say that Solomonoff Induction is computational but I have to use a trick
 to justify that remark.  I think the trick may be acceptable, but I am not
 sure.  But the possibility that the concept of all possible programs,
 might be computational doesn't mean that it is a sound mathematical
 concept.  This underlies the reason that I intuitively came to the
 conclusion that Solomonoff Induction was transfinite.  However, I wasn't
 able to prove it because the hypothetical concept of all possible program
 space, is so pretentious that it does not lend itself to mathematical
 analysis.

 I just wanted to point this detail out because your implied view that you
 agreed with me but total program space was mathematically well-defined
 did not make any sense.
 Jim Bromer




 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-19 Thread Jim Bromer
I checked the term "program space" and found a few authors who used it, but
it seems to be an ad-hoc definition that is not widely used.  It seems to be
an amalgamation of the term "sample space" with "the set of all programs" or
something like that.  Of course, the simple comprehension of the idea of
"all possible programs" is different than the pretense that all possible
programs could be comprehended through some kind of strategy of evaluation
of all those programs.  It would be like confusing a domain from mathematics
with a value or a possibly evaluable variable (that can be assigned a value
from the domain).  These type distinctions are necessary for logical
thinking about these things.  The same kind of reasoning goes for Russell's
Paradox.  While I can, (with some thought) comprehend the definition and
understand the paradox, I cannot comprehend the set itself, that is, I
cannot comprehend the evaluation of the set.  Such a thing doesn't make any
sense.  It is odd that the set of all evaluable functions (or all programs)
is an inherent paradox when you try to think of it in terms of an
evaluable function (as if writing a program that produces all possible
programs was feasible).  The oddness is due to the fact that there is
nothing that obviously leads to a paradox, and it is not easy to prove it
is a paradox (because it lacks the required definition). The only reason we
can give for the seeming paradox is that it is wrong to confuse the domain
of a mathematical definition with a value or values from the domain.  While
this barrier can be transcended in some very special cases, it very
obviously cannot be ignored for the general case.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-19 Thread Jim Bromer
I made a remark about confusing a domain with the values that was wrong.  What
I should have said is that you cannot just treat a domain of functions or of
programs as if they were a domain of numbers or values and expect them to
act in ways that are familiar from a study of numbers.



Of course you can use any of the members of a domain of numbers or numerical
variables in evaluation methods, but when you try that with a domain of
functions, programs or algorithms, you have to expect that you may get some
odd results.



I believe that since programs can be represented by strings, the Solomonoff
Induction of programs can be seen to be computable because you can just
iterate through every possible program string.  I believe that the same
thing could be said of all possible Universal Turing Machines.  If these two
statements are true, then I believe that the program is both computable and
will create the situation of Cantor's diagonal argument.  I believe that the
infinite sequences of Cantor's argument can be constructed through an
infinite computable program, and since the program
can also act on the infinite memory that Solomonoff Induction needs,
Cantor's diagonal sequence can also be constructed by a program.  Since
Solomonoff Induction is defined so that it will use every possible program,
this situation cannot be avoided.



Thus, Solomonoff Induction would be both computable and it would produce
uncountable infinities of strings.  Combine this with the problem of ordering
the resulting strings in order to show how the functions might approach
stable limits for each probability: since you cannot a priori determine the
ordering of the programs that you would need for the computation of these
stable limiting probabilities, you would be confronted with the higher-order
infinity of all possible combinations of orderings of the trans-infinite
strings that the program would hypothetically produce.



Therefore, Solomonoff Induction is either incomputable or else it cannot be
proven to be capable of avoiding the production of trans-infinite strings
whose ordering is so confused that they would be totally useless for any
kind of prediction of a string based on a given prefix, as is claimed. The
system is not any kind of ideal but rather *a confused theoretical notion.
*

I might be wrong.  Or I might be right.

Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Jim Bromer
Solomonoff Induction is not well-defined because it is incomputable and/or
absurdly irrelevant.  This is where the communication breaks down.  I
have no idea why you would make a remark like that.  It is interesting that
you are an incremental-progress guy.



On Sat, Jul 17, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,


 Saying that something approximates Solomonoff Induction doesn't have any
 meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space, merit mentioning?


 I'm not sure what you mean here; Solomonoff induction and the full program
 space both seem like well-defined concepts to me.


 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.


 A polynomial-time SAT algorithm would certainly be a major breakthrough for AI and
 computation generally; and if the brain utilizes something like such an
 algorithm, then AGI could almost certainly never get off the ground without
 it.

 However, I'm far from saying there must be a breakthrough coming in this
 area, and I don't have any other areas in mind. I'm more of an
 incremental-progress type guy. :) IMHO, what the field needs to advance is
 for more people to recognize the importance of relational methods (as you
 put it I think, the importance of structure).

 --Abram

   On Sat, Jul 17, 2010 at 10:28 PM, Jim Bromer jimbro...@gmail.comwrote:

   Well I guess I misunderstood what you said.
 But, you did say,
  The question of whether the function would be useful for the sorts of
 things we keep talking about ... well, I think the best argument that I can
 give is that MDL is strongly supported by both theory and practice for many
 *subsets* of the full program space. The concern might be that, so far, it
 is only supported by *theory* for the full program space-- and since
 approximations have very bad error-bound properties, it may never be
 supported in practice. My reply to this would be that it still appears
 useful to approximate Solomonoff induction, since most successful predictors
 can be viewed as approximations to Solomonoff induction. It approximates
 solomonoff induction appears to be a good _explanation_ for the success of
 many systems.

 Saying that something approximates Solomonoff Induction doesn't have any
 meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space, merit mentioning?

 I can see how some of the kinds of things that you have talked about (to
 use my own phrase in order to avoid having to list all the kinds of claims
 that I think have been made about this subject) could be produced from
 finite sets, but I don't understand why you think they are important.

 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.

 Can you give me a simple example and explanation of the kind of thing you
 have in mind, and why you think it is important?

 Jim Bromer


  On Fri, Jul 16, 2010 at 12:40 AM, Abram Demski abramdem...@gmail.comwrote:

 Jim,

 The statements about bounds are mathematically provable... furthermore, I
 was just agreeing with what you said, and pointing out that the statement
 could be proven. So what is your issue? I am confused at your response. Is
 it because I didn't include the proofs in my email?

 --Abram





 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Abram Demski
Jim,

I think you are using a different definition of well-defined :). I am
saying Solomonoff induction is totally well-defined as a mathematical
concept. You are saying it isn't well-defined as a computational entity.
These are both essentially true.

Why you might insist that program-space is not well-defined, on the other
hand, I do not know.

--Abram

On Sun, Jul 18, 2010 at 8:02 AM, Jim Bromer jimbro...@gmail.com wrote:

 Solomonoff Induction is not well-defined because it is incomputable and/or
 absurdly irrelevant.  This is where the communication breaks down.  I
 have no idea why you would make a remark like that.  It is interesting that
 you are an incremental-progress guy.



 On Sat, Jul 17, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,


 Saying that something approximates Solomonoff Induction doesn't have any
 meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space, merit mentioning?


 I'm not sure what you mean here; Solomonoff induction and the full program
 space both seem like well-defined concepts to me.


 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.


 A polynomial-time SAT algorithm would certainly be a major breakthrough for AI and
 computation generally; and if the brain utilizes something like such an
 algorithm, then AGI could almost certainly never get off the ground without
 it.

 However, I'm far from saying there must be a breakthrough coming in this
 area, and I don't have any other areas in mind. I'm more of an
 incremental-progress type guy. :) IMHO, what the field needs to advance is
 for more people to recognize the importance of relational methods (as you
 put it I think, the importance of structure).

 --Abram

   On Sat, Jul 17, 2010 at 10:28 PM, Jim Bromer jimbro...@gmail.comwrote:

   Well I guess I misunderstood what you said.
 But, you did say,
  The question of whether the function would be useful for the sorts of
 things we keep talking about ... well, I think the best argument that I can
 give is that MDL is strongly supported by both theory and practice for many
 *subsets* of the full program space. The concern might be that, so far, it
 is only supported by *theory* for the full program space-- and since
 approximations have very bad error-bound properties, it may never be
 supported in practice. My reply to this would be that it still appears
 useful to approximate Solomonoff induction, since most successful predictors
 can be viewed as approximations to Solomonoff induction. It approximates
 solomonoff induction appears to be a good _explanation_ for the success of
 many systems.

 Saying that something approximates Solomonoff Induction doesn't have
 any meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space, merit mentioning?

 I can see how some of the kinds of things that you have talked about (to
 use my own phrase in order to avoid having to list all the kinds of claims
 that I think have been made about this subject) could be produced from
 finite sets, but I don't understand why you think they are important.

 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.

 Can you give me a simple example and explanation of the kind of thing you
 have in mind, and why you think it is important?

 Jim Bromer


  On Fri, Jul 16, 2010 at 12:40 AM, Abram Demski 
 abramdem...@gmail.comwrote:

 Jim,

 The statements about bounds are mathematically provable... furthermore,
 I was just agreeing with what you said, and pointing out that the statement
 could be proven. So what is your issue? I am confused at your response. Is
 it because I didn't include the proofs in my email?

 --Abram





 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic




-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic




Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Jim Bromer
On Sun, Jul 18, 2010 at 11:09 AM, Abram Demski abramdem...@gmail.comwrote:

 Jim,

 I think you are using a different definition of well-defined :). I am
 saying Solomonoff induction is totally well-defined as a mathematical
 concept. You are saying it isn't well-defined as a computational entity.
 These are both essentially true.

 Why you might insist that program-space is not well-defined, on the other
 hand, I do not know.

 --Abram


I said: does talk about "the full program space" merit mentioning?
Solomonoff Induction is not "totally well-defined" as a mathematical
concept, as you said it was.
In both of these instances you used qualifications of excess: "totally
well-defined" and "full."  It would be like me saying that because your
thesis is wrong in a few ways, your thesis is 'totally wrong in full concept
space' or something like that.
Jim Bromer






 On Sun, Jul 18, 2010 at 8:02 AM, Jim Bromer jimbro...@gmail.com wrote:

 Solomonoff Induction is not well-defined because it is incomputable and/or
 absurdly irrelevant.  This is where the communication breaks down.  I
 have no idea why you would make a remark like that.  It is interesting that
 you are an incremental-progress guy.



 On Sat, Jul 17, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.comwrote:

 Jim,


 Saying that something approximates Solomonoff Induction doesn't have
 any meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space, merit 
 mentioning?


 I'm not sure what you mean here; Solomonoff induction and the full
 program space both seem like well-defined concepts to me.


 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.


  A polynomial-time SAT algorithm would certainly be a major breakthrough for AI and
 computation generally; and if the brain utilizes something like such an
 algorithm, then AGI could almost certainly never get off the ground without
 it.

 However, I'm far from saying there must be a breakthrough coming in this
 area, and I don't have any other areas in mind. I'm more of an
 incremental-progress type guy. :) IMHO, what the field needs to advance is
 for more people to recognize the importance of relational methods (as you
 put it I think, the importance of structure).

 --Abram

   On Sat, Jul 17, 2010 at 10:28 PM, Jim Bromer jimbro...@gmail.comwrote:

   Well I guess I misunderstood what you said.
 But, you did say,
  The question of whether the function would be useful for the sorts
 of things we keep talking about ... well, I think the best argument that I
 can give is that MDL is strongly supported by both theory and practice for
 many *subsets* of the full program space. The concern might be that, so 
 far,
 it is only supported by *theory* for the full program space-- and since
 approximations have very bad error-bound properties, it may never be
 supported in practice. My reply to this would be that it still appears
 useful to approximate Solomonoff induction, since most successful 
 predictors
 can be viewed as approximations to Solomonoff induction. It approximates
 solomonoff induction appears to be a good _explanation_ for the success of
 many systems.

 Saying that something approximates Solomonoff Induction doesn't have
 any meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space, merit 
 mentioning?

 I can see how some of the kinds of things that you have talked about (to
 use my own phrase in order to avoid having to list all the kinds of claims
 that I think have been made about this subject) could be produced from
 finite sets, but I don't understand why you think they are important.

 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.

 Can you give me a simple example and explanation of the kind of thing
 you have in mind, and why you think it is important?

 Jim Bromer


  On Fri, Jul 16, 2010 at 12:40 AM, Abram Demski 
 abramdem...@gmail.comwrote:

 Jim,

 The statements about bounds are mathematically provable... furthermore,
 I was just agreeing with what you said, and pointing out that the 
 statement
 could be proven. So what is your issue? I am confused at your response. Is
 it because I didn't include the proofs in my email?

 --Abram





 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Jim Bromer
Abram,
I was going to drop the discussion, but then I thought I figured out why you
kept trying to paper over the difference.  Of course, our personal
disagreement is trivial; it isn't that important.  But the problem with
Solomonoff Induction is that not only is the output hopelessly tangled and
seriously infinite, but the input is as well.  The definition of all
possible programs, like the definition of all possible mathematical
functions, is not a proper mathematical problem that can be comprehended in
an analytical way.  I think that is the part you haven't totally figured out
yet (if you will excuse the pun).  Total program space, does not represent
a comprehensible computational concept.  When you try to find a way to work out
feasible computable examples it is not enough to limit the output string
space, you HAVE to limit the program space in the same way.  That second
limitation makes the entire concept of total program space, much too
weak for our purposes.  You seem to know this at an intuitive operational
level, but it seems to me that you haven't truly grasped the implications.

I say that Solomonoff Induction is computational but I have to use a trick
to justify that remark.  I think the trick may be acceptable, but I am not
sure.  But the possibility that the concept of all possible programs,
might be computational doesn't mean that it is a sound mathematical
concept.  This underlies the reason that I intuitively came to the
conclusion that Solomonoff Induction was transfinite.  However, I wasn't
able to prove it because the hypothetical concept of all possible program
space, is so pretentious that it does not lend itself to mathematical
analysis.

I just wanted to point this detail out because your implied view that you
agreed with me but total program space was mathematically well-defined
did not make any sense.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Matt Mahoney
Jim Bromer wrote:
 The definition of all possible programs, like the definition of all 
 possible 
mathematical functions, is not a proper mathematical problem that can be 
comprehended in an analytical way.

Finding just the shortest program is close enough because it dominates the 
probability. Or which step in the proof of theorem 1.7.2 
in http://www.vetta.org/documents/disSol.pdf do you disagree with?

You have been saying that you think Solomonoff induction is wrong, but offering 
no argument except your own intuition. So why should we care?

 -- Matt Mahoney, matmaho...@yahoo.com





From: Jim Bromer jimbro...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, July 18, 2010 9:09:36 PM
Subject: Re: [agi] Comments On My Skepticism of Solomonoff Induction


Abram,
I was going to drop the discussion, but then I thought I figured out why you 
kept trying to paper over the difference.  Of course, our personal disagreement 
is trivial; it isn't that important.  But the problem with Solomonoff Induction 
is that not only is the output hopelessly tangled and seriously infinite, but 
the input is as well.  The definition of all possible programs, like the 
definition of all possible mathematical functions, is not a proper 
mathematical problem that can be comprehended in an analytical way.  I think 
that is the part you haven't totally figured out yet (if you will excuse the 
pun).  Total program space, does not represent a comprehensible computational 
concept.  When you try to find a way to work out feasible computable examples it 
is not enough to limit the output string space, you HAVE to limit the program 
space in the same way.  That second limitation makes the entire concept of total 
program space, much too weak for our purposes.  You seem to know this at an 
intuitive operational level, but it seems to me that you haven't truly grasped 
the implications.
 
I say that Solomonoff Induction is computational but I have to use a trick to 
justify that remark.  I think the trick may be acceptable, but I am not sure.  
But the possibility that the concept of all possible programs, might be 
computational doesn't mean that it is a sound mathematical concept.  This 
underlies the reason that I intuitively came to the conclusion that Solomonoff 
Induction was transfinite.  However, I wasn't able to prove it because the 
hypothetical concept of all possible program space, is so pretentious that it 
does not lend itself to mathematical analysis.
 
I just wanted to point this detail out because your implied view that you 
agreed with me but total program space was mathematically well-defined did 
not make any sense.
Jim Bromer


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-18 Thread Abram Demski
Jim,

I'm still not sure what your point even is, which is probably why my
responses seem so strange to you. It still seems to me as if you are jumping
back and forth between different positions, like I said at the start of this
discussion.

You didn't answer why you think program space does not represent a
comprehensible concept. (I will drop the full if it helps...)

My only conclusion can be that you are (at least implicitly) rejecting some
classical mathematical principles and using your own very different notion
of which proofs are valid, which concepts are well-defined, et cetera.

(Or perhaps you just don't have a background in the formal theory of
computation?)

Also, not sure what difference you mean to say I'm papering over.

Perhaps it *is* best that we drop it, since neither one of us is getting
through to the other; but, I am genuinely trying to figure out what you are
saying...

--Abram

On Sun, Jul 18, 2010 at 9:09 PM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
 I was going to drop the discussion, but then I thought I figured out why
 you kept trying to paper over the difference.  Of course, our personal
 disagreement is trivial; it isn't that important.  But the problem with
 Solomonoff Induction is that not only is the output hopelessly tangled and
 seriously infinite, but the input is as well.  The definition of all
 possible programs, like the definition of all possible mathematical
 functions, is not a proper mathematical problem that can be comprehended in
 an analytical way.  I think that is the part you haven't totally figured out
 yet (if you will excuse the pun).  Total program space, does not represent
  a comprehensible computational concept.  When you try to find a way to work out
 feasible computable examples it is not enough to limit the output string
 space, you HAVE to limit the program space in the same way.  That second
 limitation makes the entire concept of total program space, much too
 weak for our purposes.  You seem to know this at an intuitive operational
 level, but it seems to me that you haven't truly grasped the implications.

 I say that Solomonoff Induction is computational but I have to use a trick
 to justify that remark.  I think the trick may be acceptable, but I am not
 sure.  But the possibility that the concept of all possible programs,
  might be computational doesn't mean that it is a sound mathematical
 concept.  This underlies the reason that I intuitively came to the
 conclusion that Solomonoff Induction was transfinite.  However, I wasn't
 able to prove it because the hypothetical concept of all possible program
 space, is so pretentious that it does not lend itself to mathematical
 analysis.

 I just wanted to point this detail out because your implied view that you
 agreed with me but total program space was mathematically well-defined
 did not make any sense.
 Jim Bromer




-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-17 Thread Abram Demski
Jim,

Saying that something approximates Solomonoff Induction doesn't have any
 meaning since we don't know what Solomonoff Induction actually
 represents.  And does talk about the full program space, merit mentioning?


I'm not sure what you mean here; Solomonoff induction and the full program
space both seem like well-defined concepts to me.

I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.


A polynomial-time SAT algorithm would certainly be a major breakthrough for AI and computation
generally; and if the brain utilizes something like such an algorithm, then
AGI could almost certainly never get off the ground without it.

However, I'm far from saying there must be a breakthrough coming in this
area, and I don't have any other areas in mind. I'm more of an
incremental-progress type guy. :) IMHO, what the field needs to advance is
for more people to recognize the importance of relational methods (as you
put it I think, the importance of structure).

--Abram

On Sat, Jul 17, 2010 at 10:28 PM, Jim Bromer jimbro...@gmail.com wrote:

 Well I guess I misunderstood what you said.
 But, you did say,
 The question of whether the function would be useful for the sorts of
 things we keep talking about ... well, I think the best argument that I can
 give is that MDL is strongly supported by both theory and practice for many
 *subsets* of the full program space. The concern might be that, so far, it
 is only supported by *theory* for the full program space-- and since
 approximations have very bad error-bound properties, it may never be
 supported in practice. My reply to this would be that it still appears
 useful to approximate Solomonoff induction, since most successful predictors
 can be viewed as approximations to Solomonoff induction. It approximates
 solomonoff induction appears to be a good _explanation_ for the success of
 many systems.

 Saying that something "approximates Solomonoff Induction" doesn't have any
 meaning, since we don't know what Solomonoff Induction actually
 represents.  And does talk about "the full program space" merit mentioning?

 I can see how some of the kinds of things that you have talked about (to
 use my own phrase in order to avoid having to list all the kinds of claims
 that I think have been made about this subject) could be produced from
 finite sets, but I don't understand why you think they are important.

 I think we both believe that there must be some major breakthrough in
 computational theory waiting to be discovered, but I don't see how
 that could be based on anything other than Boolean Satisfiability.

 Can you give me a simple example and explanation of the kind of thing you
 have in mind, and why you think it is important?

 Jim Bromer







-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Jim Bromer
On Wed, Jul 14, 2010 at 7:46 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 There is a simple proof of convergence for the sum involved in defining the
 probability of a given string in the Solomonoff distribution:

 At its greatest, a particular string would be output by *all* programs. In
 this case, its sum would come to 1. This puts an upper bound on the sum.
 Since there is no subtraction, there is a lower bound at 0 and the sum
 monotonically increases as we take the limit. Knowing these facts, suppose
 it *didn't* converge. It must then increase without bound, since it cannot
 fluctuate back and forth (it can only go up). But this contradicts the upper
 bound of 1. So, the sum must stop at 1 or below (and in fact we can prove it
 stops below 1, though we can't say where precisely without the infinite
 computing power required to compute the limit).

 --Abram


I believe that Solomonoff Induction would be computable given infinite time
and infinite resources (the Godel Theorem fits into this category) but some
people disagree for reasons I do not understand.

If it is not computable then it is not a mathematical theorem and the
question of whether the sum of probabilities equals 1 is pure fantasy.

If it is computable, then the central issue is whether it could (given
infinite time and infinite resources) be used to determine the probability
of a particular string being produced from all possible programs.  The sum
of all the probabilities is certainly an interesting question. However, the
problem of making sure that the function was actually computable would
interfere with this process of determining the probability of each
particular string that can be produced.  For example, since some strings
would be infinite, the computability problem makes it imperative that the
infinite strings be partially computed at each iteration (or else the
function would be hung up at some particular iteration and the infinitely
many other calculations could not be considered computable).
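
What "partially computed at each iteration" usually amounts to is dovetailing:
run every program for a bounded number of steps, then raise the bound and
widen the set of programs, so that no single non-halting or infinite-output
program ever blocks the rest.  A minimal sketch, assuming a hypothetical
interpreter step(p, n) that runs program p for n steps and returns whatever
output it has produced so far (the helper and the program encoding are
illustrative assumptions, not a real API):

from itertools import count

def dovetail(programs, step):
    """Interleave bounded runs of many programs.

    programs: a list of program texts (any encoding).
    step(p, n): hypothetical interpreter that runs p for n steps and
    returns its partial output.  Both are assumptions for illustration.
    """
    for stage in count(1):
        # At stage k, run the first k programs for k steps each, so every
        # program eventually gets arbitrarily much time without any single
        # run being allowed to hang the whole enumeration.
        for p in programs[:stage]:
            yield stage, p, step(p, stage)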

My criticism is this: even though I believe the function may be theoretically
computable, each particular probability (of each particular string that is
produced) cannot be proven to approach a limit through mathematical analysis;
and since the individual probabilities will fluctuate with each new string
that is produced, one would have to know how to reorder the production of the
probabilities in order to demonstrate that the individual probabilities do
approach a limit.  If they don't, then the claim that this function could be
used to define the probabilities of a particular string from all possible
programs is unprovable.  (Some infinite calculations fluctuate infinitely.)
Since you do not have any way to determine how to reorder the infinite
probabilities a priori, your algorithm would have to be able to compute all
possible reorderings to find the ordering and filtering of the computations
that would produce evaluable limits.  Since there are transfinitely many
rearrangements of an infinite list (I am not sure that I am using the term
'transfinite' properly), this shows, through a variation of Cantor's Diagonal
Argument, that the conclusion that the theorem can be used to derive the
desired probabilities is unprovable, and that you can't use Solomonoff
Induction the way you have been talking about using it.

Since you cannot fully compute every string that may be produced at a
certain iteration, you cannot make the claim that you even know the
probabilities of any possible string before infinity and therefore your
claim that the sum of the probabilities can be computed is not provable.

But I could be wrong.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Matt Mahoney
Jim Bromer wrote:
 Since you cannot fully compute every string that may be produced at a certain
iteration, you cannot make the claim that you even know the probabilities of
any possible string before infinity and therefore your claim that the sum of
the probabilities can be computed is not provable.
 
 But I could be wrong.

Could be. Theorem 1.7.2 in http://www.vetta.org/documents/disSol.pdf proves
that finding just the shortest program that outputs x gives you a probability
for x close to the result you would get if you found all of the (infinitely
many) programs that output x. Either number could be used for Solomonoff
induction, because the difference is bounded by a constant that depends only
on the choice of language.
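
For reference, the bound being appealed to here is usually stated along the
following lines (a paraphrase of the standard coding theorem, not a quotation
of Theorem 1.7.2): writing K(x) for the length of the shortest program that
outputs x and m(x) for the algorithmic probability obtained by summing over
all programs that output x,

  -\log_2 m(x) = K(x) + O(1),

where the O(1) constant depends only on the choice of universal machine (the
"language") and not on x.  Equivalently, m(x) and 2^{-K(x)} agree up to a
multiplicative constant, which is why either quantity can serve as the prior.
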
 -- Matt Mahoney, matmaho...@yahoo.com








Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Jim Bromer
I think that Solomonoff Induction includes a computational method that
produces probabilities of some sort, and that whenever those probabilities
were computed (in a way that would make the function computable) they would
sum to 1.  But the issue that I am pointing out is that there is no way to
determine the margin of error in what is being computed, relative to what it
has repeatedly been claimed that the function is capable of computing.  Since
you are not able to rely on something like the theory of limits, you are not
able to determine the degree of error in what is being computed.  And in
fact, there is no way to determine that what the function would compute
would be in any way useful for the sort of things that you guys keep talking
about.

Jim Bromer









Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Abram Demski
Jim,

Yes, this is true & provable: there is no way to compute a correct error
bound such that it converges to 0 as the computation of algorithmic
probability converges to the correct number. More specifically--- we can
approximate the algorithmic probability from below, computing better lower
bounds which converge to the correct number, but we cannot approximate it
from above, as there is no procedure (and can never be any procedure) which
creates closer and closer upper bounds converging to the correct number.

(We can produce upper bounds that get closer and closer w/o getting
arbitrarily near, and we can produce numbers which do approach arbitrarily
near to the correct number in the limit but sometimes dip below for a time;
but we can't have both features.)
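
As a concrete illustration of the one-sided approximation described above,
here is a minimal sketch.  It assumes a hypothetical predicate
runs_to(p, x, n) that answers whether program p outputs the string x within
n steps, and it enumerates candidate programs as raw bit strings; a real
construction would restrict attention to a prefix-free set of programs so
that the weights 2**-len(p) are guaranteed to sum to at most 1.  Both the
helper and the encoding are assumptions made for illustration only:

from itertools import count, product

def binary_programs(max_len):
    """All bit strings of length 1..max_len, as candidate programs."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def lower_bounds(x, runs_to):
    """Yield a nondecreasing sequence of lower bounds on the algorithmic
    probability of x.

    Each stage looks at longer programs and longer runs.  Once a program
    has been seen to produce x it contributes 2**-len(p) forever after, so
    the running total can only increase; that is the approximation from
    below.  Nothing here supplies a matching sequence of upper bounds
    converging to the same limit.
    """
    found = set()
    for stage in count(1):
        for p in binary_programs(stage):
            if p not in found and runs_to(p, x, stage):
                found.add(p)
        yield sum(2.0 ** -len(p) for p in found)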

The question of whether the function would be useful for the sorts of
things we keep talking about ... well, I think the best argument that I can
give is that MDL is strongly supported by both theory and practice for many
*subsets* of the full program space. The concern might be that, so far, it
is only supported by *theory* for the full program space-- and since
approximations have very bad error-bound properties, it may never be
supported in practice. My reply to this would be that it still appears
useful to approximate Solomonoff induction, since most successful predictors
can be viewed as approximations to Solomonoff induction. "It approximates
Solomonoff induction" appears to be a good _explanation_ for the success of
many systems.
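
For what "supported by theory and practice for subsets of the program space"
cashes out to, the usual two-part MDL criterion is (a sketch, with H ranging
over some restricted hypothesis class rather than over all programs):

  H^{*} = \arg\min_{H \in \mathcal{H}} \left[ L(H) + L(D \mid H) \right]

where L(H) is the number of bits needed to encode the hypothesis and
L(D | H) the number of bits needed to encode the data given the hypothesis.
Roughly speaking, the Solomonoff-style picture is the idealized, unrestricted
version of this, with the hypothesis class replaced by all programs and L(H)
by program length.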

What sort of alternatives do you have in mind, by the way?

--Abram


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Jim Bromer
We all make conjectures all of the time, but we often don't have any way to
establish credibility for the claims that are made.  So I wanted to examine
one part of this field, and the idea that seemed most natural for me was
Solomonoff Induction.  I have reached a conclusion about the subject, and
that conclusion is that all of the claims that I have seen made about
Solomonoff Induction are rationally unfounded, including the one that you
just made when you said: "We can produce upper bounds that get closer and
closer w/o getting arbitrarily near, and we can produce numbers which do
approach arbitrarily near to the correct number in the limit but sometimes
dip below for a time; but we can't have both features."

Your inability to fully recognize, or perhaps acknowledge, that you cannot
use Solomonoff Induction to make this claim is difficult for me to
comprehend.

While the fields of compression and probability have an impressive body of
evidence supporting them, I simply have no reason to think the kind of
claims that have been made about Solomonoff Induction have any merit.  By
natural induction I feel comfortable drawing the conclusion that this whole
area related to algorithmic information theory is based on shallow methods
of reasoning.  It can be useful, as it was for me, just as a means of
exploring ideas that I would not have otherwise explored.  But its
usefulness comes in learning how to determine its lack of merit.

I will write one more thing about my feelings about computability, but I
will start a new thread and just mention the relation to this thread.

Jim Bromer


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-15 Thread Abram Demski
Jim,

The statements about bounds are mathematically provable... furthermore, I
was just agreeing with what you said, and pointing out that the statement
could be proven. So what is your issue? I am confused at your response. Is
it because I didn't include the proofs in my email?

--Abram


[agi] Comments On My Skepticism of Solomonoff Induction

2010-07-14 Thread Jim Bromer
Last week I came up with a sketch that I felt showed that Solomonoff
Induction was incomputable *in practice* using a variation of Cantor's
Diagonal Argument.  I wondered if my argument made sense or not.  I will
explain why I think it did.



First of all, I should have started out by saying something like, "Suppose
Solomonoff Induction were computable," since there is some reason why people
feel that it isn't.



Secondly, I don't think I needed to use Cantor's Diagonal Argument (for the
*in practice* case), because it would be sufficient to point out that, since
it was impossible to say whether or not the probabilities ever approached any
sustained (collared) limits, due to the lack of an adequate mathematical
definition of the concept "all programs," it would be impossible to make the
claim that they were actual representations of the probabilities of all
programs that could produce certain strings.



But before I start to explain why I think my variation of the Diagonal
Argument was valid, I would like to make another comment about what was
being claimed.



Take a look at the n-ary expansion of the square root of 2 (such as the
decimal expansion or the binary expansion).  The decimal or binary expansion
of the square root of 2 is an infinite string.  To say that the algorithm
that produces the value is "predicting" the value is a tortured use of the
meaning of the word 'prediction'.  Now I have less than perfect grammar, but
the idea of prediction is so important in the field of intelligence that I do
not feel that this kind of reduction of the concept of prediction is
illuminating.



Incidentally, there are infinitely many ways to produce the square root of 2
(sqrt 2 +1-1, sqrt 2 +2-2, sqrt 2 +3-3, ...).  So the idea that the square
root of 2 is "unlikely" is another stretch of conventional thinking.  But
since there are infinitely many ways for a program to produce any number
(that can be produced by a program), we would imagine that the probability of
any one of the infinitely many ways to produce the square root of 2
approaches 0 but never reaches it.  We can imagine it, but we cannot prove
that this occurs in Solomonoff Induction, because Solomonoff Induction is not
limited to just this class of programs (which could be proven to approach a
limit).  For example, we could make a similar argument for any number,
including a 0 or a 1, which would mean that the infinite string of digits for
the square root of 2 is just as likely as the string 0.



But the reason why I think a variation of the diagonal argument can work in
my argument is this: since we cannot prove that the infinite computations of
the probabilities that a program will produce a string will ever approach a
limit, to use the probabilities reliably (even as an infinite theoretical
method) we would have to find some way to rearrange the computations of the
probabilities so that they could.  While the number of ways to rearrange the
ordering of a finite number of things is finite no matter how large the
number is, the number of possible ways to rearrange an infinite number of
things is infinite.  I believe that this problem, of finding the right
rearrangement of an infinite list of computations of values after the
calculation of the list is finished, qualifies for an infinite-to-infinite
diagonal argument.


I want to add one more thing to this in a few days.

Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-14 Thread Matt Mahoney
Jim Bromer wrote:
 Last week I came up with a sketch that I felt showed that Solomonoff
Induction was incomputable in practice using a variation of Cantor's Diagonal
Argument.

Cantor proved that there are more sequences (infinite length strings) than
there are (finite length) strings, even though both sets are infinite. This
means that some, but not all, sequences have finite length descriptions or
are the output of finite length programs (which is the same thing in a more
formal sense). For example, the digits of pi or sqrt(2) are infinite length
sequences that have finite descriptions (or finite programs that output
them). There are many more sequences that don't have finite length
descriptions, but unfortunately I can't describe any of them except to say
they contain infinite amounts of random data.
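
In symbols, the counting argument being referred to: finite programs over a
finite alphabet form a countable set, while infinite binary sequences form an
uncountable one,

  |\{0,1\}^{*}| = \aleph_0 < 2^{\aleph_0} = |\{0,1\}^{\infty}|,

so only countably many sequences can have finite descriptions, and "almost
all" sequences have none.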

Cantor does not prove that Solomonoff induction is not computable. That was
proved by Kolmogorov (and also by Solomonoff). Solomonoff induction says to
use the shortest program that outputs the observed sequence to predict the
next symbol. However, there is no procedure for finding the length of the
shortest description. The proof sketch is that if there were, then I could
describe "the first string that cannot be described in less than a million
bits" even though I just did. The formal proof is at
http://en.wikipedia.org/wiki/Kolmogorov_complexity#Incomputability_of_Kolmogorov_complexity
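
A sketch of the self-reference behind that proof, phrased as code.  K here is
a hypothetical, assumed-computable complexity function passed in as a
parameter; no such computable function exists, which is exactly what the
contradiction shows.  The names and the threshold are illustrative only:

from itertools import count, product

def binary_strings():
    """Enumerate all finite binary strings in length-then-lexicographic order."""
    for n in count(0):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def first_hard_string(K, threshold=10**6):
    """Return the first string s with K(s) > threshold, assuming K is computable.

    This function (plus the threshold) is itself a short description of the
    string it returns, far shorter than a million bits, so the returned
    string's true complexity could not actually exceed the threshold.
    That contradiction is the incomputability argument.
    """
    for s in binary_strings():
        if K(s) > threshold:
            return s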


I think your confusion comes from using the uncomputability of Solomonoff
induction to question its applicability. That is an experimental question,
not one of mathematics. The validity of using the shortest or simplest
explanation of the past to predict the future was first observed by William
of Ockham in the 14th century. It is standard practice in all fields of
science. The minimum description length principle is applicable to all
branches of machine learning.

However, in the conclusion 
of http://mattmahoney.net/dc/dce.html#Section_Conclusion I argue for Solomonoff 
induction on the basis of physics. Solomonoff induction supposes that all 
observable strings are finite prefixes of computable sequences. Occam's Razor 
might not hold if it were possible for the universe to produce uncomputable 
sequences, i.e. infinite sources of random data. I argue that is not possible 
because the observable universe is finitely computable according to the laws of 
physics as they are now understood.

 -- Matt Mahoney, matmaho...@yahoo.com






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-14 Thread Abram Demski
Jim,

There is a simple proof of convergence for the sum involved in defining the
probability of a given string in the Solomonoff distribution:

At its greatest, a particular string would be output by *all* programs. In
this case, its sum would come to 1. This puts an upper bound on the sum.
Since there is no subtraction, there is a lower bound at 0 and the sum
monotonically increases as we take the limit. Knowing these facts, suppose
it *didn't* converge. It must then increase without bound, since it cannot
fluctuate back and forth (it can only go up). But this contradicts the upper
bound of 1. So, the sum must stop at 1 or below (and in fact we can prove it
stops below 1, though we can't say where precisely without the infinite
computing power required to compute the limit).

--Abram





-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic


