Jim,

The statements about bounds are mathematically provable... furthermore, I
was just agreeing with what you said, and pointing out that the statement
could be proven. So what is your issue? I am confused by your response. Is
it because I didn't include the proofs in my email?
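
If it helps, here is a toy sketch of the from-below approximation the bounds
talk is about. Everything in it is invented for illustration: toy_run is a
made-up stand-in for a universal machine (the real quantity is uncomputable),
but the shape of the monotone lower-bound approximation comes through:

```python
from itertools import product

# Toy stand-in for a universal machine: a "program" is a bit string that
# "halts" once the step budget covers its length and "outputs" its bits
# with the leading bit dropped. Purely illustrative; a real universal
# machine is nothing like this.
def toy_run(program, budget):
    if budget < len(program):
        return None            # not yet halted within this step budget
    return program[1:]         # toy "output"

def lower_bound(target, stage):
    """Stage-n lower bound: enumerate all programs of length <= stage,
    run each for `stage` steps, and add 2^-|p| for every program that
    halts with `target` as its output."""
    total = 0.0
    for n in range(1, stage + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if toy_run(p, stage) == target:
                total += 2.0 ** -len(p)
    return total

estimates = [lower_bound("01", s) for s in range(1, 6)]
# Mass is only ever added, never removed, so the estimates never decrease.
assert all(a <= b for a, b in zip(estimates, estimates[1:]))
```

Each stage only adds weight for newly discovered halting programs, which is
exactly why the approximation works from below but yields no computable error
bound from above.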

--Abram

On Thu, Jul 15, 2010 at 7:30 PM, Jim Bromer <[email protected]> wrote:

> We all make conjectures all of the time, but we often don't have any
> way to establish credibility for the claims that are made.  So I wanted
> to examine one part of this field, and the idea that seemed most natural for
> me was Solomonoff Induction.  I have reached a conclusion about the subject
> and that conclusion is that all of the claims that I have seen made about
> Solomonoff Induction are rationally unfounded including the one that you
> just made when you said: "We can produce upper bounds that get closer and
> closer w/o getting arbitrarily near, and we can produce numbers which do
> approach arbitrarily near to the correct number in the limit but sometimes
> dip below for a time; but we can't have both features."
>
> Your inability to fully recognize or perhaps acknowledge that you cannot
> use Solomonoff Induction to make this claim is difficult for me to
> comprehend.
>
> While the fields of compression and probability have an impressive body of
> evidence supporting them, I simply have no reason to think the kind of
> claims that have been made about Solomonoff Induction have any merit.  By
> natural induction I feel comfortable drawing the conclusion that this whole
> area related to algorithmic information theory is based on shallow methods
> of reasoning.  It can be useful, as it was for me, just as a means of
> exploring ideas that I would not have otherwise explored.  But its
> usefulness comes in learning how to determine its lack of merit.
>
> I will write one more thing about my feelings about computability, but I
> will start a new thread and just mention the relation to this thread.
>
> Jim Bromer
>
> On Thu, Jul 15, 2010 at 2:45 PM, Abram Demski <[email protected]> wrote:
>
>> Jim,
>>
>> Yes, this is true & provable: there is no way to compute a correct error
>> bound such that it converges to 0 as the computation of algorithmic
>> probability converges to the correct number. More specifically--- we can
>> approximate the algorithmic probability from below, computing better lower
>> bounds which converge to the correct number, but we cannot approximate it
>> from above, as there is no procedure (and can never be any procedure) which
>> creates closer and closer upper bounds converging to the correct number.
>>
>> (We can produce upper bounds that get closer and closer w/o getting
>> arbitrarily near, and we can produce numbers which do approach arbitrarily
>> near to the correct number in the limit but sometimes dip below for a time;
>> but we can't have both features.)
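
A numeric caricature of those two features, with an invented "true" value
standing in for the uncomputable quantity: one sequence of monotone lower
bounds that never overshoots, and one sequence that converges but dips below
the target along the way:

```python
true_value = 0.7   # invented stand-in for the uncomputable target quantity

# Feature 1: monotone lower bounds. Mass is only ever added, so the
# sequence converges to the target from below and never overshoots.
lower_bounds, acc = [], 0.0
for k in range(1, 30):
    acc += true_value * 2.0 ** -k    # geometric series summing to true_value
    lower_bounds.append(acc)

# Feature 2: estimates that do converge to the target in the limit,
# but oscillate and sometimes dip below it for a time.
oscillating = [true_value + (-1) ** k * 0.3 / k for k in range(1, 30)]

assert all(b <= true_value for b in lower_bounds)                    # from below
assert all(a <= b for a, b in zip(lower_bounds, lower_bounds[1:]))   # monotone
assert any(x < true_value for x in oscillating)                      # dips below
assert any(x > true_value for x in oscillating)                      # and above
```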
>>
>> The question of whether the function would be useful for the "sorts of
>> things we keep talking about" ... well, I think the best argument that I can
>> give is that MDL is strongly supported by both theory and practice for many
>> *subsets* of the full program space. The concern might be that, so far, it
>> is only supported by *theory* for the full program space-- and since
>> approximations have very bad error-bound properties, it may never be
>> supported in practice. My reply to this would be that it still appears
>> useful to approximate Solomonoff induction, since most successful predictors
>> can be viewed as approximations to Solomonoff induction. "It approximates
>> solomonoff induction" appears to be a good _explanation_ for the success of
>> many systems.
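
As a heavily simplified instance of MDL on a restricted model class (every
number below is invented, and the Gaussian residual coding is just one common
choice), the principle is: pick whichever hypothesis minimizes model bits plus
data-given-model bits:

```python
import math

def total_bits(model_bits, residuals, sigma=1.0):
    """Two-part code length: bits to state the model, plus the data's
    negative log-likelihood (in bits) under Gaussian noise of width sigma."""
    data_bits = sum(0.5 * math.log2(2 * math.pi * sigma ** 2)
                    + (r * r) / (2 * sigma ** 2 * math.log(2))
                    for r in residuals)
    return model_bits + data_bits

# Hypothesis A: cheap model, noisy fit. Hypothesis B: expensive model,
# near-perfect fit on these few points. (All values invented.)
cost_a = total_bits(model_bits=10, residuals=[0.9, -1.1, 1.0, -0.8])
cost_b = total_bits(model_bits=200, residuals=[0.01, -0.02, 0.01, 0.0])
best = "A" if cost_a < cost_b else "B"   # MDL prefers the cheaper total code
```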
>>
>> What sort of alternatives do you have in mind, by the way?
>>
>> --Abram
>>
>> On Thu, Jul 15, 2010 at 11:50 AM, Jim Bromer <[email protected]> wrote:
>>
>>>   I think that Solomonoff Induction includes a computational method that
>>> produces probabilities of some sort and whenever those probabilities were
>>> computed (in a way that would make the function computable) they would sum
>>> up to 1.  But the issue that I am pointing out is that there is no way to
>>> determine the margin of error in what is being computed, relative to what
>>> the function has repeatedly been claimed to be capable of computing.  Since
>>> you are not able to rely on something like the theory of limits, you are not
>>> able to determine the degree of error in what is being computed.  And in
>>> fact, there is no way to determine that what the function would compute
>>> would be in any way useful for the sort of things that you guys keep talking
>>> about.
>>>
>>> Jim Bromer
>>>
>>>
>>>
>>> On Thu, Jul 15, 2010 at 8:18 AM, Jim Bromer <[email protected]> wrote:
>>>
>>>> On Wed, Jul 14, 2010 at 7:46 PM, Abram Demski <[email protected]> wrote:
>>>>
>>>>> Jim,
>>>>>
>>>>> There is a simple proof of convergence for the sum involved in defining
>>>>> the probability of a given string in the Solomonoff distribution:
>>>>>
>>>>> At its greatest, a particular string would be output by *all* programs.
>>>>> In this case, its sum would come to 1. This puts an upper bound on the 
>>>>> sum.
>>>>> Since there is no subtraction, there is a lower bound at 0 and the sum
>>>>> monotonically increases as we take the limit. Knowing these facts, suppose
>>>>> it *didn't* converge. It must then increase without bound, since it cannot
>>>>> fluctuate back and forth (it can only go up). But this contradicts the 
>>>>> upper
>>>>> bound of 1. So the sum must converge to a limit at or below 1, since a
>>>>> monotone sequence that is bounded above always converges (and in fact we
>>>>> can prove the limit is below 1, though we can't say where precisely
>>>>> without the infinite computing power required to compute the limit).
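
The bounding step can be sanity-checked numerically via Kraft's inequality:
for any prefix-free set of programs the weights 2^-|p| sum to at most 1, and
the partial sums only ever increase. A minimal sketch (the program set is
invented):

```python
def is_prefix_free(codes):
    """True if no code in the set is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

programs = ["0", "10", "110", "1110", "11110"]   # a small prefix-free set
assert is_prefix_free(programs)

partial_sums, acc = [], 0.0
for p in programs:
    acc += 2.0 ** -len(p)      # each program contributes weight 2^-|p|
    partial_sums.append(acc)

assert all(a <= b for a, b in zip(partial_sums, partial_sums[1:]))  # monotone
assert partial_sums[-1] <= 1.0   # Kraft: total weight bounded by 1
```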
>>>>>
>>>>> --Abram
>>>>
>>>>
>>>> I believe that Solomonoff Induction would be computable given infinite
>>>> time and infinite resources (the Gödel Theorem fits into this category) but
>>>> some people disagree for reasons I do not understand.
>>>>
>>>> If it is not computable then it is not a mathematical theorem and the
>>>> question of whether the sum of probabilities equals 1 is pure fantasy.
>>>>
>>>> If it is computable then the central issue is whether it could (given
>>>> infinite time and infinite resources) be used to determine the probability
>>>> of a particular string being produced from all possible programs.  The
>>>> question about the sum of all the probabilities is certainly an interesting
>>>> question. However, the problem of making sure that the function was 
>>>> actually
>>>> computable would interfere with this process of determining the probability
>>>> of each particular string that can be produced.  For example, since some
>>>> strings would be infinite, the computability problem makes it imperative
>>>> that the infinite strings be only partially computed at each iteration (or
>>>> else the function would hang at some particular iteration and the
>>>> infinitely many other calculations could not be considered computable).
>>>>
>>>> My criticism is that even though I believe the function may be
>>>> theoretically computable, each particular probability (of each
>>>> particular string that is produced) cannot be proven to approach a limit
>>>> through mathematical analysis; and since the individual probabilities will
>>>> fluctuate with each new string that is produced, one would have to know how
>>>> to reorder the production of the probabilities in order to demonstrate that
>>>> the individual probabilities do approach a limit.  If they don't, then the
>>>> claim that this function could be used to define the probabilities of a
>>>> particular string from all possible programs is unprovable.  (Some
>>>> infinite calculations fluctuate infinitely.)  Since you do not have any way
>>>> to determine how to reorder the infinite probabilities a priori, your
>>>> algorithm would have to be able to compute all possible reorderings to find
>>>> the ordering and filtering of the computations that would produce evaluable
>>>> limits.  Since there are trans infinite rearrangements of an infinite list
>>>> (I am not sure that I am using the term 'trans infinite' properly) this
>>>> shows that the conclusion that the theorem can be used to derive the 
>>>> desired
>>>> probabilities is unprovable through a variation of Cantor's Diagonal
>>>> Argument, and that you can't use Solomonoff Induction the way you have been
>>>> talking about using it.
>>>>
>>>> Since you cannot fully compute every string that may be produced at a
>>>> certain iteration, you cannot make the claim that you even know the
>>>> probabilities of any possible string before infinity and therefore your
>>>> claim that the sum of the probabilities can be computed is not provable.
>>>>
>>>> But I could be wrong.
>>>> Jim Bromer
>>>>
>>>
>>>
>>
>>
>>
>> --
>> Abram Demski
>> http://lo-tho.blogspot.com/
>> http://groups.google.com/group/one-logic
>>
>
>



-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
