I don't think Solomonoff induction is a particularly useful direction for
AI; I was just taking issue with the claim that it is not capable of
correct prediction given adequate resources...
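
For concreteness, here is the result I'm relying on, stated roughly (see
the careful version in the Legg paper Ben links below). Define the
Solomonoff prior over finite binary strings x by

    M(x) = sum over { programs p : U(p) = x* } of 2^(-|p|),

where U is a universal machine and U(p) = x* means that p's output begins
with x. Then for any computable measure mu generating the data,

    sum_{t >= 1} E_mu[ (M(x_t = 1 | x_<t) - mu(x_t = 1 | x_<t))^2 ]
        <= (ln 2 / 2) * K(mu),

where K(mu) is the length of the shortest program computing mu. The
right-hand side is a finite constant, so the squared prediction errors are
summable and M's predictions converge to mu's. That is the sense in which
it is "capable of correct prediction given adequate resources"; the catch,
as everyone agrees, is that M itself is incomputable.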

On Fri, Jul 9, 2010 at 11:35 AM, David Jones <davidher...@gmail.com> wrote:

> Although I haven't studied Solomonoff induction yet (I plan to read up on
> it), I've realized that people seem to be making the same mistake I was.
> People are trying to find one silver-bullet method of induction or
> learning that works for everything. I've begun to realize that it's OK if
> something doesn't work for everything, as long as it works on a large
> enough subset of problems to be useful. If you can construct justifiable
> methods of induction for enough of the problems you need to solve, that is
> sufficient for AGI.
>
> This is the same mistake I made, and it was the point I was trying to make
> in my recent email. I kept coming up with algorithms, and I could always
> find a test case that broke them. So now I've begun to realize that it's
> OK if an algorithm breaks sometimes! The question is whether you can
> define an algorithm that breaks gracefully and that can figure out which
> problems it can be applied to and which it should not be applied to. If
> you can do that, then you can solve the problems where it is applicable
> and avoid the problems where it is not.
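>
> To sketch what I mean, here is a toy illustration (a sketch only, with
> made-up method names, not any particular system): each method carries its
> own applicability test and declines problems outside its scope instead of
> guessing.
>
>     # Toy sketch: induction methods that know their own scope and
>     # fail gracefully (decline) outside it.
>
>     class Method:
>         def applicable(self, problem) -> bool:
>             raise NotImplementedError
>
>         def predict(self, problem):
>             raise NotImplementedError
>
>     class LinearTrend(Method):
>         """Extrapolates numeric sequences; declines everything else."""
>         def applicable(self, problem):
>             data = problem.get("data", [])
>             return (len(data) >= 2
>                     and all(isinstance(v, (int, float)) for v in data))
>
>         def predict(self, problem):
>             data = problem["data"]
>             return data[-1] + (data[-1] - data[-2])  # repeat last step
>
>     def solve(problem, methods):
>         for m in methods:
>             if m.applicable(problem):
>                 return m.predict(problem)
>         return None  # graceful failure: no method claims this problem
>
>     print(solve({"data": [1, 2, 3, 4]}, [LinearTrend()]))  # -> 5
>     print(solve({"data": ["a", "b"]}, [LinearTrend()]))    # -> None
>
> The particular methods don't matter; the point is that the dispatch layer
> makes "where does this apply?" an explicit, testable question.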
>
> This is perfectly OK! You don't have to find a silver-bullet method of
> induction or inference that works for everything!
>
> Dave
>
>
>
> On Fri, Jul 9, 2010 at 10:49 AM, Ben Goertzel <b...@goertzel.org> wrote:
>
>>
>> To make this discussion more concrete, please look at
>>
>> http://www.vetta.org/documents/disSol.pdf
>>
>> Section 2.5 gives a simple version of the proof that Solomonoff induction
>> is a powerful learning algorithm in principle, and Section 2.6 explains why
>> it is not practically useful.
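>>
>> In a nutshell, the obstruction in 2.6 is that the prior can only be
>> approximated from below. As a gloss (not the paper's exact notation): let
>>
>>     M_s(x) = sum of 2^(-|p|) over programs p with |p| <= s that, run
>>              for at most s steps, produce output beginning with x.
>>
>> Each M_s(x) is computable, M_s(x) is nondecreasing in s, and M_s(x)
>> converges to M(x) as s -> infinity. But deciding which programs will
>> eventually contribute is a halting-type problem, so there is no
>> computable bound on how large s must be to get within epsilon of M(x):
>> the limit exists, you just can't computably tell how close you are.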
>>
>> What part of that paper do you think is wrong?
>>
>> thx
>> ben
>>
>>
>>
>> On Fri, Jul 9, 2010 at 9:54 AM, Jim Bromer <jimbro...@gmail.com> wrote:
>>
>>>  On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel <b...@goertzel.org> wrote:
>>>
>>> If you're going to argue against a mathematical theorem, your argument
>>> must be mathematical not verbal.  Please explain one of
>>>
>>> 1) which step in the proof about Solomonoff induction's effectiveness you
>>> believe is in error
>>>
>>> 2) which of the assumptions of this proof you think is inapplicable to
>>> real intelligence [apart from the assumption of infinite or massive compute
>>> resources]
>>>  --------------------------------
>>>
>>> Solomonoff induction is not a provable theorem; it is therefore a
>>> conjecture. It cannot be computed, and it cannot be verified. There are
>>> many mathematical theorems that require the use of limits to "prove"
>>> them, for example, and I accept those proofs. (Some people might not.)
>>> But there is no evidence that Solomonoff induction would tend toward
>>> some limits. Maybe the conjectured abstraction can be verified through
>>> some other means, but I have yet to see an adequate explanation of that
>>> in any terms. The idea that I have to answer your challenges using only
>>> the terms you specify is noise.
>>>
>>> Look at point 2. What does that say about your "theorem"?
>>>
>>> I am working on point 1, but as I just said: "I haven't yet been able
>>> to find a way that could be used to prove that Solomonoff induction does
>>> not do what Matt claims it does."
>>> Notably, no one has objected to my characterization of the conjecture
>>> as I have worked it out for myself: it requires an infinite set of
>>> infinitely computed probabilities for each infinite "string". If this
>>> characterization is correct, then Matt has been using the term "string"
>>> ambiguously: as a primary sample space (a particular string) and as a
>>> compound sample space (all the possible individual cases of the
>>> substring compounded into one). And no one has yet told of his
>>> "mathematical" experiments of using a Turing simulator to see what a
>>> finite iteration of all possible programs of a given length would
>>> actually look like.
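>>>
>>> To be concrete about the kind of finite experiment I mean, here is a toy
>>> sketch (a made-up two-bit-opcode machine, not a real universal machine,
>>> with a crude step cap standing in for solving the halting problem):
>>>
>>>     from itertools import product
>>>
>>>     def run(program, max_steps, max_out):
>>>         """Toy VM, 2-bit opcodes: 00 emit 0, 01 emit 1,
>>>         10 jump to start, 11 halt."""
>>>         out, pc, steps = [], 0, 0
>>>         while (pc + 1 < len(program) and steps < max_steps
>>>                and len(out) < max_out):
>>>             op = (program[pc], program[pc + 1])
>>>             if op == (0, 0):
>>>                 out.append(0); pc += 2
>>>             elif op == (0, 1):
>>>                 out.append(1); pc += 2
>>>             elif op == (1, 0):
>>>                 pc = 0   # loops forever -- why the step cap exists
>>>             else:
>>>                 break    # halt
>>>             steps += 1
>>>         return out
>>>
>>>     L, target = 8, [1, 0]
>>>     mass = 0.0
>>>     for bits in product([0, 1], repeat=L):
>>>         out = run(list(bits), max_steps=100, max_out=len(target))
>>>         if out[: len(target)] == target:
>>>             # every program has the same length here, so this is only
>>>             # a crude finite analogue of the 2^-|p| weighting
>>>             mass += 2.0 ** -L
>>>     print(L, target, mass)
>>>
>>> Even a toy run like this makes the two sample spaces visible: each
>>> program is a single outcome, while the accumulated "mass" lumps together
>>> every program whose output extends the same prefix.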
>>>
>>> I will finish this later.
>>>
>>>
>>>>
>>>>
>>>> On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer <jimbro...@gmail.com> wrote:
>>>>
>>>>> Abram,
>>>>> Solomonoff induction would produce poor "predictions" if it could be
>>>>> used to compute them.
>>>>>
>>>>
>>>> Solomonoff induction is a mathematical, not verbal, construct.  Based on
>>>> the most obvious mapping from the verbal terms you've used above into
>>>> mathematical definitions in terms of which Solomonoff induction is
>>>> constructed, the above statement of yours is FALSE.
>>>>
>>>> If you're going to argue against a mathematical theorem, your argument
>>>> must be mathematical not verbal.  Please explain one of
>>>>
>>>> 1) which step in the proof about Solomonoff induction's effectiveness
>>>> you believe is in error
>>>>
>>>> 2) which of the assumptions of this proof you think is inapplicable to
>>>> real intelligence [apart from the assumption of infinite or massive compute
>>>> resources]
>>>>
>>>> Otherwise, your statement is in the same category as the statement by
>>>> the protagonist of Dostoevsky's "Notes from the Underground":
>>>>
>>>> "I admit that two times two makes four is an excellent thing, but if we
>>>> are to give everything its due, two times two makes five is sometimes a
>>>> very charming thing too."
>>>>
>>>> ;-)
>>>>
>>>>
>>>>
>>>>> Second, since it cannot be computed, it is useless. Third, it is not
>>>>> the sort of thing that is useful for AGI in the first place.
>>>>>
>>>>
>>>> I agree with these two statements.
>>>>
>>>> -- ben G
>>>>
>>>>
>>>
>>>
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> CEO, Novamente LLC and Biomind LLC
>> CTO, Genescient Corp
>> Vice Chairman, Humanity+
>> Advisor, Singularity University and Singularity Institute
>> External Research Professor, Xiamen University, China
>> b...@goertzel.org
>>
>> "
>> “When nothing seems to help, I go look at a stonecutter hammering away at
>> his rock, perhaps a hundred times without as much as a crack showing in it.
>> Yet at the hundred and first blow it will split in two, and I know it was
>> not that blow that did it, but all that had gone before.”
>>
>>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

"
“When nothing seems to help, I go look at a stonecutter hammering away at
his rock, perhaps a hundred times without as much as a crack showing in it.
Yet at the hundred and first blow it will split in two, and I know it was
not that blow that did it, but all that had gone before.”


