Mark,

"...and that the (actually explicit) assumption underlying the whole
scientific method is that the same causes produces the same results.
Comments?"

It seems like a somewhat weaker assumption *could* work; namely, "the
same causes produce the same probability distribution over effects".
This weakening accommodates physically random events. (Although its
meaning is arguable: one interpretation of probability says probability
= frequency, so all the assumption is claiming is that a limiting
frequency exists. That seems really weak. Another interpretation says
that probabilities are real physical properties that are merely
*discovered* by counting frequencies of physical events. That's
somewhat stronger, but a little strange-sounding...)
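
As a toy illustration of the frequency reading (just a sketch I made
up; the particular "cause", its three outcomes, and the 0.5/0.3/0.2
bias are all arbitrary), repeating the identical stochastic cause many
times recovers a stable distribution over effects, even though the
individual outcomes differ:

    import random
    from collections import Counter

    def same_cause():
        """One run of a fixed stochastic 'cause': a biased three-outcome event."""
        r = random.random()
        if r < 0.5:
            return "A"
        if r < 0.8:
            return "B"
        return "C"

    # Repeat the identical cause and tabulate empirical frequencies.  Under
    # the weaker assumption these converge to a fixed distribution
    # (0.5 / 0.3 / 0.2), even though no single outcome is determined.
    counts = Counter(same_cause() for _ in range(100000))
    for outcome in sorted(counts):
        print(outcome, round(counts[outcome] / 100000, 3))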

I believe the AIXI answer is that AIXI applies to *either* computable
universes *or* universes with computable probability distributions.
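
(To make that concrete: the construction I have in mind is Hutter's
universal mixture, written from memory, so take the details as a
sketch rather than gospel.  AIXI predicts with a weighted sum over a
class M of enumerable environments nu,

    \xi(x_{1:n}) = \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)} \, \nu(x_{1:n})

where K(nu) is roughly the length of the shortest program computing
nu.  So a universe whose law is merely a computable *probability
distribution*, rather than a deterministic computable rule, is still
one of the environments being mixed over.)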

--Abram

On Sat, Oct 25, 2008 at 11:02 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>>> -- truly general AI, even assuming the universe is computable, is
>>> impossible for any finite system
> Excellent.  Unfortunately, I personally missed (or have forgotten) how AIXI
> shows or proves this (as opposed to invoking some other form of
> incompleteness), unless it is merely because the universe itself is assumed
> to be infinite (which I do understand, but which then makes the argument
> rather pedestrian and less interesting).
>
>>> The computability of the universe is something that can't really be
>>> proved, but I argue that it's an implicit assumption underlying the whole
>>> scientific method.
>
> It seems to me (and I certainly can be wrong about this) that computability
> is frequently improperly conflated with consistency (though maybe you want
> to argue that such a conflation isn't improper) and that the (actually
> explicit) assumption underlying the whole scientific method is that the same
> causes produce the same results.  Comments?
>
>
> ----- Original Message -----
> From: Ben Goertzel
> To: agi@v2.listbox.com
> Sent: Saturday, October 25, 2008 7:48 PM
> Subject: **SPAM** Re: AIXI (was Re: [agi] If your AGI can't learn to play
> chess it is no AGI)
>
> AIXI shows a couple interesting things...
>
> -- truly general AI, even assuming the universe is computable, is impossible
> for any finite system
>
> -- given any finite level L of general intelligence that one desires, there
> are some finite R, M such that you can create a computer with less than R
> processing speed and less than M memory capacity that achieves level L of
> general intelligence
>
> This doesn't tell you *anything* about how to make AGI in practice.  It does
> tell you that, in principle, creating AGI is a matter of *computational
> efficiency* ... assuming the universe is computable.
>
> The computability of the universe is something that can't really be proved,
> but I argue that it's an implicit assumption underlying the whole scientific
> method.  If the universe can't be usefully modeled as computable, then the
> whole methodology of gathering finite datasets of finite-precision data is
> fundamentally limited in what it can tell us about the universe ... which
> would really suck...
>
> -- Ben G
>
> On Sat, Oct 25, 2008 at 7:21 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>>
>> --- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote:
>>
>> > Ummm.  It seems like you were/are saying then that because AIXI makes an
>> > assumption limiting its own applicability/proof (that it requires that
>> > the environment be computable) and because AIXI can make some valid
>> > conclusions, that that "suggests" that AIXI's limiting assumptions are
>> > true of the universe.  That simply doesn't work, dude, unless you have
>> > a very loose inductive-type definition of "suggests" that is more suited
>> > for inference control than anything like a logical proof.
>>
>> I am arguing by induction, not deduction:
>>
>> If the universe is computable, then Occam's Razor holds.
>> Occam's Razor holds.
>> Therefore the universe is computable.
>>
>> Of course, I have proved no such thing.
>>
>> -- Matt Mahoney, [EMAIL PROTECTED]
>>
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "A human being should be able to change a diaper, plan an invasion, butcher
> a hog, conn a ship, design a building, write a sonnet, balance accounts,
> build a wall, set a bone, comfort the dying, take orders, give orders,
> cooperate, act alone, solve equations, analyze a new problem, pitch manure,
> program a computer, cook a tasty meal, fight efficiently, die gallantly.
> Specialization is for insects."  -- Robert Heinlein
>


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com
