Eliezer wrote:
> James Rogers wrote:
> >
> > Your intuition is correct, depending on how strict you are about
> > "knowledge".  The intrinsic algorithmic information content of any
> > machine is greater (sometimes much greater) than the algorithmic
> > information content of its static state.  The intrinsic AIC doesn't
> > change even though the AIC of the machine state may.  For this reason,
> > it is not possible for a machine with a smaller AIC to perfectly model a
> > machine with greater or even equal AIC.  By extension, it is also not
> > possible to have perfect self-knowledge.  It is a common misapplication
> > and/or misunderstanding to interchangeably use the intrinsic AIC of a
> > machine with the AIC of the machine's state; I'm not saying that is
> > happening here, but I see it regularly in other less rigorous forums and
> > so it is worth bringing up.
> >
> > All this does not preclude a smaller machine from having a very good
> > predictive model of a larger machine.  Just not a perfect one.
>
> I'm not sure whether your definition of AIC precludes this, but it is
> possible for a small physical system to perfectly model a large physical
> system, providing that the large physical system possesses perfect, large
> regularities such that its state can be fully represented within the
> small regularities of the small physical system.

James's definition gets around the point you're making, because the AIC of
a machine is defined as (roughly) the size of the smallest self-delimiting
program that computes that machine... so your large system with a lot of
regularity has very low AIC.
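One way to make that concrete is to use a compressor as a stand-in for
"smallest program" -- only an upper bound, of course, since true AIC is
uncomputable. A sketch in Python (the specific strings are just made-up
illustrations):

```python
# Compressed size upper-bounds a string's information content: a
# megabyte of perfectly regular state compresses to a few kilobytes,
# while a megabyte of pseudo-random state barely compresses at all.
# So a small machine could hold a perfect model of the regular state,
# but not of the noisy one.
import random
import zlib

regular = b"ab" * 500_000                      # 1 MB, highly regular
noisy = random.Random(0).randbytes(1_000_000)  # 1 MB, little regularity

print(len(zlib.compress(regular)))  # tiny: the regularity *is* the model
print(len(zlib.compress(noisy)))    # near 1 MB: nothing to exploit
```

The regular string's compressed size is what matters here: its AIC is
small even though its state is large, which is exactly the loophole
Eliezer's example exploits.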

He's basically restating Chaitin's epigrammatic restatement of Gödel's
Theorem: "You can't prove a 20-pound theorem with a 10-pound formal
system" ;-)   [poundage being algorithmic information content]

-- Ben G

