--- David Clark <[EMAIL PROTECTED]> wrote:

> > If you reversed the roles, you could not do it because you would need to
> > declare a 2 MB array on a computer with only 1 MB of memory.  The best you
> > could do is simulate a machine like B but with a smaller memory.  For some
> > test programs you will get the same answer, but for others your simulation
> > will get an out of memory error, whereas the real program would not.  This
> > is a probabilistic model.  It is useful, but not 100% accurate.
> 
> As I said already, this is not true.  If you have offline memory, you can
> simulate a much larger memory than you have physical memory.  All current
> PCs use paged memory, for instance, but if you have to swap real memory out
> to a disk drive, it slows the program down a lot.  This is why I said that
> you can trade time for memory.

(sigh)  This has nothing to do with speed.  Offline memory is still memory:
adding disk just gives you a bigger finite state machine, not an infinite one.
It does not change the fact that a finite state machine cannot simulate a
machine with more states than itself (nor fully simulate itself, since the
simulator needs extra states for its own bookkeeping), no matter how much
time you give it.  You can simulate some programs and get the right answer,
but you can't do it for every case.
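To make this concrete, here is a toy sketch (the memory model and names are my own illustration, not anything from this thread): a "program" is a list of writes to memory cells, and a simulator with a hard memory budget runs any program whose working set fits the budget, but must fail on one that exceeds it, even though the larger real machine would not.

```python
def run(program, memory_limit):
    """Simulate a machine whose memory is capped at memory_limit cells."""
    memory = {}
    for op, addr, value in program:
        if addr >= memory_limit:
            # The simulator is out of memory here; the real, larger
            # machine would have completed this program normally.
            return "out of memory"
        if op == "write":
            memory[addr] = value
    return "ok"

MB = 1_000_000
small = [("write", 100, 1)]         # fits in 1 MB and in 2 MB
large = [("write", 1_500_000, 1)]   # fits only in the 2 MB machine

print(run(small, 1 * MB))  # ok -- simulation agrees with the real machine
print(run(large, 1 * MB))  # out of memory -- the 2 MB machine would succeed
print(run(large, 2 * MB))  # ok -- only the larger machine gets this right
```

This is the sense in which the simulation is "probabilistic": it gives the right answer for every program that happens to fit, and the wrong answer for every program that does not.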

I am trying to explain this using a minimum of mathematics, because if you
don't understand the math, you won't believe a proof.  If you want a formal
proof of the more general case of Turing machines bounded by algorithmic
complexity, see Legg's paper, http://www.vetta.org/documents/IDSIA-12-06-1.pdf


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936