----- Original Message ----- 
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Friday, May 11, 2007 4:15 PM
Subject: Re: [agi] Determinism


> Suppose machine A has 1 MB of memory and machine B has 2 MB.  They may have
> different instruction sets.  You have a program written for A but you want to
> test it on B to see if it will work on A.  So you write a program on B that
> simulates A.  Your simulator has to include a 1 MB array to represent A's
> memory.  You load the test program in this array, simulate running A's
> instructions and get the output that you would have gotten on A.
>
> If you reversed the roles, you could not do it because you would need to
> declare a 2 MB array on a computer with only 1 MB of memory.  The best you
> could do is simulate a machine like B but with a smaller memory.  For some
> test programs you will get the same answer, but for others your simulation
> will get an out of memory error, whereas the real program would not.  This is
> a probabilistic model.  It is useful, but not 100% accurate.

As I said already, this is not true.  If you have offline storage, you can
simulate a much larger memory than you have physical memory.  All current
PCs use paged memory, for instance; if real memory has to be swapped out to
a disk drive, the program slows down a great deal, but it still runs.  This
is why I said that you can trade time for memory.
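
For concreteness, here is a minimal sketch in Python of what I mean.  The
2 MB size, the 4 KB pages, and the tiny eight-page cache are my own
illustrative assumptions, not anything from your example: a simulated
address space bigger than the RAM you are willing to spend, backed by a
disk file.

import tempfile

PAGE_SIZE = 4096

class PagedMemory:
    """Simulate a large memory with a disk file plus a small RAM cache."""

    def __init__(self, size_bytes):
        self.size = size_bytes
        self.backing = tempfile.TemporaryFile()   # the "offline" memory
        self.backing.truncate(size_bytes)         # zero-filled on disk
        self.cache = {}                           # page number -> bytearray

    def _page(self, page_no):
        if page_no not in self.cache:
            if len(self.cache) >= 8:              # RAM budget: 8 pages only
                old_no, old = self.cache.popitem()
                self.backing.seek(old_no * PAGE_SIZE)
                self.backing.write(old)           # swap out: slow but exact
            self.backing.seek(page_no * PAGE_SIZE)
            self.cache[page_no] = bytearray(self.backing.read(PAGE_SIZE))
        return self.cache[page_no]

    def read(self, addr):
        return self._page(addr // PAGE_SIZE)[addr % PAGE_SIZE]

    def write(self, addr, value):
        self._page(addr // PAGE_SIZE)[addr % PAGE_SIZE] = value & 0xFF

mem = PagedMemory(2 * 1024 * 1024)     # 2 MB simulated, however little RAM
mem.write(2 * 1024 * 1024 - 1, 42)     # touch the very last byte
assert mem.read(2 * 1024 * 1024 - 1) == 42

Every access outside the eight cached pages costs a disk seek, which is the
time being traded for the memory we don't have.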

> Now suppose you wanted to simulate A on A.  (You may suspect a program has a
> virus and want to see what it would do without actually running it.)  Now you
> have the same problem.  You need an array to represent your own memory, and it
> would use all of your memory with no space left over for your simulator
> program.  This is true even if you count disk and virtual memory, because that
> has to be part of your simulation too.
>
> Why is this important to AGI?  Because the brain is a computer with finite
> memory.  When you think about how you think, you are simulating your own
> brain.  Whatever model you use must be a simplified approximation, because you
> don't have enough memory to model it exactly.  Any such model cannot give the
> right answer every time.  So the result is we perceive our own thoughts as
> having some randomness, and this must be true whether the brain is
> deterministic or not.

As I said before, a full, complete, real-time model of the kind you describe
is rarely needed.  Humans simulate their own brains, and other people's, all
the time.  What we simulate is obviously at a much higher level than the
physical neurons in our brains, but it is a simulation nonetheless.  We never
simulate more than a tiny aspect of our own or others' brains, because we
don't need more than that for the simulation to be useful, and we don't have
the mental tools for more in any case.
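
As a toy illustration of simulating at a higher level (the rules here are
invented for the example, of course), a few coarse rules can usefully predict
another person's behaviour without modelling anything remotely like neurons:

def predict_driver(light, pedestrian_in_crosswalk):
    """A tiny, partial model of another driver -- nothing like neurons."""
    if light == "red" or pedestrian_in_crosswalk:
        return "stop"
    if light == "yellow":
        return "slow down"
    return "go"

assert predict_driver("green", True) == "stop"  # tiny model, useful answer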

Some models can give the "right" answer every time, if the model is either
perfectly known or simple enough.  Humans model many situations at a very
high level, and in those cases the model returns exactly the same answer
every time.  "Simulate" and "model" are not words that refer to just one low
level.  They can be used correctly and usefully at many levels, and all of
those levels must be accommodated before making sweeping generalizations
about whether humans or AGIs are deterministic.
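
A toy example of a model that is exact because it is simple and perfectly
known (again, the toggle switch is my own invented illustration): the model
captures the whole system, so it gives the same, correct answer on every run.

def toggle_after(start_state, presses):
    """Model of a two-state toggle switch after some button presses."""
    if presses % 2 == 0:
        return start_state
    return "off" if start_state == "on" else "on"

assert toggle_after("off", 5) == "on"   # exact and repeatable, every time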

I fully understand what you are saying, but I disagree with your narrow usage
of "model" and "simulate".

David Clark

