On Fri, Jan 25, 2008 at 10:43:59AM -0800, SJS wrote:

I think you're going to have a problem because you don't constrain what
"severely bounded" means. We're no longer talking Turing machines; if we
bound our machine to run only primitive recursive programs, we can
make confident judgements about the machine halting in fixed time.

Ok.  Let's make memory our only bound.  To simplify, let's keep the program
memory separate, but finite.  The PC still needs to be represented in our
memory, though (so the memory effectively includes the CPU registers).

There are no constraints on what the architecture is.

The goal isn't to show it is possible to make a computer simple enough to
solve halting, but to show that even with limited memory, solving halting
remains impractical.

  1.  I define impractical as where the solution to any non-trivial input
  size requires more storage than the size of the universe or more time
  than the duration of the known universe.

Duration is meaningless without a performance limit. More bits than the
current estimated size of the universe is a better yardstick, but even
then you're making assumptions as to how the target scheme will run.

Let me clarify.  Assume a clock tick of the Planck time (~1e-44 s).
Assuming 13.8e9 years for the age of the universe, there have been about
4e61 possible clock ticks.  It is not "practical" to build a computer whose
clock can run faster than the Planck time.  I don't care if your computer
is slower, since that only makes things worse.
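The tick-budget arithmetic is easy to check.  A sketch, using rough values
for the Planck time and the age of the universe (and, as my own addition,
the smallest n for which 2^n steps would exceed the budget):

```python
import math

# Rough order-of-magnitude values, not exact physical constants.
PLANCK_TIME = 1e-44               # seconds per clock tick
AGE_SECONDS = 13.8e9 * 3.156e7    # ~13.8 billion years in seconds

ticks = AGE_SECONDS / PLANCK_TIME
print(f"clock ticks available so far: {ticks:.1e}")   # ~4.4e+61

# Smallest n for which 2^n steps exceeds the total tick budget:
n = math.ceil(math.log2(ticks))
print(f"2^n steps exceeds the budget at n = {n}")     # n = 205
```

So a computation that needs 2^n steps blows the time budget already for a
couple hundred bits of state.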

  2.  Given a processing system with 'n' bits of memory.  Each processing
  step modifies zero or more of these bits.  Halting is defined as a
  processing step that modifies zero bits.  The processing system has no
  other state.

Don't we also need to define input?  I would think that our bounded machine
could be described with a DFA operating on a symbol set of size 1 and
an unbounded input string (the clock). We need to keep the input separate
so that our program is general, yes?

Any input is part of memory, that's the point of bounding it.

  3.  Because there are a finite number of states, it is sufficient to
  detect a repeated state to detect a non-halt.  I know of two ways of
  detecting loops (but I don't know how to prove there aren't more):

    a.  Track all visited states.  With n bits of memory there are 2^n
    possible states, so the visited set can grow to 2^n entries (even a
    bitmap over all states is 2^n bits).  This requires impractical storage
    for all non-trivial n.

There may be clever ways around that.

Not if you don't know anything about what possible states and transitions
there are.
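For concreteness, here is a sketch of method (a) on a toy 3-bit machine
(the toy machine and the function names are my invention, purely for
illustration).  The visited set is what blows up: with n bits of state it
can grow to 2^n entries.

```python
def decides_halt_by_visited_set(step, state):
    """Method (a): remember every state seen.  With n bits of memory there
    are 2^n possible states, so `seen` can need up to 2^n entries."""
    seen = {state}
    while True:
        nxt = step(state)
        if nxt == state:      # a step that modifies zero bits: a halt (def. 2)
            return True
        if nxt in seen:       # revisited an earlier state: a cycle, no halt
            return False
        seen.add(nxt)
        state = nxt

# Toy 3-bit machines, for illustration only:
def counter_that_sticks(s):   # counts up and sticks at 7: halts
    return min(s + 1, 7)

def counter_that_wraps(s):    # counts mod 8 forever: never halts
    return (s + 1) % 8

print(decides_halt_by_visited_set(counter_that_sticks, 0))  # True
print(decides_halt_by_visited_set(counter_that_wraps, 0))   # False
```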

    b.  Run two identical systems from the same starting state, but run
    one two steps for each step of the other.  If they ever reach the same
    state, there is a cycle, and the system will not halt.  This requires
    only two copies of the state, i.e. 2n bits of storage.  But the longest
    possible cycle goes through every state, requiring up to 2^n steps.
    This requires impractical time.

Take the trivial machine that halts.  The trailing machine will have the
same bit pattern as the leading machine, and flag a cycle, despite the
state never changing.

The point is not whether you can construct a machine that detects some
halts, but that it is impractical to construct a machine that correctly
decides all of them.
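For what it's worth, the two-pointer scheme of (b) (Floyd's cycle
detection) can be squared with the definition in 2: a cycle of length one
is exactly a step that modifies zero bits, i.e. a halt, while any longer
cycle means the machine never halts.  A sketch, with toy machines of my own
invention:

```python
def decides_halt_by_two_pointers(step, state):
    """Method (b): Floyd's cycle detection.  Keeps only two copies of the
    n-bit state (2n bits), but may take on the order of 2^n steps."""
    slow = step(state)
    fast = step(step(state))
    while slow != fast:
        slow = step(slow)
        fast = step(step(fast))
    # The pointers met, so we are on a cycle.  A cycle of length one is a
    # state that a step leaves unchanged -- a halt by definition 2.  Any
    # longer cycle repeats forever.
    return step(slow) == slow

def counter_that_sticks(s):   # toy machine: counts up, sticks at 7, halts
    return min(s + 1, 7)

def counter_that_wraps(s):    # toy machine: counts mod 8 forever
    return (s + 1) % 8

print(decides_halt_by_two_pointers(counter_that_sticks, 0))  # True
print(decides_halt_by_two_pointers(counter_that_wraps, 0))   # False
```

The storage is tiny, but nothing here escapes the 2^n-step worst case.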

It is also necessary to prove that the repeat detection of 3 is necessary,
instead of merely sufficient.

I can see why that would help, but why would that be true?

Without it, I've not proven what I intended to prove.  Otherwise, we don't
know whether there is some yet-undiscovered algorithm out there that solves
the problem more efficiently.  If we can prove that every algorithm must
use either O(2^n) space or O(2^n) time, the conclusion follows.

Dave

--
KPLUG-LPSG@kernel-panic.org
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
