On Sun, Jan 12, 2003 at 09:38:26AM -0500, Ben Goertzel wrote:

> To me, the question of what a computational model can do with moderately
> small space and time resource constraints is at least equally "fundamental"

Computability theory: can this be computed?
Complexity theory: does it take polynomial or exponential time?
Algorithms: can we get it to run 20% faster?

I'll take the risk of replying to other messages here without reading the 2
dozen other replies first.  It sounds like James is using primitive recursive
functions, equivalent to Hofstadter's BlooP in _GEB_.  Those are a subset of
the Turing-decidable/total recursive (always halt) functions, themselves a
subset of the Turing-recognizable/partial recursive (short for partially
defined, vs. totally defined) functions, which are equivalent in power to
Turing machines.

The halting problem for BlooP is solvable, because the language was designed
that way: every loop has a bound fixed before it starts.
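As a rough Python sketch (function names are mine, not from GEB) of the
bounded-loop vs. unbounded-loop distinction:

```python
# BlooP-style program: every loop's bound is fixed before the loop
# starts, so the program is guaranteed to halt.  Factorial is
# primitive recursive, so it fits this mold.
def factorial(n):
    result = 1
    for i in range(1, n + 1):  # bound n is known before looping
        result *= i
    return result

# By contrast, a FlooP-style unbounded loop may run forever, e.g.
# searching for a witness that might not exist.
def search(predicate):
    n = 0
    while True:  # no bound fixed in advance -- may never halt
        if predicate(n):
            return n
        n += 1
```

Termination of the first function is obvious by inspection; termination of
the second depends entirely on the predicate, which is where undecidability
creeps in.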

If I overheard my comp theory prof correctly, any computable function can be
transformed into a primitive recursive program wrapped by a single unbounded
loop (Kleene's normal form theorem).  Actually that makes sense -- you'd just
keep trying the inner program with bigger and bigger bounds.  Not very
exciting.

Shane's point that everything in reality is an FSA makes me want to say "the
universe is a regular expression!"  Possibly a better intuitive model for our
computers is a linear bounded automaton, a TM whose tape is limited to some
fixed multiple of its input length, so it can't go chewing up more and more
tape.  Of course our computers aren't really LBAs, since you can construct
inputs too large for them to handle.  But below that point it might be a
better practical model.
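To make the everything-is-an-FSA point concrete, here's a toy DFA in Python
(my example): two states are all the "memory" it has, and it recognizes the
regular language of bit strings with an even number of 1s.

```python
# A two-state DFA: accepts bit strings containing an even number
# of 1s, equivalent to the regular expression (0*10*1)*0*.
def even_ones(s):
    state = 0  # 0 = even count so far, 1 = odd count so far
    for ch in s:
        if ch == '1':
            state ^= 1  # flip parity on each 1
    return state == 0  # accept iff we end in the even state
```

Fixed memory means a fixed, finite set of states -- which is exactly the
sense in which any real machine, RAM included, is "just" a very large FSA.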

Done now,
-xx- Damien X-) 

-------
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
