On 1/11/03 5:31 PM, "Pei Wang" <[EMAIL PROTECTED]> wrote:
> 
> My argument, briefly speaking, is that it is quite possible, in the current
> computer, to solve problems in such a way that is non-deterministic (i.e.,
> context-sensitive) and open-ended (as in anytime algorithms).  Such a
> process doesn't satisfy the definition of computation, doesn't follow a
> predetermined algorithm, and has no fixed complexity.
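
(As an aside for the list: the "anytime" behavior Pei mentions is easy to sketch in a few lines of Python — my own toy illustration, not his actual system. The point is that the process has no fixed complexity: it can be cut off at any point and still yields its best answer so far, and running longer merely refines the result.)

```python
import random

def anytime_mean(stream, budget):
    """Anytime estimate of the mean of a data stream.

    The loop can be interrupted at any point (modeled here by a
    `budget` on how many samples the context allows); it always
    returns its best estimate so far rather than failing.
    """
    total, count = 0.0, 0
    for x in stream:
        total += x
        count += 1
        if count >= budget:  # interruption point: context decides when to stop
            break
    return total / count if count else None

# The same process gives different (progressively better) answers
# depending on how much time the context grants it.
random.seed(0)
data = [random.gauss(10, 2) for _ in range(10_000)]
rough = anytime_mean(data, budget=10)      # crude but available early
fine = anytime_mean(data, budget=10_000)   # refined, given more time
```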


Interesting, as I make a similar argument.  I'm using an unconventional
model of universal computation on finite state machinery that, among other
things, exhibits these properties.  I would say it is less that computational
theory doesn't apply than that people make assumptions about the underlying
nature of the machine when they apply it.

For example, in our system it is quite possible for an iterative algorithm
implemented on our machinery to get rewritten as it is executing in a
particular context, such that it completes in an "unexpected" manner.
In this case, "rewritten" is much deeper than simple self-modifying code;
there is literally no distinction between code, data, and the machinery
itself, so strange things can happen.  Normal computational analysis
generally assumes that at the very least the machinery is static, if not the
code, but that is not the case here.  The very concept of "algorithm" as
normally used makes certain implicit assumptions about the fundamental
nature of the machinery it is running on, whether valid or not.  Standard
computational analysis could still be done on our machinery; it would just
look very different.
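
(For anyone who wants a concrete, if crude, flavor of "no distinction between code and data": even plain Python lets a function's behavior be replaced by the function itself while it runs, since functions are ordinary values. This is only a toy on a conventional machine — the machinery underneath stays static — but it shows the shape of the idea.)

```python
def make_counter():
    """An 'algorithm' that rewrites itself mid-execution.

    Each call may replace the function's own behavior, so the code
    that runs next time is data produced by the code running now.
    """
    state = {"step": 0}

    def step():
        state["step"] += 1
        if state["step"] >= 3:
            # Rewrite: from now on, short-circuit straight to a conclusion.
            state["impl"] = lambda: "done"
            return "done"
        return f"working ({state['step']})"

    state["impl"] = step
    return lambda: state["impl"]()

counter = make_counter()
results = [counter() for _ in range(5)]
# results == ['working (1)', 'working (2)', 'done', 'done', 'done']
```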


Note that this particular property is more-or-less how our system cheats the
halting problem, infinite loops, and the like.  When you run a program
instance that ends up in this state, the algorithms (in a generic sense)
tend to rewrite themselves such that they jump to a conclusion and
terminate.  However, this is not so much a property of the algorithm as of
the nature of the universal computing machinery you are running it on.  The very
machinery itself is deeply probabilistic and non-axiomatic, so there is no
guarantee that an algorithm will run in an expected manner every time
("expected" in the sense of the conventional model of universal computers --
obviously I know this will happen).  "Here be monsters..."
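
(On a stock machine, the nearest conventional analogue of "jumping to a conclusion" is wrapping potentially non-terminating search in a resource budget that forces it to return its best answer so far. This sketch is of that general idea, not of our machinery, where the rewriting happens at the level of the machine itself.)

```python
import itertools

def bounded_search(candidates, is_solution, score, budget):
    """Search that cannot loop forever: when the budget runs out, it
    'jumps to a conclusion' and returns the best candidate seen so far."""
    best, best_score = None, float("-inf")
    for c in itertools.islice(candidates, budget):
        if is_solution(c):
            return c                  # exact answer found in time
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best                       # budget exhausted: conclude anyway

# An infinite candidate stream that would loop forever if unbounded:
naturals = itertools.count(1)
answer = bounded_search(
    naturals,
    is_solution=lambda n: False,      # no exact solution exists
    score=lambda n: -abs(n - 7),      # prefer values near 7
    budget=100,
)
# answer == 7
```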


 
> For that "level" issue, one way to see it is through the concept of "virtual
> machine".  We all know that at a low level computer only has procedural
> language and binary data, but at a high level it has non-procedural language
> (such as functional or logical languages) and decimal data.  Therefore, if
> virtual machine M1 is implemented by virtual machine M2, the two may still
> have quite different properties.  What I'm trying to do is to implement a
> "non-computing" system on a computing one.


I know exactly what you are talking about here, though I don't know if
everyone will get it; I've had to explain this to others before (and
demonstrate it too).  We implement a virtual machine on top of a standard
computer architecture that is designed around a fundamentally different
model of a universal computer.
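
(The level distinction is easy to see with a trivial host/guest pair: a deterministic interpreter written in ordinary Python, running a guest language whose primitive operations are probabilistic. This is a deliberately simplified illustration of M1-on-M2 with different properties at each level — not our actual architecture.)

```python
import random

def run_guest(program, env, rng):
    """A tiny 'virtual machine': the host interpreter is ordinary
    deterministic Python, but the guest's CHOOSE primitive is
    probabilistic, so guest programs have a property (non-repeatable
    runs) that the host language's own semantics lack."""
    for op, *args in program:
        if op == "SET":
            name, value = args
            env[name] = value
        elif op == "CHOOSE":          # guest-level primitive with no
            name, options = args      # deterministic host counterpart
            env[name] = rng.choice(options)
        elif op == "ADD":
            name, a, b = args
            env[name] = env[a] + env[b]
    return env

program = [
    ("SET", "x", 1),
    ("CHOOSE", "y", [10, 20, 30]),
    ("ADD", "z", "x", "y"),
]
env = run_guest(program, {}, random.Random(42))
# env["z"] is one of 11, 21, 31 — which one is decided by the guest's
# probabilistic primitive, not by anything in the host code.
```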

One of the demonstrations that never fails to impress is that we can run
classes of algorithms with well-known complexity profiles (typically ones
with intractable scalability characteristics) on conventional hardware, and
then run the same algorithms on the virtual machine and get results that
conventional analysis suggests should be effectively impossible on that same
hardware.  (Note:
Only certain classes of algorithms exhibit this characteristic on our
virtual machine, so naturally those are the ones we choose and fortunately
they tend to be "interesting" algorithms.)  The problem is that most
conventional analytical computational theory typically makes many
assumptions about the behavior and nature of the universal computer (e.g.
that it is fundamentally a zero-order machine) that do not apply to our
virtual machine.  The virtual machine is still bound by the same laws of
computational theory that everyone else is, but the effective mechanisms of
computation are very different from the norm and you have to do your
analysis with a different set of assumptions to get useful results.

At the same time, I would note that our virtual machine is substantially
worse at doing some types of computation that conventional universal
computation architectures do very well, so there is something of a
trade-off.

To put it another way, it is possible to write a universal virtual machine
that runs on standard silicon that is not itself reducible to silicon using
any existing electronics technology (though in theory it might be possible
with, say, some molecular computer technology that doesn't exist yet).


-James Rogers
 [EMAIL PROTECTED]
