David Barrett-Lennard writes:
> On a computer we generate a million numbers at random, and we write a
> program that tests whether the numbers are stored in ascending order, and if
> not causes the experimenter to be killed.
> ...
> An interesting question is whether there will be a tendency to discover
> short and concise algorithms. In any case, the experiment could be repeated
> many times to allow the experimenter to seek short programs - by giving it
> less space to work in.
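The test in the quoted setup is just a sortedness check on the random data. A minimal sketch (the names `is_sorted`, `numbers`, and `survives` are my own, and the kill mechanism is of course only a comment):

```python
import random

def is_sorted(xs):
    """Return True iff xs is in ascending (non-decreasing) order."""
    return all(a <= b for a, b in zip(xs, xs[1:]))

# A million random numbers, as in the quoted experiment.
numbers = [random.random() for _ in range(1_000_000)]

# In the thought experiment a False here triggers the kill mechanism;
# under QTI the experimenter only ever observes branches where it is True.
survives = is_sorted(numbers)
```

Classically `survives` is essentially always False, which is the whole point: the surviving observer's branch is one of measure roughly 1/1000000!.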

Yes, I think it would. If each bit of the program is set randomly, then the
measure of the branches where the experimenter survives falls off
exponentially with the length of the program - roughly 2^-L for an L-bit
program. So he would expect to see one of the shortest programs possible.

> Some really difficult problems could be solved with this technique - eg
> "Find a proof to ...", and the test code simply validates the line of
> reasoning.
>
> Evidently QTI gives the experimenter the equivalent computing power of a
> Quantum computer. ie exponential order problems can be solved in linear
> time.

Quantum computers can't solve exponential problems in linear time. They can
solve some problems, like factoring, in polynomial time, but no one knows
whether those problems actually require exponential time classically. They
can do some exponential problems, like unstructured search, in less time
than a classical computer - a quadratic speedup, via Grover's algorithm -
but the time is still exponential. I'm not sure how the power of this
QTI-based computer would compare with a conventional quantum computer.

One practical problem, even if you accept the correctness of QTI, is making
the kill-you circuit reliable enough that it is not the weakest link in the
chain. Its failure probability needs to be significantly less than the
probability that your program will work. But that latter probability may be
exponentially tiny, so engineering the killer circuit may be intractable.

Hal Finney
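The reliability point admits a quick back-of-envelope check. The sketch below compares the measure of the "random bits happened to form a working L-bit program" branches, about 2^-L, against an assumed per-run failure rate for the kill circuit (the 1e-9 figure and the function name are my own illustrative choices):

```python
def working_program_measure(program_bits):
    """Measure of the branches in which a specific program of this
    length appears, if each bit is set by an independent fair coin."""
    return 2.0 ** -program_bits

circuit_failure = 1e-9   # assumed (optimistic) chance the kill circuit misfires
L = 100                  # even a modest 100-bit program

# If this is True, the surviving observer is far more likely to be in a
# "circuit failed" branch than in a "program works" branch.
print(working_program_measure(L) < circuit_failure)  # → True
```

At L = 100 the program-works measure is about 8e-31, some twenty orders of magnitude below even a very good circuit's failure rate, which is why the circuit, not the search, becomes the binding constraint.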