On Tue, Apr 24, 2007, Uri Even-Chen wrote about "Re: [off topic] Some new articles I wrote about science":
> No theory can be both complete and consistent. Every question we ask
> affects the answer. For example, the question "is the number 10
> prime?" depends on whether 10 is a binary number or a decimal number
> (or any base-n number). I came to a conclusion that ANY well-defined
> function can be computed in polynomial time (in theory). There are
> infinitely many algorithms. It is not possible to prove that any
> specific problem is not polynomial (unless it's not computable and
> therefore not well defined). Any such proof will contain some
> uncertainty (or at least, cannot be proved not to contain uncertainty)
> and therefore, it can be contradicted.

When Alan Turing started thinking about the complexity of algorithms about 70 years ago, he noticed the same problem I think you noticed. Just as you can't say anything about "10" before you define what the string "10" means (in your example, it could be in base 2 or base 10), you can't talk about "algorithms" without defining precisely what an algorithm means. A cynic would have said that you can't define an algorithm precisely, because it can be run on a "computer" of any type (at Turing's time, the computer was usually imagined...), written in various "languages", and so on. But Turing wasn't a cynic, he was a genius. He formalized two types of very specific "computers", each with its own very specific "language": the (deterministic) "Turing Machine" and the "Non-deterministic Turing Machine". Turing, as well as his contemporary Alonzo Church, believed that any physical computer is a *subset* of the Turing Machine (in other words, it can be emulated by a Turing Machine). This belief, the so-called "Church-Turing Thesis", cannot be proven - you can consider it an axiom. (Consult your favorite CS book, or even Wikipedia, to refresh your knowledge on these issues if you need to.)
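As an aside, your "10" example is easy to make concrete. Here is a tiny Python sketch (my own illustration, not anything from your article) showing that the same string names a prime in one base and a composite in another:

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

    for base in (2, 10):
        n = int("10", base)          # the number the string "10" denotes in this base
        print(base, n, is_prime(n))  # base 2: 2 is prime; base 10: 10 is not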
Now that you understand this axiom, all the theorems of complexity become clearer. "P" means polynomial-time algorithms on a (deterministic) Turing Machine, and "NP" means polynomial-time algorithms on a Non-deterministic Turing Machine. These definitions are very precise, and it's perfectly possible to prove that certain problems are in P, in NP, not in P, or not in NP (it's just that nobody has found a problem which is in NP but provably not in P - which is why the question of P=NP is still open).

You said that "I came to a conclusion that ANY well-defined function can be computed in polynomial time (in theory)", but this is NOT true on a deterministic Turing Machine. For various problems, it is very easy to prove that they are *NOT* in P, i.e., that there cannot be any polynomial-time algorithm for them. Such are problems which require exponential computation time (I'm sure you can think of some).

> I'm referring to the general case, whether any specific problem is not
> in O(1). No specific problem can be proved not to be in O(1) for all
> algorithms.

Why not? Please let me know what I'm missing in your theory. Let's say I take the easily defined function f(n) = 111...111 (a sequence of n 1's). Creating the output of f(n) requires n steps on a Turing machine, so the time this function takes is NOT O(1) as you say.
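To spell the counting argument out, here is a minimal Python sketch (my own illustration) of why f cannot be computed in constant time - the machine has to write each of the n output symbols:

    # f(n) = a string of n 1's.  Writing the output alone takes n steps,
    # so no algorithm for f runs in O(1) time.
    def f(n):
        tape = []
        for _ in range(n):        # one write step per output symbol
            tape.append("1")
        return "".join(tape)

    assert f(5) == "11111"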
> I also wrote more articles, for example a proof that the speed of
> light is not constant (or more generally speaking, that deterministic
> constants don't exist). It means that there is no theoretical limit

I am at a loss trying to understand the reasoning in that article. As far as I remember, the double-slit experiment is never done with a single photon, but rather with a constant source of photons, whose wave function looks like a sine wave. If these sine waves pass different distances before meeting each other, they meet each other at a different phase, causing the interference pattern to come out differently.
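For concreteness, here is the textbook two-beam version of that phase argument as a small Python sketch (my own illustration; the 500 nm wavelength is an arbitrary choice):

    import math

    # Two equal waves meeting after paths L1 and L2: the phase difference
    # is 2*pi*(L2 - L1)/wavelength, and the combined intensity is
    # 4*cos(dphi/2)**2 in units of a single wave's intensity.
    def intensity(L1, L2, wavelength):
        dphi = 2 * math.pi * (L2 - L1) / wavelength
        return 4 * math.cos(dphi / 2) ** 2

    lam = 500e-9                          # arbitrary example: 500 nm light
    print(intensity(0.0, 0.0, lam))       # 4.0  - equal paths, constructive
    print(intensity(0.0, lam / 2, lam))   # ~0.0 - half-wave shift, destructive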
I also don't understand your "conclusion" (which does not seem to be based on the preceding arguments) that the speed of light is not well-defined at the small scale of quantum mechanics. Heisenberg's uncertainty principle indeed says that you cannot know a photon's position and momentum with absolute precision at the same time. But even if you don't know these, why does it mean that you can't know its *speed*? Mind you, unlike matter particles, a photon's momentum is a function of its wavelength (p = h/lambda), not its speed.

In any case, there are many problems reconciling Quantum Mechanics, Special Relativity and General Relativity, at many levels. Stephen Hawking is famous for showing that a black hole - an object so massive that something would need to travel faster than light to escape from it - actually has stuff coming out of it. The reason is quantum mechanics. But none of these "problems" in reconciling the theories is an experiment that someone forgot to try, as you suggest.

By the way, many recent theories suggest the possibility of tachyons, particles traveling above the speed of light. However, these tachyons are "stuck", if you will, traveling above the speed of light, just as ordinary matter is stuck below this limit. I'm not aware of any current credible theory which suggests that particles traveling below the speed of light can be sped up to travel above it.

--
Nadav Har'El                        |     Tuesday, Apr 24 2007, 7 Iyyar 5767
[EMAIL PROTECTED]                   |-----------------------------------------
Phone +972-523-790466, ICQ 13349191 |Corduroy pillows - they're making
http://nadav.harel.org.il           |headlines!