On 4/24/07, Orr Dunkelman <[EMAIL PROTECTED]> wrote:
This reminds me of an interesting question cryptographers deal with nowadays:
what is a secure cryptographic hash function?
In short (assuming you all know what a hash function is), a cryptographic
hash function h is one for which it is hard to find collisions (i.e., x and x'
with x != x' s.t. h(x) = h(x')), second pre-images (given x, find x' != x
s.t. h(x) = h(x')), and pre-images (given y, find x s.t. h(x) = y).
The reason we have problems defining this is that collisions must exist (the
domain is larger than the range, so by the pigeonhole principle some pair must
collide), and if a and b are a collision, then print(a,b) is a polynomial-time
algorithm which outputs a collision.
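The pigeonhole argument above can be made concrete with a toy hash (SHA-256 truncated to 20 bits, so a collision is cheap to find by birthday search). This is only an illustrative sketch; toy_hash and find_collision are made-up names, not standard functions:

```python
import hashlib
from itertools import count

def toy_hash(data: bytes, bits: int = 20) -> int:
    """Truncate SHA-256 to `bits` bits so collisions become findable."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def find_collision(bits: int = 20):
    """Brute-force a collision; birthday paradox says ~2^(bits/2) tries."""
    seen = {}
    for i in count():
        x = i.to_bytes(8, "big")   # distinct input for each counter value
        h = toy_hash(x, bits)
        if h in seen:
            return seen[h], x      # two distinct inputs, same hash value
        seen[h] = x

a, b = find_collision()
assert a != b and toy_hash(a) == toy_hash(b)

# Once (a, b) is known, "outputting a collision" is a constant-time program:
def print_collision():
    print(a.hex(), b.hex())
```

The point of the last function is exactly Orr's: the collision-printing algorithm trivially exists, even when nobody knows how to write it down for a full-size hash.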
Of course, this objection is somewhat dubious, because for any given hash
function we may not actually know that specific algorithm. For more
information about that, look at
http://eprint.iacr.org/2006/281
In any case, there is only a countably infinite (Aleph-zero) number of
polynomial-time algorithms, but there are uncountably many (2^Aleph-zero)
languages.
Actually, the only reason this is not a proof that P \neq NP is that NP
itself contains only Aleph-zero many languages.
So indeed there are sufficiently many algorithms, but the question is
whether the number of languages these algorithms recognize is infinite
(please note that two different algorithms may recognize the same language).
As for O(1) for all algorithms - there are many languages for which this is
impossible. For example, determining whether an input string is of the form
a^n||b^n requires reading the whole string, so the running time of any
algorithm that solves it is Omega(n). There are even problems where you can
prove that you need extra memory of logarithmic size (see the st-connectivity
problem).
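The a^n||b^n example can be sketched in a few lines of Python; the recognizer below is illustrative (is_anbn is a made-up name), and the point is that any correct algorithm must at least read all the input characters:

```python
def is_anbn(s: str) -> bool:
    """Decide whether s is a^n b^n: equal-length run of 'a's then 'b's.

    The check itself runs in O(n), and no algorithm can do better than
    Omega(n), since every character must be examined at least once.
    """
    n, r = divmod(len(s), 2)
    if r:                      # odd length can never be a^n b^n
        return False
    return s[:n] == "a" * n and s[n:] == "b" * n

assert is_anbn("aaabbb")
assert not is_anbn("aabbb")    # unequal runs
assert not is_anbn("abab")     # wrong shape
```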
And finally, settling whether P = NP does not necessarily require complexity
theory to be complete. Given the definitions, either answer may well be
consistent with the axioms (as long as the underlying theory itself is
consistent).
Orr.
I distinguish between problems which are known in practice to be hard and
problems which can be mathematically proved to be hard for any algorithm (or
even, for example, for any algorithm which can be represented by a
sequence of 50 million bits). I'm saying that there can be no consistent
mathematical proof of such hardness, but I'm not saying problems can never
be hard. There are hard problems, according to our experience.
Cryptographic functions are a good example. Although we can build functions
which are really hard to compute in the reverse direction, a reverse
function (either one-to-one or one-to-many) always exists. And if we define
a hash function which returns a hash string of 10,000 bits (for example),
representing 2^10,000 different possible values, then the number
2^50,000,000 of possible 50,000,000-bit algorithms is much bigger. So it is
quite probable that an algorithm of size 50,000,000 bits (or less) which
computes the reverse function in a short time does exist. The more bits we
generate for each hash value, the harder it is to find a reverse function.
But this is an assumption; it can never be proved. That doesn't mean
cryptography doesn't exist, it just means it can't be proved to be secure,
unless we use something like a one-time key, which is random, and which has
been proved to be completely secure.
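Uri's point that the work grows with the number of output bits can be illustrated with brute-force inversion of a truncated hash. This is only a sketch under the assumption that the truncated hash behaves like a random function; toy_hash and preimage_tries are hypothetical helper names:

```python
import hashlib
from itertools import count

def toy_hash(data: bytes, bits: int) -> int:
    """Truncate SHA-256 to `bits` bits (a stand-in for a small hash)."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def preimage_tries(target_input: bytes, bits: int) -> int:
    """Count brute-force attempts needed to find *some* preimage.

    Assuming the hash looks random, this takes about 2^bits tries on
    average, so each extra output bit roughly doubles the work.
    """
    target = toy_hash(target_input, bits)
    for i in count():
        if toy_hash(i.to_bytes(8, "big"), bits) == target:
            return i + 1

# With bits=8 this finishes after a few hundred tries; at 10,000 bits the
# same generic search would take ~2^10,000 tries, which is why one hopes
# (but cannot prove) that no short algorithm does much better.
```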
In the army they use such one-time keys printed as dots on paper; each key
has exactly two copies, and is used only once. After it is used, the paper
is burned. This wastes a lot of paper, but it is completely secure if the
key is random: the only way to decrypt such a message is to find out the key
(or the message itself). If the same key is used more than once, though, it
can become insecure.
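The one-time key Uri describes is the one-time pad: XOR the message with a truly random key of the same length. A minimal sketch, including the classic reason key reuse is insecure (the XOR of two ciphertexts equals the XOR of the two plaintexts, erasing the key entirely):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings (one-time-pad encrypt/decrypt)."""
    return bytes(x ^ y for x, y in zip(a, b))

msg1 = b"attack at dawn"
msg2 = b"retreat now!!!"                 # same length as msg1
key = secrets.token_bytes(len(msg1))     # random key, meant for ONE use

ct1 = xor_bytes(msg1, key)
assert xor_bytes(ct1, key) == msg1       # decryption recovers the message

# Key reuse leak: encrypting a second message with the same key means
# an eavesdropper can XOR the two ciphertexts and cancel the key out.
ct2 = xor_bytes(msg2, key)
assert xor_bytes(ct1, ct2) == xor_bytes(msg1, msg2)
```

The last assertion is the whole attack: with the key gone, the XOR of two natural-language plaintexts is often enough to recover both, which is why each paper key is burned after a single use.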
Uri.