### Re: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]

On 8/28/06, Dave Korn [EMAIL PROTECTED] wrote:

> The author has made the *exact* same error as when someone comes up with a magical compression algorithm that they say can compress absolutely any data down to a tiny size. They always get the data to compress, sure, but they always have problems with the decompression stage. They tend to think that this is just a minor bug in their decompressor they need more time to work on, but no amount of time will let them work around the fundamental mathematical fact that they haven't got sufficient information in their compressed file to reconstruct the original.

Thanks, sorry for the hassle; I (and others) should've checked it before asking.

Re the compression algorithm: I conjecture there exists an algorithm (not necessarily *finite*) that can compress large numbers (strings/files/...) into small space; more precisely, it can compress a number that is N bytes long into O(P(log N)) bytes, for some polynomial P. Here's a lead: take as an example the group Z_p* with p prime (in other words: DLP). The triplet (Z, p, generator g) is a compression of a string of p-1 numbers, each number about log2(p) bits. (Legend: DTM - deterministic Turing machine, NTM - nondeterministic Turing machine.) There exists a way (not necessarily fast/polynomial on a DTM) by which a lot of strings can be compressed into the mentioned triplets. There are \phi(p-1) different strings that can be compressed with these triplets; the exact number of course depends on the factorization of p-1. Decompression is simple: take the generator g and compute g, g^2, g^3, g^4, ... in Z_p*.

I conjecture that for every permutation on 1..N there exists a function that compresses the permutation into a short representation. It is possible that only an NTM, possibly with an infinite algorithm (e.g. a human), can do it in some short finite time.
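The decompression step described above is easy to sketch; a toy example with a small illustrative prime (my own code, not from the thread), assuming g really is a generator of Z_p*:

```python
# Toy sketch of the "triplet" decompression: from (p, g), regenerate the
# sequence g, g^2, g^3, ... in Z_p*. When g is a generator, the p-1 powers
# enumerate every element of Z_p* exactly once.
p, g = 11, 2  # small illustrative prime; 2 happens to generate Z_11*

sequence = [pow(g, k, p) for k in range(1, p)]  # g^1 .. g^(p-1) mod p

# the "decompressed" string of p-1 numbers is a permutation of 1..p-1
assert sorted(sequence) == list(range(1, p))
```

Of course, only the strings that happen to be such power sequences are reachable this way, which is exactly the counting objection raised in the quoted message.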
Again, I could've/should've proven or disproven the conjecture, I just don't want to do it - yet ;-) Thanks OM

- The Cryptography Mailing List Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

### Re: Impossible compression still not possible. [was RE: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]]

We are both talking about the same thing :-) I am not saying there is a finite deterministic algorithm to compress every string into small space; there isn't. BTW, thanks for "There is ***NO*** way round the counting theory." :-)

All I wanted to say is: for a specific structure (e.g. movie, picture, sound) there is some good compression algorithm. E.g.: if you take a 65536x65536 GIF, all white, with just one black pixel, it can be compressed into 35 bytes, see here: http://i.iinfo.cz/urs-att/gif3_6-115626056883166.gif If you wanted to compress the same picture using JPEG (i.e. the discrete cosine transform), two things would happen: a) the compressed JPEG file would be much bigger; b) the decompressed image would have artifacts, because the Fourier transform of a pulse is a sinc (infinitely many frequencies). Sure, JPEG is a lossy compression, but good enough for photos and images that don't have a lot of high frequencies. Cheers, OM

On 8/28/06, Dave Korn [EMAIL PROTECTED] wrote:

> On 28 August 2006 15:30, Ondrej Mikle wrote:
>
>> Re the compression algorithm: I conjecture there exists an algorithm (not necessarily *finite*) that can compress large numbers (strings/files/...) into small space; more precisely, it can compress a number that is N bytes long into O(P(log N)) bytes, for some polynomial P.
>
> [My maths isn't quite up to this. Is it *necessarily* the case that /any/ polynomial of log N /necessarily/ grows slower than N? If not, there's nothing to discuss, because this is then a conventional compression scheme in which some inputs lead to larger, not smaller, outputs. Hmm. It would seem to me that if P(x)==e^(2x), P(log N) will grow exponentially faster than N. I'm using log to mean natural log here; substitute 10 for e in that formula if we're talking about log10, the base isn't important. However, assuming that we're only talking about P that *do* grow more slowly, I'll discuss the problem with this theory anyway.]
>
> There are many, but there are no algorithms that can compress *all* large numbers into small space; for all compression algorithms, some sets of input data must result in *larger* output than input. There is *no* way round the sheer counting-theory aspect of this. There are only 2^N unique files of N bits. If you compress large files of M bits into small files of N bits, and your decompression algorithm produces deterministic output, then you can only possibly generate 2^N files from the compressed ones.
>
>> Take as an example the group Z_p* with p prime (in other words: DLP). The triplet (Z, p, generator g) is a compression of a string of p-1 numbers, each number about log2(p) bits. There exists a way (not necessarily fast/polynomial on a DTM) by which a lot of strings can be compressed into the mentioned triplets. There are \phi(p-1) different strings that can be compressed with these triplets. The exact number of course depends on the factorization of p-1. Decompression is simple: take the generator g and compute g, g^2, g^3, g^4, ... in Z_p*.
>
> This theory has been advanced many times before. The "Oh, if I can just find the right parameters for a PRNG, maybe I can get it to reconstruct my file as if by magic." (Or substitute FFT, or wavelet transform, or key-expansion algorithm, or ... etc.) However, there are only as many unique generators as (2 to the power of the number of bits it takes to specify your generator) in this scheme. And that is the maximum number of unique output files you can generate. There is ***NO*** way round the counting theory.
>
>> I conjecture that for every permutation on 1..N there exists a function that compresses the permutation into a short representation.
>
> I'm afraid to tell you that, as always, you will find the compression stage easy and the decompression impossible. There are many many many more possible permutations of 1..N than the number of unique short representations in the scheme. There is no way that the smaller number of unique representations can possibly produce any more than the same (smaller) number of permutations once expanded. There is no way to represent the other (vast majority) of permutations.
>
>> It is possible that only an NTM, possibly with an infinite algorithm (e.g. a human), can do it in some short finite time.
>
> Then it's not really an algorithm, it's an ad-hoc collection of different schemes. If you're allowed to use a completely different compression scheme for every file, I can do better than that: for every file, define a compression scheme where the bit '1' stands for the content of that file, and everything else is represented by a '0' bit followed by the file itself. Sure, most files grow 1 bit bigger, but the only file we care about is compressed to just a single bit! Great! Unfortunately, all you've done is moved information around. The amount of information you'd have to have in the decompressor to know which file the single '1' bit stands for is the information content of the file itself - nothing has been compressed, only relocated.
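The counting argument is easy to check mechanically. A small sketch (the expander function here is an arbitrary stand-in of my own, not anyone's real scheme): a deterministic "decompressor" driven by a k-bit seed can emit at most 2^k distinct outputs, so it cannot reproduce all 2^(8*L) files of L bytes once 8*L > k.

```python
import hashlib

# Pigeonhole check: with a k-bit seed there are at most 2^k reachable outputs.
def decompress(seed: int, out_len: int = 8) -> bytes:
    # any fixed deterministic expander works for the argument;
    # here, truncated SHA-256 of the seed
    return hashlib.sha256(seed.to_bytes(4, "big")).digest()[:out_len]

k = 10
outputs = {decompress(s) for s in range(2 ** k)}
assert len(outputs) <= 2 ** k   # at most 1024 distinct outputs
assert len(outputs) < 2 ** 64   # vastly fewer than all possible 8-byte files
```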

### Re: Impossible compression still not possible. [was RE: Debunking the PGP backdoor myth for good. [was RE: Hypothesis: PGP backdoor (was: A security bug in PGP products?)]]

Dave Korn wrote:

> Of course, I could point out that there is precisely *1* bit of information in that huge GIF, so even compressing it to 35 bytes isn't a great achievement... it's one of the set of less-common inputs that grow bigger as a compromise so that real pictures, which tend to have at least *some* variation, grow smaller.

OK, according to Shannon's definition of entropy, how much information is there in \pi or e or another transcendental number? If \pi is normal (which is conjectured but unproven), then every finite string occurs in the fractional expansion of \pi (take a positive integer base other than 10 if you want). Sure, the digits are correlated, because we can express \pi or e as an infinite series, though only a couple of bytes are necessary. But: if you take any finite string, there exists a finite natural number as the index where the string can be found in \pi. So are there any random strings at all?

What if I can express the digits starting at the 2^(2^k)-th index analytically in a short, fast algorithm? In case no one else can, then I have a perfect one-time pad, at least for some time, because conventional computers will not reach that place in any reasonable time (yes, I know, there may be other correlations). The point I am trying to make: if I take e.g. a transcendental number and can analytically express a part of its fractional expansion, I *might* have a strong key.

The point of this: I am researching an authentication scheme where keys are *programs*, not data. It means that a key can change itself over time, for example. The strongest keys would therefore be somewhat recursive polymorphic code that enumerates all possible permutations on some finite set in some deterministic, though seemingly random, order. I know that there are short descriptions for lots of infinite structures: the mentioned transcendental numbers, the Mandelbrot set, various IFSs (iterated function systems), quaternions, non-measurable sets; there are lots of possibilities.

What I know for sure is that for many mathematical structures short descriptions (programs or (partially) recursive functions) exist. What I conjecture is that for each finite ordered set a short compression function exists. The whole trick is that it is almost impossible (meaning computationally infeasible) to look for the compression functions using a finite deterministic algorithm, though a human can do it. It will yet take some time until I have some more results about the categories of key programs. Cheers, OM

### Re: Hypothesis: PGP backdoor

Len Sassaman wrote:

> On Thu, 24 Aug 2006, Ondrej Mikle wrote: I also have no question, personally, that if there's a backdoor in PGP, neither Mr. Callas nor any of the PGP engineers I had the pleasure to work with know of it. Your theory is indeed wild, and though I don't mean to discourage vigilance in questioning these sorts of potential subversions of integrity in software as important as PGP, you might consider doing more research into the background of people against whom you choose to levy hypothetical accusations in public forums in the future.

OK, thanks for answering. I had only a very limited view of the background behind PGP (i.e. the stuff about NAI/PGP Corp).

One last question: what about PGPdisk SDA (self-decrypting archives, i.e. executables)? There has been a claim that SDA archives can be decrypted using a debugger. Is it true or false? See the section "Two Ways to bypass PGP SDA Authentication and EXTRACT with success" in the advisory http://www.safehack.com/Advisory/pgp/PGPcrack.html. Is the guy confused again? ;-) Thanks OM

### Hypothesis: PGP backdoor (was: A security bug in PGP products?)

Hello. We discussed with V. Klima the recent bug in PGPdisk that allowed extraction of the key and data without knowledge of the passphrase. The result is a *very*wild*hypothesis*. Cf. http://www.safehack.com/Advisory/pgp/PGPcrack.html

Question 1: why hasn't anybody noticed in three months? Why has there not been a serious notice about it? According to the paper, both standard .pgd and self-extracting SDA (self-decrypting archives) are affected. A systematic backdoor, maybe? Possibilities:

1) It is a hoax, though with very low probability. The text seems to include a lot of work and makes perfect sense (REPE CMPS, all the assembly), i.e. we suppose it is highly improbable that somebody would make such a hoax. This can be either proven or disproven simply by checking the Win program with a hex editor/debugger (using an already downloaded copy). I haven't had the time to check it yet (no Win).

2) AFAIK, Zimmermann is no longer in control of the company making PGP. AFAIK the company (NAI) was bought by another group a couple of years ago. www.pgp.org says: 2002/03/08 - NAI drops PGP Desktop; 2001/10/15 - NAI to sell PGP division. It may therefore be quite possible that NSA/CIA/FBI/etc. couldn't force Zimmermann to compromise his own product directly, so they bought the company. The backdoor might have been introduced in the latest releases (e.g. 8.x, 9.x).

3) There was a lazy programmer, or a programmer-infiltrator from the ranks of the intelligence services. What does one do when a cryptosystem seems unbreakable? Circumvent it. AFAIK the code was checked many times at NAI, until some point in time. As you all probably know, there was a lot of mischief around Zimmermann and PGP in the '90s. We don't think NSA/CIA/FBI/etc. would just give up without a fight. You know, the three-line Perl RSA implementations on T-shirts and so on.

The code of PGPdisk 9.x looks like this according to the paper: when the passphrase is changed, the key itself remains untouched. If only the encryption key were itself encrypted by a symmetric key generated from the passphrase, e.g. by PBKDF2, changing the passphrase would actually matter.

Conclusion: it seems that NSA/CIA/FBI/etc. haven't called a truce. Though, a very clever solution. Nevertheless, nothing we haven't already seen in 1st/2nd world war tactics. What do you think? Your input is welcome. OM

P.S. sorry for any misspellings of names

### Provably secure cryptosystem

Hello. I humbly say that I *might* have devised a provably secure cryptosystem that actually *might* work in reality. It provides secure authentication and possibly might be extended to something else. Sounds too good to be true? Well, you're right; in reality it's a bit more complicated. I'm risking making a fool of myself again, but what the heck ;-) At least I think it's something you all know, at least to some extent. I have searched for some other provably secure schemes for MACs/signatures/etc.; they use many similar assumptions (e.g. the random oracle assumption, the assumption of a strong one-way hash function with uniform distribution) and some similar techniques. In my system I can prove there *is* a random oracle (this is also not entirely true, but... you'll see).

OK, the point: in complexity theory relative to a random oracle, NP != P (cf. Shoup 1997: Lower Bounds for Discrete Logarithms and Related Problems; Baker-Gill-Solovay 1975). The trick is based on this fact. Keys in the cryptosystem are not *data*, but *programs*, i.e. (partially) recursive functions. The trick is to simulate a random oracle by a program; each program is a key describing a random permutation on some subset of the natural numbers.

The cryptosystem is based on the following observation (an extension of the halting problem): given a program L as input that takes 1..n as input, L stops for exactly one j with 1 <= j <= n. If we give this program to a DTM (deterministic Turing machine), no finite algorithm can decide, for all possible input programs L and numbers n, for which j the input program L will stop in polynomial time. In other words, no finite program can detect every virus (a virus is defined to be a self-replicating program), or check whether another program will draw a prescribed picture for a given input, etc., in polynomial time.

The paper can be found here (alpha draft, by no means complete; some parts such as related work, references, and acks are not complete): -- http://urtax.ms.mff.cuni.cz/~abyssal/PS.pdf -- Be warned, it may be at least *partially* false, because I *deliberately* left out one proof (protecting the keyspace, though that's security by obscurity). Well, hopefully there are no more serious glitches ;-] The system as a whole is secure, but every single instance can of course be broken by man (stealing the key, guessing the key, cryptanalysis). The integer factorization problem and the (generalized) discrete logarithm problem are also elements of the system (it is a set). At least I hope you'll have some fun. OM

### Re: A security bug in PGP products?

Max A. wrote:

> Hello! Could anybody familiar with PGP products look at the following page and explain in brief what it is about and what are the consequences of the described bug? http://www.safehack.com/Advisory/pgp/PGPcrack.html

It seemed a bit obscure to me at first, but it says basically: PGPdisk does not use a key derived from the passphrase, it just does simply this: if (somehash(entered_password) == stored_password_hashed) then access_granted(); That's the REPE CMPS string-comparison instruction. The check can simply be skipped using a debugger by interrupting the program and changing CS:EIP (i.e. the place of execution) to resume after the successful check.

The text probably implies that the key is stored somewhere in the PGPdisk file and the key's successful extraction does not depend on knowledge of the passphrase. So if you change the passphrase, the disk won't get re-encrypted; just by copy-pasting the old bytes you can revert to the old passphrase, or you can create another disk with a passphrase chosen by you and use the copy-pasting method to decrypt another PGPdisk protected with a passphrase. I haven't checked myself whether their claim is true, but it's possible. Hope that helped, O. Mikle
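A hypothetical sketch of the difference (names, parameters, and the XOR wrapping are mine for illustration, not PGP's actual design): the flawed scheme grants access on a single comparison that a debugger can skip, while a passphrase-derived key-encryption key makes the disk key itself unrecoverable without the passphrase.

```python
import hashlib

# Flawed design (as described in the advisory): access control is just a
# branch; patching or skipping the comparison bypasses it entirely, and the
# disk key stored in the file never depended on the passphrase.
def access_granted_flawed(entered: bytes, stored_hash: bytes) -> bool:
    return hashlib.sha256(entered).digest() == stored_hash

# Sounder design: derive a key-encryption key (KEK) from the passphrase and
# store only the wrapped disk key. (XOR wrapping is a toy stand-in for a
# real key-wrap mode.)
def wrap_key(disk_key: bytes, passphrase: bytes, salt: bytes) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000,
                              dklen=len(disk_key))
    return bytes(a ^ b for a, b in zip(disk_key, kek))

def unwrap_key(wrapped: bytes, passphrase: bytes, salt: bytes) -> bytes:
    return wrap_key(wrapped, passphrase, salt)  # XOR is its own inverse

salt = b"\x00" * 16
wrapped = wrap_key(b"K" * 32, b"correct passphrase", salt)
assert unwrap_key(wrapped, b"correct passphrase", salt) == b"K" * 32
assert unwrap_key(wrapped, b"wrong passphrase", salt) != b"K" * 32
```

In the second design, skipping a branch buys the attacker nothing: without the passphrase, the wrapped bytes do not yield the disk key.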

### Re: hashes in p2p, was Re: switching from SHA-1 to Tiger ?

Travis H. wrote:

> On 7/11/06, Zooko O'Whielacronx [EMAIL PROTECTED] wrote:
>
>> I hope that the hash function designers will be aware that hash functions are being used in more and more contexts outside of the traditional digital signatures and MACs. These new contexts include filesystems like ZFS [3], decentralized revision control systems like Monotone [4], git [5], mercurial [6] and bazaar-ng [7], and peer-to-peer file-sharing systems such as Direct Connect, Gnutella, and Bitzi [6].
>
> MD4/5 are commonly used as a unique fixed-size identifier of an arbitrarily-chosen length of data in p2p file systems, and we are all aware of the collision attacks. They bring up some interesting points to consider: 1) What semantics can one induce by using a collision attack, given the existing protocols/clients? There are some rumors the MPAA or RIAA is using protocol-level attacks to poison p2p networks like BitTorrent and KaZaA. Can cryptanalysis results be playing a part?

The RIAA uses only very basic cryptanalytic attacks. Specifically, KaZaA (the FastTrack protocol) uses the very stupid UUHash algorithm, which works like this: the first 300 kB are hashed with MD5, then 300 kB at the 1 MB boundary are added to the hash, then 300 kB at the 2 MB boundary, then 300 kB at each 2^n MB boundary. It is clear that it is very easy to generate collisions for UUHash. For other networks, mostly Sybil attacks are used (spawning a lot of fake nodes and fake files with the same name so that searches turn them up).

> 2) How do we refactor these widely deployed systems with a new, stronger hash function?

An example of how to handle this might be the aMule and eMule clients. The basic ed2k protocol only uses MD4 hashes (it is a hash list; the hash in the magnet link is the MD4 hash of the MD4 hashes of the file blocks). These two clients add a protocol extension called the AICH algorithm (basically a Merkle tree of SHA-1 hashes). The root hash might be a part of the magnet link, but does not have to be (since it is not part of the original protocol). If the root hash is part of the magnet link, then the received tree can be checked. If it is not part of the magnet link, then the client attempts to retrieve it from clients that support AICH. If at least 10 clients send the same root hash and that is at least 92% of all received root hashes, the root hash is considered trusted for the current session. It is definitely not 100% secure, but the more clients support AICH, the more often you will find the root hash in the magnet link (thus being able to check the received tree). Even in the absence of a root hash in the magnet link, an attacker needs to control a significant portion of the network to be able to spoof the root hash with high probability.

A simple concept that would allow replacing the hash functions would be adding optional parameters to the protocol specification. Then users can switch clients continuously, i.e. users with old clients will not be cut off from the network each time the hash function changes.
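For illustration, the sampling pattern described above can be sketched as follows. This is a simplified stand-in of my own, not the real UUHash (which also mixes in a second "smallhash" and differs in detail), but it shows why collisions are trivial: bytes outside the sampled windows never enter the hash at all.

```python
import hashlib

CHUNK = 300 * 1024   # 300 kB
MB = 1 << 20

def uuhash_sketch(data: bytes) -> bytes:
    # hash the first 300 kB, then 300 kB starting at the 1 MB boundary,
    # then at each subsequent 2^n MB boundary (2 MB, 4 MB, 8 MB, ...)
    h = hashlib.md5()
    h.update(data[:CHUNK])
    offset = MB
    while offset < len(data):
        h.update(data[offset:offset + CHUNK])
        offset *= 2
    return h.digest()

# Trivial collision: flip a byte that lies outside every sampled window.
a = bytes(4 * MB)
b = bytearray(a)
b[700 * 1024] = 0xFF   # 700 kB is in none of [0,300k), [1M,1M+300k), [2M,2M+300k)
assert uuhash_sketch(a) == uuhash_sketch(bytes(b))
```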

### Re: Factorization polynomially reducible to discrete log - known fact or not?

David Wagner wrote:

> The algorithm is very simple:
> 1. Choose a big random value x from some very broad range (say, {1,2,..,N^2}).
> 2. Pick a random element g (mod N).
> 3. Compute y = g^x (mod N).
> 4. Ask for the discrete log of y to the base g, and get back some answer x' such that y = g^x' (mod N).
> 5. Compute x-x'. Note that x-x' is a multiple of phi(N), and it is highly likely that x-x' is non-zero. It is well-known that given a non-zero multiple of phi(N), you can factor N in polynomial time.

Not exactly. Consider N = 3*7 = 21, phi(N) = 12, g = 4, x = 2, x' = 5: here x-x' = -3, which is not a multiple of phi(N).

> You'll only get a multiple of phi(N) if g was a generator of the multiplicative group Z_N^*. When N is a large RSA modulus, there is a non-trivial probability that g will be a generator (or that g will be such that x-x' lets you factor N). The above is good enough for a polytime reduction.

Actually, you can never get a generator of Z_N^* unless p=q, because when p != q it is not a cyclic group (this error was in my proof, too). Though with great probability you get an element of high order. Taking the lcm of several such differences x-x' is enough to recover a usable multiple of phi(N), and it runs in polynomial time. I noted that the fact that IFP(N) reduces to DLOG in Z_N^* is indeed mentioned in the Handbook of Applied Cryptography, but without proof or algorithm, just two lines (I guess that's why I missed/didn't remember it).
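The "well-known" last step, factoring N from a non-zero multiple of phi(N), can be sketched like this (my own code; it is the standard randomized method, essentially the same trick used to factor N from an RSA private exponent):

```python
import math
import random

def factor_from_phi_multiple(N: int, M: int) -> int:
    """Return a non-trivial factor of N = p*q, given a non-zero multiple M
    of phi(N). Write M = 2^s * t with t odd; for random a coprime to N the
    sequence a^t, a^(2t), a^(4t), ... reaches 1 mod N, and with good
    probability passes through a non-trivial square root of 1, which
    splits N via a gcd."""
    t, s = M, 0
    while t % 2 == 0:
        t //= 2
        s += 1
    while True:
        a = random.randrange(2, N - 1)
        g = math.gcd(a, N)
        if g > 1:
            return g                       # lucky: a shares a factor with N
        b = pow(a, t, N)
        if b in (1, N - 1):
            continue                       # this a reveals nothing; retry
        for _ in range(s):
            c = (b * b) % N
            if c == 1:
                return math.gcd(b - 1, N)  # b is a non-trivial sqrt of 1
            if c == N - 1:
                break                      # dead end for this a; retry
            b = c

f = factor_from_phi_multiple(61 * 53, 60 * 52)   # phi(3233) = 3120
assert f in (53, 61)
```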

### Re: Factorization polynomially reducible to discrete log - known fact or not?

Charlie Kaufman wrote:

> I believe this has been known for a long time, though I have never seen the proof. I could imagine constructing one based on the quadratic sieve. I believe that a proof that the discrete log problem is polynomially reducible to the factorization problem is much harder and more recent (as in sometime in the last 20 years). I've never seen that proof either. --Charlie

OK, I had the proof checked. I put it here: http://www.ms.mff.cuni.cz/~miklo1am/Factorization_to_DLog.pdf

Warning: it may not be what you'd expect. First of all, it reduces factorization to a discrete log in a group of unknown order (or put another way: you'd need to factorize to learn the group order). It has been proven by V. Shoup that when the group operation and inversion are the only operations that can be performed on group elements, the best algorithm needs on the order of sqrt(n) operations, where n is the number of elements. I guess the group Z_N* (where N=pq) of unknown order qualifies for this if we don't want to use factorization (actually you can't compute the inverse group operation here). In the light of this fact, is this proof of any use?

Even if the proof is not useful, is the generator-picking lemma (lemma 2) anything new? It states basically this: in any cyclic group of order n there is at least a 1/log2(n) probability of picking a generator at random, and thus a generator can be found in polynomial time with overwhelming probability of success. The only facts close to this lemma I found were:

1) The product of phi(p_i)/p_i over consecutive primes p_i approaches zero as more and more factors are added to the product (phi is the Euler phi function). The lemma states a lower bound for the product.

2) If the generalized Riemann hypothesis is true, then for every prime number p there exists a primitive root modulo p that is less than 70 (ln(p))^2. (http://en.wikipedia.org/wiki/Primitive_root_modulo_n)

Charlie: thanks for answering my second question, which I had not asked yet :-) (the reduction in the opposite direction). I'm also working on the opposite reduction, but I'm at best halfway through (and not sure if I am able to finish it).

Last question: Joseph Ashwood mentioned someone who claimed to have an algorithm for factorization and had only the reduction to DLP. Does anyone know where I could find the algorithm? Or maybe the name of the person, so I could search the web. Thanks O. Mikle
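The lemma's bound is easy to check numerically for small cyclic groups. A brute-force sketch over Z_p* (my own illustration of the claimed 1/log2(n) lower bound, not code from the paper):

```python
import math

def prime_factors(n: int) -> set:
    # trial division; fine for small illustrative n
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def generator_fraction(p: int) -> float:
    # fraction of elements of the cyclic group Z_p* (order n = p-1) that
    # generate it; g is a generator iff g^(n/q) != 1 for every prime q | n
    n = p - 1
    qs = prime_factors(n)
    count = sum(1 for g in range(1, p)
                if all(pow(g, n // q, p) != 1 for q in qs))
    return count / n

for p in (101, 1009, 10007):
    assert generator_fraction(p) >= 1 / math.log2(p - 1)  # the lemma's bound
```

For p = 101 the fraction is phi(100)/100 = 0.4, comfortably above 1/log2(100), which is about 0.15.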

### Factorization polynomially reducible to discrete log - known fact or not?

Hello. I believe I have a proof that factorization of N=p*q (p, q prime) is polynomially reducible to the discrete logarithm problem. Is this a known fact or not? I searched for such a proof, but only found that the two problems are believed to be equivalent (i.e. no proof). I still might have some error in the proof, so it needs to be checked by someone yet. I'd like to know whether it is already known (in that case there would be no reason to bother with it). Thanks O. Mikle

### Re: expanding a password into many keys

On 6/12/05, Ian G [EMAIL PROTECTED] wrote:

> I'd like to take a password and expand it into several keys. It seems like a fairly simple operation of hashing the concatenation of the password with each key name in turn to get each key. Are there any 'gotchas' with that? iang

I suggest you use a scheme like PKCS #5 PBKDF2 (password-based key derivation function). The only difference between your idea and PBKDF2 is that the latter does a lot of hash rounds and is salted (I guess you pick the key names to be static and not random, so they are not used as salts). Salting helps a bit against static precomputed hashes and techniques like rainbow tables. Ondrej Mikle
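A minimal sketch of the suggestion, using Python's standard hashlib (the key names and iteration count here are illustrative, not prescribed by the thread):

```python
import hashlib

def derive_key(password: bytes, key_name: bytes, length: int = 32) -> bytes:
    # PBKDF2-HMAC-SHA256 with the key name in the salt position: the many
    # iterations slow down brute force; note that a static key name, unlike
    # a random per-user salt, does not defeat precomputation attacks.
    return hashlib.pbkdf2_hmac("sha256", password, key_name, 100_000,
                               dklen=length)

enc_key = derive_key(b"hunter2", b"encryption")
mac_key = derive_key(b"hunter2", b"mac")
assert enc_key != mac_key and len(enc_key) == 32
```

Different key names yield independent-looking keys from the same password, which is exactly the "expand into several keys" operation asked about.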

### Re: The Pointlessness of the MD5 attacks

On Tue, 14 Dec 2004 14:43:24 +, Ben Laurie [EMAIL PROTECTED] wrote:

> But the only way I can see to exploit this would be to have code that did different things based on the contents of some bitmap. My contention is that if the code is open, then it will be obvious that it does something bad if a bit is tweaked, and so will be suspicious, even if the something bad is not triggered in the version seen.

There are many ways to obfuscate code so that it will seem innocent; see for example the International Obfuscated C Code Contest (http://www.ioccc.org/main.html). It can be based on variable modification using buffer overflows, integer overflows, strange side effects of expression evaluation, etc.

Another possibility for an attacker is the use of deliberate and very rare race conditions, which only the attacker knows how to trigger. Race conditions are very difficult to discover. Cf. the Linux ptrace race condition (http://archives.neohapsis.com/archives/bugtraq/2001-10/0135.html). It was present in kernels 2.2.0 - 2.2.19 and 2.4.0 - 2.4.9 and allowed local privilege escalation. It took quite a long time to discover, even though the code was open source. Quite a long window of opportunity, if we assume an attacker would plant a similar flaw deliberately.

> So, to exploit this successfully, you need code that cannot or will not be inspected. My contention is that any such code is untrusted anyway, so being able to change its behaviour on the basis of embedded bitmap changes is a parlour trick.

That's true in theory, but it's different in the real world. Take Microsoft software as an example. Many banks use their software (and sometimes even the military). I don't think all of them reviewed Microsoft's source code (I guess only a few, if any at all). There was an incident of a worm attacking ATMs. Another example: Enigma was sold after WW2, but the Allies knew it could be broken; the purchasers did not. Same as when the US sold radio equipment that used frequency hopping to Iraq during the 1980s: the US knew it could be broken (just in case...). Ondrej Mikle