Re: pipad, was Re: bounded storage model - why is R organized as 2-d array?
| >| Anyone see a reason why the digits of Pi wouldn't form an excellent
| >| public large (infinite, actually) string of "random" bits?
|
| >The issue would be: Are there any dependencies among the bits of
| >pi that would make it easier to predict where an XOR of n streams of
| >bits taken from different positions actually come from - or, more
| >weakly, to predict subsequent bits.
|
| When you build this scheme, you have to compare it to all other ways
| of generating random-looking keystream for a stream cipher. That
| means comparing it with generators which are guaranteed to be as hard
| to predict output bits for as a large integer is hard to factor, for
| example. Beyond the coolness factor of pi, it's hard to work out what
| additional guarantee we're getting from using it here.
|
| I don't know what the performance of the algorithm for generating the
| next n bits of pi looks like, but I'm guessing that I can do a fair
| number of AES/Serpent/Twofish/MARS/RC6/Blowfish/CAST/3DES/IDEA/RC5
| calculations for the same cost. And we know how to build a stream
| cipher out of all those ciphers (running in OFB or counter mode) which
| is guaranteed to be as strong as the strongest of them.
|
| It's all about tradeoffs between performance, security, and what
| strong statements you can make about your security when you're done.
| In some applications, I am willing to give up a lot of performance for
| a solid proof of security; in others, I am willing to give up any hope
| of a proof of security to get really good performance.

I agree 100% from a practical point of view. Given that you would have
to use a very large prefix of pi - at least 2^128 bits, probably more -
just the large-number arithmetic needed pretty much guarantees that you
aren't going to get a competitive system.

I do find this interesting from a theoretical point of view, since it
*might* give us some kind of provable security.
We don't seem to have any techniques that have any hope of proving
security for any conventional system. What we have are reductions. An
approach like this ties into a very rich body of mathematics, and
*might* lead to a path to a proof. (Given the connection that this
work has to chaotic dynamical systems, there's even the outside
possibility that one might get a provably secure efficient random bit
generator out of such systems.)

I certainly wouldn't want to place any bets here. In fact, my guess is
that this won't go through - that the best you'll get is a result of
the form: The set of reals for which a system is *not* provably secure
has measure 0. Unfortunately, either you can't write down any
*particular* r that works, or there are artificially constructed r's
that are too expensive to compute.

-- Jerry

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
Re: pipad, was Re: bounded storage model - why is R organized as 2-d array?
>From: [EMAIL PROTECTED]
>Sent: Mar 21, 2006 9:58 AM
>To: [EMAIL PROTECTED]
>Cc: [EMAIL PROTECTED], cryptography@metzdowd.com
>Subject: Re: pipad, was Re: bounded storage model - why is R organized as
>2-d array?
...
>| Anyone see a reason why the digits of Pi wouldn't form an excellent
>| public large (infinite, actually) string of "random" bits?

>The issue would be: Are there any dependencies among the bits of
>pi that would make it easier to predict where an XOR of n streams of
>bits taken from different positions actually come from - or, more
>weakly, to predict subsequent bits.

When you build this scheme, you have to compare it to all other ways
of generating random-looking keystream for a stream cipher. That means
comparing it with generators which are guaranteed to be as hard to
predict output bits for as a large integer is hard to factor, for
example. Beyond the coolness factor of pi, it's hard to work out what
additional guarantee we're getting from using it here.

I don't know what the performance of the algorithm for generating the
next n bits of pi looks like, but I'm guessing that I can do a fair
number of AES/Serpent/Twofish/MARS/RC6/Blowfish/CAST/3DES/IDEA/RC5
calculations for the same cost. And we know how to build a stream
cipher out of all those ciphers (running in OFB or counter mode) which
is guaranteed to be as strong as the strongest of them.

It's all about tradeoffs between performance, security, and what
strong statements you can make about your security when you're done.
In some applications, I am willing to give up a lot of performance for
a solid proof of security; in others, I am willing to give up any hope
of a proof of security to get really good performance.

> -- Jerry

--John Kelsey
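[Editor's sketch: the combining trick Kelsey describes - XOR independent keystreams so the result is at least as hard to predict as the strongest component - can be illustrated in a few lines. This toy Python version uses counter-mode hash keystreams (SHA-256 and BLAKE2b) as stand-ins for real block ciphers running in counter mode; `keystream` and `combined_keystream` are illustrative names, and this is a sketch of the construction, not a vetted cipher.]

```python
import hashlib

def keystream(hashname, key, nbytes):
    # Counter-mode keystream: concatenate hash(key || counter) blocks.
    out = b""
    ctr = 0
    while len(out) < nbytes:
        h = hashlib.new(hashname)
        h.update(key + ctr.to_bytes(8, "big"))
        out += h.digest()
        ctr += 1
    return out[:nbytes]

def combined_keystream(key1, key2, nbytes):
    # XOR of two independently-keyed keystreams: predicting the result
    # requires predicting at least one of them given the other, so the
    # combination is as strong as the stronger generator (with
    # independent keys).
    s1 = keystream("sha256", key1, nbytes)
    s2 = keystream("blake2b", key2, nbytes)
    return bytes(a ^ b for a, b in zip(s1, s2))
```

The same pattern extends to any number of generators; an attacker who can predict the combined stream can predict each component stream given the others.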
Re: pipad, was Re: bounded storage model - why is R organized as 2-d array?
| Anyone see a reason why the digits of Pi wouldn't form an excellent
| public large (infinite, actually) string of "random" bits?
|
| There's even an efficient digit-extraction (a/k/a "random access to
| fractional bits") formula, conveniently base 16:
| http://mathworld.wolfram.com/BBPFormula.html
|
| I dub this "pi pad".

The issue would be: Are there any dependencies among the bits of pi
that would make it easier to predict where an XOR of n streams of bits
taken from different positions actually come from - or, more weakly,
to predict subsequent bits. I doubt anyone knows. What would worry me
is exactly the existence of the algorithm that would make this
approach workable: A way to compute the i'th digit of pi without
computing all the earlier ones.

As a starter problem, how about a simpler version: Take n=1! That is,
the key is simply a starting position in pi - taken from a suitably
large set, say the first 2^256 bits of pi - and we use as our one-time
pad the bits of pi starting from there. An attacker's problem now
turns into: Given a sequence of k successive bits of pi taken from
among the first 2^256 bits, can you do better than chance in
predicting the (k+1)'st bit? The obvious approach of searching through
pi for matches doesn't look fruitful, but perhaps we can do better.
Note that if pi *isn't* normal to base 2 - and we still don't know if
it is - this starter problem is soluble.

BTW, Bailey and Crandall's work - which led to this discussion - ties
the question of normality to questions about chaotic sequences. If the
approach of using pi as a one-time pad works, then all the systems
based on chaotic generators will suddenly deserve a closer look!
(Many fail for much simpler reasons than relying on such a generator,
but some are untrustworthy not because we don't know of an attack but
because we have no clue how to tell if there is one.)

| Is this idea transcendental or irrational?
Mathematician's insult: You're transcendental (dense and totally
irrational).

-- Jerry
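[Editor's sketch: the n=1 starter scheme - the key is just a starting position in a public random string - looks like this in toy form. `os.urandom` stands in for the public bits of pi; the real proposal would draw the pad from pi, and the names here are illustrative.]

```python
import os

# Stand-in for the public "random" string; the real proposal would use
# bits of pi here (extractable at random positions via the BBP formula).
PUBLIC_STRING = os.urandom(1 << 16)  # 64 KiB toy public string

def encrypt(offset, plaintext):
    # n = 1: the secret key is only the starting position; the one-time
    # pad is the public string read from that offset.
    pad = PUBLIC_STRING[offset:offset + len(plaintext)]
    return bytes(p ^ k for p, k in zip(plaintext, pad))

decrypt = encrypt  # XOR is its own inverse
```

The attacker's problem stated above then becomes: given k successive pad bytes recovered from a known plaintext, predict the next one without knowing `offset`.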
pipad, was Re: bounded storage model - why is R organized as 2-d array?
Anyone see a reason why the digits of Pi wouldn't form an excellent
public large (infinite, actually) string of "random" bits?

There's even an efficient digit-extraction (a/k/a "random access to
fractional bits") formula, conveniently base 16:
http://mathworld.wolfram.com/BBPFormula.html

I dub this "pi pad".

Is this idea transcendental or irrational?
--
Security Guru for Hire http://www.lightconsulting.com/~travis/ -><-
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066 151D 0A6B 4098 0C55 1484

[Moderator's note: I'd say "irrational" but I'll let other people chime
in first. --Perry]
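[Editor's sketch: the linked BBP formula gives exactly the random access described above - hex digit d of pi without computing the earlier digits. A minimal double-precision Python version; adequate for small positions, while large positions would need higher-precision arithmetic.]

```python
def pi_hex_digits(d, n=8):
    """Return n hex digits of pi starting at fractional hex position d
    (0-based), via the Bailey-Borwein-Plouffe digit-extraction method."""
    def S(j):
        # Head of the series (k <= d): modular exponentiation keeps
        # only the fractional contribution of each term.
        s = 0.0
        for k in range(d + 1):
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        # Tail (k > d): terms shrink by 1/16 each step; a few suffice.
        t, k = 0.0, d + 1
        while True:
            term = 16.0 ** (d - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return s + t

    # Fractional part of 16^d * pi from the BBP series.
    x = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    digits = ""
    for _ in range(n):
        x *= 16
        digit = int(x)
        digits += "0123456789abcdef"[digit]
        x -= digit
    return digits
```

In hex, pi = 3.243f6a88..., so `pi_hex_digits(0, 8)` yields "243f6a88". The key point for "pi pad" is that the cost depends on d, not on how many digits precede d being skipped individually.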
Re: bounded storage model - why is R organized as 2-d array?
At 06:13 PM 3/9/2006 +, Ben Laurie wrote:

Steven M. Bellovin wrote:
> On Thu, 09 Mar 2006 02:10:58 -0500
> [EMAIL PROTECTED] wrote:
>
>> This is very useful for encrypting things like video
>> streams without an expensive hardware cryptographic accelerator card.
>>
> I think you vastly overestimate how much hardware one needs to do
> something like AES. I ran
>
> dd if=/dev/zero bs=32k count=1024 | openssl speed aes-128-cbc

I presume you were expecting this to test encrypting a long stream of
data. It doesn't. "openssl speed" does encryption internally - the
stuff on stdin was ignored. Something like:

dd if=/dev/zero bs=32k count=1024 | openssl enc -aes-128-cbc > /dev/null

is probably what you want (untested).

You have to be careful. I meant in regards to the incremental overhead
of the cipher once the cleartext is loaded from DRAM into L1 cache -
and the cleartext data streaming in is enough to cause the L1 cache to
be refreshed constantly during the timing test. Of course if the Vernam
cipher's pad is constantly loading in too, then its DRAM overhead must
be counted against it. The best ones tend to place as much of the
original random material as possible in L1 and randomly remix the pads
inside. It's very tricky, balancing speed against the decay rate of the
cipher (and countering things like partial key attacks).

- Alex Alten
Re: bounded storage model - why is R organized as 2-d array?
At 10:37 AM 3/9/2006, Chris Palmer wrote:
> Right, but even though a 1.5GHz machine is a bit old (heh...) for a
> workstation, my dinky little Linksys WRT54GC wireless AP still needs
> to AES-encrypt a theoretical maximum of 54Mbps when I turn on WPA.

Unless you're using your Linksys for file-sharing between machines at
home, you're not likely to be encrypting more than about 6 Mbps (or
whatever DSL and Cable Modem do these days in better cities.)

> Thus, something faster than AES, but still strong, would be nice.
> Your point about CPU cache size vs. pad size is well-taken, though.

I'd trust RC4-used-correctly before trusting Tri-Strata, if there
weren't so much bad history of people misusing RC4...
Re: bounded storage model - why is R organized as 2-d array?
Steven M. Bellovin writes:

> I think you vastly overestimate how much hardware one needs to do
> something like AES. I ran
>
> dd if=/dev/zero bs=32k count=1024 | openssl speed aes-128-cbc
>
> on a 1500 MHz Athlon. It reported speeds of ~27.5 MBps, or 220 Mbps.
> Even video isn't that fast, and that's a slow CPU by today's
> standards.

Right, but even though a 1.5GHz machine is a bit old (heh...) for a
workstation, my dinky little Linksys WRT54GC wireless AP still needs
to AES-encrypt a theoretical maximum of 54Mbps when I turn on WPA.

http://www.linksysinfo.org/modules.php?name=Content&pa=showpage&pid=12&page=1

Its adorable little CPU is a far cry from a 1.5GHz Athlon. :) I haven't
tested to see if there is any difference in throughput with WPA on vs.
off, but I'd be surprised if there weren't.

Thus, something faster than AES, but still strong, would be nice. Your
point about CPU cache size vs. pad size is well-taken, though.

This message signed and nonrepudiable for your reading pleasure,
--
https://www.eff.org/about/staff/#chris_palmer
Re: bounded storage model - why is R organized as 2-d array?
Steven M. Bellovin wrote:
> On Thu, 09 Mar 2006 02:10:58 -0500
> [EMAIL PROTECTED] wrote:
>
>> This is very useful for encrypting things like video
>> streams without an expensive hardware cryptographic accelerator card.
>>
> I think you vastly overestimate how much hardware one needs to do
> something like AES. I ran
>
> dd if=/dev/zero bs=32k count=1024 | openssl speed aes-128-cbc

I presume you were expecting this to test encrypting a long stream of
data. It doesn't. "openssl speed" does encryption internally - the
stuff on stdin was ignored. Something like:

dd if=/dev/zero bs=32k count=1024 | openssl enc -aes-128-cbc > /dev/null

is probably what you want (untested).

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html       http://www.links.org/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff
Re: bounded storage model - why is R organized as 2-d array?
On Thu, 09 Mar 2006 02:10:58 -0500 [EMAIL PROTECTED] wrote:

> This is very useful for encrypting things like video
> streams without an expensive hardware cryptographic accelerator card.

I think you vastly overestimate how much hardware one needs to do
something like AES. I ran

dd if=/dev/zero bs=32k count=1024 | openssl speed aes-128-cbc

on a 1500 MHz Athlon. It reported speeds of ~27.5 MBps, or 220 Mbps.
Even video isn't that fast, and that's a slow CPU by today's standards.

Also -- I don't know how large these random tables have to be, but if
they don't fit in cache the cipher will be quite slow -- memory
bandwidth hasn't increased nearly as rapidly as CPU speed; modern
machines utterly rely on their caches.

--Steven M. Bellovin, http://www.cs.columbia.edu/~smb
Re: bounded storage model - why is R organized as 2-d array?
> - Original Message -
> From: "Travis H." <[EMAIL PROTECTED]>
> To: cryptography@metzdowd.com
> Subject: bounded storage model - why is R organized as 2-d array?
> Date: Thu, 2 Mar 2006 23:06:41 -0600
>
> Hey,
>
> In Maurer's paper, which is the last link here on the following page,
> he proposes to use a public random "pad" to encrypt the plaintext
> based on bits selected by a key. What I'm wondering is why he chose
> the strange construction for encryption; namely, that he uses an
> additive (mod 2) cipher where each plaintext bit is (apparently) XORed
> against K bits from the random pad. He also uses a 2-d array
> structure, both of which appear kind of arbitrary to me.
>
> http://www.crypto.ethz.ch/research/itc/samba/

This is a subject that I'm painfully familiar with. Dr. Maurer has
slowly but surely over the last decade or so been putting together a
solid proof of a cipher very similar in construction to Dr. Atalla's
proprietary Tristrata cipher (US patent #6088449).

The basic idea is that you do something like random1 XOR random2 = pad,
then pad XOR plaintext = ciphertext. If random1 and random2 are arrays
of bits (n-bit random numbers) from a much larger amount of random
material, then one can tune the probability of the pad repeating to be
almost zero. Essentially you get an equation with 2 unknowns (pad and
plaintext), which is pretty hard to solve if the pad is hard to
determine.

The 2-D array simplifies getting the randomX's. Basically you treat
the initial random material as a big array of N random bit strings.
Each randomX value is a random strip inside each bit string. Each
strip is the same size (at TriStrata we typically used four 64-Kbyte
strips, each from a 250-Kbyte bit string). You XOR them together and
get a "random" pad (ours was 64 Kbytes long). We were able to XOR
about 32 Kbytes of plaintext into ciphertext before an inverse matrix
attack became feasible. Then a new pad had to be generated for the
next 32 Kbytes.
The reason people like Dr. A or Dr. M are interested in doing this is
because you can get super-fast ciphers or get some sort of fundamental
proof about their strength that is applicable to all types of ciphers.
The speed, from a software engineering perspective, results from the
classic tradeoff between memory lookup and execution loop complexity.
If one can get down to a couple of XORs and memory copies (per 32 or
64 bits of data) using ordinary microprocessor instructions, then it
will outperform AES by around a couple of orders of magnitude. This is
very useful for encrypting things like video streams without an
expensive hardware cryptographic accelerator card.

Dr. M showed in an earlier paper that if one uses enough randomX
values, you can get arbitrarily close to generating a truly random pad
each time you need it to encrypt some ciphertext (but with N XORs!).

Anyway, the big negatives are that with this type of Vernam cipher the
entropy decays really fast, especially if the initial big source of
random bits is publicly known; the pad management is really hairy; and
generating lots of random bits for the initial material is very slow.
Huge numbers of pads have to be generated and distributed all the
time, and a great deal of care has to be taken to mitigate the
cipher's decaying strength and to avoid several types of attacks on
the pad generation.

It is of great personal interest to me to watch Dr. M slowly prove
that the underlying mathematics of Dr. A's Tristrata cipher may not
have been snake oil after all.

- Alex
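[Editor's sketch: the strip-XOR pad construction described above, in toy Python form. This illustrates only the general idea - XOR several key-selected strips of a large shared random block into a working pad - and is not the actual TriStrata or Maurer construction; `make_pad`, the strip length, and the offsets are illustrative.]

```python
import os

def make_pad(material, strip_len, offsets):
    """XOR several equal-size strips of the shared random material
    into a single working pad."""
    pad = bytearray(strip_len)
    for off in offsets:
        strip = material[off:off + strip_len]
        for i, b in enumerate(strip):
            pad[i] ^= b
    return bytes(pad)

# Toy parameters echoing the post: four strips from a larger block.
material = os.urandom(250 * 1024)      # shared block of random bits
offsets = [0, 65536, 131072, 180000]   # strip positions (key-selected)
pad = make_pad(material, 64 * 1024, offsets)

# The pad is then XORed against plaintext, Vernam-style:
plaintext = b"example video frame bytes"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
```

The inverse-matrix attack mentioned above is why only a fraction of the pad's length can safely be used before new offsets must be chosen and a fresh pad built.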
bounded storage model - why is R organized as 2-d array?
Hey,

In Maurer's paper, which is the last link here on the following page,
he proposes to use a public random "pad" to encrypt the plaintext
based on bits selected by a key. What I'm wondering is why he chose
the strange construction for encryption; namely, that he uses an
additive (mod 2) cipher where each plaintext bit is (apparently) XORed
against K bits from the random pad. He also uses a 2-d array
structure, both of which appear kind of arbitrary to me.

http://www.crypto.ethz.ch/research/itc/samba/

Does anyone have information on:

1) Deep space sources or terrestrial satellite transmissions which
could be used as publicly-available random bits

2) The nature of noise, especially the noise when a receiver is
de-tuned (I have heard ~1% of this signal power is cosmic background
radiation left over from the big bang, and that the rest is largely a
mystery)

--
Security Guru for Hire http://www.lightconsulting.com/~travis/ -><-
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066 151D 0A6B 4098 0C55 1484