Cryptography-Digest Digest #550
Cryptography-Digest Digest #550, Volume #14 Thu, 7 Jun 01 13:13:01 EDT Contents: Re: Best, Strongest Algorithm (gone from any reasonable topic) (Tim Tyler) Re: Best, Strongest Algorithm (gone from any reasonable topic) (John A. Malley) Re: Def'n of bijection (Paul Pires) Re: Best, Strongest Algorithm (gone from any reasonable topic) (Tim Tyler) Re: OTP WAS BROKEN!!! (Tim Tyler) Re: Help with Comparison Of Complexity of Discrete Logs, Knapsack, and (Phil Carmody) Re: OTP WAS BROKEN!!! (Robert J. Kolker) Re: practical birthday paradox issues (Phil Carmody) Re: Best, Strongest Algorithm (gone from any reasonable topic) (Mok-Kong Shen) Re: Best, Strongest Algorithm (gone from any reasonable topic) (Mok-Kong Shen) Re: Best, Strongest Algorithm (gone from any reasonable topic) (Mok-Kong Shen) Re: Best, Strongest Algorithm (gone from any reasonable topic) ([EMAIL PROTECTED]) Re: Best, Strongest Algorithm (gone from any reasonable topic) (Tim Tyler) From: Tim Tyler [EMAIL PROTECTED] Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) Reply-To: [EMAIL PROTECTED] Date: Thu, 7 Jun 2001 15:09:19 GMT [EMAIL PROTECTED] wrote: : Tim Tyler [EMAIL PROTECTED] writes: : [EMAIL PROTECTED] wrote: :: Tim Tyler [EMAIL PROTECTED] writes: :: Tom St Denis [EMAIL PROTECTED] wrote: ::: Or if you just pad the bloody thing to a multiple of say 64 bytes. [...] :: :: Still not enough for perfect secrecy :-( :: :: Right--no matter what ``a multiple'' means. : : That's correct - so long as no restraints are placed on the set of : possible plaintexts. : Exactly. So why do you keep switching premises? Specifically: : 1. When somebody says, ``OTP on padded messages gives (Tim Tyler's :definition of) perfect secrecy,'' you reply, ``No, because no :amount of padding is enough.'' In other words, you assume that the :space of plaintexts is infinite. : 2. 
When somebody replies, ``Okay: if the space of messages is infinite, :then (your definition of) perfect secrecy is impossible to achieve.'' :You reply, ``No, because the space of messages is actually finite.'' :(Or alternately, ``...has cardinality 2 in my universe.'') You dare to misquote me in the course of misrepresenting my position. You misquote yourself as well to distort things still further - but I guess that's your privilege. We can talk again when you have learned to put quotation marks around stuff people have actually said. -- __ |im |yler [EMAIL PROTECTED] Home page: http://alife.co.uk/tim/ -- From: John A. Malley [EMAIL PROTECTED] Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) Date: Thu, 07 Jun 2001 08:18:08 -0700 Tim Tyler wrote: : You probably question whether such usage leads to : Shannon's perfect security which, as you said, is claimed : to be a property of OTP. However, I don't see where in the : literature about OTP (in connection with perfect security) : the length enters into the argumentation, i.e. plays a role : in the proof. Shannon's paper Communication Theory of Secrecy Systems addresses this. Perfect secrecy is a property of the OTP (i.e. the Vernam cipher specifically cited in that paper) AND message length DOES enter into the argument. However, using an OTP is NOT required for perfect secrecy when the set of messages is finite. :-) I also think that it's not mentioned. I believe it is common to consider the domain where all plaintexts are the same length - perhaps in order to get the perfect secrecy result. : My memory of Shannon's paper is no good, but I don't think that he : considered the length of the messages. I don't think it was mentioned either - all the messages were the same length in the system in question. -- Shannon's important paper on cipher systems carefully considers the length of the messages. Shannon shows the OTP is NOT required for a finite set of messages to give perfect secrecy. 
(I've posted on this before, given examples of such ciphers, just search google or drop me a note by email for more specific examples. :-) ) The OTP is required for message sources with an infinite number of messages. From page 682 of Communication Theory of Secrecy Systems, C. E. Shannon, Bell System Technical Journal, pp. 656-715, 1949: The situation [perfect secrecy] is somewhat more complicated if the number of messages is infinite. Suppose, for example, that they are generated as infinite sequences of letters by a suitable Markov process. It is clear that no finite key will give perfect secrecy. We suppose, then, that the key source generates key in the same manner, that is, as an infinite sequence of symbols. Suppose further that only a certain length of key L_k is needed to encipher and decipher a length L_m of message. Let the logarithm of the number of letters in the message alphabet be R_m
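The property being argued over can be made concrete with a minimal Python sketch of the Vernam/OTP encryption Shannon analyses (function names are mine, for illustration): for a fixed message length, every ciphertext is equally likely under a uniform key, but the ciphertext length still equals the plaintext length - which is exactly where padding and message length enter the argument.

```python
def otp_encrypt(msg: bytes, key: bytes) -> bytes:
    # Vernam cipher: XOR each message byte with a fresh key byte.
    assert len(key) >= len(msg)
    return bytes(m ^ k for m, k in zip(msg, key))

def ciphertext_counts(msg: bytes) -> dict:
    # Tally the ciphertexts produced by every possible 1-byte key.
    counts = {}
    for k in range(256):
        c = otp_encrypt(msg, bytes([k]))
        counts[c] = counts.get(c, 0) + 1
    return counts

# Under a uniform key, every ciphertext of a given length is equally
# likely, whichever plaintext of that length was sent.  But the length
# itself is not hidden: len(ciphertext) == len(plaintext), which is why
# padding and message length enter the perfect-secrecy argument at all.
```

Decryption is the same XOR, so the round trip is immediate; the interesting check is that the ciphertext distribution is uniform and identical for any two plaintexts of the same length.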
Cryptography-Digest Digest #550, Volume #13 Thu, 25 Jan 01 18:13:01 EST Contents: Re: unknown encryption algorithm ("Douglas A. Gwyn") Re: Random stream testing. (long) ("Paul Pires") Re: crypt(3C) algorithm under Solaris 2.6? (Jim Gillogly) Re: Differential Analysis of "A + (B xor X)" ([EMAIL PROTECTED]) Re: Barrett modular reduction (Bryan Olson) Re: DES check values (58) Re: How much of this group's discussion violates the DMCA ("Douglas A. Gwyn") Re: Differential Analysis of "A + (B xor X)" (Alexis Machado) Re: NSA and Linux Security (Greggy) Re: How much of this group's discussion violates the DMCA (lcs Mixmaster Remailer) Re: How much of this group's discussion violates the DMCA (Darren New) Re: How much of this group's discussion violates the DMCA (Ichinin) Re: unknown encryption algorithm ([EMAIL PROTECTED]) Re: Dynamic Transposition Revisited (long) (Terry Ritter) Re: Fitting Dynamic Transposition into a Binary World (Terry Ritter) Re: Random stream testing. (long) ("Douglas A. Gwyn") Re: Echelon in Asia. (Jim) From: "Douglas A. Gwyn" [EMAIL PROTECTED] Subject: Re: unknown encryption algorithm Date: Thu, 25 Jan 2001 19:30:57 GMT [EMAIL PROTECTED] wrote: plaintext: dclient177-193 some examples of encrypted (all of the same plaintext) kUqmnlaqr433-774 jemuvastu663-255 aEcofidqd366-512 Djltfhmpi664-422 2dnsymiov322-450 EEouuclio090-343 The first two characters are evidently a "salt", or key used in the encryption. Since letters encrypt to letters, digits to digits, and punctuation to itself, it must be a simple ad-hoc scheme. I looked at the bit level for a simple Vigenere scheme, assuming ASCII coding, but didn't find one. -- From: "Paul Pires" [EMAIL PROTECTED] Crossposted-To: sci.crypt.random-numbers Subject: Re: Random stream testing. (long) Date: Thu, 25 Jan 2001 12:22:10 -0800 Matt Timmermans [EMAIL PROTECTED] wrote in message news:OvNb6.5022$[EMAIL PROTECTED]... "Paul Pires" [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]... 
snip You're testing PRNGs, right? So you can make as much data as you like. I would suggest using 100 meg samples, and accepting as "OK" any result between 0.0001 and .. Anything outside of that would make me nervous, Well, I have what I think is a bizarre scheme for a PRNG and I just can't make it fail. Not even a little bit looking at gigabytes in every way I can think of. I'm searching for what to do now since I am still curious about it and more testing seems pointless. I guess I was searching for the "PRNG evaluation process - Level 2 handbook" for clueless newbies and the mathematically challenged. Pretty unrealistic. unless I had several hundred test results. If you get something outside of that "good range", but you're running a whole lot of tests, then make a new or larger sample and run it again -- if it is an artifact of your PRNG, then it should be reasonably consistent. It seems that you simply stumbled upon a bad description of how to use randomness tests. I seem to remember that Marsaglia (sp?) included a very nice description of how to interpret the results of his DIEHARD suite. Yes, he does say "but remember, P's happen" which I took to mean that results cannot be interpreted standalone but only in the context of the actual sample. Thanks. Paul Terry Ritter has made a convincing argument that data sets should be examined for any deviation from a random expectation including the case where the results are "too good". That is exactly right -- if you run the same tests on multiple samples, you should expect an even distribution of P-values. If you run 1000 tests and don't get any results outside of .25-.75, it is very likely that your data isn't random. == Posted via Newsfeeds.Com, Uncensored Usenet News == http://www.newsfeeds.com - The #1 Newsgroup Service in the World! === Over 80,000 Newsgroups = 16 Different Servers! == -- From: Jim Gillogly [EMAIL PROTECTED] Subject: Re: crypt(3C) algorithm under Solaris 2.6? 
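The p-value point above can be demonstrated with the standard library alone: a minimal monobit (frequency) test in the style of the NIST/DIEHARD suites, run over many independent samples. On genuinely random input the p-values spread over (0,1) - about half should land outside [0.25, 0.75] - so a run of tests where none do is itself evidence of non-randomness. Test choice and sample sizes here are illustrative, not prescriptive.

```python
import math
import random

def monobit_p(bits):
    # Frequency (monobit) test: S = (#ones - #zeros); for random bits
    # the p-value erfc(|S| / sqrt(2n)) is roughly uniform on (0, 1).
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

rng = random.Random(1)  # seeded so the run is repeatable
pvals = [monobit_p([rng.getrandbits(1) for _ in range(2048)])
         for _ in range(200)]
outside = sum(1 for p in pvals if p < 0.25 or p > 0.75)
# Roughly half the 200 p-values should fall outside [0.25, 0.75]; a run
# with none outside that band would itself be "too good", as Ritter says.
```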
Date: Thu, 25 Jan 2001 12:29:51 -0800 [EMAIL PROTECTED] wrote: Does anybody know what encryption algorithm is used by crypt(3C) under Solaris 2.6? crypt(3C) under Solaris is a hashing algorithm, not an encryption algorithm. It uses a "salt" of two characters and does 25 iterations of a runtime-modified DES (26*16 rounds). I'm looking for DES encryption under 2.6. Solaris 7 and 8 seem to include libraries for these. There's an encrypt(3C) that claims to provide encryption/decryption based on crypt(3C) and mentions, but since it doesn't give details in the man page I wouldn't use it. If you're determined to get
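For readers unfamiliar with the shape of crypt(3C)-style password hashing (a 2-character salt, an iterated one-way function, a 13-character result), here is an illustrative sketch. Note that it substitutes SHA-256 for the real salt-perturbed DES iteration, so it is NOT compatible with actual crypt(3C) output; it only shows how the salt and the iteration count fit together.

```python
import hashlib

def toy_crypt(password: str, salt: str, rounds: int = 25) -> str:
    # Shape of crypt(3C): 2-character salt + iterated one-way function,
    # giving a 13-character string (salt prefix + 11 hash characters).
    # ILLUSTRATIVE ONLY: real crypt(3C) iterates a salt-modified DES
    # 25 times; SHA-256 stands in for it here.
    assert len(salt) == 2
    state = (salt + password).encode()
    for _ in range(rounds):
        state = hashlib.sha256(salt.encode() + state).digest()
    return salt + state.hex()[:11]
```

The salt's job is visible in the sketch: the same password hashed under two different salts yields unrelated strings, defeating precomputed dictionaries.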
Cryptography-Digest Digest #550, Volume #12 Sun, 27 Aug 00 20:13:01 EDT Contents: Re: Encryption tool ([EMAIL PROTECTED]) Re: RSA Security Conference for 2001 (Samuel Paik) Re: New Site, Purple/Enigma/Sigaba/Russia Emulators (Charles Petersen) Re: PGP 6.5.8 test: That's NOT enough !!! (Olaf Schlüter) avalanche characteristic (Future Beacon) Re: avalanche characteristic (Ichinin) Re: avalanche characteristic ([EMAIL PROTECTED]) Re: On pseudo-random permutation (Bryan Olson) Re: Bytes, chars, and I/O (Mark McIntyre) Re: On pseudo-random permutation (Bryan Olson) Re: Destruction of CDs (Guy Macon) Re: PGP ADK Bug: What we expect from N.A.I. (Ed Reppert) Re: PGP ADK Bug: What we expect from N.A.I. ("gleu") Re: avalanche characteristic (Terry Ritter) Re: My (New) New algorithm ("Scott Fluhrer") From: [EMAIL PROTECTED] Subject: Re: Encryption tool Date: Sun, 27 Aug 2000 19:58:35 GMT In article [EMAIL PROTECTED], No User [EMAIL PROTECTED] wrote: Put your own encryption algorithm in here: http://homepages.go.com/~psychlcentral/cipher.html (source is completely viewable) Here is my magical puzzle iz nq u0 0ad tw lx 9rm6 ih 0qp 1ev0aj4a y9s bf0 pmyh ko3t I used the output of rand() mod 26 + 'a' as the Vigenère key. Tom Sent via Deja.com http://www.deja.com/ Before you buy. -- From: Samuel Paik [EMAIL PROTECTED] Subject: Re: RSA Security Conference for 2001 Date: Sun, 27 Aug 2000 13:30:11 -0700 [EMAIL PROTECTED] wrote: The 15 pages I can handle (the TC5 paper was 20 pages I think) but I don't know much of LaTex at all. Can somebody please help me get some Latex tools for win98 and how to use them? A commonly recommended TeX for Win32 is MiKTeX http://www.miktex.org/ There are others. You may want to check the TeX Users Group site. They seem (unverified) to use teTeX for their TeX Live CD http://www.tug.org/ -- Samuel S. 
Paik | http://www.webnexus.com/users/paik/ 3D and multimedia, architecture and implementation -- From: Charles Petersen [EMAIL PROTECTED] Subject: Re: New Site, Purple/Enigma/Sigaba/Russia Emulators Date: Sun, 27 Aug 2000 13:33:44 -0700 From what I've been told, they're all relatively weak, so superencipherment probably wouldn't help all that much. Though I really don't know, anyone else have comments? Mok-Kong Shen wrote: Charles Petersen wrote: I thought you all might like to check out my new site. http://dev.thinkquest.org/C004911/ It has a simulation and explanations of the cryptography used by the major powers of World War II. This includes java applets that emulate the Purple, Enigma, Sigaba, Russian Espionage Cipher, and a public domain Bombe. In addition, there is a public forum reminiscent of Just curious: If a message is superenciphered with a couple of these machines, how vulnerable is it in the time of modern technology? M. K. Shen -- From: Olaf Schlüter [EMAIL PROTECTED] Date: Sun, 27 Aug 2000 20:16:39 GMT Subject: Re: PGP 6.5.8 test: That's NOT enough !!! Crossposted-To: alt.security.pgp,comp.security.pgp.discuss Reading the Senderek report left me stunned. I was very amazed about the existence of a data part in a public key information record unprotected by a signature. Is there anything that should go into the non-hashed part of a public-key block ? From what I see so far the part not protected by a signature should remain empty and a warning should be issued if anything is in there. Why is there even such a thing as the non-hashed part ? Here is a list of security issues raised by unsigned parts of a security-relevant information: - X.509v1: algorithm parameters outside the signature of the certificate. The recommendation is now to ignore them and leave them empty upon creation. - SSL v2: downgrading attack. List of acceptable algorithms transmitted without signing or MACing it. Resolved in SSL v3. 
- PGP: see Senderek report Probably not complete. S/MIME pending, as there is also an unsigned information part in PKCS#7. But should have been enough to learn the lesson. Original message: On 26.08.00, 16:30:03, "Michel Bouissou" wrote on the subject Re: PGP 6.5.8 test: That's NOT enough !!!: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 "Keith" wrote in the message news: [EMAIL PROTECTED] There is no way for PGP to detect a forged key. That is what a signature and trust values are for. As long as PGP removes and/or doesn't recognize the forged ADK on a tampered key, which will lead to the encryption of a file or message to the forged ADK, then that is the proper action. You're wrong. An ADK should NEVER sit outside of the hashed part of the self-signature. An ADK sitting in the non-hashed part c
Cryptography-Digest Digest #550, Volume #11 Sat, 15 Apr 00 02:13:01 EDT Contents: Re: BlowWire ("Trevor L. Jackson, III") Number 37, same format output as 36 (wtshaw) Re: $100 Code Challenge (Tom St Denis) CLOSE Encryption ("MeneLaus") Re: CryptoBag Source (update) (Tom St Denis) Re: Help decrypt message exercise (Frederic Limouzin) Re: ? Backdoor in Microsoft web server ? (Tim Tyler) Re: ? Backdoor in Microsoft web server ? ("David C. Oshel") Re: Biggest Public-key Cryptography Crack Ever (David Hopwood) Open Public Key ([EMAIL PROTECTED]) Re: BlowWire ("Spleen Splitter") Re: BlowWire ("Spleen Splitter") Re: BlowWire ("Spleen Splitter") Re: Open Public Key (Tom St Denis) Re: BlowWire (Tom St Denis) Re: BlowWire ("Spleen Splitter") Re: Open Public Key ([EMAIL PROTECTED]) Re: ___IDEA (updated info.) (kctang) Re: Regulation of Investigatory Powers Bill (Your Name) Re: new Echelon article ("Stou Sandalski") Re: OAP-L3: Answer me these? ("Bo Johnson") Re: Open Public Key (Bill Unruh) Re: Biggest Public-key Cryptography Crack Ever (Scott Contini) Date: Fri, 14 Apr 2000 19:00:51 -0400 From: "Trevor L. Jackson, III" [EMAIL PROTECTED] Subject: Re: BlowWire [EMAIL PROTECTED] wrote: In article F1zJ4.3613$[EMAIL PROTECTED], "Spleen Splitter" Spleen*no spam*[EMAIL PROTECTED] wrote: I'm interested in the general mapping of algorithms to hardware. The random tides swung to Blowfish. I don't care how big it is. The more gates the example burns the better as far as I'm concerned, as long as the objective is met. That "mapping" is really a redesign process, and the reason why design engineers get paid well. You use the general-purpose (e.g., "C") implementation as a reference. Add lots of printfs. Watch the halves swap. Zero the subkeys and watch it. If you want to burn gates but not spend transistors on RAM, do IDEA. Use lots of multipliers, and go ahead, unroll a few stages. Or do an RSA accelerator. Commerce servers need them badly. 
I think it would be much more interesting and useful to build a FULLY pipelined version of triple-DES, with separate S-boxes at each of all 48 rounds, so it's a completely asynchronous circuit (no clocking) whose speed is limited only by the gate and lookup delays through the pipeline. Ok. This can be good, too. I'll go look for the code. Not sure about that asynchronous stuff, tho. Dynamic logic is getting somewhat out of favor as we go below 100nm. Really? Paul's async idea is interesting because self-timed design is interesting, though largely confined to research, but unrolling DES is not very challenging by itself. Asynchronous designs may be out of favor (I wouldn't know; not my field), but they have been used effectively. There's a robustness advantage because you can get maximum throughput without inserting artificial delays (wait states) necessary to maintain synchronicity. DEC used this in their busses to great advantage. Fast devices are not affected by the presence of slow devices, and slower (cheap) devices don't have to meet artificially high performance standards in order to be useful. -- From: [EMAIL PROTECTED] (wtshaw) Subject: Number 37, same format output as 36 Date: Fri, 14 Apr 2000 16:18:51 -0600 Several base translation algorithms have output in base 78, which could be expressed in a number of ways. I chose composite characters, which are in the form of letters in one of three forms, a, (a), or [a], referenced as diamond, circle, or square A. Banas used 77 character input, no digits allowed and certain other characters omitted, and an intermediate base of 26. The latest is Lima, which uses an intermediate base of 12, and transposition of those digits. It does not use substitution at the intermediate stage like Banas, but uses the same 26 character output substitution scheme used in Banas. In short, Lima has less key structure than Banas. 
Lima has these default keys: Sub(Li): abcdefghijklmnopqrstuvwxyz Trans(Li): abcdefg hijklmn opqrstu I have said that I will forego ciphertext examples in these related applications, which have alike-appearing output. Two of the five applications are done. One way of looking at these composite characters is that each represents a letter-trit pair. If normal 26 letter ciphertext was impressed with trits in this manner of Banas and Lima, two forms of encryption might be mixed, with either or both being significant. It could be that any number of other algorithms might have ciphertext be falsely classed as being kin to the Banas Family. The trit aspect could be stripped and sent some other way,
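The "letter-trit pair" view of the 78-character alphabet can be sketched directly: 78 = 26 letters times 3 forms (a, (a), [a]). Which form stands for which trit value is my assumption for illustration; the round-trip structure is the point.

```python
def encode78(v: int) -> str:
    # 78 symbols = 26 letters x 3 forms: a, (a), [a].
    # The assignment of forms to trit values 0, 1, 2 is an assumption
    # made here for illustration, not taken from the post.
    assert 0 <= v < 78
    letter = chr(ord('a') + v % 26)
    return [letter, "(" + letter + ")", "[" + letter + "]"][v // 26]

def decode78(sym: str) -> int:
    # Recover (letter, trit) from the composite character.
    if sym.startswith("("):
        return 1 * 26 + ord(sym[1]) - ord('a')
    if sym.startswith("["):
        return 2 * 26 + ord(sym[1]) - ord('a')
    return ord(sym[0]) - ord('a')
```

Stripping the trit (the `v // 26` part) and sending it separately, as the post suggests, leaves an ordinary 26-letter ciphertext behind.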
Cryptography-Digest Digest #550, Volume #10 Fri, 12 Nov 99 01:13:04 EST Contents: Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY) Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY) Re: Build your own one-on-one compressor (SCOTT19U.ZIP_GUY) Re: Compression: A ? for David Scott (SCOTT19U.ZIP_GUY) Re: Signals From Intelligent Space Aliens? Forget About It. (SCOTT19U.ZIP_GUY) Re: Build your own one-on-one compressor (Don Taylor) McAfee Fortress ("Kris Hendricks") From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) Crossposted-To: comp.compression Subject: Re: Build your own one-on-one compressor Date: Fri, 12 Nov 1999 04:19:14 GMT In article [EMAIL PROTECTED], [EMAIL PROTECTED] wrote: In sci.crypt Mok-Kong Shen [EMAIL PROTECTED] wrote: : Tim Tyler wrote: : In sci.crypt Mok-Kong Shen [EMAIL PROTECTED] wrote: : : As I said previously, for such numerical coding the compression is : : already so good that one need not (at least in the first : : experimental phase) consider the aspect of word frequencies. : : I doubt this. I expect non-dictionary words will typically bulk up the messages : by a larger factor than they are compressed by, for (say) email messages. : : It may be possible to develop a scheme that (roughly) breaks even on the : compression stakes - but I doubt good compression ratios will ever be obtained - : except on obscure or contrived types of text. : Just try to roughly compute the compression ratio of your paragraph : above (noting that each word is translated to 16 bits) and : you'll see that you get something that is probably better than : what you expect from a normal compression of ASCII text. If you design your dictionary for my message I don't doubt you can perform excellent compression. Will the 65536 words in your dictionary contain all the words I used? I think the scheme you are proposing can compress English texts. 
I doubt it will be as good as methods allowing variable-length symbols, and schemes like arithmetic coding which allow symbols which are not an integral number of bits in length. It's not a walk-over, though. Unless you choose your dictionary carefully, more bulking-up than slimming down will occur - as all rogue non-dictionary symbols get expanded up from one to two bytes. : Also, any 16-bit granularity in the output file will immediately render "8-bit" : one-on-one property invalid: if you have a file which is an odd number of bytes : long, you can rule it out immediately as a candidate compressed file ;-/ : : In fact, this will /probably/ have few implications for security, given various : assumptions - e.g. that the length of the compressed file is already clear. : Since each word is translated to 2 bytes, every compressed file : has a even number of bytes. Now, suppose one gets a 'wrong' file : which comes into being because the analyst uses an incorrect key : to decrypt. This file certainly also has an even number of bytes : (assuming normal block algorithms). Since the dictionary has : 2^16 entries, any 2 bytes, whatever the content, has a corresponding : word (the dictionary is assumed to be full, i.e. exhausting the : coding space of 2^16). [...] I agree - *if* ordinary block-encryption is employed. However, there /are/ techniques which attempt to disguise the file length, through the use of random padding. **If** these are used, the analyst may well be able to usefully discard supposedly compressed files if they are an odd number of bytes long. : Thus the 1-1 property (definition of Scott) is trivially present, since : one can translate the words back again to the same numbers. *Which* definition of "Scott"? "Scott" commonly mentions that ``Comp(Decomp(X)) = X for all X'' is what one-on-one compression involves. The scheme under discussion fails this - if X is an odd number of bytes long. 
I don't want to stress this too much - for most applications, it's probably a relatively minor issue. Actually you still can make one-to-one compression if you redefine a byte to be 16 bits. That way all files of your English text, when tokenized, would be a multiple of 16 bits. The Huffman compression could be based on a larger tree than I am presently using. But the resulting file would still be one-to-one. Even if a wrong key is used for the decryption, the file would decompress back to a file that is a multiple of 16 bits. Then that would map to the English or whatever words. Note this has nothing to do with whether the English file is an odd or even number of bytes. David A. Scott -- SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE http://www.jim.com/jamesd/Kong/scott19u.zip Scott famous encryption website NOT FOR WIMPS http://members.xoom.com/ecil/index.htm Scott rejected paper for the ACM http://members.xoom.com/ecil/dspaper.htm Scott famous Compression Page WI
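A toy version of the word-dictionary scheme under discussion makes the odd-length point concrete. The real proposal uses a full 2^16-entry dictionary, so that every even-length byte string decodes to some word sequence; here a 4-word stand-in keeps the structure visible, and odd-length inputs are rejected exactly as the thread describes. Names and dictionary contents are illustrative.

```python
# Toy dictionary coder: each word maps to a fixed 16-bit code.
WORDS = ["the", "cat", "sat", "mat"]
CODE = {w: i for i, w in enumerate(WORDS)}

def compress(words):
    # Each word becomes exactly 2 bytes, so output length is always even.
    out = bytearray()
    for w in words:
        out += CODE[w].to_bytes(2, "big")
    return bytes(out)

def decompress(data):
    # A file with an odd number of bytes cannot be a compressed file --
    # exactly the 16-bit-granularity objection raised above.
    assert len(data) % 2 == 0, "odd-length input is not a valid compressed file"
    return [WORDS[int.from_bytes(data[i:i + 2], "big")]
            for i in range(0, len(data), 2)]
```

With the full 2^16 dictionary, decompress would succeed on *any* even-length string, which is the sense in which the 1-1 property holds once the unit is redefined to 16 bits.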
Cryptography-Digest Digest #550, Volume #9 Fri, 14 May 99 14:13:03 EDT Contents: Re: Hello I am paper, please read me. (Bryan Olson) Re: Hello I am paper, please read me. ([EMAIL PROTECTED]) Re: Hello I am paper, please read me. (Gustaf Dellkrantz) Re: Hello I am paper, please read me. ([EMAIL PROTECTED]) Strength of PGP 1.0 conventional block cipher? (David Crick) Re: Lemming and Lemur: New Block and Stream Cipher ("Craig Clapp") Re: Crypto export limits ruled unconstitutional (Mok-Kong Shen) Re: RSA-modulus binary decomposition (Alan Braggins) Re: Hello I am paper, please read me. (Jim Felling) Re: Review of "Cryptonomicon" (Tim Maurer) WELCOM'99 Submission extended to May 29 (Uwe WILHELM) Re: Thought question: why do public ciphers use only simple ops like shift and XOR? (Christopher) From: Bryan Olson [EMAIL PROTECTED] Subject: Re: Hello I am paper, please read me. Date: Thu, 13 May 1999 22:24:34 -0700 [EMAIL PROTECTED] wrote: Sorry if the title is mean, but what's up? I wrote the paper on TwoDeck, and nobody even comments on it... That's because I am a newbie right? It's because the volume of material you post is overwhelming. Look at the decks after 8 or 16 shuffles. It seems the reason for the non-randomness is that if x and y are n apart before a shuffle, there's a very good chance they'll be 2n % 256 apart after the shuffle. --Bryan -- From: [EMAIL PROTECTED] Subject: Re: Hello I am paper, please read me. Date: Fri, 14 May 1999 07:37:33 GMT Look at the decks after 8 or 16 shuffles. It seems the reason for the non-randomness is that if x and y are n apart before a shuffle, there's a very good chance they'll be 2n % 256 apart after the shuffle. So if you know the shuffling bisectors for all t shuffles (summed into n) you can predict the original starting points? Hmm, not good. Is this exploitable if you don't know the shuffling bisectors? Tom -- PGP public keys. SPARE key is for daily work, WORK key is for published work. 
The spare is at 'http://members.tripod.com/~tomstdenis/key_s.pgp'. Work key is at 'http://members.tripod.com/~tomstdenis/key.pgp'. Try SPARE first! --== Sent via Deja.com http://www.deja.com/ ==-- ---Share what you know. Learn what you don't.--- -- From: Gustaf Dellkrantz [EMAIL PROTECTED] Subject: Re: Hello I am paper, please read me. Date: Fri, 14 May 1999 08:32:29 GMT In article 7hd3mo$3ba$[EMAIL PROTECTED], [EMAIL PROTECTED] wrote: I wrote: There is yet another weakness that might improve your attack. The problem is in the shuffle function and allows us to recover the shuffle value used when shuffling deck_A after a round of encryption. That's saying you can isolate the deck_A from its possible solutions. Of N possible shuffle values N-2 move all cards of the deck except one. Um, no. All N values will create a new deck from the old one. Where do you find this? Maybe it's true, I just don't understand. An example shows what I mean. (This assumes I have interpreted the shuffle function correctly :) The example is using N=8 and the starting deck A = {0, 1, 2, 3, 4, 5, 6, 7}. Now we'll shuffle it with some different values.
0 - 0, 4, 1, 5, 2, 6, 3, 7 (A[0] and A[7] stay)
1 - 1, 5, 2, 6, 3, 7, 4, 0 (A[2] stays)
2 - 2, 6, 3, 7, 4, 0, 5, 1 (A[4] stays)
3 - 3, 7, 4, 0, 5, 1, 6, 2 (A[6] stays)
4 - 4, 0, 5, 1, 6, 2, 7, 3 (none)
5 - 5, 1, 6, 2, 7, 3, 0, 4 (A[1] stays)
6 - 6, 2, 7, 3, 0, 4, 1, 5 (A[3] stays)
7 - 7, 3, 0, 4, 1, 5, 2, 6 (A[5] stays)
If you look at this you see that when shuffle=0, the first and last value stay the same. When shuffle=4 no value stays the same and in all other cases one value stays the same. This means that if (for example) shuffle=0 the first value you pick out of deck_A is the same as the first value that was picked out of deck_A in the run before the shuffle. This also means that the first value that is picked out of deck_B (indexed by A[0]) is the same as the first value picked out of deck_B in the run before the shuffle. 
The first output byte of this block is therefore equal to the first output byte of the previous block and this can only happen when shuffle=0. This property holds for all shuffle values as can be seen from the table above. This allows us to get the shuffle value by looking at two consecutive outputs and comparing them. This is easy assuming known/chosen plaintext. This might allow us to break the PRNG used to generate shuffle values for shuffling deck_A. If you know the deck_A. Your attack requires isolating deck_A from 'N! / 2' valid solutions. I don't need to know the contents of deck_A, just that some value stayed in place when shuffling. And then I can detect which value stayed (which position in deck_A holds the same value) and get the shuffle value from that
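The shuffle table given earlier is consistent with a "rotate the deck by the shuffle value, then riffle the two halves together" operation; that interpretation is my reconstruction from the table, not a quote of the actual TwoDeck code. The sketch below reproduces the table's fixed-point pattern (two fixed positions for shuffle=0, none for shuffle=4, exactly one otherwise) and also exhibits Bryan Olson's earlier observation that distances between cards double under the shuffle.

```python
def shuffle(deck, s):
    # Reconstructed interpretation: rotate the deck by s, then riffle
    # the two halves together.  This reproduces the N=8 table above.
    n = len(deck)
    r = deck[s:] + deck[:s]
    out = []
    for i in range(n // 2):
        out += [r[i], r[i + n // 2]]
    return out

deck = list(range(8))
for s in range(8):
    new = shuffle(deck, s)
    fixed = [i for i in range(8) if new[i] == deck[i]]
    # s=0 -> two fixed positions; s=4 -> none; otherwise exactly one.
    # This is what lets an observer read off the shuffle value from two
    # consecutive outputs, as described in the post.
```

For s=0, a value originally at position j in the first half lands at position 2j, so positional distances double (mod N) - the small-N analogue of the "2n % 256" remark.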