Cryptography-Digest Digest #165

2001-04-17 Thread Digestifier

Cryptography-Digest Digest #165, Volume #14  Tue, 17 Apr 01 04:13:01 EDT

Contents:
  Re: Incomplete Blocks in Twofish (Paul Rubin)
  Re: Incomplete Blocks in Twofish ("Scott Fluhrer")
  Publishing is *hard* ("John A. Malley")
  Re: HOW THE HELL DO I FIND "D"?!?! ("Joseph Ashwood")
  Re: Reusing A One Time Pad ("Joseph Ashwood")
  Re: Incomplete Blocks in Twofish ("Tom St Denis")
  Re: AES poll (SCOTT19U.ZIP_GUY)
  Re: patent issue (Paul Crowley)
  Re: Reusing A One Time Pad ("Mark G Wolf")
  Re: Distinguisher for RC4 ([EMAIL PROTECTED])
  Re: Incomplete Blocks in Twofish (Eric Lee Green)
  Re: Reusing A One Time Pad ("Douglas A. Gwyn")
  Re: MS OSs "swap" file:  total breach of computer security. (wtshaw)
  Re: There Is No Unbreakable Crypto (Mok-Kong Shen)
  Re: Note on combining PRNGs with the method of Wichmann and Hill ("Bryan Olson")
  Re: Note on combining PRNGs with the method of Wichmann and Hill ("Bryan Olson")



From: Paul Rubin [EMAIL PROTECTED]
Subject: Re: Incomplete Blocks in Twofish
Date: 16 Apr 2001 21:22:39 -0700

"Mike Moulton" [EMAIL PROTECTED] writes:
 I've been in the process of testing and modifying the reference
 implementation of Twofish to meet my needs, and I'm wondering what is the
 process of encrypting incomplete blocks.  What technique should I use
 considering that I am going to be running it in ECB for encryption and CBC
 as a MAC?  Should I pad the ECB encryption like MD4/MD5 with a string of
 zeros and the length of the padding, or use ciphertext stealing?  After
 having read through Stinson/HAC/Schneier and the Twofish Book/Paper, the only
 one to even mention it was Applied Cryptography by Schneier.  Perhaps there
 was a mention somewhere else about an *official* way to implement padding in
 Twofish.  If so I'd be grateful for the reference.

The closest thing to a standard I know of for this is PKCS5.

--

From: "Scott Fluhrer" [EMAIL PROTECTED]
Subject: Re: Incomplete Blocks in Twofish
Date: Mon, 16 Apr 2001 21:20:17 -0700


Mike Moulton [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 I've been in the process of testing and modifying the reference
 implementation of Twofish to meet my needs, and I'm wondering what is the
 process of encrypting incomplete blocks.  What technique should I use
 considering that I am going to be running it in ECB for encryption and CBC
 as a MAC?  Should I pad the ECB encryption like MD4/MD5 with a string of
 zeros and the length of the padding, or use ciphertext stealing?  After
 having read through Stinson/HAC/Schneier and the Twofish Book/Paper, the
 only one to even mention it was Applied Cryptography by Schneier.  Perhaps
 there was a mention somewhere else about an *official* way to implement
 padding in Twofish.  If so I'd be grateful for the reference.
Well, padding is not particularly Twofish-specific -- any block cipher has
to deal with it.  There are several generic ways to handle it, none of which
is the "official" way (whatever that means):

- Add an explicit length field at the beginning.  Pad the last block out any
way that's convenient.

- Put the pad length in the last byte of the last block.  This implies that
you always have at least one byte of padding.

- Ciphertext stealing.

- Use a mode that handles arbitrary message lengths, such as CFB, OFB or
counter mode.
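The second option above is essentially the PKCS#5-style padding Paul Rubin pointed to. A minimal sketch for a 16-byte block, cipher-agnostic so it applies to Twofish unchanged:

```python
# Sketch of the "pad length in the last byte" scheme (PKCS#5/#7 style).
# Assumes a 16-byte block, as in Twofish; not tied to any particular cipher.

BLOCK = 16

def pad(data: bytes) -> bytes:
    # Always add at least one byte; a full final block gets a whole pad block.
    n = BLOCK - (len(data) % BLOCK)
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return data[:-n]
```

Note that `pad` expands a message that is already a multiple of the block size by a full block; that is the price of unambiguous removal.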

BTW: why are you using ECB for encryption?  Unless you know a priori that
duplicate plaintext blocks are unlikely, that's insecure.

--
poncho





--

From: "John A. Malley" [EMAIL PROTECTED]
Subject: Publishing is *hard*
Date: Mon, 16 Apr 2001 21:52:31 -0700

Well the *bad* news is my first attempt to publish the results of some
amateur research ended up rejected. :-(

(I put up a URL to a draft of the paper last year here in the group -
http://www.homeworlds.com/papers/SECLCG.pdf )
 
The *good* news is I gained better understanding of the referee process.
:- )

I got some insightful and helpful comments from an anonymous referee. I
don't agree with everything he wrote, BUT I do now understand the
standards to which the work will be held. I appreciate the advice, critique
and pointers, and I'll take them to heart.


John A. Malley
[EMAIL PROTECTED]

--

From: "Joseph Ashwood" [EMAIL PROTECTED]
Subject: Re: HOW THE HELL DO I FIND "D"?!?!
Date: Mon, 16 Apr 2001 21:29:06 -0700

To find D you use the extended Euclidean algorithm (documented in several
places).
To encrypt values you take some representation of the data (ASCII is fine) to
build a very large number, then you take the first n-1 bits of it (where n
is the size of your RSA modulus in bits) (you could also use n-8 bits if that's
more convenient) and encrypt. On decryption you know how many top bits were not
original data and so can remove them before displaying.
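A minimal sketch of the first step, with toy numbers (p = 61, q = 53 here are illustrative and far too small for real use):

```python
# Extended Euclidean algorithm: find d with e*d == 1 (mod phi).

def egcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def rsa_d(e, phi):
    g, x, _ = egcd(e, phi)
    if g != 1:
        raise ValueError("e and phi are not coprime; pick another e")
    return x % phi

# Toy example: p = 61, q = 53, phi = (p-1)*(q-1) = 3120, e = 17.
d = rsa_d(17, 3120)
```

Here `d` comes out as 2753, and 17 * 2753 mod 3120 is indeed 1.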
Joe


"Dopefish" [EMAIL PROTECTED] wrote in
message news:[EMAIL 

Cryptography-Digest Digest #166

2001-04-17 Thread Digestifier

Cryptography-Digest Digest #166, Volume #14  Tue, 17 Apr 01 08:13:01 EDT

Contents:
  Re: Publishing is *hard* (Mok-Kong Shen)
  Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
   See the Latest Cryptology Theories Here, Why Silverman is DEAD Wrong! (Mathman)
  Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
  Re: Lorentz attractor... (Gerhard Wesp)
  Re: NSA is funding stegano detection (Mok-Kong Shen)
  Re: NSA is funding stegano detection (Mok-Kong Shen)
  Re: Note on combining PRNGs with the method of Wichmann and Hill ("Brian Gladman")
  Re: Rabin-Miller prime testing (Simon Josefsson)
  Streem combination without the use of dynamic tables. (David Formosa (aka ? the Platypus))
  Re: HOW THE HELL DO I FIND "D"?!?! (John Savard)
  Re: Lorentz attractor... (John Savard)



From: Mok-Kong Shen [EMAIL PROTECTED]
Subject: Re: Publishing is *hard*
Date: Tue, 17 Apr 2001 10:09:36 +0200



"John A. Malley" wrote:
  
 (I put up a URL to a draft of the paper last year here in the group -
 http://www.homeworlds.com/papers/SECLCG.pdf )
[snip]

Very dumb question: Does the article mean that there
is an indication of a probable weakness in ElGamal
(for otherwise such an attack would be inconceivable),
or else that for ANY secure cipher, encrypting the
output of a poor PRNG can be problematical? If it is
the second case, then passing the output of a common
(statistically good) PRNG to, say, AES would result
in a sequence that could be cracked, which I would
suppose to be a new and rather significant result.

M. K. Shen

--

From: Mok-Kong Shen [EMAIL PROTECTED]
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Tue, 17 Apr 2001 10:17:13 +0200



Bryan Olson wrote:
 
 Mok-Kong Shen wrote:
 
 I'd like to add something to make my last paragraph more
 understandable: if one of the streams gets a factor of 1.0
 (and it is uniform), isn't everything then again
 (rigorously) theoretically o.k. on that particular issue?
 
 Of course not.  The theorem was:
 | as long as the streams are independent, if any of the
 | streams are uniform then the sum is uniform.

The PRNGs are assumed to be independent (I forgot to
say that explicitly) and uniform. Now one stream gets
the factor 1.0, so that one is uniform. The others are
not uniform. So according to the theorem the modular
sum is uniform, isn't it? (As I said elsewhere, the
continuous case can be considered the limiting case
of the discrete case, whose proof we discussed
some time back. There is in fact a rigorous proof
of the continuous case that doesn't use that limiting
process. I found the paper one day by chance in Math.
Rev. but unfortunately didn't note down the reference.)
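The property under discussion -- that a modular sum of independent streams is uniform as soon as one component is uniform -- can be checked empirically in the discrete case. A sketch mod 256, with illustrative streams that stand in for the PRNGs being debated:

```python
# Empirical check (discrete case, mod 256): adding an independent uniform
# stream to an arbitrarily biased stream yields a uniform result.
import random
from collections import Counter

random.seed(1)
N = 200_000

uniform = [random.randrange(256) for _ in range(N)]
# A deliberately biased stream, independent of the first:
biased = [min(random.randrange(256), random.randrange(256)) for _ in range(N)]

combined = [(u + b) % 256 for u, b in zip(uniform, biased)]
counts = Counter(combined)

# Every residue should occur roughly N/256, i.e. about 781 times.
assert len(counts) == 256
assert all(abs(c - N / 256) < 150 for c in counts.values())
```

The biased stream alone fails this test badly; the modular sum passes it, as the theorem predicts.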

M. K. Shen

--

From: Mathman [EMAIL PROTECTED]
Subject:  See the Latest Cryptology Theories Here, Why Silverman is DEAD Wrong!
Date: Tue, 17 Apr 2001 03:36:49 -0500

www.mediaboy.net/1010100-111-010/gahk/


--

From: Mok-Kong Shen [EMAIL PROTECTED]
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Tue, 17 Apr 2001 11:01:08 +0200



Mok-Kong Shen wrote:
 
 Bryan Olson wrote:
 
  Mok-Kong Shen wrote:
 
  I'd like to add something to make my last paragraph more
  understandable: if one of the streams gets a factor of 1.0
  (and it is uniform), isn't everything then again
  (rigorously) theoretically o.k. on that particular issue?
 
  Of course not.  The theorem was:
  | as long as the streams are independent, if any of the
  | streams are uniform then the sum is uniform.
 
 The PRNGs are assumed to be independent (I forgot to
 say that explicitly) and uniform. Now one stream gets
 the factor 1.0, so that one is uniform. The others are
 not uniform. So according to the theorem the modular
 sum is uniform, isn't it? (As I said elsewhere, the
 continuous case can be considered the limiting case
 of the discrete case, whose proof we discussed
 some time back. There is in fact a rigorous proof
 of the continuous case that doesn't use that limiting
 process. I found the paper one day by chance in Math.
 Rev. but unfortunately didn't note down the reference.)

Addendum: The scheme of Wichmann and Hill is intended
to get a more uniform stream from a number of streams
that are not very uniform. The assumption I made above
that the PRNGs are uniform is for discussion of the
theoretical point you raised, which I quote below:

   The modification destroys an important property of 
   the basic combination method: as long as the streams 
   are independent, if any of the streams are uniform 
   then the sum is uniform.

So in that situation we assume that there are uniform
streams to start with. Note that we are actually
splitting hairs. The

Cryptography-Digest Digest #167

2001-04-17 Thread Digestifier

Cryptography-Digest Digest #167, Volume #14  Tue, 17 Apr 01 13:13:01 EDT

Contents:
  Re: "I do not feel secure using your program any more." ("upyerkilt")
  Re: Lorentz attractor... (Richard Heathfield)
  Re: Publishing is *hard* ("John A. Malley")
  Re: Publishing is *hard* ("John A. Malley")
  Re: MS OSs "swap" file:  total breach of computer security. (Richard Heathfield)
  Re: Distinguisher for RC4 ("Scott Fluhrer")
  Re: Distinguisher for RC4 ("Roger Schlafly")
  Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
  Re: Publishing is *hard* (Mok-Kong Shen)
  Q: Choosing a Smartcard/Crypto Chip for PKI on Win2K ("Miguel Almeida")
  Re: REAL OTP Systems (Mok-Kong Shen)
  Re: Reusing A One Time Pad (SCOTT19U.ZIP_GUY)
  Re: Reusing A One Time Pad (SCOTT19U.ZIP_GUY)
  Re: Primality Testing Algorithm ("Jack Lindso")
  Size of dictionaries (Mok-Kong Shen)
  Re: "differential steganography/encryption" ("Paul Pires")
  Re: Reusing A One Time Pad ("Tom St Denis")
  Re: Reusing A One Time Pad ("Tom St Denis")
  Re: REAL OTP Systems (Volker Hetzer)



Reply-To: "upyerkilt" [EMAIL PROTECTED]
From: "upyerkilt" [EMAIL PROTECTED]
Crossposted-To: talk.politics.misc,alt.hacker
Subject: Re: "I do not feel secure using your program any more."
Date: Tue, 17 Apr 2001 13:02:52 GMT

As you requested, Tom St Denis.



"Tom St Denis" [EMAIL PROTECTED] wrote in message
news:KoMC6.24724$[EMAIL PROTECTED]...

 "Anthony Stephen Szopa" [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]...
  "I do not feel secure using your program any more."
 
  You sure jumped to a hasty conclusion.

 Who posted the quoted text?  (BTW I may be in his killfile, so for fun
 could someone please just repost this message in its entirety)


  For instance, one could suggest:  "Why even generate these random
  digit sequences using the OAP-L3 methods.  I will just generate a
  file containing nothing but the digit sequence 0123456789 repeated
  until I have created a file of 18,144,000 bytes in length then I
  will use the other methods from OAP-L3 to scramble these up to create
  a random digit sequence."

 Why not just use 0 and 1, for a binary system... I surely don't make files
 that are base-10 in size...

  You surely know that it won't take very many of these processes to
  scramble up THIS file before the final sequence becomes practically
  impossible to guess or analyze, because there will simply not be
  enough computing power available, or time or energy, to accomplish
  this.

 Perhaps.  Where does the source of entropy come from?  What "mixing"
 process is used?  Not all mixing algorithms are equal.  If you only use
 14 bits of entropy and a trivially stupid algorithm to shuffle the array,
 it should be simple to figure out what the state is.

  Again, using the methods of OAP-L3 to generate your random digit
  sequences is just the first step in creating your OTPs.  And since I
  believe you would agree that even a known file containing the sequence
  0123456789 repeated to a length of 18,144,000 bytes would very quickly
  become practically impossible to guess under the methods of OAP-L3,
  then actually generating the random digit files using OAP-L3 makes
  this impossibility that much more impossible.

 Unless the # of bits into OAP is the same as the # of bits out, OAP can
 never be an OTP (assuming OAP doesn't degrade the entropy).  It's as
 simple as that.

 Again you assume that you shuffled your array with some good source of
 entropy at hand...

  This is because if you have gone to ciphile.com and looked up the
  Sterling Approximation web page you would know that there are about
  1E66,000,000 possible sequences in which to arrange three permutation
  files.  Of course there may be trillions and trillions of usable
  sequences.  How lucky are you at guessing one in 1E24 or 1E48 or
  1E96, etc. sequences out of 1E66,000,000?  Not even incredible Chinese
  gamblers think they are this lucky.

 So what.  Lucifer had a 128-bit key... that's
 340,282,366,920,938,463,463,374,607,431,768,211,456 what a big number!

 Does that mean it's secure... er NOPE!

  Now you may feel that the problem is greatly simplified (as if
  anything could simplify a problem with 1E66,000,000 possible
  outcomes) if you just had some plaintext and encrypted text.  How
  so?
 
  The random number sequence you determine from having these two data
  sources has so little relationship with the random digit file(s)
  generated from using the OAP-L3 methods as to be worthless for
  attacking subsequent numbers from the OTPs because this original
  random digit file has been further processed using all the methods
  available with OAP-L3, where above I hope you agreed, even
  if you knew this file and its sequences, it would make no
  difference.

 You assume your OAP methods are sound and non-degenerate.  You 

Cryptography-Digest Digest #168

2001-04-17 Thread Digestifier

Cryptography-Digest Digest #168, Volume #14  Tue, 17 Apr 01 14:13:01 EDT

Contents:
  Re: Reusing A One Time Pad (SCOTT19U.ZIP_GUY)
  Re: Reusing A One Time Pad ("Tom St Denis")
  Re: Reusing A One Time Pad (SCOTT19U.ZIP_GUY)
  Re: Note on combining PRNGs with the method of Wichmann and Hill ("Brian Gladman")
  Re: Reusing A One Time Pad ("Tom St Denis")
  Re: Reusing A One Time Pad (SCOTT19U.ZIP_GUY)
  Re: Reusing A One Time Pad ("Tom St Denis")
  Re: Choosing a Smartcard/Crypto Chip for PKI on Win2K ("Ryan Phillips")



From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Reusing A One Time Pad
Date: 17 Apr 2001 17:04:59 GMT

[EMAIL PROTECTED] (Tom St Denis) wrote in
4u_C6.31062$[EMAIL PROTECTED]: 


"SCOTT19U.ZIP_GUY" [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 [EMAIL PROTECTED] (Joseph Ashwood) wrote in
 ePD6covxAHA.328@cpmsnbbsa07: 

 The moment you reuse *any* portion of the pad, the security
 immediately falls to precisely 0. That's a simple fact of life
 (assuming you are using a standard XOR based pad). There is no
 getting around that, no amount of postulating will get around the
 mathematic problems involved, your idea is plain and simply insecure.
 Joe

Actually the security is not ZERO. You may have reduced the
 possible set of messages, but as long as there is more than
 one plausible decryption it is not zero. But I agree: the more
 you reuse it, the less secure it gets, and it's not too hard to
 reach ZERO, especially if the plaintext file was known to be
 compressed as part of the encryption package, as in PGP.

Um, to disprove anything you have said for the past 8 years or so... an
OTP is secure even if the message was compressed with deflate. So
tell me again how the compression hurts?


   I don't wish to get into a long argument with you since you really
don't want to learn. But I will clarify what I think your 
misunderstanding is.  First of all, we both know that an OTP is secure.
Joe was commenting that if you repeat even one bit it's not secure at all.
We all know, or should know, that if you use it over and over again it's
not secure. IF a "one time pad" is used correctly, meaning "one time",
then the encryption is secure regardless of what compression was used.

   As to the weakness of using nonbijective file compression in encryption
systems, I think you know where I stand on that after years of trying
to get you to understand it.
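The failure mode both posters agree on can be shown in a few lines: XORing two ciphertexts made with the same pad cancels the pad entirely and leaves the XOR of the plaintexts. A minimal sketch with illustrative messages:

```python
# Demonstration of the "two-time pad" failure: reusing one XOR pad for
# two messages leaks their XOR to anyone holding both ciphertexts.
import os

pad = os.urandom(16)
p1 = b"ATTACK AT DAWN!!"
p2 = b"RETREAT AT DUSK!"

c1 = bytes(x ^ k for x, k in zip(p1, pad))
c2 = bytes(x ^ k for x, k in zip(p2, pad))

# The attacker never sees the pad, yet c1 XOR c2 equals p1 XOR p2:
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))

# Guessing (or knowing) either plaintext then reveals the other:
assert bytes(a ^ b for a, b in zip(leak, p1)) == p2
```

Whether the remaining ambiguity counts as "zero security" is exactly the disagreement above; the pad itself, in any case, contributes nothing once it is reused.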


David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


--

From: "Tom St Denis" [EMAIL PROTECTED]
Subject: Re: Reusing A One Time Pad
Date: Tue, 17 Apr 2001 17:19:22 GMT


"SCOTT19U.ZIP_GUY" [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 [EMAIL PROTECTED] (Tom St Denis) wrote in
 4u_C6.31062$[EMAIL PROTECTED]:

 
 "SCOTT19U.ZIP_GUY" [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]...
  [EMAIL PROTECTED] (Joseph Ashwood) wrote in
  ePD6covxAHA.328@cpmsnbbsa07:
 
  The moment you reuse *any* portion of the pad, the security
  immediately falls to precisely 0. That's a simple fact of life
  (assuming you are using a standard XOR based pad). There is no
  getting around that, no amount of postulating will get around the
  mathematic problems involved, your idea is plain and simply insecure.
  Joe
 
 Actually the security is not ZERO. You may have reduced the
  possible set of messages, but as long as there is more than
  one plausible decryption it is not zero. But I agree: the more
  you reuse it, the less secure it gets, and it's not too hard to
  reach ZERO, especially if the plaintext file was known to be
  compressed as part of the encryption package, as in PGP.
 
 Um, to disprove anything you have said for the past 8 years or so... an
 OTP is secure even if the message was compressed with deflate. So
 tell me again how the compression hurts?
 

I don't wish to get into a long argument with you since you really
 don't want to learn. But I will clarify what I think your
 misunderstanding is.  First of all, we both know that an OTP is secure.
 Joe was commenting that if you repeat even one bit it's not secure at all.
 We all know, or should know, that if you use it over and over again it's
 not secure. IF a "one time pad" is used correctly, meaning "one time",
 then the encryption is secure regardless of what compression was used.

As to the weakness of using nonbijective file compression in encryption
 systems, I 

Cryptography-Digest Digest #169

2001-04-17 Thread Digestifier

Cryptography-Digest Digest #169, Volume #14  Tue, 17 Apr 01 16:13:01 EDT

Contents:
  Re: Reusing A One Time Pad (SCOTT19U.ZIP_GUY)
  Re: Reusing A One Time Pad (SCOTT19U.ZIP_GUY)
  Re: Reusing A One Time Pad ("Tom St Denis")
  Re: Reusing A One Time Pad ("Tom St Denis")
  Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
  Re: Reusing A One Time Pad (SCOTT19U.ZIP_GUY)
  Re: Reusing A One Time Pad ("Tom St Denis")
  Re: LFSR Security (Ian Goldberg)
  Re: Reusing A One Time Pad ("Joseph Ashwood")
  ansi x9.23 / iso 10126 (Fabian Kaiser)



From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Reusing A One Time Pad
Date: 17 Apr 2001 18:14:29 GMT

[EMAIL PROTECTED] (Tom St Denis) wrote in
Td%C6.31344$[EMAIL PROTECTED]: 

You have yet to prove that bijective compression is any more secure.  In
fact it's not.  Let's demonstrate.  I get a message from you and I want
to try decrypting it.  Say you use a 64-bit RC5 key or something...  I
feel it's about money so amongst the ciphertexts I decrypt I get


   What are you trying to show?
First of all, if I were sending text I would be using my
conditional adaptive Huffman compressor, so that only
characters in the set "A to Z", "0 to 9" and space appear
(actually I would use underscore for space, if I used space at all).

AKSJHDKH2309SLDFJSDFHSFJKGHW
DFGJSHSKFHKJ2340OWE7FKJFSD
THE MONEY IS IN MY POCKET
345U3RWHERFRLSJFHKJGFYSUKJFTHSJK
DFGJLHGJKHGDFJKGHDFKJGDGJDFLK349

I see that using the general bijective compressor you get only
the cases above that contain those characters. How nice. Yes, you
can say that if you know it was about money you can tell which
one was the input message. And in this example, even if you
didn't know it was about money, you could guess which one it was.

Let's suppose instead that I compressed not with a bijective
compressor but with a compressor that compresses as well but is
non-bijective. Many cases are then ruled out, because the test
encryption keys lead to files that could not have been produced
by the nonbijective compressor:

DFGJSHSKF J2340OWE7FKJFSD
THE MONEY IS IN MY POCKET

Gee, there are fewer cases to look at, thanks to the filter effect
caused by the use of the nonbijective compressor. That gives me
extra information on how to eliminate candidate files. With the
bijective compressor you don't get this free filter to
toss out cases that can't exist.

 Yes, Tom, which one is easier to look at and guess a solution for?
But you know all this; we go over and over it all the time.
What do you have that is new? Or are you just trying to confuse
new people?


David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


--

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Reusing A One Time Pad
Date: 17 Apr 2001 18:33:17 GMT

[EMAIL PROTECTED] (Tom St Denis) wrote in
hA%C6.31591$[EMAIL PROTECTED]: 


"SCOTT19U.ZIP_GUY" [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 I can argue with your statements about the "super evil deflate"
 being better. Compression is a lose-lose situation. It is well known
 that for certain classes of files Huffman is optimal. I don't
 think that can be said for deflate for any class of files.

There are no classes of files where Huffman is optimal.  If the file has
a bias towards a certain symbol, more than likely it's in some pattern.


   Actually I need go no farther. 

snip

David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


--

From: "Tom St Denis" [EMAIL PROTECTED]
Subject: Re: Reusing A One Time Pad
Date: Tue, 17 Apr 2001 18:45:44 GMT


"SCOTT19U.ZIP_GUY" [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 [EMAIL PROTECTED] (Tom St Denis) wrote in
 Td%C6.31344$[EMAIL PROTECTED]:

 You have yet to prove that bijective compression is any more secure.  In
 fact it's not.  Let's demonstrate.  I get a message from you and I want
 to try decrypting it.  Say you use a 64-bit RC5 key or something...  I
 

Cryptography-Digest Digest #170

2001-04-17 Thread Digestifier

Cryptography-Digest Digest #170, Volume #14  Tue, 17 Apr 01 19:13:01 EDT

Contents:
  Re: Distinguisher for RC4 ("Joris Dobbelsteen")
  "UNCOBER" = Universal Code Breaker (newbie)
  Re: Size of dictionaries (Jim Gillogly)
  Re: There Is No Unbreakable Crypto (David Wagner)
  DES Optimizaton - Can Someone Explain? ("Kevin D. Kissell")
  Re: "differential steganography/encryption" ("Dopefish")
  Re: HOW THE HELL DO I FIND "D"?!?! ("Dopefish")
  Re: "UNCOBER" = Universal Code Breaker ("Joseph Ashwood")
  lagged fibonacci generator idea ("Tom St Denis")
  Re: Size of dictionaries (Mok-Kong Shen)
  Re: "differential steganography/encryption" (Mok-Kong Shen)
  Re: "UNCOBER" = Universal Code Breaker (newbie)
  Re: Note on combining PRNGs with the method of Wichmann and Hill ("Brian Gladman")



From: "Joris Dobbelsteen" [EMAIL PROTECTED]
Subject: Re: Distinguisher for RC4
Date: Tue, 17 Apr 2001 22:53:00 +0200

And what about using a mode like CBC, OFB or CFB? These XOR a result that
comes out of a block cipher, making it possible to use the block cipher in
place of a stream cipher. This may still not be suited to every situation,
but it can be adapted easily...
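As a sketch of that idea: OFB mode feeds the cipher's previous output back through the block cipher to generate a keystream, which is then XORed with the data, so any block cipher behaves like a stream cipher of arbitrary length. The "block cipher" below is a hash-based stand-in, an assumption for illustration only, not a real cipher like 3DES or AES:

```python
# OFB mode sketch: turn a 16-byte block primitive into a stream cipher.
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher (3DES, AES, ...); illustration only.
    return hashlib.sha256(key + block).digest()[:16]

def ofb_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    # Keystream block i = E(key, keystream block i-1), starting from the IV.
    # XOR with the data; encryption and decryption are the same operation.
    out, state = bytearray(), iv
    for i in range(0, len(data), 16):
        state = toy_block_encrypt(key, state)
        chunk = data[i:i + 16]
        out += bytes(a ^ b for a, b in zip(chunk, state))
    return bytes(out)
```

Because only the keystream goes through the block primitive, the output length always matches the input length, with no padding needed.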

- Joris

"David Formosa (aka ? the Platypus)" [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 On 15 Apr 2001 06:24:34 GMT, David Wagner [EMAIL PROTECTED]
wrote:

 [...]

  Well, it does scare me away from the idea of using RC4 in new
  systems.
 [...]
  Why bother with such uncertainty when we have 3DES and AES?
  But then, maybe I'm too paranoid.

 RC4 is a stream cipher/pseudorandom number generator, while 3DES is a
 block cipher.  You can't replace RC4 with 3DES in most situations.

 --
 Please excuse my spelling as I suffer from agraphia. See
 http://dformosa.zeta.org.au/~dformosa/Spelling.html to find out more.
 Free the Memes.



--

From: newbie [EMAIL PROTECTED]
Subject: "UNCOBER" = Universal Code Breaker
Date: Tue, 17 Apr 2001 16:49:34 -0300

Cryptographers are still talking about unbreakable codes. And sometimes,
soon afterwards, someone breaks the code.
So all the cryptographers are always trying to find the "unbreakable".
Have cryptographers thought of a universal way to break any code?
Have cryptographers thought about the basis for elaborating a universal
theory to break any code?
I found a funny name for that theory: "UNCOBER" = Universal Code Breaker


Newbie

--

From: Jim Gillogly [EMAIL PROTECTED]
Subject: Re: Size of dictionaries
Date: Tue, 17 Apr 2001 14:15:02 -0700

Mok-Kong Shen wrote:
 
 I am interested to know the order of magnitude of the
 minimum size of a dictionary in English that could permit
 ordinary messages of common people (private and commercial)
 to be written in as terse a manner as possible without
 negatively affecting the essential semantics that is to be
 conveyed. In other words, I am thinking of writing in a style

CK Ogden chopped the vocabulary down to 850 words for his
proposal "Basic English" which he said "will cover those needs
of everyday life for which a vocabulary of 20,000 words is
frequently employed."  It was first proposed in the 1920s,
during the time when the second- and third-generation constructed
languages (Esperanto, Ido, etc.) were hitting their stride.  It
had a few high-profile supporters, such as Winston Churchill
and Franklin D. Roosevelt.  I may be mistaken, but I think he
left out verbs, but presumably few enough verbs would be needed
to keep it under 10 bits.  Shakespeare looks quite odd in Basic
English, as you might guess.

 like telegrams. Among the technical points to be clarified seem
 to be those of grammatical forms, e.g. whether the present
 participle of a verb is to be in the dictionary and how
 special words, if needed but not in the dictionary, can be
 represented (using escape symbols?), etc. etc. As a foreigner,
 I can hardly have good ideas about these. Would
 2^12 be an appropriate size for the dictionary? In that case

As a foreigner (umm -- you mean a foreigner in Germany?  we're
from all over), what would be your estimate for one of the
languages you know?  I understand unabridged English dictionaries
are larger than those of most (all?) other languages, so English
might not be the best choice for your plan.  You might consider
an artificial language such as Esperanto or one of the more
modern efforts, depending on what application you have in mind.

 each word of a message could be coded into 12 bits, which
 means quite a significant compression ratio, doesn't it?

It seems to me that a proposal like this would be useful only
in extreme conditions of some sort, where constructing the
message could be very human-intensive labor, and the transmission
channel is extremely expensive.  An example might be communicating
with scientists on the 30-year Titan probe, assuming they had lost
the use of their main antenna and can receive mail on only a very
limited-bandwidth 
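The 12-bit coding under discussion can be sketched directly; the word list here is a hypothetical stand-in for a real dictionary of up to 2^12 = 4096 entries:

```python
# Sketch of fixed-width 12-bit word coding over a small dictionary.
# The word list is illustrative only, not a proposed vocabulary.
DICTIONARY = ["the", "money", "is", "in", "my", "pocket"]  # up to 2**12 words
CODE = {w: i for i, w in enumerate(DICTIONARY)}

def encode(words):
    # One 12-bit index per word, concatenated (as a bit string, for clarity).
    return "".join(format(CODE[w], "012b") for w in words)

def decode(bits):
    return [DICTIONARY[int(bits[i:i + 12], 2)] for i in range(0, len(bits), 12)]
```

Six words cost 72 bits (9 bytes) against roughly 25 bytes of ASCII text, which is about the compression ratio the quoted proposal suggests, before any handling of out-of-dictionary words.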

Cryptography-Digest Digest #171

2001-04-17 Thread Digestifier

Cryptography-Digest Digest #171, Volume #14  Tue, 17 Apr 01 23:13:00 EDT

Contents:
  Re: "UNCOBER" = Universal Code Breaker ("Joseph Ashwood")
  Re: "UNCOBER" = Universal Code Breaker ("Tom St Denis")
  Re: "UNCOBER" = Universal Code Breaker ("Jack Lindso")
  Re: Advantages of attackers and defenders (Bart Bailey)
  Re: "UNCOBER" = Universal Code Breaker (newbie)
  Re: "UNCOBER" = Universal Code Breaker (newbie)
  Re: "UNCOBER" = Universal Code Breaker ("Tom St Denis")
  Re: "UNCOBER" = Universal Code Breaker ("Tom St Denis")
  Re: lagged fibonacci generator idea ("Scott Fluhrer")
  Re: Distinguisher for RC4 ("Scott Fluhrer")
  Re: lagged fibonacci generator idea ("Tom St Denis")
  Re: "UNCOBER" = Universal Code Breaker (David Formosa (aka ? the Platypus))
  Re: "UNCOBER" = Universal Code Breaker (John Savard)
  Re: "UNCOBER" = Universal Code Breaker (John Savard)
  Re: "UNCOBER" = Universal Code Breaker (John Savard)
  Re: DES Optimizaton - Can Someone Explain? (John Savard)
  Re: C code for GF mults ("Brian McKeever")



From: "Joseph Ashwood" [EMAIL PROTECTED]
Subject: Re: "UNCOBER" = Universal Code Breaker
Date: Tue, 17 Apr 2001 16:11:34 -0700

"newbie" [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 I'm not talking about universal algo, but about "unified theory".
 Theory means designing concepts and tools to solve types of encryption.
 Theory means thinking to hypothesis, conditions etc... to solve any
 encryption.
 Theory means conceptual vision of the cryptanalysis concern.
 Theory does not mean creating a super-algo breaker.
But the existence of a single algorithm which provably cannot be broken by
such a theory eliminates the possibility of that theory. The One-Time-Pad
algorithm exists; therefore a unified theory that "solves" (i.e. breaks,
because there is no other possibility) any encryption algorithm cannot exist.
I have proved that at least two ways now.
Joe



--

From: "Tom St Denis" [EMAIL PROTECTED]
Subject: Re: "UNCOBER" = Universal Code Breaker
Date: Tue, 17 Apr 2001 23:20:40 GMT


"newbie" [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...
 I'm not talking about universal algo, but about "unified theory".
 Theory means designing concepts and tools to solve types of encryption.
 Theory means thinking to hypothesis, conditions etc... to solve any
 encryption.
 Theory means conceptual vision of the cryptanalysis concern.
 Theory does not mean creating a super-algo breaker.

What are you yabbing about?  He proved that it can't exist given you can't
break an OTP

Tom



--

From: "Jack Lindso" [EMAIL PROTECTED]
Subject: Re: "UNCOBER" = Universal Code Breaker
Date: Wed, 18 Apr 2001 02:42:21 +0200

The fact that a TRUE OTP is unbreakable doesn't contradict the existence of
a cryptanalysis algorithm, e.g. a set of rules or a function which tries to
evaluate encrypted ciphertext.
Perhaps "Code Breaker" is the wrong name for it; a "Cryptanalysis Algorithm"
would be a better one:

1: Check the deviation from random.
2: Try shifting.
And so on.

--

Anticipating the future is all about envisioning the Infinity.
http://www.atstep.com

"Tom St Denis" [EMAIL PROTECTED] wrote in message
news:cd4D6.33598$[EMAIL PROTECTED]...

 "newbie" [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]...
  I'm not talking about universal algo, but about "unified theory".
  Theory means designing concepts and tools to solve types of encryption.
  Theory means thinking to hypothesis, conditions etc... to solve any
  encryption.
  Theory means conceptual vision of the cryptanalysis concern.
  Theory does not mean creating a super-algo breaker.

 What are you yabbing about?  He proved that it can't exist given you can't
 break an OTP

 Tom





--

From: Bart Bailey [EMAIL PROTECTED]
Subject: Re: Advantages of attackers and defenders
Date: Tue, 17 Apr 2001 16:46:46 -0700

AY wrote:



 , but I am not sure how the knowledge of "network
 terrains" can help defend against attackers.

Knowledge of the nomenclature and locations of bait files and tripwires would,
hopefully, be held by the defenders and not the attackers. There could be some
rearranging of these resources as attack methods are exhibited.

~~Bart~~

--

From: newbie [EMAIL PROTECTED]
Subject: Re: "UNCOBER" = Universal Code Breaker
Date: Tue, 17 Apr 2001 19:51:28 -0300

You are starting to elaborate it.
This theory.
Because a universal breaking theory means knowing the conditions of
unbreakability too.
This theory has to answer why the OTP is supposedly unbreakable. It is
not so sure.
It is sure only in theory, because the language domain is not infinite.
There is not only redundancy in using the alphabet; there is redundancy
in plaintext usage.
The domain of vocabulary used in