Cryptography-Digest Digest #521, Volume #9        Sun, 9 May 99 19:13:03 EDT

Contents:
  Re: AES (Bruce Schneier)
  Re: Just a couple of questions (Bruce Schneier)
  Snuffle -- Source Code -- Where Is It? (GivenRandy)
  Re: Just a couple of questions ([EMAIL PROTECTED])
  Re: How was this key constructed? (Alex)
  Re: Scramdisk: Security flaw in VxD? (Aman.)
  Re: True Randomness & The Law Of Large Numbers ([EMAIL PROTECTED])
  Re: Just a couple of questions ([EMAIL PROTECTED])
  Re: DES cracked in hardware? (CT Franklin)
  Re: Scramdisk: Security flaw in VxD? (Alen I. Morky)
  Re: True Randomness & The Law Of Large Numbers ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Re: AES
Date: Sun, 09 May 1999 18:44:29 GMT

On Sun, 09 May 1999 05:32:39 GMT, [EMAIL PROTECTED] wrote:
>    Now, the original statement to which Schneier agreed was: "Triple
>    DES is proven to be very secure". We can argue that this statement
>    is correct within the common use of the English language. But it
>    is a fact that many people who read this followed by Schneier's
>    validation will understand that there is a mathematical proof for
>    the security of 3DES. 

Indeed.  In retrospect, I should have been more exact in my agreement.
This has proven to be linguistically complicated.  Although the only
people who seem to have misunderstood are Terry Ritter, who you'd
think would be used to this kind of usage already, and David Scott,
who is mercifully in my kill file and whose posts I do not see.

>    To me one answer to the problem of absence of proofs is
>    super-encipherment where several ciphers of radically different
>    designs are combined together. The analogy would be like building
>    several bridges side by side. If one of them should fail, the
>    other(s) would keep our communication lines open. Fortunately, in
>    many applications of information technology combining several
>    ciphers is only marginally more expensive than using only one. (By
>    the way, this is another argument in favor of not patenting
>    ciphers.)

Multiple encryption is generally a good idea.  The only reason you
don't see it widely used in practice is that using N ciphers cuts the
performance by a factor of N (more or less).
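
For the curious, here is a minimal sketch of such a cascade
(illustrative only, not a recommendation of these particular ciphers;
it assumes the PyCryptodome Python library and freshly generated,
independent keys):

    # Cascade (super-encipherment) sketch: Triple-DES, then Blowfish,
    # two ciphers of quite different design, with independent keys.
    from Crypto.Cipher import DES3, Blowfish
    from Crypto.Random import get_random_bytes

    key1 = get_random_bytes(24)     # layer 1: three-key 3DES
    key2 = get_random_bytes(16)     # layer 2: Blowfish, 128-bit key

    def cascade_encrypt(plaintext):
        c1 = DES3.new(key1, DES3.MODE_CTR)
        inner = c1.encrypt(plaintext)              # first layer
        c2 = Blowfish.new(key2, Blowfish.MODE_CTR)
        outer = c2.encrypt(inner)                  # second layer
        # The CTR nonces must travel with the ciphertext.
        return c1.nonce, c2.nonce, outer

    nonce1, nonce2, ct = cascade_encrypt(b"attack at dawn")

Note that the cost is visible in the sketch: the data is enciphered
twice, so throughput is roughly halved.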

Bruce
**********************************************************************
Bruce Schneier, President, Counterpane Systems     Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
           Free crypto newsletter.  See:  http://www.counterpane.com

------------------------------

From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Re: Just a couple of questions
Date: Sun, 09 May 1999 18:49:58 GMT

On Sun, 09 May 1999 11:31:37 -0400, Casey Sybrandy
<[EMAIL PROTECTED]> wrote:

>I just had a couple of questions on cipher analysis
>
>1. When testing to see if an algorithm can withstand linear,
>differential, etc. cryptanalysis, do you explicitly perform the analysis
>or is there some sort of shortcut?  I was just reading up on this topic
>in Applied Cryptography, but it came off as something that is just too
>time consuming.

It's even worse than that.  You have to spend hundreds of hours trying
to make the attacks work, and even then you aren't sure that someone
smarter than you can make the attacks work.  There are no shortcuts to
cryptanalysis.  To me, proving your ability at cryptanalysis by
breaking many published ciphers, and then spending about 1000 hours
analyzing your own new design, and then making everything
public...that's the bare minimum before your algorithm will be taken
seriously.

See:  http://www.counterpane.com/crypto-gram-9810.html#cipherdesign

There are those in this newsgroup who will disagree.  But the leading
AES candidates were created by cryptographers who have several
published breaks to their credit.

>2. There has been some talk as to how many rounds it takes for a 1 bit
>change in the plaintext or ciphertext to affect every bit of the
>output.  How does one perform this sort of analysis?

Examine the output after each round.
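
A toy sketch of that procedure (toy_round below is a made-up 32-bit
mixing function, standing in for a real cipher's round): flip one
input bit, run both inputs forward, and count how many state bits
differ after each round.

    # Avalanche sketch: track how a 1-bit input difference spreads.
    def toy_round(state, round_key):
        state = (state + round_key) & 0xFFFFFFFF
        state ^= ((state << 7) | (state >> 25)) & 0xFFFFFFFF  # rotate-XOR
        return (state * 0x9E3779B1) & 0xFFFFFFFF  # odd-constant multiply

    def avalanche(x, rounds=8):
        a, b = x, x ^ 1                   # inputs differing in one bit
        for r in range(1, rounds + 1):
            a, b = toy_round(a, r), toy_round(b, r)
            diff = bin(a ^ b).count("1")  # Hamming weight of the XOR
            print("round %d: %d of 32 bits differ" % (r, diff))

    avalanche(0x12345678)   # ideally ~16 differing bits, quickly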

>3. Can someone give me some information as to what makes a good
>algorithm for creating keys for a cipher without any initial setup
>time.  Also, does anybody know any good criteria for good pre-defined
>S-boxes?  If you need specifics, 8x32 S-boxes.

Gadzooks.  There are many, many papers on these topics.  Look through
Crypto, Eurocrypt, Asiacrypt, and Fast Software Encryption
proceedings.  My book gives a pretty good bibliography up to the point
of publication, but there is a lot of new material since then.  The
short answer to all your design questions is: "It depends."
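
As one small illustration of the kind of criterion those papers
formalize for S-boxes (my toy example; the tolerance is arbitrary):
each of the 32 output bit positions of an 8x32 S-box should be 1 in
roughly half of the 256 entries.

    # Output-bit balance check for an 8x32 S-box (256 x 32-bit words).
    # The S-box here is random, purely for illustration.
    import random

    sbox = [random.getrandbits(32) for _ in range(256)]

    for bit in range(32):
        ones = sum((entry >> bit) & 1 for entry in sbox)
        if abs(ones - 128) > 24:      # ~3 standard deviations
            print("output bit %d looks biased: %d ones" % (bit, ones))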

Bruce
**********************************************************************
Bruce Schneier, President, Counterpane Systems     Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
           Free crypto newsletter.  See:  http://www.counterpane.com

------------------------------

From: [EMAIL PROTECTED] (GivenRandy)
Subject: Snuffle -- Source Code -- Where Is It?
Date: 9 May 1999 19:34:28 GMT

The recent Bernstein ruling is great, and right on target.  Does
anyone know where to get the complete source code referenced in the
case (I think it amounts to 4 pages or so)?

Randy Given
[EMAIL PROTECTED]
http://members.aol.com/GivenRandy
public key at http://members.aol.com/GivenRandy/pgpkey.asc


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Just a couple of questions
Date: Sun, 09 May 1999 19:46:41 GMT


> 1. When testing to see if an algorithm can withstand linear,
> differential, etc. cryptanalysis, do you explicitly perform the
> analysis or is there some sort of shortcut?  I was just reading up on
> this topic in Applied Cryptography, but it came off as something that
> is just too time consuming.

Well, differential analysis exploits the fact that some known
difference can propagate through the cipher in a way that is a direct
result of the key.  The obvious ways to protect against this are to
make the subkeys harder to isolate or to provide better diffusion.  If
the attacker cannot maintain that certain bits are related to certain
bits of the key, or of other plaintext, then finding the subkeys is
harder.  Linear analysis tries to reduce a round, or several rounds,
to a simpler linear function.  It usually doesn't try to approximate
all of the plaintext/ciphertext bits, only certain ones.  If, for
example, all even bits can be reduced to a linear function independent
of the number of rounds, then the cipher is obviously weak.
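
A quick sketch of the basic differential tool, the difference
distribution table, for a made-up 4-bit S-box (illustrative only):
for every input difference, count how often each output difference
occurs; the large counts are exactly the exploitable "known
differences".

    # Difference distribution table (DDT) of a toy 4-bit S-box.
    # ddt[dx][dy] counts inputs x where SBOX[x] ^ SBOX[x^dx] == dy.
    SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
            0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

    ddt = [[0] * 16 for _ in range(16)]
    for x in range(16):
        for dx in range(16):
            dy = SBOX[x] ^ SBOX[x ^ dx]
            ddt[dx][dy] += 1

    # A good 4-bit S-box keeps every entry (for dx != 0) at 4 or below;
    # anything higher marks a higher-probability differential.
    worst = max(ddt[dx][dy] for dx in range(1, 16) for dy in range(16))
    print("largest DDT entry for nonzero input difference:", worst)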


> 2. There has been some talk as to how many rounds it takes for a 1 bit
> change in the plaintext or ciphertext to affect every bit of the
> output.  How does one perform this sort of analysis?

That's diffusion.  Just step through the algorithm; there should be
some defined measure of diffusion.  The ideal is maximal diffusion
(50% of the bits change) 100% of the time, but that's not always
possible, and thus the shortfall can be exploited.

tom
--
PGP public keys.  SPARE key is for daily work, WORK key is for
published work.  The spare is at
'http://members.tripod.com/~tomstdenis/key_s.pgp'.  Work key is at
'http://members.tripod.com/~tomstdenis/key.pgp'.  Try SPARE first!


--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---

------------------------------

From: Alex <[EMAIL PROTECTED]>
Subject: Re: How was this key constructed?
Date: Sun, 09 May 1999 09:17:53 -0700


Does there have to be a pattern anyway? Couldn't random values have been
assigned to the letters?


P.S. How can I get around the little problem of having to write more than
I want to in replies?  I get a message stating that the reply cannot be
sent because there is "more included text than new text", or something to
that effect.


------------------------------

From: [EMAIL PROTECTED] (Aman.)
Crossposted-To: alt.security.pgp,comp.security.misc
Subject: Re: Scramdisk: Security flaw in VxD?
Date: 9 May 1999 15:42:03 -0500

On Sun, 09 May 1999 16:49:28 GMT, [EMAIL PROTECTED] wrote:

>While looking at ScramDisk, I came accross what appears to be a significant
>flaw in the way it handles (caches) passwords.
>

Expect this to be defeated very very soon....

>In particular; it is possible to write an application that can interrogate the
>driver for the (plaintext!) passwords it has cached.
>

It is possible to write a driver or a Scramdisk.exe lookalike that
will post the passwords via email.

The fact is that you still have to install software to take advantage
of this....

However, I do intend to try to plug this one, and very quickly...

BTW That buffer *isn't* the password cache as such.
It returns a pointer to a locked buffer (in the driver) which will
never be swapped to disk.... The password just entered gets put in
there.....

Sniff!

Regards,
Aman.



------------------------------

Date: Sun, 09 May 1999 17:40:29 -0400
From: [EMAIL PROTECTED]
Subject: Re: True Randomness & The Law Of Large Numbers

Douglas A. Gwyn wrote:
> 
> "R. Knauer" wrote:
> > Maybe in that instance 10,000 keys is an overkill - but whatever that
> > number is, it is certainly not 1. If you think 1-bit bias adequately
> > characterizes a TRNG process for the purposes of secure crypto, then
> > go ahead and run the FIPS-140 Monobit Test, but at least run it enough
> > times under varying circumstances so that you can see a decent
> > distribution, from which you can then infer the 1-bit bias
> > characteristic of the TRNG.
> 
> The FIPS-140 Monobit Test was never meant to certify the cryptographic
> quality of the *algorithm*, just as one of a handful of simple checks
> that are to be performed on presumed high-grade systems to detect when
> they might have broken during operation.
> 
> > Using the Monobit Test only once has no theoretical justification.
> > Claiming that you are measuring 20,000 samples of a single bit has no
> > meaning in terms of modeling the TRNG. What you want to do is to
> > characterize the overall generation process, not a single bit
> > operation (unless you plan on sending only 1-bit messages). You want
> > to see how the TRNG behaves for 10,000 bit keys, not 1 bit keys.
> 
> Sure, it has a very good theoretical justification.  The required
> key stream properties are such that a UBP is a very good model, and
> the Monobit Test checks the actual data against one property of that
> model.  Sure, it doesn't check serial correlation properties, but it
> is just one of a battery of tests; the other tests specified in
> FIPS-140 check other properties.  If you want to design a test that
> checks all possible properties at once, be my guest -- it has been
> suggested (off-line) that Maurer's universal test might be suitable.
> 
> > So you must generate many 10,000 bit keys until you have a
> > sufficiently large sample of such keys.  Maybe 1,000 such keys is
> > enough to get a distribution which will let you know with reasonable
> > certainty that the TRNG will generate crypto-secure 10,000 bit keys.
> 
> Unfortunately, by then the brokenness of the key generator has
> allowed thousands of sensitive messages to be read by enemy
> cryptanalysts.  So, what can you do with only 20,000 sequential
> bits?
> 
> > Any one 10,000 bit key can be anomalous - that is what Feller and
> > Li & Vitanyi have been trying to tell you.
> 
> Gee, we don't need them to tell us that, because it is exceedingly
> obvious.  The FIPS-140 key-stream-monitoring tests were designed so
> that anomalies capable of generating a spurious warning in a
> correctly operating generator would be rare enough to not be much
> of a nuisance in the anticipated applications.

I think there's a flaw here.  If we fear an anomalous output from an RNG
might betray our secrets, then we have to pre-select the acceptable
outputs.  If we create a (logical) library of acceptable samples and use
the RNG to select from among them we'll see exactly the same
distribution of outputs as if we had simply filtered the RNG's raw
output.  Indeed, comparison of the raw output against the contents of
the library (rejecting mismatches) is the same as selecting from the
library.

If these processes are statistically identical they are equally suitable
for cryptographic use.
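
A quick simulation of that equivalence (toy parameters of my own
choosing: 8-bit strings, "acceptable" meaning 3 to 5 one-bits):

    # Rejection-filtering an RNG vs. selecting from a library of
    # acceptable outputs: both give the uniform distribution over
    # the acceptable set.
    import random
    from collections import Counter

    def acceptable(x):
        return 3 <= bin(x).count("1") <= 5

    N = 100000

    # Process A: filter the raw RNG output, rejecting failures.
    filtered, accepted = Counter(), 0
    while accepted < N:
        x = random.getrandbits(8)
        if acceptable(x):
            filtered[x] += 1
            accepted += 1

    # Process B: select uniformly from the library of acceptable values.
    library = [x for x in range(256) if acceptable(x)]
    selected = Counter(random.choice(library) for _ in range(N))

    # Both counts hover around N / len(library) for every value.
    print("library size:", len(library))
    print("0x17 -> filtered %d, selected %d, expected ~%d"
          % (filtered[0x17], selected[0x17], N // len(library)))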

> 
> > Therefore you cannot use just one time average from a single
> > sequence to infer anything about the ensemble average.
> 
> "I see no ensemble here."  Just a specific instance of a generator
> which might or might not be functioning properly.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Just a couple of questions
Date: Sun, 09 May 1999 21:42:38 GMT


> It's even worse than that.  You have to spend hundreds of hours trying
> to make the attacks work, and even then you aren't sure that someone
> smarter than you can make the attacks work.  There are no shortcuts to
> cryptanalysis.  To me, proving your ability at cryptanalysis by
> breaking many published ciphers, and then spending about 1000 hours
> analyzing your own new design, and then making everything
> public...that's the bare minimum before your algorithm will be taken
> seriously.

I think at the least you should publish cryptanalysis of your own
design, with the results, and possibly the reasoning behind the
cipher design.  If you can at least show that the cipher has a
difficult solution, or that the solution is somewhat difficult with
respect to ... then it's worth reading.

Time is the ultimate judge of any cipher.  IDEA for example is fairly
new, but its resistance to cryptanalysis so far has proved it worthy
of consideration (see PGP...).

I agree that your book contains many valuable references.  I think the
original poster should follow them up.

Tom
--
PGP public keys.  SPARE key is for daily work, WORK key is for
published work.  The spare is at
'http://members.tripod.com/~tomstdenis/key_s.pgp'.  Work key is at
'http://members.tripod.com/~tomstdenis/key.pgp'.  Try SPARE first!


--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---

------------------------------

From: CT Franklin <[EMAIL PROTECTED]>
Subject: Re: DES cracked in hardware?
Date: Sun, 09 May 1999 22:07:37 +0000

Keith Brodie wrote:

>     I think you can take it as a given that a DES cracker existed at the
> time it was introduced, that is why the key length was limited to 56 bits.

I know people thought this way at one time.  But another hypothesis
suggests itself.  We now know that DES is susceptible to differential
cryptanalysis attacks with a complexity of about 2^56 steps.  If such
attacks (or attacks with similar levels of complexity) were known at the
time DES was adopted as a federal standard, dropping the effective key
size to 56 bits could have been a form of truth in labelling.  Even a
naive analyst can identify an attack on DES with 2^56 complexity.  If
NSA knew about the differential attack on DES at the time, it might have
been imprudent for them to publish that fact.  But shortening the key
space sent the same message --- without revealing anything about methods.

Any thoughts?

Regards
CT


------------------------------

From: [EMAIL PROTECTED] (Alen I. Morky)
Crossposted-To: alt.security.pgp,comp.security.misc
Subject: Re: Scramdisk: Security flaw in VxD?
Date: Sun, 09 May 1999 21:24:53 GMT

Thank you, Sarah. I'm sure Aman meant to say that too, but I guess he
forgot.

------------------------------

Date: Sun, 09 May 1999 17:34:50 -0400
From: [EMAIL PROTECTED]
Subject: Re: True Randomness & The Law Of Large Numbers

R. Knauer wrote:
> 
> On Sun, 09 May 1999 08:14:46 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
> 
> >Sure, it has a very good theoretical justification.
> 
> Then why does Feller claim that it is fundamentally incorrect to infer
> the properties of random number generation from the time average of a
> single sequence?
> 
> >The required
> >key stream properties are such that a UBP is a very good model,
> 
> The UBP model states that the outcome of a random process is dual
> valued with p=1/2. That a TRNG is dual valued comes from the fact that
> it is based on a random binary process, so that takes care of the BP
> part of the UBP. The fact that it must ideally have p=1/2 comes from
> the requirement of a flat distribution of sequences, so that takes
> care of the U part of the UBP.
> 
> Therefore claiming that a TRNG can be modeled by a UBP says nothing
> that we do not already know - it adds nothing substantial to the
> discussion.
> 
> >and the Monobit Test checks the actual data against one property of that
> >model.
> 
> And just what might that "one property" be? 1-bit bias perhaps? If so,
> then a sequence of 20,000 bits is but one sample used to determine 1
> value of that 1-bit bias.

Wrong.  20,000 bits yield 2^20,000-1 distinct non-empty subsamples
(the null set is not a sample).  Of those, 2^20,000-20,001 contain
more than one bit and so constitute samples from which one can extract
a figure for bit bias.

> 
> The Monobit Test seems to be saying that John Jones is not likely to
> be a salesman because he earns far less than the typical salesman. The
> typical salesman makes $100,000 per year and most salesmen (say 95%)
> make between $85,000 and $115,000 per year, so Jones cannot be a
> salesman since he makes only $25,000 per year.
> 
> It is indeed true that Jones earns an amount far outside the central
> part of the distribution of salesmen's earnings, and therefore he is
> not a "typical" salesman. But what does that say about all the rest of
> the people who are salesmen? How can Jone's poor earnings be a
> reflection on the typical earnings of the vast majority of salesmen?
> 
> The  Monobit Test is an attempt to characterize a random process on
> the basis of some statistical expectation applied to only one sample
> sequence. That's like saying that a herd is not likely made up of
> horses because there is one unicorn among them. Similarly, the Monobit
> Test purports that a TRNG is broken because just one of its
> sequences failed the test.
> 
> >Sure, it doesn't check serial correlation properties,
> 
> We never claimed it did. It is only a simplistic test of 1-bit bias on
> one sample.
> 
> If you were to take 10,000 such samples at random times and find the
> distribution of 1-bit biases from them, you would have a much better
> picture of the bias characteristics of the TRNG. That's because such
> an extensive test would be closer to an ensemble average than the
> simple time average of one sample, like with the Monobit Test.

If that is your preferred analysis, divide the 20,000 bits into 10,000
two-bit sequences and analyse the bit bias inherent in each.  The FIPS
test indicates that a single sample is adequate if it is large enough.
20,000 bits is defined to be large enough.

Either you do not understand this construction, or you are refusing to
deal with it.  Which is it?
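
For reference, the monobit test itself is tiny.  A sketch straight
from the FIPS 140-1 thresholds (a 20,000-bit sample passes if the
count of ones X satisfies 9654 < X < 10346); the software PRNG here
merely stands in for a hardware source:

    # FIPS 140-1 monobit test on a 20,000-bit sample.
    import random

    def monobit_test(bits):
        assert len(bits) == 20000, "defined for exactly 20,000 bits"
        ones = sum(bits)
        return 9654 < ones < 10346

    sample = [random.getrandbits(1) for _ in range(20000)]
    print("monobit:", "pass" if monobit_test(sample) else "fail")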

> 
> Here is a proscription for a meaningful test: Take a 1,000-bit sample
> and calculate the 1-bit bias for it. Now repeat that 1,000 times at
> random and get 1,000 1-bit bias values. Plot those as a distribution
> and see how it behaves. Calculate an expectation and a variance, and
> see how they compare to the UBP model parameters.
> 
> Then you will gain a real insight into the TRNG process, something you
> cannot possibly do with a single sample of 20,000 bits and just one
> value of the 1-bit bias.
> 
> >but it is just one of a battery of tests; the other tests specified
> >in FIPS-140 check other properties.
> 
> There are a total of 4 such tests, the Monobit Test being the first.
> After we arrive at a consensus on the Monobit Test, if ever, we need
> to consider the merits of the remaining tests one at a time.
> 
> I have mentioned that the Long Run Test appears to have some
> theoretical justification because a floating or shorted output is a
> possible mechanism of failure for a TRNG. But let's discuss each test
> in due time. If we cannot resolve the issues for the first test, there
> is little reason to go onto a discussion of the remaining tests.
> 
> >If you want to design a test that
> >checks all possible properties at once, be my guest -- it has been
> >suggested (off-line) that Maurer's universal test might be suitable.
> 
> I do not believe it is possible to design a single test. Each test is
> a measure of the strength against a particular attack, and since there
> are many different attacks against a stream cipher, I conclude that
> there must be many different tests.
> 
> >Unfortunately, by then the brokenness of the key generator has
> >allowed thousands of sensitive messages to be read by enemy
> >cryptanalysts.
> 
> Not necessarily, if you buffer the keystreams and use them only after
> testing. Memory and disk space are cheap.
> 
> >So, what can you do with only 20,000 sequential bits?
> 
> Beats me.
> 
> I do not think you can do anything meaningful with only one such
> sequence. Perhaps you could break the sequence into 1,000 samples of
> 20 bits each and use them to plot the distribution and calculate the
> parameters of the UBP model. But 20 bits does not seem all that many
> to calculate a 1-bit bias and 1,000 samples does not seem all that
> many for getting a true distribution.
> 
> Perhaps someone can give us the calculations for how large any single
> sample must be and how many such samples we would need to arrive at
> "reasonable certainty" regarding the distribution of 1-bit biases and
> UBP model parameters.

FIPS-140 did.  The adequate sample size is 20,000 bits.  READ IT.

> 
> But that is not what the Monobit Test is doing. It is calculating a
> single 1-bit bias value from a single sequence of 20,000 bits and
> claiming that a TRNG is broken if that single bias value is outside
> some statistical range of expected values. That is snake oil of the
> purest kind.

It is elementary statistics.  If it appears to you to be snake oil, I
have to conclude the defect is in the reader, not the material.
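
The elementary statistics, worked out (standard binomial facts, my
arithmetic): for 20,000 fair bits the count of ones has mean 10,000
and standard deviation sqrt(20000 * 1/4) ~= 70.7, so the FIPS bound
of 10,346 sits about 4.9 standard deviations out.  A correctly
functioning TRNG essentially never trips it by chance.

    # How far out the FIPS 140-1 monobit bound lies, in sigma units.
    import math
    n = 20000
    sd = math.sqrt(n * 0.5 * 0.5)        # ~= 70.71
    z = (10346 - n * 0.5) / sd           # ~= 4.89
    print("sd = %.2f, bound at %.2f sigma" % (sd, z))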

> 
> >> Any one 10,000 bit key can be anomalous - that is what Feller and
> >> Li & Vitanyi have been trying to tell you.
> 
> >Gee, we don't need them to tell us that, because it is exceedingly
> >obvious.
> 
> And your statement is exceedingly smug - which is not at all
> surprising coming from you.
> 
> Nothing about true randomness is "exceedingly obvious" - except to an
> idiot.
> 
> Recall how statisticians screwed up when Feller confronted them with
> the results of his random walk simulations. They could not believe
> that only 8 paths out of 10,000 would end up at the origin. After all,
> the origin represents the expected value of the random walk variable,
> and most paths are contained within a narrow range of that expected
> value. Therefore, according to those statisticians who considered
> randomness to be "exceedingly obvious", it is not possible for the
> random walk variable to be at the origin only 8 times out of 10,000
> trials.

You keep referring to this poll as if it were indicative.  It is
stupidity of the first rank.  The art of designing polls is well
developed.  Feller probably cast his questions to PRODUCE the desired
answer (the wrong one).  This kind of thing is trivially easy to do.

Why are you lending weight to apocryphal, anecdotal issues and ignoring
the real mathematical issue?  Perhaps because the anecdotal stories
make good stories and the math leaves you no room to attract attention?

> 
> And mathematicians once proved that bumble bees could not fly.
> 
> >The FIPS-140 key-stream-monitoring tests were designed so
> >that anomalies capable of generating a spurious warning in a
> >correctly operating generator would be rare enough to not be much
> >of a nuisance in the anticipated applications.
> 
> Thereby giving a false sense of security.
> 
> Tests on a single sequence can either give a lot of false alarms if
> they are too tight or can give a false sense of security if they are
> too loose.
> 
> >"I see no ensemble here."
> 
> I take that to be a quote of Herman Rubin.
> 
> Herman Rubin is entitled to his opinion on the matter of the existence
> of ensembles, but many people in the sciences and mathematics do refer
> to them, if only conceptually. Feller himself refers to the concept of
> an ensemble throughout his books.
> 
> In our case, the ensemble represents the collection of all possible
> sequences of finite length that a TRNG can generate. I see no harm in
> using that as a conceptual framework for our discussions.
> 
> >Just a specific instance of a generator
> >which might or might not be functioning properly.
> 
> Well, which is it? Is it functioning properly or not?
> 
> You cannot know, if all you take is one sample.

We cannot know at all according to your original thesis.  Funny how
you've devolved from arguing that all sampling is nonsense to
arguing that the FIPS sampling size is inadequate.

Just what size *IS* adequate according to Knauer?

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
