Cryptography-Digest Digest #381, Volume #9       Mon, 12 Apr 99 22:13:04 EDT

Contents:
  cryptlib 2.1 crypto toolkit final beta released (Peter Gutmann)
  Re: Encrypting Fields in Microsoft Access Database (Dworkin of Amber)
  Re: LFSR polynomial testing ([EMAIL PROTECTED])
  4th attempt => SNAKE (with less oil :-) (Peter Gunn)
  Adequacy of FIPS-140 ("Trevor Jackson, III")
  Re: True Randomness & The Law Of Large Numbers ("Trevor Jackson, III")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Peter Gutmann)
Crossposted-To: alt.security,comp.security.misc
Subject: cryptlib 2.1 crypto toolkit final beta released
Date: 13 Apr 1999 00:49:57 GMT




I've just uploaded what had better be the final beta of cryptlib 2.1.  You can
get the source code (Unix/Windows/DOS/whatever) + precompiled Win16 and Win32
DLLs at ftp://ftp.franken.de/pub/crypt/cryptlib/beta/beta0413.zip and the
230-page manual at ftp://ftp.franken.de/pub/crypt/cryptlib/beta/manual.pdf. The
cryptlib home page is at http://www.cs.auckland.ac.nz/~pgut001/cryptlib/.

cryptlib provides the ability to create and read S/MIME messages (with real
encryption, not the usual RC2/40), a reasonably complete PKIX and X.509v3
certificate handling implementation (YMMV), and various other useful features
like key databases, a certificate trust manager, automated checking of certs
against CRL contents, LDAP directory access, and other odds and ends - grab a
copy of the manual for more information.  The main design goal was ease of use;
for example, here's what it takes to create a signed S/MIME message:

  /* Create an envelope for the message and push in the signing key */
  cryptCreateEnvelopeEx( &cryptEnvelope, CRYPT_FORMAT_SMIME, CRYPT_USE_DEFAULT );
  cryptAddEnvComponentNumeric( cryptEnvelope, CRYPT_ENVELOPE_SIGNATURE,
                               signatureKey );

  /* Push in the message data and pop out the signed result */
  cryptPushData( cryptEnvelope, message, messageSize, &bytesIn );
  cryptPushData( cryptEnvelope, NULL, 0, NULL );
  cryptPopData( cryptEnvelope, buffer, bufferSize, &bytesOut );

  /* Clean up */
  cryptDestroyEnvelope( cryptEnvelope );

The manual contains examples of how you'd integrate this into mailers like
Eudora and interfaces like MAPI and c-client to provide full-strength S/MIME
encryption.  It was developed in the free world so it's not crippled or only 
available in the US.

Peter.


------------------------------

From: [EMAIL PROTECTED] (Dworkin of Amber)
Subject: Re: Encrypting Fields in Microsoft Access Database
Date: Mon, 12 Apr 1999 21:16:28 -0400

Thanks for all the help.

I have now encrypted the record number of the field (as an IV), and XORed it
with the plaintext.  In addition, someone emailed me about the Sapphire (sp?)
algorithm, which can have a 1-byte output block (17 bytes in, 17 bytes out,
etc.).
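
In case it helps anyone doing the same thing, here is roughly what that
construction looks like.  The keyed mixer below is only a stand-in for a
real cipher, and the key, record number and field contents are invented
for the example:

  /* Sketch: derive a keystream from the secret key and the record
     number (the per-record IV), then XOR it over the field. */
  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  /* Toy keyed mixer -- NOT a real cipher, just enough to make the
     example run; in practice this would be the block or stream cipher
     keyed with the database's secret key. */
  static uint64_t keystream_block(uint64_t key, uint64_t record, uint64_t ctr)
  {
      uint64_t z = key ^ (record * 0x9E3779B97F4A7C15ULL) ^ ctr;
      z ^= z >> 30; z *= 0xBF58476D1CE4E5B9ULL;
      z ^= z >> 27; z *= 0x94D049BB133111EBULL;
      return z ^ (z >> 31);
  }

  /* XOR the field with the keystream; calling this twice with the same
     key and record number decrypts, since XOR is its own inverse. */
  static void crypt_field(uint64_t key, uint64_t record,
                          unsigned char *buf, size_t len)
  {
      size_t i;
      for (i = 0; i < len; i++) {
          uint64_t ks = keystream_block(key, record, i / 8);
          buf[i] ^= (unsigned char)(ks >> (8 * (i % 8)));
      }
  }

  int main(void)
  {
      unsigned char field[] = "Jane Doe, $42,000";
      size_t len = strlen((char *)field);
      uint64_t key = 0x0123456789ABCDEFULL;

      crypt_field(key, 1037, field, len);   /* encrypt record 1037's field */
      crypt_field(key, 1037, field, len);   /* same call decrypts it */
      printf("%s\n", field);
      return 0;
  }

The nice property is that every record gets a different keystream without
having to store a separate IV, as long as a record number is never reused
for different contents under the same key.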

With the field encrypted two different ways with two different algorithms, 
I feel fairly sure that the field will be secure against moderately 
powerful attacks.  :)

Thanks for the help

James.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: LFSR polynomial testing
Date: Tue, 13 Apr 1999 01:16:30 GMT

In article <[EMAIL PROTECTED]>,
  "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > ... I was wondering if anyone had a program to test for valid
> > LFSR polynomials (LFSRs of any bit length)?
>
> What do you mean by "valid"?  Irreducible?
>

Primitive (is that the same)?  Basically I am looking for a program that will
let me say '127 bits' and have it list all the valid polynomials.
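
As I understand it, primitive is the stronger condition: an irreducible
polynomial whose roots generate the whole multiplicative group, which is
what gives the LFSR its maximal period of 2^n - 1.  For small bit lengths
a brute-force check is easy, since you can just run the register and
measure the cycle length.  A rough sketch (Fibonacci LFSR, taps given as
a mask with bit (n - e) set for each term x^e of the polynomial, constant
term implied):

  #include <stdio.h>
  #include <stdint.h>

  static int parity(uint64_t v)
  {
      v ^= v >> 32; v ^= v >> 16; v ^= v >> 8;
      v ^= v >> 4;  v ^= v >> 2;  v ^= v >> 1;
      return (int)(v & 1);
  }

  /* Returns 1 iff the n-bit LFSR (2 <= n <= 63) runs through all
     2^n - 1 nonzero states, i.e. the feedback polynomial is primitive.
     Only feasible for small n; the loop takes 2^n - 1 steps. */
  static int is_primitive(uint64_t taps, int n)
  {
      uint64_t maxper = (1ULL << n) - 1;
      uint64_t state = 1, count = 0;

      do {
          uint64_t fb = (uint64_t)parity(state & taps);
          state = (state >> 1) | (fb << (n - 1));
          count++;
      } while (state != 1 && count <= maxper);

      return count == maxper;
  }

  int main(void)
  {
      /* x^4 + x^3 + 1 is primitive; x^4 + x^2 + 1 = (x^2 + x + 1)^2 is not */
      printf("x^4+x^3+1: %d\n", is_primitive(0x3, 4));
      printf("x^4+x^2+1: %d\n", is_primitive(0x5, 4));
      return 0;
  }

At 127 bits that loop is hopeless (the period itself is 2^127 - 1), so you
would test algebraically instead.  The good news at that size is that
2^127 - 1 is prime, so irreducible and primitive coincide; the bad news is
that there are phi(2^127 - 1)/127 of them, roughly 10^36, so listing them
all isn't really an option anyway.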

Tom

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: Peter Gunn <[EMAIL PROTECTED]>
Subject: 4th attempt => SNAKE (with less oil :-)
Date: Tue, 13 Apr 1999 02:34:45 +0100

Looks like a couple of small problems with SNAKE...

* it doesn't actually 'authenticate' the client or
  server, so the name's a bit misleading.

* the server returning H((y1^x2)%p) serves no
  purpose at all.

The simplest way to sort this out is to change the
name and drop the extra fluff, which basically leads me
back to where I started (which is no bad thing :-)

Since this really is just a Diffie-Hellman and a hash,
I'm reasonably confident that this does avoid the
man-in-the-middle and is OK with short U & P.

IMHO the main lesson here is that crypto is far more
difficult than it looks, and if you're looking for
something to use in a non-trivial real world application,
always try to get something that's been designed and
scrutinized by experienced people (which basically means
use SPEKE or SRP-3 in this case).  However, that
having been said, application development is becoming
more and more of an integration-oriented task these
days, and if you're gonna mess with crypto, sooner or
later you're going to have to combine hashes, key
exchanges, symmetric & asymmetric encryption and so on,
and I think experimenting is the only effective
way to learn to do that... the hard bit is trying not to
add extra confusion to an already obfuscated subject
by getting carried away :-)

All comments welcome...

PG.

=======================================================

SNAKE => Simple Non-Authenticating Key Exchange

x1 is a random number (<p-1)
x2 is a random number (<p-1)
g is an agreed 'generator' (<p-1)
A is the Client.
B is the Server.
H() is a one way crypto hash function (SHA or similar)
p is a fixed large safe prime
key is the resulting shared key
U is the userid (short, probably <=8 chars)
P is the user's password (short, probably <=8 chars)

[ NOTE: ^ means to-the-power-of, % means mod ]

1) A->B: y1=(g^x1)%p, U
2) B->A: y2=(g^x2)%p
3) B: key=H(P,(y1^x2)%p)
4) A: key=H(P,(y2^x1)%p)

What happens is...

1) The client sends its public DH value to the Server,
   along with a user identifier.
2) The server looks up a list of valid users to find P
   and disconnects if the user is not identified,
   otherwise it returns its public DH value.
3) Server calculates key=H(P,(y1^x2)%p)
4) Client calculates key=H(P,(y2^x1)%p)

* Short usernames and passwords are OK.

* It really is just a standard Diffie-Hellman with a hash,
  so no patent, copyright, or crypto export issues. (AFAIK
  you can use this for whatever you want :-)

* Avoids the man-in-the-middle attack.

* If you don't want to advertise the username you could
  use a hash of this value instead (though this doesn't
  add much in the way of security).

* It's trivially simple.
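
To make the flow completely concrete, here is a toy walk-through of steps
1 to 4.  The numbers are deliberately tiny (p=23, g=5, fixed x1 and x2)
and a simple FNV-1a mix stands in for H(), so it shows the message flow
and nothing more:

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  /* (g^x)%p by square-and-multiply; fine for toy-sized p */
  static uint64_t modpow(uint64_t g, uint64_t x, uint64_t p)
  {
      uint64_t r = 1;
      g %= p;
      while (x) {
          if (x & 1) r = (r * g) % p;
          g = (g * g) % p;
          x >>= 1;
      }
      return r;
  }

  /* Toy stand-in for H(P,n): 32-bit FNV-1a over P and the DH value */
  static uint32_t toy_hash(const char *P, uint64_t n)
  {
      uint32_t h = 2166136261u;
      size_t i, plen = strlen(P);
      for (i = 0; i < plen; i++) { h ^= (uint8_t)P[i]; h *= 16777619u; }
      for (i = 0; i < 8; i++)    { h ^= (uint8_t)(n >> (8 * i)); h *= 16777619u; }
      return h;
  }

  int main(void)
  {
      const uint64_t p = 23, g = 5;       /* toy safe prime and generator  */
      const char *P = "secret";           /* password known to A and B     */
      uint64_t x1 = 6, x2 = 15;           /* random and secret in real use */

      uint64_t y1 = modpow(g, x1, p);     /* 1) A->B: y1=(g^x1)%p, U */
      uint64_t y2 = modpow(g, x2, p);     /* 2) B->A: y2=(g^x2)%p    */

      uint32_t keyB = toy_hash(P, modpow(y1, x2, p));  /* 3) B: H(P,(y1^x2)%p) */
      uint32_t keyA = toy_hash(P, modpow(y2, x1, p));  /* 4) A: H(P,(y2^x1)%p) */

      printf("A: %08x  B: %08x  %s\n", keyA, keyB,
             keyA == keyB ? "(match)" : "(mismatch)");
      return 0;
  }

In real use p is a large safe prime, x1 and x2 are fresh random secrets,
and H() is SHA or similar as listed above.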




------------------------------

Date: Mon, 12 Apr 1999 09:43:29 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Adequacy of FIPS-140

Several of the questions posed below address the same issues in
different forms.

Herman Rubin wrote:

[snip]

> 
> I would not accept the stated standards as adequate for statistical
> purposes.  I want much better randomness that they promise, and
> I would test much harder, and I still would not accept the output
> of a single device stream, as it is impossible to test it adequately.
> Even if a decay source is "perfect", the transcribing device has its
> problems, and these are not the problems of dead time.

There are several references to adequacy in the above paragraph.  Do you
have a particular standard, hopefully quantitative, that defines the
boundary between adequate and inadequate?

A second question relates to the utility of statistical testing.  Your
rejection of the stated standards (FIPS-140 I presume) does not indicate
a rejection of the testing process, but of the minimum criteria defined
by those standards.  It appears to me that you would accept tests as
useful, but with tighter standards.  Is this correct?
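
For reference, the sort of minimum criterion at issue is the FIPS 140-1
monobit test: take 20,000 bits from the generator, count the ones, and
pass the sample if the count falls strictly between 9,654 and 10,346
(the poker, runs and long-run tests use the same 20,000-bit sample).  A
minimal sketch, with rand() standing in for the device under test:

  #include <stdio.h>
  #include <stdlib.h>

  /* FIPS 140-1 monobit test on a 20,000-bit (2,500-byte) sample */
  static int monobit_pass(const unsigned char *bits)
  {
      int ones = 0, i, b;
      for (i = 0; i < 2500; i++)
          for (b = 0; b < 8; b++)
              ones += (bits[i] >> b) & 1;
      return ones > 9654 && ones < 10346;
  }

  int main(void)
  {
      unsigned char sample[2500];
      int i;
      for (i = 0; i < 2500; i++)          /* rand() is only a stand-in; */
          sample[i] = (unsigned char)(rand() & 0xFF);  /* it is not a TRNG */
      printf("monobit: %s\n", monobit_pass(sample) ? "pass" : "fail");
      return 0;
  }

Tightening the standard in this sense would presumably mean narrowing the
acceptance interval and lengthening the sample.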

I believe there is no serious dispute on the value of RNG diversity. 
Since independent device streams are not expensive or difficult, any
serious user should demand a composite stream.
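
By composite I mean nothing fancier than a bytewise XOR of the
independent device streams, along these lines:

  #include <stdio.h>
  #include <stddef.h>

  /* XOR k independent source streams, byte by byte, into out[] */
  static void combine(unsigned char *out, const unsigned char *srcs[],
                      size_t k, size_t len)
  {
      size_t i, j;
      for (i = 0; i < len; i++) {
          out[i] = 0;
          for (j = 0; j < k; j++)
              out[i] ^= srcs[j][i];
      }
  }

  int main(void)
  {
      /* stand-in data; in practice each array is read from a device */
      const unsigned char dev1[4] = { 0x12, 0x34, 0x56, 0x78 };
      const unsigned char dev2[4] = { 0x9a, 0xbc, 0xde, 0xf0 };
      const unsigned char *srcs[2] = { dev1, dev2 };
      unsigned char out[4];

      combine(out, srcs, 2, 4);
      printf("%02x %02x %02x %02x\n", out[0], out[1], out[2], out[3]);
      return 0;
  }

As long as at least one of the streams is unpredictable and independent
of the others, the XOR is at least as unpredictable, which is the whole
appeal.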

Given a composite stream whose component streams pass the "much harder
tests" mentioned above, is there any theoretical utility in testing the
composite output?  It appears to me that the engineering considerations
are going to demand such tests, but are they necessary in the ideal case
as well?

[snip]

> >We know that there can be no statistical tests that pass judgement on
> >how good a TRNG is. The candidate TRNG could pass the tests with a
> >99.999% significance and still be a very poor TRNG.
> 
> This is correct.  What should be done instead is to construct tests
> which poor RNGs would fail with high probability.  This is not that
> hard to do, but forces large sample sizes.   A good approximation to
> a TRNG (notice that I did not say a TRNG) should pass it with large
> probability, and a poor approximation should fail it with large
> probability.  I do not think this possible for a direct use of a
> single generated sequence; the sample size needed would be well
> beyond feasibility.

The difference between a test for good/not-good and a test for
bad/not-bad appears to be a linguistic sophistry.  The fact that a test
cannot indicate strength, but can indicate weakness appears to be well
understood and agreed upon.  The fact that lack of detected weakness
does not imply strength appears to be the basis for much heat but little
light.  Do you take issue with either of these propositions?

Please indicate, in general if not specific terms, how you conclude that
"adequate" sample sizes are infeasible.  What standard of adequacy are
you assuming, and how do you derive the necessary sample size from that
assumption?

[snip]

> 
> >Testing of a TRNG is not a measurement in the traditional sense.
> >Randomness and non-randomness are nominal entities, not real measures
> >along an interval.
> 
> It is, if one has a long enough sequence, or considers that one
> has a poor measurement.

How long is long enough?

> But we are not going to get sequences of length 10^30, or even 10^20.

Why not?  Memory of all kinds is cheap.

> Sequences of 20,000 are too short for any kind of consideration.

What standard do these sequences fail to meet?

[snip]

> The full model is that there is a joint distribution of the
> observations.  This model is huge.  What one can test for is that
> the behavior of the physical process is close to what is wanted,
> ASSUMING that it changes slowly, and that long-term dependencies can
> be ignored.  It is possible to come up with tests which will be very
> likely to accept a sequence which is random within an error of .001,
> and very likely to fail for a sequence which has an error of .01.

Naively, I interpret this to indicate that we would expect to reject one
out of 1000 samples from good RNGs, and accept 1 out of 100 samples from
bad RNGs.  I suppose I need to interpret the phrase "a sequence which
is random within an error of N".  What did you mean?


> If we then test something like eight sequences produced by different
> devices, or the same device at different times, and accept if at
> least 5 of them pass, and XOR the sequences to get one sequence, I
> would feel rather confident in using that sequence as having the
> desired probabilities of a TRNG within about one part in 10^10.
> 
> Notice that the tests would consider the probabilities of both
> kinds of errors, and use a defined measure of "good" and "bad".

It appears that the one in 10^10 in the above was calculated as 0.01^5,
where 0.01 was the rejection threshold and 5 was the number of component
sequences.  Is this a correct interpretation?

Does this scale?  I.e., could I merge 20 such sequences to get one part
in 2^128?
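
(If that reading is right, 0.01^20 = 10^-40, and 2^-128 is about
2.9 x 10^-39, so twenty such sequences would just clear that target.)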

------------------------------

Date: Mon, 12 Apr 1999 09:48:51 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers

R. Knauer wrote:
> 
> On 12 Apr 1999 12:23:00 -0500, [EMAIL PROTECTED] (Herman
> Rubin) wrote:
> 
> >If one does not believe in physical probability, one does not believe
> >in physical probability.
> 
> Can I quote you on that. :-)
> 
> >Einstein claimed, "God does not play dice with the universe."
> 
> And Big Al was wrong, too. The Universe is just one big crap shoot at
> the quantum level when it comes to measurement. There are no hidden
> variables. Locality is not observed. Systems do become entangled over
> super-luminal distances.
> 
> The reason is that the result of a given measurement cannot be known
> in advance, because in order to determine it, one would have to
> measure the state of the system, which would then change the result
> one determined. Quantum Randomness is intimately tied in with the
> Unknown in general meta-mathematical terms. See Chaitin.
> 
> >I would not accept the stated standards as adequate for statistical
> >purposes.
> 
> I smell a consensus beginning to form here, sports fans.

A single agreement does not imply consensus.  Consensus implies lack of
serious dispute.  'ppears to me we still have plenty of that.

> 
> Are the rest of you taking notes - or are you going to accuse me again
> of having fabricated the consensus once it emerges?

I won't accuse you of fabrication; just outrageous distortion.

> 
> >I want much better randomness that they promise, and
> >I would test much harder, and I still would not accept the output
> >of a single device stream, as it is impossible to test it adequately.
> 
> That is what I have been saying all along, based on the comments of Li
> & Vitanyi, Kolmogorov, Feller and Williams & Clearwater.
> 
> >Even if a decay source is "perfect", the transcribing device has its
> >problems, and these are not the problems of dead time.
> 
> What other problems do you consider significant besides dead time? We
> assume that pulse pileup and discriminator problems are lumped into
> what we call the dead time problem, which is really a general
> detection problem not limited to quenching the detector.
> 
> >The chi squared test was originally a test for the statistical fit of
> >cell frequencies to theoretical probabilities.  This test does not
> >have exactly the chi squared distribution under the hypothesis, but
> >does have this distribution asymptotically.
> 
> But what is its applicability to randomness?
> 
> >>It is tempting to equate statistical significance with closeness to
> >>being a TRNG, as if there is some kind of measurement being conducted.
> 
> >This is the typical misuse of statistics by those who have not had
> >decent statistical theory.
> 
> Understand that I did not mean to imply that I agreed with that
> statement.
> 
> In fact I most assuredly do not agree with it, because
> randomness/non-randomness is a nominal consideration and cannot be
> measured on some scale or interval, like the salaries of teachers in
> New York or the skid distance of smow tires.
> 
> > I suspect the statistics book you have
> >cited conveys the same impression.
> 
> Actually it did not in any way. Triola avoids the issue of randomness
> testing like the plague.
> 
> >The term "statistical significance"
> >should be eliminated from the vocabulary; its meaning is limited
> >and not relevant to what one is doing.
> 
> I'll drink to that!
> 
> >This is correct.  What should be done instead is to construct tests
> >which poor RNGs would fail with high probability.  This is not that
> >hard to do, but forces large sample sizes.
> 
> Is there an echo in here? That is exactly what I said a couple weeks
> ago. I smell a consensus building.
> 
> >A good approximation to
> >a TRNG (notice that I did not say a TRNG)
> 
> I think you may have typoed since you DID say TRNG.
> 
> But I get your point. We are talking about classical devices, and they
> can never meet the TRNG specification perfectly, any more than a round
> object can meet the circularity specification perfectly.
> 
> > should pass it with large
> >probability, and a poor approximation should fail it with large
> >probability.  I do not think this possible for a direct use of a
> >single generated sequence; the sample size needed would be well
> >beyond feasibility.
> 
> Yeah, like of order the size of the ensemble, perhaps or some
> reasonable fraction.
> 
> When you run simulations, how large is the sample in general terms
> compared to some typical parameter of the process you are trying to
> simulate? How do you know that you have an adequate simulation in
> terms of sample size?
> 
> >>Testing of a TRNG is not a measurement in the traditional sense.
> >>Randomness and non-randomness are nominal entities, not real measures
> >>along an interval.
> 
> >It is, if one has a long enough sequence, or considers that one
> >has a poor measurement.
> 
> I meant for finite sequences. An infinite sequence must be Borel
> normal, which is a quantitive entity. (But see below for further
> comments.)
> 
> >But we are not going to get sequences of
> >length 10^30, or even 10^20.  Sequences of 20,000 are too short for
> >any kind of consideration.
> 
> I have been saying that exact thing for some month now. I am pleased
> that finally a True Expert is concurring with me.
> 
> >The "ensemble" is a construct in the absence of a belief in
> >probability.
> 
> Isn't that a bit harsh? When I studied statistical mechanics long ago,
> I recall using ensemble averages along with probability concepts, e.g.
> the "grand canonical ensemble".
> 
> >It does not.  It involves a finite sequence.  The "population of
> >sequences" is a purely mathematical construct.
> 
> Yes, to the extent that the ensemble is a purely mathematical concept.
> 
> >The full model is that there is a joint distribution of the
> >observations.
> 
> To make absolutely certain that we all understand exactly what you
> mean by the term "joint distribution", please give a concise
> defintion. I know it to mean one thing, namely P(A|B) which is the
> probability of A given B. If that is what you mean by it, why is it so
> important?
> 
> >This model is huge.
> 
> How huge is it?
> 
> >What one can test for is that
> >the behavior of the physical process is close to what is wanted,
> >ASSUMING that it changes slowly,
> 
> That is the so-called "adiabatic hypothesis", which is also used in
> quantum mechanics for certain calculations.
> 
> >and that long-term dependencies can
> >be ignored.  It is possible to come up with tests which will be very
> >likely to accept a sequence which is random within an error of .001,
> >and very likely to fail for a sequence which has an error of .01.
> >If we then test something like eight sequences produced by different
> >devices, or the same device at different times, and accept if at
> >least 5 of them pass, and XOR the sequences to get one sequence, I
> >would feel rather confident in using that sequence as having the
> >desired probabilities of a TRNG within about one part in 10^10.
> 
> I wonder if you could you use a quantum computer (assuming one
> existed) programmed to calculate true random numbers as the standard,
> and if so how would you go about the tests in general terms?
> 
> >Notice that the tests would consider the probabilities of both
> >kinds of errors, and use a defined measure of "good" and "bad".
> 
> I agree that those entities are nominal, if that is what you are
> saying. Or maybe with this more elaborate methodology you are saying
> that one CAN construct a scale for randomness.
> 
> >>Are you agreeing with me?
> 
> >No.
> 
> Oh, well - I will take whatever agreement I can get when I can get it.
> 
> After all, I am only a lowly Informed Layman (tm), so I cannot expect
> complete conformity with True Experts all the time.
> 
> > The law of large numbers is meaningful, but frequency
> >interpretations of probability are inadequate.
> 
> I have been saying that from the very beginning, ever since I read Li
> & Vitanyi and they said that. Even Kolmogorov said that, and so did
> Feller.
> 
> Folk, you are watching a consensus in the formation.

Riiiiiight.

How Clintonesque.  Just declare victory and go home.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
