Cryptography-Digest Digest #33, Volume #9 Thu, 4 Feb 99 11:13:03 EST
Contents:
sci.math.research sci.crypt.research (Herman J.J. te Riele)
Re: I hate to bring up PRNGs again... (Soeren Mors)
Implementation of SHA1 ("Hudson Barton")
Re: Who will win in AES contest ?? ([EMAIL PROTECTED])
Factorization of RSA-140 with the Number Field Sieve (Herman J.J. te Riele)
Re: David _R._ Scott
Re: Cipher used by iomega in ZIP products ? (Paul Gover)
Re: Random numbers generator and Pentium III (Patrick Juola)
Re: Sanity check on authentication protocol ([EMAIL PROTECTED])
Re: On a Method of Session Key Generation (revised) (Mok-Kong Shen)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Herman J.J. te Riele)
Subject: sci.math.research sci.crypt.research
Date: Thu, 4 Feb 1999 13:09:34 GMT
Acknowledgements are due also to the Medicis center at Ecole Polytechnique,
Palaiseau, France, for the use of several of its computing resources
(http://medicis.polytechnique.fr).
------------------------------
From: Soeren Mors <[EMAIL PROTECTED]>
Subject: Re: I hate to bring up PRNGs again...
Date: 04 Feb 1999 15:09:15 +0100
Brett W <[EMAIL PROTECTED]> writes:
> Hi
>
> Just a quick newbie question:
>
> Although you may have a whole tonne of things that are pseudo-random and
> not that high on entropy, could you combine several of these independent
> devices together to make stronger PRNGs?
>
> Probably not is my guess, but I'd like an experienced response.
It's not possible to increase the entropy by a deterministic
process. You might succeed in making the resulting PRNG harder
to break.
--
Soeren Mors
Student of Computer Science at DAIMI [EMAIL PROTECTED]
Student programmer at Cryptomathic A/S [EMAIL PROTECTED]
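One common way to act on this observation is to pool several weak sources through a hash: the output still cannot hold more entropy than went in, but an attacker must now model every source at once. A sketch, with Python's hashlib standing in for the hash and made-up placeholder "sources":

```python
import hashlib

def combine(sources):
    # Hash the concatenation of several weak inputs.  The output can
    # never contain more entropy than the inputs held in total, but an
    # attacker must now model every source at once to predict it.
    h = hashlib.sha1()
    for s in sources:
        h.update(s)
    return h.digest()

# Made-up low-entropy stand-ins for "a whole tonne of things".
weak_a = b"\x00\x01" * 8
weak_b = b"timestamp:917449755"
pooled = combine([weak_a, weak_b])
assert len(pooled) == 20  # a single 160-bit SHA-1 output
```

The pooled value is only as unpredictable as the total entropy of the inputs, so this hardens a PRNG against analysis but does not manufacture randomness.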
------------------------------
From: "Hudson Barton" <[EMAIL PROTECTED]>
Subject: Implementation of SHA1
Date: Thu, 04 Feb 1999 09:13:49 -0500
I'm doing an implementation of SHA1. Unfortunately I don't have much
documentation for it, so I need to ask a question. In my sample code I
notice that a file is read into the SHA1 buffer 16384 bytes at a time,
and I wonder whether that size is standard or mandatory, or whether I can
choose any segment size I want.
Is there any documentation available that would address this issue, and
if not, does anyone just know the answer?
Please reply to [EMAIL PROTECTED]
Thanks,
Hudson Barton
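For what it's worth, the 16384-byte read is just an I/O buffer size: SHA-1 itself pads the message and splits it into 64-byte blocks internally, so any segmentation of the input yields the same digest. This is easy to check with a short sketch, using Python's hashlib as a stand-in reference implementation:

```python
import hashlib

data = b"abc" * 5000  # arbitrary test input

def sha1_chunked(data, chunk_size):
    # Feed the message to SHA-1 in segments of the given size; the
    # algorithm buffers and processes 64-byte blocks internally, so
    # the segment size is purely an I/O choice.
    h = hashlib.sha1()
    for i in range(0, len(data), chunk_size):
        h.update(data[i:i + chunk_size])
    return h.hexdigest()

assert sha1_chunked(data, 16384) == sha1_chunked(data, 100)
assert sha1_chunked(data, 16384) == hashlib.sha1(data).hexdigest()
```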
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Who will win in AES contest ??
Date: Thu, 04 Feb 1999 10:23:11 GMT
In article <[EMAIL PROTECTED]>,
Sammy <[EMAIL PROTECTED]> wrote:
> David Crick wrote:
>
> > I'm in contact with Jim Foti @ NIST regarding the public availability
> > of submitted papers.
> >
> > At the moment it looks like NONE of the AES2 papers (accepted or not)
> > will be available from NIST until *AFTER* the end of the Round 1
> > comment period, when ALL Round 1 comments will be published.
> >
> > This seems a little silly, as the pre- and post-AES2 papers and the
> > discussions arising from them would be of general use to everyone
> > who intends to submit comments for Round 1.
> >
> > As I said, I'm talking to Jim about this at the moment, but would
> > like to get the general feeling of those on sci.crypt on this matter.
> >
> > David.
>
> The submitters sign a non-exclusive copyright agreement so NIST can
> publish the papers. Since it is non-exclusive, authors may publish the
> papers at any time independently of NIST. However, many authors work in
> large organizations, so an internal team consensus would be needed for
> such publication. Therefore, you should expect that many papers will not
> be published in advance of the conference in Rome on March 22. They are
> not eager to pre-announce their results in such an undignified forum as
> sci.crypt . For maximum impact, they will pronounce their findings while
> in the spotlights of the mainstream press in Rome.
>
Me so happy!
For a while I was afraid I was posting on a "dignified forum". I guess
now that it is really an "undignified forum" it will actually be a
better and more productive forum. By the way, I won't pronounce any
findings in the spotlight of the mainstream press in Rome. You will
have to read it here.
Dave Scott
http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
From: [EMAIL PROTECTED] (Herman J.J. te Riele)
Subject: Factorization of RSA-140 with the Number Field Sieve
Date: Thu, 4 Feb 1999 09:52:24 GMT
On February 2, 1999, we found that
RSA-140 =
2129024631825875754749788201627151749780670396327721627823338321538194\
9984056495911366573853021918316783107387995317230889569230873441936471
can be written as the product of two 70-digit primes:
3398717423028438554530123627613875835633986495969597423490929302771479
*
6264200187401285096151654948264442219302037178623509019111660653946049
Primality of the factors was proved with the help of two different primality
proving codes. An Appendix gives the prime decompositions of p +- 1.
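Anyone can independently confirm the result by multiplying the reported factors back together; a sketch in Python, whose arbitrary-precision integers make this a few lines:

```python
# The two 70-digit factors reported above, and RSA-140 itself.
p = 3398717423028438554530123627613875835633986495969597423490929302771479
q = 6264200187401285096151654948264442219302037178623509019111660653946049
n = int(
    "2129024631825875754749788201627151749780670396327721627823338321538194"
    "9984056495911366573853021918316783107387995317230889569230873441936471"
)

# Multiplying the factors back together is a cheap independent check
# of the factorization (primality of p and q is a separate matter).
assert p * q == n
assert len(str(p)) == len(str(q)) == 70
```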
The number RSA-140 is taken from the RSA Challenge list
(http://www.rsa.com/rsalabs/html/factoring.html).
This factorization was found using the Number Field Sieve (NFS) factoring
algorithm, and beats the 130-digit record that was set on April 10, 1996,
also with the help of NFS [Cetal].
The amount of computer time spent on this new 140-digit NFS-record is
prudently estimated to be equivalent to 2000 mips years.
For the old 130-digit NFS-record, this effort is estimated to be
1000 mips years.
For both numbers, lower "could-have-done-it-in" estimates,
based on a better use of the lattice siever, are:
500 mips years for RSA-130 and 1500 mips years for RSA-140.
For information about NFS, see [LL]. For additional information,
implementations and previous large NFS factorizations, see [DL, E1, E2, GLM].
We used the two polynomials
F_1(x,y) = 43 96820 82840 x^5
+ 39031 56785 38960 y *x^4
- 7387 32529 38929 94572 y^2*x^3
- 19 02715 324374 29887 14824 y^3*x^2
- 6 34410 25694 46461 79139 30613 y^4*x
+ 31855 39170 71474 35039 22235 07494 y^5
F_2(x,y) = x - 3 44356 57809 24253 69517 79007 y .
They were selected with the help of a new polynomial search method
developed by Peter Montgomery and Brian Murphy (The Australian
National University, Canberra).
The polynomial F_1(x,y) was chosen to have a good combination of
two properties; being unusually small over its sieving region and
having unusually many roots modulo small primes (and prime powers).
The effect of the second property alone gives F_1(x,y) a smoothness
yield comparable to that of a polynomial chosen at random for an
integer of 121 decimal digits.
The selection took 2000 CPU hours on four 250 MHz SGI Origin 2000 processors
at CWI. Calendar time for the polynomial selection was four weeks.
Sieving was done on about 125 SGI and Sun workstations running at 175 MHz
on average, and on about 60 PCs running at 300 MHz on average.
The total amount of CPU-time spent on sieving was 8.9 CPU years.
For the purpose of comparison, two sieving methods were used:
lattice sieving and line sieving.
Lattice sieving was introduced by Pollard [P] and the code used is based
on the implementation described in [GLM, Cetal].
For the lattice sieving, a rational factor base of 250 000 elements
(the primes <= 3 497 867) and an algebraic factor base of 800 000 elements
(ideals of norm <= 12 174 433) were chosen.
For the line sieving, different factor base bounds were chosen, namely:
a rational factor base consisting of the primes < 8 000 000 and an
algebraic factor base with the primes < 16 777 215 = 2^24 - 1.
For both sieves the large prime bounds were: 500 000 000 for the rational
primes and 1 000 000 000 for the algebraic primes.
A total of 66 933 395 relations were generated, 55% of them with lattice
sieving (L), 45% with line sieving (C). Among them, there were 10 327 897
duplicates, partially because of the simultaneous use of the two sievers.
Sieving was done at five different locations with the following contributions:
36.8 % Peter L. Montgomery, Stefania Cavallar, Herman J.J. te Riele,
Walter M. Lioen (C,L at CWI, Amsterdam, The Netherlands)
28.8 % Paul C Leyland (L at Microsoft Research Ltd, Cambridge, UK)
26.6 % Bruce Dodson (C,L at Lehigh University, Bethlehem, PA, USA)
5.4 % Paul Zimmermann (L at Inria Lorraine and Loria, Nancy, France)
2.5 % Arjen K. Lenstra (L at Citibank, Parsippany, NJ, USA, and
at the University of Sydney, Australia)
Sieving started the day before Christmas 1998 and was completed one month
later. The relations were collected at CWI and required 3.7 Gbytes of memory.
The filtering of the data and the building of the matrix were carried out
at CWI and took one calendar week.
The resulting matrix had 4 671 181 rows and 4 704 451 columns,
and weight 151 141 999 (32.36 nonzeros per row). With the help of
Peter Montgomery's Cray implementation of the blocked Lanczos algorithm
(cf. [M95]) it took almost 100 CPU hours and 810 Mbytes of central memory
on the Cray C916 at the SARA Amsterdam Academic Computer Center to find 64
dependencies among the rows of this matrix.
Calendar time for this job was five days.
During February 1-2, 1999, four different square root (cf. [M93]) jobs were
started in parallel on four different 250 MHz processors of CWI's SGI Origin
2000, each handling one dependency. After 14.2 CPU hours, one of the four jobs
stopped, giving the two prime factors of RSA-140. Two others also finished
with the two prime factors after 19 CPU hours (due to different input
parameter choices). Only one of the four jobs ended with the trivial
factors.
Herman te Riele, CWI, February 3, 1999
with Stefania Cavallar
Bruce Dodson
Arjen Lenstra
Paul Leyland
Walter Lioen
Peter Montgomery
Brian Murphy
Paul Zimmermann
Acknowledgements are due to the contributors, and to the Dutch National
Computing Facilities Foundation (NCF) for the use of the Cray-C916
supercomputer at SARA.
[Cetal] James Cowie, Bruce Dodson, R.-Marije Elkenbracht-Huizing,
Arjen K. Lenstra, Peter L. Montgomery and Joerg Zayer,
A world wide number field sieve factoring record: on to 512 bits,
pp. 382-394 in: Kwangjo Kim and Tsutomu Matsumoto (editors),
Advances in Cryptology - Asiacrypt '96, Lecture Notes in
Computer Science # 1163, Springer-Verlag, Berlin, 1996.
[DL] B. Dodson, A.K. Lenstra, NFS with four large primes: an
explosive experiment, Proceedings Crypto 95, Lecture Notes
in Comput. Sci. 963, (1995) 372-385.
[E1] R.-M. Elkenbracht-Huizing, Factoring integers with the
Number Field Sieve, Doctor's Thesis, Leiden University,
1997.
[E2] R.-M. Elkenbracht-Huizing, An implementation of the number
field sieve, Exp. Math. 5, (1996) 231-253.
[GLM] R. Golliver, A.K. Lenstra, K.S. McCurley, Lattice sieving
and trial division, Algorithmic number theory symposium,
Proceedings, Lecture Notes in Comput. Sci. 877, (1994) 18-27.
[LL] A.K. Lenstra, H.W. Lenstra, Jr., The development of the
number field sieve, Lecture Notes in Math. 1554, Springer-
Verlag, Berlin, 1993.
[M93] Peter L. Montgomery, Square roots of products of algebraic
numbers, in Proceedings of Symposia in Applied Mathematics,
Mathematics of Computation 1943-1993, Vancouver, 1993,
Walter Gautschi, ed.
[M95] Peter L. Montgomery, A block Lanczos algorithm for finding
dependencies over GF(2), Proceedings Eurocrypt 1995,
Lecture Notes in Comput. Sci. 921, (1995) 106-120.
[P] J.M. Pollard, The lattice sieve, pages 43-49 in [LL].
Appendix
========
p = 3398717423028438554530123627613875835633986495969597423490929302771479
p - 1 = 2 * 7 * 7649 * 435653 * 396004811
        * 183967535370446691250943879126698812223588425357931
p + 1 = 2^3 * 3^2 * 5 * 13 * 8429851 * 33996935324034876299
        * 2534017077123864320746970114544624627539
q = 6264200187401285096151654948264442219302037178623509019111660653946049
q - 1 = 2^6 * 61 * 135613 * 3159671789
        * 3744661133861411144034292857028083085348933344798791
q + 1 = 2 * 3 * 5^2 * 389 * 6781 * 982954918150967
        * 16106360796654291745007358391328807590779968869
------------------------------
From: [EMAIL PROTECTED] ()
Subject: Re: David _R._ Scott
Date: 4 Feb 99 14:22:27 GMT
[EMAIL PROTECTED] wrote:
: You greatly edited my response; I assumed better of you.
I didn't cancel your original post; whatever I didn't repeat is still
there to be seen.
I edited out your next two sentences here. I didn't feel I could reply to
them in an adequate fashion.
John Savard
------------------------------
From: Paul Gover <[EMAIL PROTECTED]>
Subject: Re: Cipher used by iomega in ZIP products ?
Date: Thu, 04 Feb 1999 15:04:37 +0000
I thought it was ROT-13, with two rounds for extra security :-)
------------------------------
From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Random numbers generator and Pentium III
Date: 2 Feb 1999 08:59:00 -0500
In article <[EMAIL PROTECTED]>,
fungus <[EMAIL PROTECTED]> wrote:
>
>
>"Trevor Jackson, III" wrote:
>>
>> R. Knauer wrote:
>>
>> > What statistical test? There is none.
>>
>> You keep repeating this same falsehood. Please get it straight. The fact
>> that there is no statistical test that is sufficient to qualify a sequence
>> as random DOES NOT mean that a sequence can be qualified as random without
>> passing the necessary statistical tests.
>>
>
>So if a finite number fails your test, does that mean it isn't random?
Out of cheese error -- Invalid domain -- redo from start. 8-)
The sequence 000 is random. I know, because I just generated it by
flipping coins (three successive tails).
The sequence 000 is not random. I know because I just wrote lots of
zeros.
What's the difference? The *METHOD OF GENERATION*.
-kitten
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Sanity check on authentication protocol
Date: Thu, 04 Feb 1999 14:55:22 GMT
In article <[EMAIL PROTECTED]>,
Eric Norman <[EMAIL PROTECTED]> wrote:
> Paul Onions wrote:
> >
> > On Thu, 28 Jan 1999 19:58:59 -0600, Eric Norman
> > <[EMAIL PROTECTED]> wrote:
> > >
> > >Since Alice and Bob share a secret, don't you get authentication
> > >for free? That is, Alice encrypts a message with the secret and
> > >sends it to Bob. Bob now knows that the message came from Alice
> > >since she's the only one who could have encrypted it (this
> > >assumes that Bob can recognize a "valid message").
> >
> > You don't get authentication for free. The final parenthesised
> > comment hits the nail on the head. It's not always possible for Bob
> > to recognise a valid message. For example if Alice is just sending an
> > encrypted nonce (single-use random value) then Bob has no means of
> > verifying (upon decryption) that the value he computes is indeed the
> > one that Alice sent. That's why it's always good practice to use MACs
> > (as was pointed out by Antti).
>
> The point I was trying to make is that you don't need any fancy
> back and forth protocol to get authentication when Alice and Bob
> share a secret. Just one message from Alice to Bob will do as long
> as the message is carefully constructed and the mechanism for such
> construction is known by both. Details about "careful construction"
> aren't specified, but additional protocol exchanges seem unnecessary.
>
> --
> Eric Norman
>
> "Congress shall make no law restricting the size of integers
> that may be multiplied together, or the number of times that
> an integer may be multiplied by itself, or the modulus by
> which an integer may be reduced".
>
Sorry Eric, but Congress does not have to follow common sense. They
can even define the value of PI and make it a felony to use
different values.
David Scott.
http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm
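Paul's point above — that Bob must be able to recognize a valid message — is exactly what a MAC supplies. A minimal sketch of the one-message scheme Eric describes, with an HMAC tag appended so Bob can verify origin and integrity even for an opaque payload such as a nonce (Python's hmac module used for illustration; the key and messages are made-up examples):

```python
import hashlib
import hmac

SECRET = b"shared-secret-known-to-alice-and-bob"  # hypothetical shared key

def alice_send(payload: bytes) -> bytes:
    # Append an HMAC tag so Bob can verify both origin and integrity,
    # even when the payload itself is unrecognizable (e.g. a nonce).
    tag = hmac.new(SECRET, payload, hashlib.sha1).digest()
    return payload + tag

def bob_receive(message: bytes) -> bytes:
    payload, tag = message[:-20], message[-20:]  # SHA-1 tags are 20 bytes
    expected = hmac.new(SECRET, payload, hashlib.sha1).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed: not from Alice, or tampered")
    return payload

nonce = b"\x13\x37" * 8  # an opaque single-use value
assert bob_receive(alice_send(nonce)) == nonce
```

This is only the "careful construction" part; a real protocol would still need freshness (e.g. a counter or nonce inside the payload) to resist replay.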
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On a Method of Session Key Generation (revised)
Date: Thu, 04 Feb 1999 16:53:15 +0100
R. Knauer wrote:
>
> On Thu, 04 Feb 1999 01:33:28 GMT, [EMAIL PROTECTED]
> (John Savard) wrote:
>
> >I'm proposing that one uses message text as a source of randomness, as
> >you suggest, to manipulate identical such files at both ends. But
> >since you've said you're only storing hashes of the messages, this is
> >essentially no different from what you're proposing.
>
> This method of key generation was proposed by me over a year ago, and
> undoubtedly hundreds of times before that. At that time the experts on
> sci.crypt claimed that text ciphers cannot be made secure because even
> if you used the least significant bits (LSBs) of text characters,
> there is no adequate method to remove correlations.
>
> The matter was re-opened recently (by Mr. Shen) and this time around
> several people have recommended hash and compression techniques to
> remove correlation. Some have suggested a CRC hash, and one person has
> suggested the LZ77 compression algorithm.
>
> When pressed for their rationale for selecting these various methods,
> people seem to shy away from providing the reasons, other than the
> vague entropic concept of "distilling randomness", which I find
> inadequate.
>
> I suppose the lack of solid reasons is either due to one of two
> things, which I cannot decide at this time:
>
> 1) There are no solid reasons for recommending various methods for
> removing correlations - there is only expert opinion based on
> intuition. The problem with that is that the crypto highway is
> littered with the corpses of "experts" who used their intuition to
> design ciphers. If some method is not provably secure, in formal
> terms (like the OTP) or in terms of work effort, then its use is just
> like a crap shoot.
>
> 2) There cannot be any reasons, since correlation cannot be removed
> algorithmically. Once the cryptanalyst discovers your method you are
> no better off than before you used it.
>
> Perhaps you can offer some comments to resolve these issues. Why do
> you believe that a hash is adequate to remove the correlations
> inherent in text, even taking the LSB as your source of
> pseudo-randomness? Keep in mind the famous dictum that one cannot
> generate crypto-grade random numbers algorithmically.
I have the impression that you are opening a new track in the
discussion by asking how well a hashing algorithm can (perfectly)
eliminate the correlations. Let me say first of all that my
proposal does not have the ambition of achieving provable perfect
security, which I personally consider not to be (practically)
realizable in this imperfect world, even with much more
sophisticated schemes than mine. The hashing in the proposal aims
at a relatively modest goal: to ensure that the session keys
generated from the master key are different and at the same time
offer a (certainly not perfect) level of protection against the
analyst.
While, as said above, I don't think this thread is the right place
to discuss good or better ways of hashing, I would like to comment
a bit on what you said, in the hope that it might lead you to
consider initiating a separate thread on hashing in which more
people interested in that topic (including myself) could
participate. Globally, I think that in applied (as opposed to
theoretical) cryptography one more often than not has no rigorous
formal proofs. One has to resort to experiments guided by
intuition. It is much like what engineers do rather than what pure
mathematicians do. So one can, for example, design hardware to
produce random bits, use diverse methods to reduce bias, and use
tests to show that these goals are indeed sufficiently well
achieved. But the best one can get is an (intuitively) best
approximation (within the state of the art) of the ideal OTP.
There can be no rigorous proof that one has ever reached the
ideal OTP as long as one can't practically measure the Kolmogorov
complexity or the like. On the other hand, if one does not
pedantically negate the value of intuition, then one can think
of diverse methods of obtaining sequences of reduced (not totally
eliminated) auto-correlation and do experiments to test whether
these methods sufficiently well meet one's expectations. (Taking
the parity of, say, 32 bits is one such method.) In striving to
approach perfection one should also not forget that more often
than not there are essential constraints on what one can do, not
necessarily of a scientific or technical nature, e.g. time and
human resources.
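The parity method can be quantified: by the piling-up lemma, the XOR of k independent bits that each have bias d (i.e. P(1) = 1/2 + d) has bias -(-2d)^k / 2, of magnitude 2^(k-1) * |d|^k, so 32 heavily biased bits already yield an almost perfectly balanced output bit. A sketch, with an arbitrary example bias:

```python
def parity_bias(d, k):
    # Piling-up lemma: the XOR (parity) of k independent bits, each
    # with bias d (i.e. P(1) = 1/2 + d), has bias -(-2*d)**k / 2.
    return -(-2 * d) ** k / 2

d = 0.1  # an arbitrary, heavily biased example source: P(1) = 0.6
for k in (1, 8, 32):
    print(k, abs(parity_bias(d, k)))
```

Note this assumes the input bits are independent; auto-correlated bits reduce the benefit, which is why the reduction is an approximation rather than a proof.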
Well, I am afraid I have said too much here. I am certainly very
interested in discussing the topic further with you, if you would
like to open a new thread so that we could concentrate entirely
on it there.
M. K. Shen
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************