Cryptography-Digest Digest #115, Volume #10      Thu, 26 Aug 99 17:13:03 EDT

Contents:
  Re: 512 bit number factored (Boudewijn W. Ch. Visser)
  SHA-1 test vectors (Tomas)
  Re: How Easy Can Terrorists Get Strong Encrypt? (JPeschel)
  Re: where to get CCITT X.509? (Jean-Jacques Quisquater)
  Re: NEW THREAD on compression (Mok-Kong Shen)
  Re: NEW THREAD on compression (Mok-Kong Shen)
  Re: SHA-1 test vectors (James Pate Williams, Jr.)
  Re: 512 bit number factored (Helger Lipmaa)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Boudewijn W. Ch. Visser)
Subject: Re: 512 bit number factored
Date: 26 Aug 1999 19:15:54 GMT

Anton Stiglic <[EMAIL PROTECTED]> writes:

[RSA challenge, 155 TBA]

>155 digits is around 514 bits.
>Would the result you mention be the "To Be Announced" (TBA) entry of
>the RSA status report?

I guess so. On the CWI homepage there is a link called 'RSA-155 factored',
which links to the short announcement that I posted in translation.
(http://www.cwi.nl/ )

Boudewijn
-- 
+--------------------------------------------------------------+
|Boudewijn Visser        | E-mail:[EMAIL PROTECTED]      |           
| -finger for PGP-keys.- | http://www.ph.tn.tudelft.nl/~visser |
+-- my own opinions etc ---------------------------------------+

------------------------------

From: [EMAIL PROTECTED] (Tomas)
Subject: SHA-1 test vectors
Date: 26 Aug 1999 19:04:35 GMT


Where could I get some SHA-1 test vectors?

Tomas

------------------------------

From: [EMAIL PROTECTED] (JPeschel)
Subject: Re: How Easy Can Terrorists Get Strong Encrypt?
Date: 26 Aug 1999 18:44:04 GMT

<[EMAIL PROTECTED]> writes:

>I just got a Code Composer release from TI.  In the "End-User
>License Agreement" it says:
>6 - Export Control - The re-export of US original software and
>documentation is subject to the Export Administration Act of
>1969 as amended.  Compliance with such regulations is your
>responsibility.  You agree that neither you nor your customers
>intend to or will, directly or indirectly, export or transmit
>the Licensed Materials or related documentation or technical
>data to any country to which such export or transmission is
>restricted by any applicable US regulations or statute, without
>the proper written consent, if required of the BXA (fully
>spelled out), or such other government entity as may have
>jurisdiction over such export or transmission.
>
>There's a compiler for one type of processor (the C3x series)
>somewhere on the CD.  I wouldn't be surprised if they (TI)
>have shipped the CD all over the world and just change the
>boilerplate for the country it's delivered in.
>But it's clear TI thinks there are controls on compilers
>and other software.

Mike, this agreement, found on many software packages, only
says that the consumer may need an export license to
export to certain countries. The company, for instance
Microsoft or Borland, needs an export license to export
to other countries, but that doesn't stop them from selling
compilers throughout the world, as John suggested.

Joe



__________________________________________

Joe Peschel 
D.O.E. SysWorks                                 
http://members.aol.com/jpeschel/index.htm
__________________________________________


------------------------------

From: Jean-Jacques Quisquater <[EMAIL PROTECTED]>
Subject: Re: where to get CCITT X.509?
Date: Thu, 26 Aug 1999 21:15:43 +0200

Are you sure you want to use it?

See pkix and spki at www.ietf.org and be happy
(the first one is what you want but the second one is what you need :-).

See also ftp://ftp.bull.com/pub/OSIdirectory/ITUnov96/

Regards,

Jean-Jacques Quisquater,

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: NEW THREAD on compression
Date: Thu, 26 Aug 1999 21:42:45 +0200

SCOTT19U.ZIP_GUY wrote:
> 
>        Don't add anything else yet. I agree one extra symbol is small.
> I agree the scheme will allow any file to be compressed and then
> uncompressed back again to the original file. But it is not "one to one".
> To be "one to one" and useful as a first pass before encryption it should,
> in my view, have the following properties. If any file A compresses to a
> file B, then uncompressing B should get A back (yours meets this part, as
> do most methods). The next part is harder to meet: any file C, when
> uncompressed to a file D, should give back C when D is compressed again
> (your scheme does not meet this). The counterexample is to pretend that
> the file made of only the two bytes 00000000 00000000 is a compressed
> file. How would you uncompress this file with your method? SHOW ME, use
> any tree you want.
>   Now you may ask how such a file even gets to be a candidate for
> uncompression. This is how. You send a message to your friend. First you
> compress the message, then you encrypt it. Your friend gets the message,
> decrypts it, and then decompresses the result to get the original
> message. Your enemy also intercepts the encrypted message during the
> transmission. When he guesses a (wrong) key, let's pretend he gets at the
> front of the file 00000000 00000000 .... Now he tries to decompress it.
> He can't, because this file cannot possibly be a valid compressed file
> under your scheme, so the enemy is immediately given information as to
> why his guess of the key is wrong. This may lead to a greater reduction
> than even the blind testing of keys. The attacker may even have the math
> smarts to eliminate whole classes of keys based on the fact that they
> would lead to impossible compressed files. It would be far safer for you
> if every file that an attacker got with a wrong key were a real
> compressed file. That way the attacker has to uncompress the file and
> then decide, by examining the next output phase, whether or not the file
> had meaning. This is more than a few extra steps. If he gets an
> impossible file he KNOWS he has the wrong key. If he gets something
> strange he does not know right away that it is wrong. He has to rule out
> other languages, other types of files, or even the possibility that these
> are special files, such as just a binary of your secret keys that your
> friend needed to decrypt some stuff for some other program. The point is
> that every file the attacker can get with a false key should send him
> farther down the wrong trail. Don't give him the data to know that a path
> is false. Make the bastards work to find out that he is on a false trail.

I think that I now correctly understand (after the more recent exchange
of posts) your 'one to one' property, and also that the purpose of your
deletion of a sequence of 1's on compression (with the analogous
addition of 1's on decompression) is not to save one byte but to ensure
that a valid Huffman code can always be found when, at the end of the
decompression process, the buffer is not empty (because the 'compressed'
file was obtained by the analyst through decryption with a wrong key).
But if you look closely, you'll see that a similar feature is provided
by me for such cases; I use 0's instead of 1's. The 'real' (though not
very big) difference is that I have tried to avoid the kind of rules for
handling certain special cases at the end of processing that you
employed, as described in a previous post of yours.

In the course of our discussion I have modified my convention (3) a
little; it now reads:

(3) After (2) is done, any number of trailing bytes that contain
    all 0's may be deleted or added according to the user's choice,
    or else randomly decided upon by the program.

That this convention successfully deals with the buffer problem
mentioned above should be clear. Now let's consider your file C
problem. As said in my previous post, file C, consisting of two bytes
of 0's, decompresses to an empty file, which we name D. On compressing
D, the Huffman scheme of course produces nothing. But (3) says one can
add any number of zero bytes to the output. A particular case is adding
2 bytes, which means we can get back to C (more exactly, C is one of
the possible outcomes of compressing D, my compression being
'non-deterministic' due to (3)). Since the analyst can't know what "the
user's choice" in (3) was in any compression run, he can't rule out
that this getting back to file C is 'normal'; i.e. he gets no clue from
decompressing and compressing again in the present case. (Let me remark
in addition that if file C is the result of decrypting a file X, then
only a single one of the normally huge number of possible keys could
produce such an extraordinary file consisting of all 0's, so it isn't
very crucial after all, even if he were able to ascertain that that one
key is not valid.)
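
To make these conventions concrete, here is a minimal sketch in Python
(my own illustration with an assumed toy code table, not Mr. Shen's
actual program): convention (1) is modelled by a prefix code in which
no codeword consists entirely of 0's, (2) by the byte-boundary padding,
and (3) by the free choice of trailing zero bytes.

    # Toy prefix code obeying convention (1): no codeword is all 0's.
    CODE = {'a': '1', 'b': '01', 'c': '001'}          # assumed table
    DECODE = {v: k for k, v in CODE.items()}

    def compress(text, trailing_zero_bytes=0):
        bits = ''.join(CODE[ch] for ch in text)
        bits += '0' * (-len(bits) % 8)        # (2) pad to byte boundary
        while bits.endswith('0' * 8):         # (3) strip zero bytes ...
            bits = bits[:-8]
        return bits + '0' * 8 * trailing_zero_bytes  # ... or add some

    def decompress(bits):
        out, cur = [], ''
        for b in bits:
            cur += b
            if cur in DECODE:
                out.append(DECODE[cur])
                cur = ''
        assert set(cur) <= {'0'}     # leftovers can only be 0-padding
        return ''.join(out)

    assert decompress(compress('abcab', 2)) == 'abcab'
    # Mr. Scott's file C: two zero bytes decompress to the empty file
    # D, and compressing D with two added zero bytes gives C back.
    assert decompress('0' * 16) == ''
    assert compress('', trailing_zero_bytes=2) == '0' * 16

The last two assertions are exactly the file C round trip discussed
above: the analyst cannot tell a 'normal' compression from one in which
trailing zero bytes were added by choice.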

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: NEW THREAD on compression
Date: Thu, 26 Aug 1999 21:43:40 +0200

[EMAIL PROTECTED] wrote:
> 
> Mok-Kong Shen ([EMAIL PROTECTED]) wrote:
> : I think that
> : the following scheme is simpler (hence somewhat easier to program)
> : and yet achieves all the purposes of yours. It has the following
> : conventions:
> 
> : (1) No input symbol has a Huffman code of all zeros (any number).
> 
> This dangerously puts redundancy in the compressed text, making life
> easier for the cryptanalyst. I can't recommend such a technique.

Sorry, I don't understand what you wrote. What redundancy is being
introduced by this convention of mine? (It simply leaves one single
Huffman code unused.) Would you please elaborate a little?

> 
> : (2) If the last output bit is not at byte boundary, add 0's
> :     till byte boundary.
> 
> : (3) After (2) is done, delete all trailing bytes (any number) that
> :     contain all 0's.
> 
> If the goal is to ensure that the input file is exactly decompressed,
> without adding a spurious byte at the end, the conventional technique is
> to prefix the compressed file with the number of bytes in the file before
> compression. Once you have decompressed the correct number of bytes, stop.

The goal is to realize a property which Mr. Scott terms 'one to one'.
See my most recent follow-up in answer to him.

> 
> To introduce even less redundancy into the file, simply begin the
> compressed message with _three bits_ indicating the number of unused bits
> in the last byte of the file.
> 
> Or, to avoid having to store the whole compressed message, put those three
> bits at the start of the second-last byte in the compressed file.

I don't yet see your redundancy argument. My convention (1) enables
me to output the compressed bit sequence without adding any length
information at all (only 0-padding is used to obtain whole bytes,
which is normally required in practice). Convention (3) even allows
me to shorten the output of the Huffman algorithm under some
favorable circumstances. So I believe there is less redundancy, if
any, in my scheme compared to yours.
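
For comparison, here is a minimal sketch of the three-bit convention
quoted above (my own illustration, not code from either poster): the
header always costs three bits, whereas convention (1) pays by leaving
a single codeword unused.

    # A 3-bit header records how many bits of the final byte are
    # padding; no length prefix and no reserved codeword needed.
    def pack(bits):
        unused = -(len(bits) + 3) % 8     # padding after header + data
        return format(unused, '03b') + bits + '0' * unused

    def unpack(packed):
        unused = int(packed[:3], 2)
        return packed[3:len(packed) - unused]

    assert unpack(pack('101001101')) == '101001101'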

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (James Pate Williams, Jr.)
Subject: Re: SHA-1 test vectors
Date: Thu, 26 Aug 1999 20:31:31 GMT

On 26 Aug 1999 19:04:35 GMT, [EMAIL PROTECTED] (Tomas) wrote:

>Where could I get some SHA-1 test vectors?

_Handbook of Applied Cryptography_ by Alfred J. Menezes et al., Table
9.6 ("Test vectors for selected hash functions"), page 345. You might
be able to find this chapter on-line; do a web search for the title or
author.
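
For quick reference, the standard test vectors from FIPS 180-1 itself
can be checked against any implementation; a quick illustration in
Python (using the modern hashlib module):

    import hashlib

    # The three SHA-1 test vectors given in FIPS 180-1.
    vectors = {
        b"abc":
            "a9993e364706816aba3e25717850c26c9cd0d89d",
        b"abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq":
            "84983e441c3bd26ebaae4aa1f95129e5e54670f1",
        b"a" * 1000000:
            "34aa973cd4c4daa4f61eeb2bdbad27316534016f",
    }
    for msg, want in vectors.items():
        assert hashlib.sha1(msg).hexdigest() == want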

==Pate Williams==
[EMAIL PROTECTED]
http://www.mindspring.com/~pate


------------------------------

From: Helger Lipmaa <[EMAIL PROTECTED]>
Subject: Re: 512 bit number factored
Date: Thu, 26 Aug 1999 20:34:09 +0000

"Boudewijn W. Ch. Visser" wrote:

> See http://www.cwi.nl/cwi/Latest_News.html :
>
> quick translation:
> - Scientists at CWI (Center for Mathematics and Computer Science), led
> by Herman te Riele, on Sunday, August 22, factored a 512-bit number of
> the kind used for 95% of the keys that secure electronic commerce on
> the Internet.
> This proves, much sooner than expected at the start of E-commerce on
> the Internet, that the popular 512-bit key size is no longer secure.
> These 512-bit keys are used daily to protect large amounts of money.
>
> More information, and pictures of the press conference, will follow
> later today (26 August 1999). For more information, please call:
> (020) 5924248, or email: [EMAIL PROTECTED]

Some more bits of information follow.

Helger
http://home.cyber.ee/helger

Factorization of a 512-bit RSA key using the Number Field Sieve
===============================================================
On August 22, 1999, we found that the 512-bit number

RSA-155 =
1094173864157052742180970732204035761200373294544920599091384213147634\
9984288934784717997257891267332497625752899781833797076537244027146743\
531593354333897

can be written as the product of two 78-digit primes:

102639592829741105772054196573991675900716567808038066803341933521790711307779
*
106603488380168454820927220360012878679207958575989291522270608237193062808643

Primality of the factors was proved with the help of two different
primality proving codes. An Appendix gives the prime decompositions
of p +/- 1. The number RSA-155 is taken from the RSA Challenge list
(see http://www.rsa.com/rsalabs/html/factoring.html).
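
The claim is easy to check with any multiple-precision package; for
instance, in Python:

    # Multiply the two reported 78-digit primes back together.
    p = 102639592829741105772054196573991675900716567808038066803341933521790711307779
    q = 106603488380168454820927220360012878679207958575989291522270608237193062808643
    rsa155 = int(
        "1094173864157052742180970732204035761200373294544920599091384213147634"
        "9984288934784717997257891267332497625752899781833797076537244027146743"
        "531593354333897")
    assert p * q == rsa155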

This factorization was found using the Number Field Sieve (NFS)
factoring algorithm, and beats the 140-digit record RSA-140 that was
set on February 2, 1999, also with the help of NFS [RSA140].
The amount of computer time spent on this new factoring world record is
estimated to be equivalent to 8000 mips years. For the old 140-digit
NFS record, the effort was estimated to be 2000 mips years.
Extrapolation using the asymptotic complexity formula for NFS would
predict approximately 14000 mips years for RSA-155. The gain is caused
by an improved application of the polynomial search method used for
RSA-140.
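
The asymptotic formula in question is the standard NFS heuristic
L(n) = exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)). A rough
reproduction of the extrapolation in Python (my own back-of-the-
envelope, ignoring the o(1) term, not the announcement's exact
calculation):

    from math import exp, log

    def nfs_effort(bits):
        # Heuristic NFS cost, up to a constant factor (o(1) ignored).
        ln_n = bits * log(2)
        return exp((64 / 9) ** (1 / 3)
                   * ln_n ** (1 / 3)
                   * log(ln_n) ** (2 / 3))

    # Scale RSA-140's 2000 mips years (465 bits) up to 512 bits.
    print(2000 * nfs_effort(512) / nfs_effort(465))  # roughly 13000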

For information about NFS, see [LL]. For additional information,
implementations and previous large NFS factorizations, see [DL, E1, E2,
GLM].

Polynomial selection
====================
The following two polynomials

F_1(x,y) =                           11 93771 38320     x^5
                             - 80 16893 72849 97582 y  *x^4
                          - 66269 85223 41185 74445 y^2*x^3
                  + 1 18168 48430 07952 18803 56852 y^3*x^2
                + 745 96615 80071 78644 39197 43056 y^4*x
           - 40 67984 35423 62159 36191 37084 05064 y^5

F_2(x,y)  =  x - 3912 30797 21168 00077 13134 49081 y

were selected with the help of a polynomial search method developed by
Peter Montgomery (Microsoft Research, USA, and CWI) and Brian Murphy
(The Australian National University, Canberra), which was already
applied to RSA-140 and now, even more successfully, to RSA-155.
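
The defining NFS property is that the two polynomials share a common
root modulo N: F_2 gives m = 39123079721168000771313449081, and
F_1(m,1) should also vanish mod RSA-155. A quick check in Python, with
the coefficients transcribed from the display above (the grouping
spaces are digit separators; if I have transcribed them correctly, the
assertion holds):

    # Check that F_1 and F_2 share the root m modulo RSA-155.
    n = int(
        "1094173864157052742180970732204035761200373294544920599091384213147634"
        "9984288934784717997257891267332497625752899781833797076537244027146743"
        "531593354333897")
    m = 39123079721168000771313449081       # root of F_2(x,1)
    f1 = [119377138320,                     # coefficients, x^5 .. x^0
          -80168937284997582,
          -66269852234118574445,
          11816848430079521880356852,
          7459661580071786443919743056,
          -40679843542362159361913708405064]
    acc = 0
    for c in f1:                            # Horner evaluation mod n
        acc = (acc * m + c) % n
    assert acc == 0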

The polynomial F_1(x,y) was chosen to have a good combination of
two properties: being unusually small over its sieving region and
having unusually many roots modulo small primes (and prime powers).
The effect of the second property alone gives F_1(x,y) a smoothness
yield comparable to that of a polynomial chosen at random for an
integer of 137 decimal digits.
Measured in a different way: the pair (F_1, F_2) has a yield of
relations approximately 13.5 times that of a random polynomial
selection for RSA-155 (the corresponding figure for the polynomial
selected for the RSA-140 factorisation was 8).

The polynomial selection took approximately 100 MIPS years,
which is equivalent to 0.40 CPU years on a 250 MHz SGI Origin 2000
processor (most of the searches were done on such processors).

The original polynomial selection code was ported by Arjen Lenstra
to use his multiple precision arithmetic package LIP.
Brian Murphy, Peter Montgomery, Arjen Lenstra and Bruce Dodson
ran the polynomial searches for RSA-155 with this code. The above
polynomial emerged from Bruce Dodson's search.

Calendar time for the polynomial selection was approximately nine
weeks.

The Sieving
===========
Sieving was done on about 160 SGI and Sun workstations (175-400 MHz),
on eight 300 MHz SGI Origin 2000 processors, on about 120 Pentium II
PCs (300-450 MHz), and on four 500 MHz Digital/Compaq boxes.
The total amount of CPU time spent on sieving was 35.7 CPU years,
estimated to be equivalent to approximately 8000 mips years.
Calendar time for sieving was 3 1/2 months.

For the purpose of comparison, both lattice sieving and line sieving
were used. Lattice sieving was introduced by Pollard [P] and the code
used is based on the implementation described in [GLM, Cetal].

For the lattice sieve, a factor base bound of 16 777 216 (2^24) was
chosen, both for the rational and for the algebraic side. Two large
primes were allowed on both sides.
Most of the line sieving was carried out with two large primes on both
the rational and the algebraic side. The rational factor base consisted
of the primes < 44 000 000 and the algebraic factor base of the primes
< 110 000 000.
Some line sieving allowed three large primes instead of two on the
algebraic side. In that case the rational factor base consisted of the
primes < 8 000 000 and the algebraic factor base of the primes
< 25 000 000.

For both sieves the large prime bound 1 000 000 000 was used, both for
the rational and for the algebraic primes.

A total of 124 722 179 relations were generated, 71% of them with
lattice sieving (L) and 29% with line sieving (C). Among them were
39 187 441 duplicates, partly because of the simultaneous use of the
two sievers.

Sieving was done at eleven different locations with the following
contributions:

(L: using lattice sieving code from Arjen K. Lenstra
 C: using line sieving code from CWI)

20.1 % (3057 CPU days) Alec Muffett (L at Sun Microsystems Professional
                                     Services, Camberley, UK)
17.5 % (2092 CPU days) Paul Leyland (L,C at Microsoft, Cambridge, UK)
14.6 % (1819) Peter L. Montgomery, Stefania Cavallar
              (C,L at CWI, Amsterdam)
13.6 % (2222) Bruce Dodson (L,C at Lehigh University, Bethlehem, PA,
              USA)
13.0 % (1801) Francois Morain and Gerard Guillerm
              (L,C at Ecole Polytechnique, Palaiseau, France)
 6.4 % (576)  Joel Marchand
              (L,C at Ecole Polytechnique/CNRS, Palaiseau, France)
 5.0 % (737)  Arjen K. Lenstra (L at Citibank, Parsippany, NJ, USA,
              and Univ. of Sydney, Australia)
 4.5 % (252)  Paul Zimmermann
              (C at Inria Lorraine and Loria, Nancy, France)
 4.0 % (366)  Jeff Gilchrist
              (L at Entrust Technologies Ltd., Ottawa, Canada)
 0.65 % (62)  Karen Aardal (L at Utrecht University, The Netherlands)
 0.56 % (47)  Chris and Craig Putnam (L at ?)

Calendar time for the sieving was 3.7 months.
The relations were collected at CWI and required 3.7 Gbytes of disk
space.

Filtering and linear algebra
============================

The filtering of the data and the building of the matrix were carried
out at CWI and took one month.

The resulting matrix had 6 699 191 rows, 6 711 336 columns, and weight
417 132 631 (62.27 nonzeros per row).
With the help of Peter Montgomery's Cray implementation of the blocked
Lanczos algorithm (cf. [M95]) it took 224 CPU hours and 2 Gbytes of
central memory on the Cray C916 at the SARA Amsterdam Academic Computer
Center to find 64 dependencies among the rows of this matrix.
Calendar time for this job was 9 1/2 days.
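
A 'dependency' here is a nonempty set of rows that sums to the zero
vector over GF(2); block Lanczos finds such sets without ever storing
the matrix densely. On a toy scale, plain Gaussian elimination does the
same job. A sketch in Python (my own illustration, unrelated to the
Cray code):

    def gf2_dependency(rows):
        """rows: ints used as bitmasks over the columns. Returns a
        bitmask of row indices whose XOR is zero, or None."""
        basis = {}               # pivot bit -> (reduced row, members)
        for i, r in enumerate(rows):
            members = 1 << i     # which original rows are combined
            while r:
                pivot = r.bit_length() - 1
                if pivot not in basis:
                    basis[pivot] = (r, members)
                    break
                br, bm = basis[pivot]
                r, members = r ^ br, members ^ bm
            else:
                return members   # reduced to zero: a dependency
        return None

    # Rows 0, 1 and 2 XOR to zero:
    print(bin(gf2_dependency([0b1100, 0b0110, 0b1010, 0b0001])))  # 0b111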

Square root
===========

On August 20-21, 1999, four different square root (cf. [M93]) jobs were
started in parallel on four different 300 MHz processors of CWI's SGI
Origin 2000, each handling one dependency.
One job found the factorisation after 39.4 CPU hours; the other three
jobs found the trivial factorization after 38.3, 41.9, and 61.6 CPU
hours (the different CPU times are due to the use of different
parameters in the four jobs).
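
Each dependency yields a congruence x^2 = y^2 (mod n), and
gcd(x - y, n) then gives either a proper factor or only a trivial one,
which is why three of the four jobs came up empty. A toy illustration
with small numbers (my own, not the actual square root code):

    from math import gcd

    n = 91                     # 7 * 13, standing in for RSA-155
    x, y = 10, 3               # 10^2 = 100 = 9 = 3^2 (mod 91)
    assert (x * x - y * y) % n == 0
    print(gcd(x - y, n), gcd(x + y, n))    # 7 13 -- a proper split
    # Had x = +/- y (mod n) held instead, gcd would have returned
    # 1 or n: the "trivial factorization" mentioned above.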

Herman te Riele, CWI, August 26, 1999

with the algorithmic and sieving contributors:

        Stefania Cavallar
        Bruce Dodson
        Arjen Lenstra
        Walter Lioen
        Peter L. Montgomery
        Brian Murphy

and the sieving contributors:

        Karen Aardal
        Jeff Gilchrist
        Gerard Guillerm
        Paul Leyland
        Joel Marchand
        Francois Morain
        Alec Muffett
        Craig and Chris Putnam
        Paul Zimmermann

Acknowledgements are due to the contributors of idle PC and workstation
cycles, to the Dutch National Computing Facilities Foundation (NCF) for
the use of the Cray C916 supercomputer at SARA, and (in alphabetical
order) to

Centre Charles Hermite (Nancy, France),
Citibank (Parsippany, NJ, USA),
CWI (Amsterdam, The Netherlands),
Ecole Polytechnique/CNRS (Palaiseau, France),
Entrust Technologies Ltd. (Ottawa, Canada),
Lehigh University (Bethlehem, PA, USA),
the Magma Group of John Cannon at the University of Sydney,
the Medicis Center at Ecole Polytechnique (Palaiseau, France),
Microsoft Research (Cambridge, UK),
Sun Microsystems Professional Services (Camberley, UK), and
The Australian National University (Canberra, Australia)

for the use of their computing resources.

References:
===========

[RSA140] Stefania Cavallar, Bruce Dodson, Arjen Lenstra, Paul Leyland,
        Walter Lioen, Peter L. Montgomery, Brian Murphy, Herman te
        Riele, and Paul Zimmermann,
        Factorization of RSA-140 using the Number Field Sieve,
        to appear in: Lam Kwok Yan, Eiji Okamoto, and Xing Chaoping,
        editors, Advances in Cryptology -- Asiacrypt '99,
        Lecture Notes in Computer Science # xxxx,
        Springer-Verlag, Berlin, etc., 1999.

[Cetal] James Cowie, Bruce Dodson, R.-Marije Elkenbracht-Huizing,
        Arjen K. Lenstra, Peter L. Montgomery and Joerg Zayer,
        A world wide number field sieve factoring record: on to 512
        bits, pp. 382-394 in: Kwangjo Kim and Tsutomu Matsumoto
        (editors), Advances in Cryptology - Asiacrypt '96, Lecture
        Notes in Computer Science # 1163, Springer-Verlag, Berlin,
        1996.

[DL]    B. Dodson, A.K. Lenstra, NFS with four large primes: an
        explosive experiment, Proceedings Crypto 95, Lecture Notes
        in Comput. Sci. 963, (1995) 372-385.

[E1]    R.-M. Elkenbracht-Huizing, Factoring integers with the
        Number Field Sieve, Doctor's Thesis, Leiden University,
        1997.

[E2]    R.-M. Elkenbracht-Huizing, An implementation of the number
        field sieve, Exp. Math. 5, (1996) 231-253.

[GLM]   R. Golliver, A.K. Lenstra, K.S. McCurley, Lattice sieving
        and trial division, Algorithmic number theory symposium,
        Proceedings, Lecture Notes in Comput. Sci. 877, (1994) 18-27.

[LL]    A.K. Lenstra, H.W. Lenstra, Jr., The development of the
        number field sieve, Lecture Notes in Math. 1554, Springer-
        Verlag, Berlin, 1993.

[M93]   Peter L. Montgomery, Square roots of products of algebraic
        numbers, in Proceedings of Symposia in Applied Mathematics,
        Mathematics of Computation 1943-1993, Vancouver, 1993,
        Walter Gautschi, ed.

[M95]   Peter L. Montgomery, A block Lanczos algorithm for finding
        dependencies over GF(2), Proceedings Eurocrypt 1995,
        Lecture Notes in Comput. Sci. 921, (1995) 106-120.

[P]     J.M. Pollard, The lattice sieve, pages 43-49 in [LL].

Appendix
========

Here are the p +/- 1 factorizations of the two prime factors:

p (p78) =
102639592829741105772054196573991675900716567808038066803341933521790711307779

p - 1 = 2 . 607 . 305999 .
        276297036357806107796483997979900139708537040550885894355659143575473

p + 1 = 2 . 2 . 3 . 5 . 5253077241827 .
        325649100849833342436871870477394634879398067295372095291531269

q (p78) =
106603488380168454820927220360012878679207958575989291522270608237193062808643

q - 1 = 2 . 241 . 430028152261281581326171 .
        514312985943800777534375166399250129284222855975011

q + 1 = 2 . 2 . 3 . 130637011 . 237126941204057 .
        1020024215529891787179728114641748343531603533667478173
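
These decompositions can be checked mechanically as well; e.g. for
p - 1, with the digits as transcribed above:

    # Verify the Appendix decomposition of p - 1.
    p   = 102639592829741105772054196573991675900716567808038066803341933521790711307779
    big = 276297036357806107796483997979900139708537040550885894355659143575473
    assert p - 1 == 2 * 607 * 305999 * big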



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
