Cryptography-Digest Digest #143, Volume #12      Fri, 30 Jun 00 21:13:01 EDT

Contents:
  Re: Comments/analysis requested (Samuel Paik)
  Re: very large primes (stanislav shalunov)
  Re: TEA question (Shawn Willden)
  Re: Diffie Hellman Primes : Speed Tradeoff Q (Bob Silverman)
  Re: Encryption and IBM's 12 teraflop MPC...... ("Douglas A. Gwyn")
  Re: Encryption and IBM's 12 teraflop MPC...... ("Douglas A. Gwyn")
  Re: Newbie question about factoring ([EMAIL PROTECTED])
  Re: Encryption and IBM's 12 teraflop MPC...... (Bill Unruh)
  Re: Encryption and IBM's 12 teraflop MPC...... ([EMAIL PROTECTED])
  Re: Encryption and IBM's 12 teraflop MPC...... (Tom McCune)
  Re: Newbie question about factoring (Dido Sevilla)
  Re: An encryption protocol, version 2 (Dido Sevilla)
  Re: Sellotape and scotch tape (John M. Gamble)

----------------------------------------------------------------------------

From: Samuel Paik <[EMAIL PROTECTED]>
Subject: Re: Comments/analysis requested
Date: Fri, 30 Jun 2000 21:58:13 GMT

In article <8j7lmq$9jh$[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
> Thanks for the response.  I have a couple of questions, though.

I'm not David Wagner, nor do I pretend to be a crypto expert, but...

> > First, note that the least significant bit (LSB) of each of the 16
> > words of the ciphertext depend only on the LSB of each of the
> > 16 words of the plaintext.
>
> How is the LSB only dependent on the LSB of each of the 16 words
> of the plain text?  When you add the keyw before writing to the
> text, wouldn't that make it dependent on the keyw?

I think what David meant was that the LSBs of the ciphertext do not
depend on any bits of the plaintext other than the LSBs.

> > Second, note that this dependency is in fact affine.
> > (The only non-linearity in the above is the carry bits, but
> > there are no carry bits coming into the LSB.)  Consequently,
> > given a few dozen known texts, we can find this affine
> > transformation, using Gaussian elimination.
>
> Unfortunately, I am not usually able to get more than 1 known text
> for each password.  There are, however, always 64 0xB0 at the end.

Each block of 64 bytes is a known plaintext.

> Do you know of a website that has info on these techniques, such as
> Gaussian elimination?

I think the idea David is referring to is this:

Take the LSBs of the plaintexts and ciphertexts and form
vectors (we'll use just 4 bits to simplify matters...)

   ciphertext     plaintext
     | 0 |          | 1 |
     | 1 |          | 0 |
     | 0 |          | 0 |
     | 1 |          | 1 |

A swap between elements from plaintext to ciphertext can be
represented as a (boolean) matrix

   ciphertext    swap     plaintext
     | 0 |    | 0 1 0 0 |   | 1 |
     | 1 |    | 1 0 0 0 |   | 0 |
     | 0 | =  | 0 0 1 0 | X | 0 |
     | 1 |    | 0 0 0 1 |   | 1 |

Two swaps would be two matrices multiplied together, and so forth.
This specific kind of matrix, consisting only of 0s and 1s with
exactly one 1 per row and one 1 per column, represents a permutation,
a reordering of the plaintext vector, and reading out the permutation
from the matrix is straightforward.  Now, given an unknown
permutation and known ciphertexts and plaintexts, you can combine
the vectors...

   ciphertexts        permutation     plaintexts
   0 1 2 3 4 5 6                      0 1 2 3 4 5 6

 | 0 1 0 1 1 1 1 |    | ? ? ? ? |   | 1 1 1 0 0 1 1 |
 | 1 1 1 0 0 1 1 |    | ? ? ? ? |   | 0 0 0 1 1 0 0 |
 | 0 0 0 1 1 0 0 | =  | ? ? ? ? | X | 1 1 0 1 0 1 0 |
 | 1 1 0 1 0 1 0 |    | ? ? ? ? |   | 0 1 0 1 1 1 1 |

This example is simple enough to solve by inspection, but more
generally, Gaussian elimination is an old and widely used
technique for solving systems of linear equations.

How does this apply to the encryption?  Let's represent a 0 LSB
in the ciphertext or plaintext as a -1 and a 1 LSB in the texts
as 1.

  ciphertext    swap & invert   plaintext
    | -1 |     |  0  0  0 -1 |   |  1 |
    | -1 |     |  0  1  0  0 |   | -1 |
    | -1 |  =  |  0  0  1  0 | X | -1 |
    | -1 |     | -1  0  0  0 |   |  1 |

Both the permutation and inversions should be obvious from the
transform matrix here.
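For concreteness, here is how Gaussian elimination over GF(2) recovers
the unknown matrix mechanically; a minimal Python sketch, using the 0/1
data from the 7-known-text example above (XOR plays the role of
subtraction, and the solver is generic, not specific to permutations):

```python
def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination.
    A is a list of rows of 0/1 values; b is a list of 0/1 bits.
    XOR plays the role of both addition and subtraction."""
    rows = [row[:] + [bit] for row, bit in zip(A, b)]  # augmented matrix
    ncols = len(A[0])
    pivots = []
    r = 0
    for col in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    x = [0] * ncols
    for i, col in enumerate(pivots):
        x[col] = rows[i][ncols]
    return x

# LSB rows from the example above (4 bit positions x 7 known texts).
P = [[1, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 0, 1, 1, 1, 1]]
C = [[0, 1, 0, 1, 1, 1, 1],
     [1, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0]]

# Each known text gives one equation per matrix row: solve P^T m_i = c_i.
Pt = [list(col) for col in zip(*P)]
M = [solve_gf2(Pt, c_row) for c_row in C]
# M comes out as the swap matrix: its rows select plaintext bits 3, 0, 1, 2.
```

With real ciphertext this would be a 16x16 matrix and a few dozen known
texts instead of 4x4 and 7, but the elimination is identical.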

Sam


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: stanislav shalunov <[EMAIL PROTECTED]>
Subject: Re: very large primes
Date: 30 Jun 2000 17:41:44 -0400

Dennis Ritchie <[EMAIL PROTECTED]> writes:

> I no longer remember the precise results and their consequences,
> but it is probably more like: for every recursively enumerable set,
> there is a polynomial f(a; x1, x2, ..., xn) where n is fairly small
> (10 or 20 or so) whose coefficients are integers, such that there is a
> solution in integers for f()=0 if and only if a<0 or a is a member
> of the r.e. set.

The known result I remember is for 26 variables.  I also remember a
remark that it could be improved.

------------------------------

From: Shawn Willden <[EMAIL PROTECTED]>
Subject: Re: TEA question
Date: Fri, 30 Jun 2000 16:11:27 -0600

[EMAIL PROTECTED] wrote:

> > There's no problem.  Just use a long instead of an int to store this
> > number.  It will be slower, but it can be made to work.
>
> delta=0x9E3779B9   is already unsigned long not int

I was assuming the poster is using Java, since Java has no unsigned
integer types.  In Java, longs are 64 bits and ints are 32 bits.

> > If the magic number is only used in bitwise operations (no arithmetic),
> > you can even put it in an int and just allow it to be negative.
> > Shawn.
>
> it is used in arithmetic operations in TEA/XTEA (sum += delta; sum -= delta;)
> but still you can use negative number

Doesn't surprise me.  I didn't want to take the time to think through what was
going to happen to the sign bit, but it wouldn't surprise me if it works out just
fine.
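For what it's worth, the sign bit does come out in the wash: addition
modulo 2^32 gives the same low 32 bits whether the operands are read as
signed or unsigned, so a Java int holding delta as a negative number
works fine.  A quick Python check (32 cycles is TEA's usual schedule):

```python
DELTA_U = 0x9E3779B9              # TEA's delta, as an unsigned 32-bit value
DELTA_S = DELTA_U - (1 << 32)     # -0x61C88647: what a signed 32-bit int holds

def u32(x):
    """Keep only the low 32 bits, as 32-bit hardware (or a Java int) does."""
    return x & 0xFFFFFFFF

sum_u = sum_s = 0
for _ in range(32):               # 32 cycles of sum += delta
    sum_u = u32(sum_u + DELTA_U)
    sum_s = u32(sum_s + DELTA_S)

assert sum_u == sum_s == 0xC6EF3720   # the familiar final TEA sum
```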

Shawn.


------------------------------

From: Bob Silverman <[EMAIL PROTECTED]>
Subject: Re: Diffie Hellman Primes : Speed Tradeoff Q
Date: Fri, 30 Jun 2000 22:14:07 GMT

In article <8jeela$e5v$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] wrote:
> I'm implementing Diffie-Hellman for a client-server product. I
> understand that in an ideal world, a new shared p and g are chosen for
> each session.

Your understanding is wrong, even for an ideal world. It is neither
necessary nor desirable to have a new key for each exchange.

If you want forward secrecy, you must use both an ephemeral session
key and a static, permanent key in combination.


> Generating the key pair is easy, but coming up with a
> large (1024 or 2048 bit) p every time is prohibitively slow when we're
> talking about potentially 100's of new sessions per hour.

False.  Generating a 1024 bit prime should take a few tenths of a
second AT MOST if done correctly.

>
> Of the following options, which would the friends of sci.crypt
> recommend?
>
> 1. Generate a new smaller prime (say, 128 bit) for each new session.

This is pointless.  128 bits is *trivial* to break.


Select a 1024 bit prime.  Use it for years. Breaking a GF(p)
discrete log for 1024 bit p is (and will be for years) hopelessly
out of reach.
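For anyone wondering what "done correctly" looks like, the standard
recipe is random odd candidates, trial division by small primes, then
Miller-Rabin.  A hedged Python sketch (the round count and candidate
strategy here are illustrative, not a vetted implementation):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    # Cheap trial division first weeds out most candidates.
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True

def random_prime(bits):
    """Draw odd candidates with the top bit set until one passes."""
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(cand):
            return cand

p = random_prime(512)   # fast even in interpreted Python; 1024 is comparable
```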


--
Bob Silverman
"You can lead a horse's ass to knowledge, but you can't make him think"



------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Encryption and IBM's 12 teraflop MPC......
Date: Fri, 30 Jun 2000 22:07:36 GMT

[EMAIL PROTECTED] wrote:
> Any sources online that discuss the new IBM machine and its crypto
> implications. Think the NSA already has one?

The NSA has "always" had supercomputers, often with special designs
such as associative memory to speed up tasks they are interested in.

The problems with a massively parallel array of general-purpose CPUs
(as in the RS/6000) are twofold: (1) Not every problem lends itself
to partitioning into a zillion parallel tasks; and (2) Bookkeeping
(coordination) can sometimes be a major problem.

I'm not very familiar with modern factorization algorithms, but I'd
be very surprised if they were other than sublinear in the number of
processors.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Encryption and IBM's 12 teraflop MPC......
Date: Fri, 30 Jun 2000 22:08:57 GMT

Dann Corbit wrote:
> "Tom McCune" <[EMAIL PROTECTED]> wrote ...:
> > Out of curiosity, why does it have 8192 processors?
> A power of 2 lends itself to n-cube style wiring (just a guess).

Also a power of two is natural for addressing the units.

------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: comp.theory
Subject: Re: Newbie question about factoring
Date: Fri, 30 Jun 2000 22:45:32 GMT

In article <8jj1qb$7t7$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (Daniel A. Jimenez) wrote:

> No, just "real hard."  Unless P=NP, it's unlikely that factoring is NP-hard.
> Nevertheless, that doesn't mean factoring is easy; one could still imagine
> a lower bound of subexponential or very high order polynomial on its
> complexity, making it practically very hard.  For instance, maybe a lower
> bound on the complexity of factoring is Omega(n^1000), where n is the
> logarithm of the number to be factored.  Then it's doable in polynomial
> time, but it's still going to take a very long time.
>
> >Also, can factoring be described as a decision problem?
>
> Yes, in a couple of different ways:
>
> 1.  Given positive integers N and K, is there a non-trivial prime factor of
> N less than K?  You can use a binary search to find a factor of N this way
> (or determine that N is prime if the only such K is equal to N).
>
> 2.  Given positive integers N and K, is the K'th bit in the binary
> representation of the smallest prime factor of N equal to one?  Stepping
> through K gives you a factor of N one bit at a time (or N itself, if N
> is prime).
> --
> Daniel Jimenez                     [EMAIL PROTECTED]
> "I've so much music in my head" -- Maurice Ravel, shortly before his death.
> "                             " -- John Cage
>

Thanks for the reply. Since you seem to be an
expert I will ask you another question:

In RSA's FAQ it also states that Shor's
(quantum) algorithm for factorization could do
factoring in polynomial time if properly
implemented. Additionally, it was proven that
Shor's algorithm cannot be generalized for
*all* NP problems. Does this imply that
factoring *cannot* be NP-complete, or what
does it mean?
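The binary search in point 1 of the quoted reply can be sketched
concretely; in this toy Python version, trial division stands in for
whatever actually answers the decision problem:

```python
def has_factor_below(n, k):
    """Toy oracle for the decision problem: does n have a
    non-trivial factor less than k?  (Trial division stands in
    for the real decision procedure, purely for illustration.)"""
    return any(n % d == 0 for d in range(2, min(k, n)))

def smallest_factor(n):
    """Binary-search the oracle for the smallest factor of n.
    Returns n itself when n is prime."""
    lo, hi = 2, n
    while lo < hi:
        mid = (lo + hi) // 2
        if has_factor_below(n, mid + 1):   # any factor <= mid?
            hi = mid
        else:
            lo = mid + 1
    return lo

print(smallest_factor(91))   # 7
print(smallest_factor(97))   # 97 (prime)
```

Only O(log N) oracle calls are needed, which is why a polynomial-time
decision procedure would give polynomial-time factoring.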



------------------------------

From: [EMAIL PROTECTED] (Bill Unruh)
Subject: Re: Encryption and IBM's 12 teraflop MPC......
Date: 1 Jul 2000 00:12:40 GMT

In <8jj2sp$sre$[EMAIL PROTECTED]> [EMAIL PROTECTED] writes:

]I was just reading about the new 12 teraflop MPC (massively parallel
]computer) IBM has just released to the Department of Energy's Lawrence
]Livermore National Laboratory in Livermore, California.

](http://www.msnbc.com/news/426657.asp?
]bt=pu&btu=http://www.msnbc.com/m/olk2k/msnbc_o_install.asp  I don't
]know if that link will work.....)

]There is a lot of discussion about all that can be done with such a
]computer along the lines of the simulation of nuclear explosions and
]weather phenomena. But surprisingly, nothing about cryptographic
]implications (such as factoring).

]I seem to recall Schneier had a table in one of his books that compared
]keylength to time required for brute force key recovery given computing
]capability at the time. But it seems computing power is growing at a
]rate faster than predicted. Does anyone know how such computing power
]as a 12 teraflop MPC affects current perception of the security of
]various keylengths under various crypto systems?

That is a Massively Parallel Computer, which means it acts like a whole
bunch of individual machines. Parts of the factoring are much faster on
such a machine; parts apparently are not. However, if you just use the
raw asymptotic formula for, say, NFS, then 2048 bits would take 10^22
sec and 1024 bits would take 10^13 sec. I.e., just take Schneier's times
and divide by the ratio in speeds of the computers.
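Those round numbers can be reproduced from the usual GNFS heuristic
running time L_n[1/3, (64/9)^(1/3)] = exp((64/9)^(1/3) (ln n)^(1/3)
(ln ln n)^(2/3)).  A Python back-of-the-envelope (the o(1) term and all
constant factors are dropped, so only the exponents mean anything):

```python
import math

def nfs_ops(bits):
    """Heuristic GNFS operation count L_n[1/3, (64/9)^(1/3)]
    for a 'bits'-bit modulus, constants and o(1) ignored."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

FLOPS = 12e12   # the 12-teraflop machine under discussion
for bits in (1024, 2048):
    seconds = nfs_ops(bits) / FLOPS
    print(f"{bits}-bit modulus: ~10^{math.log10(seconds):.0f} seconds")
```

This prints roughly 10^13 seconds for 1024 bits and 10^22 for 2048,
matching the figures above.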



------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Encryption and IBM's 12 teraflop MPC......
Date: Sat, 01 Jul 2000 00:26:33 GMT

Dann Corbit <[EMAIL PROTECTED]> wrote:
>> Out of curiosity, why does it have 8192 processors?  Is it just a nice
>> even 8k number, or is there special reason to have an even multiple,
>> rather than 8200 or 8100?

> A power of 2 lends itself to n-cube style wiring (just a guess).

ASCI White is composed of 512 RS6000s with some minor hardware changes
to speed up inter-node communications. Since the nodes are 16
processor SMPs, 16 * 512 = 8192.

All of the ASCI machines, at least recent ones, have been fairly
similar designs, because it's the optimal solution for modeling
nuclear explosions. As to its use in cryptography, brute-force key
searches and the sieving portion of factoring could be done with wet
string connecting the nodes, so yes, it could be used for that. Whether
it would be useful for the final step, or how large a number it could
factor given memory constraints, I have no idea; you'd need to find a
real factoring guru.

As far as I know, the NSA seems to prefer vector machines, so perhaps
they're more useful in some situations. 

-- 
Matt Gauthier <[EMAIL PROTECTED]>

------------------------------

From: Tom McCune <[EMAIL PROTECTED]>
Subject: Re: Encryption and IBM's 12 teraflop MPC......
Date: Sat, 01 Jul 2000 00:35:05 GMT

In article <ZUa75.1319$[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>Dann Corbit <[EMAIL PROTECTED]> wrote:
>>> Out of curiosity, why does it have 8192 processors?  Is it just a nice
>>> even 8k number, or is there special reason to have an even multiple,
>>> rather than 8200 or 8100?
>
>> A power of 2 lends itself to n-cube style wiring (just a guess).
>
>ASCI White is composed of 512 RS6000s with some minor hardware changes
>to speed up inter-node communications. Since the nodes are 16
>processor SMPs, 16 * 512 = 8192.
<snip>

Thanks for the responses - I appreciate it.

-Tom

------------------------------

From: Dido Sevilla <[EMAIL PROTECTED]>
Crossposted-To: comp.theory
Subject: Re: Newbie question about factoring
Date: Sat, 01 Jul 2000 08:49:16 +0800

[EMAIL PROTECTED] wrote:
> 
>   In the FAQs for sci.crypt and rsasecurity.com
> it is stated that the security of RSA depends
> (partially) on the assumption that factoring is
> hard. Does this mean the assumption that
> factoring is NP-hard and, if not, then how
> hard? Also, can factoring be described as a
> decision problem?

Factoring is not known to be NP-hard.  Just very, very difficult.  All
known (classical) algorithms for factoring take time that grows faster
than any polynomial in the number of digits of the number to be factored
(or equivalently, the logarithm of the number to factor); no one has
been able to find a polynomial-time factoring algorithm.  Factoring is
in NP, but it has not been proven NP-complete, and is widely believed
not to be.  That would only change if someone came up with a
polynomial-time reduction from satisfiability, or some other proven
NP-complete problem, to factoring...

--
Rafael R. Sevilla <[EMAIL PROTECTED]>         +63 (2)   4342217
ICSM-F Development Team                         +63 (917) 4458925
University of the Philippines Diliman

------------------------------

From: Dido Sevilla <[EMAIL PROTECTED]>
Subject: Re: An encryption protocol, version 2
Date: Sat, 01 Jul 2000 08:43:35 +0800

Mark Wooding wrote:
> 
> Analysis:
> 
>   * I can't see why message 4 is useful.  By this time, the client knows
>     the server is OK (because he encrypted a valid timestamp T_S and the
>     public constant N_0 in step 2 under the shared secret K_i).  We can
>     therefore elide message 4, and combine messages 3 and 5.
> 

Message 4 is for synchronization purposes only.  I'll probably be using
UDP in the actual implementation and things such as this might be
needed.  It's just a message that means the server is ready to begin
receiving data from the client.

>   * Do we need all of those distinct timestamps?  My suspicion would be
>     that, once the server has issued T_S in message 2, we can continue
>     using T_S as a `session identifier' in the other messages, rather
>     than inventing new fresh timestamps T'_S and T'_{C_i}.  Comparison
>     is also easier if we do this, and we don't need to think about
>     awkward issues like permissible time windows and possible clock skew
>     in the server.
> 

The very nature of the devices being designed as clients (they're
bundy-clock-type devices, actually) means that they can't afford a
great deal of clock skew with respect to the server.  Otherwise, the
data they produce might as well be useless.  The timestamps are also
useful so that the clients know when to resynchronize themselves, and
for avoiding replay attacks.

> 
> Reasonable choices.  After the demolition job the Counterpane boys did
> on Rijndael relatively recently, I'm a bit cautious about it -- breaking
> 7 rounds out of 10 is a worry, even if the attack isn't particularly
> practical.  Serpent is surely a good choice, however, if you're going to
> pick an AES candidate.  If you take my advice above and replace the hash

Really?  Where can I get more up-to-date news on the status of the AES
candidates?  I was already beginning to lean towards using Rijndael
because it was so easy to implement on my embedded processor.  Maybe I
can increase Rijndael's rounds to 32...  It's also several times smaller
(in terms of code and data size) and faster than Serpent.  The trouble
with Serpent is that it seems to be biased towards 32-bit processors and
the Pentium architecture.  Implementing it even on something like an
80186 (a 16-bit member of the 80x86 family) was hard, as I can say from
experience.

> by a MAC, a good choice would be HMAC-SHA1.  It's slower than raw SHA1
> by an additive constant (not a factor), and gives security which is
> provably related to the strength of SHA.
> 

Where can I find sample C code and/or specifications for this
algorithm?  I hope it's not too different from SHA-1, because I've
already implemented that algorithm for my embedded processor.  Perhaps
I should just use my encryption algorithm as my MAC -- or are there
problems with using a block cipher as a MAC?
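For reference, HMAC per RFC 2104 is just two nested hash invocations
around XOR-padded copies of the key, so an existing SHA-1 implementation
is almost all the work.  A Python sketch, built both from the standard
library and by hand (the key and message are made up):

```python
import hashlib
import hmac

key = b"a made-up key"
msg = b"a made-up message"

# The library version of HMAC-SHA1.
tag = hmac.new(key, msg, hashlib.sha1).hexdigest()

# The same thing by hand, per RFC 2104: H((K^opad) || H((K^ipad) || msg)).
# SHA-1's block size is 64 bytes; keys longer than that get hashed first.
k = key if len(key) <= 64 else hashlib.sha1(key).digest()
block = k.ljust(64, b"\x00")
ipad = bytes(b ^ 0x36 for b in block)
opad = bytes(b ^ 0x5C for b in block)
manual = hashlib.sha1(opad + hashlib.sha1(ipad + msg).digest()).hexdigest()

assert tag == manual
```

As for block ciphers as MACs: CBC-MAC is the classical construction,
though it needs care with variable-length messages.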

And thanks for the advice.

--
Rafael R. Sevilla <[EMAIL PROTECTED]>         +63 (2)   4342217
ICSM-F Development Team                         +63 (917) 4458925
University of the Philippines Diliman

------------------------------

From: [EMAIL PROTECTED] (John M. Gamble)
Subject: Re: Sellotape and scotch tape
Date: 1 Jul 2000 01:06:59 GMT

In article <8jcvsn$8ck$[EMAIL PROTECTED]>,
Scott Fluhrer <[EMAIL PROTECTED]> wrote:
>
>John Myre <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>>
>> This is way off-topic, except that Sellotape was actually
>> mentioned in another post.
>>
>> I forget, there is a word for when a brand name becomes the
>> (de facto) general term.  Good examples in the US include
>> zipper, kleenex, and the aforementioned scotch tape.  In each
>> case, using the "proper" generic term is rare.  Anybody
>> remember what this is called?
>>
>> Meanwhile, this is the first I've heard of Sellotape.  Is it
>> sold in the US?  Is Scotch tape sold in the UK?
>>
>> Is there a term for the reverse process?  That is, a general
>> term that is appropriated as a brand name.  Examples could
>> include PC (the IBM one) and Windows (Microsoft).
>ObNit: "PC" is not an example.  The first usage of the term (AFAIK), either
>as the abbreviation, or the full term "Personal Computer", was as the
>product name (and marketing hype) of the "IBM PC".  If anything, it's an
>example of a brand name becoming a generic name (and yes, I forget what
>that's called too).
>

Er, no actually.  PC was the term for personal computer before IBM
marketroids tried to hijack the name (for a while, successfully).  If
you can find some old Byte magazines you can confirm this.

Or, if you are willing to take my word for it, i could tell you of a
certain low level of resentment amongst the early computer users
(personal or otherwise) against IBM for such a maneuver.

This was, after all, back in the days when there were a lot more PC
(IBM or otherwise) companies than there are now, and CP/M was still a
viable competitor vs. MS-DOS (i was working for a company that had to
support a program that ran on either system at the time).

        -john

February 28 1997: Last day libraries could order catalogue cards
from the Library of Congress.
--
Pursuant to US Code, Title 47, Chapter 5, Subchapter II, '227,
any and all unsolicited commercial E-mail sent to this address
is subject to a download and archival fee in the amount of $500
US.  E-mailing denotes acceptance of these terms.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
