Cryptography-Digest Digest #99, Volume #13        Sat, 4 Nov 00 20:13:00 EST

Contents:
  Generalized Playfair ("r.e.s.")
  Re: hardware RNG's (David Schwartz)
  Re: Randomness from key presses and other user interaction (Terry Ritter)
  Re: Randomness from key presses and other user interaction (Terry Ritter)
  Protocol (David Wagner)
  Re: Randomness from key presses and other user interaction (David Schwartz)
  Re: Randomness from key presses and other user interaction (David Schwartz)
  Re: Known-plaintext attack on chaining modes (was Re: Give it up?) (Tom St Denis)
  Re: very large mult. div. ([EMAIL PROTECTED])
  Re: Brute force against DES ("John A. Malley")
  Re: Spanish Language Data (Roberto Santilli)
  Re: BENNY AND THE MTB? ("Matt Timmermans")

----------------------------------------------------------------------------

From: "r.e.s." <[EMAIL PROTECTED]>
Subject: Generalized Playfair
Date: Sat, 4 Nov 2000 14:12:42 -0800

(This is partly in response to a recent thread that asked
about 3-dim Playfair.)

Here's a procedure that generalizes the Playfair cipher to
use an arbitrary number of alphabet squares (say M squares),
and also generalizes further to d-dimensional arrays of an
alphabet, with d=1,2,3,... (I don't know whether this method
is already in the literature.)

The generalization described first, which is for d=2, reduces
to ordinary Playfair when M=1, and reduces to the so-called
Two Square cipher when M=2.

The method for d=2, and arbitrary M (=1,2,...), is this:

(1) Use an alphabet that can be arranged into a square of
letters (e.g. 5 x 5).

(2) Make M such squares, each with a different permutation
of the same alphabet, and lay out the squares side-by-side
in a row, with all the rows aligned. (E.g., if the squares
are 5 x 5, this will make a 5 x 5M table.)

Suppose the plaintext is "abcde...", and the ciphertext is
"ABCDE...", where "A" simply stands for the encipherment
of "a" and is itself some letter of the plaintext alphabet.
Let's omit seriation, which could, however, also be done.

(3) Break the plaintext into groups of M letters
(M=#squares), padding the last group if needed.
Some examples for M=3 are
"abc def ..." = "an example" = "ane xam ple"
"abc def ..." = "padding" = "pad din gxx"

(4) Encipher one group at a time, treating it as a circular
sequence of overlapping letter-pairs (e.g. for M=3, "abc"
is seen as ab-bc-ca).  Each letter of a group is enciphered
as the first half of an ordinary Playfair digraph of which
it is the first letter, while cycling through the M squares
in succession.

More precisely, define the notation ab(ij) as follows:

ab(ij) = the element in the j-th square,
that's in the same *row* as "a" of the i-th square,
and in the same *column* as "b" of the j-th square;
if "a" and "b" are in the same row, then take the
element to the right of "b" (regarding the first
letter in a row of any square to be to the "right"
of the last letter of that row).

Thus, taking M=3 to illustrate:
Encipher "abc def ..." -> "ABC DEF ..."
according to
a -> ab(12) = A
b -> bc(23) = B
c -> ca(31) = C

d -> de(12) = D
e -> ef(23) = E
f -> fd(31) = F

etc etc

(5)
Decipher "ABC DEF ..." -> "abc def ..."
according to
A -> AC(21) = a
B -> BA(32) = b
C -> CB(13) = c

D -> DF(21) = d
E -> ED(32) = e
F -> FE(13) = f

etc etc


========
Example (M=3, plaintext="an example")

  1      2      3
=====  =====  =====
dhtrl  ctilg  nerio
bqxus  jbxrh  suaht
kaicm  siodx  vdbmw
nejgo  wmnfy  fkyjg
fvpyw  pvqak  cqxpl

Encipher:  ane xam ple -> okh rpi art

ane -> okh
according to
a -> an(1->2) = o
n -> ne(2->3) = k
e -> ea(3->1) = h

as shown:

  1      2      3
=====  =====  =====
dh<------------erio
bqxus  jbxrh  suaht
ka------>odx  vdbmw
nejgo  wmn---->kyjg
fvpyw  pvqak  cqxpl

Similarly,
xam -> rpi:

  1      2      3
=====  =====  =====
dhtrl  ctilg  nerio
bqx------>rh  suaht
kai<-------------mw
nejgo  wmnfy  fkyjg
fvpyw  pvqa----->pl

ple -> art:

  1      2      3
=====  =====  =====
  |<-----------|
dhtrl  ctil--->erio
bqxus  jbxrh  suaht
kaicm  siodx  vdbmw
nejgo  wmnfy  fkyjg
fvp------>ak  cqxpl



Decipher:
okh rpi art -> ane xam ple

okh -> ane
according to
o -> oh(2->1) = a
k -> ko(3->2) = n
h -> hk(1->3) = e

rpi -> xam
r -> ri(2->1) = x
p -> pr(3->2) = a
i -> ip(1->3) = m

art -> ple
a -> at(2->1) = p
r -> ra(3->2) = l
t -> tr(1->3) = e

==========
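For the curious, steps (1)-(5) are small enough to sketch in
Python. This is my own rendering of the procedure (function
names mine), checked against the worked example above:

```python
def find(square, ch):
    """Locate ch in a square given as a list of row strings."""
    for r, row in enumerate(square):
        c = row.find(ch)
        if c != -1:
            return r, c
    raise ValueError(ch)

def encipher_group(group, squares):
    """Step (4): letter i pairs with letter i+1 (cyclically) across squares."""
    M, n = len(squares), len(squares[0][0])
    out = []
    for i, a in enumerate(group):
        j = (i + 1) % M
        ra, _ = find(squares[i], a)           # row of "a" in the i-th square
        rb, cb = find(squares[j], group[j])   # column of "b" in the j-th square
        if ra == rb:                          # same-row rule: go right of "b"
            cb = (cb + 1) % n
        out.append(squares[j][ra][cb])
    return ''.join(out)

def decipher_group(group, squares):
    """Step (5): ciphertext letter i lives in square i+1; the previous
    ciphertext letter fixes the column of the plaintext letter in square i."""
    M, n = len(squares), len(squares[0][0])
    out = []
    for i, A in enumerate(group):
        rA, _ = find(squares[(i + 1) % M], A)
        rB, cB = find(squares[i], group[(i - 1) % M])
        if rA == rB:                          # inverse same-row rule: go left
            cB = (cB - 1) % n
        out.append(squares[i][rA][cB])
    return ''.join(out)

def encipher(text, squares):
    M = len(squares)
    text += 'x' * (-len(text) % M)            # step (3): pad the last group
    return ''.join(encipher_group(text[i:i + M], squares)
                   for i in range(0, len(text), M))

def decipher(text, squares):
    M = len(squares)
    return ''.join(decipher_group(text[i:i + M], squares)
                   for i in range(0, len(text), M))

squares = [["dhtrl", "bqxus", "kaicm", "nejgo", "fvpyw"],   # square 1
           ["ctilg", "jbxrh", "siodx", "wmnfy", "pvqak"],   # square 2
           ["nerio", "suaht", "vdbmw", "fkyjg", "cqxpl"]]   # square 3

print(encipher("anexample", squares))   # okhrpiart, as in the example
print(decipher("okhrpiart", squares))   # anexample
```

Run against the three example squares, this reproduces the
encipherment and decipherment shown above.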

Further generalizing the above method to d-dimensional arrays
of an alphabet, with d=1,2,3,..., is straightforward.

For example, with d=3, an alphabet is arranged into a cubic
array, and M such cubes are placed side-by-side. Take the case
of M=2 for example:  A digraph "ab" is such that the location
of "a" in cube#1 defines in that cube a square of letters lying
in a plane "facing" the next cube (cube#2), and similarly, "b"
in cube#2 defines in that cube a square of letters lying in a
plane "facing" the next cube (cube#1).

Enciphering is then, with M=2 for illustration,
a->ab(12)=A = orthogonal projection of "a" into the "b"-square,
b->ba(21)=B = orthogonal projection of "b" into the "a"-square,
etc etc
Deciphering is, correspondingly,
A->AB(12)=a = orthogonal projection of "A" into the "B"-square,
B->BA(21)=b = orthogonal projection of "B" into the "A"-square,
etc.

=======

Yet another variation of this generalized Playfair would be
to *not* cycle back to the first letter of each group, but
instead to treat the entire plaintext string as a sequence of
overlapping letter-pairs, with the first letter regarded as
following the last letter of the message. (And still cycling
through the M squares in succession as before.)

--r.e.s.



------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: hardware RNG's
Date: Sat, 04 Nov 2000 14:17:37 -0800


Terry Ritter wrote:

> There is something wrong with this logic!  If various signals in the
> area do affect the noise RNG, then our device is no longer based
> solely on quantum effects.  That is very dangerous because it means
> that one of the other effects which it uses might be controlled,
> perhaps fairly easily.  I claim that what we want to do is to isolate
> the noise signal from every other reasonable effect.

        This is probably the fundamental source of my disagreement with you.
There is absolutely no need to isolate the noise source from all other
possible sources, provided the noise is still there. All you have to do
is run a cryptographically-secure hash function on the sampled data and
you can be assured that almost as much noise as was in the input is in
the output, up to the limit of the hash function.
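	Concretely, with SHA-256 as the hash (the sample values
below are made up for illustration):

```python
import hashlib

# Samples that mix predictable data (e.g. timestamps) with some genuine
# jitter.  Hashing the whole pool preserves whatever entropy the samples
# actually contained, up to the size of the hash output.
samples = b"".join(t.to_bytes(8, "little") for t in [1000, 1021, 1039, 1060])
seed = hashlib.sha256(samples).digest()   # 32 bytes; no *more* entropy than
                                          # went in, but none thrown away
```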

        For analysis and for some theoretical purposes, it is nice to be able
to isolate the unpredictable elements from the potentially predictable
ones. But this in no practical way affects the security of the total
system, provided the entropy is managed properly.

        Linux's /dev/random, for example, mixes TSC bits into an entropy pool
which is managed in a cryptographically strong way. The addition of
predictable data into that entropy pool does not compromise the
randomness at all. The addition of a 32-bit value which an attacker has
a 99% chance of guessing correctly adds 1% of 32 bits of entropy to the
pool.
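	As a back-of-the-envelope check of that figure (my
arithmetic): the Shannon entropy of a 32-bit value guessed
correctly with probability 0.99, with the remaining probability
spread uniformly over the other values, comes out near 0.4 bits,
the same order as "1% of 32 bits":

```python
import math

p_guess = 0.99                 # attacker guesses the 32-bit value correctly
n_other = 2**32 - 1            # remaining values, assumed equally likely
p_other = (1 - p_guess) / n_other

h = (-p_guess * math.log2(p_guess)
     - n_other * p_other * math.log2(p_other))
print(round(h, 2))             # about 0.4 bits of entropy added
```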

        So long as you get some randomness in, you will get some randomness
out. So long as you get enough randomness in, you will get enough
randomness out. Anything else that goes in has no effect. The form that
randomness is in has no effect.

        DS

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Randomness from key presses and other user interaction
Date: Sat, 04 Nov 2000 22:48:32 GMT


On Sat, 04 Nov 2000 12:56:53 -0800, in
<[EMAIL PROTECTED]>, in sci.crypt David Schwartz
<[EMAIL PROTECTED]> wrote:

>
>Mack wrote:
>> 
>> There seems to be some argument as to whether timing
>> keystrokes is a good source of randomness.
>> 
>> So let's start a thread on that.
>> 
>> 1) Key stroke timing is generally quantized to about 20 ms
>> give or take.
>
>       It's the give or take in the 20 ms that contains the entropy. (At
>least, for cases where the keyboard oscillator is taken directly from
>the crystal or from a separate crystal. This does not apply for cases
>where the keyboard clock is produced by dividing the FSB clock.)

In exactly what PC system is the keyboard *not* a separate unit with
its own internal clock?

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Randomness from key presses and other user interaction
Date: Sat, 04 Nov 2000 22:49:16 GMT


On 04 Nov 2000 20:39:33 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (Mack) wrote:

>There seems to be some argument as to whether timing
>keystrokes is a good source of randomness.
>
>So let's start a thread on that.
>
>1) Key stroke timing is generally quantized to about 20 ms
>give or take.
>
>2) The average typist doesn't get up to 90 wpm, which assumes
>6 keystrokes per word or thereabouts. That is 540 keystrokes
>per minute, or 9 keystrokes per second.
>
>3) This gives an average of 5.5 quanta per keystroke.
>
>4) Now our typist may have some keystrokes that are 5 quanta
>and some that are 6 quanta, and some that are even more or less.
>
>5) This gives at MOST 1 bit per keystroke.
>
>Now for the personal view.
>
>I would say that it takes about 10 keystrokes to get 1 bit.
>But that is still 54 bits/minute.
>
>This isn't the best source of entropy.
>
>Now this assumes a good typist.  A bad typist tends to
>make more erratic keystrokes and fewer keystrokes per
>minute.  So the best we can hope for is about 54 bits/min.

Probably few of us sit down and type at a fast rate without pause.  In
particular, I pause quite a lot, which means no keystrokes.


>Now using Clock skew will probably generate a couple of
>bits per minute.  

No.  Clocks are generally deterministic.  Finding an unexpected edge
is just something happening a little earlier or later than it would
have happened anyway.  The result is just a further refinement on the
frequency and phase of one clock compared to another.  Each such
surprise might be considered entropic, but as we track the clock
frequencies with increasing precision, we have fewer surprises.


>This estimate is much harder to pin down.
>It depends greatly on the system.  The entropy collected
>is the actual value of the clock skew.  This will change with
>time.  Not a lot, but enough that a couple of bits are available.

Once we know the frequency, phase and drift, we can predict skew.


>Long term collection of process timing is also capable of
>generating entropy if the processes interact with humans.

To the extent that humans make unexpected decisions, that is true by
definition.  Making the same decisions repeatedly is not entropic.


>In sum yes key press timings and mouse waggles can provide
>entropy. PGP certainly uses it and hashes the data with a secure
>hash.
>
>But if you need a lot of randomness on a continuous basis it
>is better to invest in a hardware RNG and then test its limits.

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Protocol
Date: 4 Nov 2000 22:56:56 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)

There was recently a discussion on the following protocol:
   1. A->B: {N_A}_{K_B}
   2. B->A: {N_B}_{K_A}
and then forming a session key as K = H(N_A,N_B).
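Concretely, the key-derivation step looks like this (SHA-256
standing in for the unspecified H; the public-key encryptions
{N_A}_{K_B} and {N_B}_{K_A} are elided):

```python
import hashlib
import secrets

n_a = secrets.token_bytes(16)   # N_A, chosen by A, sent as {N_A}_{K_B}
n_b = secrets.token_bytes(16)   # N_B, chosen by B, sent as {N_B}_{K_A}

# After the exchange each side holds both nonces and derives the same key:
k = hashlib.sha256(n_a + n_b).digest()   # K = H(N_A, N_B)
```

Note that nothing in the two messages binds the nonces to the
identities of A and B; what kind of authentication this does and
does not provide is exactly the sort of question the paper below
examines.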

For those who were considering the above protocol, there is a
discussion of the protocol in the following paper, which is likely
to be of interest:
  Martin Abadi. Two facets of authentication.
  In 11th IEEE Computer Security Foundations Workshop, pages 27-32.
  IEEE Computer Society, June 1998.
  Also appeared as SRC Technical Note 1998-007.
  http://gatekeeper.dec.com/pub/DEC/SRC/technical-notes/abstracts/src-tn-1998-007.html
Happy reading.

------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: Randomness from key presses and other user interaction
Date: Sat, 04 Nov 2000 14:59:04 -0800


Terry Ritter wrote:

> >       It's the give or take in the 20 ms that contains the entropy. (At
> >least, for cases where the keyboard oscillator is taken directly from
> >the crystal or from a separate crystal. This does not apply for cases
> >where the keyboard clock is produced by dividing the FSB clock.)
> 
> In exactly what PC system is the keyboard *not* a separate unit with
> its own internal clock?

        They vary actually. Some systems divide the FSB clock to get the
keyboard controller clock. Some systems use the crystal output to
provide the keyboard controller clock. The multiplier that generates the
FSB frequency has lots of unpredictable noise and jitter.

        In any event, though, there are really two keyboard controllers. One in
the keyboard, which generates the scan timings, and one on the
motherboard, which sits at the other end of a serial link. These are
definitely two separate uncompensated crystal oscillators. The precise
time at which the keyboard controller on the motherboard will detect the
scan detected by the controller in the keyboard, as measured by the
processor's TSC, contains the entropy from drift and jitter between the
oscillator on the motherboard and the oscillator in the keyboard.

        Unfortunately, exactly how much entropy you get depends upon lots of
implementation details. But, again, the TSC noise alone is at least 8
parts per billion per second. So two events a second apart sampled with
a 500 MHz TSC have on the order of 4 bits of entropy in the difference
between the two TSC values.
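	Taking those figures at face value (my arithmetic): 8 ppb
per second at a 500 MHz counter corresponds to a wander of about
4 counter cycles between events one second apart; how many usable
bits that yields depends on the distribution of the wander.

```python
noise_ppb = 8e-9      # the figure above: 8 parts per billion per second
tsc_hz = 500e6        # 500 MHz time-stamp counter
interval_s = 1.0      # two events one second apart

wander_cycles = noise_ppb * tsc_hz * interval_s
print(wander_cycles)  # about 4 cycles of unpredictable wander
```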

        DS

------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: Randomness from key presses and other user interaction
Date: Sat, 04 Nov 2000 15:09:22 -0800


David Schwartz wrote:

>         Unfortunately, exactly how much entropy you get depends upon lots of
> implementation details. But, again, the TSC noise alone is at least 8
> parts per billion per second. So two events a second apart sampled with
> a 500Mhz TSC has on the order of 4 bits of entropy in the difference
> between the two TSC values.

        By the way, a crystal oscillator with a stability of 8 parts per
billion would drift by about a hundred-thousandth of a second per day.
Do you really believe that a clock running off an uncompensated crystal
oscillator would be that stable? Try it with NTP against your TSC. Let
it sync to a time server for a month so that it has an excellent
estimate of its average frequency, then cut it loose for a day. You will
find it off by greater than 12 parts per billion. And you will find that
it's fast about half the time and slow about half the time.

        In fact, medium-term stability (a week or so) of uncompensated crystal
oscillators is so bad that there's no point in measuring the frequency
to better than about 8 digits. This is why NTP writes its ntp.drift
value out with only three decimal places.

        With a 500 MHz Pentium, you really can measure these things from second
to second. After all, a billion cycles takes just two seconds. So one
part per billion of noise is measurable as about one bit of entropy in
two seconds. Many people just don't realize that modern CPUs are
actually fast enough to measure this. But they are.

        DS

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Known-plaintext attack on chaining modes (was Re: Give it up?)
Date: Sat, 04 Nov 2000 23:16:14 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote:
> [EMAIL PROTECTED] (David Hopwood) wrote in
> <[EMAIL PROTECTED]>:
>
>   CUT
>
> >
> >It prevents a chosen-plaintext attack [*], but not known-plaintext.
> >
> >For full-feedback CFB,
> >  C[i] = P[i] XOR E[K](C[i-1])
> >  so (C[i-1], C[i] XOR P[i]) is a known plaintext/ciphertext (pt/ct)
> >  pair.
> >
> >For CBC,
> >  C[i] = E[K](P[i] XOR C[i-1])
> >  so (P[i] XOR C[i-1], C[i]) is a known pt/ct pair.
> >
> >For full-feedback OFB,
> >  S[i] = E[K](S[i-1])
> >  P[i] = C[i] XOR S[i]
> >  so (C[i-1] XOR P[i-1], C[i] XOR P[i]) is a known pt/ct pair.
> >
> >I.e. for each of these modes, having known message plaintext allows
> >calculating pt/ct pairs for the underlying cipher (in the case of OFB,
> >n+1 consecutive known message blocks are needed to find n pairs).
> >
> >
>
>   I think you covered all the basic approved 3-letter-blessed
> chaining modes. The above is the whole reason for my use of "Wrapped
> PCBC": when one tries to attack scott16u or scott19u, the attacker
> cannot get the "pt/ct" pairs. One could easily do the same thing even
> with Rijndael, so that an attacker would not be able to obtain this
> kind of information, which would otherwise make it easier to break.
> It would slow the speed down, but if one has some faith in Rijndael
> and wants the "all or nothing property" and a lot more security, it
> would not be that hard to do. Even the so-called Slide Attack that
> Mr. Wagner made was not able to obtain these pairs in scott19u,
> and it would not be able to obtain them from Rijndael if used instead
> of the single-cycle 19-bit look-up table.
>   But do many really want encryption that is secure?

Nope.

Tom
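The identities quoted above are easy to check mechanically. A
sketch with a toy 8-bit "block cipher" (a key-derived random
permutation, my stand-in for E[K], not a real cipher):

```python
import random

def make_cipher(key):
    """Toy 8-bit block cipher: a key-derived permutation of 0..255."""
    perm = list(range(256))
    random.Random(key).shuffle(perm)
    return perm.__getitem__

E = make_cipher(key=42)
iv, pt = 7, [10, 20, 30, 40]

# CBC: C[i] = E(P[i] XOR C[i-1])
cbc, prev = [], iv
for p in pt:
    prev = E(p ^ prev)
    cbc.append(prev)

# Known plaintext gives pt/ct pairs for E itself:
prev = iv
for p, c in zip(pt, cbc):
    assert E(p ^ prev) == c        # (P[i] XOR C[i-1], C[i]) is a pair for E
    prev = c

# Full-feedback CFB: C[i] = P[i] XOR E(C[i-1])
cfb, prev = [], iv
for p in pt:
    c = p ^ E(prev)
    cfb.append(c)
    prev = c

prev = iv
for p, c in zip(pt, cfb):
    assert E(prev) == c ^ p        # (C[i-1], C[i] XOR P[i]) is a pair for E
    prev = c
```

The assertions hold for any choice of toy key, IV, and plaintext,
which is the point: known message plaintext yields pt/ct pairs
for the underlying cipher regardless of the data.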


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: very large mult. div.
Date: Sun, 05 Nov 2000 00:02:23 GMT

In article <[EMAIL PROTECTED]>,
  Brian Phillips <[EMAIL PROTECTED]> wrote:
>
> I am implementing DSA and was wondering if anyone had any knowledge on
> large (160 bit) multipliers and dividers. Any help is appreciated.
>
> Brian
>
>
If time is not an issue, the problem can be divided into smaller
multiplies (32-bit or 16-bit). I implemented 512-bit multiplication
using a 32-bit multiplier on an FPGA running at 25 MHz.
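The same limb decomposition works in software; a sketch (names
mine) where every partial product is a 32x32-bit multiply, as in
a schoolbook hardware multiplier:

```python
MASK = 0xFFFFFFFF

def to_limbs(n, count):
    """Split n into `count` 32-bit limbs, least significant first."""
    return [(n >> (32 * i)) & MASK for i in range(count)]

def from_limbs(limbs):
    return sum(l << (32 * i) for i, l in enumerate(limbs))

def mul_limbs(a, b):
    """Schoolbook multiply: each step is a 32x32->64-bit product plus carry."""
    out = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = out[i + j] + ai * bj + carry
            out[i + j] = t & MASK
            carry = t >> 32
        out[i + len(b)] += carry      # final carry of this row
    return out

x = (1 << 159) + 123456789            # two 160-bit operands (5 limbs each)
y = (1 << 158) + 987654321
product = from_limbs(mul_limbs(to_limbs(x, 5), to_limbs(y, 5)))
assert product == x * y               # matches Python's bignum multiply
```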


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: "John A. Malley" <[EMAIL PROTECTED]>
Subject: Re: Brute force against DES
Date: Sat, 04 Nov 2000 16:14:28 -0800

Samir wrote:
> 
> (Excuse my English, I'm French...)
> 
> I am looking for information about brute-force attacks on DES.
> 
> I can use 20 computers for a known-plaintext attack, so I have
> only one plaintext and the corresponding ciphertext.
> I know that many keys can match; in French we say there are
> "collisions", and in English?
> 
> About the network, I finished the client/server application in MFC
> and I have no problem.
> 
> But my problem is: I have no idea what strategy to use in the attack.
> 
> Is it best for the server to search by random selection?
> 
> I have learned there are some optimizations, like the property: for
> every key K and every plaintext P, DES(K,P) = ~(DES(~K,~P)), so with
> this property I can test K and ~K in less time than testing K and ~K
> separately.
> What about other optimizations?

Given plaintext and corresponding ciphertext, and assuming the
ciphertext is in ECB form (no chaining),
consider using redundancy in the plaintext (if present) to speed up the
search for the correct key.

The ciphertext block and corresponding plaintext block each hold eight
8-bit bytes. Suppose some 8-bit byte values repeat in the plaintext
block while other 8-bit byte values occur only once in it. Say there
are k unique 8-bit byte values in the plaintext block, with the
remaining 8 - k bytes holding repeated values. Look first for the k
unique 8-bit byte values in the candidate plaintext block after
decrypting the ciphertext with a candidate key. Look for the unique
values one by one. That is, look for the first unique byte value; if
it is not there, discard the key without any further checking of the
deciphered block. If it is there, look for the next unique value.
Again, discard the candidate key immediately if the next unique value
is not there, and so on. If all of the unique byte values are found in
the decrypted ciphertext, next check that the repeating values are
present, in the same step-by-step manner, so the algorithm drops the
comparison at the first expected repeated byte that is missing.
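That screening order can be sketched directly (the sample block
and function names are my own, purely illustrative; a real search
would inline this in tight C or hardware):

```python
from collections import Counter

def screen_sets(known_plaintext):
    """Split the known 8-byte block into once-occurring and repeated values."""
    counts = Counter(known_plaintext)
    unique = [v for v, n in counts.items() if n == 1]
    repeated = [v for v, n in counts.items() if n > 1]
    return unique, repeated

def plausible(candidate, unique, repeated):
    """Early-reject test: bail out at the first expected value that is absent."""
    for v in unique:            # check the k unique byte values first
        if v not in candidate:
            return False        # discard this key with no further work
    for v in repeated:          # then the values expected to repeat
        if candidate.count(v) < 2:
            return False
    return True

known = b"AABBCDEF"                       # hypothetical known plaintext block
unique, repeated = screen_sets(known)
print(plausible(known, unique, repeated))           # True: right key survives
print(plausible(b"XXXXXXXX", unique, repeated))     # False: rejected at once
```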

Also consider separating out the decrypting activity from the checking
of deciphered output activity.  Some processes/threads can work through
a set of candidate keys, decipher the ciphertext block and make the
results available to other processes/threads that check the candidate
plaintext as described above.  This is a producer/consumer model. 

The volume of data passed from the producer to the consumer may be
modulated by letting the producer run a limited check on the candidate
deciphered plaintext - say it finds 2 or 3 of the k unique 8-bit byte
values expected in the plaintext if the key is right. The producer then
passes this particular candidate decipherment and corresponding key on
to the consumer process/thread for more extensive post-processing. By
adjusting the number of checks done immediately by the deciphering
process/thread, one can tune the traffic between producer and consumer
to fit the communications bandwidth available (memory-mapped files, a
named pipe, a stream over a TCP/IP connection, etc.; the details depend
on the architecture choices made in your application).
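The shape of that pipeline, sketched with a thread-safe queue
(the decrypt and check functions below are trivial placeholders I
made up, not DES):

```python
import queue
import threading

def producer(keys, decrypt, quick_check, q):
    """Decrypt under each candidate key; forward survivors of a cheap check."""
    for k in keys:
        cand = decrypt(k)
        if quick_check(cand):     # e.g. 2-3 of the unique byte values
            q.put((k, cand))
    q.put(None)                   # sentinel: no more candidates

def consumer(q, full_check, hits):
    """Run the full test on each forwarded candidate."""
    while (item := q.get()) is not None:
        k, cand = item
        if full_check(cand):
            hits.append(k)

q, hits = queue.Queue(maxsize=1024), []
decrypt = lambda k: bytes([k % 256] * 8)       # placeholder "decryption"
quick = lambda c: c[0] == 0x41                 # cheap screen
full = lambda c: c == b"AAAAAAAA"              # expensive full check

t = threading.Thread(target=consumer, args=(q, full, hits))
t.start()
producer(range(1000), decrypt, quick, q)
t.join()
print(hits)   # -> [65, 321, 577, 833]: keys passing both checks
```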

These tips are derived from "Cracking DES, Secrets of Encryption
Research, Wiretap Politics and Chip Design" by the Electronic Frontier
Foundation, ISBN 1-56592-520-3.


John A. Malley
[EMAIL PROTECTED]


 
 
> 
> I wish to make it clear that I'm not trying to attack DES directly ;-) I fix
> some bits of the key in order to reduce the search time...
> 
> thanks !!
> 
> --
> Samir Bellabes
> Etudiant - Student
> [EMAIL PROTECTED]

------------------------------

From: Roberto Santilli <[EMAIL PROTECTED]>
Subject: Re: Spanish Language Data
Date: Sat, 04 Nov 2000 01:21:52 +0100

Take a look at:
Gaines, Helen Fouché, Cryptanalysis: A Study of Ciphers and Their
Solution, Dover, New York, 1956

Roberto Santilli
Security Engineer
Intesis S.p.A.
www.intesis.it


ja wrote:

> Hi,
>
> I am trying to solve a simple monoalphabetic substitution that has the
> plain-text in Spanish.
>
> Where can I get Spanish letter frequency, digram, trigram and contact info ?
>
> Also, does anyone have some useful words to try as a crib.
>
> Thanks
>
> Jeremy
>
> jeremy A T electrosilk D O T net


------------------------------

From: "Matt Timmermans" <[EMAIL PROTECTED]>
Subject: Re: BENNY AND THE MTB?
Date: Sun, 05 Nov 2000 00:34:46 GMT

"Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...

> I /think/ I understand (roughly) how this is done for the case where
> the stream is longer than a block - but when it is smaller then a block,
> we have:
>
> ``If a file has <= 128 significant bits [...] the 128 most significant
>   bits are encrypted with rijndael.  This will typically result in a file
>   with 127 significant bits [...]''
>
> What then?  Is this where you apply a bijection between your finitely-odd
> files and 8-bit granular files?

Ah, I see.

The normal bijective compression process (without encryption) does:

byte-granular file -- (ppm/arithcoding) -> finitely odd stream -- (trivial
decoding) -> byte-granular file

The entire encryption process is performed on the finitely odd intermediate
stream, i.e.:

byte-granular file -- (ppm/arithcoding) -> finitely odd stream --
(encryption) --> finitely odd stream -- (trivial decoding) -> byte-granular
file.

A finitely odd stream with N (>0) significant bits has a 1 for its Nth bit,
and 0s for all (infinitely many) following bits.  A trivial decoding of a
stream with N significant bits will produce an output file with about N/8
bytes.

Because the final mapping (trivial decoding) from FOstreams to byte-granular
files is bijective, the encryption can perform any reversible operation on
the FOstream without changing the bijective nature of the entire process,
including operations that change the number of significant bits.

When the FOstream has >128 significant bits, the encryption process only
encrypts the bits before the last 1, and so the number of significant bits
in the stream doesn't change.

When the FOstream has <=128 significant bits, it simply encrypts the first
128 bits.  These 128 bits will be effectively randomized, and so the
expected number of significant bits in the output FOstream is 127
(P(bit[128]=1) = 0.5, P(bit[127]=1) = 0.5, etc.).  The resulting output
file will very likely be 16 bytes long.
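That expectation can be checked exactly (my arithmetic, following
the P(bit[128]=1)=0.5 reasoning):

```python
# P(the highest 1 of a uniform 128-bit string is at position n) = 2**(n-129)
# for n = 1..128; the all-zero string (0 significant bits) has mass 2**-128.
expected = sum(n * 2 ** (n - 129) for n in range(1, 129))
print(round(expected, 6))   # 127.0, matching the expected significant bits
```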




------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
