Cryptography-Digest Digest #211, Volume #12      Wed, 12 Jul 00 09:13:01 EDT

Contents:
  Re: Proposal of some processor instructions for cryptographical  (Runu Knips)
  Re: attack against CBC mode (Runu Knips)
  Re: Proposal of some processor instructions for cryptographical  (Konrad Schwarz)
  Re: Proposal of some processor instructions for cryptographical       applications 
(Holger Bettag)
  Re: Random Numbers (John Savard)
  Re: Random Numbers (John Savard)
  Re: attack against CBC mode ("Falissard")
  Re: OTP's again ("sbh78")
  Re: Questions!!!! (Mark Wooding)
  Re: Posting downside up (Re: Steganographic encryption system) (David Damerell)
  Re: attack against CBC mode (Mack)
  Re: Public/Private Key crypting (Mark Wooding)
  Limits of brute force. (Mack)
  Re: New Idea - Cipher on a Disk ("Trevor L. Jackson, III")
  Re: Proposal of some processor instructions for cryptographical  applications 
("Stefan Monnier " <[EMAIL PROTECTED]>)

----------------------------------------------------------------------------

Date: Wed, 12 Jul 2000 11:06:32 +0200
From: Runu Knips <[EMAIL PROTECTED]>
Crossposted-To: comp.arch
Subject: Re: Proposal of some processor instructions for cryptographical 

Dennis Yelle wrote:
> In the 8 bit case, my brute force search indicates
> that only 38272 of the permutations are reachable,
> but 8! is 40320.

So I was wrong again. *sigh*

> In particular, starting with abcdefgh,
> this was extracted from the sorted list
> of possible results:
> 
> aedhbcfg
> aedhbcgf
> aedhbfcg
> aedhbfgc
> aedhcbfg
> aedhcbgf
> aedhcgbf
> aedhcgfb
> 
> Notice that the entries starting with
> aedhbg

Well that can be tested by hand:

abcdefgh -> aedhbgcf
01234567 -> 04371625  (numbers are IMHO more readable)

The operations for 8 bit are:

BRT1  reg1, reg1, mask1
BRTL2 reg1, reg1, mask2
BRTL3 reg1, reg1, mask3
BRTL2 reg1, reg1, mask4
BRT1  reg1, reg1, mask5

Now we color the bits right and left:

04371625
LLLLRRRR (first grouping)
LLRRLLRR (grouping of halves)
LRLRLRLR (grouping of bit pairs)

01234567
LRRLLRRL
LLRRLRLR
LLLLRRRR

Now try to convert 01234567 to 04371625:

01234567
1. Mask1: 0,0,0,0
   01234567
   LRRLLRRL
   1.1. Mask2: 1,3
      12307456
      RRLLLLRR -> Mask3: 2
      3074 5612
      RLRL RLLR -> failed
      (can't reorder first half to 0437)
   1.2. Mask2: 3,1
      30125674
      LLRRRRLL -> Mask3: 6
      7430 1256
      RLRL LRRL -> failed
      (can't reorder first half to 0437)
2. Mask1: 0,0,1,1
   01235476
   LRRLRLLR
   2.1. Mask2: 1,1
      12304765
      RRLLLLRR -> Mask3: 2
      3047 6512
      RLLR LRLR -> failed
      (can't reorder second half to 1625)
   2.2. Mask2: 3,3
      30126547
      LLRRRRLL -> Mask3: 6
      4730 1265
      LRRL LRLR -> failed
      (can't reorder second half to 1625)
3. Mask1: 1,1,0,0
   10324567
   RLLRLRRL
   3.1. Mask2: 1,1
      03215674
      LLRRRRLL -> Mask3: 6
      7403 2156
      RLLR RLRL -> failed
      (can't reorder second half to 1625)
   3.2. Mask2: 3,3
      21037456
      RRLLLLRR -> Mask3: 2
      0374 5621
      LRRL RLRL -> failed
      (can't reorder second half to 1625)
4. Mask: 1,1,1,1   
   10325476
   RLLRRLLR
   4.1. Mask2: 1,3
      03216547
      LLRRRRLL -> Mask3: 6
      4703 2165
      LRLR RLLR -> failed
      (can't reorder first half to 0437)
   4.2. Mask2: 3,1
      21034765
      RRLLLLRR -> Mask3: 2
      0347 6521
      LRLR LRRL -> failed
      (can't reorder first half to 0437)

-> This doesn't work

> are missing.

Correct.

> The weakness seems to be that there is only one operation
> that moves bits from the left half to the right half, or
> vice versa.
> 
> Of course, it is possible that there is a bug in my program.

Well I don't think so. You're simply right. Damned :-(
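
For what it's worth, here is how I would brute force it myself. This is
only a sketch, and it assumes my reading of the instructions from the
example above: BRT1 rotates each 2-bit group, BRTL2 each 4-bit group, and
BRTL3 the whole 8 bits, each by the per-group amount given in the mask.
Under that model it enumerates every mask combination of the 5-instruction
sequence and counts the distinct permutations reached; if the model matches
Dennis' program, the count should come out as 38272 rather than 8! = 40320.

from itertools import product

def rotate_groups(perm, group, amounts):
    """Rotate each group of `group` positions left by its own amount."""
    out = list(perm)
    for g, amt in enumerate(amounts):
        base = g * group
        for i in range(group):
            out[base + i] = perm[base + (i + amt) % group]
    return tuple(out)

identity = tuple(range(8))
pair_masks = list(product(range(2), repeat=4))   # BRT1:  four 2-bit groups
quad_masks = list(product(range(4), repeat=2))   # BRTL2: two 4-bit groups
full_masks = [(r,) for r in range(8)]            # BRTL3: one 8-bit group

reached = set()
for m1, m2, m3, m4, m5 in product(pair_masks, quad_masks, full_masks,
                                  quad_masks, pair_masks):
    p = rotate_groups(identity, 2, m1)
    p = rotate_groups(p, 4, m2)
    p = rotate_groups(p, 8, m3)
    p = rotate_groups(p, 4, m4)
    p = rotate_groups(p, 2, m5)
    reached.add(p)

print(len(reached), "reachable, out of", 40320)  # Dennis' count: 38272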

------------------------------

Date: Wed, 12 Jul 2000 11:24:13 +0200
From: Runu Knips <[EMAIL PROTECTED]>
Subject: Re: attack against CBC mode

Serge Vaudenay wrote:
> The CBC encryption mode hides several weaknesses, although this is not
> so well known. Our laboratory has been developping a ciphertext only
> attack which can apply to any encryption (DES, 3DES, RC5, IDEA) in CBC
> mode as long as the block size is limited to 64 bits.

But larger blocks have the same problem, don't they ?

> [...]
> 
> Our attack has been presented on 2000 July 5th at the annual research
> day of the Communication System Department of EPFL.
> 
> More information on http://lasecwww.epfl.ch/birthday.shtml

Hmm, I think I'll have to use the combined accumulated plaintext/CBC mode
in the future.

------------------------------

From: Konrad Schwarz <[EMAIL PROTECTED]>
Crossposted-To: comp.arch
Subject: Re: Proposal of some processor instructions for cryptographical 
Date: Wed, 12 Jul 2000 11:38:39 +0200



Andrew Molitor wrote:
> 
> >The CPU already sees the cache as essentially a 256 bit wide
> >structure and misaligned loads are not expensive because there isn't
> >really a "byte" concept---the appropriate bits are simply pulled in.
> 
>         Doesn't this imply 8x as many wires someplace, for bit enables
> as opposed to byte enables, inside the chip?
> 
>         Luckily, as we all know, wires are free inside a highly
> integrated part ;) This also implies that for ordinary byte/word/long
> operations (almost all the time) somebody has to drive 8x as many
> microamps down some wires, the problem of getting all the stupid
> signal to arrive at the same time is compounded 8-fold.
> 

I would also expect the byte- (or, in this case, bit-) steering
logic to become more complicated, requiring general-purpose
shift logic across the width of the entire L1-cache/CPU data path.

A further problem is the instruction set architecture, because you
need 3 more addressing bits.[*]

Finally, I just don't see the loading, manipulating, and storing
of individual bits as being time-competitive with the parallel
manipulation of 8, 16, 32, or 64 bits.  It may be an intellectual
challenge to come up with parallel versions of bit-manipulating
algorithms, but then, that's the fun of programming.

And synthesizing single-bit operations out of more general integer
instructions is trivial.

[*] There is also the danger of ``generalizing'' single-bit
data to arbitrarily sized bit-fields.  (Various references
to Burroughs machines (and how slow they were) should go here.)

------------------------------

From: Holger Bettag <[EMAIL PROTECTED]>
Crossposted-To: comp.arch
Subject: Re: Proposal of some processor instructions for cryptographical       
applications
Date: 12 Jul 2000 11:38:48 +0200

Mok-Kong Shen <[EMAIL PROTECTED]> writes:

> 
> Holger Bettag wrote:
> 
[...]

> > As for how to do an arbitrary 128x128 permutation, you start out by viewing
> > the 128 bits as a vector of 16 bytes. Now, for each of the eight bits in
> > all bytes, you do the following:

[...]

> Thanks. I understand now how it works. One needs however a bit
> of computation to determine from the desired permuation the various
> moves and rotates, as far as I can see.
>
Yes, although the actual computations are not really complicated. Basically,
you have to separate the more significant part of the source bit indexes
from the less significant part, which gives you the byte-permutation
information and the bit-rotation information, respectively. But it adds to
the latency if you want to switch between different permutations often.
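
Just to illustrate the separation itself (a trivial sketch; the byte-permute
and rotate control words then still have to be derived per the scheme
described earlier in the thread, which I have elided above):

def split_bit_permutation(dest_to_src):
    """For a 128-bit permutation given as 'destination bit i takes source
    bit dest_to_src[i]', separate each source index into its byte part
    (more significant bits) and its bit-within-byte part (less
    significant bits)."""
    src_byte = [s >> 3 for s in dest_to_src]   # byte-permutation information
    src_bit  = [s & 7 for s in dest_to_src]    # bit-rotation information
    return src_byte, src_bit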

  Holger


------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Random Numbers
Date: Wed, 12 Jul 2000 10:46:50 GMT

On Tue, 11 Jul 2000 23:28:19 +0200, Mok-Kong Shen
<[EMAIL PROTECTED]> wrote, in part:

>John Savard wrote:

>> <[EMAIL PROTECTED]> wrote, in part:

>> >I don't know whether it is (or easily) possible to prove rigorously that
>> >your procedure is exact. I would simply throw away the out of range
>> >values.

>> Oh, it is quite possible.

>> Let us say that I want numbers that are from 0 to 999, and I have a
>> source of numbers from 0 to 2056.

>> If every number from 0 to 2056 is equally likely, then it does
>> immediately follow:

>> taking the last three digits gives you a number from 0 to 999 that is
>> unbiased, because 0 and 1000 together are just as likely as 1 and
>> 1001;

>I am not quite sure of the above. The numbers from 0 to 1999
>contribute a count of 2 to each of the numbers from 0 to 999.
>The rest 2000 to 2056 evidently don't add equal counts to
>each of the numbers from 0 to 999.

They add no counts at all at that step: they _are_ discarded, except
for being used as input to another similar step. What they do is add
equal counts to each of the numbers from 0 to 56.

Then, n of those numbers add an equal count to every number from 0 to
57^n-1, and once that range goes past 999 the process is repeated.

John Savard (teneerf <-)
http://home.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Random Numbers
Date: Wed, 12 Jul 2000 10:48:52 GMT

On Tue, 11 Jul 2000 15:28:42 -0700, "Paul Pires" <[EMAIL PROTECTED]>
wrote, in part:

>Taking the last three digits is like doing mod 1000 and it seems that if the
>mod and the number being operated on don't have common prime factors, then
>it is biased. Consider that 0 to 2000 have 2 "hits" on all numbers from 0 to
>999 but the numbers 0 to 56 have an extra hit from the 56 remainder. So for
>every 2056 numbers processed you could expect to see the values of 0 to 56
>three times but could only expect to see 57 to 999 two times.

>Did I misunderstand your post?

Yes.

You only take the last three digits if the number being operated on is
in the range 0-1999.

If it is in the range 2000-2056, it is not used directly to produce a
number from 0-999; it is passed on to another stage, which converts
unbiased numbers from 0 to 57^n-1 into that range using the same
method.
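
In case a sketch makes it clearer, this is how the whole procedure reads,
with the 0-2056 source and 0-999 target used here (plain Python; the
function name is mine):

import random

def unbiased_stream(source=lambda: random.randrange(2057)):
    """Yield unbiased numbers 0..999 from uniform draws 0..2056.

    Draws below 2000 directly give their last three digits.  The leftover
    draws 2000..2056 are uniform on 0..56 and get combined base 57; once
    their range reaches 1000, the same split is applied to them.  (As with
    the digits, the quotient part is simply discarded here.)
    """
    carry, carry_range = 0, 1       # carry is uniform on 0..carry_range-1
    while True:
        while carry_range >= 1000:  # enough leftovers: split them too
            usable = carry_range - carry_range % 1000
            if carry < usable:
                yield carry % 1000
                carry, carry_range = 0, 1
            else:
                carry, carry_range = carry - usable, carry_range % 1000
        x = source()
        if x < 2000:
            yield x % 1000                      # last three digits
        else:
            carry = carry * 57 + (x - 2000)     # fold 0..56 into the carry
            carry_range *= 57

gen = unbiased_stream()
print([next(gen) for _ in range(10)])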

John Savard (teneerf <-)
http://home.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: "Falissard" <[EMAIL PROTECTED]>
Subject: Re: attack against CBC mode
Date: Wed, 12 Jul 2000 12:50:43 +0200

I guess I will now reconsider ECB... (just kidding).

In some cases, obtaining a few GB of ciphertext
is not unrealistic (e.g. from encrypted
data on tapes...).
Too bad this yields only 2 blocks of cleartext...
I would be curious to know whether this kind of attack
also holds against CFB or other modes.

Anyway, it's a fine Franco-Swiss achievement.

Serge Vaudenay <[EMAIL PROTECTED]> a écrit :
> Research announcement:
>
> The CBC encryption mode hides several weaknesses, although this is not
> so well known.



------------------------------

From: "sbh78" <[EMAIL PROTECTED]>
Subject: Re: OTP's again
Date: Wed, 12 Jul 2000 06:28:51 -0500

Thank you both for answering my question.  I think I understand now.

Stephen


"Mack" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> >Ok, I have seen some commercial OTP products but they all talk about
> >creating a large key file and transferring it to someone else so that they
> >will be able to decrypt the messages sent to them.  Here is my question
> >can't the problem of key distribution for one time pads be solved by Diffie,
> >Helman and Merkle's scheme.  Think about the alphabet as a mod 26 system.
> >Define a=1, b=2, c=3...z=26.  Now, if we take 'd' and encrypt it with 'y'
> >then our encrypted letter should be 'c' if we wrap around.  Basically a
> >ceasar shift of -1 right??  (I am relatively new to this so forgive me.)
> >Anyway, lets say Alice encrypts a message with a OTP and sends her message
> >to Bob who encrypts the message with an OTP and then sends that message back
> >to Alice who decrypts with her OTP and sends it back to Bob who now decrypts
> >it with his OTP to get the plain text.  No keys were exchanged and the keys
> >that were used were random and as long as the message.  It could be done
> >with all the ASCII characters so that any file could be encrypted.  Would
> >this work?  Is it not implemented because it is impractical to do?  Any
> >thoughts?
> >
> >Just Curious,
> >Stephen
> >
> >
>
> Lets look at this symbolically for one bit.
>
> alice encrypts a
> alice sends a^b to bob
> bob sends a^b^c to alice
> alice sends a^c to bob
> bob decrypts a
>
> This does not work for an insecure channel
> To see why simply assume the opponent
> sees all 3 messages.
> and constructs
>
> (a^b)^(a^b^c)^(a^c)=a
>
> interception of the three messages breaks the
> cipher.
>
>
> Mack
> Remove njunk123 from name to reply by e-mail
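
For what it's worth, the symbolic argument above is easy to check
directly.  A tiny sketch with one random byte per value (the variable
names just follow the post above):

import random

a = random.randrange(256)    # Alice's plaintext byte
b = random.randrange(256)    # Alice's pad
c = random.randrange(256)    # Bob's pad

m1 = a ^ b                   # Alice -> Bob
m2 = m1 ^ c                  # Bob -> Alice
m3 = m2 ^ b                  # Alice -> Bob (her pad removed; m3 == a ^ c)
assert m3 ^ c == a           # Bob recovers the plaintext

# An eavesdropper who saw all three messages recovers it as well:
assert m1 ^ m2 ^ m3 == a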



------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Questions!!!!
Date: 12 Jul 2000 11:33:40 GMT

George Gordon <[EMAIL PROTECTED]> wrote:

> 1) I read in the literature that impossible differentials can be found
> for *any* Feistel cipher for up to 5 rounds. (Schneier, Biham) My
> question is how is that possible for a design like Karn, MDC, etc
> where one side is a full hash? ie: 5-round Karn. It doesn't make sense
> to me.

I didn't think that MDC was a Feistel structure: my understanding was
that it provided a one-way function which you then used in CFB or OFB
mode.

I couldn't find any references to this 5-round attack from a few
carefully chosen searches in Google.  Can you point me at any?  I'm
very interested in this result.

I've just done an experiment with small Feistel networks (on 8-bit
blocks).  I can't see any impossible differentials for more than three
rounds which aren't a requirement imposed by the cipher being a
permutation.  The 3-round impossible differentials I found had the form
(0, x) -/-> (0, y), where x != y, which follows from the structure.
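
For concreteness, here is a sketch of the kind of experiment I mean (not
the program I actually used): random 4-bit round functions, with the
caveat that a differential which is never observed over many random
instances is only a *candidate* impossible differential.

import random

HALF_BITS = 4                            # 4-bit halves, 8-bit blocks
HALF_MASK = (1 << HALF_BITS) - 1
BLOCK_SPACE = 1 << (2 * HALF_BITS)

def encrypt(block, round_funcs):
    """Plain Feistel: (L, R) -> (R, L xor F(R)) for each round."""
    left, right = block >> HALF_BITS, block & HALF_MASK
    for f in round_funcs:
        left, right = right, left ^ f[right]
    return (left << HALF_BITS) | right

def observed(rounds, trials=200):
    """seen[din] is the set of output differences observed for input
    difference din over `trials` random instances of the network."""
    seen = [set() for _ in range(BLOCK_SPACE)]
    for _ in range(trials):
        funcs = [[random.randrange(1 << HALF_BITS)
                  for _ in range(1 << HALF_BITS)]
                 for _ in range(rounds)]
        ct = [encrypt(x, funcs) for x in range(BLOCK_SPACE)]
        for din in range(1, BLOCK_SPACE):
            for x in range(BLOCK_SPACE):
                seen[din].add(ct[x] ^ ct[x ^ din])
    return seen

for rounds in (3, 4, 5):
    seen = observed(rounds)
    missing = sum(1 for din in range(1, BLOCK_SPACE)
                    for dout in range(1, BLOCK_SPACE)
                    if dout not in seen[din])
    print(rounds, "rounds:", missing, "never-observed differentials")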

(I did turn up a sci.crypt archive where someone was arguing against
CAST-256 on the grounds that it had too many rounds: I suspect that
attacks on Feistel networks with small round counts weigh against this
argument.)

> 2) For RC4, is the full entropy of the key apparent in the output
> stream for short messages? ie: a simple message like "Hello" ? How
> long must the output stream be run for full key entropy to be
> apparent?? I'm assuming a 128-bit key.

The usual recommendation is to spin the RC4 state around after keying,
by throwing away the first 1024 bytes of output.  After that, I don't
believe that RC4 leaks any key information in a way which anyone is
admitting to knowing about.
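
For concreteness, the construction I mean is just standard RC4 with the
first chunk of keystream thrown away (a sketch; the 1024 figure is merely
the one quoted above):

def rc4_keystream(key, drop=1024):
    """Generate RC4 keystream bytes, discarding the first `drop` of them."""
    # Key schedule
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) & 0xFF
        s[i], s[j] = s[j], s[i]
    # Output generation
    i = j = produced = 0
    while True:
        i = (i + 1) & 0xFF
        j = (j + s[i]) & 0xFF
        s[i], s[j] = s[j], s[i]
        produced += 1
        if produced > drop:
            yield s[(s[i] + s[j]) & 0xFF]

# Example: 128-bit key, first 16 keystream bytes after the drop.
ks = rc4_keystream(bytes(range(16)))
print(bytes(next(ks) for _ in range(16)).hex())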

> 3) In Blowfish the F-function: F(x) = (((s1[a] + s2[b]) xor s3[c]) +
> s4[d]) is equal to F(x) = (((s1[b] + s2[a]) xor s3[c]) + s4[d]) *if*
> s-box 1 and 2 are equal values. Can anything be made of that?

I doubt it.  The probability that S0 and S1 come out equal is
err... 2^{-8192}.  There *might* be one key which does this, in the
entire keyspace, but there are better ways of testing for that key, if
we ever find out what it is.

A Blowfish variant which used a subtract rather than an add for
combining S0 and S1 would fix this good and proper, and I don't believe
this would degrade security.

> 4) Do the key-dependant S-boxes and the key-dependant subkeys in
> Blowfish interact in any way to weaken the cipher or reduce it's
> bijectivity?

There's no risk to Blowfish's bijectivity: its (almost) Feistel
structure guarantees this.  And I can't see any particular problem with
the cipher's strength caused by the similar way in which the round keys
and S-boxes are generated.

Blowfish's key schedule is an interesting (if heavyweight) way of
ensuring that each of the round keys (and S-box entries) depends
strongly on all of the input user key.

Note the limitation of the user key to 448 bits, rather than the
perhaps more intuitively obvious 512 bits: this is because, if you have
a key that long, P[2] and P[3] will not depend on P[0] and P[1].

I've not done an analysis of Blowfish with the default S-boxes.  The
mixing of the S-box outputs is slightly nonlinear; I suspect that the
main worry will be the least significant bits of the output, which are a
linear combination of the least significant bits of the S-box outputs.
I suspect that the default S-boxes won't be bad enough to allow an
analysis of this type to derive information about the final S-boxes.

> Thanks for any answers,

I hope I've been helpful.

Can you ask easy questions next time, please? ;-)

-- [mdw]

------------------------------

From: David Damerell <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,uk.comp.os.linux
Subject: Re: Posting downside up (Re: Steganographic encryption system)
Date: 12 Jul 2000 12:47:50 +0100 (BST)
Reply-To: [EMAIL PROTECTED]

Tim Haynes <[EMAIL PROTECTED]> wrote:
>Rex Stewart <[EMAIL PROTECTED]> writes:
>>The system I use to post here puts you name at the top and mine at the
>>bottom - so I either move my sig to the top (usually) or cut and past the
>>"In artical..." line to the correct place down the message (as in this
>>message).
>You have a *very* strange 'system' if it mangles your articles by
>default.

It doesn't, except the usual line length problems; inspection of his
article reveals that what he means is that he carefully moves the
attribution line from the right place at the top of the article to midway
through.

This is probably a product of braindead (if right-way-up) quoting; people
produce stuff like

A wrote
>B wrote
>>C wrote
>>>[C's article unsnipped]
>>[B's article unsnipped]
>[A's article unsnipped]

New text

rather than interspersing quoting with the reply and snipping where
appropriate. If people do that, there's exactly one block of text from
each posting, and this moving of the attribution lines appears
superficially sensible (especially as, in the absence of snipping, the top
of the article can be quite a long way from the first text by a later
poster).

If people interleaved and cut properly, it would be more obvious that
attribution lines all go at the top.
-- 
David/Kirsty Damerell.                       [EMAIL PROTECTED]
http://www.chiark.greenend.org.uk/~damerell/ w.sp.lic.#pi<largestprime>.2106
|___| Consenting Mercrediphile.                  Bev White's answer to |___|
| | | Next attempt to break the world in progress     Andrew S. Damick | | |

------------------------------

From: [EMAIL PROTECTED] (Mack)
Subject: Re: attack against CBC mode
Date: 12 Jul 2000 12:00:17 GMT

>Serge Vaudenay wrote:
>> The CBC encryption mode hides several weaknesses, although this is not
>> so well known. Our laboratory has been developping a ciphertext only
>> attack which can apply to any encryption (DES, 3DES, RC5, IDEA) in CBC
>> mode as long as the block size is limited to 64 bits.
>
>But larger blocks have the same problem, don't they ?
>

Yes, but how many people keep 2^64 blocks of data
lying around under a single password?

32GB (about 2^32 blocks of a 64-bit cipher) is the size of a
good-sized hard drive.  So, theoretically at least, it might be
possible to retrieve 2 blocks of plaintext out of that.

It might even be possible with less data, since it is a
probabilistic attack.  But using a larger block size increases
the difficulty by a factor of 2^32.  A respectable
number.
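
The arithmetic, roughly: collisions among CBC ciphertext blocks become
likely after about 2^(n/2) blocks of an n-bit block cipher (the birthday
bound), so:

def birthday_data_bytes(block_bits):
    """Approximate ciphertext needed before CBC block collisions are likely."""
    blocks = 2 ** (block_bits // 2)      # about 2^(n/2) blocks
    return blocks * (block_bits // 8)    # in bytes

print(birthday_data_bytes(64) // 2**30, "GiB")    # 64-bit blocks:  32 GiB
print(birthday_data_bytes(128) // 2**60, "EiB")   # 128-bit blocks: 256 EiB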


Mack
Remove njunk123 from name to reply by e-mail

------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Public/Private Key crypting
Date: 12 Jul 2000 12:09:58 GMT

Joseph Ashwood <[EMAIL PROTECTED]> wrote:
> Actually all public key algorithms should do this quite nicely, with a few
> caveats. If you want to avoid having the outer encrypt forced to span 2
> blocks, you must create the outer key so that it will encrypt a block of the
> size of the inner result. For ElGamal this would be doubling the size, for
> RSA it simply means choosing slightly larger primes. From there you can
> simply treat the first set of encrypted data as binary data that will fit in
> a single block.

Did you notice that the OP wanted the two encryptions (or decryptions)
to commute?

[snip]

> > -data==decrypt(decrypt(encrypt(encrypt(data,public_key1),public_key2),
> >                private_key1), private_key2); !!

-- [mdw]

------------------------------

From: [EMAIL PROTECTED] (Mack)
Subject: Limits of brute force.
Date: 12 Jul 2000 12:15:39 GMT

This is an excerpt from something I did a while back.
It gives estimates of the limits of brute force on a
symmetric block cipher.  Basically it states that
256 bits is secure for the foreseeable future, that
80 bits is secure for now, and that 117 bits is
secure for at most about 100 years.  Being conservative
about these things, I think that 20 years is a safer bet
for 128 bits.

Here it is:

At the very limits of physics we can calculate that, at minimum, an operation
will take the length of time for light to cross a hydrogen atom. This value is
approximately 2*10**-19 of a second, or about 2**62 cycles/second. Therefore we
require about 80 bits to withstand attack from such a blindingly fast computer
for a year. Since this computer is the size of an atom, we should be able to
jam a lot of them into a small space and perform massively parallel computing.
Assuming that we can make a mole of these computers, or about 6*10**23, we need
to add another 79 bits. Therefore at the limits of physics we need at least 169
bits for one year of security. Since we are dealing on an entirely theoretical
scale, let's assume that this computer is as big as the moon and that each
processor weighs the same as a hydrogen atom. That adds another 86 bits for a
total of 255 bits. Therefore to break 256 bits by brute force would take a
computer which could perform 2**62 cycles/second and is the size of the moon
two years. Since we cannot build such a computer, and massively parallel
computers (or attacks using distributed computing) generally have fewer than
1,000,000 processors, it is safe to say that 80 bits is out of reach of modern
technology and 117 bits will be safe for a long time in the future.

        For a more modest estimate, let's assume that a computer operating at
100 GHz is available now and that it is possible to connect 2**20 of these
computers for an attack. It would take six months of dedicated time to reach
80 bits. We would need 8 more bits to reach one hundred years of security.
Assuming computing power allows us to double this every 4 years, we would need
25 more bits. It is fairly safe to say that 117 bits is secure at the present
time. I would like to say that 117 bits will be secure a hundred years from
now, but I suspect technology would prove me wrong.
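
For anyone who wants to check the constituent figures, a quick
recomputation with rounded physical constants (so the individual numbers
land near, but not exactly on, the ones above):

from math import log2

# Rough constants; "atom-sized" is taken as the Bohr radius, which
# reproduces the 2*10**-19 s per operation figure used above.
light_speed = 3.0e8         # m/s
bohr_radius = 5.3e-11       # m
seconds_per_op = bohr_radius / light_speed   # about 1.8e-19 s
seconds_per_year = 3.15e7
avogadro = 6.0e23
moon_mass = 7.3e22          # kg
hydrogen_mass = 1.7e-27     # kg

print("bits per processor-second:", log2(1 / seconds_per_op))            # ~62
print("bits per processor-year:  ", log2(seconds_per_year / seconds_per_op))
print("a mole of processors adds:", log2(avogadro))                      # ~79
print("a moon of processors adds:",
      log2(moon_mass / hydrogen_mass / avogadro))                        # ~86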

Mack
Remove njunk123 from name to reply by e-mail

------------------------------

Date: Wed, 12 Jul 2000 09:05:50 -0400
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: New Idea - Cipher on a Disk

Greg wrote:

> I was looking over a thread on the question of adding hardware
> support to a CPU.  The problem there is that the CPU must be
> flexible enough to support various types of ciphers if you go
> that route.
>
> What if a disk drive came with a cipher on the disk, and supported
> a new ATA instruction set that took advantage of these ciphers?
>
> The cipher can be very specific for that disk drive.  It can be
> a FEATURE of the disk drive, much like transfer speeds and buffer
> sizes are marketing features.  The cipher can be strong.  The disk
> drive manufacturers would compete for cipher strength and speed,
> much like they do now for transfer speed.  All of the software
> is eliminated and this feature is completely transparent.  There is
> no CPU overhead.  Multiple disks can work independently of each
> other.
>
> Sounds like a great idea for Fujitsu and IBM to incorporate into
> their drives!  But, alas, I proposed it here in the public domain...

Key management is the key ;-)

How do you propose to manage the keys used by the drive's cipher(s)?


------------------------------

From: "Stefan Monnier <[EMAIL PROTECTED]>" <[EMAIL PROTECTED]>
Crossposted-To: comp.arch
Subject: Re: Proposal of some processor instructions for cryptographical  applications
Date: 12 Jul 2000 08:57:16 -0500

>>>>> "Konrad" == Konrad Schwarz <[EMAIL PROTECTED]> writes:
> Finally, I just don't see the loading, manipulating, and storing
> of individual bits as being time-competitive with the parallel
> manipulation of 8, 16, 32, or 64 bits.  It may be an intellectual
> challenge to come up with parallel versions of bit-manipulating
> algorithms, but then, that's the fun of programming.

The way I thought about bit-addressing (which struck me as the most
natural choice for a processor like the Alpha) was just that: bit-addressing.

No need for funky alignment or arbitrarily bit-sized data accesses:
just 32-bit aligned loads which ignore (or check the zeroness of) the
last 5 bits instead of the last 2.  These are separate concerns.

The advantage is that such addresses can point to single bits (so your
language can support references to elements of bit-vectors without
jumping through hoops); it gets rid of the arbitrary "byte" granularity,
gives 3 more tag bits for dynamically typed languages, and doesn't
incur *any* cost in the implementation.
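
A toy model of it in software, if that helps (the numbering of bits
within a byte is an arbitrary choice here):

def load_aligned_word(mem, bit_addr):
    """32-bit aligned load: the low 5 bits of the bit address must be zero."""
    assert bit_addr & 0x1F == 0, "unaligned bit address"
    byte = bit_addr >> 3
    return int.from_bytes(mem[byte:byte + 4], "little")

def load_bit(mem, bit_addr):
    """The same address space can also point at a single bit."""
    return (mem[bit_addr >> 3] >> (bit_addr & 7)) & 1

mem = bytearray(b"\x0f\x00\x00\x00\x2a\x00\x00\x00")
print(load_aligned_word(mem, 0))    # 15
print(load_aligned_word(mem, 32))   # 42
print(load_bit(mem, 3))             # 1 (bit 3 of the first byte)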

Of course, it's never been seriously considered because it would
break a whole bunch of C code (since C specifies that sizeof(char)
is 1 and people happily cast between integers and pointers all the time).


        Stefan

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
