Cryptography-Digest Digest #165, Volume #13      Wed, 15 Nov 00 23:13:00 EST

Contents:
  Re: hardware RNG's (David Schwartz)
  Re: Q: fast block ciphers (Tom St Denis)
  Re: Big-block cipher, perhaps a new cipher family? (Tom St Denis)
  Re: vote buying... (Paul Rubin)
  Re: Randomness from key presses and other user interaction (Steve Roberts)
  Re: Big-block cipher, perhaps a new cipher family? ([EMAIL PROTECTED])
  Re: hardware RNG's (Guy Macon)
  Re: Q: timing attacks on cryptographic algorithms (Dido Sevilla)
  Re: Q: fast block ciphers ("Brian Wong")
  Re: Modern Cryptoanalysis? (Dido Sevilla)
  Re: ECC help please (Dido Sevilla)
  Re: vote buying... (David Schwartz)
  Re: timing attacks on cryptographic algorithms ("Matt Timmermans")

----------------------------------------------------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: hardware RNG's
Date: Wed, 15 Nov 2000 16:56:51 -0800


Terry Ritter wrote:

> One approach would seem to be to collect far more "entropy" input than
> one produces as output.  And that is probably OK, but since no measure
> can confirm the result, a lack of physical basis for the result is a
> far more ambiguous situation than we have with physical randomness.

        Exactly. You can't measure the entropy in the output. You have to
demonstrate that the entropy exists in the input and is preserved and
properly carried through to the output. Even in physical processes,
entropy (in the cryptographic sense) is not something you can measure.
It's something you have to prove.

        From a practical standpoint, it may suffice in some applications to
attempt to predict the output and fail. Provided we can't foresee
significantly better attempts, and those attempts fail by a
sufficiently wide margin, we might be satisfied that our output is "good
enough". However, those judgments have to be made carefully, and we
should be especially wary of claims that suspiciously large amounts of
entropy are being produced.

        Any physical process that is sensitive to temperature is truly random
to some level. At a fine enough level, the only way to judge the effect
of temperature on a process is to look at the output of that process.
Radioactive decay similarly qualifies -- even if it were shown to be
deterministic, it is still beyond the realm of imagination that someone
could remotely predict how a radioactive sample that they can't even
examine will decay.

        DS

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Q: fast block ciphers
Date: Thu, 16 Nov 2000 01:03:12 GMT

In article <[EMAIL PROTECTED]>,
  Lauri Pesonen <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'm trying to figure out which block cipher would fit my needs
> best. I need a very fast block cipher that could be used to encrypt
> multiple (as many as possible) simultaneous network streams. The
> system would also use Matt Blaze's RKEP Protocol [1] (which requires
> a block cipher instead of a stream cipher). The main point is that
> multiple streams should be encrypted with as small a CPU load as
> possible. Decryption is not all that performance sensitive (of
> course it's not a minus if decryption is fast as well). The cipher
> should be strong in the present day as well as for, let's say, the
> next five years.
>
> I seem to remember hearing from someone that Blowfish is relatively
> fast. Any faster block ciphers out there?
>
> Thanks for all your help.

I once designed a block cipher that uses decorrelation modules for
security.  The idea was to precompute multiplication in GF(2)^32 as a
series of four 8x32 sboxes.  With six rounds I achieved a rate of about
13 cycles/byte on my AMD K6-II machine which is the fastest speed for a
block cipher I ever heard of.

On my website some code called TC6a.c is available.  The irreducible
polynomial was entered wrong, so you have to fix the source to use it.
Also, the algorithm has had no formal analysis... user beware!
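
For the curious, here is the shape of the table trick in toy form.
Everything below is my illustration, including the reduction
polynomial (x^32+x^7+x^3+x^2+1, low bits 0x8D) -- it is not the
TC6a.c code.  The point is that multiplying by a fixed field constant
is linear over GF(2), so it splits by bytes into table lookups:

```c
#include <stdint.h>

/* Multiply a field element by x, reducing by my illustrative
   polynomial x^32 + x^7 + x^3 + x^2 + 1 (low bits 0x8D). */
static uint32_t gf_xtime(uint32_t a)
{
    uint32_t hi = a & 0x80000000u;
    a <<= 1;
    return hi ? a ^ 0x8Du : a;
}

/* Slow reference multiply: shift-and-XOR over the bits of b. */
static uint32_t gf_mul(uint32_t a, uint32_t b)
{
    uint32_t r = 0;
    while (b) {
        if (b & 1)
            r ^= a;
        a = gf_xtime(a);
        b >>= 1;
    }
    return r;
}

static uint32_t T[4][256];  /* four 8x32 tables for one fixed constant */

/* Precompute c times every possible byte in every byte position. */
static void build_tables(uint32_t c)
{
    for (int i = 0; i < 4; i++)
        for (int b = 0; b < 256; b++)
            T[i][b] = gf_mul(c, (uint32_t)b << (8 * i));
}

/* c*x = c*(byte0 + byte1<<8 + ...) by linearity over GF(2), so the
   whole multiply becomes four loads and three XORs. */
static uint32_t gf_mul_fast(uint32_t x)
{
    return T[0][x & 0xFF] ^ T[1][(x >> 8) & 0xFF] ^
           T[2][(x >> 16) & 0xFF] ^ T[3][x >> 24];
}
```

Building the tables costs 1024 slow multiplies once per constant;
after that every field multiply is just loads and XORs, which is
where the speed comes from.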

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Big-block cipher, perhaps a new cipher family?
Date: Thu, 16 Nov 2000 01:00:26 GMT

In article <iyFQ5.487$[EMAIL PROTECTED]>,
  "Manuel Pancorbo" <[EMAIL PROTECTED]> wrote:
> Traditionally there are stream ciphers and block ciphers. In the
> first, a stream of data is encrypted unit by unit (the unit can be
> the bit, the byte or even a full 32-bit word) by means of a keyed
> state that changes after the encryption of each unit. In the second,
> data is chunked into fixed-size blocks that are individually
> encrypted under the same (scheduled) key; blocks are not too big
> (64, 128 bits).
>
> Well, I propose an intermediate method (do I really? perhaps this is
> not an original idea).
>
> Consider a fast stream cipher which reaches full diffusion at the
> n-th ciphered unit, and use it to encipher a big block (packet) of
> N units, with N >> n, under a key K. After this forward encryption
> the state is reset to the initial key K and a *backward* encryption
> is performed on the packet. So:
> 1) state <- K
> 2) p' = Encrypt p[0]...->...p[N-1]
> 3) state <- K
> 4) c = Encrypt p'[N-1]...->...p'[0]
>
> What we get is that any single plaintext bit change diffuses through
> the whole ciphertext packet, unlike traditional stream ciphers,
> where diffusion happens only from the point where the change occurs.
> Perhaps diffusion is not perfect in the last n units. This can be
> solved by a preliminary backward pass up to a safe distance n:
> 1) state <- K
> 2) p' = Encrypt p[N-1]...->...p[N-n-1] (the rest remain unchanged)
> 3) state <- K
> 4) p'' = Encrypt p'[0]...->...p'[N-1]
> 5) state <- K
> 6) c = Encrypt p''[N-1]...->...p''[0]
>
> If the stream cipher module is well designed (with good nonlinear
> properties to avoid attacks), every ciphertext unit can be
> considered a separate hash (but invertible) function of the whole
> plaintext and key.
>
> How long can the packet be? Let's say ~100 bytes to 100 KB or even
> more.
>
> The advantages over a traditional block cipher are, IMHO:
>
> * It is really fast because, unlike block ciphers, there are no
> rounds to repeat. Oh well, let's say there are 2 rounds.
> * There's no key schedule: no time-consuming key setup. So the key
> can be changed from one packet to another without ballast.
> * Packets can be as large as the system allows. In a file-encryption
> application the full file can be a single packet.
> * Because of this last property, packet chaining is less important
> than in block ciphers. If necessary, a proper chaining method can
> be easily implemented.
> * Simplicity and modularity. Several conceptually simple stream
> algorithms can be implemented easily and clearly.
>
> On the other side, there are vulnerabilities:
>
> * Because the strength of the cipher is based on the avalanche over
> a large number of units, small packets are vulnerable. A minimum
> safe packet length should be defined. Padding is needed in the case
> of small data.
> * Because of this, the method should be avoided in scenarios where
> the data to be encrypted is small *and* bandwidth matters.
> * The amount of memory required is roughly the length of the packet.
> This is also an advantage, because users can tailor their own
> security to their system's capabilities.
>
> I like the idea very much, and thus I'm implementing it, with
> cosmetic changes, in a concrete program. I use a 32-bit stream
> cipher of my own design, with 128- to 256-bit keys, in order to
> obtain maximum performance on a Pentium PC. In first attempts the
> performance attained is 63 Mbit/s in ANSI C under GNU/Linux on an
> AMD K6-II 350 MHz. Furthermore, test packets show suitable diffusion
> and randomness at first glance.
>
> Opinions? Is the concept secure enough, regardless of the structure
> of the kernel stream cipher?

I don't quite get how your cipher works.  Is it a stream or block
cipher?  Could you provide some simple pseudo-code?

Tom



------------------------------

From: Paul Rubin <[EMAIL PROTECTED]>
Subject: Re: vote buying...
Date: 15 Nov 2000 17:40:38 -0800

David Schwartz <[EMAIL PROTECTED]> writes:
> > OK.  So when you present your receipt, election official Bubba can
> > look up your code number and know who you voted for.  We don't want
> > that.
> 
>       Why? That kind of traceability is exactly what you do want. Otherwise,
> there's no way to assure anyone that the results are accurate. In any
> scheme, if election officials themselves are corrupt, some level of
> compromise will have to be possible. Officials have to have the ability
> to investigate and correct cases where there is abuse.

That traceability is bad even if the election officials are honest and
the election is fair.  Years later, Sheriff Bubba has worked his way
up to being Supreme Dictator Bubba, has the election officials
executed and gets the archived code numbers and uses the ballot data
to locate everyone who voted against him.

> > >       You are looking at a scheme specifically designed to provide X and
> > > complaining that it doesn't provide Y. Of course not, it isn't designed
> > > to. Unless you want to argue that no scheme can provide both X and Y,
> > > your argument is pointless.
> > 
> > I may have missed something but I think I agree with this.  Any scheme
> > that solves all problems must do both X and Y, where X and Y are in
> > conflict with each other.
> 
>       What goal is in conflict with what goal?

The goal of being able to check votes after the election (to detect
fraud) is in conflict with the goal of not being able to check the
votes (to protect voter secrecy).

------------------------------

From: [EMAIL PROTECTED] (Steve Roberts)
Subject: Re: Randomness from key presses and other user interaction
Date: Thu, 16 Nov 2000 01:50:42 GMT

David Schwartz <[EMAIL PROTECTED]> wrote:

>
>Terry Ritter wrote:
>
>> >The letters which users type certainly aren't random,
>> >but the intervals between keystrokes by users certainly
>> >contains a fair bit of entropy.
>> 
>> Keystroke intervals *as* *typed* may indeed contain "a fair amount of
>> entropy."  Unfortunately, the keyboard scanning process quantizes most
>> of that away.
>
>       Especially if you know exactly what the user typed and have some
>experience timing the keystrokes of that particular user. There's
>probably somewhat more entropy if you just ask the user to bang on the
>keys randomly. 

Aargh, the user will just hold a key down so that all the keystrokes
have the same spacing in time!!  Don't forget that humans will
usually find the easiest possible way of doing something.  The same
goes for human-chosen "random" typing - there will be a lot of
repeated a's out there.

I have some experience of this technique in practical systems.  You
should PROVIDE the text that has to be typed - preferably randomish
letters, not coherent words that allow fast-keyboarding tricks - and
accept only the correct characters.  Then you have to watch for
keyboard buffering and quantization.  You can then get good random
numbers from times below about 10 milliseconds.  As a Christmas bonus
you can also hide facetious or abusive messages in the character
string that has to be typed in (without much loss of randomness).
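
A sketch of the kind of harvesting I mean, with timestamps in
microseconds: keep only the sub-10-millisecond part of each interval,
since everything coarser is too guessable.  The rotate-and-XOR mixer
here is an ad-hoc placeholder of my own - a real system should hash
the samples:

```c
#include <stdint.h>
#include <stddef.h>

/* Fold the sub-10ms part of inter-keystroke intervals into a pool.
   t_us[] holds keystroke timestamps in microseconds. */
static uint32_t pool_keystrokes(const uint32_t *t_us, size_t n)
{
    uint32_t pool = 0;
    for (size_t i = 1; i < n; i++) {
        uint32_t delta = t_us[i] - t_us[i - 1];
        uint32_t noise = delta % 10000u;  /* keep only the sub-10ms part */
        pool = ((pool << 5) | (pool >> 27)) ^ noise;  /* rotate, fold in */
    }
    return pool;
}
```

Note that a key held down (autorepeat) produces near-identical
deltas, so the noise terms collapse - which is exactly why you have
to make the user type supplied text rather than bang away freely.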

Steve Roberts

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Big-block cipher, perhaps a new cipher family?
Date: Thu, 16 Nov 2000 01:52:21 GMT

In article <8uvbj7$7cl$[EMAIL PROTECTED]>,
  Tom St Denis <[EMAIL PROTECTED]> wrote:
> I don't quite get how your cipher works.  Is it a stream or block
> cipher?  Could you provide some simple pseudo-code?

I'm certainly not the authority, but the attempt seems to be to
create a keyed all-or-nothing transform with cryptographic qualities.
Unfortunately he did a rather poor job of explaining it. Put more
plainly, it is a block cipher used with a propagating chaining mode,
and to form the AONT, after reaching the end you run it in reverse
across the finite stream. There have been many proposals like this;
most call them chaining modes and treat the cipher as a separate
matter.

Of course, looking at the idea this way leaves little to the
imagination about the knowledge of the cryptographer: he states that
it makes use of a stream cipher (which somehow miraculously gives
full forward diffusion), that it is very fast (in practice it will be
quite slow, because of the need for proper diffusion), and that there
is no time-consuming key setup (say what!!!!!!). He also blocks the
ability to properly diagnose the vulnerabilities. The use of a
proprietary stream cipher for testing does not inspire confidence in
the system either, and it makes the proposal very clearly a chaining
mode.
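
To show what I mean, here is a minimal sketch of the forward-then-
backward structure as I read it.  The keyed generator is a toy
placeholder of my own, standing in for the unspecified stream kernel;
nothing here is meant to be secure:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy keyed generator -- a stand-in for the unspecified stream
   kernel.  NOT secure; it only illustrates the structure. */
static uint32_t next_word(uint32_t *s)
{
    *s = *s * 2654435761u + 0x9E3779B9u;
    return *s ^ (*s >> 16);
}

/* One CFB-style pass, forward or backward.  The state absorbs each
   ciphertext unit, so a change anywhere propagates to every unit
   processed after it. */
static void pass_enc(uint32_t *u, size_t n, uint32_t key, int back)
{
    uint32_t s = key;
    for (size_t j = 0; j < n; j++) {
        size_t i = back ? n - 1 - j : j;
        u[i] ^= next_word(&s);  /* encrypt the unit */
        s ^= u[i];              /* absorb the ciphertext */
    }
}

static void pass_dec(uint32_t *u, size_t n, uint32_t key, int back)
{
    uint32_t s = key;
    for (size_t j = 0; j < n; j++) {
        size_t i = back ? n - 1 - j : j;
        uint32_t c = u[i];      /* remember ciphertext for the state */
        u[i] ^= next_word(&s);  /* decrypt the unit */
        s ^= c;                 /* absorb the same ciphertext */
    }
}

/* The proposal: forward pass, reset state to K, backward pass, so
   every ciphertext unit depends on every plaintext unit. */
static void bigblock_encrypt(uint32_t *u, size_t n, uint32_t key)
{
    pass_enc(u, n, key, 0);
    pass_enc(u, n, key, 1);
}

static void bigblock_decrypt(uint32_t *u, size_t n, uint32_t key)
{
    pass_dec(u, n, key, 1);     /* undo the passes in reverse order */
    pass_dec(u, n, key, 0);
}
```

Written out like this, it is plainly two chaining-mode passes over
the same keystream kernel, which is the point of my criticism.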

As to the OP's question of strength: the strength lies in the
strength of the cipher being used, just as with any other chaining
mode. This is certainly not a new idea; it has been embodied in
chaining modes for a couple of decades, and I think most people would
let me claim that rotor machines are mostly about the chaining mode,
and those have been around for a century or so. Before that it would
have been a fairly obvious tactic to confuse your enemies by applying
ciphers in a particular order to successive characters, so the
practice probably goes back at least a thousand years.
                  Joe



------------------------------

From: [EMAIL PROTECTED] (Guy Macon)
Subject: Re: hardware RNG's
Date: 16 Nov 2000 02:29:26 GMT


Terry Ritter wrote:
>
>in sci.crypt [EMAIL PROTECTED] (Guy Macon) wrote:
>
>>Terry Ritter wrote:
>>>
>>>Guy Macon wrote:
>>>
>>>>No one has proven perfect unpredictability or perfect security.
>>>>I advise using relative terms like "very difficult to predict"
>>>>and "highly secure" instead of absolute terms (or worse, terms
>>>>which some folks interpret as absolute and others as relative)
>>>>and most of the disagreements you two express will evaporate.
>>>
>>>That is poor advice.  The reason for having an absolute reference is
>>>that it has a possibility of being measured.
>>>
>>
>>I am not sure that I am following your argument.  Could you elaborate?
>>It sounds like you are saying that there is a possibility of measuring
>>security or predictability well enough to prove something to be
>>absolutely secure or absolutely unpredictable. 
>
>From my previous message:
>
>"No practical process or test can find every kind of sequence
>correlation.  But if we somehow do make a process which predicts, we
>can confirm that, and measure how close it comes."
>
>We cannot measure the strength or unpredictability of a random
>sequence, but we can measure the extent to which we can predict the
>sequence under some model.  If that extent is not 50 percent, we have
>found a weakness.  And if we can improve the prediction of the model,
>we have found a greater weakness.  
>
>The advantage of measuring how close we are to a particular form of
>nonrandomness depends upon having a measure which goes from "not
>predictable under the selected prediction model," through "partially
>predictable," to "fully predictable" numerically.  
>
>"Very difficult to predict" does not have the same advantage.  
>
>>I believe that I must
>>be misunderstanding, because such a proof assumes knowledge of all
>>future methods of breaking security or of predicting the next value.
>>This. of course, goes against what you have written in many other
>>posts, so I must assume that I am missing the point.
>
>The point is that a handwave definition of randomness does not meet
>the needs of scientific cryptology.  A measurable quantity is needed,
>and that goes beyond handwaves like: "if you can't predict the exact
>value all the time the sequence is obviously unpredictable."  Partial
>predictability is a valid and true form of weakness.  
>

O.K.  I can see the difference.  Thanks for taking the time to explain
it further.
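
To restate the measurement idea concretely for myself (my own toy
example, not anything Terry specified): fix a prediction model, run
it over the output bits, and see how far the hit rate sits from 50
percent.  Here the model is the trivial "next bit repeats the last
bit":

```c
#include <stdint.h>
#include <stddef.h>

/* Score a next-bit predictor against a bit sequence: returns hits
   out of the n-1 predictions made.  A generator is weak under this
   model to the extent the hit rate departs from 50 percent. */
static size_t score_repeat_model(const uint8_t *bits, size_t n)
{
    size_t hits = 0;
    for (size_t i = 1; i < n; i++)
        if (bits[i] == bits[i - 1])  /* model: next bit = last bit */
            hits++;
    return hits;
}
```

The number that comes back is exactly the kind of graded measure
Terry describes: it runs numerically from "not predictable under this
model" through "partially predictable" to "fully predictable".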


------------------------------

From: Dido Sevilla <[EMAIL PROTECTED]>
Subject: Re: Q: timing attacks on cryptographic algorithms
Date: Thu, 16 Nov 2000 10:48:51 +0800

kihdip wrote:
> 
> And I interpret your answer as 'The only general thing we know is
> that there is a difference'

Well, the whole point of implementing a good cryptosystem that resists
timing attacks is that there should be no difference between ones and
zeroes, i.e. the algorithm implementation takes constant time regardless
of the values of the cryptovariables.
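
The classic small example of that discipline is buffer comparison.
This generic sketch (mine, not tied to any particular library)
behaves like the widely used constant-time compare: unlike memcmp,
which bails out at the first mismatch, it always touches every byte:

```c
#include <stdint.h>
#include <stddef.h>

/* Constant-time equality check for two n-byte buffers.  The loop
   always runs to completion and never branches on the data, so its
   running time reveals nothing about where (or whether) the buffers
   differ. */
static int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);  /* accumulate differences */
    return diff == 0;
}
```

The same principle applies to the cryptovariables themselves: the
code path and the operation count must not depend on key or data
bits.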

--
Rafael R. Sevilla <[EMAIL PROTECTED]>         +63 (2)   4342217
ICSM-F Development Team, UP Diliman             +63 (917) 4458925
OpenPGP Key ID: 0x0E8CE481

------------------------------

From: "Brian Wong" <[EMAIL PROTECTED]>
Subject: Re: Q: fast block ciphers
Date: Wed, 15 Nov 2000 21:39:16 -0500


"Tom St Denis" <[EMAIL PROTECTED]> wrote in message
news:8uvbod$7l6$[EMAIL PROTECTED]...
>
> I once designed a block cipher that uses decorrelation modules for
> security.  The idea was to precompute multiplication in GF(2)^32 as a
> series of four 8x32 sboxes.  With six rounds I achieved a rate of about
> 13 cycles/byte on my AMD K6-II machine which is the fastest speed for a
> block cipher I ever heard of.
>


It's GF(2^32), something you should remember after being corrected for this
so many times. If you can't get this simple statement right, how do you
expect to have any credibility at all?

Brian



------------------------------

From: Dido Sevilla <[EMAIL PROTECTED]>
Subject: Re: Modern Cryptoanalysis?
Date: Thu, 16 Nov 2000 11:02:25 +0800

Jouni Hiltunen wrote:
> 
> I'm searching for a good book and/or paper on modern methods of
> cryptanalysis: tools, algorithms, even source code if it's
> available.
> 
> All suggestions appreciated.
> 

Someone posted a link with a lot of crypto links once:

http://citeseer.nj.nec.com/Security/Encryption/

It looks like it's got a lot of cool papers on cryptology and
cryptanalysis, including seminal papers on differential cryptanalysis
by Biham, Shamir, and Knudsen, and many other useful papers.  Check
out the Counterpane website as well, where you can see how
cryptanalysis is done by experts in the field; all their papers are
available online.
Bruce Schneier also wrote a tutorial on modern cryptanalysis that's
available on the same website.

--
Rafael R. Sevilla <[EMAIL PROTECTED]>         +63 (2)   4342217
ICSM-F Development Team, UP Diliman             +63 (917) 4458925
OpenPGP Key ID: 0x0E8CE481

------------------------------

From: Dido Sevilla <[EMAIL PROTECTED]>
Subject: Re: ECC help please
Date: Thu, 16 Nov 2000 11:08:23 +0800

James Russell wrote:
> 
> Also, could you grow your own ECC and have it be interoperable with
> Certicoms?  Maybe it would run slower or something, but it would still
> interoperate?
> 

AFAIK, Certicom's patents on ECC only cover their particular
implementation details, which are quite fast, and not the whole
concept of ECC itself, as was the case with the RSA algorithm.  You
could create an ECC implementation that didn't use their patented
heuristics, of course, and it would still interoperate with
Certicom's version, although it would probably run slower.

--
Rafael R. Sevilla <[EMAIL PROTECTED]>         +63 (2)   4342217
ICSM-F Development Team, UP Diliman             +63 (917) 4458925
OpenPGP Key ID: 0x0E8CE481

------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: vote buying...
Date: Wed, 15 Nov 2000 19:33:20 -0800


Paul Rubin wrote:

> That traceability is bad even if the election officials are honest and
> the election is fair.  Years later, Sheriff Bubba has worked his way
> up to being Supreme Dictator Bubba, has the election officials
> executed and gets the archived code numbers and uses the ballot data
> to locate everyone who voted against him.

        Won't work. It'll just give him the code numbers of everyone who voted
against him. Remember, we were talking about a case where a citizen
presents his electronic voting receipt to an official.
 
> > > >       You are looking at a scheme specifically designed to provide X and
> > > > complaining that it doesn't provide Y. Of course not, it isn't designed
> > > > to. Unless you want to argue that no scheme can provide both X and Y,
> > > > your argument is pointless.
> > >
> > > I may have missed something but I think I agree with this.  Any scheme
> > > that solves all problems must do both X and Y, where X and Y are in
> > > conflict with each other.
> >
> >       What goal is in conflict with what goal?
> 
> The goal of being able to check votes after the election (to detect
> fraud) is in conflict with the goal of not being able to check the
> votes (to protect voter secrecy).

        So long as you need the voter's help to check them, you can easily meet
both goals.

        DS

------------------------------

From: "Matt Timmermans" <[EMAIL PROTECTED]>
Subject: Re: timing attacks on cryptographic algorithms
Date: Thu, 16 Nov 2000 03:47:51 GMT

You seem to be the unfortunate victim of an explanation in layman's terms.

Certain implementations of certain cryptographic algorithms take time that
is dependent on the input in predictable ways.  This doesn't translate into
a general difference in speed for processing 1s and zeros.

Not many ciphers have been shown to be vulnerable to timing attacks
in typical implementations.  The stereotypical example is probably the
RSA cryptosystem.

To decrypt a message with an RSA private key, you perform a mathematical
operation called a modular exponentiation.  The private key is a large
number (say D), and the plaintext and ciphertext are also large numbers (say
P and C).  To perform an RSA decryption, you calculate P=C^D mod M, where M
is the RSA modulus -- a large number that is part of the public key.

The important thing to remember is that D is the private part.  The easiest
way to do a modular exponentiation is like this:

Decrypt(C,D,M)
    {
    P=1;
    while(D!=0)
        {
        if (D&1) //is the least significant bit a 1?
            P=(P*C)%M;
        C=(C*C)%M; //square the base for the next bit
        D=floor(D/2) //shift out the least significant bit
        }
    return P
    }

This is subject to timing attacks in several ways.  Most notably, you can
easily tell about how many 1 bits are in D, because the algorithm performs
an extra multiplication and modulus operation for each one.  Also remember
that these are big numbers -- 1024 bits long or so.  Many algorithms for
performing multiplication and modulus operations on large numbers take time
that is dependent on their inputs.  By carefully choosing the ciphertexts
that get decrypted, you can find out a lot about D by measuring how long the
decryption takes.


"kihdip" <[EMAIL PROTECTED]> wrote in message
news:8uofd5$cqj$[EMAIL PROTECTED]...
> Timing attacks uses the difference in computation time vs. the number of
> zero's and one's in the input parameters.
>
> Which is faster - zero's or one's ??  ;-)
>




------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
