Cryptography-Digest Digest #143, Volume #13 Sun, 12 Nov 00 00:13:01 EST
Contents:
Re: Algorithm with minimum RAM usage? (Tom St Denis)
Re: A question about RSA ("Peter L. Montgomery")
Factoring discussed on web site (John Savard)
Re: "Secrets and Lies" at 50% off (Stuart Krivis)
Re: Algorithm with minimum RAM usage? (Dido Sevilla)
Re: Randomness from key presses and other user interaction (Benjamin Goldberg)
Integer encoding on a stream (Benjamin Goldberg)
Re: Whole file encryption (Benjamin Goldberg)
LaGrange Interpolating Polynomial Scheme? ("Bruce C. Goldstein")
Re: Updated XOR Software Utility (freeware) Version 1.1 from Ciphile Software
(John Savard)
Re: voting through pgp (David Wagner)
Re: Randomness from key presses and other user interaction (Terry Ritter)
----------------------------------------------------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Algorithm with minimum RAM usage?
Date: Sat, 11 Nov 2000 22:56:44 GMT
In article <8ukh04$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Guy Macon) wrote:
> I sometimes program microcontrollers where my old standby ARCFOUR
> can't be used because it takes too much RAM. What strong encryption
> algorithm uses the minimum amount of RAM?
>
> I have virtually unlimited amounts of ROM, and the application
> is such that I have a virtually unlimited amount of CPU time to
> burn, so efficiency and size are non-issues.
Perhaps in the 2001 CHES proceedings you will find your answer
(hehehe). For now... consider using a variant of Twofish or Rijndael
where the scheduled key is stored in a ROM/EEPROM. This will lower the
key agility but also lower the RAM requirement.
Tom
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: "Peter L. Montgomery" <[EMAIL PROTECTED]>
Subject: Re: A question about RSA
Date: Sun, 12 Nov 2000 00:23:03 GMT
In article <8ualcr$1n9$[EMAIL PROTECTED]>
"Scott Fluhrer" <[EMAIL PROTECTED]> writes:
>Chenghuai Lu <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>> Suppose we know n (= p * q), which is the product of two large primes,
>> and phi(n) where phi(x) is the Euler function. Can anybody give the
>> algorithm to find p and q in polynomial time?
>Bob Silverman of course gave a correct answer: given p*q and (p-1)*(q-1),
>it's not that difficult to find p and q. However, I'd like to answer the
>question I suspect you meant to ask.
>With RSA, the actual relationship between the encryption and decryption
>exponents is:
> d*e = 1 mod lambda(p*q)
>where lambda(p*q) = lcm(p-1, q-1)
>So, I posit the question you meant to ask was: given p*q and lambda(p*q),
>can anyone give an algorithm to find p and q in polynomial time?
Denote k = gcd(p-1, q-1). Then lambda(p*q) = (p-1)*(q-1)/k .
We observe (p - 1)*(q - 1) <= p*q - 2*sqrt(p*q) + 1.
Dividing the latter by lambda(p*q) will give an excellent
numerical approximation to k.
But we can do better. I claim k = GCD(p*q - 1, lambda(p*q)).
Certainly k divides both p*q - 1 and lambda(p*q), since
k divides both p-1 and q-1. Suppose r^e is a prime power
dividing p*q - 1, but which does not divide k.
Either r^e does not divide p-1 or r^e does not divide q-1
-- suppose it does not divide p-1. From
p*q == 1 (mod r^e)
p =/= 1 (mod r^e)
we find q =/= 1 (mod r^e). That is, r^e does not divide q-1 either.
Therefore r^e does not divide lcm(p-1, q-1)= lambda(p*q).
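To make the recovery explicit, here is a short Python sketch (with toy
primes, purely for illustration): compute k as above, then
phi = k*lambda(p*q), and finally p and q as the roots of
x^2 - (p*q + 1 - phi)*x + p*q.

from math import gcd, isqrt

def factor_from_lambda(n, lam):
    # Recover p, q from n = p*q and lam = lcm(p-1, q-1),
    # using k = gcd(n - 1, lam) as argued above.
    k = gcd(n - 1, lam)            # k = gcd(p-1, q-1)
    phi = k * lam                  # (p-1)*(q-1)
    s = n + 1 - phi                # p + q
    d = isqrt(s * s - 4 * n)       # |p - q|
    p, q = (s + d) // 2, (s - d) // 2
    assert p * q == n
    return p, q

# toy example (the primes are illustrative, not from the thread)
p, q = 10007, 10039
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
print(factor_from_lambda(p * q, lam))    # -> (10039, 10007)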
--
If your parent or grandparent was a U.S. President,
don't expect a plurality of the popular vote when you run.
[EMAIL PROTECTED] Home: San Rafael, California
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Crossposted-To: sci.math
Subject: Factoring discussed on web site
Date: Sun, 12 Nov 2000 01:11:49 GMT
My web site now includes a page, at
http://home.ecn.ab.ca/~jsavard/crypto/pk050204.htm
which begins by illustrating Fermat factorization with several
pictures, and which proceeds, through an intermediate discussion of
ways to improve it, to the factor base algorithm, which is reasonably
well explained.
Only brief mentions of the continued fraction and quadratic sieve
methods are present, though, because to explain them in full would
have caused the page to cease to be nontechnical.
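For readers who would rather experiment in code than look at the
pictures, here is a rough Python sketch of the basic Fermat step (it is
not code from the page, and it is only quick when the two factors are
close together):

from math import isqrt

def fermat_factor(n):
    # Search for x with x*x - n a perfect square y*y; then
    # n = (x - y) * (x + y).  n is assumed odd and composite.
    x = isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y = isqrt(x * x - n)
        if y * y == x * x - n:
            return x - y, x + y
        x += 1

print(fermat_factor(10007 * 10169))   # close factors, found quickly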
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (Stuart Krivis)
Crossposted-To: comp.security.misc
Subject: Re: "Secrets and Lies" at 50% off
Date: 11 Nov 2000 20:40:36 -0500
Reply-To: [EMAIL PROTECTED]
On 16 Sep 2000 16:55:54 GMT, Bill Unruh <[EMAIL PROTECTED]> wrote:
>In <[EMAIL PROTECTED]> [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
>writes:
>
>> I think I get derision because I don't worshop Mr. BS
>>and think people vastly overrate is abilitues.
>
>Actually, it is for your spelling.
I asked a random sample of the people in my office if they would want to
read comments by Bruce. They said yes. I then asked if they wanted to read
comments by David Scott, and they all asked, "Who the fsck is he, and why
would I care what he has to say?"
I vote that David A Scott be henceforth banned from Usenet. He's as bad as
that Sternlight guy that hangs around the encryption groups and has a
woodie about pgp.
David Scott... David Sternlight...one and the same person?
--
Stuart Krivis
------------------------------
From: Dido Sevilla <[EMAIL PROTECTED]>
Subject: Re: Algorithm with minimum RAM usage?
Date: Sun, 12 Nov 2000 10:05:08 +0800
Tom St Denis wrote:
>
> Perhaps in the 2001' CHES proceedings you will find your answer
> (hehehe). For now... consider using a variant of Twofish or Rijndael
> where the scheduled key is stored in a ROM/EEPROM. This will lower the
> key agility but also lower the RAM requirement.
>
A reasonable implementation of Rijndael on the 8051 microcontroller
requires 54 bytes of scratch RAM if the key schedule is generated on the
fly as the subkeys are needed. Serpent would need 56 with the same
constraints. Twofish needs 52. RC6 needs 77. Don't even think about
implementing MARS on a small embedded processor... If key agility is a
concern, on the fly key expansion can be performed instead of burning
the expanded key in ROM. These RAM usage figures are from a paper
"Evaluation of Serpent, one of the AES finalists, on 8-bit
microcontrollers", by Vincent Journot, available as part of the Serpent
8051 implementation tarball.
--
Rafael R. Sevilla <[EMAIL PROTECTED]> +63 (2) 4342217
ICSM-F Development Team, UP Diliman +63 (917) 4458925
OpenPGP Key ID: 0x0E8CE481
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Randomness from key presses and other user interaction
Date: Sun, 12 Nov 2000 02:30:27 GMT
Frog2000 wrote:
>
> "Mack" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > >
> > >Tim Tyler wrote:
> > >>
> > >> David Schwartz <[EMAIL PROTECTED]> wrote:
> > >> : Mack wrote:
> > >>
> > >> :> There seems to be some argument as to whether timing
> > >> :> keystrokes is a good source of randomness.
> > >> :>
> > >> :> So lets start a thread on that.
> > >> :>
> > >> :> 1) Key stroke timing is generally quantitized to about 20 ms
> > >> :> give or take.
> > >>
> > >> : It's the give or take in the 20 ms that contains the entropy.
> > >>
> > >> Well, *if* this is true, this is not "randomness from key presses and
> > >> other user interaction" - it's more randomness from clock signal drift.
> > >
> > > The question is whether timing keystokes is a good source of
> > >randomness. I'm arguing that it is. Some of the entropy comes from the
> > >human, some of it from the quantization of real values built into the
> > >hardware. I happen to have a pretty good idea of how much entropy comes
> > >from the fact that the two oscillators involved have uncorrelated
> > >frequencies. I'm not as knowledgeable about the entropy in the
> > >keystrokes themselves.
> > >
> > > DS
> > >
> >
> > But the thread was about the user interaction. There was
> > already a thread on oscillators.
> >
> >
> > Mack
> > Remove njunk123 from name to reply by e-mail
>
> NO, usesrs aren't that random...
The letters which users type certainly aren't random,
but the intervals between keystrokes by users certainly
contain a fair bit of entropy.
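For what it's worth, here is a minimal Python sketch of the idea: hash
the gaps between keypresses into a pool (it assumes a POSIX terminal,
and the function name is mine). Only a few low-order bits of each gap
are unpredictable, so many keystrokes are needed.

import hashlib, sys, termios, time, tty

def keystroke_entropy(n_keys=32):
    # Fold inter-keystroke intervals (in nanoseconds) into a SHA-256 pool.
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    pool = hashlib.sha256()
    try:
        tty.setcbreak(fd)                    # deliver keys one at a time
        last = time.perf_counter_ns()
        for _ in range(n_keys):
            sys.stdin.read(1)
            now = time.perf_counter_ns()
            pool.update((now - last).to_bytes(8, "little"))
            last = now
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)
    return pool.digest()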
--
There are two methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Crossposted-To: comp.compression
Subject: Integer encoding on a stream
Date: Sun, 12 Nov 2000 02:30:37 GMT
A while back, I asked a few questions about how one should store an
integer on a bit stream in a way that uses few bits. I found this
method in Knuth (volume 3); it meets my additional requirements of
being prefix free and of having larger integers be lexicographically
greater than smaller integers.
Here's some pseudocode for it.
writeIntegerOnStream( p ) {
    if( p == 0 ) { writeBits( "0" ); return; }
    writeBits( "1" );
    writeIntegerOnStream( bitLength(p) - 1 );
    writeBits( tobase2String(p).substring(1) );
}

readIntegerFromStream() {
    if( readBits(1) == "0" ) return 0;
    length = readIntegerFromStream();
    return base2toInteger( "1" + readBits(length) );
}
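For concreteness, the same method in Python, using bit strings in place
of a real bit stream (the helper names are mine):

def write_int(p):
    # Prefix-free, order-preserving code for p >= 0, as a bit string.
    if p == 0:
        return "0"
    b = bin(p)[2:]                    # binary, leading '1' included
    return "1" + write_int(len(b) - 1) + b[1:]

def read_int(bits, pos=0):
    # Decode one integer starting at bits[pos]; return (value, new_pos).
    if bits[pos] == "0":
        return 0, pos + 1
    length, pos = read_int(bits, pos + 1)
    return int("1" + bits[pos:pos + length], 2), pos + length

# codes: 0 -> 0, 1 -> 10, 2 -> 1100, 3 -> 1101, 4 -> 1110000
for n in range(5):
    print(n, write_int(n))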
Any comments on how efficient this method is compared to the others that
had been suggested?
Those that I can recall offhand were:
0) Write the integer as a fixed-length number
1) base 255 (with value 255 as a terminator)
2) base 128 (with the 8th bit showing if this is the last byte
in the number), aka BER encoding.
3) Fibonacci encoding (hard to explain)
Methods 1) through 3) are prefix free; the first two are for writing on
byte streams, and the third is for writing on bit streams.
None of them (except for method 0) are good for a lexicographical
comparison of the encoded numbers (which is a neat property, but not
something I care about atm).
Additionally, someone suggested using BER encoding to write the length,
then following it with the binary representation of the number.
David Scott also tried to add his 2 cents, but his coins were slugs.
--
There are two methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Whole file encryption
Date: Sun, 12 Nov 2000 02:30:40 GMT
Tom St Denis wrote:
>
> In article <[EMAIL PROTECTED]>,
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> >
> >
> > Benjamin Goldberg wrote:
> > >
> > > The following is a simple idea for whole file encryption.
> > > sbox is actually a keyed sbox.
> > >
> > > encrypt_r( data, length, sbox )
> > > tmp1 = length, tmp2 = tmp1/2, tmp3 = tmp1-tmp2
> > > ptr1 = &data[0], ptr2 = &data[tmp3]
> > > for( i = 0; i < tmp2; ++i, ++ptr1, ++ptr2 )
> > > *ptr1 += sbox[*ptr2]
> > > *ptr2 += sbox[*ptr1]
> > > *ptr1 += sbox[*ptr2]
> > > if( tmp2 != tmp3 )
> > > *ptr1 = sbox[*ptr1]
> > > if( tmp2 > 0 )
> > > encrypt_r( data, tmp2, sbox )
> > > if( tmp3 > 0 )
> > > encrypt_r( ptr1, tmp3, sbox )
> > >
> > > encrypt( data, length, sbox )
> > > encrypt_r( data, length, sbox )
> > > encrypt_r( data, length, sbox )
> >
> > I don't see why you have
> >
> > *ptr1 += sbox[*ptr2]
> > *ptr2 += sbox[*ptr1]
> > *ptr1 += sbox[*ptr2]
> >
> > and don't have simply the first two statements. (Compare
> > a normal Feistel cipher).
>
> Because the Luby-Rackoff proofs of construction only hold when there
> are at least three rounds.
>
> > Does
> >
> > encrypt( data, length, sbox )
> > encrypt_r( data, length, sbox )
> > encrypt_r( data, length, sbox )
> >
> > simply mean that you want to repeat encrypt_r exactly
> > two times? If yes, why?
Yes. Because if it's only done once, avalanche might not occur, since
lengths that aren't 2**m are allowed.
> > If I understand correctly, what you do in encrypt_r
> > is equivalent to doing a permutation of the elements
> > of the array data (or segments of that array in the
> > recursive calls) thru bringing those at some distance
> > away to become neighbours and then perform a normal
> > Feistel cycle.
I suppose that you could view it that way, but a more accurate model
would be to consider this as a recursively implemented FFT structure,
with keyed mixing (and if the size at a particular level is odd, then do
a keyed substitution on the odd byte, and mixing on the rest).
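In concrete terms, here is a minimal Python sketch of the construction
as I read it (byte-wise addition mod 256, a 256-entry keyed sbox, and
the recursion stopped at single-byte segments, which get a plain
substitution):

def encrypt_r(data, start, length, sbox):
    # data: bytearray, sbox: 256-entry keyed substitution table
    if length < 2:
        if length == 1:
            data[start] = sbox[data[start]]  # stop the recursion here
        return
    half = length // 2                       # tmp2
    rest = length - half                     # tmp3 (>= half)
    for i in range(half):
        a, b = start + i, start + rest + i
        data[a] = (data[a] + sbox[data[b]]) & 0xFF
        data[b] = (data[b] + sbox[data[a]]) & 0xFF
        data[a] = (data[a] + sbox[data[b]]) & 0xFF
    if half != rest:                         # odd length: fix up the middle byte
        data[start + half] = sbox[data[start + half]]
    encrypt_r(data, start, half, sbox)
    encrypt_r(data, start + half, rest, sbox)

def encrypt(data, sbox):
    encrypt_r(data, 0, len(data), sbox)      # two passes, as above
    encrypt_r(data, 0, len(data), sbox)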
> > There is some similarity to an idea I
> > posted recently in the thread 'On block encrpytion
> > processing with intermediate permutations' (25th Sep),
> > the difference being that I use pseudo-random
> > permutations, while you employ special permutations,
> > if I don't err.
Well, yes, but your pseudo-random permutations make no guarantee that
each byte of the input is exposed to each other byte of the input. My
deterministic permutation does make such a guarantee.
> > It may be of interest to note that, if the file, i.e.
> > the array data, has 2^m elements, then one way of
> > applying a Feistel cipher is doing recursion, i.e.
> > first dividing the whole into two halves for Feistel
> > processing which is itself done by subdivision and so
> > on. See the thread 'On higher order Feistel schemes'
> > posted by me on 13th May.
I had a Feistel structure in mind when I designed the cipher.
> That was my design in TC5, and Matt Blazes idea for Turtle, this is
> nothing new!!!!
I haven't seen Turtle, but I can say that my construction differs from
TC5 in three ways:
1) This uses a keyed sbox, whereas TC5 uses an ordinary sbox with round
keys.
2) This is a variable-sized block cipher, whereas TC5 is a fixed-size
block cipher.
3) The size of this cipher's key is constant, whereas TC5 would need
more round keys if the size of the block were increased.
--
There are two methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
------------------------------
From: "Bruce C. Goldstein" <[EMAIL PROTECTED]>
Subject: LaGrange Interpolating Polynomial Scheme?
Date: Sat, 11 Nov 2000 19:49:30 -0700
The following question relates directly to information given in
Bruce Schneier's book "Applied Cryptography", section 23.2. Basically,
I'm looking for an understanding of this particular scheme (and possibly
code already done in VB or ???) that would allow, let's say, 5 people to
have parts of a key phrase which will allow the unlocking of a private
key (for import/export purposes), but with only 3 people required to
enter their portions of the key.
=======================
I'm somewhat intrigued with the LaGrange Interpolating Polynomial Scheme
for allowing x of y participants to unlock a particular secret. I did a
search on the Internet and very little came up in the way of its use in
cryptography applications. There's GOTTA BE some source code and/or
freeware (shareware?) out there that we could utilize. But where? I'm
afraid my days of being the mathematical physicist are long behind me
and I'm not quite up to speed on writing the linear algebraic solutions
required to solve this problem. One thing that has me fairly confused is
what those variables, such as "M", really mean in the equation. If "M"
is a message string, such as "Hello, World", is "M" just a byte
representation of this string? How exactly do the participants know the
length of the key phrase? Is there a simple way to generate "P",
especially if it's going to be a very large number?
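To make the question concrete, here is my rough understanding of the
scheme sketched in Python (a 3-of-5 split; the names and the choice of
prime are just illustrative). As far as I can tell, "M" is simply the
secret's bytes read as one big integer, and "P" can be any public prime
larger than M:

import secrets

P = 2**521 - 1                        # any prime larger than M will do

def make_shares(secret, need, total):
    M = int.from_bytes(secret.encode(), "big")   # "M" = bytes as an integer
    assert M < P
    coeffs = [M] + [secrets.randbelow(P) for _ in range(need - 1)]
    # share x is the random degree-(need-1) polynomial evaluated at x
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, total + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0, all arithmetic mod P
    M = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        M = (M + yi * num * pow(den, -1, P)) % P
    return M.to_bytes((M.bit_length() + 7) // 8, "big").decode()

shares = make_shares("Hello, World", need=3, total=5)
print(recover(shares[:3]))            # any 3 of the 5 shares recover it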
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Crossposted-To: alt.freespeech,talk.politics.misc,talk.politics.crypto
Subject: Re: Updated XOR Software Utility (freeware) Version 1.1 from Ciphile
Software
Date: Sun, 12 Nov 2000 03:50:47 GMT
On Wed, 08 Nov 2000 11:51:38 -0800, Anthony Stephen Szopa
<[EMAIL PROTECTED]> wrote, in part:
>Andre van Straaten wrote:
>> What people was concerned about is the fact that no source code has been
>> given and the program COULD bear additional hidden features as
>> backdoors, trojans, etc.
I doubt that this would be the _primary_ concern with a program that
just XORs two files together.
But I believe that Mr. Szopa does also make available an encryption
program with an algorithm of a more conventional kind.
>You just have to look at the reply posts and what they really
>contribute to a particular thread to make this determination.
You've said something your opponents will agree with.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: voting through pgp
Date: 12 Nov 2000 04:06:15 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)
John A. Malley wrote:
>Making the act of voting public introduces a distributed, collective
>witnessing/authentication to the genuineness of the vote cast.
I don't see it yet. If we're talking about Internet voting, making the
fact that someone has voted public does not say anything about the
genuineness of the vote cast -- you can't tell whether or not the vote cast was the
one the legitimate user wanted to be cast. In a physical voting situation,
this issue does not arise, because one has a trusted path to a mechanism
trusted to accurately record the user's wishes, but there is no basis for
such trust when one is talking about voting on one's home PC. Nor do I
think that this is an issue that can be easily solved with clever
cryptographic algorithms.
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Randomness from key presses and other user interaction
Date: Sun, 12 Nov 2000 04:45:02 GMT
On Sun, 12 Nov 2000 02:30:27 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt Benjamin Goldberg
<[EMAIL PROTECTED]> wrote:
>Frog2000 wrote:
>>
>> "Mack" <[EMAIL PROTECTED]> wrote in message
>> news:[EMAIL PROTECTED]...
>> > >
>> > >Tim Tyler wrote:
>> > >>
>> > >> David Schwartz <[EMAIL PROTECTED]> wrote:
>> > >> : Mack wrote:
>> > >>
>> > >> :> There seems to be some argument as to whether timing
>> > >> :> keystrokes is a good source of randomness.
>> > >> :>
>> > >> :> So lets start a thread on that.
>> > >> :>
>> > >> :> 1) Key stroke timing is generally quantitized to about 20 ms
>> > >> :> give or take.
>> > >>
>> > >> : It's the give or take in the 20 ms that contains the entropy.
>> > >>
>> > >> Well, *if* this is true, this is not "randomness from key presses and
>> > >> other user interaction" - it's more randomness from clock signal drift.
>> > >
>> > > The question is whether timing keystokes is a good source of
>> > >randomness. I'm arguing that it is. Some of the entropy comes from the
>> > >human, some of it from the quantization of real values built into the
>> > >hardware. I happen to have a pretty good idea of how much entropy comes
>> > >from the fact that the two oscillators involved have uncorrelated
>> > >frequencies. I'm not as knowledgeable about the entropy in the
>> > >keystrokes themselves.
>> > >
>> > > DS
>> > >
>> >
>> > But the thread was about the user interaction. There was
>> > already a thread on oscillators.
>> >
>> >
>> > Mack
>> > Remove njunk123 from name to reply by e-mail
>>
>> NO, usesrs aren't that random...
>
>The letters which users type certainly aren't random,
>but the intervals between keystrokes by users certainly
>contain a fair bit of entropy.
Keystroke intervals *as* *typed* may indeed contain "a fair amount of
entropy." Unfortunately, the keyboard scanning process quantizes most
of that away.
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************