Cryptography-Digest Digest #603, Volume #14 Wed, 13 Jun 01 08:13:01 EDT
Contents:
Re: The 94 cycle 64-bit block cipher :-) (Phil Carmody)
Re: Unicity distance and compression for AES ([EMAIL PROTECTED])
Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY
([EMAIL PROTECTED])
Re: help non-elephant encryption (Nicholas Sheppard)
Re: Yarrow PRNG (Mark Wooding)
Re: help non-elephant encryption ([EMAIL PROTECTED])
Re: One last bijection question (Mark Wooding)
Re: Simple Crypto II, the public key... (Phil Carmody)
Re: Timer chip ("Harris Georgiou")
Re: Unicity distance and compression for AES ([EMAIL PROTECTED])
Re: One last bijection question ([EMAIL PROTECTED])
Re: Simple Crypto II, the public key... (Phil Carmody)
Re: Simple Crypto II, the public key... (Phil Carmody)
Re: Sophie-Germain Primes for sale (Phil Carmody)
Re: Yarrow PRNG (Tim Tyler)
----------------------------------------------------------------------------
From: Phil Carmody <[EMAIL PROTECTED]>
Subject: Re: The 94 cycle 64-bit block cipher :-)
Date: Wed, 13 Jun 2001 10:26:05 GMT
Tom St Denis wrote:
> Well I feel honoured that you are archiving my stuff :-) Feel free to
> download/repost/edit/whatever anything on my site. You can take my source
> and redistribute it if you want. (That's the point of sharing ya know :-0).
Sharing is good. Readers contributing back is even better.
> I would appreciate comments on my upcoming ideas though. Even if you don't
> have something rigorous more than "oh neat". It's nice to just hear from
> others.
Upcoming? Hmmm, I'd rather rewind the clock a few months if I may :-)
I'm curious about your 3-hash actually.
<<<
for (r = 16; r < SIZE; r++) {
        t = W[r - 3] ^ W[r - 8] ^ W[r - 14] ^ W[r - 16] ^ r ^ 0x9E379B93ul;
        W[r] = (t << 1ul) | (t >> 31ul);
}
>>>
The ^r seems to be added for a little more non-linearity, and the
^0x9E379B93ul seems to add some noise to those cases over-populated with
zeros.
However, the ^r only touches the bottom 7 bits (or thereabouts).
Assuming x86 has a nice fast integer multiply, wouldn't
^(r*0x9E379B93ul)
do a better job, potentially touching all bits?
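Something like the following sketch, reusing the names from the quoted
fragment (and assuming t and W[] are 32-bit unsigned, as they appear to be):
    for (r = 16; r < SIZE; r++) {
            /* multiplying r by an odd constant spreads it over all 32 bits
               before the XOR, instead of touching only the low few bits */
            t = W[r - 3] ^ W[r - 8] ^ W[r - 14] ^ W[r - 16] ^ (r * 0x9E379B93ul);
            W[r] = (t << 1ul) | (t >> 31ul);
    }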
I'm also curious - why aren't the well-known CRC algorithms used as
hashes? Is it that they aren't one-way? (They look reversible, but I've
not studied them closely.) Or is it just that they are too short, and if
they were made longer they'd take too long to actually get all the bits
mixed up? (And so would be useless for a short message.)
Phil
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Unicity distance and compression for AES
Date: Wed, 13 Jun 2001 01:32:55 -0800
Tim Tyler wrote:
>
> [EMAIL PROTECTED] wrote:
> : Tim Tyler wrote:
> :> [EMAIL PROTECTED] wrote:
[snip]
> :>
> :> : What isn't clear to me is how a compression algorithm can be intelligent
> :> : enough to distinguish "meaningful" from "meaningless" inputs (although
> :> : it would be easier if the compression algorithm knew the input language).
> :>
> :> Compression algorithms need to do no such thing in order
> :> for the unicity distance to be increased.
> :>
> :> All they really need to do is compress plausible-looking messages
> :> on average - and face it - if they didn't do that, it would be hard
> :> to justify calling them compressors in the context of the target data.
I worked through an example that indicates that this isn't true. Whether
or not the redundancy is reduced depends on how many meaningless messages
the compressor compresses. I don't think it's sufficient to say that the
compressor increases the unicity distance just because it compresses more
meaningful messages than meaningless ones.
> : Someone is going to have to a better job of explaining this before I can
> : buy it.
> : The explanation should be simple:
>
> : n_o = H(k)/d where n_o = unicity distance, H(k) is keyspace entropy
> : and d = redundancy = r_o - r_n and r_o = r_n if all possible
> : messages are meaningful.
>
> : Explain how compression effectively reduces the redundancy (i.e. even if
> : I am able to decompress decryptions before determining whether or not
> : the key is spurious) if both meaningful and meaningless messages are
> : compressed. H(k) is assumed constant so the only way to increase n_o is
> : to decrease d . So to show that compression increases unicity distance,
> : one has to show that compression effectively reduces the redundancy d.
>
> That's what compression *does*. It makes files shorter, increasing
> the entropy per bit (entropy remains the same, number of bits in
> message decreases). When entropy-per-bit goes up, redundancy goes down.
I think the entropy refers to the entropy of the message space, which
would be affected if the number of bits decreases because the message
space would become smaller.
> : I don't see how compression can reduce d unless it filters out
> : meaningless messages.
>
> I'm not sure what "filter out" might mean in the context of lossless
> compression - but assuming it means "makes longer" it logically must
> do that in order to compress the target set.
>
> If meaningful files get shorter, other files must get longer, by the
> counting theorem.
>
> What it *doesn't* need to do is filter out *all* meaningless messages.
You are right about this. Even if the compressor compresses some meaningless
messages, the unicity distance *may* increase, but my example seems to show
that it depends on the number of meaningless messages that are compressed.
Compressor Example
==================

Uncompressed        Compression #1     Compression #2     Compression #3
Mesg.   Prob.       (code) (prob.)     (code) (prob.)     (code) (prob.)
=============       ==============     ==============     ==============
0000    1/8         000    2/11        000    4/21        000    4/20
0001    1/8         001    2/11        001    4/21        001    4/20
0010    1/8         010    2/11        010    4/21        010    4/20
0011    1/16        011    1/11        011    2/21        011    2/20
0100    1/16        100    1/11        100    2/21        100    2/20
0101    1/16        101    1/11        101    2/21        101    2/20
0110    1/16        110    1/11        110    2/21
0111    1/16        111    1/11
1000    1/16
1001    1/16
1010    1/32                           111    1/21        110    1/20
1011    1/32                                              111    1/20
1100    1/32
1101    1/32
1110    1/32
1111    1/32
Compression #1 only compresses high probability ("meaningful") messages.
Compression #2 compresses one low probability ("meaningless") message.
Compression #3 compresses two low probability messages.
In reality meaningless messages would have a much lower probability
but I just picked easy numbers.
Using H(m)=p_m*log(p_m)
H(m)=3.8125 for the uncompressed message space with n=4
H(m)=2.9139 for the message space after Compression #1
H(m)=2.869 for the message space after Compression #2
H(m)=2.61 for the message space after Compression #3
Using the notation in Shannon's "Comm. Theory of Secrecy Sys."
D_n = log(G) - H(m)
where D_n is the total redundancy for n letters of mesg.
and G is the total number of messages of length n.
D_n = 4 - 3.8125 = 0.1875 for the uncompressed message space (G = 16)
For Compression #1:
D_n = 3 - 2.9139 = 0.086 < 0.1875 so the unicity distance increases
For Compression #2
D_n = 3 - 2.869 = 0.1315 < 0.1875 so unicity distance increases
even though not all compressed messages were meaningful (as you said).
For Compression #3
D_n = 3 - 2.61 = 0.39 > 0.1875 so unicity distance *decreases*
even though the majority of compressed messages are meaningful
(which is *not* as you said).
Correct me if I did something wrong.
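For anyone who wants to recheck these figures, here is a minimal C sketch
(mine, not part of the analysis above) of the H(m) and D_n computation,
shown for the uncompressed distribution; the compressed cases just
substitute the other probability columns:
    /* H(m) = -sum p*log2(p) over the message space; D_n = log2(G) - H(m) */
    #include <stdio.h>
    #include <math.h>

    static double entropy(const double *p, int count)
    {
        double h = 0.0;
        for (int i = 0; i < count; i++)
            h -= p[i] * log2(p[i]);
        return h;
    }

    int main(void)
    {
        /* the uncompressed distribution from the table above */
        double uncompressed[16] = {
            1.0/8, 1.0/8, 1.0/8, 1.0/16, 1.0/16, 1.0/16, 1.0/16, 1.0/16,
            1.0/16, 1.0/16, 1.0/32, 1.0/32, 1.0/32, 1.0/32, 1.0/32, 1.0/32
        };
        double h = entropy(uncompressed, 16);
        printf("H(m) = %.4f, D_n = %.4f\n", h, log2(16.0) - h);
        return 0;
    }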
------------------------------
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY
From: [EMAIL PROTECTED]
Date: 13 Jun 2001 06:31:58 -0400
Mok-Kong Shen <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED] wrote:
>>
>> (2) In the case of PK systems, I generally *can* be 100% positive I have
>> now got the private key in my hands.
>
> That's the axiom of existence of opponents of unbounded resources. If
> one is any realistic, one wouldn't adopt that axiom.
No it isn't, Mok. There are many many ways I can get a key in my hands,
which I think might be yours. Once I have a candidate in my hand, I can
verify with 100% certainty whether it *is* or *is not* your key.
In the case of a OTP, YOU YOURSELF can walk right up to me and GIVE me
your key, along with a box of chocolates. That does me no good: I have
no way of knowing that it's your *real* OTP; it might be a clever fake
you created to fool me. Even if your wife mails me the OTP, together with
a note that says ``let's get together'', I can't be sure it's your key:
you might have planted a key to fool your wife.
That's what ``zero information-theoretic security'' means: by publishing
your public key, you have furnished a foolproof means of telling the real
private key from all other candidate keys. Which is the answer to your
original question.
Len.
--
I'm more worried about real security problems than theoretical
reliability problems.
-- Dan Bernstein
------------------------------
Date: Wed, 13 Jun 2001 16:43:08 +1000
From: Nicholas Sheppard <[EMAIL PROTECTED]>
Subject: Re: help non-elephant encryption
On 12 Jun 2001, Gregory G Rose wrote:
> Also it's perfectly reasonable to claim that a
> system has perfect secrecy for streams up to some
> (implied) finite length. After all, we're used to
> changing DES keys every 32 gigabytes.
The claim seems to be that their communicating devices can dynamically
generate a truly random keystream "on-the-fly", presumably of arbitrary
length, by using "random network fluctuations". Similar, apparently, to
quantum cryptography (so they say).
Perhaps it works by having the communicating devices measuring some common
source of randomness available on the network, and using the results as
a one-time pad? I.e. it could be similar to the Rabin system that went
through sci.crypt some months back, which uses a public random source to
generate a one-time pad.
Does it work? Is it practical? No idea.
--
Nicholas Sheppard | Ph: +61 2 4221 5290
Research Fellow | Fax: +61 2 4221 4170
School of Information Technology & Computer Science | E-mail: [EMAIL PROTECTED]
The University of Wollongong, Australia | WWW: www.uow.edu.au/~nps
------------------------------
From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Yarrow PRNG
Date: 13 Jun 2001 11:02:19 GMT
[EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> 1. Has anyone reviewed the properties of the PRNG? Is it as good as
> it claims?
The paper claims 160-bit security for Yarrow-160. If you interpret that
to mean that you can't tell the difference between Yarrow-160 and real
random data with less than 2^{160} effort then the claim is false.
The output stage uses triple-DES in counter mode. Output is in 64-bit
chunks. If the source were truly random, we'd expect two consecutive
outputs to be equal with probability 2^{-64}. With a counter-mode
output, this would never happen at all, except that Yarrow-160 rekeys
the cipher every P_G = 10 output blocks. At a rekeying boundary, we can
assume that the probability of a duplicate is as we'd expect. Hence,
the probability of two consecutive blocks being equal is about ten times
lower than it ought to be. Reading 2^{64} blocks and returning `random'
if there is a repeated block and `non-random' if there is none gives an
advantage of about 0.63. This assumes we know where the block
boundaries are.
There's a better attack which also copes with this problem (but assumes
that the adversary is given all of the generator's output). I'll
parameterize it against general Yarrow. Recall that the block cipher
used in Yarrow operates on n-bit blocks, and is rekeyed every P_G
blocks. (Actually, the rekeying interval is \min(2^n, 2^{k/3}, P_G)
where k is the key length of the cipher, but I'll label this constant
value P_G in the following.)
Given P_G blocks of n bits chosen at random, two blocks will be equal
with probability C(2^n, P_G) >= (1 - 1/e) P_G (P_G - 1)/(2 2^n).
(In the case of Yarrow-160, this probability is about 2^{-60}.) In the
case of Yarrow output between two rekeyings of the output stage, the
probability is zero, because output is driven by a counter. If we don't
guess the boundary correctly, the probability is lower than C(2^n, P_G)
but nonzero. (The notation C(N, q) denotes the probability that, if q
balls are thrown at random into N >= q buckets, there is a bucket
containing more than one ball.)
The attack proceeds as follows. Request n (q P_G + P_G - 1) bits of data
(i.e., q rekeying-intervals' worth of data plus P_G - 1 extra blocks, so
that every candidate boundary offset can be tested against q full intervals).
For each possible position of the rekeying boundary, test whether any
block is duplicated within a rekeying interval. If not, claim
`non-random (Yarrow)'; otherwise, claim `random'.
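A rough C sketch of that scan (my own illustration, not from the Yarrow
paper; n = 64 and P_G = 10 as in Yarrow-160, with blk[] holding the
generator output split into 64-bit blocks):
    #include <stdint.h>
    #include <stddef.h>

    #define P_G 10

    /* returns 1 for "looks like Yarrow", 0 for "looks random" */
    int looks_like_yarrow(const uint64_t *blk, size_t nblk)
    {
        for (size_t off = 0; off < P_G; off++) {      /* candidate boundary */
            int dup_found = 0;
            for (size_t start = off; start + P_G <= nblk && !dup_found;
                 start += P_G) {
                for (size_t i = 0; i < P_G && !dup_found; i++)
                    for (size_t j = i + 1; j < P_G; j++)
                        if (blk[start + i] == blk[start + j]) {
                            dup_found = 1;
                            break;
                        }
            }
            /* counter-mode output never repeats within a rekeying interval,
               so an offset with no repeats anywhere is consistent with Yarrow */
            if (!dup_found)
                return 1;
        }
        return 0;
    }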
I'll now measure the advantage of this adversary. It will *always*
detect Yarrow if it's being used. The only issue is the probability
that it will misdiagnose a random source as being Yarrow. Hence, the
advantage is precisely 1 - (1 - C(2^n, P_G))^q.
Some playing with Emacs suggests that, in the case of Yarrow-160, I need
q >= 2^{58.6} to have an advantage greater than 1/2.
-- [mdw]
------------------------------
Subject: Re: help non-elephant encryption
From: [EMAIL PROTECTED]
Date: 13 Jun 2001 07:08:04 -0400
Nicholas Sheppard <[EMAIL PROTECTED]> writes:
>
> Perhaps it works by having the communicating devices measuring some common
> source of randomness available on the network...
In that case, it can't work. Passive observers on the network can observe
the same random source, and can deduce the key. Quantum crypto works
because there's no such thing as a ``passive observer''; observation
changes the system, and scrambles the message rather than intercepting it.
Len.
--
We will only do with your money what we would do with our own.
-- Warren Buffett, 1983
------------------------------
From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: One last bijection question
Date: 13 Jun 2001 11:09:34 GMT
Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> The problem is with Mark Wooding's suggestion to replace in future
> 'range' with 'image'.
The suggestion was to use the term `image' to sidestep this issue of
ambiguity of `range' which has distracted the current conversation. I
don't intend to overturn the way mathematics is done elsewhere, just dig
the current thread out of a hole.
> Now this change may be good for some people whose books or papers
> mostly employ 'image' but may on the other hand be bad for other
> people whose books or papers mostly employ 'range'. Either way, some
> people would get advantage/disadvantage.
The problem isn't that some people aren't familiar with the term
`range'. The problem is that *everyone* seems familiar with it, but
we disagree about its meaning. It's best to avoid ambiguous terms, so
I've suggested what I'm sure is a less common alternative. Since this
is a suggestion applicable only to the current discussion, I didn't
think it was terribly controversial.
-- [mdw]
------------------------------
From: Phil Carmody <[EMAIL PROTECTED]>
Subject: Re: Simple Crypto II, the public key...
Date: Wed, 13 Jun 2001 11:11:48 GMT
Tom St Denis wrote:
>
> "John Savard" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > On Tue, 12 Jun 2001 23:04:40 GMT, "Tom St Denis"
> > <[EMAIL PROTECTED]> wrote, in part:
> >
> > >How do you cheat with "mod operations"? I have a list of good DH primes
> if
> > >you want
> >
> > Well, if P is 2^n-1, one can perform modulo arithmetic without long
> > division. I don't know if a prime can be Mersenne and Sophie Germain
> > at the same time, though.
(2^n+/-c)/d for small c and d are good primes for this.
> Ah but the # of good primes of the form 2^n - 1 is few. i dunno of any good
> ~1024 bit primes of that form.
>
> Of course [for anyone in the dark] a mod operation with these modulos is
> just a shift right n bits and an addition.
>
> So "mod 255" would be
>
> a = ((a >> 8) + a) & 255;
Stick 65535 into that :-)
> In C... so you could do a mult like
>
> a = b * c;
> a = ((a >> 8) + a) & 255;
a = (a & 255) + (a >> 8);
applied twice, and then turn any 255 into 0.
However, if you're doing this repeatedly, just accumulate the carries.
Anyway, this is the cheating I'm thinking of.
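Spelt out for the 65535 case, a minimal sketch of the idea (an illustration,
not anyone's production code):
    #include <stdint.h>

    uint16_t mulmod65535(uint16_t b, uint16_t c)
    {
        uint32_t a = (uint32_t)b * c;      /* up to 32 bits */
        a = (a & 0xFFFF) + (a >> 16);      /* first fold: 2^16 == 1 (mod 65535) */
        a = (a & 0xFFFF) + (a >> 16);      /* second fold handles the carry */
        if (a == 0xFFFF)                   /* 2^16 - 1 == 0 (mod 65535) */
            a = 0;
        return (uint16_t)a;
    }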
Cor, I'm full of questions... (and this group is brilliant for answers,
thanks!) Here comes another!
How are these good 'DH' primes (Diffie Hellman) related to use in
ElGamal? Is it the same class of primes, it's just that the name 'DH'
stuck? If so, what makes a good DH prime? As I was on the topic of
'minimal' algorithms - what makes a _bad_ DH prime?
Phil
------------------------------
From: "Harris Georgiou" <[EMAIL PROTECTED]>
Subject: Re: Timer chip
Date: Tue, 12 Jun 2001 22:06:55 +0300
Paul Rubin <[EMAIL PROTECTED]> wrote in the discussion message:
[EMAIL PROTECTED]
> HyperCube <[EMAIL PROTECTED]> writes:
> > Hi folks, I heard there's a way to directly access the processor's or
> > board's timer chip by reading out a special register or memory address.
> > This bypasses the common timer interrupt and should give a resolution in
> > times of nano-seconds(!?), of course well suited for random number
> > generation. Does anybody know how it is done (I mean how it is really
> > done, in detail)? Thanks a lot.
>
> It's processor and hardware specific but generally it's done by having
> a CPU cycle counter on the processor. Every clock cycle (500 MHz
> would mean every 2 nanoseconds), the counter gets incremented; and
> there's a machine instruction for reading the counter. See for
> example the RDTSC instruction on Pentium processors.
There are also other ways of doing it, using inp/outp instructions to
specific ports - look for simple timer implementations (sources) in Pascal
or C. However, keep in mind that all these methods are very
hardware-specific, and they do not behave well in modern operating systems
(which actually use them for internal scheduling); of course, all the work
should be done in less than the available time interval, otherwise the
system will freeze (and probably crash).
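For what it's worth, a minimal GCC-style sketch of the cycle-counter read
itself (inline assembly, Pentium-class x86 only; treat it as an illustration
rather than portable code):
    #include <stdio.h>

    static unsigned long long read_tsc(void)
    {
        unsigned int lo, hi;
        /* RDTSC returns the 64-bit cycle counter in EDX:EAX */
        __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
    }

    int main(void)
    {
        unsigned long long t0 = read_tsc();
        unsigned long long t1 = read_tsc();
        /* the low bits of such deltas are a common source of timing jitter */
        printf("delta = %llu cycles\n", t1 - t0);
        return 0;
    }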
--
Harris
- 'Malo e lelei ki he pongipongi!'
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Unicity distance and compression for AES
Date: Wed, 13 Jun 2001 02:20:07 -0800
[EMAIL PROTECTED] wrote:
>
>
> Using H(m)=p_m*log(p_m)
Oops. I forgot to add that it is summed over all possible messages in
the mesg. space.
------------------------------
Subject: Re: One last bijection question
From: [EMAIL PROTECTED]
Date: 13 Jun 2001 07:26:24 -0400
[EMAIL PROTECTED] (Mark Wooding) writes:
>
> The suggestion was to use the term `image' to sidestep this issue of
> ambiguity of `range' which has distracted the current conversation.
By the way, ``range'' is defined wrong in (some/most/all) American
Middle and High Schools; I always heard it defined as the codomain,
while the word ``codomain'' was not used at all.
>> Now this change may be good for some people whose books or papers
>> mostly employ 'image'...
I agree with Mark that nobody should be confused by ``image''. Except
that it is usually necessary to answer the question, ``image of
what?'' The image of the domain is the range. The image of a singleton
is a singleton in the codomain. The image of a subset is a subset of
the codomain, and in particular a subset of the range.
Len.
--
The easiest way to fix this bug is to change the spec. ``It's not a bug,
it's a feature.''
-- Dan Bernstein
------------------------------
From: Phil Carmody <[EMAIL PROTECTED]>
Subject: Re: Simple Crypto II, the public key...
Date: Wed, 13 Jun 2001 11:30:15 GMT
Gregory G Rose wrote:
> In article <w6zV6.102704$[EMAIL PROTECTED]>,
> Tom St Denis <[EMAIL PROTECTED]> wrote:
> >Just FYI. The only Mersenne prime around 1024-bits is 2^1279 - 1, Where the
> >order has factors
> >
> >2 * 3^3 * 7 * 19 * 73 * 1279 * _c354 * 228479 * 17467 * 66457 * 102241 * 5113
> >
> >_c354 is a 354 digit composite.
> >
> >As you can see it's roughly speaking "smooth" [excuse the bad pun]. Which
>
> It is, by definition, not smooth. For it to be
> smooth, *all* the factors must be small. If c354
> had small factors it would have been factored by
> now.
I thought I once encountered a loose definition of smooth that permitted
just one factor to be large. I think this was in the context of Pollard
factoring (dunno which one - the one with the B parameters, 'p-1'?). I'm
ashamed of myself; I ought to know this off by heart (being a prime nut,
so to speak).
It may have been worded "near-smooth" or something weaselly though.
[SNIP]
> >I dunno of any other method to determine the order of a base other than
> >factoring the order of the group
>
> It can't be any easier than factoring, since
> otherwise you'd have invented a new factoring
> algorithm.
In fact isn't this the trick behind Pollard's P-1? (the fact that the
order of an element in the multiplicative subgroup is related to the
factorisation of the cardinality of the ring).
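As a sketch of how that plays out, here is a rough stage-1 Pollard p-1
using GMP (my illustration only): compute a^(B!) mod n by repeated
powering, then take a gcd. A factor p of n falls out when the order of
the base mod p divides B!, which happens in particular when p-1 is built
only from primes up to B.
    #include <gmp.h>

    /* returns 1 and sets factor if a non-trivial factor is found */
    int pollard_p_minus_1(mpz_t factor, const mpz_t n, unsigned long B)
    {
        mpz_t a;
        int found = 0;
        unsigned long j;

        mpz_init_set_ui(a, 2);
        for (j = 2; j <= B; j++)
            mpz_powm_ui(a, a, j, n);    /* a = a^j mod n, so a ends as 2^(B!) */

        mpz_sub_ui(a, a, 1);
        mpz_gcd(factor, a, n);          /* shared factor with n, if any */
        if (mpz_cmp_ui(factor, 1) > 0 && mpz_cmp(factor, n) < 0)
            found = 1;

        mpz_clear(a);
        return found;
    }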
Phil
------------------------------
From: Phil Carmody <[EMAIL PROTECTED]>
Subject: Re: Simple Crypto II, the public key...
Date: Wed, 13 Jun 2001 11:33:44 GMT
Vincent Quesnoit wrote:
>
> The modpow function can be greatly improved by use of repeated squaring
> instead of simple multiplications; this would reduce the number of
> operations to a maximum of 2*log2(exponent).
>
> int mod_pow(int ch, int exp, int modulo)
> {
>     int result, power;          /* 'power' holds ch^(2^i) mod modulo */
>
>     result = 1;
>     power = ch;
>     while (exp != 0) {
>         if ((exp & 1) == 1) {   /* multiply in the current power on set bits */
>             result = (result * power) % modulo;
>         }
>         power = (power * power) % modulo;
>         exp >>= 1;
>     }
>     return result;
> }
> HTH,
> Vincent
I'm a 'right-to-left' man myself.
HAC I believe has both algorithms explicitly.
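For comparison, a quick sketch of the left-to-right variant (mine, not
copied from HAC), scanning the exponent from the top bit down; it shares
the overflow caveat of the code above once the modulus gets large:
    /* assumes 32-bit unsigned; products overflow for modulo > 65535 */
    unsigned mod_pow_l2r(unsigned base, unsigned exp, unsigned modulo)
    {
        unsigned result = 1;
        int bit;

        for (bit = 31; bit >= 0; bit--) {
            result = (result * result) % modulo;     /* square every step */
            if ((exp >> bit) & 1)
                result = (result * base) % modulo;   /* multiply on 1 bits */
        }
        return result;
    }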
Phil
------------------------------
From: Phil Carmody <[EMAIL PROTECTED]>
Subject: Re: Sophie-Germain Primes for sale
Date: Wed, 13 Jun 2001 12:05:14 GMT
Tom St Denis wrote:
>
> Made you look.
>
> No seriously *free* SG primes are at my website
>
> http://tomstdenis.home.dhs.org/primes.txt
>
> A SG prime is of the form p = 2q + 1, where q itself is prime and of course
> p mod 4 = 3.
p = 5 (mod 6)
2p+1 = 5 (mod 6)
Any other residue class either makes p even, makes p equal to 3, or makes
2p+1 divisible by 3 (p = 1 (mod 6) gives 2p+1 = 3 (mod 6)).
This relation permits there to be chains (Cunningham chains of the first
kind):
http://listserv.nodak.edu/scripts/wa.exe?A2=ind0103&L=nmbrthry&P=R423
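A toy illustration of the chain idea (my own sketch, trial division only,
nowhere near the sizes in Tom's list): start from a prime p = 5 (mod 6)
and keep applying p -> 2p+1 while the result stays prime.
    #include <stdio.h>

    static int is_prime(unsigned long n)
    {
        if (n < 2) return 0;
        for (unsigned long d = 2; d * d <= n; d++)
            if (n % d == 0) return 0;
        return 1;
    }

    int main(void)
    {
        unsigned long p = 89;               /* 89 == 5 (mod 6) */
        while (is_prime(p)) {
            printf("%lu\n", p);             /* 89, 179, 359, 719, 1439, 2879 */
            p = 2 * p + 1;
        }
        return 0;
    }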
> They are useful for DH and other DLP quests. Since they are SG all bases
> (other than trivial ones) generate a group of order q which for some of the
> primes is huge.
>
> How to read the list?
>
> (size in bits) p==digits
>
> so
>
> (1024)
> p==1460030136858689905633918046800667131280181317311313833593791824930185113
> 6348768360708424001573886964262443996309806738655987368721064584308025706111
> 6036949438982968995332694598033744487708557681139725773222031612812763129935
> 3164025680222964658192849043699670677857470257248695463297505596077769310893
> 41764287
>
> Is a 1024 bit SG prime. I am building up the list with larger and larger
> primes.
How large?
http://www.utm.edu/research/primes/lists/top20/SophieGermain.html
rank  prime                 digits  who            when  comment
8     885817959.2^24711-1   7448    Shefl, Gallot  2001  Sophie Germain (p)
That's my girlfriend, that is. (Shefl, not the number or Sophie
Germain!)
> And yes FYI I live a very sheltered life.
So what? I do in spades, and my ambition is no trumps...
Phil
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Yarrow PRNG
Reply-To: [EMAIL PROTECTED]
Date: Wed, 13 Jun 2001 12:00:33 GMT
Mark Wooding <[EMAIL PROTECTED]> wrote:
: [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
:> 1. Has anyone reviewed the properties of the PRNG? Is it as good as
:> it claims?
: The paper claims 160-bit security for Yarrow-160. If you interpret that
: to mean that you can't tell the difference between Yarrow-160 and real
: random data with less than 2^{160} effort then the claim is false.
: The output stage uses triple-DES in counter mode. Output is in 64-bit
: chunks. If the source were truly random, we'd expect two consecutive
: outputs to be equal with probability 2^{-64}. With a counter-mode
: output, this would never happen at all, except that Yarrow-160 rekeys
: the cipher every P_G = 10 output blocks. [...]
Yes - the use of a block cypher in CTR mode was always a rather glaring
problem with Yarrow, IMO. It's pleasing to see the extent of the problem
quantified.
--
__________
|im |yler [EMAIL PROTECTED] Home page: http://alife.co.uk/tim/
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************