Cryptography-Digest Digest #338, Volume #14 Fri, 11 May 01 17:13:01 EDT
Contents:
Re: Best encrypting algoritme ("Simon Johnson")
PRNG question from newbie (Bugs Bunny)
Re: PRNG question from newbie ("Tom St Denis")
extracting random bits from low-entropy data (Mark)
Re: extracting random bits from low-entropy data ("Tom St Denis")
Re: extracting random bits from low-entropy data (David Wagner)
Re: extracting random bits from low-entropy data (Mark)
Re: extracting random bits from low-entropy data ("Tom St Denis")
TC15 differentials ("Tom St Denis")
Re: PRNG question from newbie (John Myre)
Re: TC15 differentials ("Tom St Denis")
Re: TC15 differentials ("Tom St Denis")
Re: free en/decryption library ("Tor Rustad")
----------------------------------------------------------------------------
From: "Simon Johnson" <[EMAIL PROTECTED]>
Subject: Re: Best encrypting algoritme
Date: Fri, 11 May 2001 20:06:51 +0100
SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> [EMAIL PROTECTED] (Tom St Denis) wrote in
> <BR4J6.25523$[EMAIL PROTECTED]>:
>
> >
> >I still don't get your main points. If the system is a FSM (Finite
> >State Machine) such as any computer program (they must be finite) then
> >there are only a *finite* amount of states the program can be in. This
> >means that no matter what you do, if it's a FSM then I can write a
> >program to brute force it. There is no way to escape this logic. You
> >can only make brute force impractical (i.e huge key, or something to
> >that effect).
>
> Tom what you lack and refuse to learn or even seem to engage your
> brain is. I could write a cyrpto system with even a 1 byte block
> size. and 2 bits of key space. Lets say I take my messages I don't
> have many but I compress them done using bijective compression.
> and then encrypt that bijectively compressed message with a 2 bit
> key using bijective encryption.If you write a super duper brute force
> machine you may get all of the 4 message I may of sent. There may have
> been 407 messages. You have reduced it to 4. Know which message is it.
> The four are all equally likely.
>
> Yes when we go to longer message and longer keys. Many of the
> messages may not make sense. Since its hard to write a compressor
> for just the messages one might use. However the compressor can be tuned
> to the kind of messages we want to send. And if key long enough there
> will be many possible message that a key could map the encrypted
> message to.
I really don't follow what you mean by bijective compression. If compression
were not bijective, then what use would it be? You couldn't be sure your
decompressed file was indeed the decompressed version of the original.
> The point is why use an inferior system that allows most
> ciphertext message key combinations to be rejected as not
> even being possible. Knowing squat about the input. It also
> harder for the NSA to hid trap doors in the combination crypto
> system if its fully bijective.
Hrm... well... I don't know if your system is any better than Rijndael.
Got any source or a PDF describing your technique? Obviously, I won't accept
a statement like "This is good, but I can't show you the source" -- that's
sprinkling fairy-dust.
As for the NSA hiding trap doors... they don't need to. They know quite
well that the most insecure part of most cryptosystems is the human element. They
just exploit this weakness.
> Something that you think would be
> of interest to the crypto community. Though one can still add
> trap doors of a type. But thats another topic that I may explain
> someday to you once you understand the concepts involved here.
> David A. Scott
> --
> SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
> http://www.jim.com/jamesd/Kong/scott19u.zip
> My website http://members.nbci.com/ecil/index.htm
> My crypto code http://radiusnet.net/crypto/archive/scott/
> MY Compression Page http://members.nbci.com/ecil/compress.htm
> **NOTE FOR EMAIL drop the roman "five" ***
> Disclaimer:I am in no way responsible for any of the statements
> made in the above text. For all I know I might be drugged or
> something..
> No I'm not paranoid. You all think I'm paranoid, don't you!
>
------------------------------
From: Bugs Bunny <[EMAIL PROTECTED]>
Subject: PRNG question from newbie
Date: Fri, 11 May 2001 12:46:48 -0700
The purpose of a PRNG is to generate outputs that are as random as possible.
Another required quality (for crypto) is that one should not be able to guess
the next output given previous outputs and the exact PRNG algorithm.
Is it a correct assumption, then, that this is one of the reasons for
masking the seed to produce the output? If not, then knowing a single
output (the seed itself!) would enable you to generate the future
sequence.
If this is correct, then it seems not too bright to use the modulus
function on the seed value to produce the output stream, because given
a single known output you know so many bits of the seed. (Such as the
simple LC algorithm.)
So I thought it would be better to use a hash function instead of
modulus. With a good hash function, even a simple counter as the seed
would produce PRNG output that would make it more difficult to guess
the seed.
So, a good PRNG is really dependent on a good hash function?
Now I know that there are two purposes for a hash function. One is to
simply prevent collisions for table storage. For crypto purposes, a
second quality is required in that no information about the input is
preserved in the output.
Thanks for any feedback.
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: PRNG question from newbie
Date: Fri, 11 May 2001 19:56:42 GMT
"Bugs Bunny" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> The purpose of a PRNG is to generate as random as possible outputs.
> Another required quality (for crypto) is to no be able to guess the
> next output given previous outputs and the exact PRNG algorithm.
>
> Is it a correct assumtion then, that this is one of the reasons for
> masking the seed to produce the output? If not, then knowing a single
> output (the seed itself!) would enable you to generate the future
> sequence.
>
> If this is correct, then it seems not too bright to use the modulus
> function on the seed value to produce the output stream, because given
> a single known output you know so many bits of the seed. (Such as the
> simple LC algorithm.)
LCGs are not secure no matter how you use them. There is too much structure
between successive outputs.
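That structure can be seen directly in a toy example. The sketch below uses a
minimal, illustrative LCG (the constants are the well-known Numerical Recipes
ones, chosen only for concreteness; any textbook LCG behaves the same way) and
shows that a single observed raw output hands an attacker the entire future
stream:

```python
# Toy LCG whose raw state is emitted directly as the output, so one
# observed output lets an attacker reproduce every future output.
M = 2**32
A = 1664525       # multiplier  } classic Numerical Recipes constants,
C = 1013904223    # increment   } used here purely for illustration

def lcg(state):
    """Yield successive LCG outputs starting from `state`."""
    while True:
        state = (A * state + C) % M
        yield state

victim = lcg(123456789)          # secret seed
observed = next(victim)          # attacker sees a single output...

attacker = lcg(observed)         # ...and clones the generator from it
assert [next(victim) for _ in range(5)] == [next(attacker) for _ in range(5)]
```

Real designs truncate or mask the state before output for exactly this
reason, though even truncated LCGs have been broken with lattice techniques.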
> So I thought it would be better to use a hash function instead of
> modulus. With a good hash function, even a simple counter as the seed
> would produce PRNG output that would make it more difficult to guess
> the seed.
> So, a good PRNG is really dependent on a good hash function?
Not entirely true. For starters, using a "modulus", i.e., a ring or field of
integers, is not always a bad idea. For example, the BBS generator (Blum
Blum Shub) is considered secure and uses a large Blum composite as a
modulus.
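A toy-sized sketch of the BBS idea, for illustration only (the primes below
are absurdly small and chosen by me; a real deployment needs a secret Blum
integer of at least ~1024 bits):

```python
# Toy Blum Blum Shub: x[i+1] = x[i]^2 mod n, emit the low bit of each x.
# p and q must both be = 3 (mod 4) so that n = p*q is a Blum integer.
p, q = 11, 23     # toy primes -- hopelessly small, for illustration only
n = p * q

def bbs_bits(seed, count):
    """Return `count` pseudorandom bits from the toy BBS generator."""
    assert seed % p != 0 and seed % q != 0   # seed must be coprime to n
    x = (seed * seed) % n                    # x[0] = seed^2 mod n
    bits = []
    for _ in range(count):
        x = (x * x) % n
        bits.append(x & 1)
    return bits

bits = bbs_bits(3, 8)
```

The security argument reduces distinguishing these bits from random to the
hardness of factoring n, which is why the modulus must be large.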
Using a hash function for a PRNG has been suggested before. There are
several things to keep in mind.
1. Your seed must have enough entropy to prevent search attacks where the
attacker tries to guess the seed. Even if your seed is say 100 bits long it
must have 100 bits of entropy to be considered secure.
2. You must prevent one known PRNG output from giving the next or previous.
For example using H[i] = HASH(H[i-1]) where H[0] = HASH(seed), may appear
random but is not secure against this threat. Knowing H[i] will allow me to
know all future (but presumably not previous) outputs. Also H[0] cannot
equal seed or all the secret entropy is lost.
3. The PRNG should have a long period. Simply doing H[i] = HASH(H[i - 1]
|| SEED) is not enough. While this is forward- and backward-secure, the
period could be very short. (|| denotes concatenation of the bit strings.)
A good hash based PRNG would be
H[0] = HASH(SEED)
H[i] = HASH(H[i - 1] || SEED || i)
or even just
H[i] = HASH(i || SEED)
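The last construction, sketched with SHA-256 standing in for HASH (the hash
choice and the 8-byte counter encoding are my assumptions, not part of the
post):

```python
import hashlib

def hash_prng(seed: bytes, blocks: int):
    """Generate `blocks` outputs as H[i] = HASH(i || SEED)."""
    out = []
    for i in range(blocks):
        # Fixed-width counter encoding so that, e.g., counter 1 plus the
        # seed can never collide with a different (counter, seed) split.
        out.append(hashlib.sha256(i.to_bytes(8, "big") + seed).digest())
    return out

stream = hash_prng(b"high-entropy seed material goes here", 4)
```

Because each output depends on the secret seed and a distinct counter, one
known output reveals neither its neighbours nor the seed (assuming the hash
is one-way), and the counter guarantees the sequence cannot cycle early.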
> Now I know that there are two purposes for a hash function. One is to
> simply prevent collisions for table storage. For crypto purposes, a
> second quality is required in that no information about the input is
> preserved in the output.
You have your thoughts entangled. The output of a hash can be used to
identify the input, albeit it may take a long while. For example,
SHA("tom") will always output the same value over and over and over... so if
you ever saw a checksum 0FBA 12B4 .... (etc.) you could compare it against
SHA("tom"). If they matched, it's a good bet that was the input.
The main benefit of a hash function in a PRNG is a low level of collisions,
in the sense that H(x) == H(y) is rare for x != y, and a sense of one-wayness.
In other words, given Y from Y = H(X) it's hard to find X. The low level of
collisions preserves the entropy of the output and the one-wayness hides the
seed.
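The checksum-matching idea in Tom's example, as a few lines of Python (SHA-1
plays the role of SHA here, and the guess list is of course hypothetical):

```python
import hashlib

digest = hashlib.sha1(b"tom").hexdigest()

# The hash is deterministic: the same input always gives the same digest...
assert digest == hashlib.sha1(b"tom").hexdigest()

# ...so an observer holding only the digest can confirm a guessed input
# by hashing candidates and comparing.
guesses = [b"dick", b"harry", b"tom"]
recovered = next(g for g in guesses if hashlib.sha1(g).hexdigest() == digest)

# One-wayness means that WITHOUT a good guess, inverting the digest back
# to b"tom" is believed infeasible.
```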
Hope this helps,
Tom
------------------------------
From: [EMAIL PROTECTED] (Mark)
Subject: extracting random bits from low-entropy data
Date: Fri, 11 May 2001 20:17:57 +0000 (UTC)
[Apologies if this is a naive question, but I did try the FAQ first.]
How do I extract the entropy from low-entropy data to produce high-quality
random bits, assuming that I know a lower bound for the amount of entropy
in the input?
For example, suppose I want 100 uniform random bits, and my source of
randomness is the conversation in a chat room. Making the assumption that
there are at least 2 bits of entropy per line, it seems reasonable to
assume that I can convert 50 lines of chat into 100 bits, each bit of very
high (approaching 1) entropy.
(1) Is this a reasonable assumption and (2) is there a cookbook solution?
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: extracting random bits from low-entropy data
Date: Fri, 11 May 2001 20:21:31 GMT
"Mark" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
> [Apologies if this is a naive question, but I did try the FAQ first.]
>
> How do I extract the entropy from low-entropy data to produce high-quality
> random bits, assuming that I know a lower bound for the amount of entropy
> in the input?
>
> For example, suppose I want 100 uniform random bits, and my source of
> randomness is the conversation in a chat room. Making the assumption that
> there are at least 2 bits of entropy per line, it seems reasonable to
> assume that I can convert 50 lines of chat into 100 bits, each bit of very
> high (approaching 1) entropy.
>
> (1) Is this a reasonable assumption and (2) is there a cookbook solution?
It all depends on how you model the data, really.
If you use a zeroth-order model then 'h' is a low-frequency char, but in a
first-order model it isn't.
Basically, the most practical method is to simply be conservative and hash
the data.
The theoretical way you're looking for is to compress the 50 lines into
100 bits using some lossless codec. However, it is very rare that the
desired ratio will occur (hint: long/short chat lines).
Tom
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: extracting random bits from low-entropy data
Date: 11 May 2001 20:29:40 GMT
Mark wrote:
>How do I extract the entropy from low-entropy data to produce high-quality
>random bits, assuming that I know a lower bound for the amount of entropy
>in the input?
See <http://www.cs.berkeley.edu/~daw/rnd/index.html> for extensive
information on this topic. Roughly speaking, you hash it and feed
the result to a secure keystream generator (e.g., AES in CTR mode).
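A stdlib-only sketch of that two-stage recipe: hash the raw samples down to a
key, then expand the key with a keystream generator. (SHA-256 in a simple
counter mode stands in for AES-CTR here purely to avoid a third-party AES
dependency, and the chat lines are made up.)

```python
import hashlib

def extract_key(samples):
    """Stage 1: condense low-entropy samples into a 256-bit key."""
    h = hashlib.sha256()
    for s in samples:
        h.update(s)
    return h.digest()

def keystream(key, nbytes):
    """Stage 2: expand the key with a hash in counter mode
    (a stand-in for AES in CTR mode)."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

chat_lines = [b"hello", b"anyone here?", b"lol"]   # low-entropy input
key = extract_key(chat_lines)
random_bytes = keystream(key, 100)
```

The hash concentrates whatever entropy the samples carry into the key, and
the keystream stage lets you draw as many output bytes as you need without
touching the raw samples again.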
------------------------------
From: [EMAIL PROTECTED] (Mark)
Subject: Re: extracting random bits from low-entropy data
Date: Fri, 11 May 2001 20:38:03 +0000 (UTC)
"Tom St Denis" <[EMAIL PROTECTED]> wrote in
news:fRXK6.67716$[EMAIL PROTECTED]:
>
> "Mark" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
>>
>> [Apologies if this is a naive question, but I did try the FAQ first.]
>>
>> How do I extract the entropy from low-entropy data to produce
>> high-quality random bits, assuming that I know a lower bound for the
>> amount of entropy in the input?
>>
>> For example, suppose I want 100 uniform random bits, and my source of
>> randomness is the conversation in a chat room. Making the assumption
>> that there are at least 2 bits of entropy per line, it seems
>> reasonable to assume that I can convert 50 lines of chat into 100
>> bits, each bit of very high (approaching 1) entropy.
>>
>> (1) Is this a reasonable assumption and (2) is there a cookbook
>> solution?
>
> It all depends on how you model the data really.
>
> If you use a zeroth order model then 'h' is a low frequency char, but
> in a first order model it isn't.
>
> Basically the most pratical method is to simply be conservative and
> hash the data.
>
> The theoretical way you're looking for is to compress the 50 lines into
> a 100 bits using some lossless codec. However, it is very rare that
> the desired ratio will occur, hint: long/short chat lines.
>
> Tom
>
>
Sorry, I probably wasn't being clear. I did assume that hashing the data
somehow would be the answer, because you're right that the source data
isn't uniform in its entropy density.
So I guess what I'm really asking is this:
1. How does a newbie like me construct such a hashing function (which must
be flexible enough to take M bits of input and produce N bits of output,
where M and N are chosen by me) from standard components I can find source
code for (like DES or whatever)?
2. Does this procedure guarantee that the entropy is spread throughout the
output bits? (E.g., I don't want a situation where the first couple of
output bits contain all the entropy and the rest are non-random.)
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: extracting random bits from low-entropy data
Date: Fri, 11 May 2001 20:48:20 GMT
"Mark" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> "Tom St Denis" <[EMAIL PROTECTED]> wrote in
> news:fRXK6.67716$[EMAIL PROTECTED]:
>
> >
> > "Mark" <[EMAIL PROTECTED]> wrote in message
> > news:[EMAIL PROTECTED]...
> >>
> >> [Apologies if this is a naive question, but I did try the FAQ first.]
> >>
> >> How do I extract the entropy from low-entropy data to produce
> >> high-quality random bits, assuming that I know a lower bound for the
> >> amount of entropy in the input?
> >>
> >> For example, suppose I want 100 uniform random bits, and my source of
> >> randomness is the conversation in a chat room. Making the assumption
> >> that there are at least 2 bits of entropy per line, it seems
> >> reasonable to assume that I can convert 50 lines of chat into 100
> >> bits, each bit of very high (approaching 1) entropy.
> >>
> >> (1) Is this a reasonable assumption and (2) is there a cookbook
> >> solution?
> >
> > It all depends on how you model the data really.
> >
> > If you use a zeroth order model then 'h' is a low frequency char, but
> > in a first order model it isn't.
> >
> > Basically the most pratical method is to simply be conservative and
> > hash the data.
> >
> > The theoretical way you're looking for is to compress the 50 lines into
> > a 100 bits using some lossless codec. However, it is very rare that
> > the desired ratio will occur, hint: long/short chat lines.
> >
> > Tom
> >
> >
>
> Sorry, I probably wasn't being clear. I did assume that hashing the data
> somehow would be the answer, because you're right that the source data
> isn't uniform in its entropy density.
>
> So I guess what I'm really asking is this:
>
> 1. How does a newbie like me construct such a hashing function (which must
> be flexible enough to take M bits of input and produce N bits of output,
> where M and N are chosen by me) from standard components I can find source
> code for (like DES or whatever).
You don't. Use SHA or Tiger, etc..
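One way to get the "M bits in, N bits out" flexibility from an off-the-shelf
hash: a fixed-output hash like SHA-1 can be truncated or iterated with a
counter, while a modern extendable-output function such as SHAKE-256 (which
post-dates this thread) produces any requested length directly. A sketch,
assuming the SHAKE route:

```python
import hashlib

def extract(data: bytes, n_bits: int) -> bytes:
    """Hash arbitrary-length input down to ceil(n_bits / 8) bytes.
    shake_256 is an extendable-output function, so any output
    length can be requested from .digest()."""
    return hashlib.shake_256(data).digest((n_bits + 7) // 8)

bits100 = extract(b"...50 lines of chat, concatenated...", 100)
assert len(bits100) == 13   # 100 bits rounds up to 13 bytes
```

This also addresses the entropy-spreading worry: a good hash diffuses every
input bit across all output bits, so no prefix of the output is "more random"
than the rest.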
> 2. Does this procedure guarantee that the entropy is spread throughout the
> output bits (e.g., I don't want a situation where the first couple of
> output bits contain all the entropy and the rest are non-random).
Tom
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: TC15 differentials
Date: Fri, 11 May 2001 20:50:28 GMT
I wrote a short program and I saw that if a single sbox is active in one
round, two must be active in the next round. I think that's a good
indication that std. differentials won't work.
I was wondering if someone could explain the boomerang attack method a bit
more. I didn't think it would apply since the structure is homogeneous, but
something tells me I am wrong.
The sbox test program is at
http://tomstdenis.home.dhs.org/tc15_sbox.c
I think next I should check for pairs (i.e., two sboxes).
--
Tom St Denis
---
http://tomstdenis.home.dhs.org
------------------------------
From: John Myre <[EMAIL PROTECTED]>
Subject: Re: PRNG question from newbie
Date: Fri, 11 May 2001 14:46:05 -0600
Bugs Bunny wrote:
<snip>
> So, a good PRNG is really dependent on a good hash function?
<snip>
The required qualities for a PRNG and a hash function are
indeed quite similar. And hashing a simple counter to get
a PRNG is often proposed (and, so far as I know, should work).
It doesn't *have* to be that way, however.
The creators of Rijndael (*) have said that "PRNG" and "hash
function" ought to be considered the same idea; they just
have different input and output sizes. In support of this
opinion, they have created an algorithm, Panama, that can be
used both ways. (I don't know how far they've gone with this
line of thought, or if they still support the idea.)
On the other hand, it might be instructive to consider, say,
how RC4 works (it is a reasonable PRNG). The idea is that each
output is related to the state but does not directly reveal it.
Then the state is updated, and the next step references a
different part of the state, and so on. So we don't need to
hash the outputs because the hiding of the state is already
built in to the algorithm.
(*) see http://csrc.nist.gov/encryption/aes/rijndael
(I should point out that RC4 and Panama are not described as
PRNG's but as stream ciphers. It really doesn't matter for
this discussion, except to note that slight weaknesses of any
given algorithm might prevent its use in one case but not
the other. An ideal PRNG could always be used as a stream
cipher in the usual way, and an ideal stream cipher could be
used as a PRNG by encrypting zeroes.)
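For concreteness, here is a minimal RC4 sketch of the state-update idea
described above: the key schedule builds a secret permutation, and each
output byte is drawn from that permutation as it keeps evolving, without the
state itself ever being revealed. This is purely illustrative; RC4's known
biases make it a poor choice for new designs.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Encrypt/decrypt `data` with RC4 under `key` (toy illustration)."""
    # Key-scheduling algorithm: build a key-dependent permutation S.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: each output depends on the state, and the state is swapped
    # (updated) after every byte -- the state is never output directly.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Used as a PRNG, per the parenthetical above: encrypt zeroes.
stream = rc4(b"Key", b"\x00" * 16)
```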
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: TC15 differentials
Date: Fri, 11 May 2001 21:00:31 GMT
"Tom St Denis" <[EMAIL PROTECTED]> wrote in message
news:ogYK6.67801$[EMAIL PROTECTED]...
> I wrote a short program and I saw that if a single sbox is active in one
> round two must be active in the next round. I think that's a good
> indication that std. differentials won't work.
Modified the program. From a single active sbox the number of active sboxes
in the next round will follow this model
2: 0.125000
3: 0.187500
4: 0.250000
5: 0.187500
6: 0.125000
7: 0.062500
So for the most part 3,4,5 sboxes will be active in the next round (62.5% of
the time).
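As a quick arithmetic check on the table: the listed probabilities sum to
0.9375 (slightly under 1, presumably because a case such as 8 active sboxes
is omitted from the printout), and the 3-5 band is indeed 62.5%:

```python
# Transition probabilities exactly as listed in the table above.
probs = {2: 0.125, 3: 0.1875, 4: 0.25, 5: 0.1875, 6: 0.125, 7: 0.0625}

# All values are dyadic fractions, so these float sums are exact.
total = sum(probs.values())
print(total)                            # 0.9375 -- just under 1
print(probs[3] + probs[4] + probs[5])   # 0.625, matching the 62.5% figure
```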
Tom
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: TC15 differentials
Date: Fri, 11 May 2001 21:06:21 GMT
"Tom St Denis" <[EMAIL PROTECTED]> wrote in message
news:PpYK6.67826$[EMAIL PROTECTED]...
>
> "Tom St Denis" <[EMAIL PROTECTED]> wrote in message
> news:ogYK6.67801$[EMAIL PROTECTED]...
> > I wrote a short program and I saw that if a single sbox is active in one
> > round two must be active in the next round. I think that's a good
> > indication that std. differentials won't work.
>
> Modified the program. From a single active sbox the number of active
sboxes
> in the next round will follow this model
>
> 2: 0.125000
> 3: 0.187500
> 4: 0.250000
> 5: 0.187500
> 6: 0.125000
> 7: 0.062500
Hmm... nope that doesn't make sense... the prob is >1...
arrg..
Tom
------------------------------
From: "Tor Rustad" <[EMAIL PROTECTED]>
Subject: Re: free en/decryption library
Date: Fri, 11 May 2001 23:06:05 +0200
"Niklaus Schild" <[EMAIL PROTECTED]> wrote in message
> >
> > > >Hi,
> > > >I am new with en/decryption and I am looking for a free and open
> > > >en/decryption C/C++ library that compiles with gcc and C++ Builder.
> > >
> > >
> > > Try cryptlib
> > >
> > > http://www.cs.auckland.ac.nz/~pgut001/cryptlib/
> >
> > Very good library indeed and the documentation is excellent aswell, but
> > there are some licence issues.
>
> do you have a lot of experience with cryptlib?
I looked at it for the first time quite a while ago, and last year I
evaluated the toolkit for a financial institution.
> I'm using cryptlib since a week and I agree with you, when you say that
> this is a good library. But what I miss is, something like a mailinglist
> or a discussion group for users of this library. I have some problems
> (for example: cryptDecrypt() does not encrypt the whole buffer, the
> first bytes are still some curious ascii symbols) and cant find
> something about it. Do you know something some discussion pages or
> mailinglists?
I am lurking on a rather active mailing list (6 new messages since yesterday)
where Peter Gutmann himself provides the answers.
Send mail to
[EMAIL PROTECTED]
with
subscribe cryptlib
but check a mail archive before posting your question; it may very well have
been answered a long time ago.
> Or do you know a solution to my problem?
Not without looking at the problem... :-)
Look at the documentation and the example code once more, and I guess you
are able to fix it yourself.
--
Tor <torust AT online DOT no>
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************