Cryptography-Digest Digest #103, Volume #13       Sun, 5 Nov 00 14:13:01 EST

Contents:
  Re: On obtaining randomness (Richard Heathfield)
  Re: BENNY AND THE MTB? (SCOTT19U.ZIP_GUY)
  Re: On obtaining randomness (Dido Sevilla)
  Re: Is RSA provably secure under some conditions? (Jan Fedak)
  Birthday messages (Steve Portly)
  Re: Calculating the redundancy of English? ([EMAIL PROTECTED])
  Re: Brute force against DES (Francois Grieu)
  Re: Detectable pattern in encoded steganographic images ([EMAIL PROTECTED])
  [newbie] Is PGP 7.0 hash extension secure? ("Thomas J. Boschloo")
  Re: BENNY AND THE MTB? ("Matt Timmermans")
  Re: hardware RNG's (Guy Macon)
  Re: On obtaining randomness (Mok-Kong Shen)
  Re: Crypto Export Restrictions ("Trevor L. Jackson, III")
  Re: hardware RNG's ("Trevor L. Jackson, III")
  Re: hardware RNG's ("Trevor L. Jackson, III")
  PGP 7.0 - Key Reconstruction  - White Paper now online (Robert Guerra)

----------------------------------------------------------------------------

Date: Sun, 05 Nov 2000 15:06:22 +0000
From: Richard Heathfield <[EMAIL PROTECTED]>
Subject: Re: On obtaining randomness

Mok-Kong Shen wrote:
> 
> According to the British Museum model, given sufficient
> time, monkeys at keyboards could eventually produce all
> the works in the literature collection there.

No. The Sun will explode long before they complete even a Shakespearean
soliloquy.

> So
> evidently there is enough entropy present in the numerous
> volumes written by humans to be found in the libraries.

Maybe there is, but it isn't evident from your first sentence.

April 1st isn't for a while yet.

<snip>


-- 
Richard Heathfield
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: BENNY AND THE MTB?
Date: 5 Nov 2000 15:07:16 GMT

[EMAIL PROTECTED] (Bryan Olson) wrote in 
<8u3cip$vr7$[EMAIL PROTECTED]>:

>Tim Tyler wrote:
>> Matt Timmermans wrote:
>>
>> : Because the final mapping (trivial decoding) from FOstreams
>> : to byte-granular files is bijective, the encryption can
>> : perform any reversible operation on the FOstream without
>> : changing the bijective nature of the entire process,
>> : including operations that change the number of significant
>> : bits.
>>
>> I think this is what I didn't grasp.  It's obvious, really ;-)
>
>But subtle.  I had to sketch a proof to convince myself the
>encryption is bijective on finitely odd streams.  A key point
>is that for streams of more than 128 (significant) bits, the
>transformation preserves the (significant) bit length.  For
>streams of 128 or fewer significant bits, it's not length
>preserving but will never map to a stream of more than 128
>(significant) bits.  Thus we can separately show it's
>bijective on the set of <=128-bit FO streams, and bijective
>on the set of >128-bit FO streams.

   I don't want to lose your faith in the method, but what do you
mean by a bit stream of 128 significant bits?  Do you mean that,
counting the first bit in the first position as one, by 128
significant bits the last bit, number 128, would be a one and would
just fill a 16-byte buffer?


David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
        http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website **now all allowed**
        http://members.xoom.com/ecil/index.htm
Scott LATEST UPDATED source for scott*u.zip
        http://radiusnet.net/crypto/  then look for
  sub directory scott after pressing CRYPTO
Scott famous Compression Page
        http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
I leave you with this final thought from President Bill Clinton:

------------------------------

From: Dido Sevilla <[EMAIL PROTECTED]>
Subject: Re: On obtaining randomness
Date: Sun, 05 Nov 2000 23:41:06 +0800


Compare with the Beale cipher, which actually uses literature as the
key...

--
Rafael R. Sevilla <[EMAIL PROTECTED]>         +63 (2)   4342217
ICSM-F Development Team, UP Diliman             +63 (917) 4458925
OpenPGP Key ID: 0x0E8CE481

------------------------------

From: [EMAIL PROTECTED] (Jan Fedak)
Subject: Re: Is RSA provably secure under some conditions?
Date: Sun, 5 Nov 2000 15:28:43 +0000 (UTC)

I think searching the key space is not really an option since factoring
a large integer is much faster than an exhaustive search for a key.

By "secure" I'd mean something like: it's provably as difficult to break
RSA as to factor $n$.

Jan

In article <[EMAIL PROTECTED]>, Douglas A. Gwyn wrote:
>Jan Fedak wrote:
>> I wonder: are there any conditions under which RSA is provably secure?
>
>You would need to define your term "secure", but certainly
>RSA can in principle be broken, given a sufficient
>(modest) amount of ciphertext (assuming considerable
>redundancy exists in the plaintext), by searching the
>deciphering-key space for a value that converts the ciphertext
>to a highly redundant (testable) output (putative plaintext).


-- 
  Jan Fedak                            talk:[EMAIL PROTECTED]
  mailto:[EMAIL PROTECTED]                    mailto:[EMAIL PROTECTED]
                Linux - the ultimate NT Service Pack.  

------------------------------

From: Steve Portly <[EMAIL PROTECTED]>
Subject: Birthday messages
Date: Sun, 05 Nov 2000 11:12:21 -0500

For a 2000-character note written in English, what is the approximate
key length that would produce more than one valid plaintext message in
a brute force attack?  And if the message is written in Japanese?

Thank you very much
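
A rough handle on this comes from Shannon's unicity-distance argument:
for an n-character message with redundancy D bits per character, the
expected number of spurious keys is about 2^(H(K) - n*D), so more than
one valid-looking plaintext is expected once the key entropy H(K)
exceeds roughly n*D bits.  A minimal sketch of the arithmetic in C,
assuming the usual ~3.2 bits/char redundancy figure for English and a
placeholder figure for Japanese (which depends heavily on the
character encoding):

    #include <stdio.h>

    int main(void)
    {
        double n = 2000.0;           /* message length in characters */
        double d_english  = 3.2;     /* ~bits of redundancy per char */
        double d_japanese = 4.0;     /* placeholder assumption */

        /* Spurious keys ~ 2^(H(K) - n*D): more than one valid
           plaintext needs H(K) above roughly n*D bits. */
        printf("English : key entropy above ~%.0f bits\n", n * d_english);
        printf("Japanese: key entropy above ~%.0f bits\n", n * d_japanese);
        return 0;
    }

For English that works out to a key of very roughly 6400 bits; below
that, a brute-force search is expected to turn up only the true
message as readable text.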



------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Calculating the redundancy of English?
Date: Sun, 05 Nov 2000 16:35:22 GMT

Richard Heathfield <[EMAIL PROTECTED]> wrote:
> No idea. Here's a single letter frequency analysis for Sir Arthur Conan
> Doyle though - all the Holmes stuff, plus quite a bit of poetry and
> other stuff, all from one CD-ROM:
[...]
> Looks rather modern, doesn't it?

Reasonably so.  It was more idle curiosity than anything else that
made me wonder.  I honestly don't think you'll see any obvious
differences in letter frequencies, just in word frequencies.
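
For anyone who wants to build such a table themselves, a minimal
single-letter frequency counter (reads a corpus on stdin; the Conan
Doyle figures themselves were snipped above):

    #include <stdio.h>
    #include <ctype.h>

    int main(void)
    {
        unsigned long count[26] = {0}, total = 0;
        int c;

        while ((c = getchar()) != EOF) {
            if (isalpha(c)) {
                count[tolower(c) - 'a']++;
                total++;
            }
        }
        for (c = 0; c < 26; c++)
            printf("%c %7.3f%%\n", 'a' + c,
                   total ? 100.0 * count[c] / total : 0.0);
        return 0;
    }

Run as, e.g., "freq < corpus.txt".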

-- 
Matt Gauthier <[EMAIL PROTECTED]>

------------------------------

From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: Brute force against DES
Date: Sun, 05 Nov 2000 18:35:20 +0100

"Samir" <[EMAIL PROTECTED]> wrote:

> I can use 20 computers, for a known-plaintext attack.
First compute your odds of success.  In rough numbers, a top-end CPU
can test 2^22 keys/s using the best known optimizations.  So in a year,
20 of them will have tested about
   20*3600*24*365*2^22/2^56 = 3.7%  of the key space,
and those are the odds of success.
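
The same arithmetic as a small C program, for anyone who wants to vary
the numbers (machine count, test rate, or key size):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double cpus     = 20.0;
        double rate     = 4194304.0;              /* 2^22 keys/s each */
        double seconds  = 3600.0 * 24.0 * 365.0;  /* one year */
        double keyspace = pow(2.0, 56.0);         /* DES: 2^56 keys */

        printf("fraction searched: %.1f%%\n",
               100.0 * cpus * rate * seconds / keyspace);
        return 0;
    }

This prints 3.7%, matching the figure above.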



> Is the best way for the server to search with random selection ?
No.  If the search algorithm selects keys at random
- there is statistically no benefit; if you are afraid the key
  might be chosen so that you test it last, defeat this strategy
  by just taking the starting point at random (see the sketch
  below).
- it takes some time to generate the random choices, compared to a
  simple order, so fewer keys will be tested with the given
  resources (machines*time); the more random, the more lost time.
- if you use true randomness, you will need more memory than you
  can afford to remember which keys have already been tested, so
  you will most probably end up testing quite a few keys several
  times.
- optimizations made possible by orderly scanning of the keys are
  lost (see my post on fr.misc.cryptology).
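
A sketch of the random-start idea, with try_key() as a hypothetical
test against the known plaintext:

    #include <stdint.h>

    extern int try_key(uint64_t key);    /* hypothetical: 1 on a match */

    uint64_t search(uint64_t random_start)
    {
        const uint64_t SPACE = UINT64_C(1) << 56;   /* DES key space */
        uint64_t i, k;

        /* Orderly sequential scan, but from a random origin: keeps
           the speed of sequential testing while ensuring nobody can
           arrange for the key to be tested last. */
        for (i = 0; i < SPACE; i++) {
            k = (random_start + i) & (SPACE - 1);   /* wrap around */
            if (try_key(k))
                return k;
        }
        return ~UINT64_C(0);    /* exhausted without a match */
    }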

for more pointers see
<http://www.distributed.net/des/index.html>
<http://www.distributed.net/des/index.html.fr>

  Francois Grieu

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Detectable pattern in encoded steganographic images
Date: Sun, 05 Nov 2000 17:27:47 GMT

In article <8tsqgv$uv1$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] wrote:
>
> upon closer examination and comparison of e-mails with the plain
> parchment bmp background, and the same background with a textfile
> steganographically hidden within the background, found a very clear
> pattern in the base 64 encoding {when viewing the 'message source' and
> the encoding of the .bmp }, but no obvious patterns in the encoding of
> the plain .bmp
>
follow-up:

have tested .gif, .jpg, .wav formats,
no obvious detectable pattern found in the base-64 encoding,

have changed the parchment bmp background to a .gif format, put the
same pgp file in using s-tools 4, no detectable pattern found in the
base 64 encoding,

with the .bmp format, if the base 64 encoding is viewed without line-
wrapping {as a block of lines of equal length - the default view when
checking message source, in full screen view }
there is an obvious, clear pattern of columns of M and N as repeating
4th characters throughout the block,
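
One plausible mechanism for columns like these (a guess, not a claim
about s-tools 4 internals): base 64 encodes each 3-byte group as 4
characters, so data that repeats with a 3-byte period encodes to the
same 4-character group over and over, and with a fixed line length
(always a multiple of 4) the repeats stack up as vertical columns.
A small C illustration, using an assumed tan RGB triple as a stand-in
for the parchment colour:

    #include <stdio.h>

    static const char b64[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    int main(void)
    {
        /* A 24-bit pixel row of a nearly uniform colour: the same
           3-byte RGB triple (0xD2 0xB4 0x8C) repeated. */
        unsigned char buf[60];
        int i;
        for (i = 0; i < 60; i += 3) {
            buf[i] = 0xD2; buf[i+1] = 0xB4; buf[i+2] = 0x8C;
        }
        /* Each 3-byte group maps to the same 4 characters ("0rSM"
           here -- note the 'M' as repeating 4th character), so
           equal-length lines show the repeats as columns. */
        for (i = 0; i < 60; i += 3) {
            unsigned long n = ((unsigned long)buf[i] << 16)
                            | ((unsigned long)buf[i+1] << 8) | buf[i+2];
            printf("%c%c%c%c", b64[(n >> 18) & 63], b64[(n >> 12) & 63],
                   b64[(n >> 6) & 63], b64[n & 63]);
        }
        putchar('\n');
        return 0;
    }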

would recommend staying away from the .bmp format for steganography
[at least, if done from s-tools 4 ]

vedaal


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: "Thomas J. Boschloo" <[EMAIL PROTECTED]>
Subject: [newbie] Is PGP 7.0 hash extension secure?
Date: Sun, 05 Nov 2000 18:37:01 +0100

-----BEGIN PGP SIGNED MESSAGE-----

I had a little e-mail conversation with Michael Young about the hash
extension used in PGP (described in RFC 2440).  I am not as
knowledgeable as he is, so I can't tell whether his arguments hold.

My statement was that in order to use Twofish or Rijndael to its full
potential of 256-bit security, you would need a 256-bit hash function
like the one being drafted at <http://csrc.nist.gov/cryptval/shs.html>.
Michael Young, however, claims that the process used in PGP to extend
the SHA-1 algorithm to 256 bits is secure enough.  In PGP you have a
large amount of random data in your entropy pool, and a 256-bit
session key is generated by taking the SHA-1 hash of this data.  Then
the missing 96 bits are generated by feeding that same pool, with a
'\0' byte prepended to it, to the SHA-1 function.  Michael claims that
this is secure and that a successful attack based on this property
would leave the SHA-1 algorithm useless for digital signatures.
1) Is the extension method of the hash in RFC 2440 reason to worry
about PGP's promised 256-bit security?
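
A minimal sketch of that extension as described (using OpenSSL's
SHA1() for illustration; this follows the description above, not the
actual PGP source):

    #include <string.h>
    #include <openssl/sha.h>

    /* 256-bit session key = SHA-1(pool) || first 96 bits of
       SHA-1(0x00 || pool), per the description above. */
    void derive_key256(const unsigned char *pool, size_t poollen,
                       unsigned char key[32])
    {
        unsigned char prefixed[1025];    /* assumes poollen <= 1024 */
        unsigned char md[SHA_DIGEST_LENGTH];

        SHA1(pool, poollen, key);               /* first 160 bits */

        prefixed[0] = 0x00;                     /* prepended zero byte */
        memcpy(prefixed + 1, pool, poollen);
        SHA1(prefixed, poollen + 1, md);
        memcpy(key + 20, md, 12);               /* remaining 96 bits */
    }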

During a web search I found this document on the extension of a RIPEMD
hash:
<http://www.esat.kuleuven.ac.be/~bosselae/ripemd160.html#extensions>.
I don't get this! (Sorry.)  Why would anyone want a 320-bit result
from a hash function if it only has a security of 160 bits?  You could
just as well append 160 fixed bits to the hash, just as is done in
reducing the security of e.g. RC4-128 to 40 bits for export.  Why not
use the technique used in PGP to increase security to the full 320
bits, if it is so easy?  It would be useful for both purposes.

2) It is unclear to me whether the RIPEMD-320 hash is generated from
the same input buffer as RIPEMD-160 (allowing 'PGP tricks'), or only
from the RIPEMD-160 hash output (resulting in a fixed mapping of 2^160
inputs into a 2^320 space).

Further questions that arise from this for me are:
3) If PGP uses this 'extension' technique to generate large primes for new
keys, doesn't this reduce the security of those keys?
4) Where can I find the FIPS for SHA-256 and the others?  The Crypto++
4.0 library already seems to have them, while the NIST homepage claims
they are still proposing a 'draft' which won't be ready until 2001.
5) Why the 512-bit SHA-2 functions?  Wouldn't SHA-256 be enough for
256-bit session keys?  I have heard about the 'birthday attack', but
wouldn't this require an average database of 2^128 elements to produce
an 'unchosen' collision?  Isn't this a bit silly?

Lots of questions, few answers.  TIA!
Thomas

-----BEGIN PGP SIGNATURE-----
Version: PGPfreeware 5.5.3i for non-commercial use <http://www.pgpi.com>

iQB5AwUBOgWMqgEP2l8iXKAJAQEDawMfQm6XUhI9mbAnUbuIoeYf2G96gKElZ8bZ
b/5MtGEkkP7hXD4fPfryeVhb9/g/vL/O7LXwih3l7h9BoWFffv/dZCCf8rcVFnLw
GQd1iabIX7Ocjyky9/8c2iIYec01trd8RA0viQ==
=2ndY
-----END PGP SIGNATURE-----
-- 
We live in the Matrix <http://www.whatisthematrix.com>

http://wwwkeys.pgp.net:11371/pks/lookup?op=get&search=0x225CA009
Email: boschloo_at_multiweb_dot_nl


------------------------------

From: "Matt Timmermans" <[EMAIL PROTECTED]>
Subject: Re: BENNY AND THE MTB?
Date: Sun, 05 Nov 2000 17:52:12 GMT

Just to confirm some of the conclusions being reached in this thread:

Yes, Bryan's analysis is correct. All zeros counts as a finitely odd
stream -- the one that will TrivialDecode to the zero-length file.

On a more entertaining note:

"Bryan Olson" <[EMAIL PROTECTED]> wrote in message
news:8u3b2m$v0r$[EMAIL PROTECTED]...
> [...]
> Nevertheless, if the ciphertext is one
> byte, then that byte is the first byte of the sole Rijndael block
> XOR 0x55. (For some reason the program XORs every byte with
> 0x55, which is irrelevant to the bijection.)

Yes, the final compressed output file has 0x55 XORed with all bytes.  This
is irrelevant to the encryption and the bijection and the compression.

The XOR is there because people might like to test the bijection by
repeatedly decompressing (without decryption) arbitrary files that they
might find on their hard disk.  Lots of common files will have long runs of
zeros in them, though.  When those zeros get to the decompressor, they will
decompress to the current MPS, and the current MPS probability will be
adaptively increased.  If there are lots of 0s in the run, each MPS hit will
consume fewer of them, and so the end result of the decompression will be
very large.  And if the MPS is 0x00, then this effect will be greatly
amplified at the next decompression.

So the XOR with 0x55 makes it more practical to play with arbitrary files
you might find lying around, because long runs of 0x55 are relatively rare
in real files, and 0x55 is a less common MPS than 0x00.

This effect doesn't happen when you use decryption, of course, but XOR with
0x55 is a cheap operation, so I didn't bother to take it out of the
processing pipeline when a passphrase is specified.
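
The whitening step being described, as a trivial sketch (it is its own
inverse, which is why it can cheaply stay in the pipeline even when a
passphrase is used):

    #include <stddef.h>

    /* XOR every byte of the compressed output with 0x55; applying
       the same pass a second time restores the original data. */
    void xor55(unsigned char *buf, size_t len)
    {
        size_t i;
        for (i = 0; i < len; i++)
            buf[i] ^= 0x55;
    }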





------------------------------

From: [EMAIL PROTECTED] (Guy Macon)
Subject: Re: hardware RNG's
Date: 05 Nov 2000 18:02:27 GMT

Terry Ritter wrote:
>
>One could argue that since one does not know the intended use, one
>should make the device as good as possible, even for cryptography.
>But I think the issue is larger:
>
>For cryptography, the whole point of building such a device would seem
>to be to rely upon quantum events as opposed to the deterministic
>events which dominate computing.

I think that a lot of the discussion starts like this:

Person A: "An OTP is perfectly immune to a cryptographic attack, even
           by an opponent with infinite resources and cleverness"

Person B: "An OTP assumes a true random munber generator (TRNG).
           In theory, there is no such thing, so you OTP is imperfect.

Person A: "Quantum Events are random,  Use one as your TRNG" 

Person B: "But turning the Quantum Event into digital numbers
           is imperfect"

(With side conversations about the practicality of OTPs, the many
non-cryptographic ways to break security based on an OTP, whether
Quantum Events are really random, etc.)

>I claim that it is important to be
>able to identify a theoretical distribution in the noise source, so as
>to be assured from whence this randomness comes.  After that it will
>need to be processed of course, but first we need to validate the
>quantum nature of the source.  Unless we can in practice measure a
>theoretical distribution, then turn off the source and see massive
>change, we don't have evidence which testifies that the source is
>quantum and under our control.  

I agree up to a point.  I see the above as necessary but not
sufficient.  Certainly if I remove the source of Quantum Events and
still get what looks like randomness, then I am unsure that my
randomness comes from the Quantum Events.  I am not sure that the
reverse is true.

I do like the idea (AFTER making the Ritter Test) of XORing in other
unrelated RNGs and PRNGs, as sketched below.
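
A sketch of that combination, with hw_random_byte() as a hypothetical
driver call for the quantum noise source and rand() standing in for
any unrelated PRNG:

    #include <stdlib.h>

    extern unsigned char hw_random_byte(void);   /* hypothetical */

    /* If the two inputs are independent, XOR cannot reduce the
       unpredictability contributed by the hardware source: the
       output is at least as hard to guess as either input alone. */
    unsigned char combined_random_byte(void)
    {
        return hw_random_byte() ^ (unsigned char)(rand() & 0xFF);
    }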



------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On obtaining randomness
Date: Sun, 05 Nov 2000 19:03:00 +0100



Richard Heathfield wrote:
> 
> Mok-Kong Shen wrote:
> >
> > According to the British Museum model, given sufficient
> > time, monkeys at keyboards could eventually produce all
> > the works in the literature collection there.
> 
> No. The Sun will explode long before they complete even a Shakespearean
> soliloquy.
> 
> > So
> > evidently there is enough entropy present in the numerous
> > volumes written by humans to be found in the libraries.
> 
> Maybe there is, but it isn't evident from your first sentence.
> 
> April 1st isn't for a while yet.

Make an actual attempt at predicting even a stream from
a moderately complicated PRNG scheme and you will find
yourself biting stone, even in the month of April. Try
something; don't just write big words.

M. K. Shen

------------------------------

Date: Sun, 05 Nov 2000 13:12:19 -0500
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,talk.politics.misc,alt.freespeech
Subject: Re: Crypto Export Restrictions

Anthony Stephen Szopa wrote:

> ... pure golden random numbers ...

Cool!

Do they glow in the dark too???


------------------------------

Date: Sun, 05 Nov 2000 13:23:59 -0500
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: hardware RNG's

David Schwartz wrote:

> Tim Tyler wrote:
> >
> > David Schwartz <[EMAIL PROTECTED]> wrote:
> > : Terry Ritter wrote:
> >
> > :> There is something wrong with this logic!  If various signals in the
> > :> area do affect the noise RNG, then our device is no longer based
> > :> solely on quantum effects.  That is very dangerous because it means
> > :> that one of the other effects which it uses might be controlled,
> > :> perhaps fairly easily.  I claim that what we want to do is to isolate
> > :> the noise signal from every other reasonable effect.
> >
> > :       This is probably the fundamental source of my disagreement with you.
> > : There is absolutely no need to isolate the noise source from all other
> > : possible sources, provided the noise is still there. [...]
> >
> > True - but it's a lot harder to figure out how much "real" noise you
> > have got if it's mixed in with a lot of "fake" noise - which might
> > eventually turn out to be rather deterministic.
>
>         Right. You can't see it in the final data, so you have to analyze the
> parameters that make the data up. In other words, you determine that the
> randomness is there from theoretical analysis, not from looking at the
> data.
>
> > "Provided the noise is still there" might be a bit of an act of faith - if
> > you can't actually see or measure the noise any more because it's swamped
> > with possibly-pseudo-random garbage.
>
>         *sigh* You missed my point. It doesn't _matter_ if it's swamped with
> pseudo-random garbage. 99% of the data can be garbage. So long as the
> noise is in there somewhere, you are set.

You are assuming that it is possible to collect the sum of the random and the
pseudo-random noises and concentrate the former.  This assumption is only true of
the digital portion of the process.  The analog sum-of-signals has to be
digitized.  Saturation effects will guarantee that some of the random signal will
be displaced by the pseudo-random signals.

All mixing should be done digitally.  Analog signals have to be handled in
isolation or their value is lost.  This applies to hum sources as well as crystal
drift.

Note also that if the analog source is mixed, it is impossible to
determine how much of its digitized transform is entropic.


------------------------------

Date: Sun, 05 Nov 2000 13:25:35 -0500
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: hardware RNG's

Tim Tyler wrote:

> David Schwartz <[EMAIL PROTECTED]> wrote:
> : Tim Tyler wrote:
> :> David Schwartz <[EMAIL PROTECTED]> wrote:
> :> : Terry Ritter wrote:
>
> :> :> There is something wrong with this logic!  If various signals in the
> :> :> area do affect the noise RNG, then our device is no longer based
> :> :> solely on quantum effects.  That is very dangerous because it means
> :> :> that one of the other effects which it uses might be controlled,
> :> :> perhaps fairly easily.  I claim that what we want to do is to isolate
> :> :> the noise signal from every other reasonable effect.
> :>
> :> :       This is probably the fundamental source of my disagreement with you.
> :> : There is absolutely no need to isolate the noise source from all other
> :> : possible sources, provided the noise is still there. [...]
> :>
> :> True - but it's a lot harder to figure out how much "real" noise you
> :> have got if it's mixed in with a lot of "fake" noise - which might
> :> eventually turn out to be rather deterministic.
>
> :       Right. You can't see it in the final data, so you have to analyze the
> : parameters that make the data up. In other words, you determine that the
> : randomness is there from theoretical analysis, not from looking at the
> : data.
>
> That doesn't sound as good as using a theoretical analysis *and* looking
> at the data.
>
> :> "Provided the noise is still there" might be a bit of an act of faith - if
> :> you can't actually see or measure the noise any more because it's swamped
> :> with possibly-pseudo-random garbage.
>
> :       *sigh* You missed my point. It doesn't _matter_ if it's swamped with
> : pseudo-random garbage. 99% of the data can be garbage. So long as the
> : noise is in there somewhere, you are set.
>
> Are you sure that it was not you who missed my point?  I was talking
> mainly about *measuring* the volume of the entropy - not *using*
> whatever entropy is present.
>
> :       The fact is, uncompensated crystal oscillators do in fact drift
> : unpredictably. Equally important, the frequency multipliers used in
> : motherboards to generate the FSB frequency drift unpredictably as well.
>
> This is not my field.  However, are you sure that the drift is
> unpredictable - and does not reflect environmental conditions
> (such as temperature) which might be measured or controlled?
>
> Certainly I for one would like to be able to eliminate such influences,
> and test the result.

Why would this be a valid test?  One should remove the signal of
interest (the source of randomness) as the test case.



------------------------------

From: Robert Guerra <[EMAIL PROTECTED]>
Crossposted-To: comp.security.pgp.tech,comp.security.pgp.discuss,alt.security.pgp
Subject: PGP 7.0 - Key Reconstruction  - White Paper now online
Date: Sun, 05 Nov 2000 18:31:22 GMT

Hi folks:

PGP 7.0 introduces several new features; the most hotly debated seems
to be the new "key reconstruction" feature.

I was able to attend Will Price's talk on it at the August Cypherpunks
meeting in San Francisco.  He explained the inner workings of how it
has been implemented.

The white paper which he and the folks at NAI authored is now online at 
the URL below. 


http://download.nai.com/products/media/pgp/pdf/literature/pgpkeyreconwhitepaper.pdf


From what I heard it has been carefully laid out; however, I'll leave
it up to the folks here to debate the fine points.


Regards

Robert
-- 
Robert Guerra <[EMAIL PROTECTED]>, Fax: +1(303) 484-0302 
WWW Page <http://crypto.yashy.com/www>
PGPKeys  <http://pgp.greatvideo.com/keys/rguerra/>

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
