Cryptography-Digest Digest #859, Volume #10       Fri, 7 Jan 00 03:13:01 EST

Contents:
  Re: Unsafe Advice in Cryptonomicon (NFN NMI L.)
  Re: Truly random bitstream (NFN NMI L.)
  Re: Questions about message digest functions ("Joseph Ashwood")
  Re: simple block ciphers (Tom St Denis)
  Re: Square? (Tom St Denis)
  Re: OLD RLE TO NEW BIJECTIVE RLE (Tom St Denis)
  Re: How to pronounce "Vigenere"? ("Zuldare")
  Re: Blowfish ("r.e.s.")
  Re: Questions about message digest functions ([EMAIL PROTECTED])
  Re: Wagner et Al. ("Rick Braddam")
  Re: OT Re: letter-frequency software (Bill Unruh)
  Re: Questions about message digest functions ("Joseph Ashwood")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (NFN NMI L.)
Subject: Re: Unsafe Advice in Cryptonomicon
Date: 07 Jan 2000 05:22:39 GMT

<<about a computer room with an electromagnet in the door frame that wipes any
media being carried in or out>>

Peter Gutmann, "Secure Deletion of Data from Magnetic and Solid-State Memory".
The NSA can still read it.
My advice: on the door, put flamethrowers. If you can vaporize the magnetic
coating, the Adversary is screwed.

S. "Degaussing is a cool word" L.

------------------------------

From: [EMAIL PROTECTED] (NFN NMI L.)
Subject: Re: Truly random bitstream
Date: 07 Jan 2000 05:23:50 GMT

<<And there's no such thing as _perfect_ conductivity...
>
>: And there is no fluid with zero viscosity...>>

S. "Superconductivity, superfluidity" L.

------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Questions about message digest functions
Date: Thu, 6 Jan 2000 21:22:24 -0800

I know I've been silent on this until now, but I think I may have something
of import to say.

One of you is saying (as near as I can tell):
A good hash function should, for a given input length, map an equal number
of inputs to each potential output.

The other is saying (again as near as I can tell):
A hash function should be non-invertible even for the minimal length.

I personally think that you're no
"Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> lordcow77 <[EMAIL PROTECTED]> wrote:
>
> : Are you utterly unable to comprehend this simple and widely accepted
> : concept that a hash should approximate a pseudo-random function?
>
> I don't care *how* "simple" and "widely accepted" it is - it's dead wrong.
>
> For /many/ applications hashes should make finding two messages which hash
> to the same value as difficult as possible.  Alternatively (and perhaps
> more commonly) they should make finding a second message with the same
> hash as a given message as difficult as possible.
>
> A hash that has the distribution of an unbiased "pseudo random function"
> demonstrably fails to get anywhere near the optimum collision resistance.
> Consequently it fails to offer the property demanded by a good hash - that
> of making finding multiple messages with the same hash as hard as
> possible.  Most obviously, it fails against a brute force search through
> the space of possible messages for a matching hash.
>
> For example, when the hash size is equal to the message size, use of a
> hash that simulates a PRF will introduce totally unnecessary hash
> collisions.  Generally speaking - for most applications of a hash - this
> is not good.
>
> If collision resistance is why you are using a hash, you should ideally
> avoid those that simulate pseudo random functions - *especially* if you
> can't afford to use a large hash, and the information you are hashing is
> not gigantic compared to the size of the hash.
>
> [snip more references]
>
> : You might also want to consult Knuth's TAOCP where he discusses
> : noncryptographic hashing.
>
> My comments apply equally to non-cryptographic hashing.  If avoiding
> hash collisions is important, a PRF is likely to be the wrong model for
> the ideal that a hash function should approach.
>
> My argument is /very/ simple.  I have seen no coherent criticism of it.
>
> If nobody can distill the wisdom of the literature references given into
> some sort of reason why my argument is wrong - and in fact collision
> resistance in the hash should be sacrificed for some currently-unspecified
> higher goal - I will not be impressed.
>
> If you have a large hash, failure to deviate from the distribution given
> by a PRF in the correct manner as the messages become small will not be
> terribly important.
>
> However, if the hash size is small - as it will sometimes be practically
> constrained to be - a PRF is simply completely the wrong model for an
> ideal hash, if you care at all about avoiding hash collisions.
> --
> __________
>  |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]
>
> Enough research will tend to support your theory.





------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: simple block ciphers
Date: Fri, 07 Jan 2000 05:17:53 GMT

In article <[EMAIL PROTECTED]>,
  Anton Stiglic <[EMAIL PROTECTED]> wrote:
> You are right, you would need a more intelligent attack to get p.  I
> believe that one exists though, but I don't remember what it is or where
> you can find it (so can't give you much help there).

I think attacking the RNG that makes e/p will be more fruitful.

> As Mr. Molnar pointed out, though, my last argument was wrong.
> If e has to be prime, then to have gcd(e, p-1) = 1, e merely must not be
> a factor of p-1, so we don't get much info from this.
> But yes, gcd(e, p-1) must be 1 so that the inverse of e can exist
> (when you work in a group mod p, where p is prime, you can think
> of your exponents as working in a group mod p-1; this is because the
> order of the group is p-1.  In the general case, a group mod n has
> order phi(n), where phi is the Euler function.
> For an element x of a group mod n to have an inverse, you must have
> gcd(x, n) = 1.  Note that if n is prime, all x in the group mod n
> will be such that gcd(x, n) = 1, which is why all elements in a group
> mod p (p prime) have inverses).
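
A quick sketch of the quoted point, with toy numbers (the prime and exponent
below are made up purely for illustration):

# Exponents live mod p-1, so e is invertible exactly when gcd(e, p-1) = 1.
p = 101                      # a small prime modulus
e = 7                        # gcd(7, 100) = 1, so an inverse d exists
d = pow(e, -1, p - 1)        # d = e^-1 mod (p-1)   (Python 3.8+)
x = 42
c = pow(x, e, p)             # "encrypt": x^e mod p
assert pow(c, d, p) == x     # x^(e*d) = x^(1 + k*(p-1)) = x mod p (Fermat)
print(d, c, pow(c, d, p))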

I know this now, but thanks for the info :)

I am getting the hang of it slowly... but it's fun to make demos and
play with it, instead of doing it all on paper.

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Square?
Date: Fri, 07 Jan 2000 05:20:30 GMT

In article <[EMAIL PROTECTED]>,
  Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> Tom St Denis wrote:
> >
> > > It appears certain that any block cipher with sufficiently reduced
> > > number of rounds can be cracked. Hence the question: Why are block
> > > ciphers with (designed) variable, instead of constant, number of
> > > rounds not very common? With that parametrization an algorithm
> > > could adapt to the future advances of analysis techniques at least
> > > to some reasonable extent and hence survive.
>
> > You can always add more rounds to most ciphers.  It simply requires
> > more round keys and more rounds of course.  You could [as demonstrated
> > in the RC5 paper] encode the keysize/rounds in the ciphertext packet.
>
> I meant that most algorithms like DES have a 'fixed' number of rounds
> and normal users have no (practical) 'possibility' of using more
> rounds.  It would have been much better if the number of rounds were
> 'designed' to be user-choosable (to some extent).
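
As a sketch of what a user-choosable round count could look like, here is a
toy Feistel network whose only strength knob is the length of the key list.
Everything in it (the round function, the constants, the key values) is
invented for illustration; it is not DES and not a secure cipher.

# Toy 32-bit Feistel cipher: the number of rounds is just len(round_keys).
def f(half, key):
    # invented round function: multiply, add key, rotate, mask to 16 bits
    x = (half * 0x9E37 + key) & 0xFFFF
    return ((x << 5) | (x >> 11)) & 0xFFFF

def encrypt(block, round_keys):              # more keys = more rounds
    L, R = (block >> 16) & 0xFFFF, block & 0xFFFF
    for k in round_keys:
        L, R = R, L ^ f(R, k)
    return (L << 16) | R

def decrypt(block, round_keys):              # same network, run backwards
    L, R = (block >> 16) & 0xFFFF, block & 0xFFFF
    for k in reversed(round_keys):
        R, L = L, R ^ f(L, k)
    return (L << 16) | R

keys = [0x1234, 0xBEEF, 0x0F0F, 0x7777]      # four rounds; add more at will
assert decrypt(encrypt(0xDEADBEEF, keys), keys) == 0xDEADBEEF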

So 3DES is equal to three rounds of DES, right?

Duh... the average user will most likely think this, and that is why it's
not user-choosable.

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: OLD RLE TO NEW BIJECTIVE RLE
Date: Fri, 07 Jan 2000 05:24:42 GMT

In article <852o3e$1078$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote:
>  I think I have found the RLE that the old bzip uses.  Anyway, instead
> of ragging on how bad it is, I used it as a basis to model a new RLE
> that follows the one used there as closely as possible while removing the
> information that it needlessly adds to the compressed file.  Both the
> original and modified RLEs and their DOS executables are at my site
> for those who are interested.  It should help those interested in doing
> better compression.  Also, if one used an unadulterated arithmetic
> compression such as what Matt Timmermans used, you could improve that
> part of bzip also.
>  So you don't have to look, go to
> http://members.xoom.com/ecil/compres7.htm
> Take Care
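
For readers without the executables handy, here is a generic run-length coder
of the same general shape: a run of four identical bytes is followed by a count
of the extra repeats.  This is only an illustration of the idea; it is not
Scott's bijective construction and not necessarily old bzip's exact format.

# Generic RLE: runs of 4+ identical bytes become 4 bytes plus a count byte.
def rle_encode(data: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        if run >= 4:
            out += data[i:i + 4]
            out.append(run - 4)        # extra repeats beyond the first four
        else:
            out += data[i:i + run]
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(data):
        b, run = data[i], 1
        while i + run < len(data) and data[i + run] == b and run < 4:
            run += 1
        out += bytes([b]) * run
        i += run
        if run == 4:                   # four in a row => next byte is a count
            out += bytes([b]) * data[i]
            i += 1
    return bytes(out)

assert rle_decode(rle_encode(b"aaaaaaabcd" * 3)) == b"aaaaaaabcd" * 3
# Note: not every byte string is a valid encoding (e.g. an input ending right
# after four identical bytes) -- that redundancy is exactly what a bijective
# RLE, as discussed above, would remove.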

You are really reaching now.  The Huffman coder was bad enough, but RLE?

It's not even worth thinking about.  Why not work out a more efficient
Enigma or something?

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: "Zuldare" <[EMAIL PROTECTED]>
Subject: Re: How to pronounce "Vigenere"?
Date: Fri, 07 Jan 2000 05:45:57 GMT


        Talking face to face? My God, when did they start that??

LBMyers <[EMAIL PROTECTED]> wrote in message
news:84vg12$1l0s$[EMAIL PROTECTED]...
>
>
> >
> > As we all know what kind of cipher we're talking about, does
> > it matter?
> >
> > --
> > Posted by G4RGA.
> >
> > Rallies Info: http://website.lineone.net/~nordland
> >               http://www.netcomuk.co.uk/~amadeus
>
> Not on a newsgroup, but some people have been known to speak to each other
> face to face.  Then it is helpful if words are pronounced in a mutually
> understandable manner   : ).
>
>
>



------------------------------

From: "r.e.s." <[EMAIL PROTECTED]>
Subject: Re: Blowfish
Date: Thu, 6 Jan 2000 21:47:33 -0800

Since you might browse into this out of curiosity as I did,
I think the following is worth mentioning.

My AV software (McAfee) reports that the file
boblowfish1-1.zip
at
ftp://ftp.replay.com/pub/replay/pub/crypto/LIBS/blowfish/
contains a virus called "Orifice2K.plugin".
(Today, 1/6/00, I notified the webmaster at the ftp site.)

One of the links on Counterpane's page is to that site,
so I'm now wondering if it could be something legitimate
causing a false alarm.  Can anyone enlighten me on this?

--
r.e.s.
[EMAIL PROTECTED]



"Kyle Morani" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
: "Ben Humphreys" <[EMAIL PROTECTED]> wrote:
:
: >Does anyone know where I can get some decent info regarding the Blowfish
: >cipher? I've used the search engines but couldn't find anything exciting.
:
: Bruce Schneier seems to know a little about it. Try his site here:
:
: http://www.counterpane.com/blowfish.html
:
: --
: "Kyle Morani" is actually [EMAIL PROTECTED] (4903 871256).
:  0123 456789 <- Use this key to decode my email address and name.
:               Play Five by Five Poker at http://www.5X5poker.com.









------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Questions about message digest functions
Date: Fri, 07 Jan 2000 06:03:35 GMT

Tim Tyler wrote:
> lordcow77 wrote:
> [...]
> a hash should approximate a pseudo-random function?
>
> I don't care *how* "simple" and "widely accepted"
> it is - it's dead wrong.

The wide-acceptors have a reason: we can't find
functions better than random.  The improvement
against brute force is entirely trivial, and seems
to come at a terrible cost.

> For /many/ applications hashes should make
> finding two messages which hash to the same
> value as difficult as possible.  Alternatively
> (and perhaps more commonly) they should make
> finding a second message with the same
> hash as a given message as difficult as possible.

There's another property generally required of
hash functions: a hash should be "preimage
resistant", that is one-way.

> A hash that has the distribution of an unbiased
> "pseudo random function" demonstrably fails to
> get anywhere near the optimum collision resistance.
> Consequently it fails to offer the property
> demanded by a good hash - that of making finding
> multiple messages with the same hash as hard as
> possible.  Most obviously, it fails against a
> brute force search through the space of possible
> messages for a matching hash.

Not at all.  Even for modest-length messages, just
a few bytes longer than the digest, a random
function will map preimages to digests so evenly
that the expected time for a brute-force search
will differ by a tiny fraction of a percent,
compared to a hash that maps exactly the same
number of preimages to each digest.  Below, I work
out some numbers.


> For example, when the hash size is equal to the
> message size, use of a hash that simulates a PRF
> will introduce totally unnecessary hash collisions.
> Generally speaking - for most applications of a
> hash - this is not good.

If you only need collision resistance, there's no
reason to use a hash if the digest is as big as
the messages; the identity function is collision
free.  If you do need preimage resistance, then it
would seem that a pseudo-random permutation would
be optimal by your criteria.

But here's the problem - how do you create this
pseudo-random permutation so that it's as strong
as pseudo-random-function hashes such as SHA-1?
Can you describe a permutation on 160-bit vectors
(or even an almost-permutation) that is easy to
compute but takes anywhere near 2^160 steps to
invert?

Using elliptic curves, I think I can come up with
such a function (almost-one-to-one) that takes
about 2^80 steps (and trivial memory) to invert.
Can you do better?


> If collision resistance is why you are using
> a hash, you should ideally avoid those that
> simulate pseudo random functions - *especially*
> if you can't afford to use a large hash, and
> the information you are hashing is not gigantic
> compared to the size of the hash.

Is one byte larger "gigantic compared to"? Let's
say our digest is n bytes long, and our messages
are n+1 bytes long.  We want to know the expected
number of tries to find a collision with a given
message, using a random function versus using one
optimized to avoid collisions.

If we use a RF as our hash, each trial message
has a 1/256^n chance of colliding, which we
might express less compactly as 256/256^(n+1).

If we use a hash with a perfectly even
distribution, then each digest has 256 preimages.
We have one message, and we need to find one of
the other 255 that induces the same digest.  Thus
any distinct random message has a 255/256^(n+1)
chance of colliding.

The ratio of time to break by brute-force is about
255/256.  For messages more than one byte longer
than the digest, the ratio gets even closer to 1.
There is no point to optimizing against brute
force search.
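
For concreteness, here is a quick numeric check of that ratio with toy sizes
(a 1-byte digest over 2-byte messages; the seed and sizes are chosen only for
illustration):

# Compare second-preimage brute-force odds: random function vs. balanced hash.
import random
random.seed(1)

M, D = 256 ** 2, 256                    # message space and digest space sizes

rf = [random.randrange(D) for _ in range(M)]   # a random function as a table
target = rf[0]                                 # digest of "our" given message
hits_rf = sum(1 for m in range(1, M) if rf[m] == target)

hits_balanced = (M // D) - 1            # perfectly even hash: 255 other messages

print("random function :", hits_rf / (M - 1))        # roughly 1/256
print("balanced hash   :", hits_balanced / (M - 1))  # 255/65535, about 255/256 of it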

[...]
> My argument is /very/ simple.  I have seen no
> coherent criticism of it.

Now you have - and how you might refute it is
clear:  Describe a hash that distributes preimages
to digests more evenly than would a random
function, and is comparable to a random function
in resistance to the best attack against it that
we can find.

> If nobody can distill the wisdom of the literature
> references given into some sort of reason why my
> argument is wrong - and in fact collision
> resistance in the hash should be sacrificed for
> some currently-unspecified higher goal - I will not
> be impressed.

Here is the wisdom: it makes no sense to optimize
resistance to brute force attack when doing so
enables much more damaging attacks.  Now you have
a chance to impress me - describe a hash that's
better than random.


--Bryan


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: "Rick Braddam" <[EMAIL PROTECTED]>
Subject: Re: Wagner et Al.
Date: Thu, 6 Jan 2000 23:44:56 -0600

Steve K <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> So far the only thing that sounds like an important consideration to
> me, is the use of page locked memory.  As a beginner, I only think I
> know what this means, and I do not know if, or where, PGP and PeekBoo
> use it.  Is it integral to the distributed source for the actual
> ciphers, or can it be made so?  Can it be applied to the PeekBoo key
> management functions?  And can it be applied to the PeekBoo main
> window, which supports text editing functions?

I downloaded the PGP sdk shortly after installing PGP, and found a
memory locking device driver in it which is supposed to provide
page-locked memory to PGP, according to the description in the sdk. In
my Windows directory (Win98) is a device driver named PGPmemlock.vxd.
The creation date is the same as PGPnet.vxd, which was installed with
PGP 6.51. It appears to be identical to the one from the sdk. The sdk
docs say that the vxd will be used (by the sdk dll) to secure memory if
it is present. The PGP_sdk dll is also in my Windows directory, and has
the same creation date as when I installed PGP 6.51.  Therefore, I think
that page-locked memory is used by PGP 6.51.
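
For anyone curious what page locking looks like from ordinary user code (as
opposed to PGP's driver-based approach), here is a minimal sketch.  VirtualLock
and mlock are real OS calls, but everything else here is made up for
illustration and is not how PGP secures its memory:

# Pin a sensitive buffer in RAM so it cannot be paged out to the swap file.
# Note: mlock may be limited by RLIMIT_MEMLOCK; VirtualLock by working-set size.
import ctypes, ctypes.util, sys

def lock_in_ram(buf: bytearray) -> bool:
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    size = len(buf)
    if sys.platform == "win32":
        return bool(ctypes.windll.kernel32.VirtualLock(
            ctypes.c_void_p(addr), ctypes.c_size_t(size)))
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    return libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(size)) == 0

key_material = bytearray(32)            # fill with the real key material, then:
if not lock_in_ram(key_material):
    raise RuntimeError("could not lock memory; keys may reach the swap file")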

The download page for the sdk said that the source for the page-locking
vxd *may* be available at a later date.... not the last time I looked,
but I'm still checking.

> I think that these are relevant questions, because they represent
> potential security improvements, not only in PeekBoo but elsewhere:
> For instance, according to the PGP docs, the only way that PGP
> attempts to deal with the swap file leakage issue is by not leaving
> sensitive data in memory "any longer than necessary"; evidently PGP
> leaves real solutions up to the user.  (PGPWin user's guide, pg 209).

The System Information utility shows a kernel mode device driver loaded
named PGPMLOCK, but no information about it. It is probably the
PGPmemlock.vxd driver.

> A crypto app that does not allow memory to be swapped during any
> process that involves key material, and includes an un-swappable text
> editor, might seriously annoy some hypothetical forensics team
> someday.

Let's hope so.

Rick




------------------------------

From: [EMAIL PROTECTED] (Bill Unruh)
Subject: Re: OT Re: letter-frequency software
Date: 7 Jan 2000 07:19:19 GMT

In <853odr$cjt$[EMAIL PROTECTED]> [EMAIL PROTECTED] (William Rowden) writes:

>>cat document.txt|awk 'BEGIN{N=0} {f[$1]++}END{ for (j in f) print j, " ", f[j]}'|sort -n +1

>Ah!  The ubiquitous unnecessary cat!  Try input redirection.

Yes.  On the other hand, the above is lexicographically closer to the
logical.
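
For what it's worth, a rough Python equivalent that also avoids the extra cat
(file name taken from the command above; it counts characters rather than
whitespace-separated fields, to give true letter frequencies):

# Letter-frequency count, reading the file directly -- no cat, no pipe.
from collections import Counter

with open("document.txt") as fh:
    counts = Counter(ch for ch in fh.read().lower() if ch.isalpha())

for letter, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(letter, n)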

------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Questions about message digest functions
Date: Thu, 6 Jan 2000 23:32:03 -0800

Oops, that message was far from complete.  I'll spend the time tomorrow to
figure out what I was trying to say.
                Joseph
"Joseph Ashwood" <[EMAIL PROTECTED]> wrote in message
news:OjvyN#MW$GA.261@cpmsnbbsa02...
> I know I've been silent on this until now, but I think I may have something
> of import to say.
>
> One of you is saying (as near as I can tell):
> A good hash function should, for a given input length, map an equal number
> of inputs to each potential output.
>
> The other is saying (again as near as I can tell):
> A hash function should be non-invertible even for the minimal length.
>
> I personally think that you're no
> "Tim Tyler" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> > lordcow77 <[EMAIL PROTECTED]> wrote:
> >
> > : Are you utterly unable to comprehend this simple and widely accepted
> > : concept that a hash should approximate a pseudo-random function?
> >
> > I don't care *how* "simple" and "widely accepted" it is - it's dead wrong.
> >
> > For /many/ applications hashes should make finding two messages which hash
> > to the same value as difficult as possible.  Alternatively (and perhaps
> > more commonly) they should make finding a second message with the same
> > hash as a given message as difficult as possible.
> >
> > A hash that has the distribution of an unbiased "pseudo random function"
> > demonstrably fails to get anywhere near the optimum collision resistance.
> > Consequently it fails to offer the property demanded by a good hash - that
> > of making finding multiple messages with the same hash as hard as
> > possible.  Most obviously, it fails against a brute force search through
> > the space of possible messages for a matching hash.
> >
> > For example, when the hash size is equal to the message size, use of a
> > hash that simulates a PRF will introduce totally unnecessary hash
> > collisions.  Generally speaking - for most applications of a hash - this
> > is not good.
> >
> > If collision resistance is why you are using a hash, you should ideally
> > avoid those that simulate pseudo random functions - *especially* if you
> > can't afford to use a large hash, and the information you are hashing is
> > not gigantic compared to the size of the hash.
> >
> > [snip more references]
> >
> > : You might also want to consult Knuth's TAOCP where he discusses
> > : noncryptographic hashing.
> >
> > My comments apply equally to non-cryptographic hashing.  If avoiding
> > hash collisions is important, a PRF is likely to be the wrong model for
> > the ideal that a hash function should approach.
> >
> > My argument is /very/ simple.  I have seen no coherent criticism of it.
> >
> > If nobody can distill the wisdom of the literature references given into
> > some sort of reason why my argument is wrong - and in fact collision
> > resistance in the hash should be sacrificed for some currently-unspecified
> > higher goal - I will not be impressed.
> >
> > If you have a large hash, failure to deviate from the distribution given
> > by a PRF in the correct manner as the messages become small will not be
> > terribly important.
> >
> > However, if the hash size is small - as it will sometimes be practically
> > constrained to be - a PRF is simply completely the wrong model for an
> > ideal hash, if you care at all about avoiding hash collisions.
> > --
> > __________
> >  |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]
> >
> > Enough research will tend to support your theory.
>
>
>
>



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
