Cryptography-Digest Digest #161, Volume #12       Tue, 4 Jul 00 21:13:01 EDT

Contents:
  Re: #sci.crypt moved to Dalnet (Simon Johnson)
  Re: Java implementation of DES and 3DES (Benjamin Goldberg)
  Re: Hash and Entropy ("Joseph Ashwood")
  Re: Java implementation of DES and 3DES ([EMAIL PROTECTED])
  Re: Hash and Entropy (Benjamin Goldberg)
  Re: Blowfish for signatures? (Thierry Nouspikel)
  Re: Hashing Function (not cryptographically secure) (lordcow77)
  Re: Public-domain Blowfish (Bruce Schneier)
  Re: AES: It's been pretty quiet for some time... (Bruce Schneier)
  Re: Hash and Entropy ("Scott Fluhrer")
  Re: Diffie Hellman Primes : Speed Tradeoff Q (Anton Stiglic)
  Re: Crypto Contest: CHUTZPAH... ("Paul Pires")
  Re: Crypto Contest: CHUTZPAH... ("Paul Pires")
  Re: Use of EPR "paradox" in cryptography (Tim Tyler)
  Re: Hash and Entropy (David A Molnar)

----------------------------------------------------------------------------

From: Simon Johnson <[EMAIL PROTECTED]>
Subject: Re: #sci.crypt moved to Dalnet
Date: Tue, 04 Jul 2000 21:59:25 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (Frank M. Siegert) wrote:
> On Sun, 02 Jul 2000 10:37:35 GMT, Simon Johnson
> <[EMAIL PROTECTED]> wrote:
>
> >
> >
> >I've moved it to Dalnet, due to the frequent netsplitting on EFNET.
> >
> >Hope to see you soon.
>
> Havn't found you today... is this channel up 24/7?
>
> - Frank
>
>
Some guy had registered the channel before I got a chance to set it
up, so I've registered #sci-crypt instead.

You'll catch me around 6-12:00 pm GMT :)

Hope to see ya around :)

--
Hi, I'm the signature virus,
help me spread by copying me into your signature file.


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Java implementation of DES and 3DES
Date: Tue, 04 Jul 2000 22:14:54 GMT

dexMilano wrote:
> 
> The algorithm seems to work perfectly.
> I've verified some of the cases that produce the error.
> 
> For example, if the resulting byte value is 143, the corresponding
> character (ASCII(143)) is "\uFFFD" as a single char!
> If I try to decrypt a sequence where I have this char, I get a wrong
> decryption.
> I don't receive any error, but the algorithm doesn't work.
> 
> That's why I think the problem could be in the way Java manages
> characters.

The problem is that ascii 143 is '\u008F', whereas the number you're
using, '\uFFFD' has the value 65533.  GIGO.
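
To make that concrete, here is a small illustrative sketch (only the byte
value 143 comes from the thread; everything else is made up).  Decoding
raw cipher bytes through a lossy charset silently turns them into
'\uFFFD', while a one-to-one charset such as ISO-8859-1 round-trips them:

public class ByteToCharDemo {
    public static void main(String[] args) throws Exception {
        byte[] cipherBytes = { (byte) 143 };   // one hypothetical ciphertext byte

        // Lossy: 0x8F is not a valid UTF-8 sequence, so decoding replaces
        // it with the substitution char '\uFFFD' (65533); the data is gone.
        String bad = new String(cipherBytes, "UTF-8");
        System.out.println((int) bad.charAt(0));        // 65533

        // Lossless: ISO-8859-1 maps byte 0x8F straight to '\u008F'.
        String ok = new String(cipherBytes, "ISO-8859-1");
        System.out.println((int) ok.charAt(0));         // 143
        byte[] back = ok.getBytes("ISO-8859-1");
        System.out.println(back[0] == cipherBytes[0]);  // true
    }
}

The safest habit is simply to keep ciphertext as a byte array and never
convert it to a String at all.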

------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Hash and Entropy
Date: Tue, 4 Jul 2000 15:25:43 -0700

Let's see if I can get this right, or at least get my own
errors pointed out.
A hash is a function that takes an input of arbitrary
length and creates an output of a fixed length. A
cryptographically secure hash does so in a fashion that
makes it difficult to find two inputs that generate the
same output value, difficult to find an input that
generates a specific output value, and distributes its
outputs approximately evenly across all possible output
values; as a result, the output passes all statistical
tests.

Entropy is a measure of the non-determinism inherent in a
string generated by some method. It has so many different
necessary interpretations that it becomes difficult to
think about, in spite of the fact that all the (valid)
interpretations are in fact equivalent. It has such a
great number of intricacies that it is quite difficult to
define briefly. For some purposes it can be defined as the
unpredictability of a sub-sequence given all the other
output values of the generator. It is also, almost by
definition, difficult to obtain dependably: even a coin
flip is not purely entropic, since the coin could, under
very rare circumstances, land on its edge, and a US penny
tends to land on one side more than the other (tails more
than heads, IIRC).
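
As a quick illustration of that last point (the bias value here is made
up, not measured): the Shannon entropy of a coin with a 51/49 bias is
just under one full bit per flip.

public class CoinEntropy {
    static double log2(double x) { return Math.log(x) / Math.log(2); }

    public static void main(String[] args) {
        double p = 0.51;   // hypothetical bias toward one side
        double h = -p * log2(p) - (1 - p) * log2(1 - p);
        System.out.println(h);   // ~0.99971 bits per flip
    }
}
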
                Joe



------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Java implementation of DES and 3DES
Date: Tue, 04 Jul 2000 22:35:47 GMT

Boris Kazak <[EMAIL PROTECTED]> wrote:
> ASCII 143 can be interpreted by the system as "unsigned char", in which 
> case its value will really be 143, or as "signed char", in which case 
> its value will be 143 - 256 = -113. 
> There must be some way in Java to treat and concatenate bytes as 
> "unsigned chars", otherwise no cipher implementation will work
> correctly.

No, there's a primitive type, byte, and arrays of it are what all of the
ciphers expect. For files, that's not a problem, as you can read and
write by the byte. Strings, on the other hand, need to be converted to
bytes before being passed to the engine. Assuming, of course, you're
using the standard cipher framework.
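
A minimal sketch of that conversion, assuming the standard javax.crypto
(JCE) framework; the algorithm string, padding, and message below are
just examples, and ECB mode is used only to keep the sketch short:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class DesStringExample {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("DES").generateKey();

        // String -> bytes *before* encrypting; pick an explicit encoding.
        byte[] plaintext = "attack at dawn".getBytes("UTF8");

        Cipher enc = Cipher.getInstance("DES/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = enc.doFinal(plaintext);     // keep as byte[]

        Cipher dec = Cipher.getInstance("DES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, key);
        String recovered = new String(dec.doFinal(ciphertext), "UTF8");
        System.out.println(recovered);                  // attack at dawn
    }
}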

-- 
Matt Gauthier <[EMAIL PROTECTED]>

------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Hash and Entropy
Date: Tue, 04 Jul 2000 22:40:02 GMT

Future Beacon wrote:
> 
> Hash and Entropy are the most frequently use mystery words.  I
> wonder if everybody is operating on the same meanings.  You could
> follow this news group for a year without getting anybody's
> expressed or implied definition for either of these words.  If
> nobody wants to risk a formal definition of either, maybe an
> example calculation would help make these ideas clear.
> 
> So far, I get a sense that a hash function is like a number tumbler
> that confounds the order of the original numbers and possibly makes
> a string of numbers that is longer than the original string.

A secure hash is one that transforms an arbitrary length string into a
fixed length string irreversibly.  Given the result of a hash it should
be damn near impossible to know anything about the input to the hash.

> Entropy seems like some nutso metaphor that no self-respecting
> statistician would ever use in calculations.

Entropy is a measure of randomness or unpredictability.  In statistics,
you'll often have an ordered set of data, with small random errors; the
task is to find the order, and filter out the error.  A statistician
might indeed want to know how much entropy there is in data, so as to
find the best way to remove it.

Hi, I'm the signature retro-virus,
help me spread by copying me over your signature virus.

------------------------------

From: Thierry Nouspikel <[EMAIL PROTECTED]>
Subject: Re: Blowfish for signatures?
Date: Tue, 04 Jul 2000 10:47:27 -0700

Mark Wooding ([EMAIL PROTECTED]) wrote:

> Yes.  Look up `Rabin's One-Time Signature Scheme' in a good crypto book,
> such as HAC.  With a symmetric cipher (such as Blowfish), and a hash
> function (such as Blowfish in abreast Davies-Meyer, if you've got all
> day spare), you can use this cunning signature system which will allow
> you to sign one message for each key, which may be verified once.
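
For concreteness, the general hash-based one-time-signature idea can be
sketched in a few lines of Java.  This is a Lamport-style variant rather
than the exact Rabin construction from HAC, and the key size, hash, and
message are purely illustrative; each key pair signs exactly one message:

import java.security.MessageDigest;
import java.security.SecureRandom;

public class LamportOneTime {
    static final int K = 160;                       // bits of SHA-1 output

    static byte[] sha1(byte[] in) throws Exception {
        return MessageDigest.getInstance("SHA-1").digest(in);
    }

    public static void main(String[] args) throws Exception {
        SecureRandom rnd = new SecureRandom();

        // Secret key: two random values per hash bit; public key: their hashes.
        byte[][][] sk = new byte[K][2][20];
        byte[][][] pk = new byte[K][2][];
        for (int i = 0; i < K; i++)
            for (int b = 0; b < 2; b++) {
                rnd.nextBytes(sk[i][b]);
                pk[i][b] = sha1(sk[i][b]);
            }

        // Sign: hash the message, reveal one secret value per hash bit.
        byte[] digest = sha1("my message".getBytes("UTF8"));
        byte[][] sig = new byte[K][];
        for (int i = 0; i < K; i++)
            sig[i] = sk[i][(digest[i / 8] >> (7 - i % 8)) & 1];

        // Verify: each revealed value must hash to the matching public value.
        boolean ok = true;
        for (int i = 0; i < K; i++) {
            int bit = (digest[i / 8] >> (7 - i % 8)) & 1;
            ok &= MessageDigest.isEqual(sha1(sig[i]), pk[i][bit]);
        }
        System.out.println(ok);                     // true
    }
}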

Let me make sure I understood correctly:
1) Rabin's scheme requires interactive communication with the author of
the document in order to verify its authenticity.

2) Merkle's scheme implies that both the public key and the signature are
a thousand times larger than the document being signed!

Which makes them both extremely inconvenient. Or is there a way to a)
hash a message to, say, a 16-bit word, then b) sign this word with a key
(and a signature) of reasonable length?

Or am I missing something?

Thanks for your help,

                                                                                Thierry


-- 
Thierry Nouspikel, MD, PhD           | "A technocrat is a guy: you ask him
Department of Biological Sciences    |  a question, and by the time he has
Stanford University  CA 94305-5020   |  finished answering, you no longer
Phone: 1 650 723 2425                |  understand your own question."
Fax: 1 650 725 1848                  |  Michel Colucci, a.k.a. "Coluche"
[EMAIL PROTECTED]                    |
Spam bait: postmaster@localhost abuse@localhost root@localhost admin@localhost

------------------------------

Subject: Re: Hashing Function (not cryptographically secure)
From: lordcow77 <[EMAIL PROTECTED]>
Date: Tue, 04 Jul 2000 16:17:37 -0700

[EMAIL PROTECTED] (Scott Nelson) wrote:
>Primitive polynomials are neither necessary, nor sufficient for
>a CRC poly to be "good."  CRC-CCITT isn't primitive, but has
>several other nice properties which make it a reasonable choice.

A common technique is to take a primitive polynomial of degree
one less than the length of the desired CRC and then multiply
the polynomial by (x+1).
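
With polynomials stored as bit masks, that multiplication by (x+1) is a
one-liner: p*(x+1) = p ^ (p << 1) over GF(2).  A small illustrative
sketch (the example uses x^15 + x + 1, a primitive polynomial, and the
product comes out to the familiar CRC-16 polynomial x^16 + x^15 + x^2 + 1):

public class CrcPolyTimesXPlus1 {
    // Multiply a GF(2) polynomial, stored as a bit mask, by (x + 1).
    static long timesXPlus1(long p) {
        return p ^ (p << 1);
    }

    public static void main(String[] args) {
        long primitive15 = (1L << 15) | (1L << 1) | 1L;      // x^15 + x + 1
        System.out.printf("%x%n", timesXPlus1(primitive15)); // prints 18005
    }
}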





------------------------------

From: Bruce Schneier <[EMAIL PROTECTED]>
Subject: Re: Public-domain Blowfish
Date: Tue, 04 Jul 2000 18:23:46 -0500

On Mon, 3 Jul 2000 20:25:53 -0400, Future Beacon <[EMAIL PROTECTED]>
wrote:

>To this day, can anybody use this algorithm any way they want?

Yes.

>Can anybody use it as a layer in their own algorithm?  

Yes.

>Have there
>been restrictions (such as credit to the author) added?

No.  But I like credit; credit is nice.

>Has anybody else patented it?  Has anybody heard of any disputes?

No.  No.

>My interest in encryption is for e-mail privacy within the United
>States.

Have fun.

Bruce
**********************************************************************
Bruce Schneier, Counterpane Internet Security, Inc.  Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
           Free crypto newsletter.  See:  http://www.counterpane.com

------------------------------

From: Bruce Schneier <[EMAIL PROTECTED]>
Subject: Re: AES: It's been pretty quiet for some time...
Date: Tue, 04 Jul 2000 18:31:11 -0500

On Thu, 29 Jun 2000 15:22:31 +0000, Volker Hetzer
<[EMAIL PROTECTED]> wrote:

>Hi!
>Does anyone know what's going on?
>The last announcement was on may, 15.

May 15th was the deadline for public comments.  NIST has now retreated
into their organization and is making a decision.  They hope to have a
final decision by Crypto--mid August--but have made no promises.

Bruce
**********************************************************************
Bruce Schneier, Counterpane Internet Security, Inc.  Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
           Free crypto newsletter.  See:  http://www.counterpane.com

------------------------------

From: "Scott Fluhrer" <[EMAIL PROTECTED]>
Subject: Re: Hash and Entropy
Date: Tue, 4 Jul 2000 16:24:17 -0700


Benjamin Goldberg <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Future Beacon wrote:
> >
> > Hash and Entropy are the most frequently use mystery words.  I
> > wonder if everybody is operating on the same meanings.  You could
> > follow this news group for a year without getting anybody's
> > expressed or implied definition for either of these words.  If
> > nobody wants to risk a formal definition of either, maybe an
> > example calculation would help make these ideas clear.
> >
> > So far, I get a sense that a hash function is like a number tumbler
> > that confounds the order of the original numbers and possibly makes
> > a string of numbers that is longer than the original string.
>
> A secure hash is one that transforms an arbitrary length string into a
> fixed length string irreversibly.  Given the result of a hash it should
> be damn near impossible to know anything about the input to the hash.

There are a few more requirements on a secure hash than that.  A hash
function that evaluates to a string of all zero bits on any input certainly
meets your definition, but has a few obvious problems if you consider
using it as a secure hash function.

The impracticality of finding a second message with the same hash value, or
two distinct messages that hash to the same value, is also considered
fundamental to a secure hash.

--
poncho





------------------------------

From: Anton Stiglic <[EMAIL PROTECTED]>
Subject: Re: Diffie Hellman Primes : Speed Tradeoff Q
Date: Tue, 04 Jul 2000 19:55:52 -0400

David Hopwood wrote:
> 
> -----BEGIN PGP SIGNED MESSAGE-----
> 
> Mark Wooding wrote:
> > Anton Stiglic <[EMAIL PROTECTED]> wrote:
> [re: DH parameter generation]
> > > you want a strong prime p = 2q + 1 in DH for different reasons,
> >
> > I'm aware that it's a good idea to work in a subgroup with prime order,
> > since this defeats attacks whereby the adversary takes advantage of
> > possible subgroups with smooth composite orders.  Is there a further
> > benefit to
> >
> > > you also want to work in the subgroup of size q
> >
> > Most definitely.
> 
> Ideally yes, although it's worth pointing out that when p = 2q + 1 for
> q prime, there's no reason to believe that working in the whole group of
> size 2q is less secure in practice. In that case the least significant
> bit of the private exponent can be determined from the DH result, but
> that doesn't matter because:
> 
>  a) the DH result should only be used via a one-way hash or PRF, so
>     it will remain secret even if the session keys are broken,
>  b) it only reduces the amount of uncertainty in the private exponent
>     to what it would have been anyway for the prime order case (i.e.
>     about q possibilities),
>  c) there's no way to extend this attack to recover any of the other
>     bits.
> 
> Actually for most purposes, I would recommend choosing p = 2qr + 1 for
> q and r prime, with q about 200 bits [*], g generating a subgroup of
> order q, and exponents chosen uniformly on [0, q). That makes the
> exponentations much faster, because they take time linear in the exponent
> size. It also makes parameter generation faster, because r can be fixed,
> and then it's only necessary to find the smaller prime q such that
> 2qr + 1 is prime.

If you are working with p = 2q + 1, you don't have to take exponents in
the range [2, q-1]; you can take them in the range [2, 2^e], where e can
be much smaller than the bit length of q (200 bits, for example, where p
is 1024 bits), which makes exponentiation as fast as in your proposal.
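
A rough sketch of that setup (the sizes are shrunk so the example runs
quickly, and p, g, and the exponent length are illustrative assumptions,
not values from this thread):

import java.math.BigInteger;
import java.security.SecureRandom;

public class ShortExponentDH {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();

        // Toy-sized safe prime p = 2q + 1; real use would be 1024+ bits.
        BigInteger q, p;
        do {
            q = BigInteger.probablePrime(255, rnd);
            p = q.shiftLeft(1).add(BigInteger.ONE);
        } while (!p.isProbablePrime(40));

        // g = 4 is a square mod p, so it generates the subgroup of order q.
        BigInteger g = BigInteger.valueOf(4);

        int e = 160;                              // short exponent length, in bits
        BigInteger a = new BigInteger(e, rnd);    // Alice's secret exponent
        BigInteger b = new BigInteger(e, rnd);    // Bob's secret exponent

        BigInteger A = g.modPow(a, p);            // Alice -> Bob
        BigInteger B = g.modPow(b, p);            // Bob -> Alice
        BigInteger kA = B.modPow(a, p);           // shared secret, Alice's side
        BigInteger kB = A.modPow(b, p);           // shared secret, Bob's side
        System.out.println(kA.equals(kB));        // true
    }
}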

The nice thing about using p = 2q + 1 is that if you want more "entropy",
you can choose larger exponents.  If you work in the subgroup of order q
where p = 2qr + 1 and q is 200 bits, you are stuck and can't choose
exponents larger than 200 bits (you could use a PRF as a key derivation
function that will produce larger keys, but an adversary will prefer
attacking the DH exponents of size 200 bits; a Pollard lambda attack will
take time O(2^100)).

The only benefit I see of using Lim-Lee primes over strong primes is that
the former can be generated faster than the latter, but if you are going to
use fixed public parameters this time difference is not important.

Anton

------------------------------

From: "Paul Pires" <[EMAIL PROTECTED]>
Subject: Re: Crypto Contest: CHUTZPAH...
Date: Tue, 4 Jul 2000 17:02:51 -0700


David A. Wagner <[EMAIL PROTECTED]> wrote in message
news:8jtj6r$e14$[EMAIL PROTECTED]...
> In article <O0s85.793$[EMAIL PROTECTED]>,
> Paul Pires <[EMAIL PROTECTED]> wrote:
> >     It seems to me that the break that these attacks provide is
knowledge of
> > the key used or some other ability to read other messages or blocks that
> > could not be read directly from the attack. Am I missing something?
>
> In practice, that's probably about right.
>
> If we want to speak about academic or certificational attacks,
> we go further and we often say that if a chosen-ciphertext attack
> allows us to distinguish the cipher from an ideal cipher we should
> be very suspicious of the security of the cipher.
>
> I don't know which camp Boris Kazak's attack falls into, but
> that's about how it seems to break down, in my opinion.

Thanks for the time and input. It seems that I should get back with Boris
and understand his proposition much better. Right now, all I clearly know
is that this method could pinpoint that this cipher could have been used
and exclude all (most?) others. But if the attacker has access to the
properly keyed cipher to perform this mischief, doesn't he already know
what cipher was used? Once again, the attack yields no information or
ability over and above that needed to mount the attack. I understand the
academic aspects of it, that it is traditionally seen as an indication of
weakness, but the contest does define an unambiguous meaning for a
successful attack that this does not meet (as yet). I'll work with Boris
and see if we can firm it up.

I'd grant that it would be a break if this certificational weakness could
be found by an attack that doesn't presuppose access to the properly keyed
cipher; as it stands, all that it finds is what you already must have
known in order to carry it out.

Paul






------------------------------

From: "Paul Pires" <[EMAIL PROTECTED]>
Subject: Re: Crypto Contest: CHUTZPAH...
Date: Tue, 4 Jul 2000 17:15:39 -0700


Boris Kazak <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...

As I said already, I must reread the description again and then come back
to our academic games...

Hey! I'm tickled half to death that you have invested the time and attention
to look at this. I am very interested in anything you find or deduce, and I
do appreciate the effort. Pop me a note when you are ready to have a go at
it.

I don't understand the reference to academic games, though. If you believe
that I have responded unfairly, please accept my apologies and chalk it up
to unfamiliarity with the material. I'm in this to learn something, not to
maintain a stubborn position that I am right.

Paul






------------------------------

Crossposted-To: sci.physics
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Use of EPR "paradox" in cryptography
Reply-To: [EMAIL PROTECTED]
Date: Tue, 4 Jul 2000 23:20:37 GMT

In sci.crypt DSM <[EMAIL PROTECTED]> wrote:

: From what I know, EPR allows uninterceptable, untraceable,
: instantaneous exchange of RANDOM data. [...]

You'd need a verification stage (similar to that used in quantum crypto)
to reduce the chance of interception to 1/S - assuming you can
authenticate your partner remotely.

As a method of generating random streams, it seems to be more of a PITA
than using a conventional hardware RNG, with no obvious compensating
benefits.

Why not generate your random stream (using some quantum process if you
must), and then transmit it using QC?
-- 
__________  Lotus Artificial Life  http://alife.co.uk/  [EMAIL PROTECTED]
 |im |yler  The Mandala Centre   http://mandala.co.uk/  UART what UEAT.

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Hash and Entropy
Date: 5 Jul 2000 00:44:30 GMT

Future Beacon <[EMAIL PROTECTED]> wrote:

> that confounds the order of the original numbers and possibly makes
> a string of numbers that is longer than the original string.

Let {0,1}^k be the set of strings of k bits, and {0,1}^* the set of strings
of 0 or more bits. A hash function is a function h : {0,1}^* ---> {0,1}^k
for some fixed k. In the case of MD5, k is 128. In the case of SHA1, k is
160. I'm not sure that having {0,1}^* as a domain is actually kosher, so
to be formal we might have to do something like consider the hash as a
family of functions, one for each input length j, such that
h_j : {0,1}^j --> {0,1}^k, but you get the idea.
As far as I know, "hashing" just by itself means that the function
produces a fixed-length, usually small, output.
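
For instance, with the standard java.security.MessageDigest API (a quick
sketch; the inputs are arbitrary), the output length is fixed no matter
how long the input is:

import java.security.MessageDigest;

public class FixedLengthHashes {
    public static void main(String[] args) throws Exception {
        byte[] shortMsg = "a".getBytes("UTF8");
        byte[] longMsg  = new byte[1000000];              // a million zero bytes

        MessageDigest md5 = MessageDigest.getInstance("MD5");
        System.out.println(md5.digest(shortMsg).length);  // 16 bytes = 128 bits
        System.out.println(md5.digest(longMsg).length);   // still 16 bytes

        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        System.out.println(sha1.digest(longMsg).length);  // 20 bytes = 160 bits
    }
}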

That's not particularly interesting by itself. For cryptography, we 
usually want some additional requirements for our hash functions.

The most basic is that we want the hash function to be "second preimage
resistant."  That is, given h(x), it should be difficult to find any x'
such that h(x') = h(x). Note that this includes "one-wayness" as a special
case when x' = x. There are some conditions you have to have on this
property holding for almost all x, instead of absolutely all x. I don't
remember them off the top of my head, but Papadimitriou's _Computational
Complexity_ gives them in the context of a one-way function.

Note that collisions must exist, since there are a lot more strings
of arbitrary length than there are k-bit strings. Like many other 
places in computer science where we have these kinds of "counting
existence" proofs, though, constructing examples of these collisions
can be very hard for the right hash function. 

The next thing we might want would be for a hash function to be "collision
intractable."  That is, it should be infeasible to come up with an x and
a y such that h(x) = h(y) but x != y. The difference between this and
second preimage resistance is that here we are free to pick both inputs to
the hash function as we want; we don't have some h(x) tying us down to a
specific target.

Then there are a whole _range_ of things we might want from our hash
function! Some of them seem more reasonable than others, at first.
For instance, it might seem more natural to have a hash function
where it's hard to find x and y such that h(x) = h(y)-1 than it is
to have a hash function whose output is always a prime number. 
Yet I know of at least one construction for the latter which always
works, and I know of nothing which provably provides the former. 
(It's probably not hard to come up with, but still)

At the "wishful thinking" end of the spectrum, we would like our
hash function to perfectly simulate a "random function" or "random
oracle." Put another way, we want the output of the function to
"look random." We can think of a random function as one which,
on a given input, flips coins to give an output and "remembers" 
all the previous inputs -- if you ask it for a new x, it gives 
a random h(x), but if you ask the same y you asked before, it
gives the same h(y). 
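
A toy model of that lazily-evaluated random function (purely
illustrative; the class and method names are made up, and this is of
course not a usable hash function, since its "memory" only exists in
the running program):

import java.security.SecureRandom;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class LazyRandomOracle {
    private final Map<String, byte[]> memory = new HashMap<String, byte[]>();
    private final SecureRandom rnd = new SecureRandom();
    private final int outputBytes;

    public LazyRandomOracle(int outputBytes) { this.outputBytes = outputBytes; }

    public byte[] query(String x) {
        byte[] h = memory.get(x);
        if (h == null) {              // first time we see x: flip coins
            h = new byte[outputBytes];
            rnd.nextBytes(h);
            memory.put(x, h);         // remember the answer for next time
        }
        return h;                     // the same x always gets the same h(x)
    }

    public static void main(String[] args) {
        LazyRandomOracle h = new LazyRandomOracle(20);
        System.out.println(Arrays.equals(h.query("x"), h.query("x"))); // true
        System.out.println(Arrays.equals(h.query("x"), h.query("y"))); // false (w.h.p.)
    }
}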

Still another way of saying what we want from the hash function is that we
can look at all the relations between outputs which are "hard" to satisfy
for a random function. Then we want all of these relations -- called
"evasive relations" -- to be "hard" to satisfy for the hash function as
well. By treating the hash function as if it were random, we can avoid
analysing it directly to see which relations we really need.

Many, many, many proofs of security treat hash functions as random
oracles, because it leaves aside the issue of analysing the function
directly and allows for efficient implementation. In principle,
it might be possible to derive which relations, exactly, these
proofs assume are "hard" to do with the hash function. Abdalla, Bellare,
and Rogaway actually did this in their DHAES paper -- they got rid
of the "random oracle" assumption and replaced it with some more specific
assumptions. The next step after that would be to create a hash function
which provably provides what you need -- Gennaro, Halevi, and T. Rabin
did that at Eurocrypt '98 (although there's a paper in Eurocrypt '00
claiming to break this which I haven't seen yet).

For constructions of hash functions, the papers by Rivest and later by
Bart Preneel & co at KU Leuven are worth reading. An excellent, apparently
too often overlooked, paper on the kinds of "things we want from our hash
functions" is due to Ross Anderson in "The Classification of Hash
Functions" (on his web page). The paper which introduced the Random Oracle
Model is due to Bellare and Rogaway "Random Oracles Are
Practical : A Paradigm For Desiging Efficient Protocols."

More recently, the Random Oracle Model became kind of a deal - there was a
paper by Goldreich, Canetti, and Halevi (??? I've blanked) on "The Random
Oracle Model, Revisited" which showed that hash functions, in general,
can't simulate random oracles as well as we might hope. In particular, a
certain kind of desired property called "correlation
intractability" cannot be provided in full by hash functions. Very few
schemes explicitly rely on "correlation intractability," but then again,
not all that many schemes seem to have been analysed in that detail. 

 This led to some development of "probabilistic hash functions" by Canetti
which  provide a property called "perfect one-wayness"; they don't quite 
get around the negative results, but they offer some slightly stronger
one-wayness properties than mentioned above. Later Fischlin and also
Micciancio and co-authors put together more efficient implementations of
these beasts. I'm not sure whether such "probabilistic" hashes have
found their way into practical constructions yet; they have this property
that h(x) is not always the same value, so all the constructions which
depend on XORing h(x) with something or otherwise manipulating it are
shot.

uh, ok. that was probably way more than you wanted to know. suffice it to
say that there are some formal definitions. pick one you like and 
run with it. :-)

> Entropy seems like some nutso metaphor that no self-respecting
> statistician would ever use in calculations.

There are lots of kinds of entropy! You should look up the page of Chris
Hillman, who sometimes posts to sci.physics.research. He has many
papers and tutorials on the different kinds of entropy which exist and
how the concept is used in physics and computer science. I haven't gone
through them yet, though, so I won't say more about its utility...

-David


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
