Cryptography-Digest Digest #609, Volume #14      Thu, 14 Jun 01 08:13:00 EDT

Contents:
  Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY (Mok-Kong Shen)
  Re: Yarrow PRNG (Tim Tyler)
  Re: FIPS 140-1 test (Tim Tyler)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY (Mok-Kong Shen)
  Re: Unicity distance and compression for AES (Tim Tyler)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY 
([EMAIL PROTECTED])
  Re: Alice and Bob Speak MooJoo ([EMAIL PROTECTED])
  Academic Position (Nigel Smart)
  Re: RNG (Janne Tuukkanen)
  Re: RNG (Tom St Denis)
  Re: Problem in Twofish ("Philip G. Boys")

----------------------------------------------------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY
Date: Thu, 14 Jun 2001 10:28:07 +0200



[EMAIL PROTECTED] wrote:
> 
[snip]
> No. In an information-theoretic sense, the plaintext you hand me is
> useless.  I am forced to consider the possibility that you are lying,
> and there is NO PROOF that you are NOT lying. If the cipher were an OTP,
> you can give me the plaintext AND the key, and I STILL can't be sure
> you aren't lying to me--even if you swear on your grandmother's grave.

I think that I misunderstood you. I am confused. Could
you please give a description of a scenario (a sequence
of events) such that an opponent could absolutely prove 
that I am not lying in ANY sense? (Note that elsewhere,
e.g. in certification for PK, we don't have an 'absolute'
guarantee of security and we have to have some trust.)

M. K. Shen


------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Yarrow PRNG
Reply-To: [EMAIL PROTECTED]
Date: Thu, 14 Jun 2001 08:26:13 GMT

Eric Lee Green <[EMAIL PROTECTED]> wrote:
: On Wed, 13 Jun 2001 14:51:09 GMT, Tim Tyler <[EMAIL PROTECTED]> wrote:

:>[If you] need to slap a hash function over the outputs the question
:>arises as to why you didn't put it at the heart of the algorithm
:>in the first place.

: So you like the design of the Linux /dev/urandom ? 

I think having a hash function at the heart of a PRNG is going to be 
better than outputting much unadulterated block cypher output.

I've read some criticism of /dev/urandom.  Apparently it generally shares
an entropy pool with /dev/random - so using the former can cause the
latter to block rather unnecessarily.

Then there's the use of MD5 - which is no longer regarded as a good
one-way hash function, because of the techniques for finding collisions
in it.

While this is probably of low relevance to a PRNG, I'd rather have
something with no known flaws in it.
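A minimal sketch of the sort of construction I mean - a one-way hash at the
core, driven by a secret seed and a counter (illustrative Python only, not
Yarrow or /dev/urandom themselves):

```python
import hashlib

def hash_prng(seed, nblocks):
    # Toy PRNG with a hash function at its heart: block i is the hash
    # of the secret seed concatenated with a counter.
    out = b""
    for counter in range(nblocks):
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
    return out

stream = hash_prng(b"secret seed", 4)   # 4 * 32 bytes of output
```

Recovering the seed from the output stream then requires inverting the
hash, which is exactly the property a good one-way function is chosen for.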
-- 
__________
 |im |yler  [EMAIL PROTECTED]  Home page: http://alife.co.uk/tim/

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: FIPS 140-1 test
Reply-To: [EMAIL PROTECTED]
Date: Thu, 14 Jun 2001 08:28:48 GMT

Dobs <[EMAIL PROTECTED]> wrote:

: I am looking for source code of FIPS 140-1 statistical test for randomness
: which is used for high security application (that's what was written in
: Handbook of Applied Cryptography:)

It's at: http://quartus.net/files/Misc/

Docs at: http://www.cerberussystems.com/INFOSEC/stds/fip140-1.htm
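The monobit test from that document is simple enough to sketch directly
(Python; thresholds as given in FIPS 140-1 - the other three tests, poker,
runs and long run, follow the same pattern):

```python
def fips_monobit(bits):
    # FIPS 140-1 monobit test: in a 20,000-bit sample, the number of
    # ones must satisfy 9654 < ones < 10346.
    assert len(bits) == 20000
    ones = sum(bits)
    return 9654 < ones < 10346
```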
-- 
__________
 |im |yler  [EMAIL PROTECTED]  Home page: http://alife.co.uk/tim/

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY
Date: Thu, 14 Jun 2001 10:53:03 +0200



wtshaw wrote:
> 
> Mok-Kong Shen<[EMAIL PROTECTED]> wrote:
> 
> > Mark Wooding wrote:
> > >
> > > Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> > >
> > > > But measures should have adequate (intuitively reasonable)
> > > > interpretations, I suppose. If a security measure
> > > > says 0 security, then one would 'very naturally' think
> > > > that that means no protection at all, isn't it?
> > >
> > > This is why we have different notions of security.  There is a
> > > difference between the information-theoretic security provided by the
> > > one-time pad (and perfect secret-sharing systems) and the computational
> > > security provided (by assumption) by most commonly-used symmetric and
> > > asymmetric ciphers.
> >
> > The problem is whether one has a 'common' measure of
> > security that could be applied to all sorts of encryptions.
> 
> Note that some ciphers are outside of both of these categories.  The
> strength of them ranges from stupidly simple to GOK.
> 
> There is no and can't be one common measure of security.  Without
> repeating them, I had to create that which some said could not be: a way
> to variously describe the comparative security of different ciphers in
> several ways. I can't ignore Shannon, even though he did not cover all
> the relevant factors and did not know, and therefore could not include,
> new aspects of recent ciphers in his thinking.

I suppose that discussions long ago in the group have
already established that there is no scientifically 
rigorous and practically applicable measure of 'strength' 
of ciphers. As to the concept of 'information', probably 
I have a too naive and hence ridiculously incorrect view. 
If I hand over 20 bits to the opponent, what 'knowledge' 
have I provided him? Doesn't it depend on the context, 
i.e. some other knowledge that he already has? Doesn't
it further depend on the processes (resources) that
he could utilize to exploit these bits? Thus 20 bits 
is not simply 20 bits. Now one talks about probabilities,
as if the opponent had at his disposal 'exact' values
of these probabilities (e.g. correct to 5 decimal 
places), and with these one derives fine theoretical
results. But in my humble opinion one has seemingly
ignored the 'probability' of that scenario ever
occurring in actual practice.

Thanks.

M. K. Shen

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Unicity distance and compression for AES
Reply-To: [EMAIL PROTECTED]
Date: Thu, 14 Jun 2001 08:48:32 GMT

[EMAIL PROTECTED] wrote:
: Tim Tyler wrote:
:> [EMAIL PROTECTED] wrote:
:> : Tim Tyler wrote:

:> :> Here is the model I think you should have adopted:
:> :>
:> :>   Alice -> [encrypt] -> ...eve... -> [decrypt] -> Bob
:> :>
:> :> Unicity distance is as measured by Eve.
:> :>
:> :> We are interested in the effect of replacing [encrypt] and [decrypt]
:> :> by [compress encrypt] and [decrypt decompress] respectively.
:> :>
:> :> What you have done is not /just/ make changes to the stages represented
:> :> inside square brackets.
:> 
:> :> You have made fundamental changes to the set of messages which are sent
:> :> and their probabilities.
:> 
:> : How is this avoidable if messages are compressed from n=4 to n=3?
:> : There are 16 possible messages before compression and 8 possible after
:> : compression.
:> 
:> That is not how compression programs function.  I'm talking about
:> conventional compression programs, like zip, huffman, arithmetic, etc.
:> These allow you to compress a file, decompress it again and wind up with
:> the same result.

: You misunderstood my example.

I very much doubt it.  Was my sentence above less than 100% accurate?

: The example compressor *encodes* 0000 as 000, 0001 as 001, etc. It
: doesn't just throw away the leading zero.

I'm well aware of that.

: The decoder would reverse the mapping so that 000 would decode into
: 0000, etc. So, all values that compress would decompress into exactly
: the same value.

Yes - but compress "1111" - and then decompress - and what do you get?

In any orthodox compressor that behaviour would be regarded as severely
broken.  If you tried to sell such a program it would soon garner a
reputation as a file mangler, not a file compressor.

: Anyway, I'm not trying to understand specific compressor algorithms at
: this point.

A good job.

: I'm trying to understand the fundamentals of what *any* compressor
: does.

By examining a compressor that isn't even reversible, and mangles the
user's data.  Not a good start IMO.

: Compressors map m-bit values into n-bit values where n < m.

That's a bad description of what compressors do.  They do that for
particular files - not for *all* files.

:> :>  Your system after the compressor is added can't even transmit some
:> :> messages that were transmitted before - and *all* the probabilities of
:> :> the messages occurring are changed!
:> 
:> : This isn't surprising. [...]
:> 
:> It was surprising to me.  How you could think comparing the resulting
:> systems had any meaning was most puzzling.
:> 
:> : Can *any* compressor compress *all* possible messages in the original
:> : message space?
:> 
:> Of course not - but that is not how compressors operate either.  They
:> compress some files, and expand others.

: My example allows for the expansion of the values that don't compress.

No - you did *not* include these files or their probabilities.  If you
add in any more files, your probabilities will need reworking so that they
still add up to one.

: In fact, they don't affect the results in any way because I only
: considered the compressed message space.

Nonsense.  Add those files in, leave all the probabilities alone and your
thesis collapses.  Remember what H(m) = -log2(0) is?

:> :> Why should putting in a compressor affect the frequency of the messages
:> :> being sent?  It should not.
:> :>
:> :> If you /must/ use a lossy compressor at least set the probability of
:> :> messages it can't handle to be 0 initially, or you are comparing different
:> :> systems with different message spaces.
:> 
:> : Again, doesn't one *have* to compare different message spaces since the
:> : value of n is different after compression??
:> 
:> No, you should be comparing the same message spaces, otherwise the
:> comparison is completely nonsensical.

: How can the message spaces be the same when there are 2^m values in the
: original message space and 2^n values in the compressed message space?

The last time you asked this question I replied:

``That is not how compression programs function.''

Why are you considering this irrelevant case?

:> :> Also, do not change the probabilities of the messages occurring, or else
:> :> you are comparing entropies in very different systems - and it is no
:> surprise that your unicity distances are all over the map.
:> 
:> : The probabilities in the original message space are what they are. If
:> : they are non-zero then assigning them zero probability isn't an option;
:> : *that* would be altering the message space.
:> 
:> Of course.  But *don't* apply a compressor that can't accept these
:> messages as inputs and pretend that nothing has changed.

: How can a compressor accept all inputs if it is mapping values from
: a larger message space into a smaller message space?

It can't - but your premise is wrong.

:> : All the probabilities in my compressed message spaces sum to one [...]
:> 
:> No they don't.  Add up the probabilities in your second column, and you
:> will find they exceed one.  A minor arithmetic error on your part.
:> I ignored it until now because there were other far more important errors
:> to address.

: The point is that they are supposed to sum to one, and since the
: compressed message space is smaller than the original message space the
: original message probabilities can't be used because then they don't
: sum to one.

The compressed message space should not be smaller than the original
message space.

Think about what you are saying.  Why should introducing a compressor into
the system affect the frequency with which Alice wants to send messages to
Bob?   A compressor should not make any difference to them.  It should
be transparent in operation.  The only person affected is Eve - who 
has a harder time of it.

:> : Propose other probabilities.
:> 
:> Leave the probabilities completely alone, or you are not comparing
:> equivalent systems.

: The message spaces can't be equivalent since n < m.

You are barking up completely the wrong tree.  You don't seem to
grasp that your model has nothing to do with what compression programs
actually do.

If you deal with "compression" programs that distort messages as they are
sent, don't allow the original messages to be reconstructed, and you
change all the probabilities of the messages in your system after adding
compression you are not comparing like with like.
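The counting argument behind all of this is easy to check for yourself (a
Python sketch; the function name is mine):

```python
# There are 2**n bit strings of length n, but only 2**n - 1 strings of
# length less than n (counting the empty string).  By the pigeonhole
# principle, no lossless compressor can shorten *every* n-bit input -
# some inputs must expand.
def count_shorter(n):
    return sum(2 ** k for k in range(n))   # strings of length 0 .. n-1

for n in range(1, 12):
    assert count_shorter(n) == 2 ** n - 1  # always one short of 2**n
```

That one-slot shortfall is precisely why real compressors shrink some files
and expand others, rather than mapping a larger message space into a smaller
one.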
-- 
__________
 |im |yler  [EMAIL PROTECTED]  Home page: http://alife.co.uk/tim/

------------------------------

Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY
From: [EMAIL PROTECTED]
Date: 14 Jun 2001 06:35:59 -0400

Mok-Kong Shen <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED] wrote:
>> 
> [snip]
>> No. In an information-theoretic sense, the plaintext you hand me is
>> useless.  I am forced to consider the possibility that you are lying...
> 
> I think that I misunderstood you. I am confused. Could
> you please give a description of a scenario (a sequence
> of events) such that an opponent could absolutely prove 
> that I am not lying in ANY sense?

Who said anything about ``ANY'' sense? If you say, ``Here's my OTP,''
then I'm generally forced to believe that you are lying. If your secretary
says, ``Here's Mok's OTP, honey!'' then I must consider the possibility
that you planted an OTP for her to find.

If somebody says, ``Here's Mok's private key,'' then I can be 100% positive
that it's the one that goes with your public key. There's no way you can be
lying about that. And since your public key is ``public'', there is likely
to be evidence connecting you to it--including a large body of messages I've
been itching to decrypt. You *can't* lie about that.

But in ``ANY'' sense? Maybe you've been lying to everyone, all your
life. Maybe everything you ever said was a lie, and getting your
private key simply allows me to read more of your lies.  Sure.

But there's one thing you can't lie about, period: the question ``Does this
private key go with that public key?'' You can't fool me, because I can
verify (i.e., ``absolutely prove'') it for myself.
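With RSA, for instance, the check is something anyone can run for himself
(toy textbook numbers for illustration; obviously not a secure modulus):

```python
# Verifying that a claimed private exponent d goes with the public key
# (n, e): encryption followed by decryption must be the identity map.
n, e = 3233, 17        # textbook public key: n = 61 * 53
d = 2753               # the claimed private exponent
for m in range(2, 50):
    # A wrong d fails this round-trip almost immediately.
    assert pow(pow(m, e, n), d, n) == m
```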

Len.

-- 
Finally, anti-spam rules are vulnerable to the anti-fax effect: they are
useful only if they aren't very widely used.
                                -- Dan Bernstein

------------------------------

Subject: Re: Alice and Bob Speak MooJoo
From: [EMAIL PROTECTED]
Date: 14 Jun 2001 06:47:23 -0400

"Paul Pires" <[EMAIL PROTECTED]> writes:
> <[EMAIL PROTECTED]> wrote:
>>
>> Except, of course, for the fact that the code-talkers' system was nothing
>> like what you describe.
> 
> My description was not from any knowledge of the official system
> It was the description of how it worked from an interview of a
> code talker...History Channel :-(

He was probably describing the system fairly according to his
recollections, though it was vague enough that listeners would fill in
the gaps inaccurately.

There were a couple of errors in my reply, though. First, it was the
Marines, not the Navy, that employed the code talkers. Second, the
code was evolved with the help of the first team of code talkers...so
``We worked together to devise the code'' would be a truthful enough
statement, but again it is misleading: it tends to bury the role of the
military and its cryptologists in helping devise the code.

See <http://www.history.navy.mil/faqs/faq61-2.htm>.

Len.

-- 
Frugal Tip #18:
Get by on your good looks.

------------------------------

From: Nigel Smart <[EMAIL PROTECTED]>
Subject: Academic Position
Date: Thu, 14 Jun 2001 10:54:43 GMT


The University of Bristol Computer Science department wishes to
appoint a lecturer in the general area of cryptography and
information security.


The University does not issue application forms.

Applications should be made by letter, stating special academic and
research interests, and should include the names and addresses of three
referees. The letter should be accompanied by a curriculum vitae, setting
out the date of birth and, in chronological order, details of University
and subsequent career, with qualifications, previous appointments, and
present salary.
 
Applications should be sent, quoting reference 7576 to:
 Personnel,
 University of Bristol,
 Senate House, Tyndall Avenue,
 Bristol
 BS8 1TR, UK.

Closing date for application is 12th July 2001.

Further details about our department can be found at...

  http://www.cs.bris.ac.uk/


Yours

Nigel
-- 
Dr Nigel P. Smart                  | Phone: +44 (0)117 954 5163
Computer Science Department,       | Fax:   +44 (0)117 954 5208
Woodland Road,                     | Email: [EMAIL PROTECTED]
University of Bristol, BS8 1UB, UK | URL:   http://www.cs.bris.ac.uk/~nigel/

------------------------------

From: Janne Tuukkanen <[EMAIL PROTECTED]>
Subject: Re: RNG
Date: Thu, 14 Jun 2001 14:05:24 +0300

Dobs wrote:
> I took two random sources from the system of the computer - I managed to
> find such a function: lp.dwAvailPhys from
> GlobalMemoryStatus(&lp)

 This depends on memory management. Most likely the value is far
from random, but divisible by 0x1000, which is the base block size
on most PCs.

> and timebuffer.millitm from _ftime( &timebuffer )

 The resolution of this is not true milliseconds. There may be long
sequences where the least significant bit(s) stay 0 or 1 while the
incrementing happens in the higher bits.

> output is like 1111111111111111000001111111111111111111111111111111 which
> seems not to be random. Here is my code , could somebody tell me what I am

 Try higher bits. Or different sources. If I remember right there's
tickCount() or something in the Win32 API which should be better for
producing timer values. And I don't think combining some memory usage
characteristic with a simple timer is a very great idea.

 Your procedure is something like this:

int rnd(const int *a, const int *b) {
    if (*a == *b)
        return rnd(a + 1, b + 1);   /* equal pair: skip to the next bits */
    return *a;                      /* unequal pair: output the first bit */
}

Example with poor inputs:

mem (long period in the lowest bit)
time (shorter period)

111111110000000111111100000001
110011001100110011001100110011

returned sequence:
--11--1100--00-1--11----00--0-
= 11110000111000

or:

1111111100000001111111000000011
1010101010101010101010101010101

returned sequence:
-1-1-1-10-0-0-01-1-1-10-0-0-01-
= 11110000111100001

 Not so random, eh. 

 The lengths of these periods are not the problem; the existence
of such periods is.
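You can simulate this yourself (a Python sketch; the variable names are
mine):

```python
def extract(a_bits, b_bits):
    # The procedure above: skip equal pairs, output the first bit of
    # each unequal pair.
    return [a for a, b in zip(a_bits, b_bits) if a != b]

# Two correlated, periodic "sources": a slow lsb and a fast one.
mem  = [(i // 8) % 2 for i in range(64)]   # long runs, like the memory lsb
tick = [i % 2 for i in range(64)]          # alternating, like a coarse timer

out = extract(mem, tick)   # comes out perfectly periodic: 00001111...
```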


        JanneT

-- 
      Janne Tuukkanen            Fools ignore complexity
  [EMAIL PROTECTED]       Pragmatists suffer it
   Simple Script Security:       Some can avoid it
 http://projannet.port5.com/     Geniuses remove it   - A.J.Perlis

------------------------------

From: [EMAIL PROTECTED] (Tom St Denis)
Subject: Re: RNG
Date: 14 Jun 2001 04:50:38 -0700

[EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote in message 
news:<[EMAIL PROTECTED]>...
> [EMAIL PROTECTED] (Dobs) wrote in <9g8rnd$kvt$[EMAIL PROTECTED]>:
> 
> >According to the Bruce Schneier book I made such a random number
> >generator: I took two random sources from the system of the computer- I
> >menaged to find to such a function :  lp.dwAvailPhys from
> >GlobalMemoryStatus(&lp) and timebuffer.millitm from _ftime( &timebuffer
> >) which seems to return random values from the system. I took last
> >significant bit of each, compare them:if it is the same I take to
> >compare new couple of bits or if it is different I took as my output the
> > first bit. However my output is like
> >1111111111111111000001111111111111111111111111111111 which seems not to
> >be random. Here is my code , could somebody tell me what I am doing
> >wrong? Thanks. Best Regards.Mike 
> >
> 
>    Since you asked. The first mistake was reading the BS crypto book.
> At least that's my humble view. I just hope you weren't foolish enough
> to spend a lot of bucks on it.

Gimme three good reasons why Schneier's book is a waste of money.

Also, you failed to answer his question.

To the OP:  The reason your output looks poor is that your source
of input is poor.  Note that in "BS's Book" [as D.S. likes to call it]
he does mention how to really get some good bits.  All you have to do
is READ the book properly and follow the guidelines.

For starters, things like "kb of free mem" is not an entirely random
value.  Nor is the lsb of counters.  They will work for a bit, but not
for much more.

Also you should tap as many sources as possible.  Hook IRQ interrupts,
grab network datagrams, audio output, mouse movements, keyboard
timings/keys, screen contents, etc...

Grab all this entropy and mix it into something like Yarrow [except
replace SHA-1 with SHA-256 or TIGER-192 or Haval-256, and replace 3DES
with AES].
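Something like this in spirit (a Python sketch; the sources here are mere
stand-ins for the real taps listed above, and the mixing is a bare hash
rather than Yarrow's full accumulate/reseed machinery):

```python
import hashlib
import os
import time

def gather_pool():
    # Concatenate samples from several (individually weak) sources and
    # let a strong hash do the mixing.
    pool = hashlib.sha256()
    pool.update(os.urandom(16))                        # stand-in for IRQ/mouse/net samples
    pool.update(str(time.perf_counter_ns()).encode())  # high-resolution timer jitter
    pool.update(str(os.getpid()).encode())             # misc. system state
    return pool.digest()
```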

Tom

------------------------------

From: "Philip G. Boys" <[EMAIL PROTECTED]>
Subject: Re: Problem in Twofish
Date: Thu, 14 Jun 2001 13:02:23 +0100

Saurabh Pal <[EMAIL PROTECTED]> wrote:

Can anyone help me with a problem related to a function in the Twofish
encryption algorithm?

In the function 'ParseHexDWord', after getting the decimal value of the
parsed hex character in b, the last line in the for loop is

d[i/b] |= b<<(4*((i^1)&7));

I am unable to understand what is happening with DWORD b in this line,
before writing it to array d.

Thanks and waiting in anticipation.

Saurabh Pal
Allahabad, India.

---- ---- ----

hi - i'm not a c programmer but if i had a similar problem i would break the line
down into sub-expressions:

=== expanded version ===
m = (i ^ 1);
n = (m & 7);
p = (4 * n);
q = (b << p);

=== original version ====
r = b << (4 * ((i ^ 1) & 7));

print out m, n, p, q and r to see what's going on in the algorithm
and to check that the q and r values are equal (expansion = original)

finally:

d[i/b] |= r;
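as a cross-check i ran the same bit positioning in python (assuming the
array index is meant to be i/8, i.e. eight hex digits per dword; the 'i/b'
in the quoted line looks like a transcription slip):

```python
def parse_hex_dword(s):
    # Re-implementation of the bit positioning: digit positions are
    # pair-swapped by i^1, so the first two hex chars land in the low
    # byte of the DWORD.
    d = 0
    for i, c in enumerate(s[:8]):
        b = int(c, 16)                   # value of the hex digit
        d |= b << (4 * ((i ^ 1) & 7))    # the line in question
    return d

print(hex(parse_hex_dword("01234567")))  # -> 0x67452301 (little-endian bytes)
```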

phil.





------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
