Cryptography-Digest Digest #349, Volume #9        Tue, 6 Apr 99 23:13:03 EDT

Contents:
  Re: Test vector repository--specifically, help with a broken Blowfish   
implementation. (Wei Dai)
  Re: Geometric Identification (Darren New)
  a question about time and DES cracking (Pete)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: Geometric Identification ([EMAIL PROTECTED])
  Re: Histogram of my RC4 cipher stream (Nathan Kennedy)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Guaranteed message authentication faster than MD5 (D. J. Bernstein)
  Re: Real-time crypting (Boris Kazak)
  Windows PWL files ("Robin Keir")

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Wei Dai)
Subject: Re: Test vector repository--specifically, help with a broken Blowfish   
implementation.
Date: Tue, 6 Apr 1999 16:39:16 -0700

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
> The main thing I'm looking for is if there is any place or site with a pile
> of comprehensive test vectors for various algorithms.

The Crypto++ library (see http://www.eskimo.com/~weidai/cryptlib.html) 
includes test vectors for MD2, MD5, SHA-1, HAVAL, Tiger, RIPEMD-160, 
MD5MAC, HMAC/MD5, XMACC/MD5, DES, IDEA, SAFER, RC2, ARC4, RC5, Blowfish, 
Diamond2, 3-WAY, GOST, SHARK, CAST-128, Square, SEAL, RC6, MARS, RSA, 
and DSA. Most are in .dat files, and the rest are hard coded into the 
validat?.cpp files.
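Checking a published vector by hand is a one-liner in most environments; a minimal sketch using Python's hashlib against the standard RFC 1321 (MD5) and FIPS 180-1 (SHA-1) vectors:

```python
import hashlib

# Published test vectors: RFC 1321 Appendix A (MD5), FIPS 180-1 (SHA-1).
vectors = [
    ("md5",  b"",    "d41d8cd98f00b204e9800998ecf8427e"),
    ("md5",  b"abc", "900150983cd24fb0d6963f7d28e17f72"),
    ("sha1", b"abc", "a9993e364706816aba3e25717850c26c9cd0d89d"),
]

for alg, msg, expected in vectors:
    digest = hashlib.new(alg, msg).hexdigest()
    assert digest == expected, (alg, msg, digest)
print("all vectors pass")
```

The same pattern extends to the .dat files: read each line, split into (message, digest), and assert.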

------------------------------

From: Darren New <[EMAIL PROTECTED]>
Subject: Re: Geometric Identification
Date: Wed, 07 Apr 1999 00:44:36 GMT

>         Abstract.       This paper presents a simple, yet effective method of
> user identification using geometric objects.  The process is quite simple,
> uses little memory, and can be performed relatively quickly.

Cool. Two comments.  (I'm not a cryptographer, so take that with a grain
of salt...)

First, this doesn't really identify users. I can't imagine someone
memorizing a key that consists of mathematical descriptions of objects.
It identifies computers or programs, perhaps. But that's a minor
terminology nit.

Second, each side could answer just part of where the "ball" exited,
and use internal information to generate session keys. For example,
with a sufficiently complex B, the X's could be used to verify identity
and the Y's could be used as bits of the session key. Or the number of
bounces could be used as bits of the session key.  Or something like
that.  Just a thought.

-- 
Darren New / Senior Software Architect / MessageMedia, Inc.
     San Diego, CA, USA (PST).  Cryptokeys on demand.
"Practical Necromancy: Chapter One - Proper Use of Shovels"

------------------------------

From: [EMAIL PROTECTED] (Pete)
Subject: a question about time and DES cracking
Date: 7 Apr 1999 00:53:51 GMT

dear all,

i'm running john the ripper DES cracker on my home linux password file.

i have to say that i'm extremely surprised to see how slowly the
incremental cracker works.  it's been working for weeks, and i've only
obtained about 10 cracked passwords.

does anybody know any order of magnitude estimates on unix password hacking
and time on a typical pentium II/300?   are we talking decades, years or
months?
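For a rough sense of scale, here is a back-of-the-envelope sketch. The crack rate below (~50,000 crypt(3) hashes/sec for a PII/300-class machine) is an assumed figure, not a measurement; john's own --test option reports the real number for your box:

```python
# Rough exhaustive-search times for classic 8-character crypt(3) passwords.
rate = 50_000            # hashes per second -- ASSUMED figure for a PII/300

lowercase = 26 ** 8      # 8 chars, a-z only
printable = 95 ** 8      # 8 chars, full printable ASCII

days  = lowercase / rate / 86_400
years = printable / rate / 86_400 / 365

print(f"a-z only : about {days:.0f} days")
print(f"printable: about {years:.0f} years")
```

So weak (dictionary-word or all-lowercase) passwords fall in days to weeks, while full-keyspace search of well-chosen passwords is out of reach for a single workstation; that is why the incremental mode slows to a crawl after the easy hits.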

pete

--
NEWS FLASH:   Just compiled a new kernel 2.2.4!  YEAH!!!
================================================================
http://landau.ucdavis.edu/psalzman   [EMAIL PROTECTED]
One world, one web, one program. -- Microsoft Ad Campaign
Ein Volk, ein Reich, ein Fuhrer. -- Nazi Ad Campaign
<=>+/\/-=Prevent world domination, Install Linux today!=-\/\+<=>
================================================================
  The best way to accelerate a win95 system is at 9.81 m/s^2

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Tue, 06 Apr 1999 23:41:28 GMT
Reply-To: [EMAIL PROTECTED]

On Tue, 06 Apr 1999 11:29:14 +0200, Mok-Kong Shen
<[EMAIL PROTECTED]> wrote:

>In the spirit of science you have to be consistent. If you recognize
>the correctness of the theory of statistics then you have to
>accept the results of its applications, in the present context the
>results of tests on given sequences, whether these be generated 
>according to algorithms or from physical devices.

I have never denied the validity of statistics where it is applicable.


I am claiming that it is not applicable to the determination of true
randomness.

Randomness is an axiom of statistical theory and cannot be decided
from that theory. To do so would be circular.

>If you are in a position
>to dependably claim that a given sequence is random or not random (in 
>certain acceptable measure/sense), you should be able to do that 
>without knowing its origin.

By now you should know that true randomness is a property of the
generation process, not the numbers it produces. Therefore you cannot
possibly hope to know if the RNG is random unless you know what that
process is.

I could give you two identical sets of numbers and one of them could
have been generated by an algorithm and the other generated randomly. 

What test would tell which one is which?
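To make this concrete, here is a completely deterministic stream (MD5 of an incrementing counter, a toy construction of my own) passing a simple monobit frequency check just as readily as a hardware source would:

```python
import hashlib

# Deterministic "pseudo" stream: MD5 of an incrementing counter.
stream = b"".join(hashlib.md5(str(i).encode()).digest() for i in range(128))

# 128 * 16 bytes = 16384 bits; an unbiased source averages ~8192 ones.
ones = sum(bin(byte).count("1") for byte in stream)
print(ones)  # within a few standard deviations of 8192
```

The count alone cannot reveal that the stream came from an algorithm rather than a physical device.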

>(For quite similar reasons medical tests
>of new drugs are performed in the so-called doubly blind tests.)

Those are not tests for randomness.

>Statistical tests, as far as I know, never claim in absolute
>sense that a given sequence is random/non-random.

Sequences are not random - the process that generates them is random.

Randomness pertains to the process, not its output.

Bob Knauer

Signature files traditionally present comments from a population with an IQ over 100.
I thought it would be appropriate to present comments from the other half of the
population, namely the one with an IQ under 100. After all, they do make up 50% of
the human condition. Never mind that they also tend to become politicians. Here is
a representative example:

"People blame me because these water mains break, but I ask you, if the water mains
didn't break, would it be my responsibility to fix them then? WOULD IT!?!"
- Marion Barry, Mayor of Washington DC


------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Tue, 06 Apr 1999 23:41:20 GMT
Reply-To: [EMAIL PROTECTED]

On Tue, 06 Apr 1999 03:00:36 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:

>I obviously have already worked it out, and yes, it can be done
>on the back of a very small envelope.

So can astrology.

>The point is, you don't know how,

That is a rather smug assertion. But coming from you, it is typical.

>and refuse to learn,

I don't need to, if I already know it.

>yet feel entitled to criticize the very technology that you don't understand.  Why?

Because I am a Devil's Advocate, that's why.

Don't you pay attention at all? I have said that so many times it is
becoming hackneyed. My agenda here is to see if there is a prevailing
consensus on fundamental issues like true randomness.

And guess what - I do not take YOUR opinion automatically to be the
correct one. Only when a larger number of people can answer all the
questions that I can come up with will a prevailing consensus emerge
that I will consider correct - and then only tentatively.

I learned a long time ago not to take the opinion of so-called
"experts" without a critical investigation, especially when true
experts do not agree with their position.

You have completely dismissed the fact that Triola states in clear
English that parametric statistical tests cannot be used to determine
randomness. Williams & Clearwater, Li & Vitanyi, Feller - all these
experts say the same thing, yet you ignore them because you imagine
yourself to be smarter than they are. Well, I do not believe you are
smarter than they are. In fact, I have very good reasons to believe
that you are a phony.

There - did that answer your question "why?" above?

>No, now that you have clarified that you mean just the integers

Don't give me that bullcrap. The definition of a TRNG contains the
term "sequence", which is defined as a string of bits. There is no
decimal anywhere in a string of bits. The fact that sequences can be
looked upon as integers is completely obvious from the outset.

You simply do not pay close attention to what I write, but read things
in from your bigoted position of imagining you are the smartest person
on Earth.

>(and apparently allow negative integers, although the same problem
>occurs either way), the question stands.  It is a serious point.

What a real dickhead you are. Anybody knows that the interpretation of
a string of bits as negative is implementation dependent, and not an
intrinsic property of the sequence per se.

Do you know how it is done? Can you say "top bit"? There, I knew you
could.

>> Equidistribution means that the sample space has a flat distribution.

>That isn't very precise

HUH?! It is as precise as it can ever get. How imprecise is it to say
that the probability is the same for each item in the ensemble?

>but what I suppose you mean is that the
>TRNG has the same chance of generating
>       42
>as of generating
>       1234566778901033909867041675
>In which case, there is practical certainty (measure 1) that the
>*first* number ever generated by any actual "TRNG" (if it could
>exist) would be too large to be represented in whatever equipment
>realized the TRNG.

Not if it were stored externally as it was being generated, like on a
storage device.

>So there is something wrong with your definition.

It is not MY definition. It is the *prevailing* consensus of real
experts.

>Others have
>attempted to substitute workable definitions, in order to make
>*some* sense out of your "TRNG" concept.

Others have not managed to generate a prevailing consensus among real
experts.

BTW, what is YOUR "workable definition"? A uniform Bernoulli process?
A so-called zero-order Markov process? Whatever it is, why don't you
present it, expound on it, and defend it from any questions about it? 

Until you do, people are going to assume correctly that you can't.

>And what does "independent" mean in *that* context, if not
>related to the usual statistical meaning?

But I am not an Expert here - I am the Devil's Advocate.

So tell us exactly what the "usual statistical meaning" of the notion
of "independent" is. Do try to avoid circular definitions like
"independent means equal probability", etc.

What is the most fundamental definition of "independent", and is it
solely a property of the population distribution or of the selection
method? They are different aspects of true random number generation.

I believe independent refers to the sample selection method in
statistical theory. Therefore it is axiomatic in that theory and
cannot be proven by the very statistical theory it supports. You
cannot say these things in this order:

1) Statistical tests rely on random sampling.

2) Statistical tests can demonstrate the randomness of sampling by
detecting departures from some property of a population distribution.

IOW, randomness is not a property of a population distribution in
statistics. It is a property of the sampling procedure in statistics.
It is axiomatic to statistical procedures and cannot be proven using
those procedures.

>If you have a specific design in mind (perhaps borrowed from a
>book you read),

which is my only source of ideas, since I am not an Expert

I don't mind standing on the shoulders of giants.

>then that's pretty "absolute".  We *know* what
>statistics result from application of QM (at least, the standard
>theory used in work on "quantum computing") in various situations,
>so given a specific design, we can in principle derive the model
>for the distribution it generates.

You need to justify that claim.

Once again you are assuming that randomness refers to the population
distribution, instead of the generation process that created that
distribution. My claim is that you cannot infer the randomness of the
generation process by sampling the distribution.

>In fact, this is done all
>the time in physics laboratories everywhere.  For example, in
>PAC work I was involved with, the model turned out to be a sinusoid
>imposed on a decaying exponential, convolved with the response of
>the sampling equipment.  The basis of the phenomenon was photon
>pair correlation from a certain radioactive decay.  Just because
>the root phenomenon is fundamentally "quantum" doesn't mean that
>statistics is inapplicable, as you seem to have been arguing.

I have not argued that. I have merely restated the claim by Williams &
Clearwater that a quantum algorithm is now available for calculating
true random numbers. All that is presumably left to do is to program
it on a quantum computer, when one becomes available. That will then
constitute the standard for true random numbers.

>If it walks like a duck, quacks like a duck, and looks like a
>duck, then it's a duck, for all practical purposes.  Or are you
>making a distinction without a difference?

Why do you believe that classical chaotic processes are truly random?

We know that quantum processes are truly random.

Bob Knauer



------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Geometric Identification
Date: Wed, 07 Apr 1999 02:00:37 GMT

[EMAIL PROTECTED] wrote:
>       Abstract.       This paper presents a simple, yet effective method of
> user identification using geometric objects.  The process is quite simple,
> uses little memory, and can be performed relatively quickly.

Your energy and enthusiasm are refreshing, but tired old
cryptologists can shoot this one down.

First, the method doesn't compare well to others.  Given a
shared secret of reasonable size, existing challenge/response
methods are simpler and more secure.  If you have a shared
secret and SHA-1, why bother with the geometry?

More recent "key amplification" techniques can establish a
secure channel using a small shared secret - so small that
an adversary could search the keyspace.


Against this geometric method, the chosen starting point and
angle attack looks promising.  I'm guessing that's why you
limit the secret to 1000 uses, but I don't see any
justification for the number.

You suggest using either floating point or fixed point
numbers, thereby laying a huge implementation trap. Floating
point and avalanche don't mix.  Even CPUs that follow the
same standard will not always get identical results for
floating point operations; they'll occasionally round a
result to different model numbers.  Combine that with the
avalanche effect we cryptographers love (and the numerical
methods folks despise) and you get non-portable ciphertext.
Rounding to fewer digits doesn't eliminate the problem,
it only makes the bug bite less frequently.
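The hazard is visible even on a single machine: a perturbation of one part in ~2^52, amplified by an avalanche-style iteration, swamps the state entirely. A toy illustration using the logistic map (standing in for the paper's actual bounce computation, which I have not implemented):

```python
# Two starting points differing by roughly one ulp diverge completely
# under a chaotic (avalanche-like) iteration.  The logistic map here is
# a stand-in for the proposed geometric bounce computation.
x = 0.6
y = 0.6 + 2.0 ** -52       # a perturbation of one part in ~2^52
max_gap = 0.0
for _ in range(60):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)
    max_gap = max(max_gap, abs(x - y))
print(max_gap)  # large: the rounding difference has swallowed the state
```

If two CPUs round even one intermediate result differently, their "identical" computations end up this far apart, hence non-portable ciphertext.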

--Bryan


------------------------------

From: Nathan Kennedy <[EMAIL PROTECTED]>
Subject: Re: Histogram of my RC4 cipher stream
Date: Wed, 07 Apr 1999 09:27:13 +0800

Anonymous wrote:
> 
> Statistical analysis such as this proves little as to a cipher's strength,
> but in response to the other fellow's postings, these are the histograms
> I've found from several test runs of my RC4 cipher code.  I don't see any
> immediately-apparent bias towards generating zeros.

There is certainly none, at least at the scale of data you are
considering.  In fact, you will almost certainly never see a bias in the
distribution of the bytes output by RC4.  The CAT test (counting
combinations that show up, like CAT or FOO), however, reveals slight biases
in RC4 after a huge (on the order of trillions of bytes) quantity of data
has been generated.  This WILL show through on huge amounts of repetitive
data, but provides no practical attacks.

Basically, RC4 has been analysed statistically and there have been many
academic weaknesses found, but no really practical weaknesses found.  From
the SFS docs:

Among the weaknesses in RC4 are that there is too high a
likelihood during the initialization phase that small values
will remain in small positions in the initial permutation; user
keys are repeated to fill 256 bytes, so 'aaaa' and 'aaaaa'
produce the same permutation; results are looked up at
pseudorandom positions in the array, and if some internal state
causes a certain sequence of positions to be looked up, there are
255 similar internal states that will look up values in the same
sequence of positions (although the values in those positions
will be different), from which it can be shown that cycles come
in groups of 2^n, where all cycles in a group have the same
length, and all cycles are of an odd length * 256 unless they are
in a group of 256; there is a bias in the results so that, for
example, the pattern "a a" is too likely and the pattern "a b a"
is too unlikely, which can be detected only after examining about
8 trillion bytes; the internal state is not independent of the
results, so that with a given result there are two patterns in
the internal state that appear 1/256 times more often than they
ought to; and at least two separate methods exist for deducing
the internal state from the results in around 2^900 steps.

[obviously 2^900 is a lot harder than 2^128 or so steps needed to brute
force your key, and much less effective]
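The key-repetition weakness quoted above is easy to verify directly with the standard RC4 key schedule (a from-scratch sketch, not the SFS code):

```python
def rc4_ksa(key):
    """Standard RC4 key-scheduling algorithm: returns the initial permutation."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

# The key is cycled to cover all 256 steps, so an all-'a' key of ANY
# length contributes the byte 'a' at every step -- identical permutations:
print(rc4_ksa(b"aaaa") == rc4_ksa(b"aaaaa"))  # True
```

Any key equal to a repetition of a shorter key collides the same way, which is why implementations are advised to hash user keys before feeding them to RC4.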

Nate
--
bogosort(int *l,int n){int i,a,b;do{a=random()%n;b=random()%n;i=l[a];
l[a]=l[b];l[b]=i;i=1;while(i<n&&l[i-1]<=l[i])i++;}while(i<n);}

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Tue, 06 Apr 1999 23:41:22 GMT
Reply-To: [EMAIL PROTECTED]

On Tue, 06 Apr 1999 03:06:42 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:

>Far from it -- in fact I explained how it was done to a fellow
>who asked politely via e-mail.  I'm sure Gillogly, Reed, and
>others who have actually practiced cryptanalysis understand
>the methodology, also.  It illustrates why OTP is not used in
>very many actual cryptosystems, despite its theoretical
>properties having been well understood since before WWII.

If the keystream is generated by a true random process and used only
once, the OTP cipher it produces is proveably secure.

What happens in reality is that someone uses a non-true-random
keystream and information leaks giving the cryptanalyst a reasonable
certainty that a particular decryption is the intended plaintext.

But if the keystream is truly random, then there is no reason to
prefer one decryption over the other, since the cipher can be
decrypted into all possible messages equiprobably.
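That property is a two-line XOR identity: given ciphertext c and any candidate plaintext p' of the same length, the key k' = c XOR p' is exactly as plausible a priori as the real one. A minimal sketch:

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"ATTACK AT DAWN"
key = os.urandom(len(plaintext))           # one-time pad key, used once
ciphertext = xor(plaintext, key)

# An analyst can "decrypt" the same ciphertext into ANY equal-length
# message by positing a different -- equally likely -- key:
candidate = b"RETREAT AT TEN"
fake_key = xor(ciphertext, candidate)
print(xor(ciphertext, fake_key))           # b'RETREAT AT TEN'
```

Since every key is equiprobable under a truly random pad, the ciphertext gives the analyst no basis for preferring one decryption over another.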

Bob Knauer



------------------------------

From: [EMAIL PROTECTED] (D. J. Bernstein)
Subject: Guaranteed message authentication faster than MD5
Date: 6 Apr 1999 23:34:22 GMT

The first release of my hash127 package is available from

   http://pobox.com/~djb/hash127.html

hash127 computes a 16-byte secret-key signature of an arbitrarily long
message. An attacker cannot guess a signature of a different message
with probability better than b/2^{131}, where b is the total number of
bits in the messages and signatures.

hash127 is the first ``universal hash function'' at this security level
to break the MD5 speed barrier. The first release of hash127 is faster
than the best asm versions of MD5 on a variety of architectures. hash127
can be used as a fast replacement for HMAC-MD5 or HMAC-SHA.
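For flavor: the construction is, as I understand the package's documentation, polynomial evaluation modulo the Mersenne prime 2^127 - 1. A toy version of that style of universal hash (none of hash127's actual bit packing or speed tricks):

```python
# Toy polynomial-evaluation universal hash mod the prime 2^127 - 1.
# For a random secret r, two distinct n-chunk messages collide with
# probability at most about n / 2^127.
P = 2**127 - 1

def poly_hash(chunks, r):
    """Evaluate the message (a list of integers < P) as a polynomial at r."""
    h = 0
    for c in chunks:
        h = (h * r + c) % P
    return h

r = 123456789              # secret evaluation point (toy value; use a random one)
print(poly_hash([5, 17, 42], r))
```

The information-theoretic collision bound is what makes the b/2^{131} forgery guarantee possible, independent of any unproven hardness assumption.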

I've set up a mailing list for discussions related to hash127. To join,
send an empty message to [EMAIL PROTECTED]

---Dan

------------------------------

From: Boris Kazak <[EMAIL PROTECTED]>
Subject: Re: Real-time crypting
Date: Tue, 06 Apr 1999 20:06:28 -0400
Reply-To: [EMAIL PROTECTED]

Andrew I. Baznikin wrote:
> 
> If someone has the possibility to send me C/C++ sources of functions or
> programs for real-time crypting or may be if you know any URLs please
> send me it!
> 
>     Thank you, Baznikin Andrew
>     <[EMAIL PROTECTED]>
==========================
  It is not clear what you mean by "real-time". Should the 
program work on-line? How fast must the response be? Must the 
program only encrypt, or both encrypt and decrypt? 
   For encryption algorithms you can log on to
    < http://www.rpini.com >
 and browse their Crypto CD, in the directory /source/ciphers.
The site is in Switzerland.

   Best wishes               BNK

------------------------------

From: "Robin Keir" <[EMAIL PROTECTED]>
Subject: Windows PWL files
Date: Wed, 07 Apr 1999 03:08:41 GMT

Hi everybody. As a programming exercise I was thinking of implementing a
Windows PWL file cracker. In case you don't know what a Windows PWL file is,
it is an encrypted repository for all of your common Windows passwords i.e.
logon password, web passwords, network passwords etc. However, I am unable
to find all the details I need about the file format, encryption methods
used etc.

I've seen various bits and pieces on the web (Vitas Ramanchauskas et al) but
most sources seem unclear and confuse the issue with misleading references
to old style (Win 95 pre OSR2) intermingled with new style (Win95
OSR2/Win98) PWL file formats and algorithms.

Firstly I would like to concentrate on OSR2/Win98 format PWL files. From
what I understand it uses MD5 for hashing the key and RC4 for encrypting the
data, but I am unclear as to whether it still also uses the old style key
routine (32 bit key) as well. Some sources claim it uses a key created from
a "9 round MD5 algorithm", which seems odd since by my understanding MD5
uses a 4 round encoding scheme. I have implemented both MD5 and RC4
algorithms and they work correctly but without the necessary file format
information I can't begin to know how to parse the PWL file.

Any pointers anybody??

Thanks




------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
