Cryptography-Digest Digest #439, Volume #9 Wed, 21 Apr 99 14:13:03 EDT
Contents:
SHA-1 as f-function in Feistel construction (RDJ)
Re: RC6 new key standard from AES conference? (Paul Koning)
Re: Radiation/Random Number question (Martin Peach)
Re: How long for export application? (Kent Briggs)
Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO (SCOTT19U.ZIP_GUY)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: Thought question: why do public ciphers use only simple ops like shift and XOR?
(John Savard)
Re: Thought question: why do public ciphers use only simple ops like shift and XOR?
(John Savard)
Re: Radiation/Random Number question (Medical Electronics Lab)
----------------------------------------------------------------------------
From: (RDJ)
Subject: SHA-1 as f-function in Feistel construction
Date: Wed, 21 Apr 1999 14:30:18 GMT
I'm wondering how effective the scheme described in the subject might
be. For instance:
using 256-bit blocks, the f-function would return the first 128 bits
of the SHA-1 hash of the right half of the data block and a subkey.
just a thought
I'm just learning about cryptography, so please pardon my naive
musings.
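A minimal sketch of that construction, assuming (as one hypothetical
choice, not anything RDJ specified) that the subkey is simply appended
to the right half before hashing:

```python
import hashlib

def f(right: bytes, subkey: bytes) -> bytes:
    """First 128 bits (16 bytes) of SHA-1(right-half || subkey)."""
    return hashlib.sha1(right + subkey).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(block: bytes, subkeys) -> bytes:
    """Feistel network over a 256-bit (32-byte) block."""
    left, right = block[:16], block[16:]
    for k in subkeys:
        left, right = right, xor(left, f(right, k))
    return left + right

def decrypt(block: bytes, subkeys) -> bytes:
    """Run the rounds in reverse; the f-function never needs inverting."""
    left, right = block[:16], block[16:]
    for k in reversed(subkeys):
        left, right = xor(right, f(left, k)), left
    return left + right
```

Because the Feistel structure only ever XORs the f-function output,
decryption works even though SHA-1 itself is one-way.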
RDJ
------------------------------
From: Paul Koning <[EMAIL PROTECTED]>
Subject: Re: RC6 new key standard from AES conference?
Date: Wed, 21 Apr 1999 10:09:58 -0400
Paul Rubin wrote:
> >If the SmartCard people want to use the AES algorithm then they
> >should design their cards to do so (not the other way around).
>
> The point of smart cards (and microcontrollers, which interest me more
> than smart cards) is that they are very cheap to manufacture because
> the hardware doesn't need to be very powerful. If a cipher needs
> kilobytes of ram or a 32-bit multiplier, that increases the hardware
> requirements a lot and makes the smart card much more expensive. When
> the card is more expensive, it can't be used in as many applications.
> So the applications that can't afford AES-capable smart cards end up
> using nonstandard ciphers, or no cryptography at all, and security
> suffers.
I still have a hard time understanding why smart cards use 15-year-old
microprocessor technology. Things like 32-bit multipliers used to
be expensive, but in these days of 0.18-micron linewidths I don't
see the issue. The smart cards that are being used for reference
in these discussions seem to suffer, most of all, from absurdly low
amounts of memory given the state of the art even of some years
ago.
I suppose it's nice to keep using the design investment of 5 years
ago, but is that a prudent basis on which to evaluate cryptosystems?
paul
------------------------------
From: [EMAIL PROTECTED] (Martin Peach)
Subject: Re: Radiation/Random Number question
Date: Wed, 21 Apr 1999 15:05:56 GMT
The shortwave band around 19 MHz is mainly cosmic (Jupiter, galactic)
radiation, a very nice sounding hiss with some bursts from lightning
and machines mixed in...
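The "distill it" suggestion quoted below might be sketched as hashing
large blocks of raw samples down to short digests; the 4000-byte block
size here is an arbitrary illustration, not a measured entropy rate:

```python
import hashlib

def distill(samples: bytes, block_size: int = 4000) -> bytes:
    """Concentrate the entropy of noisy audio by hashing each large
    block of raw samples down to a 20-byte SHA-1 digest."""
    out = bytearray()
    for i in range(0, len(samples) - block_size + 1, block_size):
        out += hashlib.sha1(samples[i:i + block_size]).digest()
    return bytes(out)
```

The hash acts as a whitening step: even if each raw sample carries only
a fraction of a bit of entropy, a large enough block should carry well
over 160 bits, and the digest compresses that into a short output.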
Martin.
On Tue, 20 Apr 1999 20:21:00 GMT, [EMAIL PROTECTED]
(Jim Dunnett) wrote:
>On Tue, 20 Apr 1999 17:58:40 GMT, [EMAIL PROTECTED] wrote:
>
>>Use a sound card to acquire FM hiss, and distill it. This will
>>give you several orders of magnitude more entropy/sec than
>>any radioactive decay you'd want to play with. Much more
>>practical than any of this fun radiation stuff Dr. Mike et al.
>>enjoy.
>
>Any thoughts on using an FM radio with a shielded dummy
>antenna for your hiss?
>
>--
>Regards, Jim. | The comfortable estate of widowhood is
>olympus%jimdee.prestel.co.uk | the only hope that keeps up a wife's
>dynastic%cwcom.net | spirits.
>marula%zdnetmail.com | - John Gay 1685 - 1732
>Pgp key: pgpkeys.mit.edu:11371
>
------------------------------
From: Kent Briggs <[EMAIL PROTECTED]>
Subject: Re: How long for export application?
Date: Wed, 21 Apr 1999 16:57:08 GMT
Dale Mosby wrote:
> Does anyone have a recent data point for the time required
> to get a classification request run though the system? (A question
> for USA readers of course.)
>
> Feb 2nd an application for "Classification Request" went to the
> Bureau of Export Administration. It has been with the NSA since
> February 26th. As of April 12 BXA can't tell me anything other
> than "it's with NSA". All I'm trying to do is get approval to change
> a 40 bit DES algorithm to 56 bit DES. This has to be the most
> straightforward request that could be submitted. Has anyone
> run something through recently? I'd love to know what sort
> of turn-around time people have experienced.
>
> Thanks, Dale [EMAIL PROTECTED]
You should have done it before April 1 because the EAR had a shortcut
built-in that allowed you to submit a certification that automatically
upgraded your previously approved 40-bit application to 56-bit (assuming
no other changes). That deadline has now passed, however. I did a
"7-day fast track" application a couple months ago and it took about 5 or
6 weeks to go through. If you don't use the BXA's official 3-part form,
typed up with the proper fields filled in, it'll delay things even
longer.
--
Kent Briggs, [EMAIL PROTECTED]
Briggs Softworks, http://www.briggsoft.com
------------------------------
From: SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]>
Subject: Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO
Date: Wed, 21 Apr 1999 15:30:00 GMT
In article <[EMAIL PROTECTED]>,
Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> SCOTT19U.ZIP_GUY wrote:
>
> > But since compression does not in general have a secret key,
> > one is free to write and give away compression programs. Part
> > of the security of encrypted text relies on the entropy of the
> > file being encrypted, so if one is sending ASCII data it would
> > be nice to compress the data first. The problem is that most
> > compression methods involve headers or other telltale signs
> > that give information to an attacker. So it is best to use a
> > method that leaves no traces in the compressed file.
>
> I like but haven't yet tried to code any compression algorithms. So
> I have some naive questions:
>
> (1) If one uses a normal Huffman, i.e. with a fixed known ('standard'
> or agreed upon by the partners) distribution of the symbols, does one
> still need some header? If yes, what does the header contain?
>
By "normal" I assume you mean static Huffman. One could assume a
distribution based on previous messages or whatever and use a static
Huffman method where the tree is passed one time and used many times.
However, if there is a slick way to send the table such that any file
could be treated as the compressed result of another file, I do not
know it. That would be interesting to look into.
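As a sketch of the header-free static case: if both partners derive the
code from the same agreed frequency table with the same deterministic
tie-breaking, no header or tree ever needs to be transmitted. The table
below is a made-up illustration:

```python
import heapq
from itertools import count

def huffman_codes(freqs):
    """Build a prefix code from an agreed symbol->frequency table.
    The counter makes tie-breaking deterministic, so both partners
    derive identical codes from identical tables."""
    tiebreak = count()
    heap = [(f, next(tiebreak), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (a, b)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

def encode(text, codes):
    """Concatenate the code words; no header is written."""
    return "".join(codes[ch] for ch in text)
```

The output stream contains only code bits, so an attacker sees no
embedded tree or table — the "header" lives in the shared convention.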
> (2) An adaptive Huffman must presumably start with some initial assumed
> distribution of the symbols, doesn't it? If so, how can one avoid
> the transmission of that initial distribution to the partner via a
> header, if it is not 'standard' or otherwise agreed upon by the
> partners?
>
Yes, it does start with some initial assumptions. The public-domain
one that I used started with no tree: the first symbol is passed
straight through, and a second token is used as the not-yet-seen
escape symbol, so whenever a new symbol is encountered, that escape
token is written to the output file along with the symbol itself as
it first appears.
I rejected this approach, since it means any new symbol, when first
compressed, is encoded as both the current not-seen token and the
8 bits of the new symbol. Instead I start with a full tree where each
symbol has a weight, and when a hit occurs the weight is greatly
increased. This leads to smaller files, since as more and more symbols
are used, the first string written to the output file is shorter than
in the other approach.
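The weight-boosting idea can be illustrated with ideal (Shannon) code
lengths; the boost of 32 here is an arbitrary choice for illustration,
not the value Scott's code actually uses:

```python
from math import log2

def ideal_code_length(weights, sym):
    """Ideal (Shannon) code length in bits for sym under current weights."""
    return -log2(weights[sym] / sum(weights.values()))

# Start with a full tree: every byte value present with weight 1, so the
# very first occurrence of any symbol already has a code (no escape token).
weights = {b: 1 for b in range(256)}
before = ideal_code_length(weights, 65)  # symbol 'A' before any hits

weights[65] += 32  # a hit: greatly increase the weight (32 is arbitrary)
after = ideal_code_length(weights, 65)
```

With a uniform full tree the first occurrence costs 8 bits, exactly
like a raw byte; each hit then shortens the symbol's code, without
ever emitting an escape token plus a raw byte as the escape-symbol
scheme must.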
The other problem that the public-domain compression did not address
was the ending problem. When the tokens are written out, the last
symbol may not end on a byte boundary. The public-domain method solved
this problem by always having the last few bytes of the compressed
file contain a count of the number of symbols used.
That information would be a great benefit to anyone trying to
break your encryption. So I end the file in such a way that
one of three conditions holds. A quick, incomplete explanation:
1) the compressed file ends exactly on a byte boundary and is sent as such;
2) the compressed file has filler bits in the last byte;
3) the compressed file is truncated so the last symbol is not fully written.
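The byte-boundary problem behind the first two conditions can be
sketched as plain bit packing (this is an illustration only, not
Scott's actual scheme; the truncation case is omitted):

```python
def pack_bits(bitstring: str):
    """Pack a string of '0'/'1' bits into bytes.
    Returns (data, pad) where pad is the number of filler zero bits
    added so the stream ends on a byte boundary (condition 2 above;
    pad == 0 corresponds to condition 1)."""
    pad = (-len(bitstring)) % 8
    bitstring += "0" * pad
    data = bytes(int(bitstring[i:i + 8], 2)
                 for i in range(0, len(bitstring), 8))
    return data, pad
```

The cryptographic point is that the decompressor must resolve the
ambiguity introduced by `pad` without an explicit symbol count, since
a count stored in the file would hand structure to an attacker.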
The logic of how and when is buried in my code. My English sucks
at writing a longer, precise definition, but the result is this:
take any file of x bytes, decompress it, and then recompress it.
You should get the same file back.
And don't ask for an explanation in another spoken language; I am
learning Spanish but am not there yet. As far as I can tell, nothing
explains it better than the C source code I include with the stuff.
David A. Scott
--
http://cryptography.org/cgi-bin/crypto.cgi/Misc/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
to email me use address on WEB PAGE
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Wed, 21 Apr 1999 15:00:04 GMT
Reply-To: [EMAIL PROTECTED]
On Wed, 21 Apr 1999 14:35:25 +0200, Mok-Kong Shen
<[EMAIL PROTECTED]> wrote:
>I would be rather careful in blindly accepting test reports of the
>manufacturers and audits done by their staffs for applications
>as critical as cryptology.
I assumed that the maker would conduct the tests in your presence. If
you buy a mail order TRNG, you are a fool.
>The FIPS 140-1 Monobit test that you so belittled
I am not belittling that test. I consider it useful as a diagnostic
aid, and only as a diagnostic aid.
>is meant to test one aspect of 1-bit bias.
I suppose that is why it is called the "monobit test", eh.
>You may argue about the details
>of that test as specified in the document,
Actually I have not argued about any details. My indictment of its
inadequacy is quite general. I only singled it out because it was the
first one presented in FIPS-140. There are 3 other tests that we
should critique.
>e.g. whether one shouldn't take a larger sample.
My criticism goes further than just sample size. I consider the test
to be far too simplistic, regardless of sample size. It makes a nifty
diagnostic aid, but that is all.
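For reference, the FIPS 140-1 monobit test really is tiny, which is
part of why it can serve only as a diagnostic aid. A sketch:

```python
def monobit_pass(bits) -> bool:
    """FIPS 140-1 monobit test: in a 20,000-bit sample, the number of
    ones X must satisfy 9654 < X < 10346 for the generator to pass."""
    if len(bits) != 20000:
        raise ValueError("monobit test is defined on 20,000-bit samples")
    x = sum(bits)
    return 9654 < x < 10346
```

Note what it does not do: a simple counter cycling 0,1,0,1,... passes
perfectly, which illustrates why passing says nothing about the process
being truly random.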
>But the usefulness of that test is certainly beyond question.
You are pontificating.
I also said that it is "useful" - as a diagnostic aid, and no more.
>And if you want to argue about the sample size of
>that test, please suggest a concrete sample size that is adequate
>in your judgement and say something to justify that.
The test itself does nothing to measure the error in rejecting the
null hypothesis. Doubling the sample size will not change that fact.
You cannot infer the ensemble average from the time average of a
sequence.
For some reason, you and others are having a fundamental difficulty in
accepting, much less understanding, these words from the giants in
Probability Theory. You should try your best to grasp the significance
of these statements, both in terms of their fundamental importance as
well as their practical importance.
+++++
"In everyday language we call random those phenomena where we cannot
find a regularity allowing us to predict precisely their results.
Generally speaking, there is no ground to believe that random
phenomena should possess any definite probability. Therefore we should
distinguish between randomness proper (as absence of any regularity)
and stochastic randomness (which is the subject of probability
theory). There emerges the problem of finding reasons for the
applicability of the mathematical theory of probability to the real
world".
--A. N. Kolmogorov (quoted in Li & Vitanyi, p. 52, op. cit.)
"The frequency concept based on the limiting frequency as the number
of trials increases to infinity does not contribute anything to
substantiate the application of the results of probability theory to
real practical problems where we always have to deal with a finite
number of trials."
--A. N. Kolmogorov (quoted in Li & Vitanyi, p. 55, op. cit.)
"We shall encounter theoretical conclusions which not only are
unexpected but actually come as a shock to intuition and common sense.
They will reveal that commonly accepted notions concerning chance
fluctuations are without foundation and that the implications of the
law of large numbers are widely misconstrued. For example, in various
applications it is assumed that observations on an individual
coin-tossing game during a long time interval will yield the same
statistical characteristics as the observation of the results of a
huge number of independent games at one given instant. This is not so.
Indeed, using a currently popular jargon we reach the conclusion that
in a population of normal coins the majority is necessarily
maladjusted."
--William Feller, "An Introduction To Probability Theory and Its
Applications", 3rd ed, p. 67
"Refined models [of coin-tossing and its relation to stochastic
processes] take into account that the changes and their probabilities
vary from trial to trial, but even the simple coin-tossing model
leads to surprising, indeed shocking, results. They are of practical
importance because they show that, contrary to accepted views, the
laws governing a prolonged series of individual observations will show
patterns and averages far removed from those derived for a whole
population. In other words, currently popular psychological tests
would lead one to say that in a population of "normal" coins most
individual coins are "maladjusted".
--Feller, p. 71.
" *Warning*: It is usual to read into the law of large numbers things
which it definitely does not imply. If Peter and Paul toss a perfect
coin 10,000 times, it is customary to expect that Peter will be in the
lead roughly half the time. *This is not true*. In a large number of
*different* coin-tossing games it is reasonable to expect that at any
*fixed* moment heads will be in the lead in roughly half of all cases.
But it is quite likely that the player who ends at the winning side
has been in the lead for practically the whole duration of the game.
Thus, contrary to widespread belief, the time average for any
individual game has nothing to do with the ensemble average at any
given moment."
--Feller, p.152.
+++++
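Feller's warning about time averages can be checked numerically:
simulate many independent games and record the fraction of time one
player is strictly in the lead. By the arcsine law, lopsided games are
more common than balanced ones. A quick sketch (seeded so the run is
reproducible; the game and sample sizes are arbitrary):

```python
import random

def lead_fraction(n_tosses: int, rng) -> float:
    """Fraction of tosses during which Peter (heads) is strictly ahead
    in a single fair coin-tossing game."""
    pos = 0    # Peter's net winnings
    lead = 0   # tosses during which pos > 0
    for _ in range(n_tosses):
        pos += 1 if rng.random() < 0.5 else -1
        if pos > 0:
            lead += 1
    return lead / n_tosses

rng = random.Random(1)
fractions = [lead_fraction(1000, rng) for _ in range(2000)]

# Contrary to intuition, fractions near 0 or 1 dominate fractions near 1/2.
extreme = sum(1 for f in fractions if f < 0.1 or f > 0.9)
middling = sum(1 for f in fractions if 0.4 < f < 0.6)
```

The ensemble average of the lead at any fixed moment is indeed about
one half, yet almost no individual game spends about half its time in
the lead — exactly Feller's distinction between ensemble and time
averages.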
Testing sequences has nothing to do with testing the process that
generates them. In order to characterize the process itself, you must
conduct experiments designed to reveal properties of the population as
a whole - the so-called ensemble average. That is a far more involved
program than merely looking at a small sample and inferring properties
from statistical theory that apply to other kinds of determinations of
significance. Determining error limits for the null hypothesis that p
= 1/2 for a candidate random process is not the same thing as deciding
if snow tires have better skid properties than last year's model.
But perhaps the most fundamental consideration is that if you could
infer the underlying processes responsible for quantum measurement
(i.e., the collapse of the wave vector) from the measurements of
quantum processes themselves, you would be able to construct a hidden
variable theory - something that has eluded physicists for nearly a
century. The very fact that this cannot be done should tell you that
deciding a process is not truly random is not a simple task.
>This is not meant to argue for or against what you said above: Since
>you are very interested in quantum phenomena, I like to call you
>attention to an URL I just got elsewhere related to that:
>http://www.eet.com/story/OEG19990419S0043
I remind you that Zurek's comments are highly speculative. The whole
area of the Physics of Information is quite interesting, but it has a
way to go to nudge orthodox QM out of its well-deserved perch as the
reigning monarch of physical theories. I point out that some forms
of the Physics of Information have a hidden variable agenda.
>Do you think it is practical/economical for users to do constant
>dismantling and assembling of equipments, even if you can assume
>some technical competency on their part?
I would imagine that to be automated. In fact the TRNG could be highly
automated in all respects.
>What you can do is to know from the sampled data that p has the
>expression above at a certain confidence level that is acceptable to
>you.
As Herman Rubin has pointed out on more than one occasion, such
thinking is a misuse of the notion of statistical significance.
You will never be able to grasp what is being expressed here as long
as you hold dear to your incorrect notions of statistical significance.
>You can never be absolutely sure that p IS within that range
>for the random process under your examination.
That is true if you use simplistic small sample statistical tests. In
fact, your statement is an expression of the problem in using such
tests to characterize a process as truly random - they do not result
in any reasonable certainty about the randomness or non-randomness of
the underlying processes.
>And you do that with
>statistical tests which you, however, have constantly claimed to be
>useless for testing arbitrarily given sequences!!!
You are still off in the weeds. Apparently you have never conducted
scientific experiments in basic research.
>(What tests other
>than statistical tests can give you the knowledge above??)
I have discussed this before in terms of experimental measurements
that test hypotheses about the candidate random process itself. In
fact I have outlined such experiments several times with regard to a
radioactive TRNG.
None of those experiments rely on statistical tests of the TRNG output
itself. The randomness of the TRNG output is inferred from the
processes and equipment which generate that output. It is certainly
true that statistical measures are used to characterize the subsystems
of that TRNG. But the output itself is not used to characterize the
true random process which generates it.
The underlying reason for why such a TRNG is truly random, assuming
that it is working properly, is that it is based on quantum
randomness, such as that found in radioactive decay. The output is
truly random, not because it has the appearance of randomness, but
because it is generated by a truly random process. Therefore the
program at hand is to certify that the process is behaving as
advertised to within acceptable experimental errors.
For example, if you can measure the effects of detector deadtime, then
you can in principle determine the extent to which p is not precisely
1/2. If that error in the determination of p is acceptably small, then
you can feel reasonably certain that your TRNG will not leak enough
information to permit a cryptanalyst to decrypt your ciphers with
reasonable certainty.
Bob Knauer
If you think health care is expensive now, wait until it's FREE!
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Thought question: why do public ciphers use only simple ops like shift
and XOR?
Date: Wed, 21 Apr 1999 16:12:35 GMT
[EMAIL PROTECTED] (Terry Ritter) wrote, in part:
>On 18 Apr 99 02:05:37 GMT, in <[EMAIL PROTECTED]>, in sci.crypt
>[EMAIL PROTECTED] () wrote:
>>- And I think you can see why this design process actually _increases_ the
>>probability of a design which is strong against known attacks, but weak
>>against a future attack someone might discover.
>You lost me on that one.
When testing a computer system, sometimes a small number of known bugs are
deliberately introduced, so that, if not all of _those_ bugs are found, one
has an indication that testing should continue (on the assumption that a
similar proportion of the unknown bugs really being looked for have not
been found yet either).
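This bug-seeding idea is sometimes formalized as a capture-recapture
(Lincoln-Petersen) estimate; a minimal sketch, with made-up numbers:

```python
def estimate_total_bugs(seeded: int, seeded_found: int,
                        real_found: int) -> float:
    """Defect-seeding estimate: if testing caught seeded_found of the
    `seeded` planted bugs and real_found genuine ones, and testing is
    assumed equally likely to catch either kind, then the estimated
    total number of genuine bugs is real_found * seeded / seeded_found."""
    if seeded_found == 0:
        raise ValueError("no seeded bugs found; keep testing")
    return real_found * seeded / seeded_found
```

For example, finding 5 of 10 seeded bugs alongside 12 genuine ones
suggests roughly 24 genuine bugs in total, i.e. about 12 still
undiscovered — the same inference Savard applies to cryptanalytic
weaknesses below.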
What I was thinking of here is that the cryptanalyst will find what he
knows how to look for; and so, weaknesses beyond the reach of current
cryptanalysis won't be found; but if a cipher designed by a
non-cryptanalyst did not have a *single* known weakness (known to the
cryptanalysts, not to the designer) then one might have grounds to hope
(but, of course, not proof) that unknown weaknesses were scarce as well,
while getting rid of the known weaknesses _specifically_ doesn't give any
such hope.
John Savard ( teneerf<- )
http://members.xoom.com/quadibloc/index.html
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Thought question: why do public ciphers use only simple ops like shift
and XOR?
Date: Wed, 21 Apr 1999 16:21:01 GMT
[EMAIL PROTECTED] (Terry Ritter) wrote, in part:
>On 18 Apr 99 01:55:36 GMT, in <[EMAIL PROTECTED]>, in sci.crypt
>[EMAIL PROTECTED] () wrote:
>>Then it does make sense to look at the upper bound, because it's one of
>>the few indications we have.
>No. Completely false. I see no reason why the upper bound should
>have any correlation at all to the lower bound.
It will definitely be higher than the lower bound, but yes, it doesn't
prevent the lower bound from being low.
>In any security audit, we have to consider the worst case attacks, not
>just the ones we expect, and not just the ones we tried.
Any security audit will have to include a disclaimer that the true security
of the cipher systems used is essentially unknowable, but even real-world
financial audits do routinely include various sorts of disclaimer.
>>But it also makes sense - and here, I think,
>>we come closer to agreement - not to put too much faith in that upper
>>bound, and to add constructs of different types, and constructs that seem
>>like any mathematical tools to analyze them which would be useful for
>>cryptanalysts are *far* in advance of the state of current knowledge.
>I'm not sure I understand this fully.
Given that a cipher highly resistant to known attacks (i.e., differential
cryptanalysis) _could_ still be very weak, as far as we know, what can we
do about it? The closest thing to a sensible suggestion I can make is this:
make our ciphers stronger (that is, use more rounds) and more intrinsically
difficult to analyze (use complicated, highly nonlinear, constructs) than
the known attacks indicate is necessary.
John Savard ( teneerf<- )
http://members.xoom.com/quadibloc/index.html
------------------------------
From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Re: Radiation/Random Number question
Date: Wed, 21 Apr 1999 12:10:36 -0500
[EMAIL PROTECTED] wrote:
> You all know about how alphas from the epoxy was flipping bits
> in early Intel memories, right?
Yeah, I'm not so old that I've forgotten :-)
> Can you use an ac-coupled soundcard as your acquisition system?
Sure. You'd only have to make sure the soundcard is reasonably
shielded from the system noise and that's easy to measure. I suspect
it would work quite well.
Patience, persistence, truth,
Dr. mike
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************