Cryptography-Digest Digest #518, Volume #9        Sat, 8 May 99 16:13:03 EDT

Contents:
  Re: AES (Volker Hetzer)
  Re: Roulettes (Mok-Kong Shen)
  Re: True Randomness & The Law Of Large Numbers ([EMAIL PROTECTED])
  Re: Little Irish girl's algorithm? (Dan)
  Re: Pentium3 serial number is based on who you [server/exterior] claimed   to be 
(Vernon Schryver)
  Re: AES (Bruce Schneier)
  Re: AES (Bruce Schneier)
  Twofish performance question. (Paul Pires)
  Re: Twofish performance question. (Paul Rubin)
  Re: Twofish performance question. (Paul Pires)
  Re: Thought question: why do public ciphers use only simple ops like shift and XOR? 
(John Savard)
  Re: Factoring breakthrough? (wtshaw)
  Re: AES (Terry Ritter)

----------------------------------------------------------------------------

From: Volker Hetzer <[EMAIL PROTECTED]>
Subject: Re: AES
Date: Fri, 07 May 1999 14:44:52 +0200

Terry Ritter wrote:

> But the function of a cipher is to hide information from others.
> Simply using a cipher for a long time while not specifically knowing
> that it exposes secrets hardly means the secrets are secure.  We have
> no way to *see* whether or not hiding occurs; we have no way to see a
> cipher perform its function.  So we simply do not have the same sort
> of practical experience that we commonly call "proven."
But we have. Sort of.
Look at it like this:
You use an algorithm to protect some info with a certain value.
You believe you would have noticed if somebody other than you
uses that information (comparable products, evidence in courts, ...).
If you have used that algorithm to protect info of that certain value
often, and nobody has made use of captured data, you can conjecture that
people will continue not to use your data. This can have two reasons:
1: The effort to break your algo is higher than the value of your information.
2: The damage caused by breaking the algo (switch to better algos, embarrassment
   of people, governments) is bigger than the value of your information.
The result is the same in both cases: you can continue to use your algo.
It is a bit like evidence in court: if you can't show it, it's as good as
having none.
If your enemy can't (for whatever reason) use your data, it's safe to use
your algorithm. This is security too.

Greetings!
Volker

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Roulettes
Date: Fri, 07 May 1999 14:44:55 +0200

[EMAIL PROTECTED] wrote:
> 
> Random value try this...
> 
> Setup a RC4 array from 0..15 (not to 255), then perform the rng function
> (which produces the output) repeatedly (mod 16) until the user hits a key.
> The last output made will be the 'dice face'.

Mmm. You need at least a palmtop. Wouldn't dice be more handy
for you?
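[For concreteness, the quoted scheme might be sketched as below. This is a
hypothetical Python rendering, not code from the original poster; the key
and the iteration count are stand-ins (the post says to loop until the user
hits a key).]

```python
def rc4_16(key):
    """RC4 with a 16-entry state (0..15) instead of the usual 256,
    as described in the quoted post.  All arithmetic is mod 16."""
    S = list(range(16))
    j = 0
    for i in range(16):                      # key-scheduling, mod 16
        j = (j + S[i] + key[i % len(key)]) % 16
        S[i], S[j] = S[j], S[i]
    i = j = 0
    while True:                              # output generation, mod 16
        i = (i + 1) % 16
        j = (j + S[i]) % 16
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 16]

gen = rc4_16([3, 1, 4, 1, 5])                # stand-in key
out = None
for _ in range(1000):                        # stand-in for "until keypress"
    out = next(gen)
# out is the last keystream value, in 0..15 -- the quoted 'dice face'
```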

M. K. Shen

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Fri, 7 May 1999 12:47:02 GMT

Organization: DECUServe
Lines: 46

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (R. Knauer) 
writes:
> On Thu, 06 May 1999 04:14:20 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
> 
>>      3)  We observe the actual output and find that it does
>>          not have those statistical characteristics.
> 
> You did not take enough samples to make that determination. Claiming
> that you took 20,000 single bit samples is fallacious. In reality you
> took just one sample.

In reality it doesn't matter whether it's 20,000 single bit samples from
a perfect one bit generator or one 20,000 bit sample from a perfect
20,000 bit generator.

Either way, it's 20,000 bits.

Either way, those 20,000 bits have an aggregate bit bias.

Either way, the probability distribution for the set of all 2^20,000
possible 20,000 bit sample sequences is identical.

Either way, the probability of seeing a particular bias of ones over
zeroes in a 20,000 bit sample sequence is identical.

> If  you measure the speed of light with one sample consisting of
> 20,000 individual waves of the EM field, you cannot claim to have
> taken 20,000 samples of one wave.

If you assume that the 20,000 individual waves are independent
and identically distributed, you can claim to have taken 20,000
representative samples.

If you can't assume that then the analogy you were attempting to
establish fails to apply.

> You need to look into the "estimation of error in samples", which can
> be quite large for just one sample.

You need to actually calculate the probable error distribution for the
sample at hand.  20,000 single bit samples or one 20,000 bit sample.
Your choice.  Assume perfect generators.  I claim that the error
distribution for sample bit bias versus population bit bias is identical
either way.  Prove me wrong.  Show your work.
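[The claim can be illustrated numerically.  This is an editorial sketch, not
from the original post: under a perfect generator, both sampling schemes
draw the ones-count from the same Binomial(20000, 1/2) distribution, so the
bias statistic is distributed identically either way.]

```python
import random

random.seed(1)
N = 20000

# (a) 20,000 single-bit samples from a perfect one-bit generator
bias_a = sum(random.getrandbits(1) for _ in range(N)) - N // 2

# (b) one 20,000-bit sample from a perfect 20,000-bit generator
bias_b = bin(random.getrandbits(N)).count("1") - N // 2

# Either way the ones-count is Binomial(20000, 1/2): expected bias 0,
# standard deviation sqrt(N)/2, about 70.7 bits.
```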

        John Briggs                     [EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED] (Dan)
Subject: Re: Little Irish girl's algorithm?
Date: 8 May 1999 11:30:45 -0500

Yes, my apologies.  As was pointed out to me privately, I took the literal
meaning of "until now", which implies in English that a paper is now
available, when you presumably meant "so far".  Idioms never fail to
confuse.

Best Regards,

Dan

>Hi Dan,
>I said :
>    NO paper has been released. (as far as I know). Maybe you can forget this story.
>    Emmanuel
>
>> >Well, We can think this was a good (?) joke, since no paper has been released
>> >until now....
>> >Emmanuel
>
>> So there is a paper published after all?  Is there an online version or
>> available for download?
>> -Dan
>

------------------------------

From: [EMAIL PROTECTED] (Vernon Schryver)
Crossposted-To: alt.security
Subject: Re: Pentium3 serial number is based on who you [server/exterior] claimed   to 
be
Date: 8 May 1999 09:04:33 -0600


>> Paul Koning wrote:
>>> I think a more accurate statement would be "tamper-resistant software
>>> is non-existent".
>>> 
>>> The whole concept is utterly nonsensical.
>> 
>> What is the basis for your conclusion?

>Probably the idea that, what is too much effort today, will be a piece of
>cake tomorrow.


No, it is because the notion is silly salescritter nonsense intended to
bamboozle suckers.  First, tampering with software is variously called
"maintenance" and "debugging."  It is what programmers are paid to do most
of the time.  Software is called "soft" because it can be changed.  There
are plenty of good tools for finding the code in an existing program that
computes the value that it later puts on the network, particularly in the
x86 architecture since the 80386, whose debugging registers can be set to
trap when any value, or a particular value, is stored in a range of
addresses.  All except the most junior programmers are competent to find
and patch bugs in programs given only the binary (e.g. .exe or .dll) and
not the source.  All of us who have dealt with the enormous pile of bugs
that is WIN32 and who are not very junior have had to poke through .dll's
without source, using only the disassemblers that came with the C++
compilers we bought from Borland, Microsoft, or elsewhere.

Second, in this particular situation, the formula for computing the
hash of the HTTP server and the client's PIII serial number would be
public.  Not even Microsoft, after their current legal difficulties,
would go along with a plan that would keep all other HTTP client and
server software vendors out of the market by keeping the formula
secret, even if it were not trivial to reverse engineer it, or not
true that the most popular HTTP server software is neither Microsoft's
nor Netscape's.  (See http://apache.org).  Even if you are naive and
believe the Rainbow Technologies silliness that intentional bugs are
harder to fix than unintentional bugs, it would be easy to modify any
of the many proxies like Junkbuster to delete or modify the hash from
your browser.  See http://www.intermute.com/ http://www.falken.net/webfree/
and http://junkbuster.com/ijb.html, and in particular
http://www.junkbusters.com/cgi-bin/privacy for what Junkbuster does
to hide the name and version of your browser, or how it can generate
bogus cookies.

Consider the implications of the SOCKS protocol, which millions of
people have no alternative but to use to get through firewalls to the
Internet.  Every bit that their browsers send or receive is fondled by
their SOCKS proxies, and so can be adjusted to be anything the programmers
maintaining those proxies desire.  Easiest would be to have the proxies
claim not to be running on a PIII.
-- 


Vernon Schryver    [EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Re: AES
Date: Sat, 08 May 1999 15:58:43 GMT

On Thu, 06 May 1999 19:21:23 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:

>
>On Wed, 05 May 1999 14:13:47 GMT, in <[EMAIL PROTECTED]>,
>in sci.crypt [EMAIL PROTECTED] (Bruce Schneier) wrote:
>
>>On Wed, 05 May 1999 04:46:48 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:
>>
>>>
>>>On Tue, 04 May 1999 21:39:14 GMT, in
>>><[EMAIL PROTECTED]>, in sci.crypt
>>>[EMAIL PROTECTED] (Bruce Schneier) wrote:
>>>
>>>>On Fri, 30 Apr 1999 13:12:52 -0500, "Anthony King Ho"
>>>><[EMAIL PROTECTED]> wrote:
>>>
>>>>[...]
>>>>>and algorithm such as Triple-DES is proven
>>>>>to be very secure.
>>>>
>>>>Agreed.
>>>
>>>Sorry.  There is NO proof of strength for Triple-DES or any other
>>>cipher in cryptography. 
>>
>>Sorry.  I thought he meant "proven" in the vernacular, as in "this
>>meal has proven to be very tasty."  I agree that there is no
>>mathematical proof of the strength of triple-DES, which has
>nonetheless proven to be very secure (and tasty).
>
>It turns out to be difficult to justify such a statement even using a
>non-mathematical definition of "proof."  For example:
>
>We can say a race car is "proven" in practice:  We can see it run for
>some time at some speed and compare it to other entrants.  Basically a
>car is movement at speed and we can see that.  
>
>We can say a cipher program is "proven" in practice:  We can see it
>encipher data, produce junk, then recover the original data.  We can
>see whether or not the program crashes.  We thus see the program
>perform its functions.  
>
>But the function of a cipher is to hide information from others.
>Simply using a cipher for a long time while not specifically knowing
>that it exposes secrets hardly means the secrets are secure.  We have
>no way to *see* whether or not hiding occurs; we have no way to see a
>cipher perform its function.  So we simply do not have the same sort
>of practical experience that we commonly call "proven."  
>
>I would say that simply using a cipher for a long time -- and not
>specifically knowing it is broken -- does *not* constitute "proven
>security," by any definition of "proof."  

This has proven to be a very tiresome conversation.  I apologise if
you dislike our language.

Bruce
**********************************************************************
Bruce Schneier, President, Counterpane Systems     Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
           Free crypto newsletter.  See:  http://www.counterpane.com

------------------------------

From: [EMAIL PROTECTED] (Bruce Schneier)
Subject: Re: AES
Date: Sat, 08 May 1999 16:01:00 GMT

On Thu, 06 May 1999 23:47:38 GMT, William Hugh Murray
<[EMAIL PROTECTED]> wrote:

>wtshaw wrote:
>> DES has proven to be in these days, still a moderately useful cipher, some
>> semifirm ground; now the problem is to extract from it what is more solid
>> and what is more liquid: I maintain that the lessons of DES are not fully
>> learned, and we may still harvest something good out of it if we can trim
>> away at the rotten parts, which I am expressly trying to do to the
>> complaints of the multitudes.
>> --
>> What's HOT: Honesty, Openness, Truth
>> What's Not: FUD--fear, uncertainty, doubt
>
>Moderately useful?  After more than twenty years, the cost of attack as
>a function of the cost of encryption is exactly where it was when DES
>was announced.  

No, that's not true.  You're assuming that the cost to build a DES
encryption engine and the cost to build a DES breaking engine have
followed exactly the same "Moore's Law" curve, which they have not.

>I would suggest that that is a very useful cipher.  I
>would suggest that that is a timeless cipher.  Until that ratio begins
>to fall, I can continue to make good use of that cipher.  

The ratio has fallen and will probably continue to fall.

>I certainly can not use it in the way that it was used twenty years ago
>but there are, just as certainly, useful applications and safe modes. 

Only against some threat models.  Again, the ratio has changed.

>It will be a long time before we will know a fraction as much about an
>AES candidate as we know about DES. 

Agreed.

Bruce
**********************************************************************
Bruce Schneier, President, Counterpane Systems     Phone: 612-823-1098
101 E Minnehaha Parkway, Minneapolis, MN  55419      Fax: 612-823-1590
           Free crypto newsletter.  See:  http://www.counterpane.com

------------------------------

From: Paul Pires <[EMAIL PROTECTED]>
Subject: Twofish performance question.
Date: Sat, 08 May 1999 11:28:22 -0700

    I'm a little confused. I know very little about the science of
cryptography but decided to figure some of it out and ran smack into my
confusion.

    In checking Bruce Schneier's site I found performance specs on
Twofish for a Pentium. It says it runs at "18 clocks per byte". Is a
"clock" a processor cycle? Does that mean that a 180 MHz Pentium (if
there was one) would crank out 10 megabytes of ciphertext per second?

    Looking at the code and the stated method of using many rounds on a
128 bit block and since there appears to be operations performed on the
stuff each round, it looks to me like my simple understanding of a
"clock" might be wrong. Can someone set me straight?

Thank You,

Paul


------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: Twofish performance question.
Date: Sat, 8 May 1999 18:55:22 GMT

In article <[EMAIL PROTECTED]>, Paul Pires  <[EMAIL PROTECTED]> wrote:
>    In checking Bruce Schneier's site I found performance specs on
>Twofish for a Pentium. It says it runs at "18 Clocks per byte". Is a
>"Clock" a processor cycle? Does that mean that a 180 mhz Pentium (if
>there was one) would crank out 10 megabyte of ciphertext per second?

Basically that's correct.

>    Looking at the code and the stated method of using many rounds on a
>128 bit block and since there appears to be operations performed on the
>stuff each round, it looks to me like my simple understanding of a
>"clock" might be wrong. Can someone set me straight?

I don't understand your question.  Are you saying you're counting the
operations and there are too many to be running at 18 clocks/byte?
Keep in mind that:
  - the Twofish block is 16 bytes, so you get 16*18 = 288 clocks to 
    encrypt a block, at 18 clocks/byte
  - the Pentium is a dual pipeline processor, which means that if you
    write the code very carefully, you can do two machine instructions
    on almost every clock.  So a Twofish encryption might be able to
    use as many as 288*2 = 576 instructions, and still finish in 288 
    cycles (in practice you can't get total parallelism like that, but
    might be able to get 400-500 instructions done).
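[The arithmetic above can be checked with a short sketch.  The 180 MHz
clock rate is the hypothetical Pentium from the original question, not a
real benchmark figure.]

```python
clocks_per_byte = 18          # Twofish figure quoted from Schneier's site
block_bytes = 16              # Twofish block size, 128 bits
clock_hz = 180e6              # hypothetical 180 MHz Pentium

clocks_per_block = clocks_per_byte * block_bytes   # 288 clocks per block
bytes_per_second = clock_hz / clocks_per_byte      # 10,000,000 bytes/s

# At two instructions per clock on the Pentium's dual pipelines, the
# theoretical instruction budget per block is:
max_instructions = 2 * clocks_per_block            # 576 instructions
```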

------------------------------

From: Paul Pires <[EMAIL PROTECTED]>
Subject: Re: Twofish performance question.
Date: Sat, 08 May 1999 12:20:43 -0700

Paul Rubin wrote:

> In article <[EMAIL PROTECTED]>, Paul Pires  <[EMAIL PROTECTED]>
> wrote:
> >    In checking Bruce Schneier's site I found performance specs on
> >Twofish for a Pentium. It says it runs at "18 Clocks per byte". Is a
> >"Clock" a processor cycle? Does that mean that a 180 mhz Pentium (if
> >there was one) would crank out 10 megabyte of ciphertext per second?
>
> Basically that's correct.
>
> >    Looking at the code and the stated method of using many rounds on a
> >128 bit block and since there appears to be operations performed on the
> >stuff each round, it looks to me like my simple understanding of a
> >"clock" might be wrong. Can someone set me straight?
>
> I don't understand your question.  Are you saying you're counting the
> operations and there are too many to be running at 18 clocks/byte?
> Keep in mind that:
>   - the Twofish block is 16 bytes, so you get 16*18 = 288 clocks to
>     encrypt a block, at 18 clocks/byte
>   - the Pentium is a dual pipeline processor, which means that if you
>     write the code very carefully, you can do two machine instructions
>     on almost every clock.  So a Twofish encryption might be able to
>     use as many as 288*2 = 576 instructions, and still finish in 288
>     cycles (in practice you can't get total parallelism like that, but
>     might be able to get 400-500 instructions done).

   Thank you for your response. No, I'm not saying I understand the
twofish source implementation yet. I haven't counted yet. I haven't
figured out how yet. It was a real newbie question and before I rolled up
my sleeves and got to work educating myself, I just thought I should
check my assumptions about the definition of terms. Looks like it does
mean what it says, so now I can get busy figuring out how. My biggest
problem is that I have an interest in cryptography but not much
knowledge of programming or the optimization of such. I'll keep chunking
away at it.

Thank You,

Paul


------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Thought question: why do public ciphers use only simple ops like shift 
and XOR?
Date: Mon, 03 May 1999 17:31:14 GMT

[EMAIL PROTECTED] (Terry Ritter) wrote, in part:

>If
>users confine themselves to one piece of information per message (not
>continually recounting the past), this would seem to be superior to
>normal security compartmentalization.  

I would think that people using a cipher program in a normal civilian
environment, while they wouldn't repeat the past, would supply lots of
context and phrase their messages in a normal conversational style. Even in
the military environment, where users are made aware of the need to limit
context in enciphered messages, some leakage in this fashion is
unavoidable.

While I thought that Bryan Olson's objection was a legitimate one to the
cipher system he _thought_ you were advancing, I had thought you would have
already addressed this concern; essentially, multi-ciphering is what is
needed, including encipherment by a cipher from a small group that users
"know" to be secure as well as one from a larger pool of ciphers that
*seem* secure, and make problems for the cryptanalyst by their profusion.

It may also be noted that your plan has one major benefit: Kerckhoffs'
dictum notwithstanding, it is no longer true that the adversary knows what
algorithms have been used on a particular message. Ensure that a known
plaintext attack can only recover a session key, and not the key-generating
or key-exchange keys, and another whole avenue is closed.

John Savard ( teneerf<- )
http://members.xoom.com/quadibloc/index.html

------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: Factoring breakthrough?
Date: Sat, 08 May 1999 13:34:46 -0600

In article <7h19ac$aaj$[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
> 
> Analog gives infinite precision, but limited accuracy.
> Management of such dualisms is somewhat embarassing.
> 
Precision and accuracy are not necessarily related, as precision refers
to clustering of obtained readings, and accuracy references an obtained
value absolutely against the real one.  It is true that with deficient
methods you can have good precision but lousy accuracy; you can also
have an accurate average but terrible precision.
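[The distinction can be made concrete with a small numeric sketch; the
readings below are invented for illustration, with a true value of 10.]

```python
import statistics

true_value = 10.0

# Precise but inaccurate: readings cluster tightly, far from the truth.
a = [12.01, 12.02, 11.99, 12.00]

# Accurate on average but imprecise: readings scatter widely around the truth.
b = [6.0, 14.0, 8.0, 12.0]

bias_a = statistics.mean(a) - true_value   # about 2.0: poor accuracy
spread_a = statistics.stdev(a)             # about 0.013: good precision
bias_b = statistics.mean(b) - true_value   # 0.0: good accuracy
spread_b = statistics.stdev(b)             # about 3.65: poor precision
```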

Precision of an analog device depends largely on the observer, and might
be finer than the abilities of a digital instrument to quantify, or not.
Your first line is not necessarily true; it all depends on the
circumstances.

The essence of good management is in effective handling of impossible
situations.  The choice between analog and digital is not necessarily a
difficult one.

Since the group is sci.crypt, perhaps I should try to relate this to
something cryptological: If you can make an analyst happy with the
precision of his preliminary judgements about data while making them
inaccurate, you have him pretty well at your mercy.  This is the essence
of the value of laying a false trail, which is really a sneaky thing to do
in ciphertext.
-- 
What's HOT: Honesty, Openness, Truth
What's Not: FUD--fear, uncertainty, doubt  

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: AES
Date: Sat, 08 May 1999 19:37:49 GMT


On Sat, 08 May 1999 15:58:43 GMT, in <[EMAIL PROTECTED]>,
in sci.crypt [EMAIL PROTECTED] (Bruce Schneier) wrote:

>On Thu, 06 May 1999 19:21:23 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:
>
>>
>>On Wed, 05 May 1999 14:13:47 GMT, in <[EMAIL PROTECTED]>,
>>in sci.crypt [EMAIL PROTECTED] (Bruce Schneier) wrote:
>>
>>>On Wed, 05 May 1999 04:46:48 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:
>>>
>>>>
>>>>On Tue, 04 May 1999 21:39:14 GMT, in
>>>><[EMAIL PROTECTED]>, in sci.crypt
>>>>[EMAIL PROTECTED] (Bruce Schneier) wrote:
>>>>
>>>>>On Fri, 30 Apr 1999 13:12:52 -0500, "Anthony King Ho"
>>>>><[EMAIL PROTECTED]> wrote:
>>>>
>>>>>[...]
>>>>>>and algorithm such as Triple-DES is proven
>>>>>>to be very secure.
>>>>>
>>>>>Agreed.
>>>>
>>>>Sorry.  There is NO proof of strength for Triple-DES or any other
>>>>cipher in cryptography. 
>>>
>>>Sorry.  I thought he meant "proven" in the vernacular, as in "this
>>>meal has proven to be very tasty."  I agree that there is no
>>>mathematical proof of the strength of triple-DES, which has
>>nonetheless proven to be very secure (and tasty).
>>
>>It turns out to be difficult to justify such a statement even using a
>>non-mathematical definition of "proof."  For example:
>>
>>We can say a race car is "proven" in practice:  We can see it run for
>>some time at some speed and compare it to other entrants.  Basically a
>>car is movement at speed and we can see that.  
>>
>>We can say a cipher program is "proven" in practice:  We can see it
>>encipher data, produce junk, then recover the original data.  We can
>>see whether or not the program crashes.  We thus see the program
>>perform its functions.  
>>
>>But the function of a cipher is to hide information from others.
>>Simply using a cipher for a long time while not specifically knowing
>>that it exposes secrets hardly means the secrets are secure.  We have
>>no way to *see* whether or not hiding occurs; we have no way to see a
>>cipher perform its function.  So we simply do not have the same sort
>>of practical experience that we commonly call "proven."  
>>
>>I would say that simply using a cipher for a long time -- and not
>>specifically knowing it is broken -- does *not* constitute "proven
>>security," by any definition of "proof."  
>
>This has proven to be a very tiresome conversation.  I apologise if
>you dislike our language.

You already apologized.  I assumed you were going to let it drop.  

This is not an issue of mere words, about which to use when.  (Though
one might *well* think that a question in sci.crypt, to a technical
authority, ought to imply "proof" in the mathematical sense.)

The issue goes deeper than words: it goes to the perception of "proven
security" as a result of cryptanalysis or use.  Presumably it is the
years of cryptanalysis of DES which leads to this, and it is not just
a delusion of the general public but a perception which is repeatedly
affirmed by technical authorities (as happened here).  The term
"validation" has also been used.  But as far as I can tell, a logic of
cryptanalytic validation goes something like this:

1. Assuming academic cryptanalysis is the best possible,
2. if cryptanalysis has found no break, then
3. no break is possible.

No cryptanalyst will ever put this so baldly.  But we see in practice
the sequence: new cipher designs, subjected to academic cryptanalysis,
then generally approved for use, which seems to be an expression of
the above logic.  Cryptanalysts may say "We just did what we could."
But to the extent that the result is considered "proven secure" in any
sense at all (and that *is* the point of this tiresome conversation),
it sure looks like the above logic to me.  

If we were having an outbreak of former academic cryptanalysts
breaking ciphers and stealing information, we might just stretch a
point and say:

1. Assuming all academics are the same, 
2. if our academics can't find a problem, then
3. neither can the former academics.  

This at least has a modicum of science to it, because we are comparing
two similar groups.  But of course in reality individuals differ, and
their situations differ, so this isn't right either.  

In practice our ciphers confront opponents whose knowledge and
capabilities exceed the academic literature.  Just because academics
cannot find a break does not mean the opponents cannot.  It is a
*realistic* possibility that DES has been broken in secret from the
time it was designed and that we still do not know that.  And while we
might wish and hope to call this "improbable," that would be pasting
the illusion of scientific analysis on something which cannot (yet) be
quantified.  It is just such a quantification that "proven secure"
implies to me, and that is bad science.

I think people get so involved in the technical aspects of
cryptanalysis that they forget the logic of what this does or does not
prove.  Non-cryptanalysts generally *do* take this as a *validation*
process which produces ciphers of "proven security."  Cryptanalysts
are not speaking up about whether "validation" and "proof" are useful
terms for what they do, and that makes them part of the problem.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
