Cryptography-Digest Digest #424, Volume #9       Mon, 19 Apr 99 18:13:03 EDT

Contents:
  Why would IBM sniff?? (John L Singleton)
  AES R1 comments/papers available & my views (David Crick)
  Re: True Randomness & The Law Of Large Numbers (Mok-Kong Shen)
  help analyzing TEA and TEA-X ([EMAIL PROTECTED])
  Re: Thought question:  why do public ciphers use only simple ops like shift and XOR? 
(John Savard)
  Re: RC6 new key standard from AES conference? ("Richard Parker")
  Re: Question on confidence derived from cryptanalysis. (Terry Ritter)
  Re: RC6 new key standard from AES conference? ([EMAIL PROTECTED])
  Re: What is E0~E3 Algorithm.? (John Savard)
  Re: FSE-6 Report: Slide Attack ([EMAIL PROTECTED])
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")

----------------------------------------------------------------------------

From: John L Singleton <[EMAIL PROTECTED]>
Subject: Why would IBM sniff??
Date: Mon, 19 Apr 1999 12:15:16 -0400

I have seen many assertions that IBM, 3COM, and others have sniffers.
One question: WHY? I don't mean this in a mean way, but who would care,
other than the NSA? I see no reason for this. I agree that our gov't is
very f_cked up, but still... if someone can explain some reasoning for
this, I would enjoy that. Thanks.


*-------------------------*--------------------------------------------*
* John Lawrence Singleton *                [EMAIL PROTECTED] *
*  Computer Engineering   *               [EMAIL PROTECTED] *
*   University of Miami   *               [EMAIL PROTECTED] *
*    (305)-689-9850       *                      [EMAIL PROTECTED] *
*-------------------------*--------------------------------------------*



------------------------------

Date: Mon, 19 Apr 1999 21:52:42 +0100
From: David Crick <[EMAIL PROTECTED]>
Subject: AES R1 comments/papers available & my views

As promised, NIST have published "all" (electronic?) Round 1 comments
and papers on their web site:

      http://csrc.nist.gov/encryption/aes/round1/pubcmnts.htm

(I see they have removed code from the papers... presumably they
got a slapped wrist from NSA for their previous transgression? :))

Lots of good stuff to read and think about. I have some preliminary
observations. I'm sure there'll be more to follow, but I'm hoping to
provoke a discussion here :)

Firstly, something that pulls together lots of threads from various
articles.
There look to be about 5 (spooky, huh?) good all-round candidates
that should make it through to Round 2. One of them will hopefully
be selected as AES.
This causes concern among the multi-AES brigade. But (IP issues aside),
that may still leave us with 2 to 5 good algorithms that can be used
either (i) as a drop-in for AES, or (ii) as another possible algorithm
in a software package (e.g. PGP 5 upwards allows IDEA, CAST, 3DES and,
in 6.5 I believe, Twofish).
The fact that the AES process may/will provide us with a few very good,
(relatively) heavily analyzed new algorithms should be considered
a major plus, irrespective of all other issues.

Secondly, since the original submission of algorithms, many authors
(and third-parties) have proposed various tweaks, fixes and improvements
to the candidates. These range from fixing weaknesses to (fairly) major
redesigns which lead to significant performance/security improvements.
After the end of any project it's possible in hindsight to see how you
could have improved it with the knowledge now gained. Post-AES we may
see a whole new branch of algorithm design/breaking based on it (e.g.
from DFC and Rijndael/Square type ciphers).
In the short pre-AES term however, possible improvements would allow
stronger algorithms through to Round 2 and ultimately through to the
final.
It will be interesting to see what NIST take as "acceptable tweaks"
given that they've already hinted that key-scheduling and major round
redesigns won't be allowed. It would be a pity not to see some of the
new features through, but given the extremely short time between R2
code being published and the end of R2 (approx. 10 months) this is
probably a good thing.
But again, we may still end up with about 5 really strong algorithms,
and these (again) can be used alongside AES, but with enhancements.
Indeed AES itself could be used with enhancements, but presumably
wouldn't be AES then.
(Looking way into the future, it will be interesting to see NSA's
proposed tweaks prior to FIPS being granted. This is part of the
process.)

Mokros's Poker Test gives an interesting new angle, although not all
candidates were assessed with it.
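
For anyone who wants to experiment along these lines, below is a sketch of
the classic FIPS 140-1 poker test in Python. I'm assuming Mokros's variant
is something like this, since I haven't seen his exact formulation; the
statistic and bounds here are FIPS 140-1's.

```python
import random

def poker_test(bits):
    """FIPS 140-1 poker test: split 20,000 bits into 5,000 4-bit
    nibbles and check that the 16 nibble values occur with roughly
    equal frequency, via a chi-square-like statistic X."""
    assert len(bits) == 20000
    counts = [0] * 16
    for i in range(0, 20000, 4):
        nibble = (bits[i] << 3) | (bits[i+1] << 2) | (bits[i+2] << 1) | bits[i+3]
        counts[nibble] += 1
    x = (16 / 5000) * sum(c * c for c in counts) - 5000
    return 1.03 < x < 57.4, x

random.seed(1)
ok, x = poker_test([random.getrandbits(1) for _ in range(20000)])  # should pass
bad, x_bad = poker_test([0] * 20000)   # all-zero stream: X = 75000, fails
```

A uniform source gives X near 15 (the mean of a chi-square with 15 degrees
of freedom); the 1.03/57.4 bounds put the false-alarm rate at roughly one
in a million per tail.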

And finally, sci.crypt gets several name-checks, and deservedly so
in my opinion. Let's keep up its reputation as one of the best
discussion forums for AES.

   David.

-- 
+---------------------------------------------------------------------+
| David Crick  [EMAIL PROTECTED]  http://members.tripod.com/~vidcad/ |
| Damon Hill WC '96 Tribute: http://www.geocities.com/MotorCity/4236/ |
| Brundle Quotes Page: http://members.tripod.com/~vidcad/martin_b.htm |
| PGP Public Keys: (2048-bit RSA) 0x22D5C7A9 (4096 DH/DSS) 0x87C46DE1 |
+---------------------------------------------------------------------+

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Mon, 19 Apr 1999 20:21:22 +0200

R. Knauer wrote:
> 
> On Thu, 15 Apr 1999 21:00:17 +0200, Mok-Kong Shen
> <[EMAIL PROTECTED]> wrote:
> 
> >It is not to 'believe' or 'not believe', but to 'recognize' or else
> >with sound arguments to 'refute' the theories that the statisticians
> >have developed through centuries. I know only very little about
> >statistics but, as far as I am aware, the issue of sample size is
> >well taken into consideration by them. I am of the opinion that
> >only statisticians have the knowledge and hence are in a proper
> >position to judge whether the application of statistical tests is
> >correct or not. For a non-expert in the field, like you and me,
> 
> Please speak for yourself. You don't have a clue whether I am an
> expert or not in reality. Since you claim that you are not an expert,
> you do not have the capability to judge me in that regard.

Oh! On 14 April you wrote the following:

    In the first place I am not an "expert", therefore I do not 
    have any ....

I don't suppose you were cheating when you posted that sentence.
Or are you inviting others not to take your writings seriously?

M. K. Shen

------------------------------

From: [EMAIL PROTECTED]
Subject: help analyzing TEA and TEA-X
Date: Mon, 19 Apr 1999 18:29:17 GMT

I would like some insight here,


How can I exploit the two things I noticed about TEA?

1)  The bits of the key are always used in the same order, and thus
affect the plaintext in the same manner (probabilities?)...

2)  The confusion sequence is really short (2 rounds for TEA)
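
For reference, a minimal Python sketch of TEA, plus a demonstration of
where point (1) leads: because there is no key schedule, flipping the top
bit of both k0 and k1 cancels in the XOR, so such keys are equivalent and
TEA's effective key space is 2^126 rather than 2^128. (This is the
structural observation behind the Kelsey/Schneier/Wagner related-key
results on TEA.)

```python
def tea_encrypt(v, k, cycles=32):
    """TEA: 32 cycles (64 Feistel rounds).  v = (v0, v1), k = (k0..k3),
    all 32-bit words.  Note there is no key schedule: the raw key words
    enter every cycle in the same fixed positions."""
    v0, v1 = v
    k0, k1, k2, k3 = k
    delta, mask, s = 0x9e3779b9, 0xffffffff, 0
    for _ in range(cycles):
        s = (s + delta) & mask
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & mask
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & mask
    return v0, v1

# Equivalent keys: flipping bit 31 of both k0 and k1 flips the top bit
# of two of the three XORed terms, which cancels under XOR -- so these
# two keys encrypt every block identically.
p = (0x01234567, 0x89abcdef)
k = (0x11111111, 0x22222222, 0x33333333, 0x44444444)
k_equiv = (k[0] ^ 0x80000000, k[1] ^ 0x80000000, k[2], k[3])
assert tea_encrypt(p, k) == tea_encrypt(p, k_equiv)
```

Whether the fixed key positions buy you more than this equivalent-key
property is exactly the open question.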

Thanks,
Tom

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Thought question:  why do public ciphers use only simple ops like shift 
and XOR?
Date: Mon, 19 Apr 1999 20:15:32 GMT

[EMAIL PROTECTED] (Terry Ritter) wrote, in part:
>[EMAIL PROTECTED] (John Savard) wrote:

>>- Also, since there are many insecure cipher designs floating around, one
>>can't just accept that a cipher is secure based on its designer's say-so.
>>Instead, what gives real confidence in a cipher design is that it has been
>>studied by experts who have failed to crack it, but who have come away from
>>their attempts with an understanding of the source of the design's
>>strengths.

>I dispute this.  This is essentially what Schneier would have us
>believe, and it is false.

>The truth is that we *never* know the "real" strength of a cipher.  No
>matter how much review or cryptanalysis a cipher gets, we only have
>the latest "upper bound" for strength.  The lower bound is zero:  Any
>cipher can fail at any time.  

I agree with you that we don't have a way to prove that a cipher really is
strong. But cryptanalysis still gives the best confidence currently
available.

>It is not, frankly, the role of the innovator to educate the
>academics, or even to serve technology to them on a silver platter.
>In the end, academic reputation comes from reality, and the reality is
>that many crypto academics avoid anything new which does not have an
>academic source.  The consequence is that they simply do not have the
>background to judge really new designs.  

That is true: the desires of the academic community aren't a valid excuse
for compromising one's cipher designs.

>Upon encountering a new design, anyone may choose to simplify that
>design and then report results from that simplification.  This is done
>all the time.  It is not necessary for an innovator to make a
>simplified design for this purpose.  

And that is one of the reasons why.

>On the other hand, I have been pioneering the use of scalable
>technology which, presumably, can be scaled down to a level which can
>be investigated experimentally.  The last I heard, experimentation was
>still considered a rational basis for the understanding of reality.
>Indeed, one might argue that in the absence of theoretical strength
>for *any* cipher, experimentation is about all we have.  But note how
>little of it we see.  

Are you drawing a distinction between "experimental investigation" and
"cryptanalysis"? If so, it would appear you are saying that there is an
additional method for obtaining some additional, though still imperfect,
confidence in a cipher design.

>>Plus, the risk that one's adversary is a hacker of the future with a very
>>powerful desktop computer seems much greater than the risk that one's
>>adversary will be an accomplished cryptanalyst, able to exploit the most
>>subtle flaws in an over-elaborate design.

>But we don't know our Opponents!  If we have to estimate their
>capabilities, I think we are necessarily forced into assuming that
>they are more experienced, better equipped, have more time, are better
>motivated, and -- yes -- are even smarter than we are.   There is
>ample opportunity for them to exploit attacks of which we have no
>inkling at all.  

Most cipher users are more worried about their communications being read by
the typical computer hacker than by the NSA.

I suppose it's possible that one day a giant EFT heist will be pulled off
by retired NSA personnel, but that's the sort of thing which happens far
more often as the plot for a movie than in real life.

The problem is, of course, that if one has data that should remain secret
for 100 years, one does have to face advances in cryptanalytic
knowledge...as well as _unimaginable_ advances in computer power.

>>I believe it to be possible and useful to develop a design methodology -
>>mainly involving the cutting and pasting of pieces from proven cipher
>>designs - to enable a reasonably qualified person who, however, falls short
>>of being a full-fleged cryptographer, to design his own block cipher, and
>>thereby obtain additional and significant benefits in resistance to
>>cryptanalytic attack by having an unknown and unique algorithm.

>And in this way we can have hundreds or thousands of different
>ciphers, with more on the way all the time.  That means that we can
>divide the worth of our information into many different ciphers, so
>that if any one fails, only a fraction of messages are exposed.  It
>also means that *any* Opponent must keep up with new ciphers and
>analyze and possibly break each, then design a program, or build new
>hardware to exploit it.  We can make good new ciphers cheaper than
>they can possibly be broken.  The result is that our Opponents must
>invest far more to get far less, and this advantage does not depend
>upon the delusion of strength which is all that cryptanalysis can
>provide.

>>I don't deny that there are pitfalls looming in such an approach; if
>>something is left out of the methodology, or if it isn't conscientiously
>>used, people could easily wind up using weak designs and having a false
>>sense of security. I just think the problems can be addressed, and the
>>potential benefits are worth the attempt.

>Neat.

And of course, I must confess that my present efforts in this direction
have not gotten to the point of providing an explicit "toolkit". I've
contented myself with explaining, in my web site, a large number of
historical designs - with a very limited discussion of cryptanalysis - and
I've illustrated how an amateur might design a cipher only by example, with
the ciphers of my Quadibloc series, as well as various ideas in the
conclusions sections of the first four chapters.

Right now, although my web site is educational, it's also fairly light and
entertaining as well: I haven't tried to trouble the reader with any
difficult math, for example.

John Savard ( teenerf<- )
http://members.xoom.com/quadibloc/index.html

------------------------------

From: "Richard Parker" <[EMAIL PROTECTED]>
Subject: Re: RC6 new key standard from AES conference?
Date: Mon, 19 Apr 1999 18:44:37 GMT

> I've heard mention a couple times of a revised key schedule version of RC6
> that Rivest discussed at the rump session of the last AES conference...
>
> Anyone have, or have a link to, any documentation for this?

RSA's Round 1 comments include an appendix which describes RC6a; you
can find it at the following URL:

<http://csrc.nist.gov/encryption/aes/round1/comments/990414-mrobshaw.pdf>

-Richard

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Question on confidence derived from cryptanalysis.
Date: Mon, 19 Apr 1999 19:05:13 GMT


On Mon, 19 Apr 1999 13:37:32 -0400, in <[EMAIL PROTECTED]>,
in sci.crypt Geoff Thorpe <[EMAIL PROTECTED]> wrote:

>[...]
>Agreed - it would be extraordinarily naive to dispute that fact. The point
>I was trying to make was that the collective academic grunt (and other
>"in the open" contributors) we have in cryptography and cryptology does
>not (or rather, can not) pale so completely by comparison to "the enemy"
>that our research and results give no indication of a cipher's
>susceptibility to "theirs". Mr Ritter seemed to have a very different and
>quite extreme view on this point. However, I get the impression you tend
>to agree - if we can't punch a hole in it, that lowers the odds that
>they can (as compared to not having really seen if WE can yet).

If you really want to bait Mr. Ritter, I'll go one round with you:

I have just previously covered my main argument, which is basically
that IN MY OPINION, with a single standard cipher, there will be far
too much value at risk to endure even a small possibility of
single-cipher failure.  I note that the POSSIBILITY of such failure is
fact, not opinion.  The opinion part of this is my judgment of the
costs and consequences of failure, versus the additional cost of
protecting against such failure.  

My position is that the consequences of failure of a universal
single-cipher system would be catastrophic, and that even a small
probability of such failure is unacceptable.  This means we cannot
depend on any single cipher, no matter how well reviewed.  We can
reduce the probability of single-cipher failure, and reduce also the
value of information at risk from any failure, by changing what we
consider a cipher system to be.  

What I call the cul-de-sac extension of this argument is the question
of just *how* small the probability of failure is.  

1)  I dispute the idea that by applying various attacks to a cipher we
somehow can predict how it will perform on future unknown and
potentially unrelated attacks.  (And if this were true, we should be
able to see the effect with respect to past ciphers.  This should be
measurable and quantifiable in a scientific sense.  But we have no
such reports.)  

2)  I dispute the idea that by looking at the attacks we have we can
somehow estimate the probability that unknown attacks exist.  (Again,
were this true, we should have scientific evidence to support it.  But
we do not.)

3)  I dispute that we can estimate the capabilities of our Opponents
from the capabilities we see in academics or that we can extrapolate
from our open experience to predict the capabilities of our Opponents.

(Alas, there is no evidence to be had here.)

In summary: 1) We cannot estimate the probability that an effective
attack exists which we did not find; and  2) We cannot estimate the
probability that even if such an attack does exist, our Opponents can
find it and use it.  I thus claim that we CAN know nothing of the
probability of future cipher failure, and cannot even reason that this
probability is "small."  The practical consequence of this is that we
cannot trust any cipher.  

IF we were willing to assume that our Opponents would use only the
attacks we know and have tried, presumably we *could* have insight
into the amount of effort needed to break a cipher (although we might
have screwed up in testing).  But I am of the opinion that we cannot
assume that our Opponents have our limitations.  Indeed, I think this
is very basic cryptography.  


>[...]  I was just taking issue with what I perceived to
>be the following idea: Until we actually break it, or *prove* it secure,
>we have no more measure of strength for it than for another (less
>"investigated") one. I feel that quite the opposite is true - it IS a
>very appropriate statistical measure of strength, and moreover the only
>realistic one we have to work with. If the total state of the art stays
>roughly in sync with the academics, albeit "they" may have a couple of
>things up their sleeves and they may often get the jump on us by a few
>months/years with various developments, then we can make reasoned
>guestimations on the strength of a cipher against them based on the
>strength of a cipher against us.

And upon what evidence do you base your opinion that we *can* predict
what our Opponents can do?  

Do you even have evidence that we can predict what *our* guys can do?


>[...]
>> Some problems, like efficient factoring, are obviously
>> relevant, and unlikely to be achieved in secret without it
>> also happening on the outside around the same time.  Other
>
>I agree but I doubt very much Mr Ritter does.

The idea that *any* cipher *may* have an effective attack is fact, not
opinion.  The only opinion here is whether the issue is worth
addressing.  

Presumably, you would handwave about what our Opponents can do both
now and in the future and say that caution is silly.  But that
conclusion is based on your opinion that we can predict what others
may do in the future, which I find very strange.  If that were true in
general, we could put criminals in jail before they did anything.


>> breakthroughs have been kept secret for decades, in some
>> cases.  So there really is reason to fear that the most
>> advanced "enemies" might know how to easily crack some
>> system you use that appears uncrackable to all outsiders.
>
>I know - and there's a lot of targets out there so the odds are on that
>at least one of them has fallen completely to an "unpublished" source
>without our knowing it. However, I just think it's more likely to be
>something less well analysed in the "open" than something well analysed
>in the "open" for the reasons I've mentioned, and that Mr Ritter doesn't
>agree with.

Mr. Ritter has always recommended that we get as much cryptanalysis as
we can.  But he also points out that this is an open-ended process
which in any case must be terminated to have a product.  So our
cryptanalysis can never be complete.

With respect to the problem of potential catastrophic failure from a
single-cipher system, no amount of cryptanalysis can prevent such
failure.  Both untested ciphers and massively-tested ciphers are the
same in the sense that neither can be trusted.  


>[...]
>> > Me, I'm going to stick with RSA and triple-DES for a while.
>> 
>> In a well-designed cryptosystem, these do seem sufficiently
>> secure against realistic threats for the near future.  Any
>> vulnerabilities would most likely occur elsewhere in the
>> system/protocols, not in these encryption algorithms as such
>> (assuming of course a long RSA key, and 168-bit 3DES key).
>
>I think that too, but as Mr Ritter might say - you are already in the
>abyss and are naive if you think that. If that is so, I am comfortable
>in my naivety.

Mr. Ritter would say that you are vulnerable to a single-cipher
failure.  And as long as the problem is just you, we really don't
care.  But if the problem eventually becomes the whole society pretty
much using the same cipher, we may care, yet be well past the time to
do much about it.  


>[...]
>Exactly, and if I resort to using a different cipher every week ... the
>cryptanalysts will not keep up with them satisfactorily and I have a lot
>more confidence that "they" WILL be breaking my traffic on a
>semi-regular basis.

The whole point of that particular approach is that cryptanalysts will
not keep up.  In particular, the other side will not keep up, and
those are the guys we have to worry about.  

It should be possible for a true cipher designer to use various
alternatives to achieve a similar result, thus mixing and matching and
producing various different ciphers with similar supposed strength,
whatever that may be.  We cannot hope to know that strength by
cryptanalysis, of course.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: RC6 new key standard from AES conference?
Date: Mon, 19 Apr 1999 18:58:21 GMT


> I think it is important that the AES perform reasonably well on smartcards.

But essentially any algorithm can be implemented on a smartcard: just
build a custom ASIC for it instead of using a 6805 or 8051 (why would
you use an 8051 anyway?)

Tom


------------------------------

From: [EMAIL PROTECTED] (John Savard)
Crossposted-To: alt.security,comp.networks,comp.security,comp.security.misc
Subject: Re: What is E0~E3 Algorithm.?
Date: Mon, 19 Apr 1999 22:01:10 GMT

"Kim. Jae-Yong" <[EMAIL PROTECTED]> wrote, in part:

>I saw that the RF network called 'Bluetooth' uses the E0, E1, E2, and E3
>algorithms as its security algorithms.
>I want to know about these algorithms and looked in the book Applied
>Cryptography by Schneier, but I can't find them there.
>I want to know about those algorithms; in fact, I must.
>Please let me know about them.
>Your minute of typing can help me a great deal.
>Thanks in advance.

An algorithm called E3 is one of the candidates for the Advanced Encryption
Standard, and its description is available on the web: visit my web site,
look at the section on the AES, and, while I don't have a description of
E3, there is a pointer in the initial section to another page which has
links to all the descriptions.

However, that may not be the E3 algorithm you're looking for.

John Savard ( teenerf<- )
http://members.xoom.com/quadibloc/index.html

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: FSE-6 Report: Slide Attack
Date: Mon, 19 Apr 1999 21:04:45 GMT


> Well, known-plaintext attacks are sometimes possible 'in the field', and if
> one has enough known plaintext, the plaintext one might wish to choose may
> already happen to be among the known plaintexts.
>
> And against some devices, such as smartcards, or against a utility where
> you are one of the legitimate users, but you want to eavesdrop on other
> users, chosen-plaintext attacks are possible.
>

So I encrypt a file with a program and send the file; where does this
attack come in?

I am a secure modem: I encrypt data the user gives me and send it, and
I also receive and decrypt with another key.  Where does this attack
come in?
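
To make the sliding idea itself concrete: the slide attack needs only
known plaintexts. The attacker collects pairs until two happen to be
"slid" by one round, i.e. P' = F(P, k); because every round is identical,
C' = F(C, k) follows, and a single round equation can leak the key. A toy
sketch (this round function is invented purely for illustration; slide
attacks on real ciphers are more involved):

```python
def round_fn(x, k):
    """One toy 16-bit round: add the key, rotate left by 3."""
    x = (x + k) & 0xffff
    return ((x << 3) | (x >> 13)) & 0xffff

def encrypt(p, k, rounds=16):
    """Deliberately slide-prone: the SAME round key every round."""
    for _ in range(rounds):
        p = round_fn(p, k)
    return p

def recover_key(p, p_slid):
    """Given a slid plaintext pair P' = round_fn(P, k), undo the
    rotation and solve the one round equation for k."""
    x = ((p_slid >> 3) | (p_slid << 13)) & 0xffff
    return (x - p) & 0xffff

k, p = 0xbeef, 0x1234
p_slid = round_fn(p, k)                  # a pair "slid" by one round
c, c_slid = encrypt(p, k), encrypt(p_slid, k)
assert c_slid == round_fn(c, k)          # slid relation survives all rounds
assert recover_key(p, p_slid) == k       # one round equation leaks the key
```

In a real attack you don't know which pair is slid: you collect on the
order of 2^(n/2) known plaintexts, guess candidate slid pairs, and keep
only key guesses consistent with both the plaintext and ciphertext
equations. So neither your file-encryption nor your secure-modem scenario
is safe merely because the attacker can't choose inputs.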

Thanks,
Tom


------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Mon, 19 Apr 1999 22:00:55 GMT

"R. Knauer" wrote:
> The only surprising thing I find is how some people can trust
> simplistic small sample statistical tests as a means to determine true
> non-randomness with reasonable certainty, as if something so simple as
> 1-bit bias in a single sample somehow tells them everything they need
> to know about the underlying process.

If I buy a bit-stream generator that has been advertised as generating
a "truly random" (uniform equiprobable) bit stream, and the acceptance
test shows the likelihood of its meeting its advertised specification
is less than 1 in 1,000,000, I am justified in rejecting it and finding
another vendor.
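
In concrete terms (using a simple ones-count bias statistic and a normal
approximation, since the post doesn't specify which acceptance test is
run):

```python
import math

def fair_bit_pvalue(ones, n):
    """Two-sided p-value (normal approximation) for observing `ones`
    one-bits among n bits from a supposedly fair source."""
    z = (2 * ones - n) / math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

def accept(ones, n, alpha=1e-6):
    """Reject the generator if a fair source would show this much
    bias with probability below 1 in 1,000,000."""
    return fair_bit_pvalue(ones, n) >= alpha

assert accept(5100, 10000)       # a 2-sigma deviation: keep it
assert not accept(5500, 10000)   # a 10-sigma deviation: back to the vendor
```

Note the sample size enters through sqrt(n): the same 1% bias that is
unremarkable in 10,000 bits becomes damning in 10,000,000.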

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
