Cryptography-Digest Digest #107, Volume #13 Mon, 6 Nov 00 06:13:00 EST
Contents:
Re: Brute force against DES ("John A. Malley")
Re: Brute force against DES (Francois Grieu)
Re: Q: Computations in a Galois Field ("kihdip")
Re: Brute force against DES (Francois Grieu)
Re: Brute force against DES ("John A.Malley")
Re: Generalized Playfair (Mok-Kong Shen)
Re: On obtaining randomness (Mok-Kong Shen)
Re: hardware RNG's (Tim Tyler)
Re: CHAP security hole question (Vernon Schryver)
Re: BENNY AND THE MTB? (Tim Tyler)
----------------------------------------------------------------------------
From: "John A. Malley" <[EMAIL PROTECTED]>
Subject: Re: Brute force against DES
Date: Sun, 05 Nov 2000 22:16:43 -0800
David Wagner wrote:
>
> John A. Malley wrote:
> >If the plaintext block of eight, 8-bit bytes consists of 8 different
> >values - b1, b2, b3, b4, b5, b6, b7, b8 - then when checking the
> >candidate plaintext resulting from trial decryption with candidate key K
> >- d1, d2, d3, d4, d5, d6, d7, d8 - check first that b1 == d1. If not,
> >throw away this key.
>
> But this is a very weak test: Over 89% of all incorrect key guesses
> will survive this test.
>
> Also, it requires knowing something about the byte-repetition patterns
> in the plaintext, a questionable assumption. (Otherwise, you might discard
> the correct key value.)
How did we arrive at such different results?
Here's my reasoning.
First, the plaintext is known (per the OP.)
So the exact 8 bit byte values expected to appear in the ciphertext
deciphered with the right key are known.
I assumed decrypted ciphertext produced by incorrect keys would look
random. Each 8-bit byte in the 8 bytes of the decrypted output would
take on every possible value from 0 to 255 equiprobably.
If the known plaintext has 8 distinct 8-bit byte values, then the
probability of a false alarm (the correct known value appearing in the
first byte checked but the key is not correct) is (let's say) 1/256. And
the probability of two false alarms in a row (the next byte checked also
matches the value expected although the key is wrong) is (1/256)^2, and so
on. So the algorithm would find 255/256 of the keys are discarded after
just one byte check. 1 out of 256 times it must check the next byte.
And 255/256 of those second checks it throws away the key without
further checking. So 1/256^2 keys end up with the third byte
checked...and so on.
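As a quick sanity check of that model (a minimal sketch, assuming wrong-key
decryptions behave as 8 uniformly random bytes, as above):

  # Expected byte compares per wrong key: byte i+1 is only examined if the
  # first i bytes all produced false alarms, each with probability 1/256.
  expected = sum((1/256) ** i for i in range(8))
  print(expected)          # ~1.0039: ~255/256 of wrong keys die on byte 1
  # Fraction of wrong keys surviving to the k-th byte check:
  for k in range(1, 9):
      print(k, (1/256) ** (k - 1))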
This was just a quick mental model approximation - actually there are
only 2^56 keys to check, not 2^64.
Where did I go wrong?
John A. Malley
[EMAIL PROTECTED]
------------------------------
From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: Brute force against DES
Date: Mon, 06 Nov 2000 08:44:14 +0100
"John A. Malley" <[EMAIL PROTECTED]> wrote:
> The less redundant the plaintext, the faster the comparison between
> known plaintext and candidate plaintext resulting from trial decryption
> of the ciphertext with a candidate key.
True only if the alignment of the candidate plaintext with the deciphered
block(s) is unknown (as is the case when the plaintext is NOT known
exactly); but not so for a known plaintext/ciphertext pair, which I think
is the question asked, and definitely the context of the RSA Security DES
Challenges.
Carefully check the algorithms you gave and you'll come to the conclusion
that the number of compares, and more importantly the number of bits of
plaintext to be computed, is unchanged by redundancy in the plaintext.
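A small simulation makes the same point (an illustration only, with random
bytes standing in for wrong-key trial decryptions, not real DES): the
early-abort compare costs about 1.004 byte checks per wrong key whether
the known block is highly redundant or not.

  import random

  def avg_compares(known, trials=200_000, seed=1):
      rng = random.Random(seed)
      total = 0
      for _ in range(trials):
          block = bytes(rng.randrange(256) for _ in range(len(known)))
          for k, c in zip(known, block):
              total += 1
              if k != c:
                  break
      return total / trials

  print(avg_compares(bytes(range(8))))     # eight distinct known bytes
  print(avg_compares(bytes([0x41] * 8)))   # eight identical known bytes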
Francois Grieu
------------------------------
From: "kihdip" <[EMAIL PROTECTED]>
Subject: Re: Q: Computations in a Galois Field
Date: Mon, 6 Nov 2000 09:05:36 +0100
I don't mean to be rude (or get the discussion off track),
but the notation GF(2^m) is the correct notation.
Try Ritter's glossary:
http://www.io.com/~ritter/GLOSSARY.HTM#GF2n
Kim
------------------------------
From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: Brute force against DES
Date: Mon, 06 Nov 2000 09:12:23 +0100
"John A. Malley" <[EMAIL PROTECTED]> wrote:
> Just a thought - if the first byte of 4 different decipherments is
> tested in one 32 bit value and none of the four bytes match, then a
> single test rules out 4 keys at once. If one matches, then 3 keys
> are ruled out at once, and so on.
Yes, this optimisation works, but only a little, and only on processors,
not on dedicated hardware.
In practice, the best algorithms on modern processors work on something
like 64 keys at a time, using a reduction of the S-boxes to boolean
functions, and starting from both ends of the DES, doing a few enciphering
rounds for the known plaintext and a few deciphering rounds for the known
ciphertext. The compare starts somewhere in the middle of the algorithm
after only 15 rounds have been performed, and only partly for the last
few rounds. When testing 64 keys simultaneously, it is almost impossible
that the first bit computed from both sides rules out all 64 keys tested,
so something like 4 bits are blindly computed from both sides, and then it
is checked whether at least one of the 64 keys being tested survives these
4 compares; with sizable probability, this rules out all 64 keys; in the
other case, one more bit is computed and tested, and so on until all 64
bits have been checked.
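The early-abort bookkeeping for such a bitsliced search might look roughly
like this (a sketch only; the bit planes are stand-ins for what the partial
DES rounds would actually produce):

  MASK64 = (1 << 64) - 1

  def keys_surviving(bit_planes, known_bits):
      """bit_planes[i] packs bit i of the 64 trial computations, one key
      per bit position; known_bits[i] is the corresponding bit (0 or 1)
      expected from the known plaintext/ciphertext pair."""
      alive = MASK64                       # all 64 candidate keys alive
      for plane, bit in zip(bit_planes, known_bits):
          target = MASK64 if bit else 0
          alive &= ~(plane ^ target) & MASK64  # keep keys whose bit matches
          if alive == 0:                   # the usual case after ~4 bits
              break
      return alive

  # Typical use: blindly compute ~4 bit planes from both ends, then call
  # keys_surviving(); if it returns 0, move on to the next batch of 64
  # keys, otherwise compute and test one more bit at a time.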
Francois Grieu
------------------------------
From: "John A.Malley" <[EMAIL PROTECTED]>
Subject: Re: Brute force against DES
Date: Mon, 06 Nov 2000 00:33:38 -0800
Francois Grieu wrote:
>
> "John A. Malley" <[EMAIL PROTECTED]> wrote:
>
> > The less redundant the plaintext, the faster the comparison between
> > known plaintext and candidate plaintext resulting from trial decryption
> > of the ciphertext with a candidate key.
>
> True only if the alignment of the candidate plaintext with the deciphered
> block(s) is unknown (as is the case when the plaintext is NOT known
> exactly); but not so for a known plaintext/ciphertext pair, which I think
> is the question asked, and definitely the context of the RSA Security DES
> Challenges.
> Carefully check the algorithms you gave and you'll come to the conclusion
> that the number of compares, and more importantly the number of bits of
> plaintext to be computed, is unchanged by redundancy in the plaintext.
>
Shoot, totally missed that.
Yes, I see it now that I look over the description again. The number of
compares needed to verify that the candidate plaintext is a complete match
to the known plaintext, following that algorithm, is unchanged by
redundancy in the known plaintext.
Thanks.
John A. Malley
[EMAIL PROTECTED]
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Generalized Playfair
Date: Mon, 06 Nov 2000 10:11:55 +0100
"r.e.s." schrieb:
>
> "Mok-Kong Shen" <[EMAIL PROTECTED]> wrote ...
> | "r.e.s." wrote:
> | >
> | [snip]
> | >
> | > (4) Encipher one group at a time, treating it as a circular
> | > sequence of overlapping letter-pairs (e.g. for M=3, "abc"
> | > is seen as ab-bc-ca). Each letter of a group is enciphered
> | > as the first half of an ordinary Playfair digraph of which
> | > it is the first letter, while cycling through the M squares
> | > in succession.
> |
> | Question: You are using multiple matrices, i.e. doing
> | sort of 'polyalphabetical substitution' (in contrast to
> | monoalphabetical substitution'). How does your method
> | compare to encrypt in the example above ab with the
> | first matrix as conventional Playfair to Ab' then b'c
> | with the second matrix again as conventional Playfair
> | to Bc', etc.?
>
> What do you mean by "how does [it] compare", and
> in which example, specifically?
>
> How would you wish to compare ordinary Playfair
> (one square, i.e., d=2, M=1 in the general method),
> to the case, say, d=2, M=3 (three squares)?
I mean, if you use three matrices with your overlapping
method to encrypt, and I use the same matrices in the normal
way of Playfair (applying them round-robin, first
processing the first and second character with the first
matrix, then the second (encrypted) and third character
with the second matrix, and so on), which scheme do you
think is better (more secure)?
I guess that's difficult to say. I think that you
have indeed shown an interesting new way of performing
substitution that merits further study, though I am
personally still of the opinion that generalization
to higher dimensions (the last part of your original
post) is not likely to bring much benefit in
terms of security (considering also the complexity of
implementation).
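For concreteness, here is a rough sketch of the two variants for d=2, M=3.
The keyword squares, the I/J merge, and the treatment of doubled letters
and of a group's last letter in the chained variant are illustrative
assumptions of mine, not part of either proposal:

  def make_square(keyword):
      seen, cells = set(), []
      for ch in (keyword + "ABCDEFGHIKLMNOPQRSTUVWXYZ").upper():
          ch = "I" if ch == "J" else ch
          if ch not in seen:
              seen.add(ch)
              cells.append(ch)
      return cells                          # 25 letters, row-major 5x5

  def encipher_pair(sq, a, b):              # ordinary Playfair digraph rule
      ra, ca = divmod(sq.index(a), 5)
      rb, cb = divmod(sq.index(b), 5)
      if ra == rb:                          # same row: letters to the right
          return sq[ra*5 + (ca+1) % 5], sq[rb*5 + (cb+1) % 5]
      if ca == cb:                          # same column: letters below
          return sq[((ra+1) % 5)*5 + ca], sq[((rb+1) % 5)*5 + cb]
      return sq[ra*5 + cb], sq[rb*5 + ca]   # rectangle rule

  def encipher_overlapping(group, squares):
      """r.e.s.'s variant: 'abc' -> pairs ab, bc, ca; keep the first letter
      of each digraph, cycling through the M squares."""
      m = len(group)
      return "".join(
          encipher_pair(squares[i % len(squares)],
                        group[i], group[(i + 1) % m])[0]
          for i in range(m))

  def encipher_chained(group, squares):
      """The round-robin alternative asked about: ab -> Ab' with square 1,
      then b'c -> Bc' with square 2 (shown for a 3-letter group)."""
      a, b, c = group
      A, b2 = encipher_pair(squares[0], a, b)
      B, c2 = encipher_pair(squares[1], b2, c)
      return A + B + c2

  squares = [make_square(k) for k in ("FIRSTKEY", "SECONDKEY", "THIRDKEY")]
  print(encipher_overlapping("ABC", squares), encipher_chained("ABC", squares))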
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: On obtaining randomness
Date: Mon, 06 Nov 2000 10:11:46 +0100
Richard Heathfield wrote:
>
> I don't know what you mean by "biting the stone". NB I'm not arguing
> about your conclusion, merely your premise - the monkeys thing.
>
> Take a typical sonnet (shorter than the average soliloquy!) - say,
> Shakespeare's Sonnet CXL. The sonnet form has 14 lines, and this
> particular example has about 30-35 alphabetic characters per line. Let's
> call it 30.
>
> So that's around 420 characters, giving us 26^420 possible combinations
> of those letters (if we are prepared to do a little light copy-editing
> on behalf of our monkeys - otherwise, the numbers get a lot sillier a
> lot quicker). If a million monkeys were bashing away on specially
> adapted typewriters at 10 characters per second each, they would get
> through about 3e14 'random' characters per year, which gives them
> approximately 7e11 attempts at a sonnet. But they would need, on
> average, around 1e594 attempts to produce this particular sonnet. I
> don't know how many sonnets the British library has, but let's say it's
> a million (I suspect it's actually fewer than this). So we're looking at
> 1e588 attempts, on average, to produce some sonnet or other. But they
> can only make 7e11 per year, so it's going to take them, on average,
> 2.77e576 years (if you increase the number of monkeys, you can bring the
> time down, of course, but there's a limit to the number of monkeys you
> can fit on the planet) just to produce one sonnet, let alone all the
> great works. (For the record, the sun is expected to last around another
> 7e9 years.)
>
> This argument has been raging for over a hundred years and it's high
> time it was shot down in flames for the specious claptrap it really is.
> Nothing personal.
>
> And I still have no quarrel with your conclusion. Just your argument.
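(As a quick check of the arithmetic quoted above; the monkey and sonnet
figures below are taken directly from that estimate:)

  from math import log10

  chars_per_sonnet = 420
  log_combos = chars_per_sonnet * log10(26)             # ~594.3 -> ~1e594
  chars_per_year = 1_000_000 * 10 * 365.25 * 24 * 3600  # 1e6 monkeys, 10 cps
  attempts_per_year = chars_per_year / chars_per_sonnet
  log_years = log_combos - log10(1_000_000) - log10(attempts_per_year)
  print(round(log_combos), f"{attempts_per_year:.1e}", round(log_years))
  # -> 594 7.5e+11 576, i.e. on the order of 1e576 years for some sonnet,
  # the same ballpark as the 2.77e576 quoted.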
It is interesting to note that natural language is often
not powerful (efficient) enough to transmit thoughts, i.e.
what is quite clear to one person may not be so to the
other. Stone is hard. Biting the stone wouldn't achieve
the intended goal of crushing the stone before one breaks
one's own teeth. So the expression should convey the
idea that a task is extremely hard, in my view. But this
ubiquitous ambiguity of natural languages on the other
hand demonstrates that there is quite a bit of entropy
in everyday sentences. Further, if every sentence that I
speak were exactly anticipated by my partner without even
hearing it, then its entropy would be exactly
zero, wouldn't it? Now that the partner gets the sentence
but is still not very clear about my meaning (without my
intentionally making it ambiguous), i.e. maybe he has to
flip a coin to determine the intended meaning, I suppose
that the existence of a fair amount of entropy in natural
language sentences is certainly quite evident.
The problem is how to tap that entropy source. Before
further elaborating my point, I should say a bit about
the monkeys. That these would have required infinite or
almost infinite time to produce the works of literature is
certain. But I wanted simply to say that, on the assumption
that keystrokes of monkeys are truly random, the writings
of humans, since these could eventually be reproduced by
monkeys, are not the 'exact' opposite of randomness (i.e.
'totally' deterministic). Actually this fact is entirely
trivial. One buys books because one wants to get something
whose contents one doesn't know for sure. An extreme
example is a good detective novel, written such that the
reader can't guess the end of the drama before actually
reading it through to the last pages. (He can't do prediction.)
So it is my thesis that there is plenty of entropy in the
books. That entropy may not be in highly concentrated
form, but this fact is easily compensated by the sheer
number of available books. What really helps us here is
combinatorics. If I choose ten books, even from one single
publisher, say the Penguin, and put them in series, the
number of possibilities is astronomical. So if I use a
sufficiently long secret key to make such a choice, it is
almost certain that the opponent can't know which set
(sequence) of books I use, except by an extremely
small 'pure' chance. Now all I have to do is to concentrate
the entropy in the texts. First I compress them to reduce
the redundancy. Then I shuffle these in small units (just
large enough for efficient computer processing, hence
32-bit words for the currently common computers) to
introduce sufficient confusion to prevent the opponent from
doing any useful work even if he managed to get to the
shuffled result (which he can't because of the hashing
process that follows). Finally I use a good hashing
algorithm to concentrate the entropy, leading to
sufficiently good randomness and ensuring that the
opponent can't reverse my processing. Now, of course, this
is no mathematically rigorous proof of security, but I
suppose that the security offered isn't worse than what
is commonly being discussed in practice, noting that there
are a number of 'parameters' of the described scheme that
can be adjusted/tuned according to the needs of the
applications that one has at hand.
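A minimal sketch of that pipeline, just to make the steps concrete (the key
derivation, the shuffling generator and the output size are illustrative
choices of mine, not a vetted design):

  import hashlib, random, zlib

  def concentrate_entropy(texts, secret_key, out_blocks=4):
      """texts: byte strings of the chosen books, in key-determined order."""
      data = zlib.compress(b"".join(texts), 9)        # reduce redundancy
      words = [data[i:i+4] for i in range(0, len(data), 4)]  # 32-bit units
      rng = random.Random(hashlib.sha256(secret_key).digest())
      rng.shuffle(words)                              # keyed shuffle
      shuffled = b"".join(words)
      out = b""
      for i in range(out_blocks):                     # hash to concentrate
          out += hashlib.sha256(i.to_bytes(4, "big") + shuffled).digest()
      return out

  # e.g. (chosen_books is a hypothetical list of file paths picked by the key)
  # concentrate_entropy([open(p, "rb").read() for p in chosen_books],
  #                     secret_key=b"the long secret choice key")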
Finally I would like to say that I am of the opinion that one
need not resort to the unpredictability of quantum mechanics
to obtain security and that there are ample pseudo-random
sources (including well-designed PRNGs) that could very
well satisfy one's practical requirements.
M. K. Shen
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: hardware RNG's
Reply-To: [EMAIL PROTECTED]
Date: Mon, 6 Nov 2000 10:00:41 GMT
Trevor L. Jackson, III <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:
:> Certainly I for one would like to be able to eliminate
:> [pseudo-random environmental influences] and test the result.
: Why does this make a valid test? One should remove the signal of
: interest (the source of randomness) as the test case.
Both tests may produce interesting results. It is possible that one will
be more practical than the other. It is also possible that (if examining
the pseudo-random stream) little of interest will be learned about the
randomness of the signal of interest.
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ Organic: Church music.
------------------------------
From: [EMAIL PROTECTED] (Vernon Schryver)
Subject: Re: CHAP security hole question
Date: 6 Nov 2000 00:11:20 -0700
In article <8u5ej6$g0l$[EMAIL PROTECTED]>, <[EMAIL PROTECTED]> wrote:
> ...
>Humans *can* remember passwords, but the trouble starts happening when
>they're forced to remember passwords that are difficult to remember.
>Strong password protocols are *precisely* relevant to these cases,
>because they protect passwords from offline guessing.
No, while humans can remember 1 or 2 passwords, they *cannot* remember
the literally dozens that they must remember to operate on the Net. The
dozens of distinct but poor passwords that we each need are much more
difficult to remember than a single good password. Strong password
protocols do *nothing* to solve this problem.
Consider the passwords for your on-line medical records, on-line
pharmacy, bank accounts, stock brokers, credit-card account, cell-phone
account, 401K, vacation and sick pay, vacation travel agent, private
mail or ISP account, and the other dozen outfits that you *really*
should not trust to share passwords. You *really* don't want to use
the same password with your on-line bank account that you use to do
things to your MasterCard, so that if a bad guy (such as a bad employee)
gets one, your whole life is compromised.
Assume that strong password protocols let you use poor passwords
for each of those on-line accounts. How do you remember them?
My personal experience with the dozens of passwords I've been juggling
for decades (until recent years only computer and network server things)
is that without writing down the dozens of individually easy to remember
(and so bad) passwords I'm given, I'm unable to keep them straight. Even
when I remember which passwords belong to a given account, I can't remember
which is the current one.
>> >And aren't humans really the ultimate endpoints for all these fancy
>> >cryptographic technologies?
>>
>> When was the last time you participated in the CHAP PPP authentication
>> process? The answer is almost certainly "never." You might have acted
>
>I don't know if you're deliberately misunderstanding what I'm saying or
>if you really don't understand the point.
>
>Ultimately, a human is what is being authenticated. Your PC may be
>bringing up a PPP link, but it's bringing it up on your behalf.
Yes, the PC is being authenticated on your behalf, but since you
need not be present, it cannot be authenticating you.
Also, since you need not know the secret that your PC uses to authenticate
whatever it is authenticating, how can it be authenticating you? How many
people know the password they use to connect to their ISP? Most did even
a few years ago, but today most do not. They have to check their records
or look in their PC's configuration data to refresh their memories.
More to the point, since your PC has a good memory for long,
hard-for-humans-to-use passwords that are immune to dictionary attacks, there is
no need to bother with zero-knowledge password protocols in those
computer-to-computer situations.
>> [scheme for negotiating a strong password]
>How is this long, random password exchanged securely? Are you assuming
>that on-line communication to your ISP is already secure?
I'm assuming something obvious and simple, such as a DH exchange to get a
secure channel, authenticated with a side channel consisting of you
speaking on the phone to someone at the ISP. You would download a program
(if you wish, deal with authenticating the program). The guy at the ISP
would tell you to type a short string to the program that would let the
program on your PC and a matching program at the ISP exclude a man in the
middle. Then the two programs would do the usual stuff to generate a
random key that would be jammed into the Microsoft PPP configuration stash
along with phone numbers, TCP windows and MRU, and other stuff.
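Something along those lines, as a toy sketch (the tiny group, the 6-digit
check string and the key derivation here are purely illustrative; a real
deployment would need a proper group and a more careful protocol):

  import hashlib, secrets

  P = 0xFFFFFFFFFFFFFFC5   # toy 64-bit prime -- far too small for real use
  G = 5

  def dh_keypair():
      x = secrets.randbelow(P - 2) + 1
      return x, pow(G, x, P)

  def shared_secret(priv, peer_pub):
      return pow(peer_pub, priv, P).to_bytes(8, "big")

  # Each side generates a keypair and swaps the public values on-line.
  a_priv, a_pub = dh_keypair()          # user's program
  b_priv, b_pub = dh_keypair()          # ISP's program
  k = shared_secret(a_priv, b_pub)
  assert k == shared_secret(b_priv, a_pub)

  # Short check string the person at the ISP reads over the phone; if both
  # programs show the same string, no man in the middle spliced the exchange
  # (a real protocol needs more care here, e.g. commitments).
  sas = hashlib.sha256(a_pub.to_bytes(8, "big") + b_pub.to_bytes(8, "big")
                       + k).hexdigest()[:6]

  # The PPP secret that gets stashed in the dialer config; no human ever
  # needs to see or remember it.
  ppp_secret = hashlib.sha256(b"ppp" + k).hexdigest()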
No human would ever know the authenticating secret, just as with
most PKI authentication.
And yes, as I said before, that idea founders on the insecurity
and unreliability of Windows.
> ...
>It is better to design an authentication system that acknowledges the
>limitations of human factors and takes them into account, rather than
>one that imposes significant inconvenience upon users in order to
>achieve security.
Exactly. In today's world that means acknowledging that we all must
authenticate ourselves in literally dozens of independent contexts, and
that means passwords kept in human memories are largely irrelevant.
> ...
>You keep talking about how easy and wonderful smartcards are without
>realizing their downsides (expense, availability, interoperability,
>overhead, user acceptance). Maybe people don't *want* smartcards
>everywhere.
Smartcards have all of those problems, but at least they have a hope
of working. Your own assumption that people cannot handle a single
good passphrase or password implies that people cannot handle the
dozens of passwords that we all must use, even if those dozens of
passwords are each 1- or 2-syllable words.
> ...
>> In other words, the current reality of dozens of passwords forces people
>
>Sites that force artificial "good password" rules on human users do this.
>Let's be clear on our assumptions.
My assumption has nothing to do with 'forced artificial good passwords'.
I assume that it is a Very Bad Idea(tm) to use the same password, whether
good or not, with your banker, your employer, and your doctor. Evidently
you assume the opposite, that it is just fine to make it possible for a
clerk in your doctor's office to poke around your MasterCard account, or
vice versa.
Or for someone at the last web-site that you used to get sports
scores to use your single global, easy to remember password to poke
around your medical and financial records.
> ...
>No, it's better to bet that humans won't spontaneously develop better
>memories and to design around that.
We agree on that point. You also seem to assume that people can live with
at most 3 or 4 simple passwords, since we agree that is an upper bound on
the complications that people can remember. That makes no sense unless
you are a student without a mortgage, two car loans, a bank account, your
wife's bank account, at least one on-line brokerage account, 3 bank credit
cards, a cell phone, three 401K's from previous employers, 4 doctor's
offices (pediatrician, your own GP, your wife's GP, your wife's OB/GYN),
a dentist, your employer's medical insurance account, your employer's
dental insurance account, your employer's "human resources" web pages,
and the last 15 online newspapers and merchants you've dealt with. My
own list of consumer passwords (i.e. not servers, routers, or otherwise
unusual) is longer than that, although I'll admit that my medical stuff
is not yet on the Net (as far as I know).
--
Vernon Schryver [EMAIL PROTECTED]
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: BENNY AND THE MTB?
Reply-To: [EMAIL PROTECTED]
Date: Mon, 6 Nov 2000 10:19:41 GMT
Matt Timmermans <[EMAIL PROTECTED]> wrote:
: "Tim Tyler" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
:> AFAIK, you are responsible for inventing this terminology - so I hesitate
:> to tell you what I think of it.
: I can take responsibility for that, yes. I assume you think it sucks. []
I would be quite happy with it - if zero wasn't somehow "odd".
: What is the "oddness" of 0? Certainly less than the "oddness" of 1.
In my books, 0 is indeed less "odd" than 1. 1 is odd, and 0 is not.
: Since 1 is finitely odd, then 0 must be too.
Am I losing it? If the conclusion follows from the premise here, I don't
see how.
I have "odd"ness as being associated with a lack of ability to divide by
two - with a lack of evenness. I can't see how 0 sensibly qualifies ;-|
--
__________ Lotus Artificial Life http://alife.co.uk/ [EMAIL PROTECTED]
|im |yler The Mandala Centre http://mandala.co.uk/ Organic: Church music.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************