Cryptography-Digest Digest #244, Volume #10 Thu, 16 Sep 99 01:13:03 EDT
Contents:
More New Stuff COMPRESS before ENCRYPT (SCOTT19U.ZIP_GUY)
Re: Encryptor 4.1 reviews please. (Rebus777)
Re: one time pad ([EMAIL PROTECTED])
Q: Is this key-exchange OK? ("Douglas Clowes")
Re: Mystery inc. (Beale cyphers) (Curt Welch)
Re: Stream cipher from a hash function (Secret Squirrel)
Re: Can you believe this?? (Eric Lee Green)
Re: Can you believe this?? (Eric Lee Green)
Re: Sources of randomness (Eric Lee Green)
Re: Looking for Completely-Free Strong Algorithms (Eric Lee Green)
Re: Ritter's paper ("rosi")
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: More New Stuff COMPRESS before ENCRYPT
Date: Thu, 16 Sep 1999 03:13:29 GMT
Updated my page on Adaptive One to One Huffman compression
and have source code with examples at my site. This is the
kind of compression one should do before one encrypts if one
is to use compression.
http://members.xoom.com/ecil/compress.htm
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMMERS
------------------------------
From: [EMAIL PROTECTED] (Rebus777)
Subject: Re: Encryptor 4.1 reviews please.
Date: 16 Sep 1999 02:26:35 GMT
This program seems to be implemented pretty well, except that after testing all
the algos and combinations of algos I found that they are all using ECB mode.
Even the IDEA cipher, which is documented as CBC, is outputting ECB. ECB reveals
patterns in the ciphertext when you encrypt redundant data. Example: encrypt a
string of a single repeated character, aaaaaaaaaaaaaa, with an 8-byte block
cipher, and the ciphertext will be a repeating 8-byte pattern.
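To see the leak concretely, here is a minimal Python sketch. The keyed function below is a toy stand-in for a real 8-byte block cipher (it is not IDEA or any cipher from the product); the point is only that ECB encrypts each block independently, so identical plaintext blocks give identical ciphertext blocks:

```python
import hmac, hashlib

def toy_ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """ECB: encrypt each 8-byte block independently with the same keyed
    function (a toy stand-in for a real block cipher)."""
    out = b""
    for i in range(0, len(plaintext), 8):
        block = plaintext[i:i + 8]
        out += hmac.new(key, block, hashlib.sha1).digest()[:8]
    return out

ct = toy_ecb_encrypt(b"secret key", b"a" * 24)
blocks = [ct[i:i + 8] for i in range(0, len(ct), 8)]
# Three identical plaintext blocks -> three identical ciphertext blocks
print(blocks[0] == blocks[1] == blocks[2])  # True
```

Any chaining mode (CBC, etc.) breaks this repetition by mixing each block with the previous ciphertext.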
>
>Has anyone used or heard of Encryptor 4.1 by Dr. Peter Sorvas & Bill
>Giovinetti? Here's their page:
>
>http://ourworld.compuserve.com/homepages/psorvas/
>
>
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: one time pad
Date: Thu, 16 Sep 1999 02:05:08 GMT
[EMAIL PROTECTED] wrote:
> I am trying to think of an efficient way to attack
> this problem but so far all I can think of is
> generating all possibilities for the first
> character for both texts
That's 117 possibilities.
> and then generating all
> possibilities for the second character. In my
> mind I would place these results on two seperate
> "wheels" and turn them and see if I get any
> matches.
What exactly is a match?
> BUT to compute both texts will require
> more CPU power than I have. I was planning on
> writing this program in C++.
It doesn't seem possible that you could have enough
CPU power to compile C++, but not enough to carry
out the attack. I'd say try it; maybe the
machine will surprise you.
I don't think you'll want a fully-automated solution.
Get the program to show you how various choices come
out, and weave together the texts that look right.
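For the curious, the standard way to "weave" two messages encrypted under the same pad is crib dragging: XOR the two ciphertexts (the pad cancels out), then slide a guessed word along the result. A minimal Python sketch (the two plaintexts and the crib are made-up illustrations, not from the poster's problem):

```python
def crib_drag(xored: bytes, crib: bytes):
    """Slide a guessed plaintext fragment along p1 XOR p2; wherever the
    XOR reads as plausible text, the crib may sit in one of the messages."""
    hits = []
    for i in range(len(xored) - len(crib) + 1):
        window = bytes(x ^ c for x, c in zip(xored[i:i + len(crib)], crib))
        if all(32 <= b < 127 for b in window):  # printable ASCII only
            hits.append((i, window.decode()))
    return hits

p1 = b"meet me at the bridge at noon"
p2 = b"the gold is buried out back!!"
xored = bytes(a ^ b for a, b in zip(p1, p2))
# Where the crib matches p1, the window reveals the aligned slice of p2
print(crib_drag(xored, b" the ")[:3])
```

Each hit is a candidate; a human (or a language model of some kind) then extends the plausible ones, which is exactly the semi-manual process suggested above.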
--Bryan
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: "Douglas Clowes" <[EMAIL PROTECTED]>
Subject: Q: Is this key-exchange OK?
Date: Thu, 16 Sep 1999 12:17:49 +1000
Are these key-exchanges OK or do I need something else?
I want two terminals to practice secure communications after an algorithm
negotiation and key exchange phase. I want to mutually authenticate the
parties.
The pair will initially either share a secret, which may be a
password/phrase, or they will have public key certificates signed by a
trusted CA. The messages in the key exchange phase are authenticated by
HMAC-SHA1-96 or SHA1 with RSA signature, as appropriate.
The question is: can I do this in a two pass exchange, or do I need a
challenge-response after DH exchange?
The message exchange for passwords (HMAC authenticated) is:
1:A->B: Id_a, Id_b, Time_a, Rand_a, DH_a, Algorithms_a, HMAC
2:A<-B: Id_b, Id_a, Time_b, Rand_b, DH_b, Algorithms_b, HMAC
The message exchange for certificates is the same, but includes the sender's
certificate (which can be checked by the receiver) and is signed with the
private key associated with the certificate.
After message 1, B can ensure that the message came from A, and the DH_a is
A's.
The same is true for A after message 2.
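For reference, HMAC-SHA1-96 is simply HMAC-SHA1 with the tag truncated to the first 96 bits (12 bytes). A Python sketch of tagging message 1 (the field values and the "|" serialization are hypothetical placeholders; a real implementation needs an unambiguous, length-prefixed encoding of the fields):

```python
import hmac, hashlib

def hmac_sha1_96(key: bytes, message: bytes) -> bytes:
    """HMAC-SHA1 truncated to 96 bits (12 bytes)."""
    return hmac.new(key, message, hashlib.sha1).digest()[:12]

# Hypothetical field values for message 1; a real protocol would use
# a delimiter-free, length-prefixed serialization.
msg1 = b"|".join([b"Id_a", b"Id_b", b"Time_a", b"Rand_a", b"DH_a",
                  b"Algorithms_a"])
tag = hmac_sha1_96(b"shared password", msg1)
print(len(tag))  # 12
```

B recomputes the tag with the shared secret and compares; a mismatch means the message was forged or altered.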
Is this OK?
Possible variations would include using the HMAC function on the password
and DH exchange secret to create the key used in future communications, as a
further assurance that each has the password. Similarly for the certificate
case, B could encrypt DH_b with B's private key, and then A's public key -
for A to recover it, DH_b must be decrypted with A's private key (proving
A's possession) and B's public key (proving B's possession of the private
key).
Does this help, and is it needed?
Does a subsequent challenge-response exchange add anything except overhead?
Is this open to attack? If so, how do I secure it?
Oh, and does it infringe anybody's patents? :-(
TIA,
Douglas
------------------------------
Subject: Re: Mystery inc. (Beale cyphers)
From: [EMAIL PROTECTED] (Curt Welch)
Date: 16 Sep 1999 03:00:06 GMT
[EMAIL PROTECTED] wrote:
> << Didn't know there were B2 adjustments people felt were needed! >>
>
> If you renumber the DOI you'll find that there are six or seven
> encipherments that still haven't been fixed by the renumbering. Most
> of these can be treated as typographical errors requiring only a +1 or -
> 1 adjustment.
It seems to me that it's a bit misleading to think about an adjustment
like that as a "typo". I say that because calling it a typo implies
it happened at some point after all the cipher work was done. And that
implies "fixing" the cipher text as published is the correct route to
take (i.e., you are just correcting a mistake that the printer, or someone
copying the cipher at a later date, made). But it's not clear to me that
this is the case here.
If a block of consecutive numbers (consecutive in the DOI, that is --
not consecutive in B2) is off by a constant (say +5), then it makes
sense to think this was caused by a numbering error in the DOI (i.e.,
one error caused all the offsets).
And if in the middle of a block like that, there is a single number which
has a +6 offset error (instead of +5 like the rest of the block), then it
does seem more likely that the error there was caused after the DOI was
numbered. So, it makes more sense to label it a single encoding error
instead of "fixing it" by inserting a +1 and a -1 shift in the DOI
numbering.
But when did the error happen? It could have happened as the B2 clear text
was being encoded; i.e., they found the word to use in the DOI, but then
mis-counted from the nearest numbered word (that seems the most likely
explanation to me for an error like this). Or it's possible someone just
copied the number incorrectly at some point after the original encoding of B2.
But, think of this. It's possible that as B2 was encoded, they recorded
all the numbers to use for each letter on a separate sheet of paper. And
this sheet of paper (not the numbered DOI) was the "key" which was going
to be used to decode the message. And this "key" may have been used to
create the B1 "key". So even if a letter was encoded incorrectly (because
of a simple clerical error in the encoding process), the off-by-one
number as seen in B2 may be the number that was written down on the "key".
And that number may be the one we need to use when trying to break B1.
So, all I'm saying is that those "typo" errors aren't something you should
just "fix" and forget about. You need to think about both possible values
as you search for various solutions.
> Also, when viewed as typos, 188 becomes 138
That seems like a very likely copying error that would happen at some point
after all the crypto work was done.
> and 440
> becomes 40.
That's a little harder to swallow, but I guess also quite possible. (It's
not like the document was typed into a computer and the typist
double-hit the 4 key.) If it was being copied by hand, it seems like an
unlikely error to me (i.e., read 40, write 440). Maybe it's the type of
error that would be easy to make when typesetting the Ward paper? The
Linotype came much later than the date of the Ward paper, didn't it? So
the Ward paper would have been truly hand-typeset -- pick two 4's stuck
together out of the case instead of one by mistake?
> Other people have chosen other adjustments.
Yeah, it seems to me that anyone doing serious work on this should
re-analyze all these adjustments for themselves, so they understand
all the logic behind each adjustment and the possible options for
each...
If there are this many errors in B2 (the one for which everyone has the
key), imagine how many errors might be in B1 and B3 (assuming there's
some truth to the story, and that these documents were floating around
and re-copied various times before they were published by Ward).
I know Ed told me that even with his photocopy of the Ward paper there were
a handful of numbers that could not be correctly read because the document
had been folded and the paper torn on the fold. Yet few people publish
B1/B3 with notes that document the possible errors. And I think he said
there were versions of B1/B3 that as far as he could see were just clearly
wrong. e.g. his copy of the Ward paper was "obviously" an 8 but the
versions re-published had it as 3 (or something like that).
I wonder if there's a clear enough copy of the Ward paper where all the
numbers can be read without question?
--
Curt Welch http://CurtWelch.Com/
[EMAIL PROTECTED] Webmaster for http://NewsReader.Com/
------------------------------
Date: 16 Sep 1999 03:59:36 -0000
Subject: Re: Stream cipher from a hash function
From: Secret Squirrel <[EMAIL PROTECTED]>
On 10 Sep 1999, [EMAIL PROTECTED] wrote:
,--------
|Ok, lets assume that crypto is 100% export restricted, and only lies
|within the US (heh, a BIG assumption). Hash functions, on the other
|hand, are not export restricted. Is it possible to turn a hash function
|into a stream cipher?
|
|My idea falls along the lines of:
|SHA-1(password) = 1st output of stream cipher
|SHA-1(password + counter) = 2nd output of stream cipher
|SHA-1(password + counter2) = 3rd output of stream cipher
|...
|
|I was curious, (1) what would you use to make the output non linear (IE,
|how would you change the counter variable above), and (2) what would the
|security of this cipher be? (This assuming the hash function is not
|broken.)
|
|Just curious.
`--------
It would almost work. If you encrypt two files with the same
password, the spooks can take your two files and XOR them together
and then decrypt them without that much further effort.
You need to make it so that you can encrypt two files with the same
password but have a different "session" key for each file.
Here's how; do this and it will be foolproof:
Create some salt. Doesn't have to be random: just has to be
unique. The full system time, including microseconds, will do.
Here's what you do with it:
SHA-1(password + salt + counter1) = 1st output of stream cipher
SHA-1(password + salt + counter2) = 2nd output of stream cipher
SHA-1(password + salt + counter3) = 3rd output of stream cipher
..
The salt is prepended, as is, to the encrypted output, so that
decryption is still possible. The enemy sees the salt, but it is
useless to them without the password.
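A minimal Python sketch of the salted scheme (the 64-bit big-endian counter encoding and the use of os.urandom for the salt are my assumptions for concreteness; the poster's system-time salt would work the same way, as long as it is unique per message):

```python
import hashlib, os, struct

def keystream(password: bytes, salt: bytes, nbytes: int) -> bytes:
    """SHA-1(password + salt + counter) blocks, concatenated."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha1(password + salt +
                            struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:nbytes]

def encrypt(password: bytes, plaintext: bytes) -> bytes:
    salt = os.urandom(8)  # unique per message; sent in the clear
    ks = keystream(password, salt, len(plaintext))
    return salt + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(password: bytes, blob: bytes) -> bytes:
    salt, ct = blob[:8], blob[8:]
    ks = keystream(password, salt, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))

print(decrypt(b"pw", encrypt(b"pw", b"attack at dawn")))
```

Because the salt differs per message, two files encrypted under the same password get unrelated keystreams, closing the XOR attack described above.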
The only question remaining is: how secure is SHA-1?
------------------------------
From: Eric Lee Green <[EMAIL PROTECTED]>
Subject: Re: Can you believe this??
Date: Wed, 15 Sep 1999 20:33:47 -0700
John wrote:
> I agree. How can I get my encrypter "peer reviewed" without
> giving away the "family jewels?" I have had many who
> claim they are in the encryption community "assult me" for
> not giving up the source code.
Encryption algorithms are a dime a dozen. ftp.funet.fi has dozens of
encryption algorithms free for the download. Encryption algorithms
basically have no commercial value. Look at RC5, for example, which is
patented at least in the USA. Who uses RC5? Nobody. They either use
DES/3DES, or they use (totally free) Blowfish.
What has value, on the other hand, are encryption PRODUCTS. This is the
core encryption algorithm, "productized". You may put it into a library
with a big math library and other products and documentation, and sell
it as a cryptographic toolkit. You may bundle it with a user interface
and sell it as a file encryption product. You have made it easy to use,
and people will buy your product (presuming that it is a good product at
a decent price and that your marketing doesn't totally suck). You can
make money this way even though the core encryption algorithms in your
toolkit or file encryption product are public knowledge with published
source code.
--
Eric Lee Green http://members.tripod.com/e_l_green
mail: [EMAIL PROTECTED]
^^^^^^^ Burdening Microsoft with SPAM!
------------------------------
From: Eric Lee Green <[EMAIL PROTECTED]>
Subject: Re: Can you believe this??
Date: Wed, 15 Sep 1999 20:41:13 -0700
John wrote:
> when it was called Computer Language. The 1996 article
> does not "debunk anything" but only proves that in a
> computer magazine, that is what I'd expect. I never said
Cryptographic algorithms are hard to get right, and it is hard to tell
whether they are working correctly or not. This makes them inherently
different from, say, a word processor, where you can tell immediately
whether it is producing proper printed pages or not.
As someone who is responsible for the cryptographic component in my
employer's next product, I will either use well-known algorithms that
have been extensively cryptanalysed, or I will publish the source to
particular components that I am not sure of. To do anything else is
sheer lunacy -- I simply do not have the depth of knowledge to design a
worthwhile encryption algorithm from scratch and expect it to stand up
to cryptanalysis. Even Ron Rivest had to try his hand several times in
the RC-series before finally getting one that stood up to cryptanalysis.
If Ron Rivest, one of the best minds ever in the cryptographic algorithm
business, can't get it right first time at bat, why should I presume
that I could do better? Why should I presume that *YOU* could do
better?!
--
Eric Lee Green http://members.tripod.com/e_l_green
mail: [EMAIL PROTECTED]
^^^^^^^ Burdening Microsoft with SPAM!
------------------------------
From: Eric Lee Green <[EMAIL PROTECTED]>
Subject: Re: Sources of randomness
Date: Wed, 15 Sep 1999 20:48:15 -0700
Scott Nelson wrote:
> There's a tradeoff here between wasting computing power
> and source bits. Both resources are remarkably cheap,
> and if you can, then waste them both. In my experience though,
> source bits have always been "cheaper." YMMV.
In my application, at least, source bits have always been the hardest
thing to come by :-(.
> do with the random bits you've collected. In fact, if you
> want to talk about avoiding implementation errors, the KISS
> principle says we should use the simpler CRC over SHA1, not
> the other way around. This is especially true if the bits
On the other hand, if my application already depends upon having a
cryptographically secure hash function available (e.g. if I must sign a
hash in order to authenticate messages), implementing an extra routine
to do the CRC would be ridiculous. Most of what we do here on sci.crypt
probably deals with security and authentication, most of which
require a strong one-way hash function such as SHA1 or MD5. If you
already have it there, why not use it rather than add more code to your
product (meaning more bugs)?
If I were running a server that had to generate 100,000 bytes per second
of key information then I would perhaps be less sanguine about possible
over-use of cryptographic hashes (and the resulting waste of CPU time).
YMMV, as always.
--
Eric Lee Green http://members.tripod.com/e_l_green
mail: [EMAIL PROTECTED]
^^^^^^^ Burdening Microsoft with SPAM!
------------------------------
From: Eric Lee Green <[EMAIL PROTECTED]>
Subject: Re: Looking for Completely-Free Strong Algorithms
Date: Wed, 15 Sep 1999 20:51:12 -0700
Joseph Ashwood wrote:
> Actually most of it is making sure that identical commands aren't sent
> between the server and host. Right now we occasionally get the equivalent
> of:
> Go get me a head of lettuce
> ...
> Go get me a head of lettuce
That's why you use a challenge string and a sequence number as part of
your protocol. That prevents replay attacks quite nicely.
Or am I confused about what you were talking about? (If you use a
challenge string and sequence number, no two packets should be
identical, even if the commands themselves are identical).
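A minimal Python sketch of the idea (the packet layout, field sizes, and key are made up for illustration): put the per-session challenge and a sequence number under the MAC, so identical commands never produce identical packets, and a replayed packet is detectable by its stale sequence number:

```python
import hmac, hashlib, os

def make_packet(key: bytes, challenge: bytes, seq: int,
                command: bytes) -> bytes:
    # The MAC covers the challenge and sequence number, so a replayed
    # or re-sequenced packet fails verification.
    body = challenge + seq.to_bytes(4, "big") + command
    return body + hmac.new(key, body, hashlib.sha1).digest()

key = b"session key"
challenge = os.urandom(8)  # fresh random challenge for this session
p1 = make_packet(key, challenge, 1, b"Go get me a head of lettuce")
p2 = make_packet(key, challenge, 2, b"Go get me a head of lettuce")
print(p1 != p2)  # True: same command, different packets
```

The receiver recomputes the MAC over the body and rejects any packet whose sequence number it has already seen.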
--
Eric Lee Green http://members.tripod.com/e_l_green
mail: [EMAIL PROTECTED]
^^^^^^^ Burdening Microsoft with SPAM!
------------------------------
From: "rosi" <[EMAIL PROTECTED]>
Subject: Re: Ritter's paper
Date: Thu, 16 Sep 1999 00:19:28 -0400
Dear Ritter,
I see more sense in your side of the argument.
--- (My Signature)
Terry Ritter wrote in message <[EMAIL PROTECTED]>...
>
>On 14 Sep 1999 12:55:58 -0700, in
><7rm98e$69h$[EMAIL PROTECTED]>, in sci.crypt
>[EMAIL PROTECTED] (David Wagner) wrote:
>
>>In article <[EMAIL PROTECTED]>, Terry Ritter <[EMAIL PROTECTED]>
wrote:
>>> There is a copy of the article .PDF on my pages. It is first in the
>>> list in the Technical Articles section on my top page. The exact link
>>> is:
>>> http://www.io.com/~ritter/ARTS/R8INTW1.PDF
>>
>>Thanks for posting! I think this is an important subject for
>>discussion.
>>
>>However, I don't think your suggestion works. I'd like to invite
>>you to look over my reasoning and see if you find any errors.
>>
>>Let's think of this as a resource allocation problem (i.e., an
>>economics problem), where our sole goal is to minimize the risk
>>that the adversary can read our traffic. Then I think a fairly
>>simple calculation shows that your proposed approach is sub-optimal,
>>and that the best strategy is to "follow the crowd".
>>
>>Suppose we have a fixed bound R on the total amount of resources
>>we can apply to the problem (e.g., R man-years, R months of Eli
>>Biham's time, whatever). Further suppose we have a fixed amount T
>>of traffic to protect. We have two choices:
>> ("AES") Design one cipher that you really really believe in; use
>> it for _all_ the traffic.
>> In other words, spend all of R on design and analysis
>> of the cipher, and use it for all of T.
>> (Ritter) Design N ciphers, and hope most of them don't get broken.
>> In other words, spend R/N on each of the N designs, and
>> use each cipher to encrypt T/N of the traffic.
>
>I notice that you say I "hope" my ciphers are not broken. Yet it is
>*I* who can afford to lose a few: I have exponentially many, you
>know. On the contrary, it is *you* who must *hope* your cipher is not
>broken, because that is everything. And you can *only* hope, because
>you cannot *measure* that probability.
>
>
>>I think these scenarios accurately characterize the two approaches
>>we want to compare. Do you agree with the model?
>
>No. For one thing, this model is too limited to describe the
>advantage of placing exponentially many ciphers before our Opponents.
>It also slouches toward comparing probabilities which we cannot know,
>and which thus make no scientific sense to discuss.
>
>
>>Let f(R) be the probability that we apply the resources specified
>>by R to cryptographic design and analysis, and yet the adversary still
>>manages (somehow) to break our cipher.
>
>No, no, no we can't. We might calculate the economic disaster
>which would result from breaking (and those results would be:
>disaster for the AES approach; a regrettable transient loss in mine),
>but we cannot calculate the probability that a cipher will fail.
>
>
>>We can now calculate the risk of failure for each scenario.
>> ("AES") With probability f(R), the cipher breaks, and all T of
>> our traffic is broken.
>> => Expected loss = T*f(R).
>> (Ritter) Each cipher breaks with probability f(R/N), and each break
>> reveals T/N of our traffic.
>> Since expectation is linear, the total expected loss is the
>> sum of the expected losses; the latter quantity is T/N * f(R/N)
>> for each cipher, and there are N of them, so...
>> => Expected loss = N * T/N * f(R/N) = T*f(R/N).
>>Here, I've made the assumption that the "utility function" we want
>>to minimize is the expected amount of compromised traffic. This is
>>probably an unrealistic assumption, but let's make it for the moment.
>
>Alas, these are false computations. One hidden -- dare I say
>"sneakily" hidden -- assumption is that adding cryptanalysis resources
>R reduces f. But where is the evidence for this? For example:
>
>* If the cipher is already broken, f = 1.0, and all the R we spend is
>completely wasted to no avail.
>
>* If the cipher is not broken but will be some day, we are again in
>the same situation, but we just do not know it. And this may be the
>whole of our universe.
>
>Yet another hidden assumption is that there exists a probability of
>failure, and that we can discuss that. I assert that while there may
>be some such probability, there is no way to measure it, and no way to
>know if our discussions are correct, and so no scientific use in
>discussing it. We cannot say when it is reduced, or even what that
>might mean, unless we invent a special case where more attack
>resources break more messages. Normally we handwave a "break" which,
>when known, exposes "all" messages.
>
>You have also not included the per-cipher resource reduction which
>affects the Opponents, and some sort of effort factor to describe the
>difference between placing twice as much effort into something you
>know as opposed to having to learn some whole new cipher and trying to
>attack that. One of the advantages of multi-ciphering is that it
>creates exponentially many "ciphers" which may exist. Another
>advantage is that multiciphering protects each individual cipher
>against known-plaintext attacks against an individual cipher. If the
>only reasonable attacks are known-plaintext, this alone inherently
>increases strength over any single cipher which is necessarily exposed
>to known-plaintext.
>
>
>>It's hard to tell a priori what the graph of f() will look like,
>>but at least we can be pretty sure that f() is monotonic: doing
>>more analysis will only reduce the risk of catastrophic failure.
>
>This is CERTAINLY false if catastrophic failure already exists, or
>will exist. It is only true if you BELIEVE that the cipher is strong.
>But if you are satisfied by belief, you don't need crypto -- just
>*believe* your Opponents are not looking.
>
>
>>Thus, we can see that f(R) < f(R/N), and in particular,
>>T*f(R) < T*f(R/N), so the way to minimize the expected loss is to
>>choose the single-cipher strategy (the "AES" approach). Using lots
>>of ciphers only increases the expected loss.
>
>GIGO.
>
>
>>In the real world, the expected loss is probably not exactly the right
>>function to minimize (probably the harm is a non-linear function of
>>the amount of traffic compromised, where leaking 10% of one's traffic
>>is almost as bad as leaking 90% of it). Nonetheless, a moment's thought
>>will show that adjusting this assumption to be more realistic will only
>>widen the gap between the two strategies, and will make the "AES"
>>approach even more appealing than the above calculation might suggest.
>
>I would say that "a moment's thought" should be sufficient to show the
>futility of attempting a statistical argument on something which one
>cannot measure, such as cipher "strength," or discussing the
>"probability" that it will be broken, when we have no accounting and
>no evidence relating to real probability.
>
>---
>Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
>Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
>
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************