Cryptography-Digest Digest #438, Volume #10 Fri, 22 Oct 99 21:13:08 EDT
Contents:
Re: some information theory (very long plus 72K attchmt) (Tim Tyler)
Re: some information theory (very long plus 72K attchmt) (James Felling)
Re: some information theory (very long plus 72K attchmt) ("Belldandy")
Re: OAP-L3: Where are the negative critiques from users? (Taneli Huuskonen)
Re: some information theory (very long plus 72K attchmt) ("Rick Braddam")
Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column (Jerry
Coffin)
----------------------------------------------------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: some information theory (very long plus 72K attchmt)
Reply-To: [EMAIL PROTECTED]
Date: Fri, 22 Oct 1999 13:28:08 GMT
Anton Stiglic <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:
:> Anton Stiglic <[EMAIL PROTECTED]> wrote:
:> : Tim Tyler wrote:
:> :> Anton Stiglic <[EMAIL PROTECTED]> wrote:
:> :> [about a post of mine relating to compression (and Tom's notions about it)]
:> :>
:> :> : The thing is that in practice, an attacker *does* have the
:> :> : decompressor at hand. So compressing doesn't help, information-
:> :> : theoretically speaking.
:> :>
:> :> Compression helps against all sorts of cryptanalytic attacks by:
:> :>
:> :> A) reducing the quantity of the cyphertext available and,
:> :> B) removing statistical characteristics present in the compressed
:> :> plaintext by raising its entropy
:>
:> : The statistical characteristics are not hidden when you compress.
:>
:> Oh yes they are ;-)
: If the information (in the sense of Shannon) was not there, it would be
: impossible to decompress (we would not get back the info).
The information is present in the combination of the compressed
message and the decompression (or compression) program.
: So all the info, statistical or whatever, is there, no matter how your
: compression works (on individual characters, on words, or on the whole
: text).
The claim is that compression helps with security. The claim is *not*
that compression somehow loses information in the message.
Regularities in the uncompressed plaintext can (sometimes) be used by the
cryptanalyst to attack the cypher. Removing these regularities by
compressing the data complicates his job - and indeed can sometimes
make the difference between a code which it is possible to crack,
and one where more information is needed.
Statistical characteristics of the message *are* changed - it becomes less
regular.
To argue that these are still there if you decompress the compressed file
misses the point that to the cryptanalyst who only has a fragment of the
message, decompressing may not be possible.
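Tyler's claim is easy to spot-check empirically. The sketch below is mine, not
the poster's: zlib stands in for whatever compressor is used, and the
byte-frequency entropy estimate is a crude proxy for the "statistical
characteristics" under discussion.

```python
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte-frequency distribution, in bits per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Repetitive English-like plaintext: heavily skewed letter frequencies.
plaintext = b"the quick brown fox jumps over the lazy dog " * 200
compressed = zlib.compress(plaintext, 9)

# Compression pushes the per-byte entropy upward, flattening exactly the
# regularities a frequency-based cryptanalysis would exploit.
print(byte_entropy(plaintext))   # low: English text is far from 8 bits/byte
print(byte_entropy(compressed))  # noticeably higher
```

The information is not destroyed, of course - it has moved into the
(compressed data, decompressor) pair - but the residue visible to a
frequency count shrinks, which is Tyler's point.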
--
__________
|im |yler The Mandala Centre http://www.mandala.co.uk/ [EMAIL PROTECTED]
6.6.6 - IP address of the Beast.
------------------------------
From: James Felling <[EMAIL PROTECTED]>
Subject: Re: some information theory (very long plus 72K attchmt)
Date: Fri, 22 Oct 1999 11:07:51 -0500
Anton Stiglic wrote:
> James Felling wrote:
>
> >
> >
> > Even with the knowledge that the message is contained in a set (just for
> > example let's say E (English-language text strings)), compression will still work
> > to our advantage.
> > There are many attacks against cyphers that depend on the analyst having a
> > large quantity of cyphertext to sort through before beginning. While our
> > knowing that the message M is contained in E will allow us to efficiently
> > recognise a break in either case, the fact that M is compressed 6:1 (a fairly
> > typical ratio) will mean that to have the requisite amount of raw cyphertext
> > to proceed with a break one needs 6 times the traffic, or perhaps 6 times the
> > processing power; you have made the adversary's job a bit more difficult.
> > However, if a tool for attacking the algorithm on a purely informational
> > basis were developed there would be no difference, and thus they would be at
> > parity.
> >
> > However, we have no such methods yet, and instead depend upon methods that
> > rely upon properties other than pure entropy, and in fact rely mainly upon
> > analysis of the data. These methods will benefit from an increased quantity
> > of data.
> >
>
> You are confusing the *size* of ciphertext with the *number* of ciphertexts. If an
> attacker can get a certain amount N of ciphertexts, he will get it whether the
> person used compression or not.
You miss my point. If I set up my security policies properly, I will change keys on a
regular basis. Thus if I send a quantity N bytes of data and then change keys,
assuming 6:1 compression the analyst will get only N/6 bytes of data. For well
chosen N, this may move the cypher from just barely attackable to not practical to
attack.
> I would agree that each ciphertext will be shorter, but as
> I said, that doesn't take away any info from the attacker (this is where the
> subset argument comes in, not for the number of elements in the set, but for
> the computation of entropy.)
It does not take away any information from the attacker IN THEORY. That is correct.
However, in actual practice, it is much more likely that the code will be
successfully attacked with N bytes of data to analyze than with N/k bytes (where k > 1),
even assuming that the informational content of both is equivalent.
We are simply not that good at extracting informational content/recognising what is
information and what is not; were we that good, compression would do no good.
However, that is NOT the case, so compression will benefit us until we approach that
theoretical maximum (if we ever do).
>
>
> Anton
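Felling's key-rotation argument is just arithmetic. A minimal sketch (mine;
the 6:1 ratio and the 150 KB break threshold are illustrative assumptions,
not figures from the thread):

```python
def ciphertext_per_key(plaintext_bytes: int, ratio: float) -> float:
    """Ciphertext an analyst collects under one key, when plaintext is
    compressed ratio:1 before encryption and keys are rotated every
    plaintext_bytes of traffic."""
    return plaintext_bytes / ratio

# Suppose an attack needs 150 KB of ciphertext under a single key.
NEEDED = 150_000
n = 600_000  # bytes of plaintext sent before each key change

print(ciphertext_per_key(n, 1) >= NEEDED)  # uncompressed: 600 KB -> attackable
print(ciphertext_per_key(n, 6) >= NEEDED)  # 6:1 compressed: 100 KB -> below threshold
```

For a well-chosen rotation interval, compression alone drops the per-key
ciphertext below the attack's working threshold - exactly the "barely
attackable to not practical" shift Felling describes.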
------------------------------
From: "Belldandy" <[EMAIL PROTECTED]>
Subject: Re: some information theory (very long plus 72K attchmt)
Date: Fri, 22 Oct 1999 11:13:43 -0500
Anton Stiglic <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Belldandy wrote:
> > you have a mathematical proof for the "easiness" to solve the cypher
> > text??!?!? One word: wow!
> >
>
> No no no!!!!!!
> I have a simple proof (it's not even a proof, just a direct application
> of definitions) that compressing does not reduce the entropy.
> You don't lose any information (in the sense of Shannon) about
> your ciphertext when you compress it; this includes statistical
> bindings of the language in question. This, of course, is on the
> assumption that you know the compression function that is
> being used. It might make a statistical attack more time
> consuming, but still possible.
You should quote my whole reply. But never mind that, I'll rephrase my
posting to make it clearer.
What do you mean, still possible? Well, compression before encryption is not
designed to make the possible impossible. The purpose is to make life more
difficult. That's all.
I'm kind of confused about why you need to show that you don't lose any
information. It's supposed to be implied. Anyway, even if you show that there
is no information lost (which is supposed to be a tautology for a compression
function), I don't see why compression increases the entropy. (You said
reduces the entropy? A typo?)
Yes, the statistical binding will never be lost (given that there is no
information lost), but as I said in a previous posting, remember that you
work from a cypher text.
When you try to run a statistical analysis, you run it on cypher text, right?
From the statistical analysis, you make some educated guesses.
Hold that position; let's evaluate the nature of the compressed plain text
first. We have a space of plain texts of size N named M. A good compression
function [Comp] is one whose output (usually) doesn't have too many repetitive
patterns of digits. (Sorry for my opinion about compression a few postings
ago. I don't know compression much, so bear with me. But this time, I'm sure
I got it right.) The image of Comp over the space M is a space CO
(compressed results), and CO doesn't have enough repetition for a plausible
statistical analysis. All of M is mapped into CO.
Back to where we held the position. CO's nature (as above) doesn't allow
enough repetition for a plausible statistical analysis. Thus, the cypher text
is not likely to have any statistical tendency either. Therefore, we're left
with a cypher text with no noticeable statistical pattern, and a compressed
plain text with no noticeable statistical pattern either.
If you know the exact size of the plain text before compression, you might be
able to eliminate some possible combinations of plain text. But if you don't
know the size of the plain text, you still might be able to eliminate some
possible combinations of plain text just from the size of the compressed result
(that is, if you can learn the size of the compressed result from the cypher
text). But more likely, you cannot know the size of the compressed result.
The possible weak link (given my pathetic knowledge of compression) in my
argument is in the assumption that a good compression function has the
characteristic of producing the least statistical pattern. But I'm pretty sure
I'm right there. Can anybody verify this?
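That assumption can be spot-checked. This is my own rough sketch, not the
poster's: zlib stands in for a "good compression function", and the frequency
of the most common adjacent byte pair stands in for "repetitive pattern":

```python
import zlib
from collections import Counter

def top_digram_rate(data: bytes) -> float:
    """Frequency of the most common adjacent byte pair,
    as a fraction of all adjacent pairs."""
    pairs = Counter(zip(data, data[1:]))
    return max(pairs.values()) / (len(data) - 1)

text = b"it was the best of times, it was the worst of times " * 100
packed = zlib.compress(text, 9)

# English text repeats pairs like 'th' and 'e ' constantly; the compressed
# stream has no comparably dominant pair, starving a frequency analysis.
print(top_digram_rate(text) > top_digram_rate(packed))  # expect True
```

So at least for this crude measure, good compressor output does look far
less repetitive than the plaintext that produced it.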
> My point is to eliminate the myth that when you apply a
> compression function to an english text, for example, you
> get randomness, you *don't*! All the information is still
> there, just in a different format. A compression function
See above.
> is just a bijective deterministic function. It's not at all like
> the Discrete Logarithm Problem (DLP), say. DLP in
> a multiplicative group G*_p is trivial if one knows the bijection
> of G*_p with its isomorphic additive group G+_p, since
> DLP in additive groups is easy. The problem here is to find
> the bijection (isomorphism). In the context of encryption
> using compression, one most often knows the (bijective)
> compression function (a minimal requirement for security
> is that the encryption function be secure against an attacker
> having knowledge of the mechanism of the encryption cipher).
>
> Now, why I don't believe it's worth the while to compress before
> you encrypt, two major reasons:
>
> 2. For weak ciphers, the information is still there, and the
> compression function is almost always known (if it's not
> known, I wouldn't trust the cipher; the knowledge of the
> mechanism of the cipher by an attacker is a minimal
> requirement). Compressing might make it take longer for an
> attacker to decrypt, but ciphers that are vulnerable to
> attacks on the statistical bindings of the language are easily
> decipherable, and no proof exists that compressing really
> makes it all that difficult for an attacker (it can only increase
> the attack time by a polynomial time constant).
How can you be so sure about all possible compression algorithms?
> So, in conclusion to my initial posts, compressing doesn't reduce
> information in the cleartext, in the sense of Shannon's theoretical
> information (or what we call entropy). This is obvious to anyone
> who knows what the mathematical definition of entropy is.
Huh? I can't claim that I know the official definition of entropy, but I
can assure you that the entropy you're talking about has the property (I
explained above) of making statistical analysis difficult.
Btw, you said a direct application of the definition... of compression? Of
entropy?
Btw2, you might want to be careful with the fact that encryption and
compression can be considered the same, or interchanged. This is why I don't
see how it's possible to prove that compression is useless. You can always make
a keyed compression that is tougher to defeat without the key: a compression
whose emphasis is on reducing redundancy more than on making the size smaller.
------------------------------
From: [EMAIL PROTECTED] (Taneli Huuskonen)
Crossposted-To: talk.politics.crypto
Subject: Re: OAP-L3: Where are the negative critiques from users?
Date: 22 Oct 1999 23:11:46 +0300
=====BEGIN PGP SIGNED MESSAGE=====
In <[EMAIL PROTECTED]> Anthony Stephen Szopa
<[EMAIL PROTECTED]> writes:
>OAP-L3: Where are the negative critiques from users?
>Many people have gotten the software.
>They learned of this software from these very news groups.
>I have not seen any posts from users / evaluators that trash the
>software with legitimate examples that support any such assertion.
>Yet many of you have trashed the software while having never used it.
>Does anyone believe that this sort of criticism supports the
>credibility of those making such rash and irresponsible criticisms?
Well, since you asked:
How many times have you heard complaints about the quality of food
scavenged from trash? Is the lack of complaints a reason to believe
such food is good and healthy? Is it rash and irresponsible to teach
your kids not to eat anything they find in a dustbin?
I think that analogy sums up my attitude towards your programme quite
accurately. It might be something good; people do sometimes throw away
perfectly good food. However, I've enough reason to doubt its quality
to have no interest in finding out what it's really like.
The marketing hype on your Web page is all very vague, but there is one
particular claim that is enough to prove beyond reasonable doubt that
your understanding of cryptology is so limited that any cryptographic
product designed by you can be secure only by accident.
Quoting directly from the page:
Well, everyone knows that only messages encrypted using one-time
pads are unbreakable. At its heart, Original Absolute Privacy -
Level3 is an automated pseudo one-time pad generator.
Anyone who really understands the content of Shannon's classical result
on the unbreakability of the one-time pad knows that it applies exactly
as much to pseudo one-time pads as advice on the treatment of a dog
applies to a hot dog. Granted, no logical connection between the
consecutive sentences is claimed here, but it's clearly implied.
Whoever is stupid and/or ignorant enough to believe your hype and buy
your programme is certainly not qualified to assess its cryptographic
strength. That simple fact suffices to explain the lack of negative
feedback, regardless of the merits of the product.
Taneli Huuskonen
=====BEGIN PGP SIGNATURE=====
Version: 2.6.3i
Charset: noconv
iQB1AwUBOBDE8gUw3ir1nvhZAQGLQQL6AobesJaoFZoOjpnlb8+iLjVzR/ByNPzo
BZYHIzhWupcnrs0IYkRvEygltElcA4LjKzt6kXE8zLr5X99NMns9M9ByjKu33qC8
pfiiyCvzKwQCer+/Y3/fZX2DemExvuAo
=sV25
=====END PGP SIGNATURE=====
--
I don't | All messages will be PGP signed, | Fight for your right to
speak for | encrypted mail preferred. Keys: | use sealed envelopes.
the Uni. | http://www.helsinki.fi/~huuskone/ | http://www.gilc.org/
------------------------------
From: "Rick Braddam" <[EMAIL PROTECTED]>
Subject: Re: some information theory (very long plus 72K attchmt)
Date: Fri, 22 Oct 1999 16:58:09 -0500
Anton Stiglic <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
>
> I repeat the construction of the distinguishing algo D:
> on input c,
> D computes y <-- Comp(c),
> IF y has the redundancy that is attached to the elements of set P, D
> outputs "NOT RANDOM"
> ELSE D outputs "RANDOM".
>
> Now, tell me what is wrong with this proof, and I'll listen to what you
> have to say.
> Hint: there is nothing wrong with this proof.
>
> Anton
>
How about that IF? I'm really surprised that no one has called you on that yet.
Correct me if I restate your proof incorrectly in a different form:
Compress() is a compression function
D() is a verifier function [NOT THE DECOMPRESSOR function but a cryptographic analyzer]
y = Compress(c)
IF y has THE SAME REDUNDANCY that is attached to [characteristic of] the elements of
set P, D(y) returns "NOT RANDOM" ELSE D(y) returns "RANDOM"
I submit that if y has the same redundancy, no compression has taken place. A
compressed file might have the same redundancy in the case of trying to compress
random numbers, but not if the file was English text, unless the compression
program only did a copy operation.
It has been stated here and elsewhere that the purpose of compression is to
reduce (actually, *try* to eliminate) redundancy. Let's face it... if the
compressed message is shorter than the original, SOMETHING is missing. If the
compressed message can be decompressed to the original message, then the
something missing is not essential to the message or can be reconstructed. As I
understand it, the "something missing" is redundancy.
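For concreteness, here is a toy version of the distinguisher D being argued
about. It is my sketch, not Anton's code: zlib plays the known compressor,
and "has the redundancy of set P" is approximated by a printable-ASCII check.

```python
import zlib

def looks_like_english(data: bytes) -> bool:
    """Crude stand-in for 'carries the redundancy of set P':
    the bytes are overwhelmingly printable ASCII."""
    if not data:
        return False
    printable = sum(1 for b in data if 32 <= b < 127 or b in (9, 10, 13))
    return printable / len(data) > 0.95

def distinguisher(candidate: bytes) -> str:
    """On input c, invert the known compression and test the result
    for the plaintext set's redundancy."""
    try:
        y = zlib.decompress(candidate)
    except zlib.error:
        return "RANDOM"  # doesn't even decompress under the known scheme
    return "NOT RANDOM" if looks_like_english(y) else "RANDOM"

real = zlib.compress(b"attack at dawn, bring the maps")
print(distinguisher(real))          # a true compressed plaintext is flagged
print(distinguisher(b"\x00" * 32))  # junk fails the header check
```

Note the toy also illustrates Rick's objection: the verdict hinges entirely
on whether the inversion step succeeds, which a fragment of ciphertext may
not permit.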
What some of the others have been saying is that the "something missing" is
helpful to an analyst in trying to break an encryption.
In another post, John Savard wrote, and you replied:
>> Compression, as it gets better and better, produces a result that
>> approaches closer and closer to being random.
>>
>
> This is absolutely wrong! If your plaintext set P was random, then
> Img(Comp(P)) would be random (but P is already random).
> If the messages picked from P are not random (but English text),
> Comp(P) will never be random.
What's wrong with this picture? John said "approaches ... being random" and you
argued "will never be random". I guess it depends on what your definition of
"is" is. :) One thing I do know, because I've done it, is that I can compress a
file of text and test the result with DieHardC, and have it pass all tests.
That doesn't say that the result is random, and we all know it ain't. There are
some pretty good statistical tests in DieHardC, though. That makes me wonder
what you use for your distinguisher D. If it absolutely can determine that a
file is not random, then can't it also determine that one IS random? From
posts I've read in the past, I thought that was impossible... determining
randomness by inspection.
Any corrections, clarifications, or criticism are welcome and appreciated in
continuing my education. Thanks.
Rick
------------------------------
From: [EMAIL PROTECTED] (Jerry Coffin)
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column
Date: Fri, 22 Oct 1999 16:31:08 -0600
In article <7uqjmb$oei$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] says...
> In article <[EMAIL PROTECTED]>,
> Jerry Coffin <[EMAIL PROTECTED]> wrote:
> > If you hardwire one algorithm, the satellite is secure until that
> > algorithm is broken. If you make the algorithm re-programmable, your
> > satellite is secure until one of two algorithms is broken. If you
> > have five hardwired algorithms, your satellite is secure until all
> > five algorithms are broken.
> >
> > If there's a flaw in the logic here, please be so kind as to point it
> > out.
>
> You seem to assume that you will know when someone breaks your algorithm.
Yes, more or less.
> History suggests that this is unlikely -- the adversaries who are most
> likely to spend a lot of effort patiently searching for weaknesses are
> also the ones who are least likely to tell you when they break your cipher.
Of course. OTOH, if you're at all prudent about things, you can get a
pretty fair idea that your security is breaking down somewhere, and
when that happens, you can at least take SOME action if your
investigation indicates that the cipher may be what's causing a
problem.
Your suggestion seems to be that since you MAY not know when you have
a problem, it's best to stick your head in the sand and simply
assume that you'll never have a problem, at least in this area.
> I think it is more prudent to assume that if you select somehow from a
> list of five algorithms for each connection, you may end up with a system
> that is only as strong as the weakest of the five algorithms.
WHY? I've already suggested a method that, AT WORST, is at least as
strong as the method you suggest. You're basically asking that I
assume that whoever designs the system is an idiot. You're asking
that if I design a system, I ignore the best method I know and
intentionally weaken the system.
The phrase "straw man" comes to mind here: you're basically asking
that if I don't select the method you consider good, that the only
alternative I consider is one we both already know is bad.
If no third alternative existed, then your position might be a
reasonable one, but it seems pretty clear to me that if you include a
number of ciphers and simply never use any but one, then the security
is no worse than if you'd only ever included one in the first place.
IOW, the worst possible case with my suggestion is as strong as your
suggestion.
Even in this worst-case scenario, I think having multiple AES winners
improves security. The funding for cryptanalysis and the number of
serious cryptanalysts in the world are both quite limited. If AES
selects a number of different winning algorithms, that research
capability is likely to be spread more or less evenly among those
winners rather than all being concentrated on one. This is likely to
reduce the possibility of any one being broken in any specific period
of time.
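The "secure until all five algorithms are broken" logic can be put in a toy
model. This sketch is mine, and it assumes, purely for illustration,
independent per-cipher break probabilities - a strong assumption the thread
does not actually make:

```python
def p_system_broken(p_each: float, n_ciphers: int) -> float:
    """Probability that every one of n independently attacked ciphers is
    broken - what it takes to defeat a satellite that can fall back
    through all of them."""
    return p_each ** n_ciphers

# One hardwired cipher vs. five, each broken over some period with prob. 0.1:
print(p_system_broken(0.1, 1))  # 0.1
print(p_system_broken(0.1, 5))  # ~1e-05: vastly less likely that all fall
```

Under this (admittedly idealized) model the fallback design is never weaker
and usually far stronger, which is the worst-case claim Coffin is defending.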
In addition, if an opponent KNOWS that at the first sign of a problem,
you can switch from one cipher to another, he's less likely to invest
the (typically large) amount of money necessary to build specialized
hardware to break the first cipher to start with. Thus, until there
were really serious theoretical problems found with _all_ the winners,
it would be substantially less likely that anybody would invest the
money necessary to really decrypt any of them.
--
Later,
Jerry.
The universe is a figment of its own imagination.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************