Cryptography-Digest Digest #263, Volume #10 Sat, 18 Sep 99 12:13:03 EDT
Contents:
Blatant Error & My Apologies (WAS: Re: Exclusive Or (XOR) Knapsacks) ("rosi")
Re: Glossary of undefineable crypto terms (was Re: Ritter's paper) (Nicol So)
Re: Mystery inc. (Beale cyphers) (John Wasser)
Re: VICTORY??? (was: crypto export rules changing) (Ben)
Re: Exclusive Or (XOR) Knapsacks ("rosi")
Re: crypto export rules changing (Anthony Stephen Szopa)
Re: (US) Administration Updates Encryption Export Policy (Anthony Stephen Szopa)
Re: VICTORY??? (was: crypto export rules changing) (SCOTT19U.ZIP_GUY)
Re: some information theory ("Tony T. Warnock")
----------------------------------------------------------------------------
From: "rosi" <[EMAIL PROTECTED]>
Subject: Blatant Error & My Apologies (WAS: Re: Exclusive Or (XOR) Knapsacks)
Date: Sat, 18 Sep 1999 07:40:08 -0400
rosi wrote in message <7rv1vj$24t$[EMAIL PROTECTED]>...
[Snip]
>sense', beyond NP-complete. (But not unique in the true sense, by the way, is
>useless to cryptography). To be concrete a bit more, if you are referring to
 ^^^^^^^
I was wrong (I was only speaking in a limited sense that rested on an
implicit assumption). That implicit assumption is NOT at all reasonable.
If you mean one-way use (never having the intention of inverting it), the
word 'useless' is total nonsense.
My sincere apologies. Genuinely sorry.
--- (My Signature)
------------------------------
From: Nicol So <[EMAIL PROTECTED]>
Subject: Re: Glossary of undefineable crypto terms (was Re: Ritter's paper)
Date: Sat, 18 Sep 1999 08:19:46 -0400
Trevor Jackson, III wrote:
>
> rosi wrote:
>
> > Trevor Jackson, III wrote in message <[EMAIL PROTECTED]>...
> > [SNIP]
> > >Usually this means as secure/hard as X where X is "thought to be secure"
> > >or "thought to be hard". Hardly a convincing manner of proof.
> >
> > Thought I alone had the semantics problem. Long live 'provably secure'!
> >
> > (Even) As hard as???
>
> There are lots of poorly defined or undefined terms in this NG; after all, it's
> mostly English, which is pretty slippery. But there are several
> terms that may be undefinable. Like putting a superfluid in an open container
> or picking up dropped mercury with your fingers, it just doesn't work.
>
> I nominate the above described phrase "provably secure" to accompany the
> classic "truly random" as not possible to define precisely.
Actually, I think the meanings of "provably secure" and "as hard as" are
quite clear, and they are by no means undefinable. The latter
expression can be understood in terms of reduction. That said, these
expressions may not have the implications one might expect if one is not
familiar with the established usage.
Nicol
------------------------------
From: John Wasser <[EMAIL PROTECTED]>
Subject: Re: Mystery inc. (Beale cyphers)
Date: Sat, 18 Sep 1999 08:47:48 -0400
In article <[EMAIL PROTECTED]>, Niteowl <[EMAIL PROTECTED]> wrote:
> Dr. Matyas wrote a paper after much apparent research and thinks the
> version of the DOI used was from "An Historical, Geographical,
> Commercial, and Philosophical View of the American United States"
> published in 1795 by W. Winterbotham. The version of the DOI there
> does 'correct' many of the numbering errors seen in B2.
Copies are available from several dealers, including:
http://www.philaprintshop.com/rareamer.html
William Winterbotham. An Historical, Geographical, Commercial and
Philosophical View of the American United States and of the
European Settlements in America and the West-Indies. London: Ridgway,
Symonds, Holt, 1795. First British edition. 4 octavo volumes.
Original half leather spines and boards. Rehinged and refurbished.
Engraved illustrations. Sabin: 104832.
A detailed description of the new United States of America. These four
volumes give much information on the political situation by transcribing
documents as well as describing situations. Besides the lovely and early
American prints, the most fascinating aspect of Winterbotham's narrative
is that which describes natural history and presents flora and fauna.
The two views are current and interesting, but the natural history
illustrations are primitive and much of the information is old. The
reader can see how a new generation of natural scientists was needed.
The maps sometimes found in these volumes are not present here. $525
------------------------------
From: Ben <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto
Subject: Re: VICTORY??? (was: crypto export rules changing)
Date: Fri, 17 Sep 1999 22:14:55 -0800
Reply-To: [EMAIL PROTECTED]
fungus wrote:
> "Douglas A. Gwyn" wrote:
> >
> > Paul Rubin wrote:
> > > A big liberalization of export rules is supposed to be announced
> > > today, but apparently there will also be some key escrow provisions.
> > > http://www.sjmercury.com/breaking/headline1/024676.htm
> >
> > If the "liberalization" includes mandatory key escrow,
> > then it isn't the great advance that the article indicates.
>
> I just heard that they completely dropped all export control
> except to blacklisted countries (Iraq, etc.). The only requirement
> is that companies provide the feds with a list of their known
> resellers.
>
"The White House agreed Thursday to allow U.S. companies to sell
the most powerful data-scrambling technology overseas" -- quote from
http://abcnews.go.com/sections/tech/DailyNews/encryption990917.html
"The only major change is that the export limit has been raised from 56-bit
to 64-bit." -- quote from www.hackernews.com...
Just shows how out of touch the major media are.
Does this mean AES will have to be the AMERICAN encryption standard? If
they only increase the key length in 8-bit increments, we won't see
128-bit products OK'd for many years.
Ben
> --
> <\___/>
> / O O \
> \_____/ FTB.
------------------------------
From: "rosi" <[EMAIL PROTECTED]>
Subject: Re: Exclusive Or (XOR) Knapsacks
Date: Sat, 18 Sep 1999 08:43:09 -0400
Trevor Jackson, III wrote in message <[EMAIL PROTECTED]>...
[Snip]
>
>Does the non-unique case permit extension to the point that uniqueness may be
>provided by additional information? I.e., could a "key pair" be created such
>that the "public" portion is non-unique, perhaps pathologically so, while the
>"private" portion determines the unique solution?
>
Yes and no, depending on a couple of things.
Additional information, yes.
Private, not necessarily. If private, it is analogous to keeping a
'subset' of the bits of an RSA modulus secret. (Nonetheless, keeping it
secret in no way, I hope, decreases the security or the complexity of an
attack.)
The property of 'non-uniqueness' (note the quotes) is provided by
'structure' rather than by secret information (but of course the private
key is still secret).
A bit more (only in the limited one-way trapdoor sense this time). Say A
and B are to perform the treat with the set {1, 2, 3, 7, 19}. If the
subset sum is 10, well, that is beyond NP-complete. An attacker can NOT
recover it (was it 1+2+7 or 3+7?). Call it Perfect Secrecy? A bit of
flavor. However, non-uniqueness in this true sense does not give us any
ease: the decryptor cannot recover it, EITHER! If the decryptor winnows
out undesirable candidates by taking advantage of some additional
characteristic of the message, or any other additional information, we
are back to square one. That is, to whether that 'piece' is to be kept
secret.
However, to be brief, one can design it in such a way that the solution
is unique to the decryptor (_AND_ the encryptor), which IMHO can be
achieved in more than one way, with different ease for A and B and no
less complexity for the attacker, while as a subset sum it can really be
decomposed in more than one way. The simplest illustration is the above
example: if one can design it so that only one of 1+2+7 and 3+7 can be
valid, the point may be clearer.
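To make the ambiguity concrete, here is a minimal brute-force sketch in
Python (mine, not rosi's) that enumerates the subsets of {1, 2, 3, 7, 19}
summing to 10:

    from itertools import combinations

    def subset_sums(weights, target):
        """Enumerate every subset of `weights` that sums to `target`."""
        hits = []
        for r in range(1, len(weights) + 1):
            for combo in combinations(weights, r):
                if sum(combo) == target:
                    hits.append(combo)
        return hits

    print(subset_sums([1, 2, 3, 7, 19], 10))
    # -> [(3, 7), (1, 2, 7)]

Both decompositions are valid subset sums, which is exactly the point:
without extra structure or information, neither the attacker nor the
legitimate decryptor can tell which subset was meant.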
Finally, I think, when we talk about security, etc., we revisit the same
set of questions, which are many, can get us very 'involved', and
sometimes cannot be brought to a satisfactory stop once started. The
questions are very straightforward, independent of math formulae and so
forth. For example, what one does is look hard at the way A and B
perform the treat and the way(s) (any imaginable ones) an attacker can
institute. No doubt, one cannot cover everything, but if one thinks in
generic terms, one is likely to cover a large area. One thing obvious to
me --- and I believe to all --- is that A or B does the decryption treat
piecemeal while the attacker 'cannot'. When such a difference is
'distilled and enlarged' (if you have 4-5, you DO HAVE 4-5), ... Needless
to say, both the cryptographer and the cryptanalyst need to look any
assumption hard and straight in the eye and avoid getting intoxicated by
the (seeming) success they (maybe temporarily) achieve.
Thank you so much Trevor.
--- (My Signature)
P.S.
I think saying a bit more here won't hurt (about 'unique'). It is, IMO,
possible that the public portion at publication is semantically dense
(in other words, an attacker faces the 1+2+7-or-3+7 situation). However,
if all the secrecy is one-sided, I cannot think of a way to really create
that situation for an attacker while A and B do not get confused at all.
It is quite against intuition, but I can be wrong. One day somebody may
come up with this Perfect Secrecy, but I doubt it. Therefore, if A and B
are to perform the treat, at some point more needs to be revealed in one
way or another. I just would like to emphasize: I have no appetite to
bet my life (or any portion of it) on PS. I ONLY strive for one-wayness.
If anyone is in the same frame of mind, we are in the same camp. I am in
no way saying 'shut up' to anybody; I have no right to do so in any
fashion. I lay down my scope so that other people and I talk in the same
terms. (BTW, I gave my definition, or my version, of trapdoor one-way.)
------------------------------
From: Anthony Stephen Szopa <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto
Subject: Re: crypto export rules changing
Date: Sat, 18 Sep 1999 04:15:25 -0700
Reply-To: [EMAIL PROTECTED]
Bill Unruh wrote:
> In <VqiE3.5630$[EMAIL PROTECTED]> [EMAIL PROTECTED] (Dmitri Alperovitch) writes:
>
> >Um. Question - if they are willing to allow open export of unlimited
> >size keys (except when the destination is a terrorist state), why do they
> >still want a one-time review of your application? If there is no limit on the
> >size of the key you can use anymore, it shouldn't be any of their business
> >about the way you algorithm works or how strong it is, right?
>
> Sure. They want a record of what crypto you are actually using, so they
> can narrow their attack.
Technical review is a BS stalling tactic proposed to make it look like they are
making meaningful "progress". They should be made to buy a copy if they want
one, like everybody else.
------------------------------
From: Anthony Stephen Szopa <[EMAIL PROTECTED]>
Subject: Re: (US) Administration Updates Encryption Export Policy
Date: Sat, 18 Sep 1999 04:12:35 -0700
Reply-To: [EMAIL PROTECTED]
jerome wrote:
> what do they mean by 'technical review' ?
>
> On Fri, 17 Sep 1999 06:47:29 +0000, Helger Lipmaa wrote:
> >
> >Any encryption commodity or software of any key length can now be
> >exported under a license exception (i.e., without a license) after a
> >technical review, to commercial firms and other non-government end users
> >in any country except for the seven state supporters of terrorism.
It means:
1) This new policy does not take effect until some future date, perhaps
December 15, 1999. (Just more stalling.)
2) Technical review??? Who knows??? The whole thing is completely vague
and designed to placate / attract potential high-tech Gore supporters.
3) If this is anything to be taken seriously, then tell me: has the US
government withdrawn its appeal in the Bernstein case? I doubt it, because
the government is insincere and full of beans.
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Crossposted-To: talk.politics.crypto
Subject: Re: VICTORY??? (was: crypto export rules changing)
Date: Sat, 18 Sep 1999 16:11:05 GMT
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>[snip]
>Does this mean AES will have to be the AMERICAN encryption standard? If
>they only increase the key length in 8-bit increments, we won't see
>128-bit products OK'd for many years.
>
> Ben
Don't worry, Ben: the AES product will be weak enough that the NSA will
be able to read messages encrypted with it.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
------------------------------
From: "Tony T. Warnock" <[EMAIL PROTECTED]>
Subject: Re: some information theory
Date: Fri, 10 Sep 1999 08:05:31 -0600
Reply-To: [EMAIL PROTECTED]
LZW compression removes some of these problems; the higher-order correlations
also go away. Arithmetic coding does similarly. A second-order Huffman
compression (treat pairs of letters as the alphabet, 676 "letters" for example)
would remove pairwise correlations, but it's probably not worth the effort.
I did lots of counting on LZW-compressed files. They quickly (within 1000 or
so bytes) began to look random in the higher-order correlations. This would
not be good cryptography, but the files do look rather random.
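As a rough illustration of that observation, here is a sketch of mine (not
Tony's code). Python's standard zlib implements DEFLATE rather than LZW, so
it stands in for LZW here, but the flattening of byte statistics is the same
kind of effect:

    import zlib

    # varied but redundant English-like input
    text = b"\n".join(b"line %d of plain English filler text" % i
                      for i in range(2000))
    comp = zlib.compress(text, 9)

    def chi_square_bytes(data):
        """Chi-square statistic of byte frequencies vs. a uniform model."""
        counts = [0] * 256
        for b in data:
            counts[b] += 1
        expected = len(data) / 256.0
        return sum((c - expected) ** 2 / expected for c in counts)

    print("plain :", round(chi_square_bytes(text)))
    print("packed:", round(chi_square_bytes(comp)))
    # The plaintext statistic is enormous (most byte values never occur);
    # the compressed stream's sits far closer to the ~255 expected for
    # uniformly random bytes.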
Actually one should compress then encrypt because compression is cheap and
encryption is expensive.
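There is a second reason for that order: ciphertext from a good cipher is
statistically random and will not compress. A small sketch of mine, with
os.urandom standing in for real ciphertext:

    import os
    import zlib

    message = b"attack at dawn. attack at dawn. " * 512  # redundant plaintext

    # compress-then-encrypt: the compressor sees the redundancy
    compressed = zlib.compress(message, 9)

    # encrypt-then-compress: model the ciphertext as random bytes
    ciphertext = os.urandom(len(message))
    recompressed = zlib.compress(ciphertext, 9)

    print(len(message), len(compressed), len(recompressed))
    # Typical result: the compressed plaintext is a tiny fraction of the
    # original, while the "ciphertext" does not shrink at all.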
Tony
Anti-Spam wrote:
> Tom St Denis wrote:
> >
> > If I am not wrong, the following statement is true:
> >
> > 1) There is no method of INCREASING or DECREASING the entropy of a message M
> > (outside of adding/removing information).
> >
> > If that is true, then no matter what function you employ (be it ROT13,
> > compression, etc...) the actual secret message (before being encrypted) will
> > always have H(M) bits of 'information'.
> >
> > I am trying to figure this out to shut DS up for once. I still don't believe
> > compression before encryption increases security in REAL cryptosystems. I
> > still believe that it can slightly make things more complicated, but its
> > SOLE job is to make M shorter (by removing redundancy).
> >
> > 2) Is it not true that no matter what compression algorithm (C for this
> > example) H(M) will always equal H(C(M))? If so then the message is no more
> > complicated after compressing with 'one-to-one huffman' or deflate, or lz78,
> > or lzss, or ...
> >
> > (btw I make the assumption that the encryption and compression are both fully
> > invertible.)
> >
> > Tom
> > -- damn windows... new PGP key!!!
> > http://people.goplay.com/tomstdenis/key.pgp
> > (this time I have a backup of the secret key)
> >
>
> I've followed these thread debates on encryption vs. compression for
> months without saying anything. Now it's time to speak.
>
> This is a long post, so here's the summary; read everything below it to
> see how I back up this statement:
>
> Compression with Huffman-encoding-type schemes is identical to a
> substitution encipherment.
> The relative order of SYMBOLS as ENCODED in the uncompressed data/file
> with ASCII, UNICODE or any other binary representation is PRESERVED by
> the compression and appears intact in the compressed data/file. Each
> SYMBOL is ENCODED with a shorter bit string in the compressed file/data,
> but the SYMBOLS still appear in the same frequency and order as they did
> in the original uncompressed data/file. Compression is just the encoding
> of the SYMBOLS with the least number of bits (i.e. as close as possible
> to a Maximal Entropy Code.)
>
> Compression PRESERVES the TOPOLOGY of the SYMBOLS as ordered elements in
> a finite set. SYMBOLS in positions 1, 2, 3...i, j, k...n of an
> n-symbol-long message that are neighbors, encoded in the uncompressed
> data/file, are still neighbors, encoded in the compressed data/file.
>
> Compressing data/files prior to encryption with a cipher system does not
> alter the frequency of the SYMBOLS encoded in the compressed data/file
> relative to their original frequencies in the uncompressed file/data.
> Compressing prior to encrypting does not permute the order of the
> SYMBOLS encoded in the compressed data/file relative to their original
> order as encoded in the uncompressed file/data. Compression only alters
> the encoding of the SYMBOLS to achieve the maximal entropy possible as a
> function of the probabilities of the SYMBOLS themselves.
>
> So yes, Tom, you are on the right track. This argument can be extended
> to other compression schemes (in fact, it's a good idea for a paper.)
>
> Now down to the nitty gritty:
>
> First, Compressed data is NOT necessarily random data. Many of us assume
> the compressed form of a file is "equivalent" in some form to true
> random data. It is not. Compressed files will not pass statistical
> tests for random bit streams. A compressed file is non-random. Say the
> compressed file is n bits long. Apply the five basic tests for
> randomness to that binary sequence (see Handbook of Applied
> Cryptography, Section 5.4.4); a sketch of the first two tests follows
> the list below.
>
> 1. The Frequency Test (or mono-bit test) - do 1s and 0s occur in roughly
> equal numbers (within some margin of difference based on the chi-square
> test)? A compressed file SHOULD pass this test.
>
> 2. The Serial Test (two-bit test) - do the patterns 00, 01, 10 and 11
> occur with the same frequency? Here we see a good chance that the
> compressed file should not pass this test, especially if we use Huffman
> encoding or variable Huffman encoding.
>
> 3. The Poker Test (a generalization of the Frequency test) - looks for
> the number of occurrences of k-bit sequences (such as 010, or 0110, or
> 001, or 1000101, etc.) in the m-bit sequence. (See the Handbook for
> exact details.) Again, compressed data should not pass this test.
>
> 4. The Runs Test - looks at the number of runs of 0s (gaps) of various
> lengths (example 00, 000, 0000, etc.) and of 1s (blocks) (example 11,
> 111, 1111, 11111, etc.) that appear in the candidate random data versus
> the expected number of gaps and blocks of various sizes that should
> appear in random data (with some margin of difference based on the
> chi-square test). A compressed file should not pass this test.
>
> 5. The Autocorrelation Test - looks for correlations between the
> candidate random data and non-cyclic shifts of itself. A compressed
> file should exhibit correlations far larger than those expected for
> random bits.
>
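Here is a minimal Python sketch of the first two tests (the statistics are
simplified versions of my own, not the Handbook's exact formulas):

    import zlib

    def bits(data):
        """Flatten a byte string into a list of bits, MSB first."""
        return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

    def monobit(data):
        """Frequency test: (n0 - n1)^2 / n is small when 0s and 1s balance."""
        b = bits(data)
        n0, n1 = b.count(0), b.count(1)
        return (n0 - n1) ** 2 / len(b)

    def serial(data):
        """Serial test: chi-square over counts of the pairs 00, 01, 10, 11."""
        b = bits(data)
        counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
        for pair in zip(b, b[1:]):
            counts[pair] += 1
        expected = (len(b) - 1) / 4.0
        return sum((c - expected) ** 2 / expected for c in counts.values())

    sample = zlib.compress(b"nothing up my sleeve, " * 500)
    print(monobit(sample), serial(sample))

How far these statistics land from their random-stream expectations depends
heavily on the compressor; zlib is only a convenient stand-in here.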
> Why should a compressed file fail four out of five of these tests?
>
> Compression schemes like Huffman coding and Shannon-Fano coding do the
> following:
>
> Look at the set of symbols in the file or data before compression (the
> "alphabet" of symbols in the data). These symbols are strings of bits
> (0s and 1s) that appear many times in the data. Often we use ASCII
> or UNICODE characters of fixed bit lengths. Determine the total number
> of all symbols in the file or data. Call it k. Determine each unique
> symbol's probability of occurrence in the data by its frequency (unique
> symbol count/total number of symbols).
>
> Now, using the probability of occurrence for each defined symbol from
> above, define a new encoding for that symbol. These new encodings are
> binary strings but they are not necessarily the same length as the
> binary strings representing the symbols in the original data or file.
> No two new encodings will be the same binary string. Each new encoding
> string shall be constructed such that no additional information is
> necessary to identify where that encoding starts or ends when it is
> written before or after another encoding string for a different symbol.
> Keep a dictionary that maps the correspondence between the original
> symbol and its binary representation in the original file or data and
> its new encoding bit string in the compressed file or data.
>
> The net result is that if you order the symbols by their probabilities,
> p1 >= p2 >= p3 >= p4 ... >= pk, the lengths of the binary strings of 1s
> and 0s for the new encodings run the opposite way: the most probable
> symbols get the shortest codewords, L1 <= L2 <= L3 <= L4 ... <= Lk. Now
> these new encoding bit strings should be shorter than (or equal to) the
> bit strings encoding the original symbols. If some are shorter than the
> original symbols' encodings, we WILL get compression. The Huffman and
> Shannon-Fano algorithms use a binary tree to establish the 1/0 bit
> string representations for the symbols as a function of frequency.
> Usually these encodings use roughly equal numbers of 1s and 0s (they
> occur with about equal frequency).
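A compact sketch of that construction, assuming textbook Huffman coding (the
function and variable names are mine):

    import heapq
    from collections import Counter

    def huffman_code(message):
        """Build a prefix-free code: frequent symbols get short codewords."""
        freq = Counter(message)
        # each heap entry: (weight, tiebreak, {symbol: codeword-so-far})
        heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        while len(heap) > 1:
            w1, _, lo = heapq.heappop(heap)
            w2, i, hi = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in lo.items()}
            merged.update({s: "1" + c for s, c in hi.items()})
            heapq.heappush(heap, (w1 + w2, i, merged))
        return heap[0][2]

    print(huffman_code("THEMESS"))
    # e.g. {'M': '00', 'T': '010', 'H': '011', 'E': '10', 'S': '11'}:
    # no symbol gets a longer codeword than any less frequent symbol.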
>
> We now take the original data or file and REPLACE the encoding of the
> symbols as it appeared in the original data or file with these new
> encoding strings (which are different from the original bit strings for
> those symbols). If the new encodings are shorter in bit length than the
> original encodings for the symbols, we get smaller (fewer bits) data or
> files.
>
> There is a one-to-one correspondence between the occurrences of the
> symbols (as encoded in ASCII or UNICODE, for example) in the original
> data or file and the occurrences of these same symbols in their new
> encoding for compression. THIS IS AN IMPORTANT POINT. We have
> substituted different encodings for those symbols in the compressed data
> or file. That is all we have done. We do not alter the relative
> positions of the symbols. There is no shuffling, no rearrangement of
> symbol occurrences spatially in the data/file.
>
> At best what we have done can be called a "substitution encipherment"
> from a cryptological point of view, but nothing more than that. If we
> had 36 e's in the original uncompressed file, and e was encoded in
> ASCII, in the compressed file we now have 36 occurrences of a new
> encoding string for e (say it's 001) that is shorter than the ASCII
> encoding's bit length. The relative order of symbols as they occurred
> in the uncompressed data/file IS PRESERVED BY COMPRESSION in the
> compressed data/file. The new encodings appear many times in the
> compressed data/file, and with the same frequency as the SYMBOLS for
> which they stand as those SYMBOLS appeared in the uncompressed
> data/file.
>
> This is necessary for the decompression to work! If the ordering of
> symbols is not preserved, then decompression back to the original
> encoding in the uncompressed file/data will not return the original
> data/file. Decompression is just the recognition of the smaller
> encoding strings in the compressed data/file for replacement with the
> corresponding longer, original binary representation for that symbol.
> That original representation is in the dictionary for the compressed
> file generated during the compression (and must accompany the compressed
> file/data in an uncompressed form to ensure the decompression can occur,
> or be available at the time of decompression if sent by other means.)
>
> This is why compressed data/files will/should fail the Serial test,
> Poker test, Runs test and Autocorrelation test for true randomness. The
> only cryptanalytic challenge posed by compression is the cryptanalysis
> of a substitution cipher. The SYMBOLS that appeared in the uncompressed
> file/data are encoded with different codes in the compressed file/data.
> That's at most a substitution cipher. If we have only the compressed
> data/file, we can "attack" it as a substitution cipher.
>
> What makes this more of a challenge is the definition of those SYMBOLS.
> If the symbols are ASCII or UNICODE then I could apply language models
> in the "cryptanalysis" of the compressed data to identify probable
> compressed encodings for e, i, th, etc. If I just select every k bits
> as the occurrence of a symbol, then the set of unique k-bit values that I
> find (if much smaller than 2^k in number) serves as my "alphabet" and I
> encode them with new codes that are fewer than k bits long to get
> compression. Here there are no language models to help me analyze the
> compressed data/file to try to determine the compression encoding. But,
> if I know what k is and I know something about the nature of what was
> compressed then I can use other examples of that compressed data/file to
> try to put together reasonable estimates of a dictionary to use in
> substitution cryptanalysis, and I can try autocorrelation tests on the
> compressed data to spot the compression encoding strings (which will
> repeat in the compressed data/file and thus will autocorrelate.)
>
> For those more mathematically inclined (I think of rosi as I write
> this), compression preserves the TOPOLOGY of SYMBOLS as they appear in
> relative order (neighbor to neighbor) and frequency in the uncompressed
> data/file. That preserved topology is a doorway to analytical attack.
>
> For example, let's say this is the message in SYMBOLS:
>
> T H E M E S S.......
>
> when encoded before compression, in binary,
>
> 010100 101011 001001 110111 001001 101010 101010
>
> But, when compressed, T now is encoded as 110, H as 111, E as 00, M as
> 10, S as 01...
> (example codes; chosen to be prefix-free so that no separators are
> needed between codewords)
>
> 110 111 00 10 00 01 01
>
> We see the bit length of the message is shrinking with the new encoding,
> but note that the occurrences of the new codes follow the relative
> ordering of the occurrences of the uncompressed codes. An E, no matter
> how encoded, still follows the H, no matter how encoded. A T starts the
> message, no matter how encoded. We can define a point-set topology on
> the SYMBOLS as they occur in the original message, and we can show that
> this point-set topology is preserved by the compression.
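A tiny sketch of mine, using the example code table above, that re-derives
the packed bit string and decodes it, confirming the symbol order survives:

    CODE = {"T": "110", "H": "111", "E": "00", "M": "10", "S": "01"}
    DECODE = {v: k for k, v in CODE.items()}

    packed = "".join(CODE[s] for s in "THEMESS")
    print(packed)                       # 1101110010000101

    # prefix-free decode: scan left to right, emit a symbol at each match
    out, buf = [], ""
    for bit in packed:
        buf += bit
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
    print("".join(out))                 # THEMESS -- same symbols, same order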
>
> Just my two cents,
>
> [EMAIL PROTECTED]
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************