Cryptography-Digest Digest #296, Volume #10 Wed, 22 Sep 99 15:13:03 EDT
Contents:
Re: some information theory (very long plus 72K attchmt) (James Felling)
Re: Comments on ECC (Patrick Juola)
Re: frequency of prime numbers? ("Boris Kolar")
Re: Exclusive Or (XOR) Knapsacks ("Boris Kolar")
Re: Okay "experts," how do you do it? (Anton Stiglic)
Re: Galois field isomorphism? (Medical Electronics Lab)
Re: (US) Administration Updates Encryption Export Policy (Patrick Juola)
Re: frequency of prime numbers? (Patrick Juola)
Re: frequency of prime numbers? (Patrick Juola)
Re: Okay "experts," how do you do it? (Anton Stiglic)
Re: some information theory (very long plus 72K attchmt) (SCOTT19U.ZIP_GUY)
signature help with s/mime ([EMAIL PROTECTED])
Re: Exclusive Or (XOR) Knapsacks (David Wagner)
----------------------------------------------------------------------------
From: James Felling <[EMAIL PROTECTED]>
Subject: Re: some information theory (very long plus 72K attchmt)
Date: Wed, 22 Sep 1999 12:45:29 -0500
Rick Braddam wrote:
> Good morning, all. Sorry about the long post, especially with the attachment,
>but if I sent this one paragraph at a time it
> would take a week and might generate a lot of noise... and the noise is too high
>already.
>
> I've been following this thread and I thought I understood what David Scott was
>trying to say. I probably don't understand
> entropy correctly, though. I'm hoping that if I write down what I think I know, some
>of you will clarify it for me.
>
> If I have an 80-character message, the information is encoded in 80 8-bit bytes,
>which uses 640 bits. Only about 1.3 bits per
> byte are actually needed to encode the information, though, the rest are redundant.
>This indicates that the message could be encoded
> in as little as 10.4 bits. I have had PKZIP 2.04G report 95% compression on large
>text files, so that don't seem too far off.
>
> That seems to be the objective of compression programs, to sort of squeeze the
>information-carrying bits closer together and
> throw away the redundant bits. If entropy is a measure or indication of the amount
>of information represented, it cannot change from
> the long string to the shorter one, or you couldn't expand the short string back to
>the longer one. So the short string and the
> longer one have to have exactly the same entropy. That would be the entropy of the
>message. However, the shorter string uses fewer
> bytes, so the entropy per byte must have increased. I have seen some posters write
>about the entropy of the message, and others
> refer to the entropy per byte. Intuitively, entropy per byte makes sense to me, but
>which is correct?
Message entropy remains constant (the message hasn't changed, and its entropy is a
property of its content); the entropy per byte will increase with
compression.
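As a hypothetical illustration of this point (not part of the original post): computing the empirical Shannon entropy per byte of a redundant text before and after compression shows the per-byte figure rising while the content itself is unchanged. This sketch uses zlib as a stand-in for PKZIP.

```python
import math
import zlib
from collections import Counter

def entropy_per_byte(data: bytes) -> float:
    """Shannon entropy in bits per byte, from the empirical byte frequencies."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly redundant "message": English-like text repeated many times.
message = b"the quick brown fox jumps over the lazy dog. " * 200
compressed = zlib.compress(message, 9)

print(f"original:   {len(message)} bytes, {entropy_per_byte(message):.2f} bits/byte")
print(f"compressed: {len(compressed)} bytes, {entropy_per_byte(compressed):.2f} bits/byte")

# Compression is lossless, so the message (and its total entropy) is
# unchanged -- but the same information now occupies far fewer bytes,
# so the entropy *per byte* goes up.
assert zlib.decompress(compressed) == message
assert entropy_per_byte(compressed) > entropy_per_byte(message)
```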
>
>
> David Scott promotes using compression before encryption, and several responses
>have agreed with him. If I understand correctly,
> compression doesn't increase security, but it increases the work factor needed to
>break the encryption. Why isn't that the same as
> increasing security? "Increase the work factor" should be the same as "makes it
>harder" to break the encryption... isn't that what
> increasing security means?
Not really. Let's say I want to conduct an attack on your encryption. I know that you
use compression method X. Further, I know that your messages share some common starting
feature (enough, after compression, to fill a block or two of enciphered text), or I
can submit a limited amount of known plaintext.
I can then use the compressed feature (or the compressed known text) as the plaintext
to recognise a correct decryption -- thus the work factor for
some attacks is NOT increased. An increase in security would mean the work factor goes
up under any circumstance. The big benefit of
compression is that instead of 1 meg of raw ciphertext the adversary gets
100K of enciphered text -- much less material to
analyze and draw conclusions from.
Similarly, by creating appropriately massaged plaintexts, known-plaintext attacks may
still be carried out -- it just takes a little
precomputation on the adversary's part.
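A hypothetical sketch of that crib effect (zlib standing in for the PKZIP header the thread mentions): a deterministic compressor stamps a fixed, predictable header on every output, which hands the attacker known plaintext at a known position -- wrong trial keys can be rejected from the first two bytes alone.

```python
import zlib

# Any two unrelated messages...
c1 = zlib.compress(b"attack at dawn")
c2 = zlib.compress(b"the meeting is cancelled")

# ...compress to outputs beginning with the same fixed zlib header bytes
# (0x78 0x9c at the default level), just as a PKZIP file begins with "PK".
assert c1[:2] == c2[:2] == b"\x78\x9c"

def plausible(first_two: bytes) -> bool:
    # An attacker trying keys can discard a candidate the moment the first
    # two decrypted bytes are wrong, without touching the rest of the file.
    return first_two == b"\x78\x9c"

assert plausible(c1[:2]) and plausible(c2[:2])
assert not plausible(b"PK")  # a different compressor's magic fails this check
```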
> Maybe it's relative, in that it doesn't make it substantially harder like adding one
>bit to the key
> would.
>
> I couldn't quite figure out how you could uncompress a file which wasn't
>compressed to start with, so I just thought I'd try
> what David said to do. Here are the files I generated to run a couple of tests. It
>seems that Adaptive Huffman Compression don't
> particularly like pdf files. I couldn't find enough text files to make up 8.4Mb to
>test with DieHardC, especially since they needed
> to be that big after compression. ZipMagic with max compression compresses much
>tighter, but if an analyst knows that the first two
> letters in my plaintext are PK he/she can throw away wrong keys much quicker. Of
>course, I could do as suggested by one here and
> strip off the header and send it as plaintext, and save the analyst some trouble. I
>just want to be a troublemaker for the analyst.
>
> The attachment, outfiles.zip, contains the DieHardC results for each of the tests I
>ran. The first tests use a file of random
> numbers generated by the Merseinne Twister.
>
> Filename Size
> RNDFILE 8,520,008 The original output from the Merseinne
>Twister
> the file above passes all DieHardC tests, see rndfile.out for results
>
> TESTUNC AHC 28,703,076 Original RNDFILE uncompressed to this
> You CAN use h2unc.exe to uncompress a file that has not been compressed
>
> TESTCOMP AHC 8,520,270 Original RNDFILE compressed to this (larger)
> the file above also passes all DieHardC tests
> see testcomp.out for results
>
> TESTUNC ZIP 10,604,680 Uncompressed file zipped to this (max comp)
> ZipMagic (by Mijenix) used for all zip files... equiv to pkzip 2.04G
> See testuncz.out for results
>
> TESTUNC CMP 8,520,008 Uncompressed file compressed back to this
> *note* above file is identical to original (see rndfile & rndfile.out)
> if you uncompress a not-compressed file then compress it you DO get the
> original file back.
>
> TEST PDF 13,231,497 test file built from three pdf files
>
> TESTCOM AHC 11,745,172 test.pdf file compressed with Adaptive Huffman
> h2com.exe compressor
> this file passes only 37 of 235 tests, see testcom.out.
>
> TEST ZIP 7,057,309 test.pdf file zipped (max compression)
> This file too short for DieHardC, exausted input file at OPERM5 test.
> Failed all tests it ran. See testzip.out.
>
> The last two tests don't show anything, unless it's that zip compresses pdf
>files better. I doubt that passing 37 out of 235
> tests is significant. Passing all tests doesn't prove anything, either. It only says
>that the file *looks* random, not that it is.
>
> > [EMAIL PROTECTED]
>
------------------------------
From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Comments on ECC
Date: 22 Sep 1999 12:06:59 -0400
In article <[EMAIL PROTECTED]>,
Jerry Coffin <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>[EMAIL PROTECTED] says...
>> All crypto algorithms I know are in NP, if only for the reason that using the
>> private key must take a reasonable amount of time. So a trivial NP solution to
>> solving the ECDLP or DLP is guess the correct private key.
>
>I'm assuming you meant to use "P" where you said "NP" above
>(otherwise, I can't decipher your statement into anything meaningful).
No, his statement is both meaningful and correct. A definition of
NP is "polynomial-time verifiable": even given a correct answer, you
can verify the answer's correctness in reasonable (polynomial)
time. (This quickly equates to the more common "lucky guess" definition;
the key insight is that if even knowing the answer doesn't let you
check it quickly, the problem is not in NP.)
So cryptography, in general, is in NP -- if you have the cyphertext
*and the key*, you should be able to decrypt quickly. A correct
non-deterministic guess of the key will therefore yield a solution.
This is provable.
Whether any given cryptographic method is in P
is still an open question for the harder systems.
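A toy sketch of the guess-and-verify point (a hypothetical illustration, not from the post): with a deliberately weak one-byte XOR "cipher", checking any single guessed key is cheap -- decrypt and compare against a known crib -- even though the search over keys is exhaustive. Verification is easy; finding is the hard part.

```python
def encrypt(plaintext: bytes, key: int) -> bytes:
    # A toy cipher: XOR every byte with a one-byte key. Illustration only.
    return bytes(b ^ key for b in plaintext)

secret_key = 0xA7
ciphertext = encrypt(b"ATTACK AT DAWN", secret_key)

def verify(guess: int, crib: bytes = b"ATTACK") -> bool:
    # The "certificate check": one decryption plus a comparison -- cheap,
    # and its cost does not depend on the size of the key space.
    return encrypt(ciphertext, guess)[: len(crib)] == crib

# Finding the key is a search over the whole key space...
found = [k for k in range(256) if verify(k)]
# ...but each individual check above ran in polynomial time.
assert found == [0xA7]
assert encrypt(ciphertext, found[0]) == b"ATTACK AT DAWN"
```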
-kitten
------------------------------
From: "Boris Kolar" <[EMAIL PROTECTED]>
Subject: Re: frequency of prime numbers?
Date: Wed, 22 Sep 1999 17:25:30 +0200
Trevor Jackson, III <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Boris Kolar wrote:
>
> > Bob was right. There are true statements that can't be proved. One of
such
> > statements is "This axiomatic system is consistent" (for some axiomatic
> > systems). Obviously it can be either true or false. But if the axiomatic
> > system is rich enough, the statement can't be proved.
>
> In what sense is the statement true then?
It may be true (we just can't prove it). If you say it's neither true nor
false, you are basically saying it's a paradox (this usually happens when the
sentence directly or indirectly states something about itself - like "This
sentence is a lie"). But the question "Is this axiomatic system
consistent?" or "Does this algorithm ever halt?" does have an answer. It's either
"yes" or "no"; we just can't always determine which. For example, you could (in
principle) obtain an answer to "does this algorithm ever halt", but you
might have to wait infinitely long to get it.
>
> >
> >
> > Donald Welsh <[EMAIL PROTECTED]> wrote in message
> > news:[EMAIL PROTECTED]...
> > > On Fri, 06 Aug 1999 17:27:45 GMT, Bob Silverman <[EMAIL PROTECTED]> wrote:
> > >
> > > >No. What Goedel showed was that any sufficiently rich axiomatic
> > > >system is incomplete in the sense that there are true statements
> > > >which can not be proved. [as well as other stuff I won't discuss].
> > > >Peano arithmetic is "sufficiently rich", BTW.
> > >
> > > I'd like to correct this misconception, if I may. Godel's theorem
> > > does not say that "there are true statements that cannot be proved".
> > > It says that there are unprovable statements. These statements are
> > > neither true nor false.
> > >
>
>
>
------------------------------
From: "Boris Kolar" <[EMAIL PROTECTED]>
Subject: Re: Exclusive Or (XOR) Knapsacks
Date: Wed, 22 Sep 1999 17:15:32 +0200
David Wagner <[EMAIL PROTECTED]> wrote in message
news:7s8d4t$c1n$[EMAIL PROTECTED]...
> In article <0IOF3.152$[EMAIL PROTECTED]>,
> Boris Kolar <[EMAIL PROTECTED]> wrote:
> > Let me make it very simple: ANY cryptographic function that uses ONLY
XOR
> > operation is INSECURE. XOR knapsack can not be used for one-way
functions,
> > hash function or anywhere, where cryptographic strength is required.
>
> Nonsense. Making it very simple usually makes it very wrong, and this
> case is no exception.
>
> There are known constructions that seem to have cryptographic-quality
> strength, and use only XORs. One example is cryptosystems based on the
> difficulty of decoding random linear codes over GF(2) (it is conjectured
> that no polynomial-time algorithms exist for this problem, although the
> parameters must be chosen with care to avoid the easy instances).
> Another example is Maurer's cryptosystem.
>
> A third example is the one-time pad, which uses only XORs and yet remains
> provably secure.
>
> Be careful with those sweeping pronouncements...
Encoding and decoding linear codes involves matrix multiplications, which
can't be performed with only XOR operations. Almost all cryptosystems based
on error-correcting codes have been broken or shown to be very impractical
(like McEliece). I don't know Maurer's cryptosystem, so I can't comment on
that.
The one-time pad is proven to be unconditionally secure, but it's very
impractical: the same key can't be used twice, and it's not resistant to
known-plaintext attack. The one-time pad may be interesting theoretically, but
no one takes it seriously for real usage.
What I really wanted to point out is that most algorithms using
only XOR operations fall into the category of linear algebra. Linear algebra
is not well suited for cryptographic applications; some nonlinearity is
required to make a cryptosystem strong. A very good example of this is the very
popular random sequence generator, the linear feedback shift register (LFSR),
which has very good statistical properties but is cryptographically
insecure.
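For instance, a minimal LFSR sketch (a hypothetical illustration, not from the post): the maximal-length 4-bit register below has the full period of 2^4 - 1 = 15, which is what gives LFSRs their good statistics, yet the state update is perfectly linear over GF(2) -- the property that algorithms like Berlekamp-Massey exploit to recover the register from a short output prefix.

```python
def lfsr_step(state: int, taps: int = 0b1001, width: int = 4) -> int:
    """One step of a Fibonacci LFSR: the feedback bit is the XOR (parity)
    of the tapped state bits; the register then shifts right."""
    feedback = bin(state & taps).count("1") & 1
    return (state >> 1) | (feedback << (width - 1))

# Good statistics: from any nonzero seed the register walks through all
# 2^4 - 1 nonzero states before repeating (taps x^4 + x + 1 are primitive).
state, period = 1, 0
while True:
    state = lfsr_step(state)
    period += 1
    if state == 1:
        break
assert period == 15

# Cryptographic weakness: the update is linear over GF(2) --
# step(a XOR b) == step(a) XOR step(b) for all states a, b.
for a in range(16):
    for b in range(16):
        assert lfsr_step(a ^ b) == lfsr_step(a) ^ lfsr_step(b)
```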
------------------------------
From: Anton Stiglic <[EMAIL PROTECTED]>
Subject: Re: Okay "experts," how do you do it?
Date: Wed, 22 Sep 1999 12:26:42 -0400
"Douglas A. Gwyn" wrote:
> Anton Stiglic wrote:
> > ... There is no way to say that we might be
> > able to proove that FACTORING is difficult or not?
>
> In one sense, factoring is easy. You can try all possible
> factors in sequence, or you can guess at random, until the
> factors are found.
I guess we have a *totally* different definition of easy....
When I talk about easy, I usually mean that there is a known
algorithm that runs in time polynomial in the size of the input (and
uses no more than polynomial memory in the size of the input...); this
is the definition that most textbooks in complexity theory
adopt.
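A quick hypothetical sketch of the distinction: trial division "works", but its running time is exponential in the *size* (bit length) of the input, which is exactly why it doesn't count as easy under the textbook definition.

```python
def trial_division(n: int) -> int:
    """Return the smallest prime factor of n >= 2 by trial division.
    Up to ~sqrt(n) candidate divisors may be tried -- about 2^(k/2)
    steps for a k-bit n, i.e. exponential in the input *size*."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

assert trial_division(91) == 7    # 91 = 7 * 13
assert trial_division(97) == 97   # prime
# A small semiprime is still instant in wall-clock terms...
assert trial_division(2 * 1000003) == 2
# ...but each extra bit of n roughly multiplies the worst-case work by
# sqrt(2), so cryptographic-size moduli are far out of this algorithm's reach.
```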
------------------------------
From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Re: Galois field isomorphism?
Date: Wed, 22 Sep 1999 12:50:18 -0500
Bob Silverman wrote:
> It's simple linear algebra.
> See IEEE P1363 for examples, or consult any textbook on
> abstract algebra.
Check out a paper by Kaliski and Yin on the P1363 web page,
it gives a complete description and references on how to
do the transforms.
Patience, persistence, truth,
Dr. mike
------------------------------
From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: (US) Administration Updates Encryption Export Policy
Date: 22 Sep 1999 12:01:12 -0400
In article <7s8h7v$9i1$[EMAIL PROTECTED]>,
Bill Unruh <[EMAIL PROTECTED]> wrote:
>In <[EMAIL PROTECTED]> [EMAIL PROTECTED] (Jerry Coffin)
>writes:
>
>]Third, the primary reason for finding the regulations unconstitutional
>]was the lack of ability to appeal a decision. The same basic
>]regulations on export itself could apparently be made constitutional
>]simply by providing a better process for appealing a decision.
>
>]Finally, note that the Bernstein case revolved around the use of
>]source code as a method of communication between people, so there was
>]a restriction on the right to free speech. If, for example, you write
>]source code like Dave Scott's, which nobody can read, that argument
>]obviously goes out the window. To export source code in accordance
>]with this decision, you could be called upon to prove that the
>]_primary_ reason for using source code was simply to provide an
>]unambiguous method of communicating with another person. If the
>]primary reason for the source code is to provide for a computer to
>]take certain actions, it's unlikely that this decision would apply.
>
>No. Your speech does not become less protected just because people find
>difficulty in understanding you.
Legally speaking, it's not a question of ease of understanding, it's a
question of communicative intent. That's why, for example, there
are certain restrictions on "commercial speech" (such as advertising)
that would not be allowed on "political speech." This particular
distinction is well accepted in (U.S.) case law, and you're unlikely
to get it overturned no matter how many lawyers you pay.
Dr. Bernstein is in a rather unusual position in that he is researching
new cryptographic techniques which he then wants to communicate *to other
researchers* (publish or perish, what fun!) So his intention is
human-to-human communication, not human-to-computer.
In general, publishing source code for a software package can easily
be argued not to be "communicating" with other humans; I don't know
*how* many programs I've compiled without ever looking at the source.
The law is capable of recognizing the distinction and acting upon it,
as much as we might wish to the contrary.
-kitten
------------------------------
From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: frequency of prime numbers?
Date: 22 Sep 1999 12:10:23 -0400
In article <[EMAIL PROTECTED]>,
Trevor Jackson, III <[EMAIL PROTECTED]> wrote:
>Boris Kolar wrote:
>
>> Bob was right. There are true statements that can't be proved. One of such
>> statements is "This axiomatic system is consistent" (for some axiomatic
>> systems). Obviously it can be either true or false. But if the axiomatic
>> system is rich enough, the statement can't be proved.
>
>In what sense is the statement true then?
Well, the system either *is* consistent, or it isn't. It's just
not possible to prove the consistency within the system. It may
be possible to prove (or disprove) consistency using a larger
system.
First-order logic has, for instance, been *proved* consistent (one of
Godel's first significant theorems). But he didn't restrict himself to
FOL in performing the proof.
-kitten
------------------------------
From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: frequency of prime numbers?
Date: 22 Sep 1999 12:11:42 -0400
In article <[EMAIL PROTECTED]>,
Trevor Jackson, III <[EMAIL PROTECTED]> wrote:
>Tim Tyler wrote:
>
>> Donald Welsh <[EMAIL PROTECTED]> wrote:
>> : On Fri, 06 Aug 1999 17:27:45 GMT, Bob Silverman <[EMAIL PROTECTED]> wrote:
>>
>> :>No. What Goedel showed was that any sufficiently rich axiomatic
>> :>system is incomplete in the sense that there are true statements
>> :>which can not be proved. [...]
>>
>> : I'd like to correct this misconception, if I may. Godel's theorem
>> : does not say that "there are true statements that cannot be proved".
>> : It says that there are unprovable statements. These statements are
>> : neither true nor false.
>>
>> Such statements are neither true nor false *from within the axiom system
>> in question*.
>>
>> Calling it a misconception may be a bit strong: Godel demonstrated
>> sentences which read like "axiom set X can never prove this statement to
>> be true", which an external observer not bound by axiom set X genuinely
>> *could* verify as being true statements.
>
>But an external observer not bound by axiom set X can also show the statement
>as false.
Not necessarily, if he's using a consistent (enlarged) axiom set.
And, of course, if he's using an inconsistent set, he can show
anything.
-kitten
------------------------------
From: Anton Stiglic <[EMAIL PROTECTED]>
Subject: Re: Okay "experts," how do you do it?
Date: Wed, 22 Sep 1999 14:22:05 -0400
>
>
> he's paranoid but not totally wrong. just an example.
> an unknown designs a new cypher A and coppersmith, rivest and shamir
> design a another new cypher B. Will you read both papers with the
> [...]
How do you believe that Coppersmith, Rivest and Shamir started out?
O.k., some started out as co-writers, but others by themselves! Any
good paper should have the same style, follow common terminology,
have the same tone. Papers to Eurocrypt and Crypto are submitted
anonymously; take on the scientific style of writing, don't make
any mistakes, and they will read it!
Anton
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: some information theory (very long plus 72K attchmt)
Date: Wed, 22 Sep 1999 19:18:07 GMT
In article <7s9prd$c82$[EMAIL PROTECTED]>, "Rick Braddam" <[EMAIL PROTECTED]> wrote:
> Good morning, all. Sorry about the long post, especially with the
> attachment, but if I sent this one paragraph at a time it
>would take a week and might generate a lot of noise... and the noise is too
> high already.
>
> I've been following this thread and I thought I understood what David Scott
> was trying to say. I probably don't understand
>entropy correctly, though. I'm hoping that if I write down what I think I know,
> some of you will clarify it for me.
>
> If I have an 80-character message, the information is encoded in 80 8-bit
> bytes, which uses 640 bits. Only about 1.3 bits per
>byte are actually needed to encode the information, though, the rest are
> redundant. This indicates that the message could be encoded
>in as little as 10.4 bits. I have had PKZIP 2.04G report 95% compression on
> large text files, so that don't seem too far off.
>
> That seems to be the objective of compression programs, to sort of squeeze
> the information-carrying bits closer together and
>throw away the redundant bits. If entropy is a measure or indication of the
> amount of information represented, it cannot change from
>the long string to the shorter one, or you couldn't expand the short string
> back to the longer one. So the short string and the
>longer one have to have exactly the same entropy. That would be the entropy of
> the message. However, the shorter string uses fewer
>bytes, so the entropy per byte must have increased. I have seen some posters
> write about the entropy of the message, and others
>refer to the entropy per byte. Intuitively, entropy per byte makes sense to me,
> but which is correct?
>
It really depends on how you look at it. Projective geometry is no
different from Euclidean geometry: both can be used as a thought model,
but people schooled in either may argue forever. Kind of like asking whether
light is a particle, a wave, or something else -- you can model it either way.
When you compress a string with PKZIP, for example, you obviously
lose no information. So you can say that the total entropy is the same
as when you started, and if the resultant file is shorter, the entropy
per bit is larger. But you can also look at entropy as a function of the
probability of a given set of files. Suppose you compress with PKZIP and then
encrypt, and consider the class of all files found by decrypting with all the
possible keys. If every one of these files could have been produced by some
message and its compression, the total entropy would be the same before
compression and after. However, many of those files may be illegal, in that no
file compressed by PKZIP could have produced them, which greatly culls the
number of possible candidates. You may even cull it so badly that you have only
one candidate file left. If this happens, your system is in trouble, since the
attacker never even had to know anything about the file you encrypted. This is
the ultimate info an attacker like the NSA would want to see.
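A hypothetical miniature of that culling effect (zlib standing in for PKZIP, a one-byte XOR standing in for the cipher): almost every wrong key yields a "decompressed file" that no compressor could actually have produced, so the attacker discards it without knowing anything about the message.

```python
import zlib

def xor(data: bytes, key: int) -> bytes:
    # Toy stand-in for the cipher: XOR every byte with a one-byte key.
    return bytes(b ^ key for b in data)

message = b"meet me at the usual place at noon"
key = 0x5B
ciphertext = xor(zlib.compress(message), key)

# The attacker tries every key and keeps only the "legal" files --
# candidate decryptions that a real compressor could have produced.
candidates = {}
for k in range(256):
    try:
        candidates[k] = zlib.decompress(xor(ciphertext, k))
    except zlib.error:
        pass  # no zlib stream decompresses like this: cull it

assert candidates[key] == message
# The zlib framing alone rejects the overwhelming majority of keys.
assert len(candidates) <= 16
```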
> David Scott promotes using compression before encryption, and several
> responses have agreed with him. If I understand correctly,
>compression doesn't increase security, but it increases the work factor needed
> to break the encryption. Why isn't that the same as
>increasing security? "Increase the work factor" should be the same as "makes it
> harder" to break the encryption... isn't that what
>increasing security means? Maybe it's relative, in that it doesn't make it
> substantially harder like adding one bit to the key
>would.
>
> I couldn't quite figure out how you could uncompress a file which wasn't
> compressed to start with, so I just thought I'd try
>what David said to do. Here are the files I generated to run a couple of tests.
> It seems that Adaptive Huffman Compression don't
>particularly like pdf files. I couldn't find enough text files to make up 8.4Mb
> to test with DieHardC, especially since they needed
>to be that big after compression. ZipMagic with max compression compresses much
> tighter, but if an analyst knows that the first two
>letters in my plaintext are PK he/she can throw away wrong keys much quicker.
> Of course, I could do as suggested by one here and
>strip off the header and send it as plaintext, and save the analyst some
> trouble. I just want to be a troublemaker for the analyst.
>
>The attachment, outfiles.zip, contains the DieHardC results for each of the
> tests I ran. The first tests use a file of random
>numbers generated by the Merseinne Twister.
>
>Filename Size
>RNDFILE 8,520,008 The original output from the Merseinne
> Twister
> the file above passes all DieHardC tests, see rndfile.out for
> results
>
>TESTUNC AHC 28,703,076 Original RNDFILE uncompressed to this
> You CAN use h2unc.exe to uncompress a file that has not been
> compressed
>
>TESTCOMP AHC 8,520,270 Original RNDFILE compressed to this (larger)
> the file above also passes all DieHardC tests
> see testcomp.out for results
I can't run these tests on my machine (I don't have enough space), but I
assume you used h2com.exe to compress the file above.
>
>TESTUNC ZIP 10,604,680 Uncompressed file zipped to this (max comp)
> ZipMagic (by Mijenix) used for all zip files... equiv to pkzip
> 2.04G
> See testuncz.out for results
>
>TESTUNC CMP 8,520,008 Uncompressed file compressed back to this
> *note* above file is identical to original (see rndfile &
> rndfile.out)
> if you uncompress a not-compressed file then compress it you DO get
> the
> original file back.
>
>TEST PDF 13,231,497 test file built from three pdf files
>
>TESTCOM AHC 11,745,172 test.pdf file compressed with Adaptive Huffman
> h2com.exe compressor
> this file passes only 37 of 235 tests, see testcom.out.
This, I take it, was with one Huffman pass. If you look at my code, I
recommend reversing the file, running h3com.exe, and reversing it again.
I'm not sure what it would do, but I hope it would pass more tests.
Also, after a million bytes the table most likely changes little. Look
at the version that "focuses" the table -- meaning the one where you can
use a function to assign the 1's and 0's as a function
of past statistics. You could tune the compressor to pass certain
DIEHARD tests. But this is not as straightforward as it seems.
>
>TEST ZIP 7,057,309 test.pdf file zipped (max
> compression)
> This file too short for DieHardC, exausted input file at OPERM5
> test.
> Failed all tests it ran. See testzip.out.
>
> The last two tests don't show anything, unless it's that zip compresses pdf
> files better. I doubt that passing 37 out of 235
>tests is significant. Passing all tests doesn't prove anything, either. It only
> says that the file *looks* random, not that it is.
>
>> [EMAIL PROTECTED]
I could not recover your attached file.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
------------------------------
From: [EMAIL PROTECTED]
Subject: signature help with s/mime
Date: Wed, 22 Sep 1999 15:47:02 GMT
Given an S/MIME clearsigned message, I know this: I need to:
1) Run the message through a MIME parser
2) base64decode the result
3) extract the signature and the message from the result
Now - let's say I've done that already. How do I verify the signature
(PKCS#7)?
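Steps 1-3 above (though not the PKCS#7 verification itself) can be done with nothing but Python's standard `email` and `base64` modules. This is a hypothetical sketch: the message below is synthetic and the signature bytes are a fake placeholder, purely to exercise the parsing and decoding.

```python
import base64
from email import message_from_string

# A synthetic multipart/signed message; the "signature" is a fake
# placeholder blob, just to exercise the parsing and decoding steps.
FAKE_SIG = b"not-a-real-PKCS7-DER-blob"
raw = (
    'Content-Type: multipart/signed; protocol="application/pkcs7-signature";\n'
    ' micalg=sha1; boundary="bnd"\n'
    "\n"
    "--bnd\n"
    "Content-Type: text/plain\n"
    "\n"
    "Hello, world.\n"
    "--bnd\n"
    "Content-Type: application/pkcs7-signature; name=smime.p7s\n"
    "Content-Transfer-Encoding: base64\n"
    "\n"
    f"{base64.b64encode(FAKE_SIG).decode()}\n"
    "--bnd--\n"
)

msg = message_from_string(raw)                 # step 1: MIME parse
assert msg.is_multipart()
body_part, sig_part = msg.get_payload()        # step 3: split content/signature
signature = sig_part.get_payload(decode=True)  # step 2: base64-decode
assert signature == FAKE_SIG
assert sig_part.get_content_type() == "application/pkcs7-signature"
# Verifying `signature` over the canonicalized body still requires a
# PKCS#7-capable crypto library (e.g. OpenSSL); that part is not shown here.
```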
I've looked into many api's, and many seem suited for this purpose, but
they are relatively complex (though most claim to be high-level), and
require a certain amount of time just to learn the api's. Many also
require a full PKI installed (e.g. Entrust) and working, but this is a
great deal more than I need.
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Exclusive Or (XOR) Knapsacks
Date: 22 Sep 1999 11:21:35 -0700
In article <097G3.212$[EMAIL PROTECTED]>,
Boris Kolar <[EMAIL PROTECTED]> wrote:
> Encoding and decoding linear codes involves matrix multiplications, which
> can't be performed with only XOR operations.
I disagree.
Most of the proposals I know of use GF(2), and over GF(2), everything's
an XOR. When the matrix (e.g., the public key) is fixed, the bits you
XOR together are fixed.
So in fact, encoding with linear codes often uses only XOR operations.
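A hypothetical sketch of that observation: with the generator matrix's rows packed into integers, encoding a message over GF(2) is nothing but XORing together the rows selected by the message bits -- no multiplication appears anywhere.

```python
# Rows of a fixed binary generator matrix G, each packed into an int.
# (These particular rows are arbitrary illustration values, not a real code.)
G_ROWS = [0b1101000, 0b0110100, 0b1010010, 0b0110001]

def encode(message: int) -> int:
    """Compute message * G over GF(2). Multiplying by a bit is selection,
    and addition in GF(2) is XOR -- so the whole product is XORs of rows."""
    codeword = 0
    for i, row in enumerate(G_ROWS):
        if (message >> i) & 1:   # message bit i "multiplies" row i
            codeword ^= row      # GF(2) addition
    return codeword

# Bits 0 and 2 set: the codeword is row 0 XOR row 2.
assert encode(0b0101) == G_ROWS[0] ^ G_ROWS[2]
# Linearity over GF(2): encode(a XOR b) == encode(a) XOR encode(b).
for a in range(16):
    for b in range(16):
        assert encode(a ^ b) == encode(a) ^ encode(b)
```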
> Almost all cryptosystems based
> on error-correcting codes have been broken or shown to be very impractical
> (like McEllice).
That's not my understanding. Cite?
My understanding is that these systems have typically only been broken
when the parameters are too small. Systems with large enough parameters
are apparently believed secure.
Consider, e.g., the following paper, where the best attacks cited on
McEliece's cryptosystem have complexity approximately 2^64.
A. Canteaut, F. Chabaud, ``Improvements of the attacks on cryptosystems
based on error-correcting codes,'' Tech. report DMI LIENS-95-21,
Ecole Normale Superieure, Jul 1995.
ftp://ftp.ens.fr/pub/reports/liens/liens-95-21.A4.dvi
With larger parameters, the security factor can apparently be improved.
> What I really wanted to point out is the fact that most algorithms using
> only XOR operations fall in the category of linear algebra. Linear algebra
> is not well suited for cryptographic application. Some nonlinearity is
> required to make cryptosystem strong. A very good example for this is a very
> popular random sequence generator "linear feedback shift register - SFSR)
> which has very good statistical properties, but is cryptographically
> insecure.
I agree with your general remarks. I objected merely because you made
a very sweeping statement, which seemed too broad.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************