Cryptography-Digest Digest #474, Volume #14 Wed, 30 May 01 00:13:01 EDT
Contents:
Cookie encryption (Chenghuai Lu)
Re: Cookie encryption ("Jeffrey Walton")
Re: Unicity distance and compression for AES ("Matt Timmermans")
Re: Crypto neophyte - programming question ("Joseph Ashwood")
Re: Cool Cryptography Website! (John Savard)
Re: The HDCP Semi Public-Key Algorithm (amendment) (John Savard)
Re: Unicity distance and compression for AES (SCOTT19U.ZIP_GUY)
Re: "computationally impossible" and cryptographic hashs ("Dj Le Dave")
Re: "computationally impossible" and cryptographic hashs (Tom St Denis)
Re: "computationally impossible" and cryptographic hashs ("Scott Fluhrer")
Re: Unicity distance and compression for AES ([EMAIL PROTECTED])
Re: Unicity distance and compression for AES ([EMAIL PROTECTED])
Re: Unicity distance and compression for AES (SCOTT19U.ZIP_GUY)
Re: Unicity distance and compression for AES ([EMAIL PROTECTED])
----------------------------------------------------------------------------
From: Chenghuai Lu <[EMAIL PROTECTED]>
Subject: Cookie encryption
Date: Tue, 29 May 2001 20:25:53 -0400
I see that some websites use encrypted cookies, claiming that this
protects our privacy. What is their point? Why is cookie encryption needed?
Thank you very much for your reply.
Lu
--
-Chenghuai Lu ([EMAIL PROTECTED])
------------------------------
Reply-To: "Jeffrey Walton" <[EMAIL PROTECTED]>
From: "Jeffrey Walton" <[EMAIL PROTECTED]>
Subject: Re: Cookie encryption
Date: Tue, 29 May 2001 20:43:41 -0400
http://news.cnet.com/news/0-1007-202-2870712.html
"Chenghuai Lu" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
:
: I see that some websites use encrypted cookies, claiming that this
: protects our privacy. What is their point? Why is cookie encryption needed?
:
: Thank you very much for your reply.
:
: Lu
:
:
: --
:
: -Chenghuai Lu ([EMAIL PROTECTED])
------------------------------
From: "Matt Timmermans" <[EMAIL PROTECTED]>
Subject: Re: Unicity distance and compression for AES
Date: Tue, 29 May 2001 20:32:40 -0400
I'm actually not a fan of the compression-before-encryption thing, but you
can at least win here. Random strings will expand significantly when
decompressed, producing plaintexts with high redundancy.
This is actually an interesting metric for the model you use. If your model
is well optimized for the type of data you actually use it for, then random
strings should expand at about the same ratio that real compressed files do.
If random strings expand more, then your model is too specific, or it is
optimized for a different dataset. If random strings expand less, then your
model is too general.
"Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> "SCOTT19U.ZIP_GUY" wrote:
> > when breaking an encryption you need to know the language used.
>
> Actually, that's not always necessary. Often, just the fact
> that the natural language has high redundancy is enough.
> All one really has to do is to distinguish correct guesses
> from incorrect ones, not fit a detailed source model.
>
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Crypto neophyte - programming question
Date: Tue, 29 May 2001 17:11:56 -0700
The typical way is to build a display-friendly mapping. Since you've got
more than 64 values and fewer than 128, some form of ASCII armoring is what
you want to use. Since you're not worried about compatibility, or even
security, at this point, a simple method is to begin with 0-9, a-z, A-Z, ", \;
that gives you 64 characters to map into. To perform the decryption, unmap
the series, then decrypt as if the munging never happened.
There's actually a standard mapping along these lines, with slightly
different decisions (unless I hit it by luck), and there are more advanced
M-to-N mappings available where space is at a premium.
I should warn you, though, that you shouldn't use this for anything more than
a thought experiment. XOR with a repeated key is one of the most basic
methods, and it also has one of the most basic attacks. While this is good as
an introduction to a very small number of the problems, there are much more
important things you need to deal with. Since you seem to be focusing on
implementation of the algorithms, I'd suggest that you move away from
scripting languages; you'll find them very hampering once you get beyond the
most basic levels. C is a good general-purpose language for cryptography. Of
course, if you want to do cryptography at some of the higher levels, you'll
find that a (large number of) pencils and paper will be of more use than a
programming language for most of what needs to be done, since you won't be
able to actually perform most of the substantial attacks.
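A minimal sketch of the idea in Python, purely for illustration: the
64-character alphabet (0-9, a-z, A-Z, ", \) and the 8-bits-to-6-bits
regrouping are one arbitrary choice along the lines described above, not
the standard Base64 table, and the XOR "cipher" is exactly the insecure
thought experiment being warned about.

```python
# Illustrative XOR-with-passphrase plus printable armoring.  The
# 64-character alphabet and bit grouping are arbitrary choices, NOT the
# standard Base64 mapping, and repeated-key XOR is trivially breakable.
import itertools

ALPHABET = ("0123456789"
            "abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            '"\\')
assert len(ALPHABET) == 64

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating passphrase (same call decrypts)."""
    return bytes(d ^ k for d, k in zip(data, itertools.cycle(key)))

def armor(data: bytes) -> str:
    """Map arbitrary bytes into the 64-character printable alphabet,
    six bits per output character (zero-padded at the end)."""
    bits = "".join(f"{b:08b}" for b in data)
    bits += "0" * (-len(bits) % 6)
    return "".join(ALPHABET[int(bits[i:i + 6], 2)]
                   for i in range(0, len(bits), 6))

def unarmor(text: str) -> bytes:
    """Invert armor(): rebuild the bit string and drop the pad bits."""
    bits = "".join(f"{ALPHABET.index(c):06b}" for c in text)
    n = len(bits) // 8          # pad is < 8 bits, so this recovers the length
    return bytes(int(bits[8 * i:8 * i + 8], 2) for i in range(n))

key = b"passphrase"
wire = armor(xor_bytes(b"attack at dawn", key))   # printable text
plain = xor_bytes(unarmor(wire), key)              # round-trips back
```

Every character of `wire` is display-friendly, and unmapping then
re-XORing recovers the plaintext exactly as described above.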
Joe
"edt"
<[EMAIL PROTECTED]>
wrote in message
news:[EMAIL PROTECTED]...
>
> I'm just getting into crypto (as of yesterday), and I'm coding a very
> simple script to XOR a textfile with a passphrase.
>
> After doing all the XORs, I get ASCII values between 1 and 127. I want
> to convert these to display-friendly ASCII (i.e. values between 32 and
> 126).
>
> How can I munge the values to get them printable, but in a way that can
> be decrypted later?
>
> This may be a dumb question for this group, but some of you must have
> done this before. Thanks...
>
> -eric
>
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Cool Cryptography Website!
Date: Wed, 30 May 2001 01:13:01 GMT
On Wed, 30 May 2001 00:48:29 +0200, Mok-Kong Shen
<[EMAIL PROTECTED]> wrote, in part:
>Be happy that someone values your articles so much that
>he publishes them verbatim as if these were his own.
>After all, it assists your purpose of disseminating
>your views/ideas. (You don't expect revenues from your
>internet publishing, do you?)
Well, that's why I'm not _too_ upset. But this goes beyond any
question of revenues, since it could conceivably put my authorship in
question.
John Savard
http://home.ecn.ab.ca/~jsavard/frhome.htm
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: The HDCP Semi Public-Key Algorithm (amendment)
Date: Wed, 30 May 2001 01:14:39 GMT
On Mon, 28 May 2001 17:56:54 GMT, [EMAIL PROTECTED]
(John Savard) wrote, in part:
>I came up with an even _better_ and *simpler* way to increase the
>resistance of the design to correlation attacks. (Although, come to
>think of it, all that's needed to negate the benefits of my second
>idea would be to apply a deconvolution to the output bit sequence...so
>I really need to do one more thing...)
>Anyhow, a diagram and explanation is at:
>http://home.ecn.ab.ca/~jsavard/crypto/co4y12.htm
After I made that post, I came up with one fix...and now I've come up
with a better one, a simple technique that can be added on to other
LFSR-based stream ciphers as well, and *really* put the kibosh on
correlation attacks.
John Savard
http://home.ecn.ab.ca/~jsavard/frhome.htm
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unicity distance and compression for AES
Date: 30 May 2001 01:22:11 GMT
[EMAIL PROTECTED] (Matt Timmermans) wrote in
<BbXQ6.8935$[EMAIL PROTECTED]>:
>I'm actually not a fan of the compression-before-encryption thing, but
>you can at least win here. Random strings will expand significantly
>when decompressed, producing plaintexts with high redundancy.
>
>This is actually an interesting metric for the model you use. If your
>model is well optimized for the type of data you actually use it for,
>then random strings should expand at about the same ratio that real
>compressed files do. If random strings expand more, then your model is
>too specific, or it is optimized for a different dataset. If random
>strings expand less, then your model is too general.
This fact was noticed by a German who used to write me frequently,
though I have not heard from him this year. He got noise sources and
compared several files and uncompressed random files with H2UNC.EXE.
He noticed the same thing: a "real random" file expanded quite a bit
more than the data that he compressed with H2COM.EXE. So his theory
was, and I believe he was correct, that an attacker could determine
whether information was in the file and the correct key was used by
looking at the expansion. Actually, with many random files, some
expanded in the range of data files, but most expanded much more.
About that time, the "fix" I thought of was to compress the file,
then reverse the file. Then uncompress the reversed file; it now
expands like a random file. Reverse the file again, and recompress
the file. Then encrypt.
This forces the attacker to do several passes through the file after
decryption, since the expansion during the first decompression would
be roughly the same for all files. I don't think we discussed much
after that, so I am not sure where the topic stands. But thanks for
reminding me of the problem.
To apply this to BICOM, I think I would use it for the first pass;
then reverse the file and expand with H2UNC.COM, which can only expand
a file by a factor of 8. Then reverse the file again and run BICOM
with a second key. Of course, if you want one key, that's fine.
This kind of procedure tends to make the encryption more of an
all-or-nothing transform. And it fixes one other problem with file
encryption: the unicity distance is usually too optimistic, since it
takes several bytes for compression to kick in, so this tends to get
rid of the problem of the actual unicity distance for an encryption
being too optimistic.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged or
something..
No I'm not paranoid. You all think I'm paranoid, don't you!
------------------------------
From: "Dj Le Dave" <[EMAIL PROTECTED]>
Subject: Re: "computationally impossible" and cryptographic hashs
Date: Wed, 30 May 2001 02:15:58 GMT
Related to this, I always wondered why UNIX (and other such systems) bothers
to hash at all. Could they not just "encrypt" the entire password, so to
speak? Break it up into 56-bit blocks (or whatever), perform the hash
independently on each block, then concatenate all the output together to form
the password file entry. This way, an attacker would essentially have to do a
plaintext-ciphertext attack on each block to get the whole password. Thus, if
the password is X bits long, the attacker would have to brute-force all X
bits (well, except for dictionary attacks, etc.). And we don't run into the
birthday paradox, as DES is a one-to-one function. At any rate, it seems to
me that we gain quite a bit of security at the cost of a little disk space.
Any Comments?
======================
www.daverudolf.ca
"Daniel Graf" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I have a basic question about cryptographic hashes. A friend
> and I were talking about how the hash algorithm used some UNIX
> machines uses only 64 (or so) bits of the password.
> Now, I don't know much of anything about cryptography, but I
> have used hash algorithms for use with hash-tables. Are
> cryptographic hashes mostly dissimilar, or am I wrong in guessing that
> more than one input at a Unix login prompt may match the string found
> in the passwd file? (particularly for passwords greater than 8
> characters).
> Looking at the sci.crypt faq I found in section 7.1 something
> which says:
>
> For some one-way hash functions it's also computationally
> impossible to determine two messages which produce the
> same hash.
>
> Does "computationally impossible" mean literally that such a
> thing cannot happen?
>
> Sorry if this is such a stupid question.
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: "computationally impossible" and cryptographic hashs
Date: Wed, 30 May 2001 02:26:51 GMT
Dj Le Dave wrote:
>
> Related to this, I always wondered why UNIX (and other such systems)
> bothers to hash at all. Could they not just "encrypt" the entire password,
> so to speak? Break it up into 56-bit blocks (or whatever), perform the
> hash independently on each block, then concatenate all the output together
> to form the password file entry. This way, an attacker would essentially
> have to do a plaintext-ciphertext attack on each block to get the whole
> password. Thus, if the password is X bits long, the attacker would have to
> brute-force all X bits (well, except for dictionary attacks, etc.). And we
> don't run into the birthday paradox, as DES is a one-to-one function. At
> any rate, it seems to me that we gain quite a bit of security at the cost
> of a little disk space.
The problem with breaking the hash into smaller pieces (i.e. hashing
each block independently) is that the attack amounts to a linear, not
exponential, amount of work. For example, if you used two 28-bit hash
values instead of one big 56-bit hash, the work required to find the
password or equivalent passwords is about 2 * 2^28 = 2^29 trials,
compared to the 2^56 work required against the 56-bit hash ...
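Tom's point can be seen concretely with a toy experiment. Everything here
is an illustrative stand-in: SHA-256 plays the role of the per-block
one-way function, and the eight-letter alphabet and 3-character blocks
keep the search small enough to run.

```python
# Toy demonstration of why hashing fixed-size blocks independently is
# weak: the attacker searches each block on its own, so the total work
# is the SUM of the per-block searches rather than their PRODUCT.
import hashlib
from itertools import product

ALPHABET = "abcdefgh"   # 8 symbols -> 8**3 = 512 candidates per block
BLOCK = 3

def block_hash(piece: str) -> str:
    return hashlib.sha256(piece.encode()).hexdigest()

def store(password: str) -> list:
    """Hash each BLOCK-character piece of the password independently."""
    return [block_hash(password[i:i + BLOCK])
            for i in range(0, len(password), BLOCK)]

def crack(entry: list) -> tuple:
    """Recover the password one block at a time, counting trials."""
    recovered, trials = [], 0
    for target in entry:
        for cand in product(ALPHABET, repeat=BLOCK):
            trials += 1
            if block_hash("".join(cand)) == target:
                recovered.append("".join(cand))
                break
    return "".join(recovered), trials

password, n_trials = crack(store("deadbeefa"))
# n_trials is at most 3 * 512 = 1536, versus 512**3 (about 1.3e8) for a
# joint search over all three blocks at once.
```

The gap between `3 * 512` and `512**3` trials is exactly the linear-vs-
exponential difference described above, scaled down to toy size.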
Which is why a login should really use a 128-bit or larger hash with a
64-bit salt or so: enough to make dictionary attacks against a whole
slew of passwords infeasible. Note that salts don't slow down the
search for a single password, only a search across multiple passwords.
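A minimal sketch of the salted entry described above. The hash function
and salt size are illustrative stand-ins (a real login system would also
use a deliberately slow, iterated hash):

```python
# Sketch of a salted password entry: store a random per-user salt next
# to hash(salt || password).  Identical passwords then yield different
# entries, so one precomputed dictionary no longer covers a whole slew
# of password-file entries at once.
import hashlib
import os

def make_entry(password: str):
    salt = os.urandom(8)        # 64-bit salt, as suggested above
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def check(password: str, entry) -> bool:
    """Re-hash a guess under the stored salt and compare."""
    salt, digest = entry
    return hashlib.sha256(salt + password.encode()).digest() == digest

entry = make_entry("hunter2")
# A guess must be re-hashed against this entry's own salt, so a single
# password is no harder to attack, but a batch attack loses its sharing.
```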
Tom
------------------------------
From: "Scott Fluhrer" <[EMAIL PROTECTED]>
Subject: Re: "computationally impossible" and cryptographic hashs
Date: Tue, 29 May 2001 19:19:08 -0700
Dj Le Dave <[EMAIL PROTECTED]> wrote in message
news:yJYQ6.7617$[EMAIL PROTECTED]...
> Related to this, I always wondered why UNIX (and other such systems)
> bothers to hash at all. Could they not just "encrypt" the entire
> password, so to speak? Break it up into 56-bit blocks (or whatever),
> perform the hash independently on each block, then concatenate all the
> output together to form the password file entry. This way, an attacker
> would essentially have to do a plaintext-ciphertext attack on each
> block to get the whole password. Thus, if the password is X bits long,
> the attacker would have to brute-force all X bits (well, except for
> dictionary attacks, etc.). And we don't run into the birthday paradox,
> as DES is a one-to-one function. At any rate, it seems to me that we
> gain quite a bit of security at the cost of a little disk space.
If you encrypt, what key do you use? If the attacker figured it out (by
whatever means), he could take the password file and decrypt it, giving him
all the passwords.
--
poncho
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Unicity distance and compression for AES
Date: Tue, 29 May 2001 17:47:54 -0800
"Douglas A. Gwyn" wrote:
>
> "SCOTT19U.ZIP_GUY" wrote:
> > when breaking an encryption you need to know the language used.
>
> Actually, that's not always necessary. Often, just the fact
> that the natural language has high redundancy is enough.
> All one really has to do is to distinguish correct guesses
> from incorrect ones, not fit a detailed source model.
Right. So it seems to me that compressing before encryption just adds
one more step to distinguishing the correct guess.
For instance, say the "language" is ASCII. For a 64-bit block, there
are values which have no "meaning" in ASCII, so redundancy exists.
If I have an amount of ciphertext exceeding the unicity distance, pick
keys at random and decrypt, and a decryption leads to a meaningful
ASCII message, then there is a high probability that that key is the
correct key.
If some sort of compression is used such that every randomly chosen
decryption key yields a "meaningful" decryption (i.e. a valid
compressed value), then at first glance it would appear that
redundancy has been eliminated and there is no way to distinguish
between correct and incorrect guesses. Hence, the unicity distance
has been increased, since
   unicity distance = keyspace entropy / redundancy
But *if I know the plaintext was compressed before encryption, and I
know what the compression algorithm is* [1], then all I have to do is
decompress each decrypted value. If it decompresses to a meaningful
ASCII message, then it distinguishes the correct key.
Then again, compression would seem to reduce the number of ciphertext
characters required for a unique, meaningful decompressed decipherment,
so maybe it *reduces* the unicity distance, which would be a benefit
to the cryptanalyst.
Is this right?
[1] Not necessarily a true assumption
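Plugging textbook numbers into that formula gives a feel for the scale.
The 56-bit key and the roughly 3.2 bits/letter redundancy of English are
standard rough estimates, not exact figures:

```python
# Back-of-the-envelope unicity distance: U = H(K) / D.
# For English text over A-Z: raw capacity R = log2(26) ~ 4.7 bits per
# letter, true entropy rate r ~ 1.5 bits per letter, so the redundancy
# is D = R - r ~ 3.2 bits per letter.
import math

H_K = 56.0                   # key entropy in bits (a DES-sized key)
D = math.log2(26) - 1.5      # redundancy of English, bits per letter
U = H_K / D                  # roughly 17-18 letters of ciphertext
```

So for a cipher with a 56-bit key over uncompressed English, on the
order of 18 ciphertext letters already suffice in principle to pin down
the key; compression raises U by shrinking the denominator.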
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Unicity distance and compression for AES
Date: Tue, 29 May 2001 17:58:36 -0800
wtshaw wrote:
>
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>
> > Simply put, redundancy is a feature of the language. You can't change
> > the redundancy without changing the language. Without changing the
> > redundancy you can't change the unicity distance (assuming no
> > change in the entropy of the keyspace).
> >
> > Am I overlooking something?
>
> Yes, redundancy is a far more individually determined quality than you
> think. Language can be highly personalized. Language that is static is
> dead.
[snip]
Ok, but sooner or later you will have to re-create the original message
in the original language so you can read it, execute it, view it,
listen to it, compile it, whatever. If you can decompress in order to
recover the original message in its original language (and its original
redundancy), why can't the cryptanalyst do the same thing, assuming he
knows what compression was used?
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Unicity distance and compression for AES
Date: 30 May 2001 03:20:43 GMT
[EMAIL PROTECTED] wrote in <[EMAIL PROTECTED]>:
>"Douglas A. Gwyn" wrote:
>>
>> "SCOTT19U.ZIP_GUY" wrote:
>> > when breaking an encryption you need to know the language used.
>>
>> Actually, that's not always necessary. Often, just the fact
>> that the natural language has high redundancy is enough.
>> All one really has to do is to distinguish correct guesses
>> from incorrect ones, not fit a detailed source model.
>
>
>Right. So it seems to me that compressing before encryption just
>adds one more step to distinguishing the correct guess.
>
>For instance, say the "language" is ASCII. For a 64-bit block, there
>are values which have no "meaning" in ASCII, so redundancy exists.
>If I have an amount of ciphertext exceeding the unicity distance, pick
>keys at random and decrypt, and a decryption leads to a meaningful
>ASCII message, then there is a high probability that that key is the
>correct key.
>
>If some sort of compression is used such that every randomly chosen
>decryption key yields a "meaningful" decryption (i.e. a valid
>compressed value), then at first glance it would appear that
>redundancy has been eliminated and there is no way to distinguish
>between correct and incorrect guesses. Hence, the unicity distance
>has been increased, since
>
> unicity distance = keyspace entropy / redundancy
>
>But *if I know the plaintext was compressed before encryption, and I
>know what the compression algorithm is* [1], then all I have to do is
>decompress each decrypted value. If it decompresses to a meaningful
>ASCII message, then it distinguishes the correct key.
This is the fallacy: unicity distance is the amount of CIPHERTEXT,
not the amount of PLAINTEXT.
>
>Then again, compression would seem to reduce the number of ciphertext
>characters required for a unique, meaningful decompressed decipherment
Compression would reduce the number of ciphertext characters
required to get the same number of ASCII characters that existed
in a message if compression was not used. But the benefit of real
compression (that is, bijective compression) is that more keys
now map to ASCII than before. So you need more ciphertext
characters that map to long plaintext strings before a unique
identification can be made. So the unicity distance is not
reduced but increased.
It's like looking through several thousand possible ASCII messages
when compression is used. But with no compression, if you only have
5 or 6 messages, it's easier to find the correct one with fewer
characters. No need to expand each string to a long distance.
As another example, use my conditional Huffman compression with the
condition set being only the characters A-Z; then, when you compress
and encrypt, every key would lead to ONLY STRINGS of characters made
of letters from A-Z. So you would have to look at longer sequences to
be sure you had the correct key. If no compression is used, then maybe
only a few keys map to ASCII at all, so you don't need as many
ciphertext characters to get the data. This is nothing new; this is
all stuff from the '40s by Shannon, but it was the kind of knowledge
kept secret from the average person.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSION"
http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
made in the above text. For all I know I might be drugged or
something..
No I'm not paranoid. You all think I'm paranoid, don't you!
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Unicity distance and compression for AES
Date: Tue, 29 May 2001 19:03:33 -0800
"SCOTT19U.ZIP_GUY" wrote:
>
> [EMAIL PROTECTED] wrote in <[EMAIL PROTECTED]>:
[snip]
> >If I have an amount of ciphertext exceeding the unicity distance,
[snip]
> This is the fallacy: unicity distance is the amount of CIPHERTEXT,
> not the amount of PLAINTEXT.
Since I specified "amount of ciphertext" it should have been clear
that I understood that unicity distance refers to ciphertext, not
plaintext.
For reasons you state below, compression seems to reduce the amount
of ciphertext needed for a meaningful, decompressed decryption.
> >
> >Then again, compression would seem to reduce the number of ciphertext
> >characters required for a unique, meaningful decompressed decipherment
>
> Compression would reduce the number of ciphertext characters
> required to get the same number of ASCII characters that existed
> in a message if compression was not used. But the benefit of real
> compression (that is, bijective compression) is that more keys
> now map to ASCII than before. So you need more ciphertext
> characters that map to long plaintext strings before a unique
> identification can be made. So the unicity distance is not
> reduced but increased.
>
> It's like looking through several thousand possible ASCII messages
> when compression is used. But with no compression, if you only have
> 5 or 6 messages, it's easier to find the correct one with fewer
> characters. No need to expand each string to a long distance.
[snip]
I'll have to think about this more before I comment. I'm not
familiar with "bijective compression".
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************