Cryptography-Digest Digest #214, Volume #10 Fri, 10 Sep 99 01:13:03 EDT
Contents:
Re: sourcecode of DES in VB (James Pate Williams, Jr.)
Re: simple key dependent encryption ("Kwong Chan")
Re: Source code ([EMAIL PROTECTED])
Re: Looking for Completely-Free Strong Algorithms ([EMAIL PROTECTED])
Re: Looking for Completely-Free Strong Algorithms (David A Molnar)
Re: 512 bit number factored (Dylan Thurston)
Re: simple key dependent encryption
fun about FIPS74 (jerome)
Re: [q] gnupg strength (Tom St Denis)
Re: some coder/hacker help please? (Tom St Denis)
Re: some coder/hacker help please? (Tom St Denis)
Re: Looking for Completely-Free Strong Algorithms (Tom St Denis)
Re: What was the debugging symbol of the third Windows key?
Re: some coder/hacker help please? (Tom St Denis)
Re: some information theory (Anti-Spam)
Re: GnuPG 1.0 released (Jerry Coffin)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (James Pate Williams, Jr.)
Subject: Re: sourcecode of DES in VB
Date: Tue, 07 Sep 1999 21:06:37 GMT
On Mon, 6 Sep 1999 20:05:37 +0200, "Buchinger Reinhold"
<[EMAIL PROTECTED]> wrote:
>I need a version of DES in VB (possibly in Pascal). It could also be a
>simplified DES. It's only to see how it works.
>I am very grateful for any help !
The algorithm is given in the _Handbook of Applied Cryptography_ by
Alfred J. Menezes et al., Chapter 7, Section 7.4.2, pages 252-256. You
can find this chapter on-line if you search for it. Try searching recent
posts to sci.crypt by Menezes, or do a web search using his name or the
title of the handbook. I implemented the algorithm easily from the
handbook in C.
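If you just want to see how the structure works before tackling the real
thing, here is a minimal Feistel-network sketch in C. It is NOT real DES
- a toy round function stands in for the expansion, S-boxes, permutations
and key schedule - but the 16-round left/right skeleton is the one the
handbook describes:

#include <stdint.h>

/* Toy round function. Real DES does expansion, key mixing, S-box
   lookups and a permutation here; this stand-in only marks where
   F(R, K) plugs into the structure. */
static uint32_t toy_f(uint32_t r, uint32_t k)
{
    return ((r ^ k) * 0x9E3779B9u) ^ (r >> 5);
}

/* 16 Feistel rounds on a 64-bit block, mirroring DES's outer
   structure. keys[] would come from the key schedule (K1..K16). */
uint64_t feistel_encrypt(uint64_t block, const uint32_t keys[16])
{
    uint32_t left  = (uint32_t)(block >> 32);
    uint32_t right = (uint32_t)block;
    for (int i = 0; i < 16; i++) {
        uint32_t tmp = left ^ toy_f(right, keys[i]);
        left  = right;              /* the halves swap each round */
        right = tmp;
    }
    /* like DES, undo the final swap */
    return ((uint64_t)right << 32) | left;
}

/* Decryption is the same walk with the subkeys in reverse order -
   the property that makes the Feistel construction so convenient. */
uint64_t feistel_decrypt(uint64_t block, const uint32_t keys[16])
{
    uint32_t left  = (uint32_t)(block >> 32);
    uint32_t right = (uint32_t)block;
    for (int i = 15; i >= 0; i--) {
        uint32_t tmp = left ^ toy_f(right, keys[i]);
        left  = right;
        right = tmp;
    }
    return ((uint64_t)right << 32) | left;
}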
==Pate Williams==
[EMAIL PROTECTED]
http://www.mindspring.com/~pate
------------------------------
From: "Kwong Chan" <[EMAIL PROTECTED]>
Subject: Re: simple key dependent encryption
Date: Fri, 10 Sep 1999 10:14:34 +0800
> >a) what is this type of encryption called?
> >b) am i wrong in thinking this type of key dependent encryption would be
> >tough to crack?
>
> a) A polyalphabetic cipher with a mixed alphabet and a repeating key.
> b) Yes, you are wrong.
>
If the key has a large period, say longer than the plaintext to be
encrypted, is the polyalphabetic cipher still easy to crack?
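To make the question concrete, here is a minimal repeating-key sketch in
C, with XOR standing in for the mixed-alphabet substitution (the key and
message are just made-up examples). The key byte repeats with period
keylen, and that regularity is exactly what the standard attacks exploit.
A truly random key at least as long as the message, used only once, has
no period to find - that is the one-time pad. A long but non-random key
(English text, say) is still attackable through the key's own redundancy:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Repeating-key polyalphabetic encipherment, with XOR in place of a
   mixed alphabet. key[i % keylen] cycles with the key's period; the
   Kasiski test and the IOC both detect that cycle. */
void poly_xor(unsigned char *buf, size_t len,
              const unsigned char *key, size_t keylen)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % keylen];
}

int main(void)
{
    unsigned char msg[] = "attack at dawn";
    const unsigned char key[] = "LEMON";       /* period 5 */
    size_t len = strlen((const char *)msg);

    poly_xor(msg, len, key, 5);   /* encipher */
    poly_xor(msg, len, key, 5);   /* decipher (XOR is an involution) */
    printf("%s\n", msg);
    return 0;
}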
------------------------------
From: [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
Subject: Re: Source code
Date: Thu, 09 Sep 1999 19:42:11 -0700
Try ftp://ftp.replay.com/pub/replay
And browse to your heart's content.
-Ryan Phillips-
Erick Stevenson wrote:
> Greetings. I need source code for the highest exportable algor's. Can
> anyone help me with this? VB, C++, Java whatever is fine.
>
> Best regards,
> Erick Stevenson
------------------------------
From: [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
Subject: Re: Looking for Completely-Free Strong Algorithms
Date: Thu, 09 Sep 1999 19:44:34 -0700
Check out ftp://ftp.replay.com/pub/crypto and browse.
Licenses are usually found with the algorithm itself.
-Ryan Phillips-
Joseph Ashwood wrote:
> I'm looking for royalty-free strong algorithms. I know that AES (when it's
> decided) will meet the criteria, but I need something fairly soon, and I
> need it to have free source code available also (not enough time to do it
> right myself). Now before Scott plugs himself again, let me say that this
> encryption will be used for bidirectional communication so treating
> everything as a single block simply will not work. I thank you for putting
> up with my questions (although I've only asked a couple over the years), I
> really do appreciate it.
> Joseph
------------------------------
From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Looking for Completely-Free Strong Algorithms
Date: 10 Sep 1999 01:58:52 GMT
Joseph Ashwood <[EMAIL PROTECTED]> wrote:
> I'm looking for royalty-free strong algorithms. I know that AES (when it's
> decided) will meet the criteria, but I need something fairly soon, and I
> need it to have free source code available also (not enough time to do it
> right myself). Now before Scott plugs himself again, let me say that this
I'm assuming that you want a symmetric system.
3DES is tried, true, and royalty-free. It's part of Wei Dai's crypto
library: http://www.eskimo.com/~weidai/cryptlib.html
Twofish has reference source code available from Counterpane. It has
made it to the final round of AES, for whatever that's worth to you.
http://www.counterpane.com/twofish.html
Blowfish is royalty-free, as well.
http://www.counterpane.com/blowfish.html
Those are the only algorithms I'm 100% sure are unpatented at this
moment. There are likely other unpatented algorithms that may be
useful to you. Checking the Block Cipher Lounge might help determine
whether another algorithm is attractive enough to
investigate: http://www.ii.uib.no/~larsr/bc.html
-David
------------------------------
From: Dylan Thurston <[EMAIL PROTECTED]>
Subject: Re: 512 bit number factored
Date: 09 Sep 1999 18:08:49 -0700
Bob Silverman <[EMAIL PROTECTED]> writes:
> I'd also like to address one more issue. Everyone keeps harping on
> the fact that computers are getting faster all the time. Current claims
> are 2x every 18 months. Even if * this* can be sustained for 20 years
> (and I don't know either way), I must point out that NFS depends
> much more on two OTHER things than CPU speed. The first
> is cache size, and the second is non-cache memory to register
> latency. And these most definitely are NOT improving 2x every
> 18 months. 10 years ago, typical dynamic RAM had 80 nsec
> latency. Now it has 60. Not much improvement. And while
> cache size has improved by a little more than this, it still has not
> been dramatic. For NFS to make use of the increased CPU speed,
> we need to see much bigger data caches. And such improvements
> do not seem to be forthcoming. If someone believes I am wrong
> here, please post some numbers.
This confuses me a little. From earlier comments (about the
impossibility of paging) I had understood that the access pattern for
solving the matrix was essentially flat; that it was impossible to
predict where future references would be. But now you ask for bigger
caches, which are useful exactly when you do have locality of
reference. Can you explain this a little?
Also, is it really memory _latency_ that matters so much? If I
understand correctly, memory _bandwidth_ has improved more than latency
over the last 10 years. Why doesn't this help?
Best,
Dylan Thurston
[EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] ()
Subject: Re: simple key dependent encryption
Date: 10 Sep 99 02:15:01 GMT
JPeschel ([EMAIL PROTECTED]) wrote:
: First, you determine the length of the key by the Kasiski method or
: the IOC. (John seems to prefer Kasiski, Doug the IOC. You could
: use both tests -- that is, use the IOC to confirm the Kasiski results,
: and, in the process, learn both methods.)
Well, the Kasiski method is easier to describe briefly, to convince
someone that there is a way of attacking these ciphers. In this case,
because 8-bit characters are extra redundant, even Kasiski is not needed
(and might not work well on object code, although, especially when
compiler-generated, it too has repeated sequences) - one just has to look
for MSB patterns in text stretches. (If the file being encrypted happens
to be a .GIF or .JPG, then even Kasiski or the IC may not work; instead,
try deriving the key from the known headers, and see if the result is
a valid file.)
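For anyone who wants to try the IOC half of this, here is a rough sketch
in C (a sketch, not production cryptanalysis code). Slice the ciphertext
into columns at a guessed period; at the correct period each column was
enciphered with a single alphabet, so its index of coincidence stays near
the plaintext value instead of falling toward the 1/256 expected of
random bytes:

#include <stddef.h>

/* Index of coincidence of one column: the probability that two
   symbols drawn at random from the column are equal. Columns sliced
   at the true key period are monoalphabetically enciphered plaintext,
   so their IC stays high. */
double column_ic(const unsigned char *text, size_t len,
                 size_t period, size_t offset)
{
    unsigned long count[256] = {0};
    unsigned long n = 0, pairs = 0;

    for (size_t i = offset; i < len; i += period) {
        count[text[i]]++;
        n++;
    }
    for (int c = 0; c < 256; c++)
        pairs += count[c] * (count[c] - 1);
    return n > 1 ? (double)pairs / ((double)n * (double)(n - 1)) : 0.0;
}

/* Average IC over all columns for a guessed period; try periods
   1, 2, 3, ... and look for the ones that spike. */
double avg_ic(const unsigned char *text, size_t len, size_t period)
{
    double sum = 0.0;
    for (size_t off = 0; off < period; off++)
        sum += column_ic(text, len, period, off);
    return sum / (double)period;
}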
John Savard
------------------------------
From: [EMAIL PROTECTED] (jerome)
Subject: fun about FIPS74
Date: 10 Sep 1999 03:04:04 GMT
Reply-To: [EMAIL PROTECTED]
In FIPS 74 on the DES implementation, you can find this in the
table of contents (at http://www.itl.nist.gov/fipspubs/fip74.htm):
5. IMPLEMENTATlON OF THE A1GOR1THM
It is probable that this document was scanned and run through
a character recognizer... I found that funny for an organisation
feared partly because of its CPU power :)
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: [q] gnupg strength
Date: Fri, 10 Sep 1999 03:10:26 GMT
In article <YhWB3.153$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] wrote:
> could anybody versed or familiar with the algorithms of pgp and gnupg
> explain the relative strength of the gnupg system as compared to pgp2?
>
> is gnupg going to be worth the time of setting up and building new
> public keys and distributing them to all my friends?
Well, it's probably no more secure, but I think GnuPG has more features
and is more open as well. Personally I would stick with what works. If
you keep changing systems you could introduce your own errors.
Tom
--
damn windows... new PGP key!!!
http://people.goplay.com/tomstdenis/key.pgp
(this time I have a backup of the secret key)
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: some coder/hacker help please?
Date: Fri, 10 Sep 1999 03:08:27 GMT
In article <[EMAIL PROTECTED]>,
"John E. Kuslich" <[EMAIL PROTECTED]> wrote:
> The source code file appears to be corrupted. I tried downloading it twice and
> each time winzip barfed...complaining that the file was bogus.
>
> JK http://www.crak.com Password Recovery Software
>
Goplay (my website/email host) is not working at all, I am sorry about
this. Any ideas?
Tom
--
damn windows... new PGP key!!!
http://people.goplay.com/tomstdenis/key.pgp
(this time I have a backup of the secret key)
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: some coder/hacker help please?
Date: Fri, 10 Sep 1999 03:07:36 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (John Bailey) wrote:
> On Thu, 09 Sep 1999 16:19:48 -0700, "John E. Kuslich" <[EMAIL PROTECTED]>
> wrote:
>
> >The source code file appears to be corrupted. I tried downloading it twice and
> >each time winzip barfed...complaining that the file was bogus.
>
> I get the same result.
Yup, goplay sucks the big one. I can't even upload 61 KB... any
suggestions? (peekboo.exe is probably no good on the site either.)
I can email it if anyone wants...
Tom
--
damn windows... new PGP key!!!
http://people.goplay.com/tomstdenis/key.pgp
(this time I have a backup of the secret key)
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Looking for Completely-Free Strong Algorithms
Date: Fri, 10 Sep 1999 03:33:37 GMT
In article <#rFelpy##GA.280@cpmsnbbsa02>,
"Joseph Ashwood" <[EMAIL PROTECTED]> wrote:
> I'm looking for royalty-free strong algorithms. I know that AES (when it's
> decided) will meet the criteria, but I need something fairly soon, and I
> need it to have free source code available also (not enough time to do it
> right myself). Now before Scott plugs himself again, let me say that this
> encryption will be used for bidirectional communication so treating
> everything as a single block simply will not work. I thank you for putting
> up with my questions (although I've only asked a couple over the years), I
> really do appreciate it.
Try using Blowfish or CAST if you need something right now. I personally
trust Twofish, Serpent, Rijndael, RC6, MARS and a few other AES ciphers...
just cuz they are not standard doesn't mean they are weak.
Tom
--
damn windows... new PGP key!!!
http://people.goplay.com/tomstdenis/key.pgp
(this time I have a backup of the secret key)
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: [EMAIL PROTECTED] ()
Subject: Re: What was the debugging symbol of the third Windows key?
Date: 10 Sep 99 03:24:53 GMT
Alex ([EMAIL PROTECTED]) wrote:
: Just out of curiosity, what was the debugging symbol of the third key in
: the Windows crypto routines? It does not seem to be mentioned in the
: original announcement.
That's because the error that allowed the second key's name to be visible
did not also reveal the third key's name. That is still a secret known
only to Microsoft.
John Savard
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: some coder/hacker help please?
Date: Fri, 10 Sep 1999 03:30:58 GMT
In article <7r94g3$v19$[EMAIL PROTECTED]>,
Tom St Denis <[EMAIL PROTECTED]> wrote:
> I have (as everyone knows) released PeekBoo ([1]). I have just put out a
> 'v1.3' which addresses some (security-wise) errors that I have found. I
> would, however, like others to attack it as well. The program is
> unfortunately limited to symmetric encryption, but it serves its purpose
> well. Basically I want to try to find any memory 'leaks' where key bits
> or password bits are left in memory, and the like.
>
> [1] The program and source code can be found at
> http://people.goplay.com/tomstdenis/pb.html
http://members.tripod.com/~tomstd/pb.html as well..
You know what, I hate 'free' online websites. Goplay is down every 30
seconds, and Tripod is so full of ads it's useless. Try the above link;
you might be able to get pb.zip ... I hate this!!! damn it... I remember
when Tripod was simple... just ftp the file... now it's ads this, ads
that...
Whoopy... any ideas? I want to get this out but I don't have anywhere to
put it that works!!!
Tom
--
damn windows... new PGP key!!!
http://people.goplay.com/tomstdenis/key.pgp
(this time I have a backup of the secret key)
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: Anti-Spam <[EMAIL PROTECTED]>
Subject: Re: some information theory
Date: Thu, 09 Sep 1999 21:18:46 -0700
John Savard wrote:
>
> Anti-Spam <[EMAIL PROTECTED]> wrote, in
> part:
>
> >Compression with huffman-encoding type schemes is identical to a
> >substitution encipherment.
> >The relative order of SYMBOLS as ENCODED in the uncompressed data/file
> >with ASCII, UNICODE or any other binary representation is PRESERVED by
> >the compression
>
> Yes, with static Huffman compression. Not with _adaptive_ Huffman, or
> Lempel-Ziv.
>
> >Compressing data/files prior to encryption with a cipher system does not
> >alter the frequency of the SYMBOLS encoded in the compressed data/file
> >relative to their original frequencies in the uncompressed file/data.
> >Compressing prior to encrypting does not permute the order of the
> >SYMBOLS encoded in the compressed data/file relative to their original
> >order as encoded in the uncompressed file/data. Compression only alters
> >the encoding of the SYMBOLS to achieve the maximal entropy possible as a
> >function of the probabilities of the SYMBOLS themselves.
>
> Not compression in general, but a particular limited form of
> compression.
>
> For cryptographic purposes, it is important to compress as well as
> possible.
>
> So instead of using single ASCII characters as SYMBOLS, how about
> compression where digraphs, trigraphs, and even English words are
> SYMBOLS? Even simply a multi-mode Huffman code, where letters are
> coded based on their frequencies after a vowel, or after a consonant,
> or at the beginning of a word, will help.
>
> >First, compressed data is NOT necessarily random data. Many of us assume
> >the compressed form of a file is "equivalent" in some form to true
> >random data. It is not.
>
> No, but the better the compression, the closer the compressed file
> will resemble random data.
>
> You are right, though, that it is very difficult to compress well
> enough to get very close to randomness.
>
> But it is incorrect, and confusing, to conclude from that that it is
> impossible to derive any benefit from compression. The benefit is,
> unfortunately, limited.
I DO NOT conclude that it is impossible to derive any benefit from
compression prior to encryption. I am exploring the logical holes in
the assertion that compression supplies the diffusion and confusion
properties of encryption in and of itself. Many of the posts read here
imply (or is it just the way I read it? :) ) that compression achieves
the desired effects of encryption without using encryption.
Static Huffman encoding for compression amounts to a simple
substitution of the binary codes for the symbols in the uncompressed
data/file with other binary codes for those symbols in the compressed
data/file. The order and frequency of the symbols remain invariant.
That's not a very secure "hiding" of symbols.
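A tiny sketch in C makes the substitution nature plain. The code table
below is hypothetical - four symbols with assumed frequencies a > b >
c = d - but the point holds for any static table: the i-th symbol of the
input becomes the i-th codeword of the output, so order and frequency
pass straight through and only the representation changes:

#include <stdio.h>

/* Static Huffman coding as pure substitution: each symbol maps to one
   fixed bit string, chosen once from assumed frequencies. */
struct code { char sym; const char *bits; };

static const struct code table[] = {
    { 'a', "0"   },   /* most frequent symbol gets the shortest code */
    { 'b', "10"  },
    { 'c', "110" },
    { 'd', "111" },
};

int main(void)
{
    const char *msg = "abacad";
    /* "abacad" -> 0 10 0 110 0 111, i.e. 01001100111 */
    for (const char *p = msg; *p; p++)
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (table[i].sym == *p) {
                fputs(table[i].bits, stdout);
                break;
            }
    putchar('\n');
    return 0;
}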
I am on the hook to extend this analysis to adaptive encoding for
compression (such as adaptive Huffman coding). I've thought about it
most of the day. I can only post at night. I've enjoyed the responses.
Here's what I'm thinking about now:
Adaptive encoding for compression usually begins with an assumed model
for the probabilities of occurrence of the symbol set (the "alphabet"),
based on some sample of text, data, or files using those symbols. Each
of the symbols in the "alphabet" gets an a priori probability. We can
use this initial probability model to define an initial encoding for
the symbols.
As we process the data/file we update the frequencies for the symbols as
they appear. The encoding for a given symbol changes "rapidly" early
on. We encounter subsets of the "alphabet" as we progress through the
data/file. Now here's what charms me about this problem.
What is the spatial "rate of change" of the probability of occurrence
for a symbol as I process the uncompressed data/file from its beginning
to its end? At some point in the data/file binary stream the statistics
should/may (a conjecture - no proof yet) tend to steady-state values,
and thus the encoding for symbols in the compressed data/file will not
change over time.
Autocorrelation testing of the compressed data/file may/should identify
candidate binary values for the encoded symbols.
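As a sketch of that test, here is a bit-level autocorrelation measure in
C (this implements the conjecture above, not a proven attack): count how
often the stream agrees with a copy of itself shifted by a given lag. If
the encoding has settled into steady state, lags related to common
codeword lengths should agree noticeably more often than the 0.5
expected of random bits:

#include <stddef.h>

/* Bit at position bitpos, MSB-first within each byte. */
static int get_bit(const unsigned char *buf, size_t bitpos)
{
    return (buf[bitpos >> 3] >> (7 - (bitpos & 7))) & 1;
}

/* Fraction of positions where the stream agrees with itself shifted
   by 'lag' bits. */
double bit_autocorr(const unsigned char *buf, size_t nbits, size_t lag)
{
    size_t match = 0, total;

    if (lag >= nbits)
        return 0.0;
    total = nbits - lag;
    for (size_t i = 0; i < total; i++)
        match += (size_t)(get_bit(buf, i) == get_bit(buf, i + lag));
    return (double)match / (double)total;
}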
Adaptive Huffman coding and static Huffman coding produce encodings for
the symbols with these general characteristics:
1. No two codings for a symbol are identical.
2. No additional information is necessary to specify where one symbol's
code begins and another symbol's code ends once the starting point of
the sequence of encodings is known. (There's a clinker. So start the
analysis from the end of the compressed data/file.)
3. Given that the probabilities of the k symbols can be ranked as
p1 >= p2 >= p3 >= ... >= pk, the lengths of the binary codes for the
symbols can be ranked as L1 <= L2 <= L3 <= ... <= Lk.
4. At least 2, but not more than 2, of the codes with length Ln
(1 <= n <= k) are identical in bit values except for their final bit.
5. If the encoding is optimal, all possible binary sequences of length
Ln - 1 are either codes or code prefixes.
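Points 1, 2 and 5 amount to the prefix property, and that is easy to
check mechanically. Here is a small sketch in C over candidate codeword
sets written as '0'/'1' strings (the sample sets are made up):

#include <stdio.h>
#include <string.h>

/* Points 1 and 2 above hold for prefix-free codes: no codeword equals
   or is a prefix of another, so decoding never needs separators. */
int is_prefix_free(const char *codes[], size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            if (i == j)
                continue;
            if (strncmp(codes[i], codes[j], strlen(codes[i])) == 0)
                return 0;   /* codes[i] prefixes (or equals) codes[j] */
        }
    return 1;
}

int main(void)
{
    const char *good[] = { "0", "10", "110", "111" };
    const char *bad[]  = { "0", "01", "11" };    /* "0" prefixes "01" */

    printf("%d %d\n", is_prefix_free(good, 4), is_prefix_free(bad, 3));
    return 0;   /* prints: 1 0 */
}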
If the encoding reaches a "steady state" then we should expect to see
the same codes for symbols from that point on in the compressed
data/file. We would need to guess at the value of k (how many unique
symbols there are) and, by the assumption that frequencies are not
changing or are changing "slowly" (can't define that yet), and assuming
the encoding is optimal, use the 5 points above as a guide to sort
through candidate code strings.
I would look for autocorrelations in the later part of the data/file
with binary sequences of length Ln - 1, constrained by a guess for the
number of symbols k. If I knew something about the symbols as they
appeared in the uncompressed data/file, I'd start with that. If it's a
short text message this would prove fruitless, since many of the
characters in the alphabet may not occur in the uncompressed text. I
would think (conjecture) that text longer than the unicity distance for
English would mean I could just assume all letters of the alphabet did
appear, and use static frequency-of-occurrence models for the letters
to establish which letter should have the shortest, the next shortest,
the next, etc... bit length encoding.
I'd try a similar definition of "symbols" for other kinds of data/files
(like images or pictures, etc.) based on knowledge of standard
structures and data representations for that type of data/file.
What if the frequencies of occurrence do NOT reach a steady state?
That's something I'm still mulling over. I am leaning toward a possible
equivalence between polyalphabetic encryption systems and adaptive
Huffman codes. In the static case, the Huffman codes for symbols
substitute for the original codes for the symbols in the data/file. As
an approximation to adaptive Huffman encoding, imagine that the codes
for the symbols (i.e., an alphabet) in the early (front) part of the
data stream form one substitution set, and later (deeper) into the data
stream it's another substitution set, then another... This is analogous
to polyalphabetic encipherment. Different codes represent the same
symbol in different parts of the compressed data/file, but I think
(conjecture) the cryptanalysis techniques for polyalphabetic ciphers
would work here.
Again, the 5 points above about the characteristics of the binary codes
for symbols in the compressed data/file would guide my initial guesses
at the encodings.
Adaptive and static Huffman codes for compression maintain the order
and frequency of the symbols in the compressed data/file. These
compression methods only change the code (binary sequence) that
represents the symbol. A data/file with varying codes for the same
symbol may be analogous to a polyalphabetic cipher. (Wish we had a
proof.)
[EMAIL PROTECTED]
>
> John Savard ( teneerf<- )
> http://www.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (Jerry Coffin)
Subject: Re: GnuPG 1.0 released
Date: Thu, 9 Sep 1999 22:35:33 -0600
In article <7r9kja$k2n$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] says...
[ ... ]
> What the Free Software Foundation says about patents is not legal advice
> that I would bet my life savings on. For years the FSF excused the GNU
> support for decompressing LZW by claiming that the LZW patent was only
> about compression, and never mind that about half of the ~150 claims in
> 4,558,302 contain the word "decompression."
Though it causes me great pain to do so, I have to agree with the FSF
on this one. The question is NOT whether the Welch patent is about
decompression at all. In considering infringement of a patent, there
is one crucial question to ask: does the alleged infringer implement
all the elements of at least one independent claim, or reasonable
equivalents thereof?
If the answer is no, then there can be no direct infringement of the
patent. Regardless of the total number of claims that mention
decompression, ALL the independent claims of the Welch patent specify
the ability to carry out compression. Most ALSO mention decompression
in a phrase something like "A system of compression and decompression
comprising..." but not ONE independent claim talks about a method of
decompression independent of compression.
As such, I have to agree with the FSF that a program that does ONLY
decompression of LZW encoding, but NOT LZW compression probably does
not directly infringe on the Welch patent.
That, it should be mentioned, leaves two other possible forms of
infringement: inducing infringement, and contributory infringement.
Inducing infringement basically consists of knowingly inducing some
other party to infringe on the patent. E.g. if I supplied a library
that included LZW compression code, I could argue that the library,
but itself doesn't do any compression, so I don't infringe. The court
would almost certainly be easy to convince that I was inducing others
to infringe on the patent though, by selling the code that was useful
by infringing on the patent. I don't think the FSF code for LZW
decompression falls under this form of infringement at all.
Contributory infringement is a more likely possibility. This is when
a particular thing doesn't carry out all the elements of infringing
the patent by itself, but when used in combination with other things,
the result infringes. This form of infringement carries one
substantial limitation however: for something to be considered a
contributory infringer, it must NOT have any substantial non-
infringing use. If, for example, you created a program that could
decompress and view GIF files, this might be considered as
contributory infringement, because it's of no real use unless used in
some sort of overall system that can also create GIF files.
OTOH, if the same program could also view files compressed with, say,
JPEG compression, then the possibility of contributory infringement
would probably be removed, because there would now be a substantial
non-infringing use for the program.
To summarize: even if EVERY claim in the Welch patent contained the
word "decompression" it would not necessarily mean that a decompressor
on its own would infringe.
Disclaimer: I work with patents and (particularly) reading patents to
figure out whether things infringe or not on a regular and ongoing
basis, so I think my opinions above are reasonably well-informed.
Despite this, I hasten to point out that I'm NOT an attorney of any
kind, and none of the statements above should be construed as legal
advice in any way, shape, form or fashion. If you find somebody who's
a patent attorney in the US, and they disagree with what I've said
above, I'd _really_ like to hear about it, along with the particulars
of what they did say.
--
Later,
Jerry.
The Universe is a figment of its own imagination.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************