Cryptography-Digest Digest #772, Volume #12 Mon, 25 Sep 00 17:13:01 EDT
Contents:
Re: LFSR as a passkey hashing function? (Bryan Olson)
Re: Tying Up Loose Ends - Correction (SCOTT19U.ZIP_GUY)
Re: Please verify ("Joseph Ashwood")
Re: Encryption speed of *fish (Simon Johnson)
Re: Letter substitution decoder (Albert Yang)
Re: What make a cipher resistent to Differential Cryptanalysis? (Tom St Denis)
Re: Proper way to intro a new algorithm to sci.crypt? (Albert Yang)
Re: Tying Up Loose Ends - Correction (SCOTT19U.ZIP_GUY)
Re: Why is TwoFish better than Blowfish? (Albert Yang)
On block encryption processing with intermediate permutations (Mok-Kong Shen)
Re: A New (?) Use for Chi (Mok-Kong Shen)
Re: Software patents are evil. (Darren New)
Re: Why is TwoFish better than Blowfish? (Tom St Denis)
Re: Triple DES CBC test vectors (Mok-Kong Shen)
----------------------------------------------------------------------------
From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: LFSR as a passkey hashing function?
Date: Mon, 25 Sep 2000 18:56:52 GMT
Simon Johnson wrote:
> I just want a 128-bit hash of a 128-bit key so I know that a user has
> supplied the correct password to decrypt the file, in my file
> encryption utility.
The standard solution is to add salt and hash. You might want
to do "key stretching" since people tend to choose pass phrases
poorly, and the full 128 bits is longer than needed for a check.
Or you could look for an existing encryption utility that
meets your needs.
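A minimal sketch of the salt-plus-stretch approach in Python (PBKDF2 from
the standard library; the salt size and iteration count here are
illustrative, not recommendations):

```python
import hashlib, hmac, os

ITERATIONS = 100_000  # illustrative work factor; tune to taste

def make_verifier(passphrase: bytes):
    """Derive a salted, stretched 128-bit check value from a passphrase."""
    salt = os.urandom(16)
    check = hashlib.pbkdf2_hmac('sha256', passphrase, salt,
                                ITERATIONS, dklen=16)
    return salt, check

def verify(passphrase: bytes, salt: bytes, check: bytes) -> bool:
    cand = hashlib.pbkdf2_hmac('sha256', passphrase, salt,
                               ITERATIONS, dklen=16)
    return hmac.compare_digest(cand, check)

salt, check = make_verifier(b'correct horse')
assert verify(b'correct horse', salt, check)
assert not verify(b'wrong guess', salt, check)
```

The stretching slows down off-line guessing of weak pass phrases while
the stored salt-plus-check pair still serves as the password test.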
--Bryan
--
email: bolson at certicom dot com
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Tying Up Loose Ends - Correction
Date: 25 Sep 2000 19:00:47 GMT
[EMAIL PROTECTED] (John Savard) wrote in
<[EMAIL PROTECTED]>:
>On Sat, 23 Sep 2000 01:41:42 +0100, David Hopwood
><[EMAIL PROTECTED]> wrote, in part:
>
>>With all due respect, this is complete nonsense.
>
>>When we talk about "reducing the keyspace", that means reducing the
>>size of the set of keys that need to be considered at all; it does
>>not mean finding a test that will eliminate keys by testing them one
>>by one.
>
>Although you may be correcting a genuine error in the previous
>posting, I should point out that David Scott is talking about
>something completely different.
>
>He is not claiming that imperfections in the compression used reduce
>the number of possible keys in the cipher used afterwards.
>
>Rather, *assuming* that the keyspace can be searched, what an
>imperfection in compression does is reduce the number of the candidate
>plaintexts, already deciphered, that need to be laboriously
>distinguished from the gibberish that decryption with the wrong key
>would produce.
>
>It certainly is true that "perfect" compression, were it attainable,
>would confer a kind of information-theoretic security (although still
>short of that provided by the one-time pad) on messages. However, it
>is equally true that no compression scheme could possibly be devised
>that would, for a given ciphertext, cause decryption with DES to
>produce 2^56 equally plausible plaintexts (and, of course, the
>ciphertext would be another plausible plaintext, hence perfect
>compression would also be steganographic!).
>
>However, Mr. Scott does not claim to have achieved compression that is
>perfect in that sense.
>
>He simply advocates using compression that is carefully designed to
>leave no *obvious* redundancy for the cryptanalyst: specifically, when
>Huffman coding is used, there is a redundancy caused by the message
>ending on a symbol boundary _which can easily be removed_, so why not?
>
However, one can use my conditioned Huffman coding such that
if you limit the input to only the characters A to Z and space,
then compressing means you will get 2^56 messages back from the
use of all keys, each containing only A to Z and spaces.
If one further had a dictionary based only on words with a space
attached, it would be rather simple to modify it to compress a file
such that each guessed DES key would come back as a valid word
message; but as to plausible meanings, that's another question.
I think it would be easier to just have a dictionary of sounds,
since there are fewer sounds than words. In this sense maybe
Japanese is easier to encrypt.
If someone supplies a list of, say, 2^16 words, I would offer to
make the bijective compressor/decompressor for it, so that any
binary 8-bit file would decompress to a file from that set and
vice versa.
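As a toy illustration of mapping arbitrary bytes onto the 27-symbol
A-to-Z-plus-space alphabet and back (plain base-27 conversion with a
sentinel byte; this is NOT the conditioned Huffman coding described
above, just the shape of the idea):

```python
ALPHABET = ' ' + ''.join(chr(ord('A') + i) for i in range(26))  # 27 symbols

def bytes_to_alpha(data: bytes) -> str:
    # A leading sentinel byte keeps leading zero bytes through the round trip.
    n = int.from_bytes(b'\x01' + data, 'big')
    out = []
    while n:
        n, r = divmod(n, 27)
        out.append(ALPHABET[r])
    return ''.join(reversed(out))

def alpha_to_bytes(text: str) -> bytes:
    n = 0
    for ch in text:
        n = n * 27 + ALPHABET.index(ch)
    return n.to_bytes((n.bit_length() + 7) // 8, 'big')[1:]  # drop sentinel

msg = bytes_to_alpha(b'\x00\xffhello')
assert set(msg) <= set(ALPHABET)
assert alpha_to_bytes(msg) == b'\x00\xffhello'
```

With such a mapping, every trial decryption decodes to some string over
A to Z and space, which is the property being claimed for the keyspace.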
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website **now all allowed**
http://members.xoom.com/ecil/index.htm
Scott LATEST UPDATED source for scott*u.zip
http://radiusnet.net/crypto/ then look for
sub directory scott after pressing CRYPTO
Scott famous Compression Page
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
I leave you with this final thought from President Bill Clinton:
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Please verify
Date: Mon, 25 Sep 2000 12:04:13 -0700
What does your friend consider brute force? And which part of the system did
he brute force?
If he claims he brute force searched for his private key value, then I will
have no problem proclaiming him a liar. As proof, assume his computers were
performing 2^64 checks per second (that would require many, many billions of
processors), it would still require an average of 2^4063 seconds to recover
the private key, that's approximately 10^1215 years.
Now if he brute forced his passphrase, then he chose a bad passphrase, and
he should really put more effort into them.
If he brute forced the value that is actually used for the key, then,
using the same mega-cluster as before, it would require at minimum
2^48 seconds, which is still nearly 9 million years.
The key to this is that your friend said brute force. If he had used
optimal attacks, these figures are in some cases greatly reduced: the
passphrase statement remains the same, attacking the private key
becomes a matter of only a few centuries, and attacking the value used
for the key becomes a mere 2^26 seconds, or about 2 years (using the
mega-cluster above).
I'm sorry but your friend either chose a bad passphrase, or is not being
entirely truthful.
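The unit conversions behind those figures can be checked in a couple of
lines (orders of magnitude only):

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def log10_years(log2_seconds: float) -> float:
    """Base-10 exponent of a duration given as 2^log2_seconds seconds."""
    return log2_seconds * math.log10(2) - math.log10(SECONDS_PER_YEAR)

# 2^4063 seconds is about 10^1215 years; 2^26 seconds is about 2 years.
assert 1215 < log10_years(4063) < 1216
assert 1.9 < 10 ** log10_years(26) < 2.3
```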
Joe
------------------------------
From: Simon Johnson <[EMAIL PROTECTED]>
Subject: Re: Encryption speed of *fish
Date: Mon, 25 Sep 2000 19:24:57 GMT
In article <[EMAIL PROTECTED]>,
Runu Knips <[EMAIL PROTECTED]> wrote:
> Mark Wooding wrote:
> > Some will tell you that Twofish is faster. It depends. If you really
> > pull all of the stops out, then Twofish is faster on some
> > architectures.
>
> I'm surprised to hear that Twofish is faster than Blowfish. The
> encryption loop of Blowfish is extremely simple
>
> a ^= p[i];
> b ^= ((sbox1[(a >> 030) & 0xff] + sbox2[(a >> 020) & 0xff])
>      ^ sbox3[(a >> 010) & 0xff]) + sbox4[(a >> 000) & 0xff];
>
> while one round of Twofish looks like this (in my own AFAIK very
> fast implementation):
>
> x = ( (k->s[0][((a) >> 000) & 0xff]) ^ (k->s[1][((a) >> 010) & 0xff])
> ^ (k->s[2][((a) >> 020) & 0xff]) ^ (k->s[3][((a) >> 030) & 0xff]));
> y = ( (k->s[1][((b) >> 000) & 0xff]) ^ (k->s[2][((b) >> 010) & 0xff])
> ^ (k->s[3][((b) >> 020) & 0xff]) ^ (k->s[0][((b) >> 030) & 0xff]));
> x += y; y += x;
> x += k->t[i++];
> y += k->t[i++];
> c = rotr (c ^ x, 1);
> d = rotl (d, 1) ^ y;
>
> Both algorithms have 16 rounds. The round of Twofish is
> substantially more complex than that of Blowfish, isn't it?
> It is Blowfish with some additional instructions (an
> addition, a rotation, and a xor). So how can Twofish ever
> be faster than Blowfish?
>
That's a sin; changing one function can alter the security of a cipher
so much that it deserves a new name. If many functions have been
changed or added, drawing a comparison is meaningless.
As for how it could be faster? Well, that is not a good question to
ask. Faster than what? Blowfish is awful in hardware: it requires a
lot of hardware memory and is slow. Twofish would probably be far
superior in hardware.
Simon.
----
Hi, I'm the signature virus,
help me spread by copying me into your Signature File
------------------------------
From: Albert Yang <[EMAIL PROTECTED]>
Subject: Re: Letter substitution decoder
Date: Mon, 25 Sep 2000 19:37:40 GMT
http://www.cs.arizona.edu/http/html/people/muth/Cipher/
I assume this is what you are looking for?
Albert
mls wrote:
>
> No, not Captain Marvel, but am looking for a program that will try all
> possible letter substitution combinations, with small library of
> plaintext for matching. Maybe even in BASIC.
> Assumes that the text to be decrypted is simple substitution and not
> one time pad.
> Anyone know of such an .exe? (Oakland is down, winfiles has
> nothing...)
>
> Thanx,
>
> m shannon
> mls at fusionsites dot com
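A minimal version of the kind of program being asked for, hill-climbing
over substitution keys and scoring candidates against a small plaintext
word list (the word list, scoring, and iteration count here are all
illustrative):

```python
import random, string

WORDS = {'THE', 'AND', 'THAT', 'HAVE', 'FOR', 'NOT', 'WITH', 'YOU', 'THIS'}

def decode(ciphertext: str, key: str) -> str:
    """key[i] is the plaintext letter for ciphertext chr(ord('A') + i)."""
    return ciphertext.translate(str.maketrans(string.ascii_uppercase, key))

def score(plaintext: str) -> int:
    """Count words of the candidate plaintext found in the word list."""
    return sum(1 for w in plaintext.split() if w in WORDS)

def crack(ciphertext: str, iters: int = 20000, seed: int = 1) -> str:
    """Random-swap hill climb over substitution keys."""
    rng = random.Random(seed)
    key = list(string.ascii_uppercase)
    rng.shuffle(key)
    best = score(decode(ciphertext, ''.join(key)))
    for _ in range(iters):
        i, j = rng.randrange(26), rng.randrange(26)
        key[i], key[j] = key[j], key[i]
        s = score(decode(ciphertext, ''.join(key)))
        if s >= best:
            best = s          # keep the swap (ties allowed, to keep moving)
        else:
            key[i], key[j] = key[j], key[i]  # undo the swap
    return decode(ciphertext, ''.join(key))
```

A real solver would score with letter and bigram frequencies as well,
since a short word list alone gives the climb little to hold on to.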
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: What make a cipher resistent to Differential Cryptanalysis?
Date: Mon, 25 Sep 2000 19:33:16 GMT
In article <[EMAIL PROTECTED]>,
Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>
>
> Tom St Denis wrote:
> >
>
> > Oh sorry, yes you're right. Let's consider DES with ultra-weak
> > sboxes; at most you add 8! work to the attack (or 16(8!) if you use
> > round-independent reorderings), which is not a heck of a lot.
>
> But how LARGE would be the figure, if each round has a
> different ordering? And if one has 16 S-boxes available
> for choice?
16! is only about 2^44, still not large enough. Plus you need
sufficient diffusion to make the cipher work.
So if you had 32 sboxes and a really well-balanced linear transform of
some sort, then the cipher could be secure given random orderings of
the sboxes... but slide attacks would still win...
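The orders of magnitude here are easy to check:

```python
import math

# log2 of the number of orderings of n S-boxes
for n in (8, 16, 32):
    print(n, round(math.log2(math.factorial(n)), 1))
# 8! is about 2^15.3, 16! about 2^44.3, 32! about 2^117.7
```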
Tom
------------------------------
From: Albert Yang <[EMAIL PROTECTED]>
Subject: Re: Proper way to intro a new algorithm to sci.crypt?
Date: Mon, 25 Sep 2000 19:51:28 GMT
Thank you very much. I am just about finished with some preliminary
cryptanalysis.
This is my observation as far as new algorithm postings:
First, there is pedigree. When Bruce or Eli says they have a new
algorithm, people give them quite a bit of attention, and I feel
rightly so. Pedigree plays a large role, so for a "newbie" throwing
out a new algorithm for someone to take a look at, there is a hill to
climb because there is no name recognition. I mean, we know them on a
first-name basis: Lars, Eli, Bruce, etc...
So what I have done (what I am doing) to remedy this is that when I do
stick my algorithm up for all of you wolves out there to tear apart, I
will have done some preliminary cryptanalysis of it. I'll (hopefully)
have pointed out some of the weaknesses of the algorithm. I think
this, more than anything, shows that a lot of thought and care went
into the algorithm, the writing of it, and the architecture of it. I
think this is the part that gives an algorithm credibility.
Also, I hope that my documentation will be concise and clear. I give
examples, sample code, sample reference code, sample optimized code, and
describe in great details some thinking and logic behind the choices and
selections made. Oh, and hopefully throw a little math and pepper in
there to spice up the dish.
I'm thinking (hoping) this is what the crypto community (specifically
sci.crypt) would like to see in an algorithm release, and that any
attacks that come this way will be against the algorithm itself, and
not against me for making claims that are unfounded and untrue, or for
lack of legwork...
But of course, every newbie has to make some ridiculous claim, so I'll
do it below; that way I will have gotten it out of my system and we
can get on with real cryptanalysis.
Thank you all.
Albert
~~~~~~~~~~~~~
My algorithm:
cures cancer, athlete's foot, and cold sores.
it is unbreakable,
and most importantly, does not clash with plaid pants, a claim that
cannot be made by any other algorithm in current existence.
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Tying Up Loose Ends - Correction
Date: 25 Sep 2000 19:52:33 GMT
[EMAIL PROTECTED] (Bryan Olson) wrote in
<8qo6h7$5fo$[EMAIL PROTECTED]>:
>Tim Tyler wrote:
>> Bryan Olson wrote:
>> : With no attack better than exhaustive
>> : search you have no way to rapidly eliminate any large class of keys.
>>
>> You might well have a method that works much faster than decrypting
>> blocks and analysing the plaintext for known characteristics.
>> The latter might require a number of blocks and take a non-trivial
>> volume of processing.
>
>Yes, you can work on reducing that constant. The mistake is
>pretending it does something to the keyspace. The lower
>bound on the work for exhaustive search increases
>exponentially in the size of the key.
>
>
>> : The problem is a conceptual error and cannot be fixed by adjusting
>> : terminology.
>>
>> I don't think so - the issue appears to be purely terminological.
>
>No. The effect of the keyspace is still there. If I give
>you a thousand bits of Blowfish (448-bit key) ciphertext and
>corresponding plaintext, what's the "effective keyspace"?
>
I would say it is very close to "ZERO" in most cases. However,
that does not mean it takes zero time. It would greatly depend
on the cipher. I would guess the NSA may have analyzed the total
method of BLOWFISH to use the info. The fact that the information
for breaking it is there does not mean that one who has little
knowledge could do it in his lifetime. Just as if you spoke
Navajo to me: all the information was available to the Japanese,
if they knew what to look for.
But when looking at crypto, one should strive so that even if
the attacker has a GOD computer that is infinitely fast, no single
solution can be found. There are not many software methods available
in the non-BLACK world for quantum computers yet. There are some in
the open, and IBM has a 5-qubit quantum computer. Do you really think
the NSA does not? Why design crypto that has only one solution when
it is so easy to have many? What is known about quantum computers is
that they work by doing many things at once; the solution collapses
down to a single state. If there is only one correct solution, then
that is the one it will collapse to. So why reduce the effective
keyspace when one does not have to?
David A. Scott
------------------------------
From: Albert Yang <[EMAIL PROTECTED]>
Subject: Re: Why is TwoFish better than Blowfish?
Date: Mon, 25 Sep 2000 20:05:15 GMT
I have read quite a few personal attacks on Mr. BS (as you refer to
him) and I refrain from commenting on most of them. But I do have to
say that the implication of Mr. BS putting in a back door, or
designing an algorithm and claiming a given level of security when he
knows that it is cracked (or crackable by the NSA), is a HUGE
accusation. So I am just throwing some caution in your general
direction. I have no way to prove that statement true or false; I
have no way to substantiate whether the NSA has or has not cracked
Blowfish and/or Twofish.
But I just caution against statements like these that go beyond ad
hominem.
Play nice everybody...
Albert
> Having known people who know the man who designed Blowfish or
> Twofish leads me to believe both ciphers were most likely broken by
> the NSA before they were introduced to the public. At least these
> are my feelings about these fishy ciphers. It seems like NSA humour
> to give both ciphers FISHY names.
> But since the idea of a cipher is security, it is plain stupid to
> say Twofish is better than Blowfish because Blowfish is a PC cipher.
> If one has a PC and is sending messages to someone with a PC, then
> why use a cipher whose ability to run on many machines would expose
> it to more attacks? Even if they algorithmically had the same level
> of security, which can't be proved anyway.
>
> David A. Scott
> --
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: On block encryption processing with intermediate permutations
Date: Mon, 25 Sep 2000 22:37:46 +0200
I should very much appreciate comments on the following idea:
Given a common block cipher of m cycles (for a Feistel
cipher, a cycle means 2 rounds, i.e. processing steps
resulting in all bits of the block being processed once),
we can, in the case of a software implementation, do the
processing of one cycle for all blocks of the message, then
perform a pseudo-random permutation of the words of the
entire message, thus rearranging the contents of each
individual block, before doing the processing of the
following cycle (again for all blocks of the message), and
so on till all m cycles are done.
This seems to be a viable alternative to achieving
interaction among blocks via block chaining. It may incur
more computing costs. On the other hand, it offers a means
of introducing more secret information than the key of the
cipher, e.g. via the seed of the PRNG used to effect the
pseudo-random permutation. But one can also avoid providing
that additional secret information by dividing the message
into two halves and applying sorting of the words of one
half to obtain data to rearrange the words of the other
half, and vice versa. (See the thread 'On pseudo-random
permutation' initiated by me on 21st Aug. We neglect for
our purpose here the fact that the bits in each half are
unlikely to be sufficiently random.)
Apparently, the scheme is not well suited for hardware
implementation. On the other hand, the scheme could be
scaled down in the sense that one does pseudo-random
permutation of the bytes of the block, thus limiting
all operations to within each individual block being
processed.
We note that for multiple encryptions with several block
encryption algorithms, the pseudo-random permutation of
the words of the message may also be done at the
interfaces of the algorithms instead of at the interfaces
of the cycles of the individual algorithms.
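A toy sketch of the scheme (the cycle function is a stand-in, not a
real cipher; for simplicity this version also permutes after the final
cycle, and the seed derivation is illustrative):

```python
import random

BS = 4  # words per block (toy size)

def cycle(block, k):
    """Stand-in for one cycle of a block cipher: invertible, NOT secure."""
    return [(w + k) % 256 for w in block]

def cycle_inv(block, k):
    return [(w - k) % 256 for w in block]

def perm_order(n, seed, c):
    """Pseudo-random permutation order for cycle c, derived from the seed."""
    order = list(range(n))
    random.Random(seed * 997 + c).shuffle(order)
    return order

def encrypt(msg, keys, seed):
    flat = list(msg)  # message length assumed a multiple of BS
    for c, k in enumerate(keys):
        # one cycle over every block of the message
        flat = [w for i in range(0, len(flat), BS)
                  for w in cycle(flat[i:i+BS], k)]
        # pseudo-random permutation of the words of the whole message
        order = perm_order(len(flat), seed, c)
        flat = [flat[j] for j in order]
    return flat

def decrypt(ct, keys, seed):
    flat = list(ct)
    for c in reversed(range(len(keys))):
        order = perm_order(len(flat), seed, c)
        inv = [0] * len(flat)
        for dst, src in enumerate(order):
            inv[src] = flat[dst]        # undo the permutation
        flat = [w for i in range(0, len(inv), BS)
                  for w in cycle_inv(inv[i:i+BS], keys[c])]
    return flat
```

The receiver only needs the shared seed to recompute each cycle's
permutation, so the scheme adds one secret beyond the cipher key.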
M. K. Shen
==========================
http://home.t-online.de/home/mok-kong.shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: A New (?) Use for Chi
Date: Mon, 25 Sep 2000 22:57:04 +0200
John Savard wrote:
>
> Perhaps because this is not practical by hand, or because it is not
> needed, it hasn't been mentioned in the references I've seen - which
> don't include everything publicly available, of course -
>
> it occurred to me that a part of the information in a contact chart
> for either a monalphabetic substitution - or for one of the alphabets
> in mixed-alphabet Vigenere - could be summarized in the following way:
>
> The letters found following some letter, as they constitute a set of
> letters, are a distribution. So the extent to which one letter has
> only a few contacts, or a wide variety of them, could be represented
> by the chi of the letters that follow it, and separately the chi of
> the letters that precede it. Thus, in addition to its frequency, each
> letter has two other simple numbers which might be useful in
> distinguishing it from other letters.
For a Vigenere, if the key is sufficiently long, couldn't
it be that the quantities you mentioned are not sufficiently
differentiating to be easily exploited? (I don't know,
just a conjecture.)
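The statistic described above can be sketched as follows, taking the
'chi' of a contact distribution to be the sum of its squared relative
frequencies (an index-of-coincidence-style reading of the term, which
is an assumption here):

```python
from collections import Counter

def contact_chi(text: str):
    """Per-letter chi of successor and predecessor contact distributions."""
    letters = [c for c in text.upper() if c.isalpha()]
    succ, pred = {}, {}
    for a, b in zip(letters, letters[1:]):
        succ.setdefault(a, Counter())[b] += 1
        pred.setdefault(b, Counter())[a] += 1
    def chi(counts):
        total = sum(counts.values())
        return sum((n / total) ** 2 for n in counts.values())
    return ({c: chi(k) for c, k in succ.items()},
            {c: chi(k) for c, k in pred.items()})
```

A letter that is nearly always followed by the same letter (like Q
before U in English) gets a successor chi near 1, while a letter with
varied contacts scores much lower, which is the distinguishing feature
John proposes.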
M. K. Shen
------------------------------
From: Darren New <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Software patents are evil.
Date: Mon, 25 Sep 2000 20:46:10 GMT
Jerry Coffin wrote:
> selling the product. If that's really the case, and they threatened
> a lawsuit, about all they'd have to do is attach a mildly edited copy
> of the lawsuit, where the patent holder says the product is the same
> as what's patented, and proof that they were selling it three years
> before.
IANAL. But there are business concerns, like, do you really want someone
telling the press that your startup is already being sued for patent
infringement? Especially when you're trying to get more funding?
> I doubt that's what really happened though: what I suspect really
> happened is that the person applied for the patent BEFORE they were
> selling the product, and it took around three years for the patent
> application to be processed.
Nope. The application date was 1997. We were selling it in 1994.
> They infringed on the patent for three
> years without knowing about it, and when the patent was issued, the
> patent holder put them on notice. After examining things, the
> company's attorneys undoubtedly realized that there was no way they
> could win, but to put a good face on things, they made noises about
> it being too expensive to fight rather than admitting that they'd
> simply been stealing something (albeit unknowingly) all along.
Nope. It was considered useful (business-wise) to obtain the exclusive
rights.
But I'll remember your comments for next time it comes up. I wasn't aware
there was any way to challenge a patent without going to court.
--
Darren New / Senior MTS & Free Radical / Invisible Worlds Inc.
San Diego, CA, USA (PST). Cryptokeys on demand.
"No wonder it tastes funny.
I forgot to put the mint sauce on the tentacles."
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Why is TwoFish better than Blowfish?
Date: Mon, 25 Sep 2000 20:36:38 GMT
In article <[EMAIL PROTECTED]>,
Albert Yang <[EMAIL PROTECTED]> wrote:
> I have read quite a bit of personal attacks on Mr. BS (as you refer to
> him as) and I refrain from comment on most of it. But I do have to say
> that the implication of Mr. BS putting in a back door or designing an
> algorithm and claiming a given level of security when he knows that it
> is cracked (or crackable by the NSA) is a HUGE accusation. So just
> throwing some caution in your general direction. I have no way to prove
> that statement to be true or false, I have no way to substantiate
> whether the NSA has or has not cracked both Blowfish and/or Twofish.
>
> But I just caution the tautologistic statements that exceed ad hominem.
Why? I know the NSA broke Scottu19, which is why he touts it so much.
Oh, proof? Um, left that in my other pants... hehehehe
His posts are funny, if not completely useless....
Tom
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Triple DES CBC test vectors
Date: Mon, 25 Sep 2000 23:16:36 +0200
MVJuhl wrote:
>
> I'm looking for test vectors for Triple DES in CBC mode.
>
> I have looked at NIST Special Publication 800-20, but it doesn't contain the
> tests I'm looking for.
>
> What I would like is encryption/decryption on 2 or 3 blocks of known
> plaintext, so I can verify that the chaining I've implemented is working.
But CBC involves only xor. If you xor the words of the
previous block's ciphertext with the words of the current
block's plaintext, that code (assuming a software
implementation) should be simple enough to be unlikely to
introduce errors, I believe.
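The chaining can indeed be tested independently of the cipher; a sketch
with a stand-in block transform (any invertible byte mapping will do
for checking the xor chaining, though it is of course not DES):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block(block: bytes) -> bytes:
    return bytes((b + 113) % 256 for b in block)   # stand-in, NOT DES

def toy_block_inv(block: bytes) -> bytes:
    return bytes((b - 113) % 256 for b in block)

def cbc_encrypt(pt: bytes, iv: bytes, bs: int = 8) -> bytes:
    prev, out = iv, b''
    for i in range(0, len(pt), bs):
        ct = toy_block(xor(pt[i:i+bs], prev))  # chain in previous ciphertext
        out, prev = out + ct, ct
    return out

def cbc_decrypt(ct: bytes, iv: bytes, bs: int = 8) -> bytes:
    prev, out = iv, b''
    for i in range(0, len(ct), bs):
        blk = ct[i:i+bs]
        out, prev = out + xor(toy_block_inv(blk), prev), blk
    return out

iv = bytes(8)
pt = b'ABCDEFGHABCDEFGH'        # two identical plaintext blocks
ct = cbc_encrypt(pt, iv)
assert cbc_decrypt(ct, iv) == pt
assert ct[:8] != ct[8:16]       # chaining makes identical blocks differ
```

Swapping the toy transform for a real Triple DES block function would
then exercise exactly the chaining logic the test vectors were wanted
for.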
M. K. Shen
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************