Cryptography-Digest Digest #765, Volume #12 Mon, 25 Sep 00 00:13:00 EDT
Contents:
Re: What make a cipher resistent to Differential Cryptanalysis? ("Paul L. Hodgkinson")
Re: What make a cipher resistent to Differential Cryptanalysis? (Tom St Denis)
Re: What make a cipher resistent to Differential Cryptanalysis? ("Scott Fluhrer")
Re: What make a cipher resistent to Differential Cryptanalysis? (Tom St Denis)
Re: Tying Up Loose Ends - Correction (Bryan Olson)
Re: What make a cipher resistent to Differential Cryptanalysis? (Tom St Denis)
Re: Tying Up Loose Ends - Correction (John Savard)
Re: 128-bit Secure LFSR ("Trevor L. Jackson, III")
Re: Again a topic of disappearing e-mail? (Benjamin Goldberg)
Re: Tying Up Loose Ends - Correction (John Savard)
Re: Tying Up Loose Ends - Correction (John Savard)
Re: 128-bit Secure LFSR (Whoops) (Jeff Gonion)
Re: Tying Up Loose Ends - Correction (SCOTT19U.ZIP_GUY)
Re: Software patents are evil. ("Paul Pires")
Re: Tying Up Loose Ends - Correction (SCOTT19U.ZIP_GUY)
Re: What make a cipher resistent to Differential Cryptanalysis? (SCOTT19U.ZIP_GUY)
Re: Question on biases in random-numbers & decompression (Terry Ritter)
----------------------------------------------------------------------------
From: "Paul L. Hodgkinson" <[EMAIL PROTECTED]>
Subject: Re: What make a cipher resistent to Differential Cryptanalysis?
Date: Mon, 25 Sep 2000 01:25:51 +0100
"Mok-Kong Shen" wrote
> No practical cipher is absolutely secure.
Which isn't entirely true.
A one-time pad can be shown to be theoretically unbreakable, though it requires
a truly random key of the same length as the message, used only once.
Such a pad was used during the Cold War to encrypt communications between Moscow
and Washington, with the pad material distributed by secure courier.
(ref. www.cacr.math.uwaterloo.ca/hac)
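For concreteness, a one-time pad is nothing more than XOR against a fresh random key; a minimal C sketch (assuming, as above, that the key is truly random, as long as the message, and never reused):

```c
#include <stddef.h>

/* One-time pad: XOR each message byte with a fresh, truly random key byte.
   Because XOR is its own inverse, the same routine encrypts and decrypts.
   Security holds only if the key is uniformly random, at least as long as
   the message, and never reused. */
void otp_xor(const unsigned char *in, const unsigned char *key,
             unsigned char *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ key[i];
}
```

Applying the same routine twice with the same key recovers the plaintext.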
Paul
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: What make a cipher resistent to Differential Cryptanalysis?
Date: Mon, 25 Sep 2000 00:22:02 GMT
In article <8qm2n1$sai$[EMAIL PROTECTED]>,
"Scott Fluhrer" <[EMAIL PROTECTED]> wrote:
>
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> >
> >
> > "David C. Barber" wrote:
> > >
> > > DES, for example, is considered resistant to Differential Cryptanalysis,
> > > particularly in its selection of S-boxes. What about them, or any cipher,
> > > makes it DC resistant?
> >
> > I believe that one good way is to arrange to have the
> > S-boxes of the cipher be all different and to have
> > them either key-dependent or fixed but with their
> > ordering dependent on the key. I'd like to know of references
> > to analysis results for such situations, if any.
> >
> Well, it's not a general attack, but one cipher with key-dependent s-boxes
> that fell to DC would be Mercy (presented at FSE 2000 by Paul Crowley). The
> reference would be:
>
> http://cluefactory.cluefactory.org.uk/paul/mercy/
>
> BTW: I'd be skeptical of a cipher that relied on key-dependent ordering of
> s-boxes to be resistant to DC. Unless it used a lot of s-boxes, I'd think
> the attacker would be able to guess the ordering, and attack based on that
> guess. If the s-boxes aren't resistant enough without the guess, I wouldn't
> think that that extra work would really make the difference...
That's not entirely correct. Standard decorrelation (pair-wise) is
immune to order-2 attacks when used correctly. I could therefore make
a block cipher with random "sboxes" that is immune to diff attacks. In
fact TC6b (the eight-round variant) is secure against all order-2
attacks. I think higher-order derivatives could possibly break it but
I know too little to make an educated guess.
Tom
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: "Scott Fluhrer" <[EMAIL PROTECTED]>
Subject: Re: What make a cipher resistent to Differential Cryptanalysis?
Date: Sun, 24 Sep 2000 17:19:00 -0700
Tom St Denis <[EMAIL PROTECTED]> wrote in message
news:8qm5qr$rk1$[EMAIL PROTECTED]...
> In article <8qm2n1$sai$[EMAIL PROTECTED]>,
> "Scott Fluhrer" <[EMAIL PROTECTED]> wrote:
> > BTW: I'd be skeptical of a cipher that relied on key-dependent ordering of
> > s-boxes to be resistant to DC. Unless it used a lot of s-boxes, I'd think
> > the attacker would be able to guess the ordering, and attack based on that
> > guess. If the s-boxes aren't resistant enough without the guess, I wouldn't
> > think that that extra work would really make the difference...
>
> That's not entirely correct. Standard decorrelation (pair-wise) is
> immune to order-2 attacks when used correctly. I could therefore make
> a block cipher with random "sboxes" that is immune to diff attacks. In
> fact TC6b (the eight round variant) is secure against all order-2
> attacks. I think higher order derivatives could possibly break it but
> I know too little to make an educated guess.
Actually, I was referring to key-dependent ordering of fixed s-boxes. If the
s-boxes are not secure with a public ordering, the extra uncertainty
generated by a key-dependent ordering would appear to be unlikely to be
enough...
--
poncho
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: What make a cipher resistent to Differential Cryptanalysis?
Date: Mon, 25 Sep 2000 00:47:18 GMT
In article <8qm62c$lqr$[EMAIL PROTECTED]>,
"Paul L. Hodgkinson" <[EMAIL PROTECTED]> wrote:
>
> "Mok-Kong Shen" wrote
>
> > No practical cipher is absolutely secure.
>
> Which isn't entirely true.
> A one-time pad can be shown to be theoretically unbreakable, which
> requires a random key of the same length as the message.
> Such a pad was used in the Cold War to encrypt communications between
> Moscow and Washington, sent by secure courier.
>
> (ref. www.cacr.math.uwaterloo.ca/hac)
The BBS stream cipher is thought to be (or was that proven?) as secure
as factoring. While not an absolute, it's a close match.
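For illustration, a toy Blum-Blum-Shub sketch in C. The modulus 383*503 here is purely illustrative and far too small for any real use; an actual instance needs a large Blum integer whose prime factors stay secret:

```c
#include <stdint.h>

/* Toy Blum-Blum-Shub generator.  n = p*q with p = q = 3 (mod 4); the
   security argument reduces distinguishing the output from random to
   factoring n (via quadratic residuosity).  383 and 503 are chosen only
   so the arithmetic fits in a uint64_t -- they are NOT secure sizes. */
static const uint64_t bbs_n = 383ULL * 503ULL;
static uint64_t bbs_x;

void bbs_seed(uint64_t s)       /* s must be coprime to n */
{
    bbs_x = (s * s) % bbs_n;
}

int bbs_bit(void)               /* next bit: lsb of x := x^2 mod n */
{
    bbs_x = (bbs_x * bbs_x) % bbs_n;
    return (int)(bbs_x & 1);
}
```

The same seed always reproduces the same bit stream, which is what makes it usable as a stream cipher keystream.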
Tom
------------------------------
From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: Tying Up Loose Ends - Correction
Date: Mon, 25 Sep 2000 00:54:47 GMT
Tim Tyler wrote:
> David Hopwood wrote:
> : Tim Tyler wrote:
>
> :> ... David is discussing the effect of adding an additional section
> :> of known plaintext to the end of the file. This normally has the
> :> effect of decreasing the keyspace by almost exactly five bits -
> :> provided the effective keyspace doesn't go negative, of course.
>
> : With all due respect, this is complete nonsense.
>
> : When we talk about "reducing the keyspace", that means reducing the
> : size of the set of keys that need to be considered at all; it does
> : not mean finding a test that will eliminate keys by testing them one
> : by one.
>
> I try to use the term "effective keyspace" when discussing this type of
> rapid elimination of whole classes of keys - I used the word "effective"
> in the section quoted above - though I see that I could have used it again
> where I did not, and gained some clarity.
>
> The technique can reduce the time taken to deal with specific keys,
> and can /sometimes/ speed up the process of a keyspace search.
>
> I'm reasonably happy with describing the effect as "a reduction in the
> effective keyspace" - despite the fact that the keys still exist, and may
> still require /some/ consideration.
It's still complete nonsense. With no attack better
than exhaustive search you have no way to rapidly
eliminate any large class of keys.
The problem is a conceptual error and cannot be fixed
by adjusting terminology. A trial decryption with one
candidate key takes some minimum constant time. That
time, multiplied by the number of effective keys puts a
lower bound on the time for exhaustive search.
The actual time may also include some constant due to
the effort to recognize a correct decryption. That
constant is independent of the keyspace. Extra bits of
redundancy affect only that trivial constant and do
nothing whatsoever to the keyspace.
--Bryan
--
email: bolson at certicom dot com
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: What make a cipher resistent to Differential Cryptanalysis?
Date: Mon, 25 Sep 2000 01:10:13 GMT
In article <8qm6ru$n04$[EMAIL PROTECTED]>,
"Scott Fluhrer" <[EMAIL PROTECTED]> wrote:
>
> Tom St Denis <[EMAIL PROTECTED]> wrote in message
> news:8qm5qr$rk1$[EMAIL PROTECTED]...
> > In article <8qm2n1$sai$[EMAIL PROTECTED]>,
> > "Scott Fluhrer" <[EMAIL PROTECTED]> wrote:
>
> > > BTW: I'd be skeptical of a cipher that relied on key-dependent
> > > ordering of s-boxes to be resistant to DC. Unless it used a lot of
> > > s-boxes, I'd think the attacker would be able to guess the ordering,
> > > and attack based on that guess. If the s-boxes aren't resistant
> > > enough without the guess, I wouldn't think that that extra work
> > > would really make the difference...
> >
> > That's not entirely correct. Standard decorrelation (pair-wise) is
> > immune to order-2 attacks when used correctly. I could therefore
> > make a block cipher with random "sboxes" that is immune to diff
> > attacks. In fact TC6b (the eight-round variant) is secure against
> > all order-2 attacks. I think higher-order derivatives could possibly
> > break it but I know too little to make an educated guess.
>
> Actually, I was referring to key-dependent ordering of fixed s-boxes.
> If the s-boxes are not secure with a public ordering, the extra
> uncertainty generated by a key-dependent ordering would appear to be
> unlikely to be enough...
Oh sorry, yes you're right. Let's consider DES with ultra-weak s-boxes:
at most you add a factor of 8! to the attack's work (or 16(8!) if you use
round-independent reorderings), which is not a heck of a lot.
Tom
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Tying Up Loose Ends - Correction
Date: Mon, 25 Sep 2000 02:17:14 GMT
On Sat, 23 Sep 2000 01:41:42 +0100, David Hopwood
<[EMAIL PROTECTED]> wrote, in part:
>With all due respect, this is complete nonsense.
>When we talk about "reducing the keyspace", that means reducing the
>size of the set of keys that need to be considered at all; it does
>not mean finding a test that will eliminate keys by testing them one
>by one.
Although you may be correcting a genuine error in the previous
posting, I should point out that David Scott is talking about
something completely different.
He is not claiming that imperfections in the compression used reduce
the number of possible keys in the cipher used afterwards.
Rather, *assuming* that the keyspace can be searched, what an
imperfection in compression does is reduce the number of the candidate
plaintexts, already deciphered, that need to be laboriously
distinguished from the gibberish that decryption with the wrong key
would produce.
It certainly is true that "perfect" compression, were it attainable,
would confer a kind of information-theoretic security (although still
short of that provided by the one-time pad) on messages. However, it
is equally true that no compression scheme could possibly be devised
that would, for a given ciphertext, cause decryption with DES to
produce 2^56 equally plausible plaintexts (and, of course, the
ciphertext would be another plausible plaintext, hence perfect
compression would also be steganographic!).
However, Mr. Scott does not claim to have achieved compression that is
perfect in that sense.
He simply advocates using compression that is carefully designed to
leave no *obvious* redundancy for the cryptanalyst: specifically, when
Huffman coding is used, there is a redundancy caused by the message
ending on a symbol boundary _which can easily be removed_, so why not?
I may quibble over technical details, and like everyone else, I am
dismayed by his exaggerated claims for the urgency of this minor
point, but the basic notion is valid.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
Date: Sun, 24 Sep 2000 22:32:54 -0400
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: 128-bit Secure LFSR
Tom St Denis wrote:
> In article <[EMAIL PROTECTED]>,
> "Trevor L. Jackson, III" <[EMAIL PROTECTED]> wrote:
> > Tom St Denis wrote:
> >
> > > I reposted my slfsr to my website at
> > >
> > > http://www.geocities.com/tomstdenis/files/slfsr.c
> > >
> > > I am using a single 128-bit LFSR in self-shrinking mode. I would
> > > appreciate someone who could verify the polynomial used. I am using
> > > the LFSR in Galois config. I made the LFSR poly with a program called
> > > LFSR.EXE that I found on an ftp that was posted here a bit ago.
> > >
> > > It's compact code, albeit not that efficient (are any LFSRs
> > > efficient?). It features a simple rekeying :), fast enough for
> > > desktop usage and it's really simple...
> >
> > Sparse LFSRs can be implemented very efficiently (~memory bandwidth).
> > As the number of terms in the polynomial goes up, so does the work
> > that has to be performed to produce each output.
>
> That's not true. Consider Galois Config LFSRs.
If you think carefully about the general problem you'll find it is true.
Working with a machine that is large with respect to the seed makes it easy
to be deceived about this. If you were working on a very tiny machine, say
a UTM, you'd find the work scales with the tap count. Similarly, on a
modern CPU, implementing a large FSR (100s of bits wide) you'd find it also
true, even in the Galois configuration.
For the Galois approach the limit seems to be a tap density such that on
average each "word" of the register has a tap. Once you've reached that
limit they all look alike. But when you are below that limit you can
optimize the update routine by skipping the words that have no tap present.
The highest-speed implementation makes one pass over the register for each
tap. Wide registers and few taps lead to implementations as time-efficient
as additive generators, with better space-efficiency.
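The Galois-configuration update under discussion can be sketched for a single 32-bit word; the tap mask below is illustrative only (it is not the 128-bit polynomial from slfsr.c, and has not been checked for primitivity):

```c
#include <stdint.h>

/* One step of a 32-bit Galois-configuration LFSR.  Each step costs a
   shift plus, when the bit shifted out is 1, one XOR with the tap mask.
   A wide software register pays only per *word* carrying a tap, which
   is the point made above about sparse polynomials.
   POLY is an illustrative mask, not slfsr.c's polynomial. */
#define POLY 0x80200003u

uint32_t lfsr_step(uint32_t *state)
{
    uint32_t out = *state & 1u;     /* bit shifted out this step */
    *state >>= 1;
    if (out)
        *state ^= POLY;             /* fold the taps back in */
    return out;
}
```

A self-shrinking construction would then consume these output bits in pairs, keeping or discarding according to the first bit of each pair.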
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Again a topic of disappearing e-mail?
Date: Mon, 25 Sep 2000 02:35:02 GMT
Simon Johnson wrote:
[snip]
> Store your stuff on floppy disk...... You can quickly destroy the
> media if necessary. I find a blender the best technique :)
Isn't that bad for the blender?
--
... perfection has been reached not when there is nothing left to
add, but when there is nothing left to take away. (from RFC 1925)
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Tying Up Loose Ends - Correction
Date: Mon, 25 Sep 2000 02:34:59 GMT
On Fri, 22 Sep 2000 11:47:35 GMT, Tim Tyler <[EMAIL PROTECTED]> wrote, in
part:
>Has this problem been studied? Are there padding schemes that do better
>than (say) prepending a self-terminating length field to the start of
>the message? This results in many "impossible" padded messages :-<
If the message is to be padded to a multiple of 8 bits, insert a 3-bit
length field, indicating the number of real bits in the last byte. No
termination is required, and no message becomes impossible as a
result.
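A minimal sketch of this padding in C. Placing the 3-bit field in its own leading header byte is an assumption made here for illustration; the scheme itself does not fix where the field goes:

```c
#include <stddef.h>
#include <string.h>

/* 3-bit length-field padding: messages occupy a whole number of bytes,
   and a 3-bit field records how many bits of the final byte are real
   (0 meaning all 8).  Every value of the field is legal, so no padded
   message is "impossible".  The field's position (a leading header
   byte here) is an illustrative choice. */
size_t pad(const unsigned char *msg, size_t nbytes, int bits_in_last,
           unsigned char *out)
{
    out[0] = (unsigned char)(bits_in_last & 7);  /* the 3-bit field */
    memcpy(out + 1, msg, nbytes);
    return nbytes + 1;
}

size_t unpad(const unsigned char *in, size_t len, int *bits_in_last)
{
    *bits_in_last = in[0] & 7;                   /* 0 => last byte full */
    return len - 1;                              /* payload length */
}
```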
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Tying Up Loose Ends - Correction
Date: Mon, 25 Sep 2000 02:32:38 GMT
On Mon, 25 Sep 2000 00:54:47 GMT, Bryan Olson <[EMAIL PROTECTED]>
wrote, in part:
>The problem is a conceptual error and cannot be fixed
>by adjusting terminology. A trial decryption with one
>candidate key takes some minimum constant time. That
>time, multiplied by the number of effective keys puts a
>lower bound on the time for exhaustive search.
>The actual time may also include some constant due to
>the effort to recognize correct decryption. That
>constant is idependent of the keyspace. Extra bits of
>redundancy effect only that trivial constant and do
>nothing whatsoever to the keyspace.
That is correct, except that as redundancy goes to zero, that constant
goes to infinity. It actually *can* take longer to recognize plaintext
than to do a trial decryption with DES.
For example, suppose the compression scheme is designed so that
decompression produces texts with the same mix of word lengths, and
the same frequencies and contact frequencies, as ordinary text. A
multi-mode Huffman coding, based on predefined tables, can produce
such a result: there are more compression schemes than LZW.
(Note, however, that LZW and its relatives - *and* Adaptive Huffman,
which is David Scott's choice - must produce obvious nonsense most of
the time. Yet, this type of scheme does need to be available as an
option, since not only text may be encrypted.)
Of course, even that level of compression would be quick work for a
dictionary of the words in multiple languages, and the NSA certainly
has those in computerized form. Thus, what he is seeking to do is
indeed difficult.
But because recognizing a plaintext involves intelligence, while
performing a block cipher decryption is a mechanical process, it is
not completely certain that the time to recognize plaintext, given a
sufficiently elaborate compression method, will always be less than
the time required to perform a trial decryption.
Plus, of course, redundancy is what makes ciphertext-only
cryptanalytic attacks on ciphers _possible_, by giving the
cryptanalyst some knowledge about the plaintext. So if the size of the
keyspace does rule out brute-force searching, reducing plaintext
redundancy directly reduces what the cryptanalyst has to work with
_when known plaintext is not available_.
Since known plaintext sometimes isn't available to an attacker (if it
were always available, there would be no need to break the cipher), and
since - although we should use ciphers designed to resist
known-plaintext attacks - one can't ever really be _sure_ of
work-factor security, minimizing redundancy has merit.
John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
Date: Sun, 24 Sep 2000 21:50:05 -0500
From: [EMAIL PROTECTED] (Jeff Gonion)
Subject: Re: 128-bit Secure LFSR (Whoops)
In article <8ql154$kau$[EMAIL PROTECTED]>, Tom St Denis
<[EMAIL PROTECTED]> wrote:
> Learn to read C code first my friend.
>
> >> } else
> >> r = 0
> >> m[0] = ...
>
> The m[0] is not part of the else clause.
>
You are, of course, correct.
Now that's just a little embarrassing, isn't it...
- Jeff
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Tying Up Loose Ends - Correction
Date: 25 Sep 2000 03:21:08 GMT
[EMAIL PROTECTED] (Mok-Kong Shen) wrote in <39CE33DD.E25C1557@t-
online.de>:
>
>
>"SCOTT19U.ZIP_GUY" wrote:
>>
>> [EMAIL PROTECTED] (Mok-Kong Shen) wrote:
>> >
>> >"SCOTT19U.ZIP_GUY" wrote:
>> >> If your "STATIC HUFFMAN TREE IS SECRET" then having
>> >> an EOF symbol still sucks. I am not saying finding the tree is
>> >> easy; it may be very hard. But still, the EOF symbol is likely
>> >> to be the longest symbol and the last symbol. Why use it at
>> >> all? But if you can't see a reason, then by all means you can
>> >> use it.
>> >
>> >Since the whole tree is unknown, how does the opponent
>> >identify the eof, even if he knows it is longer than
>> >the rest?
>>
>> Gee I guess he looks at the end of file for a clue
>
>What kind of clue?? Please show an example.
>
>
>>
>> >BTW, does your program also deal with word or block
>> >boundaries in addition to byte boundaries?
>
>> Check my site out. I doubt you would believe me if I told
>> you, so I will not.
>
>If you wouldn't provide that simple information, it
>means either the answer is no or you yourself don't
>know that exactly.
>
Then you're too stupid to read, 'cause that ain't what it
means. It means: jerk, look at the damn webpage. I can't
spoon-feed and burp you forever. Grow up, Mok.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website **now all allowed**
http://members.xoom.com/ecil/index.htm
Scott LATEST UPDATED source for scott*u.zip
http://radiusnet.net/crypto/ then look for
sub directory scott after pressing CRYPTO
Scott famous Compression Page
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
I leave you with this final thought from President Bill Clinton:
------------------------------
From: "Paul Pires" <[EMAIL PROTECTED]>
Subject: Re: Software patents are evil.
Date: Sun, 24 Sep 2000 20:23:58 -0700
Trevor L. Jackson, III <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Paul Pires wrote:
>
> > Twilight zone. I responded below but.....
> >
> > Is it just me or did this Re: post just get lopped off the branch
> > by Usenet? Do you see a new posting from me immediately below
> > this one? or are we still attached to the original thread?
> >
> > Don't laugh, Usenet has been really weird lately.
>
> Replication does seem to be sporadic. Did you notice a 9-point response
> from me a day or so ago?
>
> >
> >
> > Bill Unruh <[EMAIL PROTECTED]> wrote in message
> > news:Pine.LNX.4.10.10009231520010.7529-
> >
> > <SNIP>
> > >
> > > Please read. Prove does not mean "it follows logically and ineluctably
> > > from some premises." A proof is a test of the truth of a statement. That
> > > is what the patent office does. That is what the applicant must do. He
> > > must supply to the patent office all evidence which he knows of which
> > > might invalidate the patent. He must swear that this patent covers new
> > > material. These are all standards of proof. Unfortunately, as you point
> > > out, patent examiners do not know everything about everything and may
> > > well be convinced when they should not be. That is why bringing some
> > > sort of adversarial role into the patent process might help. I.e., a
> > > patent can be challenged via the patent office by the same process as
> > > the patent was granted.
> >
> > I'll ignore your testy intro. I know what prove means and I even guessed
> > at your usage. But this idea above actually sounds like a good idea. I'm
> > not being nasty; I think this is neat. One problem: how can you retract
> > or reduce a patent once granted without due process? Note: a regulatory
> > action is not due process.
> >
> > Maybe combine this with a provisional status. Something like: a patent
> > can be forced back into the review process (one time only) within 6 mos.
> > after granting if certain challenge requirements are met. These would
> > have to be pretty stringent or it will be weak to a
> > denial-of-service-by-flood attack.
> >
> > Naw, this just won't work. If it is reducible, it is not a patent yet.
> > No one would license a provisional patent, so you might as well not
> > do it.
>
> Disagree. Some potential licensees might be hesitant, but others, who
> examined the patent and found it worthy, would still enter a licensing
> agreement. The patent applicant could easily influence this by offering
> more favorable terms during the probation period, or making license
> payments conditional upon the completion of the probationary period.
> Thus I doubt the effect would be significant.
You are probably right. I have licensed patents in the pending stages.
In these cases it's like the customer is buying a placeholder. If the
placeholder does not come to fruition, the customer walks, or you have a
"know-how" or trade-secret fallback. In our case, we wanted to be
scrupulously fair, so we allowed the customer to convert the "deposit" to
an equity position in the event of an outcome that did not meet their
needs. I know it sounds funny, but they did appreciate us making the
effort.
Paul
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Tying Up Loose Ends - Correction
Date: 25 Sep 2000 03:18:41 GMT
[EMAIL PROTECTED] (John Savard) wrote in
<[EMAIL PROTECTED]>:
>On Fri, 22 Sep 2000 11:47:35 GMT, Tim Tyler <[EMAIL PROTECTED]> wrote, in
>part:
>
>>Has this problem been studied? Are there padding schemes that do better
>>than (say) prepending a self-terminating length field to the start of
>>the message? This results in many "impossible" padded messages :-<
>
>If the message is to be padded to a multiple of 8 bits, insert a 3-bit
>length field, indicating the number of real bits in the last byte. No
>termination is required, and no message becomes impossible as a
>result.
>
Actually, the fact is that no message becomes impossible as
a result of my compression alone. What John had been complaining
about before was that the last byte is statistically biased, since,
for example, the last byte being 0x80 would appear more often than 1/256
of the time for certain classes of files.
One could use his method of padding: for the last byte you put a
three-bit code and it tells how many bits are random fill. But even in
this case it has to be applied in a way that is not biased. That is, he
can't just add it to the end of a full Huffman symbol. He would still
need to do something like my endings and use it for where the trailing
zeros appear (assuming you're not using my focused Huffman coding). A
far better way would be to put the 3 bits at the front of the file to
tell where the file ends bit-wise in the last byte, then encrypt up to
that point and then add the so-called random data after the encryption.
David A. Scott
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: What make a cipher resistent to Differential Cryptanalysis?
Date: 25 Sep 2000 03:24:34 GMT
[EMAIL PROTECTED] (Tom St Denis) wrote in <8qla5r$ts1$[EMAIL PROTECTED]>:
>Ok my previous reply was a bit heated... let's view the pros and cons
>
>pros: Really small cipher, really fast cipher, really simple cipher
>cons: Linearly correlated functions
>
>Is that a problem? Um, not really. There is a linear diffusion step
>that destroys the parallelism. Chances are very good that you can't
>exploit the linear correlation between the 4x4 sbox usages.
>
>Look at Twofish, similar idea. You can precompute the MDS/sboxes to
>get 4 GF mults at once (using four look-ups), to me that seems like a
>good design.
>
>Look at DES, you can do the LUTs in parallel, and the P-box diffuses
>the bits.
>
>I have yet to see your attack on Serpent, Twofish or DES that's faster
>than brute force. In fact you have yet to attack any cipher.
>
>I on the other hand have attacked several ciphers, have a clue about
>cipher design (although lots to learn still) and don't publicly
>belittle others.
>
>Can you behave more civilly, please?
>
Ok my previous reply was a gang as that you can do it off. Look at DES
that's faster then brute force. Look at DES that's
faster then brute force. There is a bit heated... let's view the P-box
diffuses the butter?". Well, really simple cipher, but says he
would rip and cons pros: Really small cipher, and don't fetch him, really
fast cipher design. There is a blossom; this money!
There is a bit heated... let's view the raft and mighty soon the duke got
home late to him. I don't publicly belittle others. Can you
come from over in parallel, and the bits. Can you behave more civil please?
You prepared to attack any cipher cons: Where's
the bits. We was a good design. There is a problem? You can do it first
rate. Well, I have yet to get 4 GF mults at the pros:
Linearly correlated functions Is that a clue about cipher design. I kin
make out, after breakfast, really simple cipher, really
simple cipher, and Tom put on a bit heated... let's view the other hand
have attacked several ciphers, and took a good design
although lots to him, Twofish, and mighty soon the bits.
David A. Scott
------------------------------
From: [EMAIL PROTECTED] (Terry Ritter)
Crossposted-To: comp.compression
Subject: Re: Question on biases in random-numbers & decompression
Date: Mon, 25 Sep 2000 03:32:29 GMT
On Sun, 24 Sep 2000 21:26:07 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt Benjamin Goldberg
<[EMAIL PROTECTED]> wrote:
>[...]
>Does anyone know if this is right, or close to right? And if it isn't,
>is using an arithmetic (de)coder on the right track to get an unbiased
>distribution while discarding a minimum amount of random data from the
>bit stream?
For cryptographic use, I think there is some advantage to simply
discarding parts of the stream at random, and if that is the way the
stream is used, so much the better.
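Discarding at random is one option; for comparison (this is the textbook alternative, not what Ritter proposes), the classical von Neumann extractor takes bits in pairs, emits 0 for 01 and 1 for 10, and discards 00/11, removing any fixed per-bit bias at the cost of throwing away at least half the stream:

```c
#include <stddef.h>

/* Von Neumann debiasing: consume input bits in pairs.  When the pair
   differs, emit its first bit (01 -> 0, 10 -> 1); discard 00 and 11.
   For independent bits with any fixed bias, the two kept outcomes are
   equally likely, so the output is exactly unbiased.
   Returns the number of output bits produced. */
size_t vn_debias(const unsigned char *bits_in, size_t n_in,
                 unsigned char *bits_out)
{
    size_t n_out = 0;
    for (size_t i = 0; i + 1 < n_in; i += 2) {
        if (bits_in[i] != bits_in[i + 1])
            bits_out[n_out++] = bits_in[i];
    }
    return n_out;
}
```

Note the extractor assumes the input bits are independent; correlated bits need a different treatment.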
---
Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************