Cryptography-Digest Digest #865, Volume #11 Fri, 26 May 00 16:13:00 EDT
Contents:
Re: Short Secure Serial Numbers (David A. Wagner)
Re: PGP wipe how good is it versus hardware recovery of HD? (Vernon Schryver)
Re: Q: appropriate number of key-uses before replacement? ([EMAIL PROTECTED])
Re: Another sci.crypt Cipher (David A. Wagner)
Re: Another sci.crypt Cipher (David A. Wagner)
Re: PGP wipe how good is it versus hardware recovery of HD? ("Trevor L. Jackson, III")
Re: Short Secure Serial Numbers (Roger Schlafly)
Re: PGP wipe how good is it versus hardware recovery of HD? ("Trevor L. Jackson, III")
Re: Short Secure Serial Numbers (Mike Rosing)
Re: Q: appropriate number of key-uses before replacement? (Mike Rosing)
Re: Short Secure Serial Numbers (David A. Wagner)
Re: Short Secure Serial Numbers (Roger Schlafly)
Re: Is OTP unbreakable? (Mickey McInnis)
Re: Short Secure Serial Numbers (David A. Wagner)
Re: Encryption within newsgroup postings (Anton Stiglic)
Re: Is OTP unbreakable? (Joaquim Southby)
Re: Verifying CRLs - how practically to do that? ([EMAIL PROTECTED])
Re: A Family of Algorithms, Base78Ct (wtshaw)
Re: Q: appropriate number of key-uses before replacement? ([EMAIL PROTECTED])
Re: Short Secure Serial Numbers (Roger Schlafly)
Re: Anti-Evidence Eliminator messages, have they reached a burn-out point? (EE Support)
Re: Another sci.crypt Cipher (tomstd)
Re: Another sci.crypt Cipher (tomstd)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: Short Secure Serial Numbers
Date: 26 May 2000 09:23:54 -0700
One possible solution, if you want a very small signature size,
is to use elliptic curves.
------------------------------
From: [EMAIL PROTECTED] (Vernon Schryver)
Subject: Re: PGP wipe how good is it versus hardware recovery of HD?
Date: 26 May 2000 10:28:15 -0600
In article <8glvp3$[EMAIL PROTECTED]>,
Guy Macon <[EMAIL PROTECTED]> wrote:
> ...
>Some caching disk controllers turn multipass overwrites into
>single pass overwrites without telling you.
That's merely one tip of that iceberg. Others include automatic relocation
of sick sectors by disk controllers, collapsing multiple application
writes into single writes by file systems (do you really trust the
WIN32 CreateFile() no-caching or write-through modes or the UNIX sync()
or fsync()?), some kinds or implementations of RAID, and log-based
file systems where the notion of overwriting by any application program
is foolish ignorance.
Any kind of overwriting scheme for computer media makes as much sense
as overwriting or erasing paper. Some methods of erasing, eradicating,
bleaching, or overwriting paper are more effective than others, but only
a fool relies on any of them to obliterate anything that matters.
Vernon Schryver [EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Q: appropriate number of key-uses before replacement?
Date: Fri, 26 May 2000 16:27:28 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (S. T. L.) wrote:
> <<For a 160-bit MAC, with a 2048-bit RSA, how many
> encryptions are too many? Changing keys often
> means the keys are more susceptible to tampering
> in-transit...>>
>
> I don't know, but I'm almost certain it'll be something godawful like
> 10^81 messages. I.E., don't worry about it, unless you're using a sucky
> algorithm, in which case you should first worry about the algorithm.
[Heh, 137. Cool. :]
So, Lyalc suggests changing keys with every message. STL137 suggests
changing every few universe-lifetimes. While I can see good arguments
for both positions, Lyalc's suggestion is not practical (I am working
under the assumption that there will be several thousand encryptions and
signings per day) and STL's suggestion (though convenient :) leaves us
open to fraud if the single key is ever compromised.
A middle ground must exist; Verisign hands out new keys every year,
correct? Are the only issues time-based, or per-encryption based? A nice
mixture of both? (ie, a key never used in millions of years is not
likely to be cracked, but a key used 2^80 times in a day leaks how many
bits..? :)
Again, references or suggestions much appreciated. :) Thanks Lyalc and
STL. :)
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: Another sci.crypt Cipher
Date: 26 May 2000 09:41:24 -0700
In article <[EMAIL PROTECTED]>,
Mark Wooding <[EMAIL PROTECTED]> wrote:
> Don't look to your S-boxes for the problem: look to your diffusion.
Indeed. You could replace the bit permutation with an arbitrary bijective
matrix: say, an MDS matrix. This won't have much impact on the performance
of the cipher, so why not?
Add an MDS matrix + a strong key schedule, and you get "Onefish". :-)
------------------------------
From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: Another sci.crypt Cipher
Date: 26 May 2000 09:43:13 -0700
In article <8gkq7v$bkr$[EMAIL PROTECTED]>, <[EMAIL PROTECTED]> wrote:
> There is a class of 2^32 weak keys.
There is also a larger class of 2^96 weak keys: whenever subkey 0 = subkey 1,
you get a weak key where the key schedule is palindromic. For instance, these
weak keys have 2^32 fixed points. This may not be a huge deal in practice,
unless you want to use this block cipher in a hash function.
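The fixed-point observation follows from a general property: a Feistel cipher whose subkey schedule reads the same forwards and backwards is its own inverse. A toy sketch below demonstrates this; the 8-bit halves, round function, and subkey values are invented for illustration and are not TC1 itself.

```python
# Toy demonstration: a Feistel network with a palindromic subkey schedule
# is an involution (encrypting twice returns the plaintext), so it has
# many fixed points -- the weak-key property described above.

def feistel_round(l, r, k):
    # any deterministic round function of (r, k) works; this one is made up
    f = ((r * 5 + k) ^ (r >> 3)) & 0xFF
    return r, l ^ f

def encrypt(block, subkeys):
    l, r = block >> 8, block & 0xFF     # split 16-bit block into 8-bit halves
    for k in subkeys:
        l, r = feistel_round(l, r, k)
    return (r << 8) | l                 # final swap, DES-style

palindromic = [3, 7, 11, 11, 7, 3]      # schedule reads the same both ways

# E(E(x)) == x for every block: the cipher is an involution
assert all(encrypt(encrypt(x, palindromic), palindromic) == x
           for x in range(2**16))

fixed = sum(1 for x in range(2**16) if encrypt(x, palindromic) == x)
print("fixed points:", fixed, "out of", 2**16)
```

Because encryption equals decryption under such a key, the cipher acts as an involution on the block space, and a random involution on 2^64 points is expected to have on the order of 2^32 fixed points; that is mostly harmless for encryption but dangerous if the cipher is used inside a hash construction.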
------------------------------
Date: Fri, 26 May 2000 13:24:44 -0400
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: PGP wipe how good is it versus hardware recovery of HD?
tomstd wrote:
> In article <[EMAIL PROTECTED]>,
> Lee Herfel <[EMAIL PROTECTED]> wrote:
> >I have a program called shredder which I believe overwrites a file 7
> >times with random data to try and prevent hardware recovery of deleted
> >files aka the story in the WSJ. Does the PGP wipe function do this or
> >does it only overwrite once?
>
> Er, all you need to do is overwrite a file once to completely
> kill the information.
>
> Despite what others think, once you overwrite the information on
> disk once or twice, it's completely gone. This is because the
> hard disks are so dense there is no room for 'extra' noise.
This is not true. Thermal variations alone will induce track positioning and
width variations.
Further you can look up "head settle time". It doesn't end when the head is
perfectly still, it ends when the head is stable enough to avoid overwriting
adjacent tracks. In fact the head never settles perfectly because it flies just
above the surface of the recording medium, and air turbulence continually
disturbs it. In fact this air turbulence produces an effect so distinctive that
it is a proposed source of entropy for hardware RNGs. If the head is moving
laterally with respect to the track then there will be traces of the previous
flight path where it does not match the current flight path.
Variations in the timing of flux changes also provide an avenue for recovery of
previously written data, both because overwriting does not completely erase the
previous state and because variations in rotational speed make it impossible to
put the new flux change exactly on top of an old flux change even when you write
exactly the same data.
------------------------------
From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: Short Secure Serial Numbers
Date: Fri, 26 May 2000 10:16:30 -0700
"David A. Wagner" wrote:
> One possible solution, if you want a very small signature size,
> is to use elliptic curves.
Or DSA. DSA signatures are about the same size.
------------------------------
Date: Fri, 26 May 2000 13:29:13 -0400
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: PGP wipe how good is it versus hardware recovery of HD?
Vernon Schryver wrote:
> In article <8glvp3$[EMAIL PROTECTED]>,
> Guy Macon <[EMAIL PROTECTED]> wrote:
>
> > ...
> >Some caching disk controllers turn multipass overwrites into
> >single pass overwrites without telling you.
>
> That's merely one tip of that iceberg. Others include automatic relocation
> of sick sectors by disk controllers, collapsing multiple application
> writes into single writes by file systems (do you really trust the
> WIN32 CreateFile() no-caching or write-through modes or the UNIX sync()
> or fsync()?), some kinds or implementations of RAID, and log-based
> file systems where the notion of overwriting by any application program
> is foolish ignorance.
>
> Any kind of overwriting scheme for computer media makes as much sense
> as overwriting or erasing paper. Some methods of erasing, eradicating,
> bleaching, or overwriting paper are more effective than others, but only
> a fool relies on any of them to obliterate anything that matters.
As a matter of historical curiosity, does anyone have an idea of how many times
a palimpsest can be recycled?
------------------------------
From: Mike Rosing <[EMAIL PROTECTED]>
Subject: Re: Short Secure Serial Numbers
Date: Fri, 26 May 2000 12:11:07 -0500
David A. Wagner wrote:
>
> One possible solution, if you want a very small signature size,
> is to use elliptic curves.
I'll agree with that (surprise :-) With a 101-bit field for the
data plus head room you get about 48 bits of security, more than the
40 bit checksum requested. And all the software to implement it is
free too.
Patience, persistence, truth,
Dr. mike
------------------------------
From: Mike Rosing <[EMAIL PROTECTED]>
Subject: Re: Q: appropriate number of key-uses before replacement?
Date: Fri, 26 May 2000 12:31:40 -0500
[EMAIL PROTECTED] wrote:
> So, Lyalc suggests changing keys with every message. STL137 suggests
> changing every few universe-lifetimes. While I can see good arguments
> for both positions, Lyalc's suggestion is not practical (I am working
> under the assumption that there will be several thousand encryptions and
> signings per day) and STL's suggestion (though convenient :) leaves us
> open to fraud if the single key is ever compromised.
>
> A middle ground must exist; Verisign hands out new keys every year,
> correct? Are the only issues time-based, or per-encryption based? A nice
> mixture of both? (ie, a key never used in millions of years is not
> likely to be cracked, but a key used 2^80 times in a day leaks how many
> bits..? :)
It also depends on your block size. If you have a 64-bit cipher, you want
to change the key every 2^32 blocks of transmission. On a terabit link,
that's pretty often :-) Even a 128-bit block cipher on a terabit link
should be changed pretty often, like once a day.
It's the combination of time and data which really determines the amount
of effort you want to expend on rekeying. I tend to change my PGP key
once every 5 years because I forget the pass phrase :-) Much simpler to
rekey than to remember or crack it!
There's no lower bound, but certainly when the number of blocks is 2^(n/2),
where n is the block size, you want to rekey. Once a message is probably
too fast, but once every 2^(n/4) isn't unreasonable.
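To make the rekeying arithmetic concrete, here is a rough back-of-the-envelope calculation. The link speed and the choice of exponent are illustrative, not prescriptive:

```python
# Rekeying arithmetic for the birthday bound described above:
# rekey well before 2^(n/2) blocks for an n-bit block cipher.

def seconds_until_rekey(block_bits, link_bits_per_sec, safety_exp=None):
    """Seconds to emit 2^(block_bits/2) blocks (or 2^safety_exp if given)."""
    exp = block_bits / 2 if safety_exp is None else safety_exp
    blocks = 2 ** exp
    return blocks * block_bits / link_bits_per_sec

terabit = 1e12  # bits per second

# 64-bit cipher: 2^32 blocks arrive in a fraction of a second
print(seconds_until_rekey(64, terabit))

# 128-bit cipher: 2^64 blocks, expressed in days
print(seconds_until_rekey(128, terabit) / 86400)
```

For a 64-bit block the 2^32-block bound arrives in well under a second at terabit rates; for a 128-bit block the bound itself is decades away, so the once-a-day rekeying suggested above is a conservative safety margin rather than the bound itself.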
Patience, persistence, truth,
Dr. mike
------------------------------
From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: Short Secure Serial Numbers
Date: 26 May 2000 11:18:59 -0700
In article <[EMAIL PROTECTED]>,
Roger Schlafly <[EMAIL PROTECTED]> wrote:
> Or DSA. DSA signatures are about the same size.
Really? I don't see it.
For t-bit security, with DSA you need 4t-bit signatures,
but for elliptic curves you need 2t-bit signatures. (Roughly.)
Have I gone wrong somewhere?
------------------------------
From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: Short Secure Serial Numbers
Date: Fri, 26 May 2000 11:32:08 -0700
"David A. Wagner" wrote:
> > Or DSA. DSA signatures are about the same size.
>
> Really? I don't see it.
> For t-bit security, with DSA you need 4t-bit signatures,
> but for elliptic curves you need 2t-bit signatures. (Roughly.)
> Have I gone wrong somewhere?
In either case, you need a group of order roughly 2^(2t)
to get t-bit security. Typically, t = 80. Ie, any group
of order roughly 2^160 has discrete logs in 2^80 steps.
But you still need 2 numbers of the size of the group
order to make a signature. So you get 4t-bit signatures
in either case.
------------------------------
From: [EMAIL PROTECTED] (Mickey McInnis)
Subject: Re: Is OTP unbreakable?
Date: 26 May 2000 18:29:29 GMT
Reply-To: [EMAIL PROTECTED]
In article <8gkc5k$1rj$[EMAIL PROTECTED]>, Greg <[EMAIL PROTECTED]> writes:
|>
|> > The OTP does not offer any authentication.
|>
|> How ridiculous. OTP offers the same level of authentication as
|> most other private keys in a public key cryptosystem. If you have
|> the key, then you can sign the document. That is all authentication
|> means.
The OTP authentication weakness comes from the fact that if you
can get a matching cleartext and ciphertext, you can easily determine
the pad for that message.
One way to exploit this is:
1) Somehow determine the cleartext for one message or part of a
   message. (dumpster diving, find a note that was sent to many
correspondents, one of whom you've corrupted, etc.)
2) Intercept the ciphertext.
3) Determine the pad from the cleartext/ciphertext pair.
4) Now you can encrypt a message of the same length as this one
cleartext and send it to the recipient and it will look as
though it came from the "authorized" sender. If you have a
partial message, you can change that part of the message.
This exploit isn't practical with many cryptosystems, because
they make it difficult to obtain a key given a matching cleartext/
ciphertext pair.
This exploit or variations thereof aren't always practical, but that's
the reason why you tend not to use OTP for verification.
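The forgery in steps 3 and 4 is plain XOR arithmetic. A minimal sketch, with invented message contents:

```python
# Known-plaintext forgery against a one-time pad, as outlined above:
# with one matching plaintext/ciphertext pair, the pad falls out by XOR,
# and any equal-length message can be forged under it.

import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext  = b"PAY ALICE $100"
pad        = os.urandom(len(plaintext))   # the sender's one-time pad
ciphertext = xor(plaintext, pad)          # what the attacker intercepts

# Step 3: recover the pad from the known pair
recovered_pad = xor(plaintext, ciphertext)
assert recovered_pad == pad

# Step 4: forge a same-length message; the receiver decrypts it cleanly
forged = xor(b"PAY MALLORY $1", recovered_pad)
assert xor(forged, pad) == b"PAY MALLORY $1"
```

Note that secrecy is untouched: the attack only works against the message whose plaintext was already known, but it lets the attacker substitute a different message of the same length.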
|>
|> From a practical point of view, it is far more difficult to maintain
|> the security of the OTP from use by others unawares because you cannot
|> memorize it and destroy it.
|> ....
As key lengths get longer, memorization becomes more and more difficult.
It's hard to remember a 512 or 1024 bit key in common use now.
It's not just a problem with OTP's.
--
Mickey McInnis - [EMAIL PROTECTED]
--
All opinions expressed are my own opinions, not my company's opinions.
------------------------------
From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: Short Secure Serial Numbers
Date: 26 May 2000 11:56:19 -0700
In article <[EMAIL PROTECTED]>,
Roger Schlafly <[EMAIL PROTECTED]> wrote:
> In either case, you need a group of order roughly 2^(2t)
> to get t-bit security. Typically, t = 80. Ie, any group
> of order roughly 2^160 has discrete logs in 2^80 steps.
Ok.
> But you still need 2 numbers of the size of the group
> order to make a signature.
I thought that was not the case for elliptic curves:
instead of sending (x,y), I thought it was enough to
send just x and a single bit to indicate the sign of y;
then the receiver may re-compute y from this information.
But I may well be mistaken, because I don't understand the math.
Can anyone verify whether this is right or not?
------------------------------
From: Anton Stiglic <[EMAIL PROTECTED]>
Subject: Re: Encryption within newsgroup postings
Date: Fri, 26 May 2000 15:06:26 -0400
"Douglas A. Gwyn" wrote:
>
> Anton Stiglic wrote:
> > I don't think that there is a two letter word in english
> > that has the same two letters (in french neither...).
>
> My friend the witch doctor,
> He told me what to do.
> He said:
> Oo, ee, oo aa aa,
> Ting tang, walla walla bing bang.
> ...
Now is that really English?
Shoubougamo bindong boummmmmmm bang-bang!
:)
------------------------------
From: Joaquim Southby <[EMAIL PROTECTED]>
Subject: Re: Is OTP unbreakable?
Date: 26 May 2000 19:08:00 GMT
In article <8gmfq9$ol4$[EMAIL PROTECTED]> Mickey McInnis,
[EMAIL PROTECTED] writes:
>One way to exploit this is:
>
>1) Somehow determine the cleartext for one message or part of a
> message. (dumpster diving, find a note that was sent to many
> correspondents, one of whom you've corrupted, etc.)
>
>2) Intercept the ciphertext.
>
>3) Determine the pad from the cleartext/ciphertext pair.
>
>4) Now you can encrypt a message of the same length as this one
> cleartext and send it to the recipient and it will look as
> though it came from the "authorized" sender. If you have a
> partial message, you can change that part of the message.
>
I'm a little confused about your reasoning for item 4. You've recovered
plaintext and intercepted the corresponding ciphertext. From these, you
have recovered the keystream used for that message.
Since the OTP security is based on never using the same portion of
keystream twice (hence, "One Time"), how would your new message pass
muster if it uses the previously used keystream? Are you assuming that
the ciphertext you intercepted did not reach the receiver?
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Verifying CRLs - how practically to do that?
Date: Fri, 26 May 2000 19:05:56 GMT
RFC 2560 (OCSP) is probably the closest existing standard for what you
want.
In article <8ggb2p$sii$[EMAIL PROTECTED]>,
Alex Garter <[EMAIL PROTECTED]> wrote:
> How is it possible to verify certificate revocation lists, i.e. given a
> certificate to check it for revocation without downloading all the
> database, e.g. to do it from an applet.
>
> TIA,
> Alex
>
>
------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: A Family of Algorithms, Base78Ct
Date: Fri, 26 May 2000 12:41:34 -0600
In article <[EMAIL PROTECTED]>, Mok-Kong Shen
<[EMAIL PROTECTED]> wrote:
> wtshaw wrote:
>
> > The essence of the mathematical relationships, the inequalities in most
> > cases, is at the heart of the concept. By studying the mathematical
> > relationships involving an associated set of bases in a given algorithm,
> > you should come to learn pretty much all that you need. If you have
> > additional questions, ask.
>
> I like to know whether the following correctly captures the essence
> of your methods:
>
> One has a string of digits in a base B1. Break this up into sets of
> certain fixed size. The digits in each set represent an integer in base
> B1. Obtain the representation of these integers in base B2. This results
> in sets of digits in base B2. Do some permutation of the digits in
> each set. Finally concatenate all to form a string of digits in base B2.
>
> M. K. Shen
Yes, but you can transpose *digits* in any of the bases, and/or substitute
in any base. For example, base 26 can be an intermediate base, be
substituted and/or letters transposed.
With usual base translation, the inequalities mean that there are fewer
different types of groups possible than the ciphertext set might
indicate. This is where the efficiencies should be considered,
actual/expected for any specific base change. The higher the efficiency,
the stronger that stage can be, depending on choice of keys.
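A minimal sketch of the base-change step as M. K. Shen summarizes it. The group size, bases, and digit permutation are illustrative choices, not wtshaw's actual Base78Ct keys:

```python
# Base-change cipher step: group base-B1 digits, reinterpret each group
# as an integer, re-express it in base B2, permute the new digits, and
# concatenate the results.

def to_digits(n, base, width):
    digits = []
    for _ in range(width):
        n, d = divmod(n, base)
        digits.append(d)
    return digits[::-1]            # most-significant digit first

def base_change(digits, b1, group, b2, width, perm):
    out = []
    for i in range(0, len(digits), group):
        chunk = digits[i:i + group]
        value = 0
        for d in chunk:            # interpret the group in base b1
            value = value * b1 + d
        new = to_digits(value, b2, width)
        out.extend(new[j] for j in perm)   # transpose digits in the group
    return out

# 3 base-10 digits (max 999) fit in 2 base-78 digits (78^2 = 6084)
msg = [3, 1, 4, 1, 5, 9]
print(base_change(msg, 10, 3, 78, 2, perm=[1, 0]))   # → [2, 4, 3, 2]
```

Note the inequality at work: three base-10 digits cover only 1000 of the 6084 possible base-78 pairs, so some ciphertext groups can never occur; that gap is the actual/expected efficiency consideration mentioned above.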
--
Secrets that are told or available are not secrets any more, surely
not trade secrets. Security of secrets is not dependent on someone
else's stupidity, only on your making them available in any form.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Q: appropriate number of key-uses before replacement?
Date: Fri, 26 May 2000 19:25:24 GMT
In article <[EMAIL PROTECTED]>,
Mike Rosing <[EMAIL PROTECTED]> wrote:
> There's no lower bound, but certainly when the number of blocks is
> 2^(n/2) where n is the block size, you want to rekey. Once a
> message is probably too fast, but once every 2^(n/4) isn't
> unreasonable.
[sigh, sorry for bad line-breaks.. I wish we had a news server here.]
Ok. So, if I use RSA (or other public-key signing method) with a 160-bit
SHA-1 (for instance), then I should rekey every 2^(160/4)=10^12
signings? (Wow, assuming a billion signings a day, that is about every
three years. :)
Thanks! :)
------------------------------
From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: Short Secure Serial Numbers
Date: Fri, 26 May 2000 12:48:03 -0700
"David A. Wagner" wrote:
> > In either case, you need a group of order roughly 2^(2t)
> > to get t-bit security. Typically, t = 80. Ie, any group
> > of order roughly 2^160 has discrete logs in 2^80 steps.
>
> Ok.
>
> > But you still need 2 numbers of the size of the group
> > order to make a signature.
>
> I thought that was not the case for elliptic curves:
> instead of sending (x,y), I thought it was enough to
> send just x and a single bit to indicate the sign of y;
> then the receiver may re-compute y from this information.
You are thinking about sending a public key, which is then
just a point (x,y) on the curve. With t=80 as above, then
x and y are each 160 bits. But y is one of the 2 square roots
of a cubic in x, so (x,y) can be reconstructed from x and
one carefully chosen bit of y.
This is the main advantage of elliptic curve crypto -- that
public keys can be sent compactly. Only need 161 bits
when RSA and DH need 1024 bits.
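A toy illustration of the point compression described above: send x plus one bit selecting which square root y is. The curve parameters are small numbers chosen so that p = 3 (mod 4), which makes the square root a single modular exponentiation; they are not a real standardized curve.

```python
# Point compression on y^2 = x^3 + a*x + b over F_p: a point (x, y)
# round-trips through (x, one bit of y).

p, a, b = 9739, 497, 1768          # p % 4 == 3, illustrative parameters

def compress(x, y):
    return x, y % 2                # keep x and the parity of y

def decompress(x, ybit):
    rhs = (pow(x, 3, p) + a * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)  # one square root, valid since p = 3 mod 4
    if y % 2 != ybit:              # pick the root matching the sent bit
        y = p - y
    return x, y

# find any point on the curve to round-trip
for x in range(p):
    rhs = (pow(x, 3, p) + a * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)
    if (y * y) % p == rhs:         # rhs was a quadratic residue
        break

assert decompress(*compress(x, y)) == (x, y)
```

Since p is odd, the two roots y and p - y always have opposite parity, so one bit suffices to distinguish them; that is why a 160-bit x plus one bit (161 bits total) fully determines the point.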
For signatures, DSA, Schnorr, NR, and variants only need
320 bits if the subgroup is sized roughly 2^160. This is
an advantage over RSA, which typically needs 1024 bits.
But the elliptic curve signatures use the same formulas
as DSA etc, and the size of the signature depends on the
size of the group. So those signatures will typically be
320 bits.
If you send a certificate, then it will have at least one
public key and a signature, as well as other stuff, so it
will be bigger. A public key in a certificate usually also
has the system parameters, and there are several of these
for elliptic curves. So an EC cert might even be bigger
than an RSA cert.
BTW, a variant of DH has recently been discovered that
allows public keys to be represented as compactly as for
elliptic curves. For details, see:
http://www.ecstr.com/
Lenstra and Verheul call it XTR and seem to be claiming that
it is better than EC in terms of size, speed, and security.
------------------------------
From: EE Support <[EMAIL PROTECTED]>
Crossposted-To: alt.privacy,alt.privacy.anon-server,alt.security.pgp
Subject: Re: Anti-Evidence Eliminator messages, have they reached a burn-out point?
Date: Fri, 26 May 2000 20:46:21 +0100
Reply-To: [EMAIL PROTECTED]
On Fri, 26 May 2000 05:30:30 GMT, "donoli" <[EMAIL PROTECTED]>
wrote:
>
>EE Support wrote in message ...
>>Hi,
>>
>>EE Tech Support here.
>>
>>Greetings to those genuine people who continue to support our
>>wonderful Evidence Eliminator software.
>>
>>Isn't it amazing how many "Anonymous" or semi-anonymous "people",
>>often they have big-sigs with PGP too, are spending all their time and
>>effort broadcasting false reports about our wonderful software.
>>
>###########
>snip
>At this point in time I am neutral on this debate, as I was with the Aureate
>debate. What I don't understand is, in both cases, the side in favor of the
>software company, claims that posts from anonymous posters are less valid
>than someone w/ a traceable e-mail address. To me, it makes no sense at all
>even though I am not posting anonymously.
>donoli.
>###########
>
Hi,
Re: the rest of our post to which you replied, we don't claim that
anonymous posts are less valid because of their name status.
We do dispute the contents of some messages, namely their claims that
our Evidence Eliminator software trashes drives or sends data back to
a secret HQ. As these claims are false, we can only speculate on who's
actually putting them up here!
Cheers,
--
Regards,
EE Support
[EMAIL PROTECTED] (remove NO_SP_AM for e-mail)
http://www.evidence-eliminator.com/
------------------------------
Subject: Re: Another sci.crypt Cipher
From: tomstd <[EMAIL PROTECTED]>
Date: Fri, 26 May 2000 12:57:22 -0700
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
(Mark Wooding) wrote:
>tomstd <[EMAIL PROTECTED]> wrote:
>
>> The chance of getting these keys is 2^-96 right? Hmm I can live
>> with that. Think I should include the round counter
>> to 'counter' the weak keys?
>>
>> Thanks for looking at my cipher.
>
>I have a 16-round differential characteristic for your complete cipher,
>with probability approximately 2^{62.5}.
You mean 2^-62.5? That would require ~2^62.5 pairs, right? I
don't consider this damning, but a good break nonetheless.
>The characteristic evolves like this:
>
> 0: 00000000 00010000
> 1: 00010000 00000000 (01[1] -> 03[2], p = 4)
> 2: 00000300 00010000 (03[2] -> 01[1], p = 6)
> 3: 00000000 00000300
> 4: 00000300 00000000 (03[2] -> 01[1], p = 6)
> 5: 00010000 00000300 (01[1] -> 03[2], p = 4)
> 6: 00000000 00010000
> 7: 00010000 00000000 (01[1] -> 03[2], p = 4)
> 8: 00000300 00010000 (03[2] -> 01[1], p = 6)
> 9: 00000000 00000300
>10: 00000300 00000000 (03[2] -> 01[1], p = 6)
>11: 00010000 00000300 (01[1] -> 03[2], p = 4)
>12: 00000000 00010000
>13: 00010000 00000000 (01[1] -> 03[2], p = 4)
>14: 00000300 00010000 (03[2] -> 01[1], p = 6)
>15: 00000000 00000300
>16: 00000300 00000000 (03[2] -> 01[1], p = 6)
>17: 00010000 00000300
>
>The hex numbers on the left are the input differences to each round.
>The thing on the right is the differential through the S-box and
>permutation which it exploits, given as input and output differentials,
>with the S-box to which each pertains. The probabilities given are
>fractions of 256.
>
>As you can see, it's basically a three-round iterative characteristic,
>which moves a difference between S-boxes 1 and 2.
The char is (00010000) -> (00000000) -> (00030000) -> ... right?
The best learning "thing" you could do for me right now is
explain how you found that differential. I have been scratching
my head all day about it.
If I were to implement this on reduced rounds (for the fun of
it), would I just take a plaintext (A,B) and (A,B xor 00010000)
and look for the output difference of (A xor 00030000, B) after
3 or 4 rounds? I am not clear on this part.
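One standard way such differentials are found is to tabulate the S-box difference distribution table (DDT): for each input difference, count how often each output difference occurs, then chain the high-probability entries through the permutation round by round. A sketch on a made-up 4-bit S-box (DES S1 row 0, used here purely for illustration, not TC1's S-box):

```python
# Difference distribution table: ddt[dx][dy] counts inputs x for which
# S(x) ^ S(x ^ dx) == dy. High entries give high-probability one-round
# differentials; a characteristic's probability is the product of
# (entry / table size) over its rounds.

sbox = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

n = len(sbox)
ddt = [[0] * n for _ in range(n)]
for x in range(n):
    for dx in range(n):
        dy = sbox[x] ^ sbox[x ^ dx]
        ddt[dx][dy] += 1

# sanity checks: each row sums to n, and zero difference is deterministic
assert ddt[0][0] == n
assert all(sum(row) == n for row in ddt)

# the best nonzero differential is the natural attack starting point
best = max((ddt[dx][dy], dx, dy) for dx in range(1, n) for dy in range(n))
print("best (count, input diff, output diff):", best)
```

And yes, the reduced-round experiment works the way the question suggests: encrypt many pairs (A,B) and (A, B xor 00010000) under one key, count how often the predicted output difference appears after those rounds, and the fraction of matching pairs estimates the characteristic's probability.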
>While this implies that almost the entire codebook must be recovered to
>perform the attack, I think it suggests that TC1 has a low security
>margin. I recommend against its use.
That doesn't make sense. If it requires close to the entire
codebook, that is a good thing.
>Don't look to your S-boxes for the problem: look to your diffusion.
>
I know the F function doesn't follow SAC which is why this is
possible.
BTW I think the key attacks are more damning. From what I read
if I xor a round counter I can stop the current key attacks, is
that right?
BTWx2 Thanks for the info, I really want to learn from this.
BTWx3 I designed this cipher so I could break it. So I am not
disappointed it was broken, just that I didn't do it first.
Tom
* Sent from RemarQ http://www.remarq.com The Internet's Discussion Network *
The fastest and easiest way to search and participate in Usenet - Free!
------------------------------
Subject: Re: Another sci.crypt Cipher
From: tomstd <[EMAIL PROTECTED]>
Date: Fri, 26 May 2000 12:58:20 -0700
In article <8gm9j1$gm2$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (David A. Wagner) wrote:
>In article <8gkq7v$bkr$[EMAIL PROTECTED]>, <matthew_fisher@my-deja.com> wrote:
>> There is a class of 2^32 weak keys.
>
>There is also a larger class of 2^96 weak keys: whenever subkey 0 = subkey 1,
>you get a weak key where the key schedule is palindromic. For instance, these
>weak keys have 2^32 fixed points. This may not be a huge deal in practice,
>unless you want to use this block cipher in a hash function.
>
>
Wouldn't xoring a round counter with the key in each round
prevent these attacks?
Tom
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************