Cryptography-Digest Digest #244, Volume #13 Wed, 29 Nov 00 22:13:01 EST
Contents:
Re: collision probability with file checksums (Bryan Olson)
Re: keysize for equivalent security for symmetric and asymmetric keys (DJohn37050)
Re: keysize for equivalent security for symmetric and asymmetric keys (DJohn37050)
Re: keysize for equivalent security for symmetric and asymmetric keys (Roger Schlafly)
Re: RC4 Questions ([EMAIL PROTECTED])
Re: keysize for equivalent security for symmetric and asymmetric keys (David Wagner)
Re: keysize for equivalent security for symmetric and asymmetric keys (David Wagner)
Re: Are there collisions in DES? ([EMAIL PROTECTED])
Re: keysize for equivalent security for symmetric and asymmetric keys (Roger Schlafly)
Re: keysize for equivalent security for symmetric and asymmetric keys (Roger Schlafly)
Re: keysize for equivalent security for symmetric and asymmetric keys (DJohn37050)
Bugs (was Re: PDF/PS to MS WORD converter) (Eric Smith)
Re: voting through pgp (Benjamin Goldberg)
Re: Mode of operation to maintain input size with block ciphers? (Benjamin Goldberg)
Probability Question on Square Attack (Raphael Phan)
Re: On mutation of crypto algorithms (Benjamin Goldberg)
Re: keysize for equivalent security for symmetric and asymmetric keys (David Wagner)
Re: keysize for equivalent security for symmetric and asymmetric keys (David Wagner)
----------------------------------------------------------------------------
From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: collision probability with file checksums
Date: Thu, 30 Nov 2000 00:00:56 GMT
David Schwartz wrote:
> The 2^80 assumes that you can store every previous trial's
> outcome, which I'm not sure is a reasonable assumption.
Not really. Cycle detection methods avoid the need for
massive storage at a small cost in computation. The van
Oorschot and Wiener paper is largely about adapting the Floyd
"two-finger cycle detector" to efficient parallel
implementation.
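To make the idea concrete, here is a rough Python sketch of Floyd-style
collision finding against a hash truncated to 4 bytes (the truncation, the
names, and the fixed starting point are my own choices so the demo finishes
quickly; the principle is the same for larger outputs):

import hashlib

def f(x):
    # Truncated hash: 4 bytes in, 4 bytes out, so collisions are reachable.
    return hashlib.sha1(x).digest()[:4]

def floyd_collision(x0):
    # Phase 1: tortoise and hare detect a cycle in x0, f(x0), f(f(x0)), ...
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))
    # Phase 2: step from x0 and the meeting point in lockstep; the values held
    # just before the two pointers meet are two distinct preimages of the same
    # output (assuming x0 itself does not already lie on the cycle).
    tortoise, prev_t, prev_h = x0, None, None
    while tortoise != hare:
        prev_t, prev_h = tortoise, hare
        tortoise = f(tortoise)
        hare = f(hare)
    return prev_t, prev_h

a, b = floyd_collision(b"\x00" * 4)
assert a != b and f(a) == f(b)

Storage stays constant; the price is walking the sequence a second time.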
--Bryan
------------------------------
From: [EMAIL PROTECTED] (DJohn37050)
Date: 30 Nov 2000 00:30:55 GMT
Subject: Re: keysize for equivalent security for symmetric and asymmetric keys
I am not an NFS expert, nor do I play one on TV. In no way am I trying to slight
Bob S.; I respect him a lot.
One well-accepted method to try to get a handle on complexity is to do a TIME
complexity analysis, ignoring all storage costs. Another is to do a SPACE
analysis, which includes TIME. Both are valid ways of assessing the situation
and there can be others.
Here is the point I am trying to make:
1) Hashes, sym. enc., ECC, and the DSA q (prime order subgroup) value can all
be broken using algorithms that rely on TIME. These algs do not need much
SPACE. If you have extra space, you can build tables, but this still takes
TIME to do. As each depends on TIME, this means you can have high confidence
in the mapping among these different items. It is a straightforward mapping of
the TIME to break one alg. to the TIME to break another alg. (see the sketch
after this list).
2) Bob points out that GNFS has massive storage requirements and that it is the
current best method to attack RSA or the DSA p (field) value. But then he
needs to try to map cracking RSA/DSAp which hits a storage (SPACE) limit to
cracking AES/SHA-2 which hits a TIME limit. How to do this?
3) One way to proceed is to IGNORE storage limits and assume infinite storage
and just rely on TIME. This apparently means that the RSA key is larger than
Bob would like it to be. However, this is not an invalid way to proceed; it is
simply a conservative simplifying choice. If a better way is found to use less
storage, the keysize table does not change. If storage gets a lot cheaper,
then the keysize tables do not need to be redone. If ops get a lot cheaper
(e.g., quantum) then the minimum key sizes need to be made larger, but the
table does not need to be redone.
4) Another way to proceed is to try to map to a fixed cost, allowing an
attacker to buy TIME and/or STORAGE at a certain cost each. It seems obvious
that the absolute costs and the ratio of costs are going to be a factor in the
analysis. If these numbers change due to technology changes then the keysize
table may need to be redone. If storage gets a lot cheaper, then the tables
might need to be redone. If ops get a lot cheaper, then the table might need
to be redone.
5) Now, here is a critical point. Both models are valid; they just have
different assumptions. So pick your model and make your mapping. But
understand ahead of time which model is the most conservative and which is less
likely to change if the unanticipated happens.
6) Here are some questions to ask:
A) Will big quantum computers become a reality? Who knows? IBM announced a
working 7 qubit computer as a research project.
B) Will a new algorithm be discovered? Who knows? Lots of people try hard to
find one and we hope no one does. But see Koblitz and Miller and others on why
ECDLP seems likely to remain a very hard problem (one hint: look at the height
function).
C) Will existing algorithms that use storage be improved to use less storage?
This is very plausible, as has been shown time and time again in computer
science.
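To illustrate the straightforward TIME mapping in point 1, here is a toy
Python sketch (my own illustration, nothing official; constant factors such as
the birthday-bound sqrt(pi/2) are ignored):

# Generic, memory-light attacks only:
#   symmetric key of k bits     -> exhaustive search,  ~2^k work
#   hash of h bits              -> birthday collision, ~2^(h/2) work
#   ECC / DSA-q group of g bits -> Pollard rho,        ~2^(g/2) work
def sizes_for_work_factor(w):
    """Smallest sizes whose generic attacks cost at least 2^w operations."""
    return {"symmetric": w, "hash": 2 * w, "ecc_or_dsa_q": 2 * w}

print(sizes_for_work_factor(80))   # symmetric 80, hash 160, ECC/DSA-q 160
print(sizes_for_work_factor(128))  # symmetric 128, hash 256, ECC/DSA-q 256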
Don Johnson
------------------------------
From: [EMAIL PROTECTED] (DJohn37050)
Date: 30 Nov 2000 00:34:44 GMT
Subject: Re: keysize for equivalent security for symmetric and asymmetric keys
Go David. You are right that a new alg. makes a bigger splash than a storage
improvement to an existing alg.
Don Johnson
------------------------------
From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: keysize for equivalent security for symmetric and asymmetric keys
Date: Wed, 29 Nov 2000 17:09:52 -0800
David Wagner wrote:
> >Don argues that it is more conservative to make certain unrealistic
> >assumptions, such as infinite memory being completely free, but this
> >makes no sense to me. Such an assumption might be useful as a
> >simplifying assumption, but it is not conservative.
>
> That's not the conservative assumption. The conservative assumption
> is that advances in factoring might reduce the storage costs more than
> they reduce the running time in factoring algorithms.
Why is that conservative? There might be advances in factoring that
reduce the running time more than storage costs.
> This assumption may or may not be the best one to base your analysis
> on, but it *does* fit the usual definition of what it means to be
> conservative as a cryptosystem designer.
I don't see how. Conservative might be to assume your attacker
has more capabilities than he is likely to have. Making an
assumption about the relative cost of time and space does not fit.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: RC4 Questions
Date: Thu, 30 Nov 2000 01:02:02 GMT
> The site led me to believe that the maximum secure key length was 512 bits
> (where the key is the string used to initialize the random number
> generator). Is this correct??? Could I use a longer key and not have my
> security compromised?
Yes, you can. The reason CipherSaber recommends not going over 512 bits for
the key is the degree of infusion from the key: as the key gets longer, the
effectiveness of each individual key bit drops. This is fairly easy to see if
you look at 1- and 2-bit keys: with a 1-bit key, the key is used 2048 times to
determine the state, while a 2-bit key is only used 1024 times. Obviously you
don't want to use keys that small, but the effect continues as keys get
longer. 512 bits was just a judgement call.
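To see where those numbers come from, here is the standard RC4 key setup in
Python: the loop makes 256 passes and consumes one key byte per pass, so a
64-byte (512-bit) key is only cycled through 4 times, while shorter keys get
recycled proportionally more often (the wrapper and comments are mine):

def rc4_ksa(key):
    """Standard RC4 key schedule: 256 swap passes, one key byte per pass."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256   # key bytes reused cyclically
        S[i], S[j] = S[j], S[i]
    return S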
>
> Also, the ciphersaber implementation suggests appending a fixed number of
> random bytes to the end of the key (10), and then placing these random bytes
> at the start of the encrypted stream. Doesn't this compromise security???
> Is there something I'm missing??
This is called an IV. It is used so that you can make use of the same key
without compromising security, because the actual key will continually change
(due to the IV). In the case of RC4 it does reduce the amount of reinfusion
supplied, but if it reduces the security at all, the amount is so trivial as
to be ignorable (it's a matter of (very, very large X) minus 1/(very large Y),
and the 1/Y term can be safely ignored).
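In CipherSaber terms the setup is just this (a rough sketch; the names are
mine):

import os

def ciphersaber_setup(user_key):
    iv = os.urandom(10)            # ten fresh random bytes per message
    session_key = user_key + iv    # the key RC4 is actually run with
    return iv, session_key         # iv goes out in the clear, ahead of the ciphertext

The receiver reads the ten clear bytes, appends them to the shared key, and
runs the same key setup.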
>
> Also, I own a copy of Applied Cryptography, and in the description of RC4 it
> states you could make the S array larger.... in the standard algorithm, it's
> at 8x8, but the book suggested changing this to 16x16, and lengthening the
> init period for the sake of speeding up the algorithm (you can code 2 bytes
> at a time now...) Would this have any effect on the security of the
> algorithm at all?? Would larger sizes be more secure (like 32x32)??
Yes, to the best of my knowledge every increase in the size of the S-boxes
will yield an increase in security. However, be very careful how big you try
to make this thing: a 16-bit RC4 would need 2^16*16 bits of state, which is
128 KB; a 32-bit RC4 would need 32*2^32 bits, which is about 16 GB, but it
would be more secure.
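The arithmetic, if you want to check it (assuming the obvious generalization
to 2^n entries of n bits each):

for n in (8, 16, 32):
    state_bytes = (2 ** n) * n // 8
    print(f"{n}-bit RC4 state: {state_bytes:,} bytes")
# 8-bit: 256 bytes; 16-bit: 131,072 bytes (128 KB);
# 32-bit: 17,179,869,184 bytes (16 GB)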
>
> Finally, I was thinking of creating a larger byte pattern using the password
> and the password length, and then hashing with SHA to get a final 160-bit
> key to encrypt with... is there anything that I should be careful about or
> watch out for???
Watch out for the first 2 bytes in the final key: make sure that (S[0]+S[1])
mod 256 <> 0. Or just drop the first 512 bytes of keystream (vast overkill;
all you need to remove is the first byte, but 512 will add extra stirring into
the mix), and ignore everything else. Your idea, though, will not appreciably
increase the security of the system.
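If you want the drop-512 variant spelled out, here is a rough sketch (key
setup as in the snippet above, then the first 512 output bytes are thrown
away; the names are mine):

def rc4_keystream(key, drop=512):
    S = list(range(256))
    j = 0
    for i in range(256):                    # key setup, as above
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = produced = 0
    while True:                             # output generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        produced += 1
        if produced > drop:                 # discard the first `drop` bytes
            yield S[(S[i] + S[j]) % 256]

def rc4_xor(key, data):
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)

Encryption and decryption are the same call, since it is just an XOR with the
keystream.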
Joe
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: keysize for equivalent security for symmetric and asymmetric keys
Date: 30 Nov 2000 01:10:42 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)
Stanley Chow wrote:
>There is an unspoken agreement that whatever the assumptions, the result
>should be useful.
Sure, but I'm replying specifically to the following comment:
``Such an assumption might be useful as a simplifying assumption,
but it is not conservative.''
I claim that the comment is wrong, that the assumption is indeed conservative.
I make no claims about whether it is the "Right" assumption, or whether
it is the most appropriate one, or whether it leads to the most useful
results.
I am simply arguing that the above objection (``it is not conservative'')
is a misunderstanding.
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: keysize for equivalent security for symmetric and asymmetric keys
Date: 30 Nov 2000 01:13:06 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)
Stanley Chow wrote:
>I have a new model - conservatively assume that advances in DL might
>reduce CPU time to O(n). What conclusion would you draw? Would it
>be useful?
It might not be useful, but I think we can all agree that this assumption
is conservative! It is, if anything, too conservative.
If someone objected to your assumption, saying ``it is not conservative'',
you might get a chuckle out of it. This is because the correct rebuttal is
quite the opposite: namely, that ``it is *too* conservative.''
Note that I am confining my comments here merely to the matter of whether or
not the assumption of ignoring storage complexity is conservative.
I'm not saying whether it's a good idea. I'm not saying whether it leads
to useful results. I'm merely suggesting that it is indeed conservative.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Are there collisions in DES?
Date: Thu, 30 Nov 2000 01:12:44 GMT
> 1: Can 2 different keys with the same source give the same encrypted
> result?
I believe so. If I remember correctly there is a class of weak keys.
Actually I can prove the existence of them given independent subkeys, but
this only indicates that they may exist in DES.
>
> 2: Can 2 different sources with the same key give the same encrypted result?
Absolutely not, by the inversion principle, which simply means that
given an output and the key, the input is uniquely identified.
>
> 3: Can 2 different sources (of non-trivial length) with 2 different keys
> give the same encrypted result?
Absolutely YES. This is trivial: each key generates 2^64 outputs out of
2^64 possible outputs, so every key must generate every output for some
input.
>
> 4: Does the reduced keyspace of 56-bits contribute to a lack or low
> probability of collisions, since the 2^56 keys maps a source into a result
> space of 2^64? Would this be a 'feature' of the reduced keyspace that has
> not otherwise been explained well IMHO?
I think you are confused about what a key and an input are. DES is
composed of the following:
2^56 Keys
For Each key, there are 2^64 Input Plaintexts.
For Each (Plaintext, Key) pair there is one and only one Ciphertext Output.
For Each (Ciphertext, Key) pair there is one and only one Plaintext
The key simply determines the mapping. Consider a much simpler example with
the same properties I mentioned above, one that is easier for most people to
see: take the function (X + B) mod MAX = Y, or alternately
(Plaintext + Key) mod MAX = Ciphertext.
This is a Caesar cipher; the inverse is (Ciphertext - Key) mod MAX = Plaintext.
It is much easier to see if we set MAX = 2^64 and only take Plaintext and
Ciphertext from [0, 2^64-1]. And the answers are actually the same as for DES
(it obeys the invertibility property which most of your questions dealt with).
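Spelled out in a few lines of Python (MAX = 2^64, as above; the example
values are arbitrary):

MAX = 2 ** 64

def encrypt(plaintext, key):
    return (plaintext + key) % MAX

def decrypt(ciphertext, key):
    return (ciphertext - key) % MAX

p, k = 1234567890, 987654321
c = encrypt(p, k)
assert decrypt(c, k) == p                      # invertible: the key pins down the mapping
assert encrypt(p, k) != encrypt(p + 1, k)      # question 2: same key, different sources never collide
assert encrypt(p, k) == encrypt(p - 1, k + 1)  # question 3: different sources AND keys easily collide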
------------------------------
From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: keysize for equivalent security for symmetric and asymmetric keys
Date: Wed, 29 Nov 2000 17:58:04 -0800
David Wagner wrote:
> Note that I am confining my comments here merely to the matter of whether or
> not the assumption of ignoring storage complexity is conservative.
> I'm not saying whether it's a good idea. I'm not saying whether it leads
> to useful results. I'm merely suggesting that it is indeed conservative.
I disagree. Let
X := $ cost of MIPS-year
Y := $ cost of gigabyte
F(X,Y,n) := cost of breaking n-bit RSA
G(X,Y,n) := cost of breaking n-bit ECC
(with current algorithms, G does not depend on Y)
Bob is saying:
Estimate X,Y. Find m,n such that F(X,Y,m) = G(X,Y,n).
Then m-bit RSA and n-bit ECC have comparable security.
Don is saying:
Estimate X. Let Y=0. Find m,n such that F(X,0,m) = G(X,0,n).
Then m-bit RSA and n-bit ECC have comparable security.
Don's claim that his method is more "conservative" is utterly bogus.
If Don says,
I'm not sure what X and Y should be, so I'll choose a low value
of X and let Y be 0, and then make sure n is large enough so that
F(X,0,n) is larger than any adversary can afford.
then that would be a conservative argument. But if he is comparing
security to something else calculated with a faulty assumption,
then the comparison is bogus.
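For concreteness, here is roughly what the Y=0 calculation looks like, using
the textbook asymptotic run times (GNFS at L_N[1/3, (64/9)^(1/3)], Pollard rho
at about 2^(n/2)) with the o(1) term and all constant factors dropped, so
treat the output as ballpark only:

import math

def gnfs_ops(m_bits):
    # Rough GNFS operation count for an m-bit modulus, constants and o(1) dropped.
    ln_n = m_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def ecc_bits_matching_rsa(m_bits):
    # Time-only model (Y = 0): find n with 2^(n/2) equal to the GNFS time.
    return 2 * math.log2(gnfs_ops(m_bits))

for m in (512, 1024, 2048):
    print(m, round(ecc_bits_matching_rsa(m)))

Bob's version would replace gnfs_ops with a genuine cost function F(X,Y,m)
that also charges for the matrix storage.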
------------------------------
From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: keysize for equivalent security for symmetric and asymmetric keys
Date: Wed, 29 Nov 2000 18:07:51 -0800
Michael Scott wrote:
> "Bob Silverman" <[EMAIL PROTECTED]> wrote in message
> news:903sgc$jv9$[EMAIL PROTECTED]...
> > Using a 15K RSA key is worse than ridiculous. Using a 256 bit
> > AES key is gross overkill as well. I suggest you do the arithmetic
> > to see how long it will take to break a 128-bit key. You may assume
> > computers 1 million times faster than existing computers. You may
> > assume that you have a billion of them available.......
> Perhaps the thinking behind 256-bit keys is in response to the threat from
> quantum computing, for which if I remember right there is an O(n^0.5)
> algorithm for exhaustive search - which brings us back effectively to 128
> bits
I think that NIST was just tired of the accusation that it is part
of a secret plot to ration the bits that we are allowed to use.
------------------------------
From: [EMAIL PROTECTED] (DJohn37050)
Date: 30 Nov 2000 02:18:35 GMT
Subject: Re: keysize for equivalent security for symmetric and asymmetric keys
I guess this shows that "conservative" is open to interpretation.
By conservative I mean as David means it.
A conservative number may be too high, because all factors may not be taken
into account. Assuming that storage may become essentially free is a conservative
assumption. If it is not free, then the problem is harder than you thought it
was. If it somehow becomes essentially free, then you arrive at the number
already arrived at.
One concern is that as a model tries to take in more and more factors, it gets
more and more complex. Another is that as a model takes in more and more
factors, it may get more and more accurate in some current sense, but its
assumptions proliferate. The KISS principle applies in security.
Please note, a model is a model. It is a guide to reality but is not the
reality. As a model, it can have any assumptions, but please state them. And
multiple models are possible.
Don Johnson
------------------------------
From: Eric Smith <[EMAIL PROTECTED]>
Subject: Bugs (was Re: PDF/PS to MS WORD converter)
Date: 29 Nov 2000 18:23:00 -0800
Benjamin Goldberg <[EMAIL PROTECTED]> writes:
> There are three methods for writing code in which no bug can be found:
> 1) Make the code so straightforward that there are obviously no bugs.
> 2) Make the code so complicated that there are no obvious bugs.
> 3) Insist that any apparent bugs were really intentional features.
It might be nice to credit the first two methods as being a paraphrase
of a portion of C.A.R. Hoare's 1980 ACM Turing Award Lecture, "The
Emperor's Old Clothes":
...there are two ways of constructing a software design: One way is to
make it so simple that there are _obviously_ no deficiencies and the
other way is to make it so complicated that there are no _obvious_
deficiencies.
The first method is far more difficult. It demands the same skill,
devotion, insight, and even inspiration as the discovery of the simple
physical laws which underlie the complex phenomena of nature. It also
requires a willingness to accept objectives which are limited by
physical, logical, and technological constraints, and to accept a
compromise when conflicting objectives cannot be met. No committee
will ever do this until it is too late.
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: voting through pgp
Date: Thu, 30 Nov 2000 02:30:31 GMT
Dan Oetting wrote:
>
> In article <[EMAIL PROTECTED]>, Benjamin Goldberg
> <[EMAIL PROTECTED]> wrote:
>
> > Actually, I believe that something like that exists in some places.
> > I live in New York state, and although I haven't participated in the
> > recent election, I think that voting machines which do this are
> > used.
> >
> > Moving the lever for one candidate would cause any other lever for a
> > candidate for that position to be reset. (Rather like a radio
> > button)
>
> If you make the world idiot proof the world will make better idiots.
Heh, true enough.
> These voting machines are still subject to human error. I recall an
> election where the local voting booths were set up with the equivalent
> of the "Butterfly Ballot". One of the offices ended up being split
> between two of the rows of levers. The election officials hadn't
> realized that the lockouts that prevent multiple votes in the same
> office couldn't be programmed across the rows. The election officials
> had to go through by hand reading dimples on the paper audit log to
> cast out the overvote.
Although it's clear that there was a problem in that election, the
concept is still sound. Firstly, the audit log probably actually
punched (not dimpled) holes in paper, or printed with ink on paper,
unlike the problematic Florida ballots, where properly cast votes left
punched holes but some ballots were only dimpled -- thus, checking
by hand probably did not involve "iffy" ballots. Secondly, the problem
described *could* most certainly be fixed in the next design of the
machine -- once new machines are built with the fix in place, the
problem you described won't happen again.
--
There are three methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
3) Insist that any apparent bugs were really intentional features.
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: Mode of operation to maintain input size with block ciphers?
Date: Thu, 30 Nov 2000 02:40:04 GMT
Jerry Leichter wrote:
>
> | > It's an oddity that the short and long are easy, while the
> | > intermediate cases are hard! (Or maybe there's a clever technique
> | > for those ... anyone?)
> |
> | Encrypt the first byte with the technique you suggested for a single
> | byte, and then take the part of the ciphertext block which you would
> | have discarded, and use that as a pad.
>
> I don't follow. I can think of two uses for the word "pad" in crypto-
> graphic contexts. The most common is pad as filler - how that would
> help here, I don't follow at all. The other is in the sense of "one-
> time pad", to be XOR'ed or otherwise combined with the cleartext in
> some stream mode.
Yes, that is the sense I had meant it.
> Yes, you can do that here - but why bother? If you go
> that route, you might as well just encrypt some constant, pick off as
> many bytes of the result as you need, and XOR with the cleartext.
> Perfectly reasonable, but it doesn't match the security properties of
> the original: Flipping one bit of the cleartext flips exactly one bit
> of the cyphertext.
I guess it was a dumb idea, after all.
> Even if you use a stronger combiner than XOR, a bit can only affect
> the byte it's in. If you want to go this route, you can use my
> algorithm a byte at a time... providing a new, if not particularly
> efficient, way to turn a block cipher into a stream cipher.
>
> Yes, you can use some mode that diffuses changes across bytes, but by
> the time you get the right guarantee for *all* the input bits, you're
> essentially inventing a new block encryption algorithm, which you now
> have to convince yourself is secure. The goal is to make the security
> properties of the result dependent only on the security properties of
> the algorithm you started with.
>
> -- Jerry
I guess that the only reasonable thing is to use CTS if there's more
than one block total, and otherwise to use a VBSC, which has single-byte
[or smaller] increments in blocksize. Hasty Pudding and lja1 are both good
examples, although I would suggest lja1 due to its simplicity.
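For anyone who hasn't seen ciphertext stealing written out, here is a rough
Python sketch of CBC with CTS (one common formulation, with the last two
ciphertext blocks swapped). The "block cipher" is a throwaway stand-in -- XOR
with the key plus a byte rotation -- purely to keep the example self-contained,
so swap in a real cipher before taking any of this seriously:

BLOCK = 8  # toy block size, in bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def enc_block(b, key):   # stand-in "cipher": XOR with key, rotate left one byte
    x = xor(b, key)
    return x[1:] + x[:1]

def dec_block(b, key):
    x = b[-1:] + b[:-1]
    return xor(x, key)

def cts_encrypt(plaintext, key, iv):
    assert len(plaintext) > BLOCK, "CTS needs more than one block"
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    tail_len = len(blocks[-1])
    prev, out = iv, []
    for p in blocks[:-1]:                     # plain CBC over the full blocks
        prev = enc_block(xor(p, prev), key)
        out.append(prev)
    e = out[-1]                               # ciphertext of the last full block
    c_last = e[:tail_len]                     # final (short) ciphertext block
    stolen = xor(blocks[-1] + bytes(BLOCK - tail_len), e)
    out[-1] = enc_block(stolen, key)          # re-encrypt the stolen block
    return b"".join(out) + c_last             # same length as the plaintext

def cts_decrypt(ciphertext, key, iv):
    tail_len = len(ciphertext) % BLOCK or BLOCK
    head = ciphertext[:-(BLOCK + tail_len)]
    c_prev = ciphertext[-(BLOCK + tail_len):-tail_len]
    c_last = ciphertext[-tail_len:]
    prev, out = iv, []
    for i in range(0, len(head), BLOCK):      # plain CBC for the leading blocks
        c = head[i:i + BLOCK]
        out.append(xor(dec_block(c, key), prev))
        prev = c
    d = dec_block(c_prev, key)
    e = c_last + d[tail_len:]                 # reconstruct the stolen block
    p_last = xor(d[:tail_len], c_last)
    p_prev = xor(dec_block(e, key), prev)
    return b"".join(out) + p_prev + p_last

key, iv = bytes(range(BLOCK)), bytes(BLOCK)
msg = b"exactly as long as the plaintext, no padding"
ct = cts_encrypt(msg, key, iv)
assert len(ct) == len(msg) and cts_decrypt(ct, key, iv) == msg

The point is only that the ciphertext comes out exactly as long as the
plaintext whenever there is more than one block; below one block you are back
to needing something like a VBSC.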
I might consider making a variant on the cipher I've recently posted,
but with variable sized blocks.
--
There are three methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
3) Insist that any apparent bugs were really intentional features.
------------------------------
From: Raphael Phan <[EMAIL PROTECTED]>
Subject: Probability Question on Square Attack
Date: Thu, 30 Nov 2000 02:42:01 GMT
Hi,
What is the probability that a randomly chosen set of texts forms a delta set
as used in the Square attack? In the case of Square, a block is a collection
of 16 bytes, hence a delta set is a set of texts in which one byte takes all
256 possible values while the other 15 bytes are held constant...
Raphael
------------------------------
From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: On mutation of crypto algorithms
Date: Thu, 30 Nov 2000 02:52:52 GMT
Tom St Denis wrote:
>
> In article <[EMAIL PROTECTED]>,
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> >
> >
> > Tom St Denis wrote:
> > >
> > > Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> >
> > > > An interesting question is how would a scaled-down version
> > > > of Rijndael compete with DES. My feeling of the gut is that
> > > > 20 rounds would probably suffice to provide a fairly safe
> > > > bet.
> > >
> > > Irrelevant. Rijndael cannot process 64-bit blocks. The closest
> thing
> > > is a 72-bit block.
> >
> > What hinders a scaling-down to 64 bit blocks?
>
> The fact that you cannot build a square with eight elements...
>
> Tom
What if we used smaller bytes? 16 bytes, 4 bits each.
--
There are three methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
3) Insist that any apparent bugs were really intentional features.
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: keysize for equivalent security for symmetric and asymmetric keys
Date: 30 Nov 2000 02:57:36 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)
Roger Schlafly wrote:
>I disagree. Let
> X := $ cost of MIPS-year
> Y := $ cost of gigabyte
> F(X,Y,n) := cost of breaking n-bit RSA
> G(X,Y,n) := cost of breaking n-bit ECC
> (with current algorithms, G does not depend on Y)
>
>Bob is saying:
>Estimate X,Y. Find m,n such that F(X,Y,m) = G(X,Y,n).
>Then m-bit RSA and n-bit ECC have comparable security.
>
>Don is saying:
>Estimate X. Let Y=0. Find m,n such that F(X,0,m) = G(X,0,n).
>Then m-bit RSA and n-bit ECC have comparable security.
Yes, I concur. Thank you for so clearly explaining your point!
I agree that it is somewhat bogus to use this to compare RSA and ECC,
if you want your comparison to be universally applicable. However, as
you say, it is fine if all you want is a lower bound on the parameters
needed to achieve your desired security level.
In other words, let
Z := $ cost of attacks you want to prevent
Estimate X. Find the smallest m',n' such that F(X,0,m') >= Z and
G(X,0,n') >= Z. Then either m'-bit RSA keys or n'-bit ECC keys will
provide your desired security level. The resulting keylengths might
not be necessary for security, but they will be sufficient.
Of course, comparing m' to n' is not justified. Comparing lower bounds
on a pair of absolute quantities is unlikely to tell you about the relation
of the two absolute quantities. So one simple answer is: Don't do that.
However, there is a slight tension here. In general, we can never find
F() or G() anyway; all that we can ever find is lower bounds on F() or G().
So, in some sense, we are always violating this rule, and always comparing
lower bounds rather than comparing absolute quantities. This is unavoidable.
It is a good idea to try to do everything in your power to minimize the
chance that this causes you to make bad decisions. Still, I think it
is impossible to completely avoid this type of fallacy when comparing
RSA and ECC (unless someone finds a reduction from RSA to ECC or vice
versa). It's all a matter of degree, isn't it?
Also, it depends on what degree of assurance you want that your lower
bounds won't be violated. To illustrate my point, suppose I'm the sort
of guy who is extremely conservative and unwilling to use a RSA key
(or ECC key) unless it is at least m' (or n') bits long, because I worry
that algorithmic improvements might make storage no longer the bottleneck.
Then I might reasonably compare m' to n'.
On the other hand, if I'm the sort who is willing to ignore the chance
that algorithmic improvements will make storage costs irrelevant, then
I might be happy using m (or n) bit keys, and in this case I might reasonably
compare m to n.
It all depends on how conservative the end user is. If the end user
wants to be conservative about improvements to storage complexity, then
she should use the comparison method you are attributing to Don. If the
end user wants to be less conservative about this issue, then it would
be more rational for her to use "Bob's method". Agreed?
------------------------------
From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: keysize for equivalent security for symmetric and asymmetric keys
Date: 30 Nov 2000 03:05:13 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)
Roger Schlafly wrote:
>David Wagner wrote:
>> The conservative assumption
>> is that advances in factoring might reduce the storage costs more than
>> they reduce the running time in factoring algorithms.
>
>Why is that conservative? There might be advances in factoring that
>reduce the running time more than storage costs.
Surely it is not the most conservative possible assumption, but perhaps
it is not entirely unreasonable to call it conservative.
Let < represent the relation "is more conservative than".
Consider the following three assumptions:
A_0: There will be no advances in factoring.
A_1: Advances in factoring might reduce the storage complexity of
factoring faster than its time complexity.
A_2: The time complexity of factoring might reduce faster than storage.
Then A_1 < A_0 and A_2 < A_0 (but A_1 and A_2 need not be comparable).
A_0 is a blatantly anti-conservative assumption. If you are willing to
call an assumption A conservative whenever A < A_0, then both A_1 and
A_2 are conservative assumptions. If not, well, my comments don't apply.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************