Well one reason people like Lim-Lee primes is that it's much faster to generate
them.  That is because prime density is lower for strong primes at the sizes
of p & q for p = 2q + 1, and you need to screen both p & q for primality.

With Lim-Lee, as you maybe saw in the paper, you just generate a few extra
small p_i values -- n of them where only k are needed -- then try combinations
C(n,k) until you find one for which p = 2*q*p_1*...*p_k + 1 is prime.  As the
p_i are small, they are fast and cheap to generate.
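As a rough sketch of that search (this is not GnuPG's actual code; the bit
sizes and pool size here are toy parameters chosen so the example runs
quickly), the idea looks like:

```python
import random
from itertools import combinations

def is_probable_prime(n, rounds=20):
    # Miller-Rabin probabilistic primality test.
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    # Draw random odd candidates of the given bit length until one is prime.
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(cand):
            return cand

def lim_lee_prime(q_bits=32, small_bits=16, k=3, pool_size=10):
    # Generate a pool of n small primes, then search subsets of size k
    # until p = 2 * q * p_1 * ... * p_k + 1 is prime.  The small primes
    # are cheap to generate, which is the point of the construction.
    q = random_prime(q_bits)
    pool = [random_prime(small_bits) for _ in range(pool_size)]
    while True:
        for subset in combinations(pool, k):
            p = 2 * q
            for pi in subset:
                p *= pi
            p += 1
            if is_probable_prime(p):
                return p, q, subset
        # No subset of this pool worked; draw a fresh pool and retry.
        pool = [random_prime(small_bits) for _ in range(pool_size)]
```

The win over p = 2q + 1 is that only q needs to be a large prime; the
C(n,k) subsets give many cheap candidates for p from one batch of small
primes.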

Adam

On Tue, Dec 18, 2012 at 08:16:01PM -0500, Jeffrey Walton wrote:
So, I've got to read through most of Section 4.

I'm not sure what to think of the shortcut of p = 2 q p_1 p_2 p_3 ... p_n + 1.

With p = 2q + 1, we could verify the [other party's] parameters
and stop processing. I believe the same is true for p = 2 p_1 q + 1
(which is basically p = q r + 1), but I could be wrong.
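For the p = 2q + 1 case that check is simple -- a sketch (the helper name
is hypothetical, not from any particular library):

```python
import random

def is_probable_prime(n, rounds=20):
    # Miller-Rabin probabilistic primality test.
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def validate_safe_prime(p):
    # For p = 2q + 1 both p and q = (p - 1) / 2 must be prime,
    # so a received p can be checked with no extra information.
    q, r = divmod(p - 1, 2)
    return r == 0 and is_probable_prime(p) and is_probable_prime(q)
```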

With p = 2 q p_1 p_2 p_3 ... p_n + 1, we don't have a witness to the
fitness of the keys generated by GnuPG, so we can't easily decide to
stop processing. Maybe I'm being too harsh and I should do the unique
factorization. But in that case, wouldn't it be easier to use p = 2q + 1
since I am validating parameters?
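If the factorization of p - 1 were published alongside p, checking it
would be mechanical; a sketch under that assumption (the function names
are hypothetical):

```python
import random

def is_probable_prime(n, rounds=20):
    # Miller-Rabin probabilistic primality test.
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def verify_lim_lee(p, q, small_primes):
    # Given the claimed factorization p = 2 * q * p_1 * ... * p_k + 1,
    # check that the product matches and every claimed factor is prime.
    prod = 2 * q
    for pi in small_primes:
        prod *= pi
    if p != prod + 1:
        return False
    return all(is_probable_prime(x) for x in [p, q, *small_primes])
```

Without that factorization in hand, the verifier is stuck, which is the
witness problem described above.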

Finally, an open question for me (which seems to be the motivation for
the change): how many folks are using, for example, ElGamal shared
decryption and ElGamal shared verification? Was the loss of
independent verification a good tradeoff *if* the feature is almost
never used?
_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography
