Yep, I see your aim from the start of the thread was to understand why
validation was failing.  And validation is good: many protocols can be
broken unintentionally, or sabotaged with plausible deniability, by a
coding error on just one side of the protocol.  I think it would be easy
enough to encode the p_i values in the public key format and then run
primality tests on them.  (Format changes... yuck.)  Also, as the p_i are
really small, perhaps you could factor n = p_1 * ... * p_k moderately
quickly; it's been quite a while since factoring a 384-bit key was a big
deal.
But I imagine most of these ElGamal keys are 2048 bits or more (with q
sized to match), so that is probably heading towards computational
infeasibility; the only real chance is if people would communicate the
p_i values (or perhaps the seed for re-generating them).
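
Something along these lines is what I mean by running the tests, if the
p_i did travel with the key (rough Python sketch, sympy for the primality
tests, the function name is made up):

from math import prod
from sympy import isprime

# Hypothetical check, assuming p, q and the small factors p_i all arrive
# alongside the key.  Verifies the Lim-Lee structure p = 2*q*p_1*...*p_k + 1
# and that every component passes a primality test.
def check_lim_lee(p, q, small_factors):
    if 2 * q * prod(small_factors) + 1 != p:
        return False                               # structure does not match
    if not (isprime(p) and isprime(q)):
        return False                               # p or q fails the test
    return all(isprime(f) for f in small_factors)  # each p_i must be prime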

I presume likely key sizes will be (from DSA) (|p|,|q|) = (1024,160),
(2048,224), (2048,256), (3072,256), and factoring a 512-bit number is
still painful.

Adam

On Tue, Dec 18, 2012 at 08:50:11PM -0500, Jeffrey Walton wrote:
Thanks Adam,

With Lim-Lee, as you maybe saw in the paper, you just generate a few extra
small p_i values (n of them where only k are needed), then try combinations
C(n,k) until you find one for which p = 2*q*p_1*...*p_k + 1 is prime.  As
the p_i are small they are fast and cheap to generate.
So, that's the "good case." How about the "bad case(s)"? What happens
when the bad guy generates or influences the supposed Lim-Lee prime?
He/she will only be caught if someone performs the unique
factorization (versus the previous primality test).

I think the paper's conclusion says it all (Section 5): "...
therefore, our attack can be easily prevented if the relevant protocol
variables are properly checked." It appears many folks did not
validate parameters - presumably due to efficiency/speed/laziness -
and now everyone is penalized (even the folks who were validating
parameters).
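
For concreteness, the kind of "properly checked" I have in mind is plain
DSA-style parameter sanity checking, roughly like the Python sketch below
(sympy for the primality tests); whether this matches the paper's exact
checks is my guess:

from sympy import isprime

# Illustrative only: classic domain parameter sanity checks.
def validate_domain_params(p, q, g):
    if not (isprime(p) and isprime(q)):
        return False                  # both p and q must be prime
    if (p - 1) % q != 0:
        return False                  # q must divide the group order p - 1
    if not (1 < g < p - 1):
        return False                  # reject degenerate generators
    return pow(g, q, p) == 1          # g must generate the order-q subgroup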

Something easy has been made hard. Folks who specialize in this type
of analysis are probably pissing their pants because they are laughing
so hard.

Jeff

On Tue, Dec 18, 2012 at 8:29 PM, Adam Back <a...@cypherspace.org> wrote:
Well, one reason people like Lim-Lee primes is that they are much faster
to generate.  That is because the density of strong primes is lower at the
sizes of p & q needed for p = 2q + 1, and you need to screen both p & q
for primality.
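
To illustrate the cost, a toy Python sketch of safe-prime generation
(sympy for the primality tests; real code would sieve candidates against
small primes first, so treat this as illustrative only):

import secrets
from sympy import isprime

# Every candidate q has to pass a primality test, then p = 2q + 1 has to
# pass another, and safe primes are sparse at these sizes.
def gen_safe_prime(bits):
    while True:
        q = secrets.randbits(bits - 1) | (1 << (bits - 2)) | 1  # odd, full-size
        if isprime(q) and isprime(2 * q + 1):                   # screen q, then p
            return 2 * q + 1, q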

With Lim-Lee, as you maybe saw in the paper, you just generate a few extra
small p_i values (n of them where only k are needed), then try combinations
C(n,k) until you find one for which p = 2*q*p_1*...*p_k + 1 is prime.  As
the p_i are small they are fast and cheap to generate.
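
In toy Python form (pool size n, subset size k and the small-prime bit
length are made-up parameters; sympy does the primality tests):

import secrets
from itertools import combinations
from math import prod
from sympy import isprime, nextprime

# Build a pool of n small primes, then walk the C(n,k) subsets until
# p = 2*q*p_1*...*p_k + 1 is prime.
def gen_lim_lee(q, small_bits, n, k):
    pool = [nextprime(secrets.randbits(small_bits)) for _ in range(n)]
    for subset in combinations(pool, k):
        p = 2 * q * prod(subset) + 1
        if isprime(p):
            return p, subset
    return None   # unlucky pool; retry with fresh small primes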

Adam


On Tue, Dec 18, 2012 at 08:16:01PM -0500, Jeffrey Walton wrote:

So, I've got to read through most of Section 4.

I'm not sure what to think of the shortcut of p = 2 q p_1 p_2 p_3 ... p_n + 1.

With p = 2q + 1, we could verify the [other party's] parameters
and stop processing. I believe the same is true for p = 2 p_1 q + 1
(which is basically p = q r + 1), but I could be wrong.

With p = 2 q p_1 p_2 p_3 ... p_n + 1, we don't have a witness to the
fitness of the keys generated by GnuPG. So we can't easily decide to
stop processing. Maybe I'm being too harsh and I should do the unique
factorization. But in that case, wouldn't it be easier to use p = 2q + 1
since I am validating parameters?
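
By "verify and stop processing" I mean something as cheap as the sketch
below (Python, sympy's primality test); the Lim-Lee form would need the
p_i, or a seed to regenerate them, on top of this:

from sympy import isprime

# p = 2q + 1 is self-describing: no extra data is needed to check it.
def is_safe_prime(p):
    return p > 5 and isprime(p) and isprime((p - 1) // 2)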

Finally, an open question for me (which seems to be the motivation for
the change): how many folks are using, for example, ElGamal shared
decryption and ElGamal shared verification? Was the loss of
independent verification a good tradeoff *if* the feature is almost
never used?
_______________________________________________
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography
