>>> Oops! Wrong patch! Correct one attached. If you feel like testing the
>>> wrong one, go ahead, but there are some later non-essential adjustments.
>>>
>>> diff --git a/crypto/ec/ecp_nistz256.c b/crypto/ec/ecp_nistz256.c
>>> index bf3fcc6..33b07ce 100644
>>> --- a/crypto/ec/ecp_nistz256.c
>>> +++ b/crypto/ec/ecp_nistz256.c
>>> @@ -637,7 +637,7 @@ static void ecp_nistz256_windowed_mul(const EC_GROUP *group,
>>>          ecp_nistz256_point_double(&row[10 - 1], &row[ 5 - 1]);
>>>          ecp_nistz256_point_add   (&row[15 - 1], &row[14 - 1], &row[1 - 1]);
>>>          ecp_nistz256_point_add   (&row[11 - 1], &row[10 - 1], &row[1 - 1]);
>>> -        ecp_nistz256_point_add   (&row[16 - 1], &row[15 - 1], &row[1 - 1]);
>>> +        ecp_nistz256_point_double(&row[16 - 1], &row[ 8 - 1]);
>>>      }
>>>
>>>      index = 255;
>> I can believe that this fixes the issue, but it's just masking it, no?

The underlying problem is that the assembly routines return "partially
reduced" results: a routine may return result + modulus whenever that
value still fits in 256 bits. The rationale is that
((x+m)*y)%m = (x*y+m*y)%m = x*y%m + m*y%m, and the last term is 0. While
this does hold across a series of multiplications, I failed to
recognize that there are corner cases in non-multiplication operations:
x and x+m are congruent mod m but not bitwise identical, so code paths
that compare values can be misled. I'm preparing an update...
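
To make that corner case concrete, here is a toy sketch with a small
modulus (mul_partial and M are made up for illustration; the real
routines operate on 256-bit values in assembly):

#include <stdio.h>

#define M 251u  /* toy modulus; the real code works mod the P-256 prime */

/* Congruent to (x*y) mod M, but deliberately returns the "partially
 * reduced" representative r + M, mimicking what the assembly may do. */
static unsigned mul_partial(unsigned x, unsigned y)
{
    return (x * y) % M + M;
}

int main(void)
{
    unsigned a = 7, b = 11, c = 13;

    /* A chain of multiplications tolerates the extra +M: both print 248. */
    printf("chained: %u vs %u\n",
           mul_partial(mul_partial(a, b), c) % M,
           (a * b * c) % M);

    /* But congruent values need not be bitwise equal, which is the kind
     * of corner case a non-multiplication code path can stumble over. */
    unsigned x = (a * b) % M;        /* fully reduced:      77 */
    unsigned y = mul_partial(a, b);  /* partially reduced: 328 */
    printf("equal? %s\n", x == y ? "yes" : "no");
    return 0;
}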

