Hi Ilari:

I have trouble understanding your reasoning, since it seems to drift from one email to the next.

I suggest we tackle this topic systematically, so as to get clarity:
a) Specification of ECDSA itself.
Here, you claimed that ECDSA is not well-defined.
Here, I refuted this with reference to SEC1, BSI, ANSI X9.62-2005, FIPS 186-4.
b) Specification of hash functions.
Here, you claimed that the output of hash functions (in the context of ECDSA) is an octet string. Here, I refuted this with reference to FIPS 186-4, FIPS 180-4, and FIPS 202, which all specify the output as a bit string.
c) Specification of a specific hash function, viz. SHAKE-128 or SHAKE-256.
Here, you claimed that the conversion rules between octet strings and bit strings are unclear. To me, if true, this is a problem with FIPS 202 and not with the specification of ECDSA. We can go over this, pending my request below.

Could you please confirm that you agree with a) and b) above, so that we can then restrict discussion to c) only?

Once we have agreed-upon closure on a) and b), we can discuss c) itself.

Rene


On 2021-11-25 9:55 a.m., Ilari Liusvaara wrote:
On Fri, Nov 19, 2021 at 11:25:49AM -0500, Rene Struik wrote:
Hi Ilari:

Could you elaborate on "There are some open documents that have description
of ECDSA, but that description might have been "simplified" using invalid
assumptions. So these descriptions are suspect at best."?

I don't understand why one would read "some open documents" and speculate on
its accuracy, rather than read the specification of ECDSA (which has been
out for over 20 (!) years). The actual specification in SEC1, BSI, ANSI
X9.62-2005, FIPS Pub 186-4 is only two pages, so it should be a two-minute
read to remind oneself about the correct specifications. No need to look up
blog posts by bitcoin developers, YouTube videos, etc.

The data conversion rules in SEC1 are described in Section 2.3 of SEC 1 (see
[1]), which treats data conversions between octet strings, bit strings,
finite field elements, and integers, where the output of a hash function is
a bit-string of fixed length. {SEC1 incorrectly suggests that the output of
a hash function is an octet string, but due to the careful conversion rules
in that document the description is still correct in any specification (see
[1], Section 4.1.3, Step 5.1 - which describes conversion of hash function
output to bit string in ECDSA and the subsequent conversion to integer in
Steps 5.3, 5.4, after potential truncation in Step 5.2.}
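The conversion described in Steps 5.1-5.4 above can be sketched as follows (a minimal illustration of the SEC1 procedure, not code from any library; the helper name and the toy 13-bit order are my own):

```python
# Sketch of SEC1 Section 4.1.3, Steps 5.1-5.4: the hash output, viewed
# as a bit string, is truncated to the bit length of the group order n
# and the result is read as a big-endian integer.
def hash_bits_to_integer(hash_bits: str, order_bitlen: int) -> int:
    truncated = hash_bits[:order_bitlen]   # Step 5.2: keep leftmost bits only
    return int(truncated, 2)               # Steps 5.3/5.4: bit string -> integer

# Toy example: a 16-bit "hash" truncated against a 13-bit group order.
e = hash_bits_to_integer('0111010011000001', 13)
```

The point of the sketch is that everything happens at the bit-string level; how an octet string is read as a bit string is a separate, earlier conversion.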

I updated the lwig curve document on June 9, 2021 (nearly half a year ago), where I included an example of how ECDSA truncation and conversion work (see [2]).
The problem is that descriptions of ECDSA based on bit strings and
descriptions based on octet strings are not equivalent when the hash
function defines an unusual mapping between bit strings and octet
strings. Unfortunately, SHA-3/SHAKE does exactly this (Appendix B of FIPS Pub 202).
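The discrepancy is visible on a single octet (an illustrative sketch; the MSB-first reading is the conventional one used with the SHA-2 era specifications, while the reversed reading follows the FIPS 202 bit-ordering convention):

```python
octet = 0x74
msb_first = f'{octet:08b}'          # conventional MSB-first reading: '01110100'
fips202   = f'{octet:08b}'[::-1]    # FIPS 202 bit ordering:          '00101110'
print(msb_first, fips202)           # the two bit strings differ
```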

In situations like this, one must resolve what happens based on the
definitive specification, which in the case of ECDSA is ANSI X9.62-2005.
The above text confirms that it defines ECDSA in terms of bit strings.

If one has an ECDSA library with a signhash-style interface written to
the SEC1 or BSI specifications (e.g. libnettle), there is a trick to
make things work correctly with SHA-3/SHAKE: bit-reverse all the octets
of the hash before signing. This works regardless of truncation.
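The trick above can be sketched as a tiny helper (a hypothetical function of my own, not taken from libnettle or any other library):

```python
def bitreverse_octets(data: bytes) -> bytes:
    # Reverse the bit order within each octet, mapping the FIPS 202
    # (LSB-first) reading back to the conventional MSB-first reading.
    return bytes(int(f'{b:08b}'[::-1], 2) for b in data)

# e.g. 0x74 (0111 0100) becomes 0x2E (0010 1110),
#      0xE6 (1110 0110) becomes 0x67 (0110 0111):
digest = bytes([0x74, 0xE6])
fixed = bitreverse_octets(digest)
```

One would apply this to the SHAKE output once, then hand the result to the signhash interface unchanged.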


And looking at the LWIG curves draft, I see that Sections Q.3.2 and Q.3.3
have test vectors. I can't get those to match up; specifically, things
go off the rails in the conversion from E to e.

In Section Q.3.2, the first octet of E is 0x74, which implies that
the first octet of e should be 0x05. Instead, the section gives 0x0E
as the first octet of e.

0x74 -> 0010 1110 -> 000 0010 1110 -> 0000 0101 -> 0x05.

And computing e from E explicitly using two different methods (one
directly picks the bits from the hash and assembles them as e, and
the other bit-reverses the octets, then reads the number and divides
the excess bits away) gives:

e=0x05c6e240e373b867cbc37895b0f362fd9445bc74617e92839f13331ec9f40698.

(Which also agrees with above on the first octet.)

In Section Q.3.3, the first octet of E is 0xE6, which implies that
the first octet of e should be 0x19. Instead, the section gives 0x39
as the first octet of e.

0xE6 -> 0110 0111 -> 00 0110 0111 -> 00011001 -> 0x19.

And direct computation of e from E using the same methods yields:

e=0x19d6f88f024cf0ef2b1b4bc6aef97946119a62cfdbede2a724ba64700230f5f408b85930f88022a5e891b0e638201aa0db1b607475f64c86.

(Which also agrees with above on the first octet.)
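Both first-octet checks above can be reproduced mechanically (a sketch with a helper of my own naming; the `excess_bits` values 3 and 2 are assumptions, namely the gap between the 256- and 448-bit hash lengths and group-order bit lengths of 253 and 446 respectively):

```python
def first_octet_of_e(first_octet_E: int, excess_bits: int) -> int:
    # FIPS 202 ordering: read the octet's bits LSB-first, then prepend
    # `excess_bits` zero bits (truncating the hash to the group-order
    # bit length leaves that many leading zeros in e's big-endian form).
    bits = f'{first_octet_E:08b}'[::-1]
    return int('0' * excess_bits + bits[:8 - excess_bits], 2)

print(hex(first_octet_of_e(0x74, 3)))  # Section Q.3.2: 0x05
print(hex(first_octet_of_e(0xE6, 2)))  # Section Q.3.3: 0x19
```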




-Ilari


--
email: [email protected] | Skype: rstruik
cell: +1 (647) 867-5658 | US: +1 (415) 287-3867

_______________________________________________
COSE mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/cose
