Hi Ilari:

Thanks for your note.

To summarize:
a) You agree that ECDSA, in all its incarnations, is well-defined (where H is clearly stipulated to be a hash function with bit-string outputs).
b) You believe that the example of Appendix Q.3.2 (w/ SHAKE128, output size d0=256 bits) and that of Appendix Q.3.3 (w/ SHAKE256, output size d0=512 bits) use a numerical value E that has been incorrectly computed, where your belief is based on a particular interpretation of FIPS 202. I think you will agree with me that the remainder of the computation is consistent with its stated inputs.

All right, conclusion on my part:
i) [CLOSED] ECDSA w/ SHA256, ECDSA w/ SHAKE128 and ECDSA w/ SHAKE256 have been correctly specified.
ii) [TO BE CHECKED] Verify the output E of the two examples in Q.3.2 and Q.3.3.

The only action item would be to verify ii) above, which (if your claim is correct) would result in replacing the single-line E strings in both examples with other strings and recomputing the values that depend on E in each case.

I will look into this and let you know my findings.

Please note, however, that this has no impact on the specification of ECDSA and the IANA COSE codepoints for the invocations with SHAKE* functionality.

To be frank: your comment is just about an (in your mind) one-line glitch in two informational examples I included as a courtesy to readers, to illustrate the specification.

Rene


On 2021-12-01 1:50 p.m., Ilari Liusvaara wrote:
On Wed, Dec 01, 2021 at 10:01:34AM -0500, Rene Struik wrote:
Hi Ilari:

Can we please get closure on the a) and b) points *today*? This does not
require programming, experiments, etc., since it is simply a
less-than-one-minute exercise.
a)  ECDSA with hash H is well-defined iff H has well-defined bit string
output.

SHA-2 and SHAKE have well-defined bit string outputs, so all uses of
ECDSA in draft-ietf-lwig-curve-representations are well-defined.


b) The hash function output ECDSA uses is a bit string.


With a) and b) closed... to c):


c) SHA-2 and SHAKE have different bit-endianness. And this absolutely
has to be taken into account in software ECDSA implementations that
implement both hashes.

The easiest way to take this difference into account is to perform an
octet-wise bit reversal on the hash value if the hash function used is
SHA-3 or SHAKE. The rest can then be the same as it is for SHA-2.


And unfortunately the part of FIPS 202 that warns about this difference
is written in a really unclear way:

"The convention for interpreting hexadecimal strings as bit strings for
the inputs and outputs of the SHA-3 examples is different from the
convention for other functions on the examples page."

- FIPS 202, section B, page 25 (page 33 in the PDF).


This is closely related to why I disagree with the test vectors in
sections Q.3.2 and Q.3.3: those test vectors seem to incorrectly
assume that SHAKE and SHA-2 have the same bit-endianness.


On 2021-11-27 12:59 p.m., Rene Struik wrote:
Hi Ilari:

Could you please explicitly confirm closure on a) and b), as I
requested, so that we can restrict attention to c) below?

Your earlier statement, "ECDSA is well-defined if instantiated with
SHA-1, SHA-2, SHA-3 or SHAKE. The conversion rules for SHAKE are not
unclear, they are defined in section B of FIPS 202", still leaves this
nebulous.

If you believe something of ECDSA is ill-defined, please indicate which
substep of the signing operation in Appendix Q.1 of the lwig curve draft
[1] is unclear and suggest edits:


Ref: [1] 
https://datatracker.ietf.org/doc/html/draft-ietf-lwig-curve-representations-22#appendix-Q.1

Thanks, Rene

On 2021-11-27 6:15 a.m., Ilari Liusvaara wrote:
On Thu, Nov 25, 2021 at 10:32:29AM -0500, Rene Struik wrote:
Hi Ilari:

I have trouble trying to understand your reasoning, since it seems to
drift from one email to the next.

I suggest we tackle this topic systematically, so as to get clarity:
a) Specification of ECDSA itself.
Here, you claimed that ECDSA is not well-defined.
Here, I refuted this with reference to SEC1, BSI, ANSI X9.62-2005, FIPS
186-4.

b) Specification of hash functions.
Here, you claimed that the output of hash functions (in the context of
ECDSA) is an octet string.
Here, I refuted this with reference to FIPS 186-4, FIPS 180-4,
and FIPS 202,
which all specify the output as a bit string.

c) Specification of a specific hash function, viz. SHAKE-128 or
SHAKE-256.
Here, you claimed that the octet string vs. bit string conversion rules
are unclear.
To me, if true, this is a problem with FIPS 202 and not with the
specification of ECDSA. We can go over this, pending request below.

Could you please confirm that you agree with a) and b) above, so that
we can then restrict discussion to c) only?

Once we have agreed-upon closure on a) and b), we can discuss c)
itself.
ECDSA is well-defined if instantiated with SHA-1, SHA-2, SHA-3 or SHAKE.
The conversion rules for SHAKE are not unclear, they are defined in
section B of FIPS 202.


To give an example, let's go through converting m to e for the test
vectors in section Q.3.2 of draft-ietf-lwig-curve-representations-22,
closely following the definitions of ECDSA and SHAKE. The resulting e
fails to agree:


1) The message to sign is:

"example ECDSA w/ Wei25519 and SHAKE128"

This is taken as correct by definition.


2) Computing 32 octets of SHAKE-128 of the message gives (in hex):

74ec48e0d8b9c37c7ad823b5e1d9e83745b4c7c5d02f29381f99196ff2052ce3

Which agrees with what test vectors claim.


3) However, in order to give octet string output, SHAKE internally
applies the b2h algorithm (FIPS 202 s. B.1). Doing h2b (FIPS 202 s. B.1)
in order to undo it and recover the bit string output gives:

0010111000110111000100100000011100011011100111011100001100111110
0101111000011011110001001010110110000111100110110001011111101100
1010001000101101111000111010001100001011111101001001010000011100
1111100010011001100110001111011001001111101000000011010011000111

This is the SHAKE output as bit string.
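For the record, this h2b step can be reproduced with a few lines of
Python (the digest value is the one from step 2 above; within each
octet, h2b emits the least significant bit first):

```python
digest = bytes.fromhex(
    "74ec48e0d8b9c37c7ad823b5e1d9e83745b4c7c5d02f29381f99196ff2052ce3")

# FIPS 202 s. B.1 h2b: for each octet, output its bits LSB-first,
# undoing the b2h conversion used when printing hex digests.
bits = "".join(format(b, "08b")[::-1] for b in digest)

# Print in four rows of 64 bits, matching the layout above.
for i in range(0, 256, 64):
    print(bits[i:i + 64])
```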


4) By definition of ECDSA for the instantiating curve, that hash
output as bits is truncated to its 253 leftmost bits (the bit length
of the group order n), giving:

0010111000110111000100100000011100011011100111011100001100111110
0101111000011011110001001010110110000111100110110001011111101100
1010001000101101111000111010001100001011111101001001010000011100
1111100010011001100110001111011001001111101000000011010011000


5) By definition of ECDSA, that bit string is the value of e.
Converting it to base 10 gives:

2612961505806908197104377195031002667796773794190844346899366204629451015832


This disagrees with the test vectors, which claim:
6610721166316936979487388405471295120434869298500984291368316241434902570396.



6) Converting that into hexadecimal gives:

05C6E240E373B867CBC37895B0F362FD9445BC74617E92839F13331EC9F40698

And this also disagrees with the test vectors, which claim:
0e9d891c1b17386f8f5b0476bc3b3d06e8b698f8ba05e52703f3232dfe40a59c.
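Steps 3) through 6) can be replayed mechanically; a small Python
sketch, starting from the step-2 digest:

```python
digest = bytes.fromhex(
    "74ec48e0d8b9c37c7ad823b5e1d9e83745b4c7c5d02f29381f99196ff2052ce3")

# Step 3: recover the bit string (h2b, FIPS 202 s. B.1).
bits = "".join(format(b, "08b")[::-1] for b in digest)

# Step 4: truncate to the 253 leftmost bits (bit length of the group order).
truncated = bits[:253]

# Step 5: interpret the truncated bit string as the integer e.
e = int(truncated, 2)
print(e)
print("%064x" % e)  # step 6: the same value in hexadecimal
```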


There is a trick to calculate this: taking the octet string output,
bit-reversing each octet, interpreting the result as a big-endian
integer, and doing a right shift by 3 bits gives the same 5C6...698
result. This is especially handy as it does not require any
modifications to the hash or signhash steps.
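In Python terms, the trick amounts to (digest value from step 2):

```python
digest = bytes.fromhex(
    "74ec48e0d8b9c37c7ad823b5e1d9e83745b4c7c5d02f29381f99196ff2052ce3")

# Bit-reverse each octet of the octet string output...
rev = bytes(int(format(b, "08b")[::-1], 2) for b in digest)

# ...interpret the result as a big-endian integer, then shift right
# by 3 bits (dropping 256 - 253 = 3 bits).
e = int.from_bytes(rev, "big") >> 3
print("%x" % e)  # 5c6e240e373b867cbc37895b0f362fd9445bc74617e92839f13331ec9f40698
```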


The e I compute for the test vectors in section Q.3.3 also disagrees,
for similar reasons.

-Ilari


--
email: [email protected] | Skype: rstruik
cell: +1 (647) 867-5658 | US: +1 (415) 287-3867

_______________________________________________
COSE mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/cose
