On Wednesday, August 9, 2017 at 12:05:32 AM UTC-4, Peter Gutmann wrote:
> Matthew Hardeman via dev-security-policy
> <firstname.lastname@example.org> writes:
> >I merely raise the point that IF the framers of the 20-byte rule did, in
> >fact, simultaneously intend that arbitrary SHA-1 hash results should be
> >able to be stuffed into the serial number field AND that the DER-encoded
> >integer field value must be a positive integer (requiring insertion of a
> >leading 0x00 byte whenever the high-order bit would otherwise be 1, so
> >the value is read as positive per the encoding), THEN it must follow
> >that, at least in the minds of those who engineered the rule, the
> >inserted 0x00 byte is not part of the 20-byte maximum size of the value,
> >AS legitimate 20-byte SHA-1 values include values whose high-order bit
> >is 1, and without pre-padding the proper interpretation of such a value
> >would be as a negative integer.
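For the benefit of anyone following along, the padding behavior being argued about can be sketched as follows. This is a minimal illustration of DER INTEGER content-octet encoding, not a full ASN.1 library:

```python
def der_integer_content(value: int) -> bytes:
    """Return the content octets of a DER INTEGER for a nonnegative value."""
    if value == 0:
        return b"\x00"
    raw = value.to_bytes((value.bit_length() + 7) // 8, "big")
    # If the high-order bit of the leading octet is 1, DER would read the
    # value as negative, so a 0x00 octet is prepended to keep it positive.
    if raw[0] & 0x80:
        raw = b"\x00" + raw
    return raw

# A 20-byte SHA-1-style value whose top bit is set needs the pad octet:
high = int.from_bytes(b"\x80" + b"\x00" * 19, "big")
print(len(der_integer_content(high)))  # 21 content octets

# A 20-byte value whose top bit is clear encodes in exactly 20 octets:
low = int.from_bytes(b"\x7f" + b"\xff" * 19, "big")
print(len(der_integer_content(low)))   # 20 content octets
```

So roughly half of all 20-byte hash outputs would encode to 21 content octets, which is exactly the "20 + 1" case under discussion.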
> That sounds like sensible reasoning. So you need to accept at least 20 + 1
> bytes, or better yet just set it to 32 or 64 bytes and be done with it because
> there are bound to be implementations out there that don't respect the 20-byte
> limit. At the very least though you'd need to be able to handle 20 + 1.
I see. So the solution to standards non-compliance that creates compatibility
issues is to invent arbitrary standards (32 or 64 bytes)? How does that align
with https://www.mozilla.org/en-US/about/manifesto/#principle-06 ?
The original language in RFC 2459 restricted the field to INTEGER with no
length limit (thus, unbounded). This was reformed in RFC 3280, which
introduced the language imposing an upper bound of 20 octets - a bound that,
by virtue of X.690, clearly applies to the encoded value. Coupled with the
'positive integer' requirement, this limits the length to 20 octets: there is
no "20 plus padding", because the guarantee of a positive integer is a
transformation that happens before the conversion to octets, the result is
limited to 20 octets, and those octets are the result of encoding under the
applicable rules (BER or DER).
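The practical consequence of reading the limit this way is straightforward: a CA that wants random serials simply keeps the positive value within 159 bits, so the leading octet's high bit is always 0 and the encoded content never exceeds 20 octets. A sketch (the function name is illustrative, not from any RFC):

```python
import secrets

def serial_within_20_octets() -> int:
    """Draw a random positive serial whose DER INTEGER content fits in 20 octets.

    Keeping the value below 2**159 guarantees the top bit of the leading
    octet is 0, so no 0x00 pad octet is ever needed and the encoded
    content octets never exceed 20.
    """
    while True:
        value = secrets.randbits(159)
        if value > 0:  # serial numbers must be positive
            return value

s = serial_within_20_octets()
encoded = s.to_bytes((s.bit_length() + 7) // 8, "big")
assert len(encoded) <= 20 and not (encoded[0] & 0x80)
```

Under this reading there is no need to stuff a full 160-bit hash into the field in the first place, and hence no need for a "20 + 1" carve-out.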
So no, this attempt at retro-analyzing the limit as 'large enough to fit
SHA-1' fits neither the historic context nor the text, and the argument for
accepting arbitrary lengths (i.e. actively ignoring 3280) is equally silly.
dev-security-policy mailing list