Hi,

yet another question on the fingerprints. My context is that I am
thinking what I need to compare in order to authorize via fingerprints.

Text in question is:

===
The RECOMMENDED mechanism to generate a fingerprint is to take the
SHA-1 hash of the certificate and convert the 20 byte result into 20
colon separated, hexadecimal bytes, each represented by 2 uppercase
ASCII characters.  When a fingerprint value is displayed or
configured the algorithm used to generate the fingerprint SHOULD be
indicated.
===

What is "the algorithm used to generate..."? Is it SHA-1 et al., i.e.
the hash algorithm used? Or is it actually the algorithm that was
used to generate the fingerprint?

If it is the former, it sounds like I should compare the hash values
and not the fingerprint strings themselves. So

55:D8:43:57:39:6C:23:0F:86:B1:EB:93:1E:F3:09:DE:7B:8B:62:70
55-D8-43-57-39-6C-23-0F-86-B1-EB-93-1E-F3-09-DE-7B-8B-62-70

are identical (it is just RECOMMENDED to use colons). However, this
assumes that the fingerprint is always a hash. In this case, I think it
would be preferable to talk directly about the hash values.
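If only the hash octets matter, the comparison could normalize away the
separator before comparing. A minimal Python sketch (the helper name
normalize_fingerprint is my own, not from the draft):

```python
def normalize_fingerprint(fp: str) -> str:
    # Keep only the hex digits, uppercased, so that colon- and
    # dash-separated renderings of the same hash compare equal.
    return "".join(ch for ch in fp.upper() if ch in "0123456789ABCDEF")

a = "55:D8:43:57:39:6C:23:0F:86:B1:EB:93:1E:F3:09:DE:7B:8B:62:70"
b = "55-D8-43-57-39-6C-23-0F-86-B1-EB-93-1E-F3-09-DE-7B-8B-62-70"
print(normalize_fingerprint(a) == normalize_fingerprint(b))  # True
```

Of course, this only works under the assumption that the fingerprint is
always a hex encoding of hash octets.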

If the fingerprint is not necessarily a hash, I need to compare the
actual fingerprint string, i.e. the ASCII representation. Then the two
strings above would be different. That could cause interop problems.

I propose that we strictly define fingerprints to be arbitrarily long 
printable USASCII. If the fingerprint contains unprintable data, the
whole string must be encoded as a set of octets represented by 2 USASCII
hex characters delimited by colons - or we may specify this format for
all cases. This does not tie us to hashes but prevents interoperability
problems due to different formats.
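The colon-delimited encoding I am proposing is simple to produce. A
Python sketch (the function name and the input data are placeholders of
my own, not part of the draft):

```python
import hashlib

def encode_fingerprint(octets: bytes) -> str:
    # One pair of uppercase USASCII hex characters per octet,
    # delimited by colons, as in the RECOMMENDED format.
    return ":".join(f"{b:02X}" for b in octets)

# Example: a SHA-1 digest (20 octets) of some placeholder input
# standing in for the certificate bytes.
digest = hashlib.sha1(b"placeholder certificate bytes").digest()
print(encode_fingerprint(digest))
```

Applied uniformly, the same encoder would cover both hash-based
fingerprints and arbitrary octet strings.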

Rainer

_______________________________________________
Syslog mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/syslog