Several people have expressed a desire to be kept informed about RPKI validator testing, so I'm sending a brief summary to the list. At IETF 82, Rob Austein, Tim Bruijnzeels, and I did some more RPKI validator testing. If there are other validator implementers out there, please let us know so we can include you.

Data Sets

Good Data: The well-known trust anchors list that Rob assembled
http://subvert-rpki.hactrn.net/trunk/rcynic/sample-trust-anchors/

Bad Data: BBN's RPKI Syntax Conformance repository
TA: rsync://rpki.bbn.com/conformance/root.cer

Previously we compared validators on good data. This time we added specially crafted bad data, where each of 161 input files was designed to violate a single item in the spec. The violations range from mundane (negative serial number) to serious (SKI != hash of public key), but we intended each case to be detectable from the file in isolation, without referencing other files.
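As an illustration of the kind of standalone check involved, here is a minimal sketch of the SKI test (the resource-cert profile defines the SKI as the SHA-1 hash of the subjectPublicKey BIT STRING value). The function name and byte strings below are hypothetical, and real code would first extract the key bits from the certificate's DER encoding:

```python
import hashlib

def ski_matches(subject_public_key_bits: bytes, ski: bytes) -> bool:
    # The profile requires the SKI to be the 160-bit SHA-1 hash of the
    # value of the subjectPublicKey BIT STRING.
    return hashlib.sha1(subject_public_key_bits).digest() == ski

key_bits = b"stand-in for real DER key material"   # hypothetical input
correct_ski = hashlib.sha1(key_bits).digest()
bad_ski = bytes(20)                                # deliberately wrong SKI

print(ski_matches(key_bits, correct_ski))   # True
print(ski_matches(key_bits, bad_ski))       # False
```

A conformance case with a deliberately wrong SKI should fail a check of this shape regardless of what the rest of the repository looks like.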

Overall Results

The "Good Data" testing confirmed that all three validators currently agree, modulo some known minor issues/differences:
- rcynic: key usage checking
- RIPE: stale CRL strictness
- BBN: bottom-up processing, scope of subdir chasing

The "Bad Data" testing is a work-in-progress, but the BBN conformance test cases have already proven useful in revealing corner cases in the validators. All three validators (rcynic, RIPE, BBN) will benefit from more robust error handling due to this test set. Sunday's session was also useful for debugging the syntax conformance cases themselves. Credit where it's due: various Cert/CRL AKI issues (thanks Rob), various CMS encoding issues + "good" CMS case for contrast (thanks Tim), missing top-level MFT/CRL (thanks Tim).

Another useful side-effect of the testing was that it raised some questions about relying party gray areas.

Relying party gray areas

The specs leave room for the relying party to decide what to do with imperfect but not completely invalid objects. This is for good reason. But I believe implementers could benefit from an informational doc or BCP to guide them in these gray areas. Here are two examples we came across:

1. An invalid CRL casts doubt on the manifest in the same directory and, depending on the validator's strictness, may invalidate it outright. This is because the EE cert in the manifest is no longer beyond question -- it *could* have been revoked by a CRL we cannot check. In a sense, rejecting the manifest is the strictest interpretation of the spec, but one could also imagine wanting more resilience against an attacker who can remove a CRL. This is currently left up to the RP, and the current implementations differ.
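One way to make this choice explicit is a strictness knob in the RP. This is a hypothetical sketch (CrlStatus and manifest_usable are invented names, not taken from any of the three validators), just to show the two policies side by side:

```python
from enum import Enum

class CrlStatus(Enum):
    VALID = "valid"
    MISSING_OR_INVALID = "missing-or-invalid"

def manifest_usable(crl_status: CrlStatus, strict: bool) -> bool:
    """Decide whether to trust a manifest when the covering CRL is bad.

    Strict RPs reject: the manifest's EE cert *could* have been revoked.
    Lenient RPs accept: resilience against an attacker who deletes the CRL.
    """
    if crl_status is CrlStatus.VALID:
        return True
    return not strict

print(manifest_usable(CrlStatus.MISSING_OR_INVALID, strict=True))   # False
print(manifest_usable(CrlStatus.MISSING_OR_INVALID, strict=False))  # True
```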

2. AIA correctness. Does res-certs require validators to reject a certificate with a malformed AIA URI, even if top-down traversal succeeds? Clean AIAs obviously help bottom-up validators, but validators capable of bottom-up traversal must already defend against AIA wild-goose-chase DoS, e.g. by limiting chase depth. Should we encourage validators to enforce AIA correctness?
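A depth-bounded AIA walk is one such defense. The sketch below is hypothetical: parent_of is a plain dict standing in for actually dereferencing each AIA URI, and it only shows the bounding and loop-detection idea:

```python
def chase_aia(cert_id, parent_of, max_depth=32):
    """Walk AIA pointers upward toward a trust anchor, bounding the number
    of hops to defend against wild-goose-chase DoS.

    parent_of maps a cert to its AIA target (None marks a trust anchor).
    Returns the chain on success, or None on a loop / depth overrun.
    """
    seen = set()
    chain = [cert_id]
    for _ in range(max_depth):
        if cert_id in seen:          # AIA loop: give up
            return None
        seen.add(cert_id)
        nxt = parent_of.get(cert_id)
        if nxt is None:              # reached a trust anchor
            return chain
        cert_id = nxt
        chain.append(cert_id)
    return None                      # depth limit exceeded: give up

parents = {"ee": "ca2", "ca2": "ca1", "ca1": None}
print(chase_aia("ee", parents))      # ['ee', 'ca2', 'ca1']

loop = {"a": "b", "b": "a"}
print(chase_aia("a", loop))          # None
```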

Matt Lepinski and I are currently thinking about how to structure a document that will help RP implementers make informed decisions in these types of cases.

-Andrew

_______________________________________________
sidr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/sidr