Happy New Year!
I have given it (version 14) a complete read. Thanks, Matt, once again for all
your efforts.
My comments are split over two posts.
This is the first post that seeks to clarify a technical point.
My second post will have comments/suggestions to help improve
the clarity/presentation in the document, and also lists some typos and nits.
The following were the three possibilities for a protocol change to address the
problem that David identified:
1. Signature protects the Target ASN, {ASN, pCounts, Flags} of the signer,
Previous Secure_Path, and {AFI, SAFI, NLRI}.
2. Signature protects all of the data in #1 and the most recently added
Signature_Segment
(i.e. that of the eBGPsec speaker from whom the update was received).
3. Signature protects all of the data in #1 and the Previous Signature_Block
(i.e. all previous Signature_Segments).
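The three options differ only in how much previously received signature data is fed into the hash. A minimal sketch of that difference (field contents and sizes are illustrative placeholders, not the exact BGPsec wire encoding):

```python
import hashlib

def hash_input(common_fields, prev_sig_segments, option):
    """Build the data to be hashed under each option.

    common_fields: bytes covering the Target ASN, the signer's
        {ASN, pCounts, Flags}, the previous Secure_Path, and
        {AFI, SAFI, NLRI} -- i.e. the data protected under option #1.
    prev_sig_segments: list of previously added Signature_Segments
        (each as bytes), ordered oldest first.
    """
    data = common_fields
    if option == 2 and prev_sig_segments:
        # Option #2: also cover the most recently added Signature_Segment.
        data += prev_sig_segments[-1]
    elif option == 3:
        # Option #3: also cover the whole previous Signature_Block.
        data += b"".join(prev_sig_segments)
    return data

def sign_digest(common_fields, prev_sig_segments, option):
    # The SHA-256 digest that would then be signed with ECDSA-P256.
    return hashlib.sha256(
        hash_input(common_fields, prev_sig_segments, option)).digest()
```

With 94-octet Signature_Segments, the hash input for #2 grows by a fixed 94 octets, while for #3 it grows by 94 octets per previous AS in the path.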
David initially proposed #1.
http://www.ietf.org/mail-archive/web/sidr/current/msg07258.html
Following on that, Rob made a case for #2.
http://www.ietf.org/mail-archive/web/sidr/current/msg07261.html
I don’t think #3 was discussed in that thread; nevertheless, it is a candidate
solution.
#3 is what Matt has included in the revised draft.
The relative merits of #2 and #3 are worth a little discussion.
One thing to note is that an ECDSA-P256 signature is about 72 octets long
(including DER (ASN.1) encoding overhead), plus 22 octets for the SKI and
Sig Length fields. So a total of 94 octets of additional data (per signature)
is to be included in the hashed data at BGPsec routers.
In the case of #2, the *additional data* to be hashed would be a constant 94
octets (independent of the AS path length), while in the case of #3 it would
be variable: (previous AS path length) x 94 octets. So, for example, #3 hashes
940 octets more than #2 if the previous AS path length is 10.
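As a back-of-the-envelope check, the extra hashed octets under each option (using the 94-octet per-segment figure from above) work out as:

```python
# ECDSA-P256 signature (~72 octets with DER overhead) + 22 octets
# for the SKI and Sig Length fields.
SIG_SEGMENT_OCTETS = 72 + 22  # = 94

def extra_hashed_octets(prev_path_len, option):
    """Additional Signature_Segment octets hashed beyond the option-#1 data."""
    if option == 2:
        # Only the most recent segment, regardless of path length.
        return SIG_SEGMENT_OCTETS if prev_path_len > 0 else 0
    if option == 3:
        # Every previous segment in the Signature_Block.
        return prev_path_len * SIG_SEGMENT_OCTETS
    return 0  # option 1: no previous signature data hashed

# For a previous AS path length of 10: #2 adds 94 octets, #3 adds 940.
```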
Is this of concern? One helpful point is that the Signature_Segments to be
added to the hashed data are adjacent in the update, which lowers the number
of CPU cycles spent marshaling the data for hashing under #3.
We have preliminary measurement data which shows that the performance for
SHA-256 hash operations on Intel NUC 3GHz (single threading) is as follows:
(we will be conducting this study also on a 3.5 GHz server with a lot more RAM):
Hash input data size | Time per SHA-256 hash op | Hash operation speed
          50 octets  |   0.34 usec              |  2,923,662 hash ops/s
         100 octets  |   0.58 usec              |  1,716,556 hash ops/s
         500 octets  |   2.02 usec              |    493,969 hash ops/s
        1000 octets  |   3.93 usec              |    254,462 hash ops/s
        5000 octets  |  17.21 usec              |     58,109 hash ops/s
(Averaged over 1000 iterations; usec = microseconds)
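A measurement of this kind can be approximated with a simple timing loop over hashlib's SHA-256 (absolute numbers will of course differ by CPU and implementation; this is a sketch, not our measurement harness):

```python
import hashlib
import time

def sha256_ops_per_sec(size, iterations=1000):
    """Average SHA-256 hash operations per second for a given input size."""
    data = b"\x00" * size
    start = time.perf_counter()
    for _ in range(iterations):
        hashlib.sha256(data).digest()
    elapsed = time.perf_counter() - start
    return iterations / elapsed

for size in (50, 100, 500, 1000, 5000):
    print(f"{size:5d} octets: {sha256_ops_per_sec(size):,.0f} hash ops/s")
```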
For a long previous AS path length of 10, the hashed data size will be in the
300-octet ballpark with the choice of #2, and around 1200 octets with the
choice of #3. So, to me it looks like choice #3 does not carry a significant
penalty over choice #2 in terms of the performance of hashing operations
for the range of hashed data sizes of interest to us.
It is true that the hash processing time more than doubles for #3 compared to
#2 in the hash data size ranges of interest. However, since all other BGPsec
update processing is likely to dominate the hash processing time, the concern
about hash performance may not be significant when comparing #3 with #2.
The assumption that the multiple Signature_Segments are adjacent for hashing
is important for #3, and should (hopefully) hold in implementations.
Your thoughts/inputs on the merits/trade-offs of #2 vs. #3? Please share them.
Thank you.
Sriram
_______________________________________________
sidr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/sidr