Answering a response to a statement made as wg chair

wrt:

Brian Dickson [[email protected]] said on Friday, May 11, 2012 3:44 
PM

>On Fri, May 11, 2012 at 2:00 PM, Murphy, Sandra 
><[email protected]> wrote:
>
>
>>Before we get too involved in discussing performance and different 
>>methods of measuring performance, I think it is very important to 
>>address the features of what Brian is suggesting;


>Would you be so kind as to explain why you believe this is important? 

I believe that analyzing performance measures of a system is premature before 
its features are at least well enough established to understand that it meets 
the requirements and does not have systemic architectural drawbacks.

To choose an extreme example, for illustrative purposes only, the NULL 
algorithm has really wonderful performance measures, but would not suit the 
purpose.

Furthermore, it is difficult to know what performance to measure if the system 
is not understood to some degree.

--Sandy, answering as wg co-chair


From: Brian Dickson [[email protected]]
Sent: Friday, May 11, 2012 3:44 PM
To: Murphy, Sandra
Cc: Sriram, Kotikalapudi; [email protected]; sidr wg list ([email protected])
Subject: Re: [sidr] Keys and algorithms for Updates - feasibility analysis? (was Re: RPKI and private keys)


On Fri, May 11, 2012 at 2:00 PM, Murphy, Sandra 
<[email protected]> wrote:


Before we get too involved in discussing performance and different methods of 
measuring performance, I think it is very important to address the features of 
what Brian is suggesting;

Would you be so kind as to explain why you believe this is important? It's not 
as if the WG mailing list is overloaded or the participants are doing much (or 
even responding very much, it would seem).
I was asked about this at the interim meeting, and there were many SIDR 
participants who were not at the meeting (physically or remotely.) My follow-up 
is pretty much required, just to inform the rest of the participants.



We have not had, to the best of my knowledge, any (or much at all if there was 
some) in-SIDR discussion of performance metrics on the proposed mechanisms, 
when implemented in software, or when implemented in hardware.



It has been proposed that a roadmap timeframe of 5-7 years is acceptable, in 
order that vendors provide hardware-based implementations. No justification for 
this has been offered, beyond "well, it is common sense".



The requirements doc explicitly calls out hardware-based solutions as 
acceptable, without any cost analysis or cost/benefit analysis, or absolute 
performance metrics for requirements.



I think the starting point should have been, and in the context of the current 
suggestions still should be, performance numbers based on actual experiments 
using the protocols in question (and alternatives where alternatives have been 
suggested). In fact, the results from such testing should help inform 
discussion of acceptable performance, and those acceptable levels should be 
based on real-world observations of absolute rates.

such as:



what security services are being supplied?

who is involved in the service - where is the service applied and where 
validated?

what is being protected?

what are the components and the architecture?



etc.


This can be addressed, for the moment, in a "Napoleon Dynamite" fashion:
- whatever is needed
- whoever wants to (and what do you mean by "service being validated"?)
- BGP paths vs. threats
- whatever they need to be, and whatever works



The basic thing here is, I'm asking the question, "Why is the heavy-duty crypto 
needed?"
When I use formal logic, I take a set of axioms, and rules based on those 
axioms. Then I take a hypothetical additional piece of logic (a rule or 
assumption) and see what it reduces to, in terms of which axioms or rules are 
violated.



Working backwards, it appears the crypto is needed because all the validation 
is based on information passed in-band.
Without the crypto, anyone can take part of another update, and forge additions 
to it, or changes to it, trivially. (That's the case with vanilla BGP, too.)



The additional assumption is: assume some OOB validation process, e.g. using an 
outside-of-BGP communication path of some sort.



And my question was: if we start the analysis by ignoring the details of the 
OOB system, but merely presume it exists as an adjunct, what are the minimum 
things we can do to tie into that system by adding _something_ to BGP updates, 
and how does that change the performance (and thus cost) of on-router stuff?



The reason it is fair to ignore the OOB element is that, unlike on-router 
stuff, it has the possibility of achieving scaling benefits, since multiple 
routers could conceivably use a shared OOB black box.



By definition, on-router stuff can't achieve scaling benefits of this sort.



And, also by definition, there is no requirement that the OOB component look 
anything like the in-band stuff. The OOB could conceivably be anything under 
the sun, and be implemented by any existing or new technology. It could use 
DNSSEC, it could use IPSEC, it could use LDAP, it could use Kerberos (how, I 
don't know, but it could), it could even use carrier pigeons or SMTP or SNMP or 
quantum encryption.



And systemically, any number of topologies are possible, since they are not 
fundamentally tied to the topology of the BGP infrastructure itself.

--Sandy, speaking as wg co-chair

When you speak as wg co-chair, I think it is important to be clear about the 
rationale for any attempt to moderate on-topic discussion. No offense intended, 
but not providing a rationale looks political to WG participants.



Thanks,
Brian
________________________________________

From: [email protected] [[email protected]] on behalf of Sriram, Kotikalapudi 
[[email protected]]
Sent: Thursday, May 10, 2012 9:05 AM
To: [email protected]
Cc: sidr wg list ([email protected])
Subject: Re: [sidr] Keys and algorithms for Updates - feasibility analysis? (was Re: RPKI and private keys)



Hi Ross,



The 10,000/s number is Brian Dickson's; it is not related to ECDSA (or RSA) 
signing/verification of BGPSEC updates.

In my work on performance modeling of BGPSEC, I have used the basic 
signing/verification measurement data from the eBACS benchmarking effort:
http://bench.cr.yp.to/results-sign.html
The measurement numbers they report are in the same ballpark as yours for RSA 
signing. However, the BGPSEC spec draft specifies ECDSA-P256, which is much 
faster than RSA-2048 for signing.
(Side note: ECDSA-P256 was also preferred because it results in much smaller 
BGPSEC updates, and hence a smaller router RIB memory footprint.
http://www.nist.gov/itl/antd/upload/BGPSEC_RIB_Estimation.pdf )
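
To make the relative speeds concrete, here is a rough microbenchmark sketch in 
Python using the pyca/cryptography package. This is my own illustration, not 
the eBACS harness; absolute rates will vary with CPU and OpenSSL build, but the 
ECDSA-P256 vs RSA-2048 signing gap should be visible anywhere.

```python
# Rough, illustrative comparison of ECDSA-P256 vs RSA-2048 signing rates.
# Not the eBACS methodology -- just a sanity check that P-256 signing is
# much faster than RSA-2048 signing, as the spec choice assumes.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

N = 200
msg = b"\x00" * 100  # stand-in for data covered by a BGPSEC signature

ec_key = ec.generate_private_key(ec.SECP256R1())
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

start = time.perf_counter()
for _ in range(N):
    ec_key.sign(msg, ec.ECDSA(hashes.SHA256()))
ec_rate = N / (time.perf_counter() - start)

start = time.perf_counter()
for _ in range(N):
    rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
rsa_rate = N / (time.perf_counter() - start)

print(f"ECDSA-P256 signing: {ec_rate:,.0f}/s")
print(f"RSA-2048 signing:   {rsa_rate:,.0f}/s")
```

Note that the trade-off runs the other way for verification (RSA verification 
with a small public exponent is cheap), which is why signing and verification 
rates need to be considered separately.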



Regarding how the eBACS measurement data were used to model BGPSEC CPU 
performance, please see:
http://ripe63.ripe.net/presentations/127-111102.ripe-crypto-cost.pdf
(slides 10 and 11 summarize signing/verification speeds for various recent 
Intel and Cavium processors)
or:
http://www.ietf.org/proceedings/83/slides/slides-83-sidr-7.pdf
(slides 7 and 8).



The performance modeling and measurement work is still evolving, and there is 
still a ways to go with respect to prototyping of BGPSEC and measurements with 
actual signed updates, etc.



Sriram



________________________________________

From: [email protected] [[email protected]]
Sent: Thursday, May 10, 2012 4:35 AM
To: Sriram, Kotikalapudi
Cc: Brian Dickson ([email protected]); sidr wg list ([email protected])
Subject: Re: [sidr] Keys and algorithms for Updates - feasibility analysis? (was Re: RPKI and private keys)



Sriram



You can't get 10,000 signature creations and verifications a second on a 
standard Intel core. You can get maybe 100.

Your colleagues who work on smart grid standards have experience of this. The 
IEC working group assumed that all LAN traffic in electricity substations 
could be authenticated by digital signatures. This turned out not to work, and 
caused a major stall in the smart grid security program. Some substation LAN 
traffic has a hard end-to-end latency bound of 4 ms, and that simply can't be 
achieved on standard cores using 1024-bit RSA signatures. You need custom 
hardware, which brings serious export control headaches as well as significant 
costs. We wrote this up in:

 http://www.cl.cam.ac.uk/~sf392/publications/S4-1010.pdf
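
The latency point can be checked with back-of-envelope arithmetic, taking the 
~100 operations per second per core figure quoted above (the rate is an 
illustration from this thread, not a new measurement):

```python
# Back-of-envelope: does one signature operation fit a 4 ms latency budget?
# Uses the ~100 sign/verify operations per second per core figure quoted above.
ops_per_sec = 100                 # quoted per-core throughput
per_op_ms = 1000.0 / ops_per_sec  # time for a single operation, in ms
budget_ms = 4.0                   # hard end-to-end bound for substation LAN traffic

print(f"{per_op_ms:.0f} ms per operation vs a {budget_ms:.0f} ms budget")
assert per_op_ms > budget_ms  # a single signature alone already misses the budget
```

At 100 ops/s, one operation costs 10 ms, so even a single signature exceeds 
the 4 ms end-to-end bound before any other processing is counted.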



Ross




_______________________________________________

sidr mailing list

[email protected]

https://www.ietf.org/mailman/listinfo/sidr
