On 11/11/2014 12:32 PM, Ryan Sleevi wrote:
> On Tue, November 11, 2014 10:26 am, Nicholas Nethercote wrote:
>> On Mon, Nov 10, 2014 at 7:06 PM, Ryan Sleevi
>> <ryan-mozdevtechcry...@sleevi.com> wrote:
>>> Not to be a pain and discourage someone from hacking on NSS
>> My patches are in the following bugs:
>>
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1094650
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1095307
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1096741
>>
>> I'm happy to hear specific criticisms.
>>
>> Nick
>> --
> Not trying to be a pain, but I don't think that's fair to position like
> that. I'd rather the first question be answered - each one of your patches
> adds more branches and more complexity.

OK, looking at the patches, I would challenge that assertion. The patch in https://bugzilla.mozilla.org/show_bug.cgi?id=1094650 actually simplifies the code. I do agree we need to weigh complexity against reward, but most of these changes don't raise the complexity bar very high, if at all.
> That part is indisputable, even if
> it may be seen as small.
Sorry, I just disputed it. ;)
> However, what measures do we have to ensure that
> this is meaningfully improving any objective measure of performance (other
> than allocation churn, which allocators are exceptionally capable of
> handling)?
I agree that these changes should be targeted based on instrumentation, and I believe Nicholas has provided that data in the bugs.
> How do we ensure this doesn't regress?
I think that depends on the change. Certainly https://bugzilla.mozilla.org/show_bug.cgi?id=1095307 is simple enough that the combination of review and normal testing should give us confidence in the patch's correctness.


I'd say the same for https://bugzilla.mozilla.org/show_bug.cgi?id=1094650 .
The patch in bug https://bugzilla.mozilla.org/show_bug.cgi?id=1096741 seems
more risky, as it delays allocation. So it would require more testing and a
more diligent review, and may actually cross the complexity/benefit bar for
me. (Also, the code in question is meant to avoid cache attacks; we want to
make sure we don't reintroduce a side-channel attack.)

> Otherwise, we're adding
> complexity without any benefits. And I don't think there are - or at
> least, I haven't seen, other than "reduces allocations".
For me the bar is pretty low, particularly since I've seen patches that actually simplify the code. The 'use a static buffer until some threshold, then malloc' pattern is already a well established meme in NSS, and it's particularly safe when we can see its entire use within a single function.

The bigger issue is whether we're getting proper bang for the buck. Here we have an NSS user who is admittedly tuning for his particular workload, but doing real performance tuning: he used instrumented tools to find the largest offenders under the client's workload, then knocked out the low-hanging fruit. That's pretty classic optimization strategy.

bob


-- 
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto
