On Tue, 2007-12-04 at 14:11 +0100, Ivan Voras wrote:
> Robert Watson wrote:
> 
> > Changing
> > locking primitives, as I mentioned in an earlier post, is a risky thing:
> > after all, it intentionally changes the timing for critical kernel data
> > structures in the file system code.  I've given Stephan, the author of
> > the patch, a ping to ask him about this, but late in a release cycle,
> > conservatism is the watch-word.
> 
> Agreed, but it would be a shame to miss out on the momentum 7.0 has
> acquired for performance.  Web servers are so common that there's a
> huge chance one of the first things people will do with 7.0 is run
> some kind of web benchmark, especially after this thread on
> [EMAIL PROTECTED].  Though (as I read the thread) the patch won't bring
> FreeBSD in line with Linux, it will at least help it not be so slow
> it's silly.
> 
> Re: timings: Would looking at past instances give insight into the
> future?  I don't remember exactly when, but in the past, when VFS was
> made MPSAFE and its locking was reengineered, were there problems like
> this?
> 
> Maybe Peter Holm can run a week or so of constant (24-hours-a-day)
> stress testing with the patch to verify it, at least in the short term?
> 

I need to agree with Robert on this one.  At some point you need to stop
fiddling with nits, cut the release, and then fiddle with the nits in
preparation for the next release.  As we get closer to the point where
we think we can actually do the release, RE needs to weigh the benefits
of commit requests against the risks.  One of the biggest factors in our
evaluation of the benefits is whether a request addresses an issue that
completely blocks functionality (due to the bug, the system panics or
otherwise fails to do something it should) or whether it "merely"
improves on something.  The latter we need to consider extremely
carefully, because it's *possible* that the adjustment would introduce
new bugs of the "blocks functionality" form.
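
To make that risk concrete, here's a minimal userland sketch.  It is
entirely hypothetical (not the patch under discussion; the names are
made up), and it uses pthreads rather than kernel locks, but the failure
mode is the same.  A lookup path bumps a statistics counter inside its
critical section.  Under a plain exclusive lock the increment is
accidentally safe, because only one thread is ever inside.  Relax the
same code to a reader-writer lock -- "just a locking primitive change"
-- and concurrent readers now race on the counter:

/* cc -O2 -pthread lockdemo.c -o lockdemo */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8
#define NLOOKUPS 1000000

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static unsigned long hits;	/* was protected only by serialization */

static void *
lookup_loop(void *arg)
{
	(void)arg;
	for (int i = 0; i < NLOOKUPS; i++) {
		pthread_rwlock_rdlock(&lock);	/* was: exclusive lock */
		hits++;		/* safe when serialized, racy with
				   concurrent readers */
		pthread_rwlock_unlock(&lock);
	}
	return (NULL);
}

int
main(void)
{
	pthread_t tid[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, lookup_loop, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	/* Expect NTHREADS * NLOOKUPS; lost updates show up as less. */
	printf("hits = %lu (expected %lu)\n", hits,
	    (unsigned long)NTHREADS * NLOOKUPS);
	return (0);
}

Run it and hits typically comes up short of the expected 8000000 -- a
latent bug that a serialized workload, or a lucky synthetic one, may
never trip.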

And this thread demonstrates to some degree exactly why a week of Peter
Holm's stress testing doesn't leave us with a warm fuzzy feeling that an
adjustment is perfect.  It shows the patch is OK for his synthetic
workload.  But synthetic workloads of various forms showed improvements
in throughput with 7.0 versus 6.3, while other workloads (e.g. the one
that started off this thread...) didn't.  Whether or not 7.0 helps with
people's workloads, one thing is common throughout this thread: nobody
here has been saying the system fails completely (note I said *this*
*thread*... :-).  At *this* phase of a release cycle, RE values that
over people getting improved performance for specific workloads.

-- 
                                                Ken Smith
  From there to here, from here to      |       [EMAIL PROTECTED]
  there, funny things are everywhere.   |
                      - Theodore Geisel |
