With regard to rescheduling and latency:

For ats_pagespeed, changing the default of the fast_signal argument in
I_EThread::schedule from false to true reduced latency when scheduling an
event from an external thread pool, by up to 5 ms per scheduled operation.
This ticket [1] documents that, and could serve as a starting point for
discussion. It may be worth making this hard-coded default configurable.
I'm not 100% sure that setting fast_signal to true would be a good idea in
general, as there is probably a cost to it. It worked fine for us, though,
and given ats_pagespeed's goal (leaning towards favoring user agents) I
think it can sometimes be beneficial.

[1] https://issues.apache.org/jira/browse/TS-2992

Otto


2015-03-11 15:12 GMT+01:00 Brian Rectanus <brect...@gmail.com>:

> On Wednesday, March 11, 2015, Nick Kew <n...@apache.org> wrote:
>
> > On Wed, 2015-03-11 at 03:10 -0500, Brian Rectanus wrote:
> > > I certainly have some cycles if someone can help point me in the right
> > > direction. Right now I am at a loss as to where to dig. Profiler just
> > shows
> > > a lot of nothing (epoll) happening as you wrote.
> >
> > Hmmm.  Is this with or without asynchronous notifications?
>
>
> > This is with Master, not the new experimental code.
>
>
> >
> > I've been working on an experiment running Ironbee asynchronously
> > using a thread pool similar to the demo plugin.  The issue that
> > concerns me most is that this approach requires rendezvous,
> > where the plugin needs to wait on notifications to determine
> > Continue vs Error in TxnReenable.
> >
> > I thought that might improve throughput despite potentially
> > blocking, because the txn thread is released in a cond_wait.
> > But I'd prefer a logic where I could proceed directly to
> > TxnReenable with neither Continue nor Error, but rather an
> > EAGAIN-like status and a same-phase callback to determine
> > Continue/Error when the rendezvous's condition is satisfied.
> > Does that make any sense?
> >
> > --
> > Nick Kew
> >
> >
>
> --
> Brian Rectanus
>
