Hi Marcin,

On Thu, May 14, 2020 at 01:23:50PM +0200, Marcin Deranek wrote:
> > The current ctime reports the avg time experienced by the
> > last 1024 *requests* and is documented as such, so when you want to think
> > in terms of user experience it's the one to consider. For example, if you
> > have a 99% reuse rate and 1% connect rate, even a DC far away at 100ms
> > will only add 1ms on average to the request time, because 99% of the time,
> > the connect time really is zero for the request. Ditto if you're using
> > the cache. Your timer reports the average connect time per *connection*,
> > and there it's much more suitable to analyse the infrastructure's impact
> > on performance. Both are equally useful but do not report the same metric.
> >
> It was not my intention to change that. In my mind they both report the
> very same thing, with one major difference: the current implementation
> produces misleading results until it has processed at least
> TIME_STATS_SAMPLES samples (it always assumes TIME_STATS_SAMPLES samples
> have already been processed), whereas the new implementation dynamically
> scales the n value depending on the number of samples processed so far.

Yes, I agree that this part is different, and for the time measurements your
approach is better. I think my response was confusing when I spoke about
the additional metrics. What I'd like to see for ctime etc. is:
  - an average over the number of samples like you did, but still per
    request;
  - possibly new sets of per-connection metrics if these have some value
    to you (I guess they do).
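
To make sure we mean the same thing by "dynamically scales n", here is
roughly how I picture it; this is only a sketch with invented names, not
your actual patch:

```c
/* Illustrative sketch only -- invented names, not the actual patch nor
 * HAProxy's API. The average is exact while the window fills, then
 * degrades into a classic exponential moving average once it is full.
 */
#define TIME_STATS_SAMPLES 512

struct dyn_swrate {
	unsigned int sum;   /* running (scaled) sum of samples */
	unsigned int seen;  /* samples processed, capped at the window size */
};

static unsigned int dyn_swrate_add(struct dyn_swrate *s, unsigned int v)
{
	if (s->seen < TIME_STATS_SAMPLES) {
		/* window not full yet: keep an exact sum and divide by the
		 * real sample count instead of assuming a full window.
		 */
		s->seen++;
		s->sum += v;
	} else {
		/* full window: shed one rounded average share per sample */
		s->sum += v - (s->sum + TIME_STATS_SAMPLES / 2) / TIME_STATS_SAMPLES;
	}
	return s->sum / s->seen; /* current average */
}
```

The point is that the reported average stays exact while the window fills,
instead of dividing by an assumed-full window from the very first sample.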

> In fact, up to TIME_STATS_SAMPLES
> samples it should produce an exact moving average. The new implementation
> takes the reuse logic into account too, but after a reload there are no
> connections to be reused, which is why the latency sharply goes up (I
> actually looked at the timings reported in the log entries to confirm
> that). ctime, ttime etc. are also produced in TCP mode (which makes sense
> in my mind), so the documentation might not be accurate on this point (I
> just tested it).

Yes, it's still accurate in that, historically, when we didn't support
keep-alive, both TCP and HTTP had the concept of a request, since the
difference between the request and the connection was blurry. In fact there
was always one request/response set instantiated for TCP, as you can see
for example when you use some "tcp-request content accept ..." rules.

What will happen with two sets of metrics is that both will be exactly
identical in TCP mode. But it's precisely because HTTP makes a difference
that we want to split them!

> > I'd be in favor of creating a new one for yours. Anyway we're still missing
> > a number of other ones like the average TLS handshake cost on each side,
> > which should also be useful both per connection and per request. I'm saying
> > this in case it helps you figure out a naming pattern for this one :-)
> >
> I don't mind creating additional metrics if they make sense of course. One
> of them (which could help to avoid exceptions) would be req_tot for a
> server (it's already available for backends).

Wow, I thought we already had it. When I say that we really need to invest
time digging into the existing metrics, I'm not kidding. Yes sure, feel free
to add this one as well.

> To dynamically scale n, req_tot is
> much more suitable than lbconn for http mode (lbconn is incremented much
> earlier than req_tot, creating a time window during which metrics can be
> incorrectly calculated, leading to wrong averages).

It's not the worst issue; the worst issue is that lbconn is only incremented
on load-balanced connections: as soon as you have cookie-based stickiness or
use-server rules, lbconn doesn't move.

I've just looked at the stats page and I'm seeing a "Cum HTTP responses"
field for the servers in the tooltip, which is accurately incremented. I
just don't know what counter it is; I would have thought it's the request
counter.

> Of course, if you still think it's a good idea to separate them
> (personally I'm not convinced about this, unless you don't want to change
> the behaviour of the existing metric, etc.) I can make new metrics,
> e.g. ctime_avg.

Just to be sure we're talking about the same thing, I'm convinced about the
benefit of splitting such averages per-request and per-connection, and of
changing them to use your sliding average.

> Keep in mind that I already decided (as you suggested) to create new 
> functions.


> What is your take on not exposing the metric if there is no data? There
> were no requests and we don't know what the connect time is. Shall we
> report 0 or nothing (an empty value)?

I understand your point, but I'm not fond of this because it differs from
what we're doing everywhere else, and it requires one extra annoying check
for those who don't care and just want to graph the values from scripts,
while conversely those who do care already have the information required to
decide not to report it. The difference is that for some people there will
be a zero instead of a gray area on a graph; for others, a previously
working script will suddenly report errors when it tries to multiply empty
fields.

I wouldn't suggest reporting a negative value either, as we used to do for
timers or status codes, because for the same reason it caused quite some
confusion (e.g. when subtracting timers to determine intermediary steps).

> > > - I haven't changed a similar swrate_add_scaled function as it's not used
> > > yet and the description feels a bit misleading to me.
> >
> > So I'd rather have a different set of functions for this (and we'd update
> > the stats code to use these). The reason is that these ones were made to
> > be usable at extremely low cost in the lower layers. For example we do use
> > them in the activity measurements that are at the heart of the polling loop
> > and I also have a pending patch to use them in pool_free() under a thread
> > lock where we definitely don't want to spend our time adding an integer
> > divide when the current code only requires one addition and a right shift.
> > Because these core operations can easily be called billions of times a
> > day, adding extra instructions only to improve the accuracy of the
> > measurement during the very first millisecond would be a waste.
> >
> I see your point. I tried to optimize them as much as I could, but of
> course there's a limit somewhere...
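
To illustrate the cost constraint being discussed: when the window size is a
power of two, the whole per-sample update boils down to a subtract, a shift
and an add. This is only a sketch with made-up names, not the exact code
from freq_ctr.h:

```c
/* Sketch with made-up names -- not the exact code from the tree.
 * With a power-of-two window the decay is a shift, so the per-sample
 * cost is one subtract, one shift and one add: no division at all.
 */
#define SWRATE_SHIFT 9                    /* window of 512 samples */

static inline unsigned int swrate_add_fast(unsigned int *sum, unsigned int v)
{
	/* <sum> approximates window * average; shed 1/window of it,
	 * then add the new sample.
	 */
	return *sum = *sum - (*sum >> SWRATE_SHIFT) + v;
}

static inline unsigned int swrate_avg_fast(unsigned int sum)
{
	return sum >> SWRATE_SHIFT;       /* average = sum / window */
}
```

With a divide instead of the shift, this hot path would pay for a hardware
division on every sample, which is exactly what we want to avoid in the
polling loop or under a pool lock.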


> > It's even possible to further improve the accuracy by only adding n/2:
> >
> >                 new_sum = old_sum + v - (all ? (old_sum + n / 2) / n : 0);
> >
> That depends on whether we want to just round it or round it up.

I'm pretty sure I tried both and found that rounding up was better. I think
it's related to the loss of precision caused by exponentially decaying the
old value, which would disappear after the second sample if only rounded.
But I could be wrong, that's a bit old now. I'd suggest proceeding with
whichever of the two you find more accurate.
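
For what it's worth, the two variants we're comparing look like this side by
side; a toy sketch with n = 8 rather than the real window size:

```c
/* Toy sketch of the two rounding strategies under discussion; n = 8
 * keeps the arithmetic easy to follow. Not the actual tree code.
 */
#define N 8

static unsigned int ewma_trunc(unsigned int sum, unsigned int v)
{
	return sum + v - sum / N;           /* truncating decay */
}

static unsigned int ewma_round(unsigned int sum, unsigned int v)
{
	return sum + v - (sum + N / 2) / N; /* round-to-nearest decay */
}
```

With a constant input both eventually settle on the exact average; the
differences only show up on varying inputs, so measuring both on real data
really is the way to settle it.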

> > I'd also ask you to split your patch in 2 :
> >
> >   - one which improves the freq_ctr with the new function(s)
> >   - one which adds the new metric, its update and reporting to the
> >     various call places.
> >
> > In addition, I think there will be less changes with a separate metric.
> >
> Will do. Thank you for the review.

You're welcome, thanks for this!

