Hi Damien,

On Mon, Oct 28, 2019 at 02:50:12PM +0000, Damien Claisse wrote:
> +  <unit> is facultative, and can be set to "s" for seconds, "ms" for
> +  milliseconds or "us" for microseconds. Default unit is seconds.
> +  If <unit> is set, microseconds or milliseconds of current date sample are
> +  appended to timestamp. This is useful when a time resolution of less than
> +  a second is needed.

I still have a doubt reading this: it makes me think that we're
returning a fractional number, which is obviously not the case. I think
it's easier to explain that the date will always be an integer reflecting
the number of seconds, milliseconds, or microseconds since the epoch.

This also makes me think that modifying the http_date() converter to also
take such an optional unit could be really nice :-)

> +static int smp_check_date_unit(struct arg *args, char **err)
> +{
> +        if (args[1].type == ARGT_STR) {
> +                if (strcmp(args[1].data.str.area, "s") == 0) {
> +                        free(args[1].data.str.area);
> +                        args[1].type = ARGT_SINT;
> +                        args[1].data.sint = TIME_UNIT_S;

Please assign NULL to args[1].data.str.area after freeing it; I really
hate finding dangling pointers in core dumps.

> - * of args[0] seconds.
> + * of args[0] seconds. Add milli/microseconds if asked to in args[1].

I'd change this to "report in milli/microseconds" rather than "add", since
it doesn't really add: it multiplies first, so it's a change of scale.

>  static int
>  smp_fetch_date(const struct arg *args, struct sample *smp, const char *kw, void *private)
> @@ -2952,6 +2989,17 @@ smp_fetch_date(const struct arg *args, struct sample *smp, const char *kw, void
>       if (args && args[0].type == ARGT_SINT)
>               smp->data.u.sint += args[0].data.sint;
>  
> +     /* add milliseconds to timestamp if needed */
> +     if (args[1].type == ARGT_SINT && args[1].data.sint == TIME_UNIT_MS) {
> +             smp->data.u.sint *= 1000;
> +             smp->data.u.sint += (date.tv_usec + 500) / 1000;

You have an issue here. You really must not round up at all, or your date
will evolve in a sawtooth pattern. If you emit both the millisecond and the
microsecond timestamps on the same log line, you'll find stuff like:

    1572333053.123 1572333053.123451
    1572333053.124 1572333053.123532

Timestamps must never ever be rounded up or they will cause sorting issues
and processing issues down the chain.

Otherwise OK for me.

Thanks,
Willy
