Thu, 21 Nov 2019 at 15:18, William Dauchy <w.dau...@criteo.com>:

> On Thu, Nov 21, 2019 at 03:09:30PM +0500, Илья Шипицин wrote:
> > I understand. However, those patches add complexity (which might be moved
> > to another dedicated tool)
> those patches make sense for heavy haproxy instances. As you might have
> seen above, we are talking about > 130MB of data. So a full scrape
> every 60s or less is not realistic. Even loading the data might
> take too much time. We had cases where loading the data on the exporter
> side took more time than the scrape interval, generating a
> snowball effect.
> It's good practice to avoid exporting data you know
> you won't use, instead of: scrape -> delete -> aggregate.
> In our case we do not use most of the server metrics. They represent a
> factor of 10 in terms of exported data.

Yep, I did see the 130MB and I was impressed.
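
As a workaround until filtering is available in haproxy itself, the unused
per-server series can be dropped on the Prometheus side with
metric_relabel_configs. A sketch (the job name, target address, and the
haproxy_server_ metric prefix are assumptions based on common exporter
naming); note this is exactly the scrape -> delete pattern criticised above,
since the full payload is still generated and transferred:

```yaml
scrape_configs:
  - job_name: haproxy                      # hypothetical job name
    scrape_interval: 60s
    static_configs:
      - targets: ['haproxy.example.com:8404']  # hypothetical target
    metric_relabel_configs:
      # Drop per-server series before ingestion. The exporter still
      # produces the full ~130MB response, so this only saves storage,
      # not scrape time.
      - source_labels: [__name__]
        regex: 'haproxy_server_.*'
        action: drop
```

Filtering at the source, as the patches do, avoids that transfer cost entirely.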

> --
> William
