In most cases the WAL growth graph is fairly linear, as long as you are not 
adding exporters dynamically. So you can measure the increase per hour and 
extrapolate from that.
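A rough way to get that per-hour number is to sample the size of the WAL 
directory twice and take the difference. A minimal sketch (the helper 
function name and the data/wal path are my own illustration, assuming the 
default --storage.tsdb.path of ./data):

```shell
# Hypothetical helper: estimate WAL growth in MiB/hour from two
# byte-size samples taken interval_sec seconds apart.
wal_growth_mib_per_hour() {
  local before=$1 after=$2 interval_sec=$3
  echo $(( (after - before) * 3600 / interval_sec / 1024 / 1024 ))
}

# Usage sketch: sample the WAL directory size an hour apart, e.g.
#   before=$(du -sb data/wal | cut -f1)
#   sleep 3600
#   after=$(du -sb data/wal | cut -f1)
# then:
wal_growth_mib_per_hour 1073741824 2147483648 3600   # grew 1 GiB in 1h -> 1024
```

Run that over a few representative hours (scrape load varies), and you have 
a headroom figure to budget the volume against.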
Also, this article may be somewhat dated for current versions, but it gives 
some good pointers on what causes the WAL to grow: 
https://www.robustperception.io/how-much-ram-does-prometheus-2-x-need-for-cardinality-and-ingestion

To reduce it you can decrease the retention time. Prometheus runs 
compaction jobs that compact WAL data older than 1/10 of the retention time 
(so with 15d retention, that is data older than roughly 1.5 days).
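For reference, that is just the standard retention flag; the value here is 
illustrative, not a recommendation:

```shell
# Cap time-based retention; per the rule of thumb above, WAL data
# older than ~1/10 of this (here ~1.5 days) gets compacted into blocks.
prometheus --storage.tsdb.retention.time=15d
```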

Another workaround is to play with the --storage.tsdb.min-block-duration 
and --storage.tsdb.max-block-duration options.
Example: --storage.tsdb.min-block-duration=30m 
--storage.tsdb.max-block-duration=2h will run compactions every 
30 minutes for WAL data older than 2 hours, and also consumes fewer 
resources because of the smaller blocks.
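Putting the example above on one command line (same values as in the text; 
see the warning below before doing this in production):

```shell
# Shrink the block/WAL window: compact every 30m, keep at most 2h in the WAL.
prometheus \
  --storage.tsdb.min-block-duration=30m \
  --storage.tsdb.max-block-duration=2h
```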

NOTE: although I see more and more people tweaking these options, the 
Prometheus developers strongly recommend against using them: they exist for 
internal performance testing, and non-default block sizes can cause 
performance issues.



On Friday, February 21, 2020 at 1:45:30 PM UTC-8, Florin Andrei wrote:
>
> To store Prometheus data, I have a volume with a given, fixed size. I need 
> to make sure usage level stays below 100%.
>
> I've already set storage.tsdb.retention.time, but this is not always very 
> effective. Sometimes large data spikes will fill up the volume. I cannot 
> reduce the retention time too much.
>
> I want to also use storage.tsdb.retention.size. But the problem is, this 
> doesn't take into account the WAL file size.
>
> Is there a way to estimate the size of WAL data? Are there any guidelines 
> for fitting the retention size in a given volume size?
>
> Thanks!
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/908c032b-70d9-42ba-be99-b3d5c17a17e0%40googlegroups.com.
