In general this is a tricky topic; the linked blog post has helped me in 
the past, but it won't give you a magical number that's always true.
What I have found is that you really need to worry about one thing: the 
number of time series scraped.
With that in mind you can calculate how much memory is needed per time 
series with a simple query: go_memstats_alloc_bytes / 
prometheus_tsdb_head_series.
This gives you the per-time-series memory cost before GC. Unless you set 
a custom GOGC environment variable for your Prometheus instance, you 
usually need to double that to get the RSS memory cost.
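The arithmetic above can be sketched in Python. This is a rough model, not 
an exact formula; the function name and the sample numbers are my own, 
purely illustrative:

```python
# Rough sketch of the per-series memory estimate described above.
# The sample numbers are made up; substitute values from your own
# Prometheus (go_memstats_alloc_bytes and prometheus_tsdb_head_series).

def estimate_rss_bytes(alloc_bytes: float, head_series: float,
                       gogc_percent: int = 100) -> float:
    """Estimate resident memory from the live heap, scaled by GOGC.

    With the default GOGC=100 the heap may roughly double before a
    collection runs, hence the ~2x factor on allocated bytes.
    """
    per_series = alloc_bytes / head_series    # bytes per time series
    growth = 1 + gogc_percent / 100           # 2.0 at the default GOGC=100
    return per_series * head_series * growth  # == alloc_bytes * growth

# Example: 4 GiB allocated across 1,000,000 head series
alloc = 4 * 1024**3
series = 1_000_000
print(f"per-series cost: {alloc / series:.0f} bytes")  # ~4295 bytes
print(f"estimated RSS:   {estimate_rss_bytes(alloc, series) / 1024**3:.1f} GiB")  # 8.0 GiB
```

The point is just that RSS is roughly "allocated heap times GOGC growth 
factor"; everything else in this thread is about what drives the allocated 
heap in the first place.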
Then we need to add other memory costs to the mix, and these are less easy 
to quantify: other parts of Prometheus use memory too, and queries will 
eat more or less memory depending on how complex they are, so it gets 
fuzzier from there. But in general memory usage will scale with (since 
it's mostly driven by) the number of time series you have in Prometheus, 
which is what prometheus_tsdb_head_series tells you.

Now another complication is that all time series stay in memory for a while 
even if you scrape them only once. If you plot prometheus_tsdb_head_series 
over a few hours you should see it drop every now and then: there's 
periodic garbage collection of old series (which you can see in the logs), 
and blocks get written from in-memory data every 2h (by default, AFAIR). 
And this is an important thing to remember: if you have a lot of 
"event-like" metrics that are exported only for a few seconds, for example 
if labels on metrics keep changing all the time because some services put 
things like user IDs or request paths into them, then those series will 
accumulate in memory until a GC or block write happens. Again, 
prometheus_tsdb_head_series will show you that: if it just keeps growing 
all the time, then so will your memory usage.
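To make the churn effect concrete, here is a toy model of the head series 
count. Everything in it (names, rates, the linear-accumulation assumption) 
is hypothetical, and it ignores the exact GC schedule; the point is only 
that short-lived series pile up between cleanups:

```python
# Toy model: churned series (new label combinations that appear and then
# stop being exported) stay in the head block until GC / the next block
# write, so they add to prometheus_tsdb_head_series in the meantime.

def head_series_estimate(stable_series: int,
                         churned_series_per_minute: int,
                         minutes_since_cleanup: int) -> int:
    """Series resident in the head: long-lived ones plus churned ones
    that have not been garbage-collected or flushed to a block yet."""
    return stable_series + churned_series_per_minute * minutes_since_cleanup

# 100k stable series plus 500 new label combinations per minute:
for minutes in (0, 30, 60, 120):
    print(minutes, head_series_estimate(100_000, 500, minutes))
```

So even a modest churn rate can add tens of thousands of series (and the 
corresponding memory) between block writes, which is why a steadily 
climbing prometheus_tsdb_head_series graph is the thing to watch.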

tl;dr: keep an eye on prometheus_tsdb_head_series and you'll see how many 
time series you're able to fit into your instance.
On Wednesday, 25 May 2022 at 08:56:51 UTC+1 [email protected] wrote:

> I am attempting to use the memory calculator formula from 
> https://www.robustperception.io/how-much-ram-does-prometheus-2-x-need-for-cardinality-and-ingestion
> as we have been getting OOM-killed.
>
> Is there a query that will provide the number of unique label pairs and 
> the average bytes per label pair? Looking at the stats and trying to get 
> this data out of our Prometheus has been quite difficult so far.
>
> Thanks!
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
