The standard approach for larger setups is to start sharding Prometheus. In
Kubernetes it's common to run a Prometheus per namespace.
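As a rough sketch of what that looks like, each shard's config only discovers
pods in its own namespace(s) — names here are made up, and you'd repeat a
similar job per shard:

```yaml
# Hypothetical scrape config for one per-namespace Prometheus shard.
scrape_configs:
  - job_name: kubernetes-pods-team-a   # example job name
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - team-a                   # this shard only scrapes pods in team-a
```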

You may also want to look into how many metrics each of your pods is
exposing. 20GB of memory suggests you probably have over 1M active series;
check prometheus_tsdb_head_series on each server.
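You can check this against each server directly:

```promql
# Number of active series currently in the head (in-memory) block
prometheus_tsdb_head_series
```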

Changing the scrape interval is probably not going to help as much as
reducing your cardinality per Prometheus.
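One way to see where the cardinality is coming from is a topk over metric
names — note this query can be expensive on a large server:

```promql
# Top 10 metric names by active series count
topk(10, count by (__name__) ({__name__=~".+"}))
```

Once you know the worst offenders, you can drop them at scrape time with
metric_relabel_configs on the relevant jobs.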

For example, we have a couple of different shards. One is using 33GB of memory
to manage 1.5M series; the other is using 38GB for 2.5M series. We allocate
64GB memory instances for these servers.
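As a rough sanity check, that works out to around 15-22KB of resident memory
per series (38GB / 2.5M and 33GB / 1.5M respectively). You can estimate the
same ratio for your own servers in PromQL, assuming each Prometheus scrapes
itself under a job named "prometheus" (adjust the selector to match your
setup):

```promql
# Approximate resident bytes per active head series on this server
process_resident_memory_bytes{job="prometheus"}
  / prometheus_tsdb_head_series{job="prometheus"}
```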

If you don't want to go down the sharding route, you'll likely need some
larger nodes to run Prometheus on.

On Wed, Jun 17, 2020 at 9:48 AM Tomer Leibovich <[email protected]>
wrote:

> Thanks, so if I cannot reduce the amount of pods, it’s better to change
> the scraper interval from default of 30s to 60s?
>
> --
> You received this message because you are subscribed to the Google Groups
> "Prometheus Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/prometheus-users/71fc37fc-4e4f-4a14-9fdb-67ef49e5f661o%40googlegroups.com
> .
>
