The Prometheus solution for this is to pre-aggregate away the per-instance
labels using recording rules, and then only pass on (federate or
remote-write) the recorded metrics, not the raw originals. This way you
have better control over the correctness of the aggregation, and you don't
need to impose an artificial ordering/replacement relationship.
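
For example, here is an untested sketch; the metric name, the "pod" and
"instance" labels, and the InfluxDB URL are placeholders for whatever your
setup actually uses. A recording rule (in the 2.0 rule-file syntax) plus a
remote_write filter could look like this:

  # rules file: aggregate away the per-pod labels before anything leaves Prometheus
  groups:
    - name: pod_aggregation
      rules:
        - record: job:http_requests_total:sum
          expr: sum without (pod, instance) (http_requests_total)

  # prometheus.yml fragment: only forward the recorded series to InfluxDB
  remote_write:
    - url: "http://influxdb:8086/api/v1/prom/write?db=prometheus"
      write_relabel_configs:
        # recorded series follow the level:metric:operation naming convention,
        # so keeping only names that contain a colon drops the raw per-pod series
        - source_labels: [__name__]
          regex: '.+:.+'
          action: keep

The same match expression works for federation (pass {__name__=~".+:.+"} as
a match[] parameter on /federate) if you prefer to pull rather than push.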

/MR

On Wed, Sep 13, 2017 at 1:17 PM <sergei.storozhe...@gmail.com> wrote:

> To make things clearer: this is not a problem for live monitoring with a
> short retention time. The problem we are facing is long-term trend
> analysis. For this, we export all Prometheus data to InfluxDB. With long
> retention times this becomes a real problem, because the time series
> cardinality grows without bound, and so does memory usage, since InfluxDB
> keeps an in-memory index of all tags. In two weeks on a 30 GB RAM VM, the
> memory usage grew from 47% to 75%.
> On Tuesday, September 12, 2017 at 12:36:43 PM UTC+2, Matthias Rampke wrote:
> > A rescheduled pod is a new pod, and there is no logical continuity with
> > any specific predecessor. Prometheus 1.x was designed with this in mind;
> > one of the main motivations for developing it was the difficulty we
> > (SoundCloud) had with per-instance time series in Graphite. However, it's
> > not perfect: frequently churning time series still come at a cost.
> > Prometheus 2.0 (currently in beta) will improve this significantly.
> >
> >
> > I would caution against micro-optimising for the frequency of time
> > series changes. If pods live on the order of hours, you are well within
> > what Prometheus 1.x supports; it just needs a bit more resources to deal
> > with it. If this doesn't work for you, try whether the Prometheus 2.x
> > beta does.
> >
> >
> > /MR
> >
> >
> > PS: cAdvisor/kubelet do sometimes create per-container metrics that vary
> across restarts, or did in the past. You can try using relabelling to
> reduce these, but you risk causing duplicate time series where the old and
> new container overlap, and that's not good either. I tried this and rolled
> it back for this reason.
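> >
> > Roughly, what I tried looked something like this (a sketch only; the "id"
> > and "name" labels are the cAdvisor ones and may differ in your setup):
> >
> >   # under the cAdvisor scrape job in prometheus.yml
> >   metric_relabel_configs:
> >     # drop the per-container labels that change on every restart
> >     - action: labeldrop
> >       regex: '(id|name)'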
> >
> > On Tue, Sep 12, 2017 at 6:07 AM <sergei.st...@gmail.com> wrote:
> > Can you elaborate on this, please?
> >
> > On Tuesday, September 12, 2017 at 12:24:28 AM UTC+2, Tim Hockin wrote:
> >
> > > When it is rescheduled, it very likely ends up on a different Node.
> >
> > > If you want to erase that info, you'll need to track ordinals yourself
> >
> > > (via templating or via an ID service) or use StatefulSet.
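> >
> > > For reference, a minimal StatefulSet (all names below are placeholders)
> > > gives each pod a stable ordinal name (myapp-0, myapp-1, ...) that
> > > survives rescheduling:
> >
> > >   apiVersion: apps/v1beta1   # the StatefulSet version served by 1.7/1.8-era clusters
> > >   kind: StatefulSet
> > >   metadata:
> > >     name: myapp
> > >   spec:
> > >     serviceName: myapp       # headless Service that gives pods stable DNS names
> > >     replicas: 2
> > >     template:
> > >       metadata:
> > >         labels:
> > >           app: myapp
> > >       spec:
> > >         containers:
> > >           - name: myapp
> > >             image: myapp:1.0 # placeholder image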
> >
> > >
> >
> > > On Mon, Sep 11, 2017 at 1:03 PM,  <sergei.st...@gmail.com> wrote:
> >
> > > > Indeed, restarts do not create new series. However, we don't want a
> > > > new series even when the pods get rescheduled.
> >
> > > > On Monday, September 11, 2017 at 9:41:01 PM UTC+2, Tim Hockin wrote:
> >
> > > >> Pod restarts should not create new series. Only if they get
> > > >> rescheduled, as in a rolling update. In that case they ARE different.
> >
> > > >>
> >
> > > >>
> >
> > > >>
> >
> > > >> On Sep 6, 2017 2:06 AM,  <sergei.st...@gmail.com> wrote:
> >
> > > >> Is there any way to create an ordinal index of pods in a normal
> > > >> ReplicaSet, similar to a StatefulSet?
> >
> > > >>
> >
> > > >> From a functional point of view this doesn't make much sense, but it
> > > >> could be really useful for our monitoring solution. We use Prometheus,
> > > >> which scrapes metrics from the Kubernetes cluster and sends them to
> > > >> InfluxDB. Each newly created pod gets a unique name with a random
> > > >> suffix appended. The problem is that each time a new pod of the same
> > > >> replica is created, a new time series is created. This increases the
> > > >> metrics cardinality, reducing performance and increasing the memory
> > > >> footprint. It also results in a new line in the dashboard every time,
> > > >> instead of continuing the same line.
> >
> > > >>
> >
> > > >> For example, suppose we have a ReplicaSet with 2 replicas and each
> > > >> replica has had 5 restarts. This will result in 10 separate time
> > > >> series and 10 lines in a monitoring dashboard, while with an ordinal
> > > >> index in place there will be just 2.
> >
> > > >>
> >
> > > >> When dealing with very long time intervals, hundreds or thousands of
> > > >> time series will be created for the same instance instead of one.
> >
> > > >>
> >
> > > >> Any suggestions are greatly appreciated.
