On 27 Feb 06:14, Paul van der Linden wrote:
> Thanks, I completely missed that option. The error doesn't disappear though;
> does it mean this error keeps happening until the ingested metrics with the
> dropped labels get deleted because of retention?

I don't know what the cause of the original issue is. It does not sound
like it comes from the relabel config you have shown, because there would
still be a 'job' label.
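
For reference, a relabel rule of the kind under discussion here (dropping the
per-pod labels at ingest time) usually has roughly this shape. This is only a
sketch of the general form; the job name and the service-discovery part are
made up, it is not your actual config:

    scrape_configs:
      - job_name: my-app                 # made-up job name for the example
        kubernetes_sd_configs:
          - role: pod
        metric_relabel_configs:
          # Drop the per-pod labels from every scraped sample before ingestion.
          - action: labeldrop
            regex: pod|instance

After a rule like that the series would indeed still carry the 'job' label,
which is why it does not explain an empty label set in the error.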

> 
> On Thursday, 27 February 2020 14:51:31 UTC+1, Julien Pivotto wrote:
> >
> > On 27 Feb 05:48, Paul van der Linden wrote: 
> > > The label set in the error is empty. There are still a bunch of labels
> > > on the metric though. The metrics don't have timestamps; they are just
> > > generated every time Prometheus scrapes them.
> > >
> > > How do I improve the alert rule to ignore these? Looking at the docs, is
> > > the only way to wrap these in two label_replace calls, or otherwise to
> > > sum and specify every relevant label?
> >
> > You can use `without(pod, instance)` as well. 
> >
> > > 
> > > On Thursday, 27 February 2020 14:43:05 UTC+1, Julien Pivotto wrote: 
> > > > 
> > > > On 27 Feb 05:33, Paul van der Linden wrote: 
> > > > > I currently have a pod running in Kubernetes exporting some metrics
> > > > > to Prometheus. If we have alerts on them, they will be retriggered
> > > > > (resolved & triggered shortly after each other) every time I update
> > > > > the software, as the pod would have a different name/IP. I came to
> > > > > the conclusion that this is because of the unique pod and instance
> > > > > labels on these metrics. I have added a config to drop these labels,
> > > > > and while it solves the issue, it seems to cause the error "persist
> > > > > head block: write compaction: add series: out-of-order series added
> > > > > with label set". What is the correct way to solve this?
> > > > 
> > > > Is the label set in the error empty? Are you exposing metrics with 
> > > > timestamps? 
> > > > 
> > > > The correct way to deal with this would probably be to ingest with the
> > > > instance and pod name and improve your alert rule to ignore that at
> > > > this point.
> > > > 
> > > > Thanks 
> > > > 
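
To illustrate the `without(pod, instance)` suggestion quoted above: keeping
the pod and instance labels at ingest and aggregating them away in the alert
expression could look roughly like this (a minimal sketch; the metric name,
threshold and durations are made up):

    groups:
      - name: example
        rules:
          - alert: MyAppErrorRateHigh    # made-up alert name
            # Aggregate away the per-pod labels so that a redeployed pod does
            # not change the alert's label set.
            expr: 'sum without(pod, instance) (rate(my_app_errors_total[5m])) > 0.1'
            for: 10m

Because the resulting series has a stable label set across deployments, the
alert should no longer resolve and re-fire every time the pod name or IP
changes.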


-- 
 (o-    Julien Pivotto
 //\    Open-Source Consultant
 V_/_   Inuits - https://www.inuits.eu


