I updated to 2.16.0 today; before that it was 2.9.0 (if I'm not 
mistaken).

On Thursday, 27 February 2020 15:52:41 UTC+1, Julien Pivotto wrote:
>
> Can you tell me what your Prometheus version is? 
>
> On 27 Feb 06:48, Paul van der Linden wrote: 
> > There are multiple labels on those metrics indeed. How can I figure out 
> > what is causing this? Looking at the git changes for the deployment 
> > scripts, the only thing that changed was this metric job. I still get a 
> > continuous stream of these in my logs, and they started around the time 
> > I added the label drop: 
> > level=error ts=2020-02-27T14:38:24.235Z caller=db.go:617 component=tsdb 
> > msg="compaction failed" err="persist head block: write compaction: add 
> > series: out-of-order series added with label set \"{}\"" 
> > level=error ts=2020-02-27T14:39:26.332Z caller=db.go:617 component=tsdb 
> > msg="compaction failed" err="persist head block: write compaction: add 
> > series: out-of-order series added with label set \"{}\"" 
> > level=error ts=2020-02-27T14:40:26.960Z caller=db.go:617 component=tsdb 
> > msg="compaction failed" err="persist head block: write compaction: add 
> > series: out-of-order series added with label set \"{}\"" 
> > level=warn ts=2020-02-27T14:40:36.044Z caller=klog.go:86 
> > component=k8s_client_runtime func=Warningf 
> > msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints 
> > ended with: too old resource version: 95336324 (95337598)" 
> > level=error ts=2020-02-27T14:41:28.247Z caller=db.go:617 component=tsdb 
> > msg="compaction failed" err="persist head block: write compaction: add 
> > series: out-of-order series added with label set \"{}\"" 
> > level=warn ts=2020-02-27T14:41:34.021Z caller=klog.go:86 
> > component=k8s_client_runtime func=Warningf 
> > msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints 
> > ended with: too old resource version: 95336132 (95337864)" 
> > level=warn ts=2020-02-27T14:42:18.027Z caller=klog.go:86 
> > component=k8s_client_runtime func=Warningf 
> > msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints 
> > ended with: too old resource version: 95336301 (95338065)" 
> > level=error ts=2020-02-27T14:42:28.877Z caller=db.go:617 component=tsdb 
> > msg="compaction failed" err="persist head block: write compaction: add 
> > series: out-of-order series added with label set \"{}\"" 
> > level=error ts=2020-02-27T14:43:30.582Z caller=db.go:617 component=tsdb 
> > msg="compaction failed" err="persist head block: write compaction: add 
> > series: out-of-order series added with label set \"{}\"" 
> > level=error ts=2020-02-27T14:44:32.665Z caller=db.go:617 component=tsdb 
> > msg="compaction failed" err="persist head block: write compaction: add 
> > series: out-of-order series added with label set \"{}\"" 
> > level=error ts=2020-02-27T14:45:33.427Z caller=db.go:617 component=tsdb 
> > msg="compaction failed" err="persist head block: write compaction: add 
> > series: out-of-order series added with label set \"{}\"" 
> > level=warn ts=2020-02-27T14:46:32.003Z caller=klog.go:86 
> > component=k8s_client_runtime func=Warningf 
> > msg="/app/discovery/kubernetes/kubernetes.go:261: watch of *v1.Endpoints 
> > ended with: too old resource version: 95338072 (95339190)" 
> > level=error ts=2020-02-27T14:46:34.061Z caller=db.go:617 component=tsdb 
> > msg="compaction failed" err="persist head block: write compaction: add 
> > series: out-of-order series added with label set \"{}\"" 
> > level=error ts=2020-02-27T14:47:35.562Z caller=db.go:617 component=tsdb 
> > msg="compaction failed" err="persist head block: write compaction: add 
> > series: out-of-order series added with label set \"{}\"" 
> > 
> > 
> > On Thursday, 27 February 2020 15:25:31 UTC+1, Julien Pivotto wrote: 
> > > 
> > > On 27 Feb 06:14, Paul van der Linden wrote: 
> > > > Thanks, completely missed that option. The error doesn't disappear, 
> > > > though; does it mean this error keeps happening until the metrics 
> > > > ingested with the label drop get deleted because of retention? 
> > > 
> > > I don't know what the cause of the original issue is. It does not sound 
> > > like the relabel config you have shown, because there would still be a 
> > > 'job' label. 
> > > 
> > > > 
> > > > On Thursday, 27 February 2020 14:51:31 UTC+1, Julien Pivotto wrote: 
> > > > > 
> > > > > On 27 Feb 05:48, Paul van der Linden wrote: 
> > > > > > The label set in the error is empty. There are still a bunch of 
> > > > > > labels on the metric though. The metrics don't have timestamps; they 
> > > > > > are just generated every time Prometheus scrapes them. 
> > > > > > 
> > > > > > How do I improve the alert rule to ignore these? Looking at the docs, 
> > > > > > is the only way to wrap these in two label_replace calls, or 
> > > > > > otherwise to sum and specify every relevant label? 
> > > > > 
> > > > > You can use `without(pod, instance)` as well. 
> > > > > 
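> > > > > For illustration, a rule using that aggregation could look roughly 
> > > > > like this (a sketch only; the alert name, metric, and threshold are 
> > > > > placeholders): 
> > > > > 
> > > > >   groups: 
> > > > >     - name: example 
> > > > >       rules: 
> > > > >         - alert: AppErrorsHigh 
> > > > >           # without(pod, instance) aggregates those labels away, so a 
> > > > >           # pod restart does not resolve and re-fire the alert. 
> > > > >           expr: sum without(pod, instance) (rate(my_app_errors_total[5m])) > 0.1 
> > > > >           for: 5m 
> > > > > 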
> > > > > > 
> > > > > > On Thursday, 27 February 2020 14:43:05 UTC+1, Julien Pivotto wrote: 
> > > > > > > 
> > > > > > > On 27 Feb 05:33, Paul van der Linden wrote: 
> > > > > > > > I currently have a pod running in Kubernetes exporting some 
> > > > > > > > metrics to Prometheus. If we have alerts on them, they will be 
> > > > > > > > retriggered (resolved & triggered again shortly after each other) 
> > > > > > > > every time I update the software (as the pod would have a 
> > > > > > > > different name/IP). I came to the conclusion that this is because 
> > > > > > > > of the unique pod and instance labels on these metrics. I have 
> > > > > > > > added a config to drop these labels, and while it solves that 
> > > > > > > > issue, it seems to cause the error "persist head block: write 
> > > > > > > > compaction: add series: out-of-order series added with label 
> > > > > > > > set". What is the correct way to solve this? 
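> > > > > > > > 
> > > > > > > > For illustration, such a label drop would typically be a 
> > > > > > > > metric_relabel_configs rule roughly like this (a sketch only; the 
> > > > > > > > job name and exact regex are placeholders): 
> > > > > > > > 
> > > > > > > >   scrape_configs: 
> > > > > > > >     - job_name: my-app 
> > > > > > > >       metric_relabel_configs: 
> > > > > > > >         # Drop the per-deployment labels before samples are 
> > > > > > > >         # stored, so the series survive a pod restart. 
> > > > > > > >         - action: labeldrop 
> > > > > > > >           regex: pod|instance 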
> > > > > > > 
> > > > > > > Is the label set in the error empty? Are you exposing metrics 
> > > > > > > with timestamps? 
> > > > > > > 
> > > > > > > The correct way to deal with this would probably be to ingest with 
> > > > > > > the instance and pod name and improve your alert rule to ignore 
> > > > > > > that at this point. 
> > > > > > > 
> > > > > > > Thanks 
> > > > > > > 
> -- 
>  (o-    Julien Pivotto 
>  //\    Open-Source Consultant 
>  V_/_   Inuits - https://www.inuits.eu 
>
