We are using the Prometheus Operator 0.38.1, deployed via 
https://github.com/helm/charts/tree/master/stable/prometheus-operator, 
with Prometheus 2.18.2.

We are seeing a strange problem with this setup: intermittently, Prometheus's 
static endpoints (/-/ready, /-/healthy, /metrics) become unresponsive and 
never recover.
The kubelet probes these endpoints, and after enough consecutive failures it 
sends a SIGTERM to Prometheus. That behavior is expected in itself, but 
because of these intermittent restarts we lose metrics for the restart 
period, roughly ~10 minutes.
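To make the kill mechanism concrete, here is a minimal sketch (not the kubelet's actual code; function names and values are illustrative) of how a liveness probe escalates to a SIGTERM: only after `failure_threshold` consecutive failures is the container killed, so one slow response does not trigger a restart, but a hung endpoint does.

```python
def probe_loop(probe, failure_threshold=3):
    """Run probes until `failure_threshold` consecutive failures.

    Returns the number of probe attempts executed before the kubelet
    would kill (SIGTERM) the container. A single success resets the
    failure counter, mirroring kubelet probe semantics.
    """
    consecutive_failures = 0
    attempts = 0
    while consecutive_failures < failure_threshold:
        attempts += 1
        if probe():
            consecutive_failures = 0   # healthy response resets the count
        else:
            consecutive_failures += 1  # hung/failed probe
    return attempts  # at this point the kubelet sends SIGTERM

# Example: endpoint answers twice, then hangs permanently.
responses = iter([True, True, False, False, False])
print(probe_loop(lambda: next(responses), failure_threshold=3))  # → 5
```

With the chart's default probe settings this means a permanently unresponsive /-/healthy endpoint is killed within a few probe periods, which matches the restart pattern we see.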

We run 2 replicas of the Prometheus pods, and we have seen restarts 
(SIGTERMs) hit both pods at the same time.
Each pod requests 8 CPU cores and 24 GB of memory, and the load on 
Prometheus is modest in terms of time series (~307K only).
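For reference, the series count above comes from Prometheus's own metrics; these queries (standard Prometheus and kube-state-metrics metric names) show the load and the restart pattern:

```
# Active time series in the TSDB head block (the ~307K figure)
prometheus_tsdb_head_series

# Container restarts over the last hour, queried from another Prometheus
# instance, to confirm both replicas restart at the same time
increase(kube_pod_container_status_restarts_total{container="prometheus"}[1h])
```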


I am attaching the debug-level logs of a pod that received a SIGTERM, along 
with the Prometheus configuration.

It's been a headache for us; any help is highly appreciated.

Thanks 
Ravindra Singh

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/b98a18eb-18eb-4326-8230-f4ffac7de4d7o%40googlegroups.com.

Attachment: prometheus_config.yaml
Description: Binary data

2020-08-06T14:28:53.141874696Z level=info ts=2020-08-06T14:28:53.141Z 
caller=main.go:337 msg="Starting Prometheus" version="(version=2.18.2, 
branch=HEAD, revision=a6600f564e3c483cc820bae6c7a551db701a22b3)"
2020-08-06T14:28:53.141937593Z level=info ts=2020-08-06T14:28:53.141Z 
caller=main.go:338 build_context="(go=go1.14.4, user=root@130a411dd4ff, 
date=20200609-09:05:58)"
2020-08-06T14:28:53.141943504Z level=info ts=2020-08-06T14:28:53.141Z 
caller=main.go:339 host_details="(Linux 4.14.35-1902.3.2.el7uek.x86_64 #2 SMP 
Tue Jul 30 03:59:02 GMT 2019 x86_64 
prometheus-monitoring-prometheus-oper-prometheus-0 (none))"
2020-08-06T14:28:53.14195177Z level=info ts=2020-08-06T14:28:53.141Z 
caller=main.go:340 fd_limits="(soft=1048576, hard=1048576)"
2020-08-06T14:28:53.141987237Z level=info ts=2020-08-06T14:28:53.141Z 
caller=main.go:341 vm_limits="(soft=unlimited, hard=unlimited)"
2020-08-06T14:28:53.144558754Z level=info ts=2020-08-06T14:28:53.144Z 
caller=main.go:678 msg="Starting TSDB ..."
2020-08-06T14:28:53.144588211Z level=info ts=2020-08-06T14:28:53.144Z 
caller=web.go:523 component=web msg="Start listening for connections" 
address=0.0.0.0:9090
2020-08-06T14:28:53.149671863Z level=info ts=2020-08-06T14:28:53.149Z 
caller=head.go:575 component=tsdb msg="Replaying WAL, this may take awhile"
2020-08-06T14:28:53.150320416Z level=info ts=2020-08-06T14:28:53.150Z 
caller=head.go:624 component=tsdb msg="WAL segment loaded" segment=0 
maxSegment=6
2020-08-06T14:28:54.746904644Z level=info ts=2020-08-06T14:28:54.746Z 
caller=head.go:624 component=tsdb msg="WAL segment loaded" segment=1 
maxSegment=6
2020-08-06T14:28:55.153054189Z level=info ts=2020-08-06T14:28:55.152Z 
caller=head.go:624 component=tsdb msg="WAL segment loaded" segment=2 
maxSegment=6
2020-08-06T14:28:55.894163246Z level=info ts=2020-08-06T14:28:55.893Z 
caller=head.go:624 component=tsdb msg="WAL segment loaded" segment=3 
maxSegment=6
2020-08-06T14:28:56.495329908Z level=info ts=2020-08-06T14:28:56.495Z 
caller=head.go:624 component=tsdb msg="WAL segment loaded" segment=4 
maxSegment=6
2020-08-06T14:28:57.211729615Z level=info ts=2020-08-06T14:28:57.211Z 
caller=head.go:624 component=tsdb msg="WAL segment loaded" segment=5 
maxSegment=6
2020-08-06T14:28:57.212010254Z level=info ts=2020-08-06T14:28:57.211Z 
caller=head.go:624 component=tsdb msg="WAL segment loaded" segment=6 
maxSegment=6
2020-08-06T14:28:57.212031664Z level=info ts=2020-08-06T14:28:57.211Z 
caller=head.go:627 component=tsdb msg="WAL replay completed" 
duration=4.06243357s
2020-08-06T14:28:57.384086995Z level=info ts=2020-08-06T14:28:57.383Z 
caller=main.go:694 fs_type=EXT4_SUPER_MAGIC
2020-08-06T14:28:57.384121409Z level=info ts=2020-08-06T14:28:57.383Z 
caller=main.go:695 msg="TSDB started"
2020-08-06T14:28:57.384126188Z level=debug ts=2020-08-06T14:28:57.383Z 
caller=main.go:696 msg="TSDB options" MinBlockDuration=2h MaxBlockDuration=3d 
MaxBytes=0B NoLockfile=true RetentionDuration=30d WALSegmentSize=0B 
AllowOverlappingBlocks=false WALCompression=false
2020-08-06T14:28:57.384130927Z level=info ts=2020-08-06T14:28:57.383Z 
caller=main.go:799 msg="Loading configuration file" 
filename=/etc/prometheus/config_out/prometheus.env.yaml
2020-08-06T14:28:57.388595614Z level=info ts=2020-08-06T14:28:57.388Z 
caller=kubernetes.go:253 component="discovery manager scrape" discovery=k8s 
msg="Using pod service account via in-cluster config"
2020-08-06T14:28:57.389609134Z level=info ts=2020-08-06T14:28:57.389Z 
caller=kubernetes.go:253 component="discovery manager scrape" discovery=k8s 
msg="Using pod service account via in-cluster config"
2020-08-06T14:28:57.390434309Z level=info ts=2020-08-06T14:28:57.390Z 
caller=kubernetes.go:253 component="discovery manager scrape" discovery=k8s 
msg="Using pod service account via in-cluster config"
2020-08-06T14:28:57.391276708Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=manager.go:224 component="discovery manager scrape" msg="Starting 
provider" provider=*kubernetes.SDConfig/0 
subs="[monitoring-staging/monitoring-prometheus-oper-prometheus/0 
monitoring-staging/monitoring-prometheus-oper-grafana/0 
monitoring-staging/monitoring-prometheus-oper-alertmanager/0 
monitoring-staging/monitoring-prometheus-oper-kube-state-metrics/0 
monitoring-staging/monitoring-prometheus-oper-node-exporter/0 
monitoring-staging/monitoring-prometheus-oper-operator/0]"
2020-08-06T14:28:57.391306484Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=manager.go:224 component="discovery manager scrape" msg="Starting 
provider" provider=*kubernetes.SDConfig/1 
subs=[monitoring-staging/monitoring-prometheus-oper-apiserver/0]
2020-08-06T14:28:57.391311463Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=manager.go:224 component="discovery manager scrape" msg="Starting 
provider" provider=*kubernetes.SDConfig/2 
subs="[monitoring-staging/monitoring-prometheus-oper-kubelet/0 
monitoring-staging/monitoring-prometheus-oper-kubelet/1 
monitoring-staging/monitoring-prometheus-oper-kubelet/2 
monitoring-staging/monitoring-prometheus-oper-kubelet/3 
monitoring-staging/monitoring-prometheus-oper-kube-etcd/0]"
2020-08-06T14:28:57.391639141Z level=info ts=2020-08-06T14:28:57.391Z 
caller=kubernetes.go:253 component="discovery manager notify" discovery=k8s 
msg="Using pod service account via in-cluster config"
2020-08-06T14:28:57.391853735Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Endpoints (10m0s) from 
/app/discovery/kubernetes/kubernetes.go:361"
2020-08-06T14:28:57.391866339Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:361"
2020-08-06T14:28:57.39187208Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:363"
2020-08-06T14:28:57.391876989Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:362"
2020-08-06T14:28:57.391923517Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:363"
2020-08-06T14:28:57.391936191Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:362"
2020-08-06T14:28:57.391942543Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:363"
2020-08-06T14:28:57.392026301Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:363"
2020-08-06T14:28:57.392037822Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Endpoints (10m0s) from 
/app/discovery/kubernetes/kubernetes.go:361"
2020-08-06T14:28:57.392043282Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:361"
2020-08-06T14:28:57.392048402Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Endpoints (10m0s) from 
/app/discovery/kubernetes/kubernetes.go:361"
2020-08-06T14:28:57.39205292Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:363"
2020-08-06T14:28:57.392057369Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:361"
2020-08-06T14:28:57.392062058Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:363"
2020-08-06T14:28:57.392066877Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:362"
2020-08-06T14:28:57.392071515Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:362"
2020-08-06T14:28:57.392076194Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:362"
2020-08-06T14:28:57.392111391Z level=debug ts=2020-08-06T14:28:57.391Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:362"
2020-08-06T14:28:57.392712404Z level=debug ts=2020-08-06T14:28:57.392Z 
caller=manager.go:224 component="discovery manager notify" msg="Starting 
provider" provider=*kubernetes.SDConfig/0 subs=[config-0]
2020-08-06T14:28:57.393027979Z level=debug ts=2020-08-06T14:28:57.392Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Endpoints (10m0s) from 
/app/discovery/kubernetes/kubernetes.go:361"
2020-08-06T14:28:57.393038088Z level=debug ts=2020-08-06T14:28:57.392Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:361"
2020-08-06T14:28:57.393043227Z level=debug ts=2020-08-06T14:28:57.392Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Service (10m0s) from /app/discovery/kubernetes/kubernetes.go:362"
2020-08-06T14:28:57.393047756Z level=debug ts=2020-08-06T14:28:57.392Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Starting 
reflector *v1.Pod (10m0s) from /app/discovery/kubernetes/kubernetes.go:363"
2020-08-06T14:28:57.393068635Z level=debug ts=2020-08-06T14:28:57.392Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Service from /app/discovery/kubernetes/kubernetes.go:362"
2020-08-06T14:28:57.393076481Z level=debug ts=2020-08-06T14:28:57.392Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Pod from /app/discovery/kubernetes/kubernetes.go:363"
2020-08-06T14:28:57.401679147Z level=debug ts=2020-08-06T14:28:57.401Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/endpoints?limit=500&resourceVersion=0
 200 OK in 9 milliseconds"
2020-08-06T14:28:57.401878212Z level=debug ts=2020-08-06T14:28:57.401Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/services?limit=500&resourceVersion=0
 200 OK in 9 milliseconds"
2020-08-06T14:28:57.401889152Z level=debug ts=2020-08-06T14:28:57.401Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?limit=500&resourceVersion=0
 200 OK in 8 milliseconds"
2020-08-06T14:28:57.401894062Z level=debug ts=2020-08-06T14:28:57.401Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/services?limit=500&resourceVersion=0
 200 OK in 9 milliseconds"
2020-08-06T14:28:57.402141729Z level=debug ts=2020-08-06T14:28:57.402Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/services?limit=500&resourceVersion=0
 200 OK in 8 milliseconds"
2020-08-06T14:28:57.402164733Z level=debug ts=2020-08-06T14:28:57.402Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/endpoints?limit=500&resourceVersion=0
 200 OK in 9 milliseconds"
2020-08-06T14:28:57.402169452Z level=debug ts=2020-08-06T14:28:57.402Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/pods?limit=500&resourceVersion=0
 200 OK in 9 milliseconds"
2020-08-06T14:28:57.402226539Z level=debug ts=2020-08-06T14:28:57.402Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/services?limit=500&resourceVersion=0
 200 OK in 10 milliseconds"
2020-08-06T14:28:57.402478273Z level=debug ts=2020-08-06T14:28:57.402Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?limit=500&resourceVersion=0
 200 OK in 10 milliseconds"
2020-08-06T14:28:57.403255479Z level=debug ts=2020-08-06T14:28:57.403Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/pods?limit=500&resourceVersion=0
 200 OK in 10 milliseconds"
2020-08-06T14:28:57.406461832Z level=debug ts=2020-08-06T14:28:57.406Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/endpoints?allowWatchBookmarks=true&resourceVersion=57952067&timeout=8m19s&timeoutSeconds=499&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:28:57.4065184Z level=debug ts=2020-08-06T14:28:57.406Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/endpoints?allowWatchBookmarks=true&resourceVersion=57952067&timeout=8m1s&timeoutSeconds=481&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:28:57.406527667Z level=debug ts=2020-08-06T14:28:57.406Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=7m11s&timeoutSeconds=431&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:28:57.40655055Z level=debug ts=2020-08-06T14:28:57.406Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=7m7s&timeoutSeconds=427&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:28:57.406655257Z level=debug ts=2020-08-06T14:28:57.406Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?allowWatchBookmarks=true&resourceVersion=57952067&timeout=9m42s&timeoutSeconds=582&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:28:57.406797095Z level=debug ts=2020-08-06T14:28:57.406Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=8m26s&timeoutSeconds=506&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:28:57.406806913Z level=debug ts=2020-08-06T14:28:57.406Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=5m19s&timeoutSeconds=319&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:28:57.406811422Z level=debug ts=2020-08-06T14:28:57.406Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?allowWatchBookmarks=true&resourceVersion=57952067&timeout=5m46s&timeoutSeconds=346&watch=true
 200 OK in 1 milliseconds"
2020-08-06T14:28:57.40695338Z level=debug ts=2020-08-06T14:28:57.406Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/pods?limit=500&resourceVersion=0
 200 OK in 14 milliseconds"
2020-08-06T14:28:57.408743404Z level=debug ts=2020-08-06T14:28:57.408Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/pods?limit=500&resourceVersion=0
 200 OK in 16 milliseconds"
2020-08-06T14:28:57.412612027Z level=debug ts=2020-08-06T14:28:57.412Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/pods?allowWatchBookmarks=true&resourceVersion=57952065&timeout=5m29s&timeoutSeconds=329&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:28:57.417985486Z level=debug ts=2020-08-06T14:28:57.417Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/pods?allowWatchBookmarks=true&resourceVersion=57952065&timeout=6m30s&timeoutSeconds=390&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:28:57.418299578Z level=debug ts=2020-08-06T14:28:57.418Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/pods?allowWatchBookmarks=true&resourceVersion=57952065&timeout=7m34s&timeoutSeconds=454&watch=true
 200 OK in 1 milliseconds"
2020-08-06T14:28:57.425956963Z level=debug ts=2020-08-06T14:28:57.425Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/pods?allowWatchBookmarks=true&resourceVersion=57952065&timeout=9m4s&timeoutSeconds=544&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:28:57.457219996Z level=info ts=2020-08-06T14:28:57.457Z 
caller=main.go:827 msg="Completed loading of configuration file" 
filename=/etc/prometheus/config_out/prometheus.env.yaml
2020-08-06T14:28:57.457242288Z level=info ts=2020-08-06T14:28:57.457Z 
caller=main.go:646 msg="Server is ready to receive web requests."
2020-08-06T14:28:57.491704613Z level=debug ts=2020-08-06T14:28:57.491Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="caches 
populated"
2020-08-06T14:28:57.491727006Z level=debug ts=2020-08-06T14:28:57.491Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="caches 
populated"
2020-08-06T14:28:57.491732686Z level=debug ts=2020-08-06T14:28:57.491Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="caches 
populated"
2020-08-06T14:34:16.407430591Z level=debug ts=2020-08-06T14:34:16.407Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: Watch close - *v1.Service 
total 0 items received"
2020-08-06T14:34:16.41067248Z level=debug ts=2020-08-06T14:34:16.410Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=6m4s&timeoutSeconds=364&watch=true
 200 OK in 3 milliseconds"
2020-08-06T14:34:26.413048633Z level=debug ts=2020-08-06T14:34:26.412Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:363: Watch close - *v1.Pod total 0 
items received"
2020-08-06T14:34:26.416253916Z level=debug ts=2020-08-06T14:34:26.416Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/pods?allowWatchBookmarks=true&resourceVersion=57952065&timeout=6m54s&timeoutSeconds=414&watch=true
 200 OK in 3 milliseconds"
2020-08-06T14:34:43.407077724Z level=debug ts=2020-08-06T14:34:43.406Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: Watch close - *v1.Endpoints 
total 2 items received"
2020-08-06T14:34:43.409677374Z level=debug ts=2020-08-06T14:34:43.409Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?allowWatchBookmarks=true&resourceVersion=57952090&timeout=6m35s&timeoutSeconds=395&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:35:27.418384667Z level=debug ts=2020-08-06T14:35:27.418Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:363: Watch close - *v1.Pod total 1 
items received"
2020-08-06T14:35:27.421350988Z level=debug ts=2020-08-06T14:35:27.421Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/pods?allowWatchBookmarks=true&resourceVersion=57952088&timeout=7m20s&timeoutSeconds=440&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:36:04.406956468Z level=debug ts=2020-08-06T14:36:04.406Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: Watch close - *v1.Service 
total 0 items received"
2020-08-06T14:36:04.409735026Z level=debug ts=2020-08-06T14:36:04.409Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=6m24s&timeoutSeconds=384&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:36:08.406860222Z level=debug ts=2020-08-06T14:36:08.406Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: Watch close - *v1.Service 
total 0 items received"
2020-08-06T14:36:08.409586661Z level=debug ts=2020-08-06T14:36:08.409Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=6m27s&timeoutSeconds=387&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:36:31.418749677Z level=debug ts=2020-08-06T14:36:31.418Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:363: Watch close - *v1.Pod total 1 
items received"
2020-08-06T14:36:31.420764004Z level=debug ts=2020-08-06T14:36:31.420Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/pods?allowWatchBookmarks=true&resourceVersion=57952088&timeout=8m23s&timeoutSeconds=503&watch=true
 200 OK in 1 milliseconds"
2020-08-06T14:36:58.406799129Z level=debug ts=2020-08-06T14:36:58.406Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: Watch close - *v1.Endpoints 
total 235 items received"
2020-08-06T14:36:58.409810676Z level=debug ts=2020-08-06T14:36:58.409Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/endpoints?allowWatchBookmarks=true&resourceVersion=57955191&timeout=6m5s&timeoutSeconds=365&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:37:16.40688695Z level=debug ts=2020-08-06T14:37:16.406Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: Watch close - *v1.Endpoints 
total 741 items received"
2020-08-06T14:37:16.409813907Z level=debug ts=2020-08-06T14:37:16.409Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/endpoints?allowWatchBookmarks=true&resourceVersion=57955291&timeout=6m0s&timeoutSeconds=360&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:37:23.407198781Z level=debug ts=2020-08-06T14:37:23.407Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: Watch close - *v1.Service 
total 0 items received"
2020-08-06T14:37:23.409270086Z level=debug ts=2020-08-06T14:37:23.409Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=6m48s&timeoutSeconds=408&watch=true
 200 OK in 1 milliseconds"
2020-08-06T14:38:01.425871656Z level=debug ts=2020-08-06T14:38:01.425Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:363: Watch close - *v1.Pod total 0 
items received"
2020-08-06T14:38:01.428628771Z level=debug ts=2020-08-06T14:38:01.428Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/pods?allowWatchBookmarks=true&resourceVersion=57952065&timeout=7m51s&timeoutSeconds=471&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:38:39.40710783Z level=debug ts=2020-08-06T14:38:39.406Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: Watch close - *v1.Endpoints 
total 2 items received"
2020-08-06T14:38:39.409115696Z level=debug ts=2020-08-06T14:38:39.408Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?allowWatchBookmarks=true&resourceVersion=57952090&timeout=9m18s&timeoutSeconds=558&watch=true
 200 OK in 1 milliseconds"
2020-08-06T14:38:39.409876119Z level=warn ts=2020-08-06T14:38:39.409Z 
caller=klog.go:86 component=k8s_client_runtime func=Warningf 
msg="/app/discovery/kubernetes/kubernetes.go:361: watch of *v1.Endpoints ended 
with: too old resource version: 57952090 (57952629)"
2020-08-06T14:38:40.410082277Z level=debug ts=2020-08-06T14:38:40.409Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:361"
2020-08-06T14:38:40.413455112Z level=debug ts=2020-08-06T14:38:40.413Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?limit=500&resourceVersion=0
 200 OK in 3 milliseconds"
2020-08-06T14:38:40.415908735Z level=debug ts=2020-08-06T14:38:40.415Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?allowWatchBookmarks=true&resourceVersion=57955801&timeout=6m27s&timeoutSeconds=387&watch=true
 200 OK in 1 milliseconds"
2020-08-06T14:38:57.404277557Z level=debug ts=2020-08-06T14:38:57.403Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: forcing resync"
2020-08-06T14:38:57.404338362Z level=debug ts=2020-08-06T14:38:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: forcing resync"
2020-08-06T14:38:57.404346618Z level=debug ts=2020-08-06T14:38:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: forcing resync"
2020-08-06T14:38:57.404352469Z level=debug ts=2020-08-06T14:38:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: forcing resync"
2020-08-06T14:38:57.404357738Z level=debug ts=2020-08-06T14:38:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: forcing resync"
2020-08-06T14:38:57.40513288Z level=debug ts=2020-08-06T14:38:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: forcing resync"
2020-08-06T14:38:57.405147397Z level=debug ts=2020-08-06T14:38:57.405Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: forcing resync"
2020-08-06T14:40:20.411040775Z level=debug ts=2020-08-06T14:40:20.410Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: Watch close - *v1.Service 
total 0 items received"
2020-08-06T14:40:20.414107526Z level=debug ts=2020-08-06T14:40:20.413Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=6m29s&timeoutSeconds=389&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:41:18.410078671Z level=debug ts=2020-08-06T14:41:18.409Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: Watch close - *v1.Endpoints 
total 0 items received"
2020-08-06T14:41:18.413220844Z level=debug ts=2020-08-06T14:41:18.413Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?allowWatchBookmarks=true&resourceVersion=57952090&timeout=8m45s&timeoutSeconds=525&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:41:18.41344702Z level=warn ts=2020-08-06T14:41:18.413Z 
caller=klog.go:86 component=k8s_client_runtime func=Warningf 
msg="/app/discovery/kubernetes/kubernetes.go:361: watch of *v1.Endpoints ended 
with: too old resource version: 57952090 (57953624)"
2020-08-06T14:41:19.41420127Z level=debug ts=2020-08-06T14:41:19.413Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof msg="Listing 
and watching *v1.Endpoints from /app/discovery/kubernetes/kubernetes.go:361"
2020-08-06T14:41:19.416512177Z level=debug ts=2020-08-06T14:41:19.416Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?limit=500&resourceVersion=0
 200 OK in 2 milliseconds"
2020-08-06T14:41:19.418736961Z level=debug ts=2020-08-06T14:41:19.418Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?allowWatchBookmarks=true&resourceVersion=57956830&timeout=6m1s&timeoutSeconds=361&watch=true
 200 OK in 1 milliseconds"
2020-08-06T14:41:20.416679379Z level=debug ts=2020-08-06T14:41:20.416Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:363: Watch close - *v1.Pod total 0 
items received"
2020-08-06T14:41:20.419985168Z level=debug ts=2020-08-06T14:41:20.419Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/pods?allowWatchBookmarks=true&resourceVersion=57952065&timeout=9m19s&timeoutSeconds=559&watch=true
 200 OK in 3 milliseconds"
2020-08-06T14:42:28.410101521Z level=debug ts=2020-08-06T14:42:28.409Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: Watch close - *v1.Service 
total 0 items received"
2020-08-06T14:42:28.413273928Z level=debug ts=2020-08-06T14:42:28.413Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=8m29s&timeoutSeconds=509&watch=true
 200 OK in 3 milliseconds"
2020-08-06T14:42:35.409907099Z level=debug ts=2020-08-06T14:42:35.409Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: Watch close - *v1.Service 
total 0 items received"
2020-08-06T14:42:35.413025908Z level=debug ts=2020-08-06T14:42:35.412Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=7m37s&timeoutSeconds=457&watch=true
 200 OK in 3 milliseconds"
2020-08-06T14:42:47.421949496Z level=debug ts=2020-08-06T14:42:47.421Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:363: Watch close - *v1.Pod total 0 
items received"
2020-08-06T14:42:47.424945354Z level=debug ts=2020-08-06T14:42:47.424Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/pods?allowWatchBookmarks=true&resourceVersion=57952088&timeout=5m8s&timeoutSeconds=308&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:43:03.409948301Z level=debug ts=2020-08-06T14:43:03.409Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: Watch close - *v1.Endpoints 
total 179 items received"
2020-08-06T14:43:03.413753314Z level=debug ts=2020-08-06T14:43:03.413Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/endpoints?allowWatchBookmarks=true&resourceVersion=57957454&timeout=5m47s&timeoutSeconds=347&watch=true
 200 OK in 3 milliseconds"
2020-08-06T14:43:16.410222927Z level=debug ts=2020-08-06T14:43:16.410Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: Watch close - *v1.Endpoints 
total 535 items received"
2020-08-06T14:43:16.415711485Z level=debug ts=2020-08-06T14:43:16.415Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/endpoints?allowWatchBookmarks=true&resourceVersion=57957538&timeout=8m2s&timeoutSeconds=482&watch=true
 200 OK in 5 milliseconds"
2020-08-06T14:44:11.409681353Z level=debug ts=2020-08-06T14:44:11.409Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: Watch close - *v1.Service 
total 0 items received"
2020-08-06T14:44:11.411948624Z level=debug ts=2020-08-06T14:44:11.411Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=9m52s&timeoutSeconds=592&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:44:54.421143808Z level=debug ts=2020-08-06T14:44:54.420Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:363: Watch close - *v1.Pod total 0 
items received"
2020-08-06T14:44:54.423538489Z level=debug ts=2020-08-06T14:44:54.423Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/pods?allowWatchBookmarks=true&resourceVersion=57952088&timeout=5m23s&timeoutSeconds=323&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:45:07.416299926Z level=debug ts=2020-08-06T14:45:07.416Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: Watch close - *v1.Endpoints 
total 0 items received"
2020-08-06T14:45:07.418378252Z level=debug ts=2020-08-06T14:45:07.418Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?allowWatchBookmarks=true&resourceVersion=57955801&timeout=7m58s&timeoutSeconds=478&watch=true
 200 OK in 1 milliseconds"
2020-08-06T14:45:52.428886183Z level=debug ts=2020-08-06T14:45:52.428Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:363: Watch close - *v1.Pod total 0 
items received"
2020-08-06T14:45:52.432037373Z level=debug ts=2020-08-06T14:45:52.431Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/kube-system/pods?allowWatchBookmarks=true&resourceVersion=57952065&timeout=5m17s&timeoutSeconds=317&watch=true
 200 OK in 3 milliseconds"
2020-08-06T14:46:49.414464132Z level=debug ts=2020-08-06T14:46:49.414Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: Watch close - *v1.Service 
total 0 items received"
2020-08-06T14:46:49.417450952Z level=debug ts=2020-08-06T14:46:49.417Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/services?allowWatchBookmarks=true&resourceVersion=57926361&timeout=8m27s&timeoutSeconds=507&watch=true
 200 OK in 2 milliseconds"
2020-08-06T14:47:20.419121466Z level=debug ts=2020-08-06T14:47:20.418Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: Watch close - *v1.Endpoints 
total 0 items received"
2020-08-06T14:47:20.422657217Z level=debug ts=2020-08-06T14:47:20.422Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/endpoints?allowWatchBookmarks=true&resourceVersion=57956830&timeout=6m30s&timeoutSeconds=390&watch=true
 200 OK in 3 milliseconds"
2020-08-06T14:47:55.425419104Z level=debug ts=2020-08-06T14:47:55.425Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:363: Watch close - *v1.Pod total 0 
items received"
2020-08-06T14:47:55.586716663Z level=debug ts=2020-08-06T14:47:55.586Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/monitoring-staging/pods?allowWatchBookmarks=true&resourceVersion=57952088&timeout=5m51s&timeoutSeconds=351&watch=true
 200 OK in 161 milliseconds"
2020-08-06T14:48:40.414675195Z level=debug ts=2020-08-06T14:48:40.414Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: forcing resync"
2020-08-06T14:48:50.414761422Z level=debug ts=2020-08-06T14:48:50.414Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: Watch close - *v1.Endpoints 
total 170 items received"
2020-08-06T14:48:50.419148651Z level=debug ts=2020-08-06T14:48:50.419Z 
caller=klog.go:70 component=k8s_client_runtime func=Infof msg="GET 
https://10.19.0.1:443/api/v1/namespaces/default/endpoints?allowWatchBookmarks=true&resourceVersion=57959831&timeout=7m42s&timeoutSeconds=462&watch=true
 200 OK in 4 milliseconds"
2020-08-06T14:48:57.404667115Z level=debug ts=2020-08-06T14:48:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: forcing resync"
2020-08-06T14:48:57.404701649Z level=debug ts=2020-08-06T14:48:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: forcing resync"
2020-08-06T14:48:57.404708943Z level=debug ts=2020-08-06T14:48:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: forcing resync"
2020-08-06T14:48:57.404714103Z level=debug ts=2020-08-06T14:48:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:361: forcing resync"
2020-08-06T14:48:57.404718671Z level=debug ts=2020-08-06T14:48:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: forcing resync"
2020-08-06T14:48:57.404723311Z level=debug ts=2020-08-06T14:48:57.404Z 
caller=klog.go:53 component=k8s_client_runtime func=Verbose.Infof 
msg="/app/discovery/kubernetes/kubernetes.go:362: forcing resync"
2020-08-06T14:50:02.228596264Z level=warn ts=2020-08-06T14:50:02.228Z 
caller=main.go:524 msg="Received SIGTERM, exiting gracefully..."
2020-08-06T14:50:02.228624937Z level=info ts=2020-08-06T14:50:02.228Z 
caller=main.go:547 msg="Stopping scrape discovery manager..."
2020-08-06T14:50:02.228631349Z level=info ts=2020-08-06T14:50:02.228Z 
caller=main.go:561 msg="Stopping notify discovery manager..."
2020-08-06T14:50:02.22863652Z level=info ts=2020-08-06T14:50:02.228Z 
caller=main.go:583 msg="Stopping scrape manager..."
2020-08-06T14:50:02.228760534Z level=info ts=2020-08-06T14:50:02.228Z 
caller=main.go:557 msg="Notify discovery manager stopped"
2020-08-06T14:50:02.228787083Z level=info ts=2020-08-06T14:50:02.228Z 
caller=main.go:543 msg="Scrape discovery manager stopped"
2020-08-06T14:50:02.23066435Z level=debug ts=2020-08-06T14:50:02.230Z 
caller=scrape.go:962 component="scrape manager" 
scrape_pool=monitoring-staging/monitoring-prometheus-oper-kubelet/0 
target=https://176.38.8.10:10250/metrics msg="Scrape failed" err="Get 
\"https://176.38.8.10:10250/metrics\": context canceled"
2020-08-06T14:50:02.239246079Z level=info ts=2020-08-06T14:50:02.239Z 
caller=manager.go:882 component="rule manager" msg="Stopping rule manager..."
2020-08-06T14:50:02.239331741Z level=info ts=2020-08-06T14:50:02.239Z 
caller=manager.go:892 component="rule manager" msg="Rule manager stopped"
2020-08-06T14:50:02.270830847Z level=info ts=2020-08-06T14:50:02.270Z 
caller=main.go:577 msg="Scrape manager stopped"
2020-08-06T14:50:02.277624488Z level=info ts=2020-08-06T14:50:02.277Z 
caller=notifier.go:601 component=notifier msg="Stopping notification manager..."
2020-08-06T14:50:02.277667689Z level=info ts=2020-08-06T14:50:02.277Z 
caller=main.go:749 msg="Notifier manager stopped"