On 05 Aug 13:59, Peter S wrote:
> Hi,
>
> After we remove some rule groups, their metrics still show up, like
> prometheus_rule_group_last_duration_seconds
> (exported by prometheus server's built-in exporter at the metrics endpoint)
>
> They won't vanish until we restart the prometheus processes
Hi,
After we remove some rule groups, their metrics still show up, like
prometheus_rule_group_last_duration_seconds
(exported by prometheus server's built-in exporter at the metrics endpoint)
They won't vanish until we restart the prometheus processes (managed by
systemd).
I wonder if there is
[Seems that is doable, I've ended up doing something like this
```
from prometheus_client import CollectorRegistry

class MyRegistry(CollectorRegistry):
    def collect(self):
        """Yields metrics from the collectors in the registry."""
        for metric in super().collect():
            metric.samples = [
                s for s in metric.samples
                # keep_sample() is a hypothetical predicate that drops
                # samples belonging to removed rule groups
                if keep_sample(s)
            ]
            yield metric
```
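For anyone trying the same approach, here is a minimal self-contained sketch using the Python prometheus_client. The `rule_group` label and the `removed_groups` set are illustrative assumptions, not from the original post:

```python
from prometheus_client import CollectorRegistry, Counter, generate_latest

class FilteredRegistry(CollectorRegistry):
    """A registry that drops samples belonging to removed rule groups."""

    def __init__(self, removed_groups=()):
        super().__init__()
        self.removed_groups = set(removed_groups)

    def collect(self):
        # Yield each metric, but strip samples whose (hypothetical)
        # rule_group label is in the removal set.
        for metric in super().collect():
            metric.samples = [
                s for s in metric.samples
                if s.labels.get("rule_group") not in self.removed_groups
            ]
            yield metric

registry = FilteredRegistry(removed_groups={"old-group"})
evals = Counter("rule_evals", "Rule evaluations", ["rule_group"],
                registry=registry)
evals.labels(rule_group="old-group").inc()
evals.labels(rule_group="live-group").inc()

exposition = generate_latest(registry).decode()
```

Since `generate_latest` calls the registry's `collect`, the filtered samples never reach the exposition output.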
In case anyone else runs across this, the convo has been moved to the
prometheus-developers group,
here:
https://groups.google.com/g/prometheus-developers/c/Gh8GMscELao/m/2zZFPwdLBwAJ?pli=1
Thanks Naseem!
On Tuesday, March 24, 2020 at 9:47:12 PM UTC-4 Naseem Ullah wrote:
> To whom it may conc
Hi to all,
Brian pointed me to that link [1], which basically claims that in
Prometheus - compared to other time series databases - there is no need
to prefix metric names. And TBH I like the idea of not having to prefix
the metric name with, for example, the service name, I
Hi Stuart,
Thank you for your help on this. One thing I forgot to mention: we also
have Thanos in front of Prometheus. Will it be impacted if we enable SSL?
Thanks
> On 6 Aug 2020, at 1:36 AM, Stuart Clark wrote:
>
> On 05/08/2020 17:54, sunils...@gmail.com wrote:
>> Hi All,
>>
>> In our current
On 05/08/2020 17:54, sunils...@gmail.com wrote:
Hi All,
In our current setup Prometheus is non-SSL configured, which means I
am accessing Prometheus as http://server-name:9090
During our recent security scan the details below were reported. Please
advise what steps I can take for this.
Mes
Hi All,
In our current setup Prometheus is non-SSL configured, which means I am
accessing Prometheus as http://server-name:9090
During our recent security scan the details below were reported. Please
advise what steps I can take for this.
Message:
X-Frame-Options or Content-Security-Policy
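Findings like these usually have to be addressed in front of Prometheus, since Prometheus 2.x serves plain HTTP here and does not set those headers itself. A hedged sketch of an nginx reverse proxy that terminates TLS and adds the flagged headers (hostname, port, and certificate paths are placeholders, not from the original post):

```nginx
server {
    listen 443 ssl;
    server_name server-name;

    # Placeholder certificate paths
    ssl_certificate     /etc/nginx/certs/prometheus.crt;
    ssl_certificate_key /etc/nginx/certs/prometheus.key;

    # Headers the security scan asked for
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header Content-Security-Policy "frame-ancestors 'self'" always;

    location / {
        proxy_pass http://127.0.0.1:9090;
    }
}
```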
Sure
Config prometheus.yml
cadvisor:
  #image: docker-registry.ju.globaz.ch:5000/cadvisor:0.30.2-globaz
  image: google/cadvisor
  ports:
    - 8080:8080
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
On 05 Aug 07:14, Tom Kun wrote:
> Prometheus does not seem to pick up the labels defined in my Docker
> compose service...
>
> x-common-labels: &label-monitoring
>   com.docker.swarm.prometheus-job: monitoring
>
> cadvisor:
>   #image: docker-registry.ju.globaz.ch:5000/cadvisor:0.30.2-glob
Prometheus does not seem to pick up the labels defined in my Docker
compose service...
x-common-labels: &label-monitoring
  com.docker.swarm.prometheus-job: monitoring
cadvisor:
  #image: docker-registry.ju.globaz.ch:5000/cadvisor:0.30.2-globaz
  image: google/cadvisor
  ports:
    -
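Note that a top-level `x-common-labels` block does nothing on its own; the alias has to be referenced where labels belong. For Swarm-mode services that is `deploy.labels`. A hedged sketch (the Prometheus scrape side still needs matching `dockerswarm_sd_configs` relabeling, which is not shown here):

```yaml
x-common-labels: &label-monitoring
  com.docker.swarm.prometheus-job: monitoring

services:
  cadvisor:
    image: google/cadvisor
    deploy:
      labels: *label-monitoring   # Swarm service labels
    labels: *label-monitoring     # container labels, if those are what you discover on
```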
Hi @Christian, I am facing an issue where my Alertmanager is not sending
alerts at regular intervals. How can I configure my Alertmanager to
send alerts to my Slack?
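For the Slack part, here is a minimal hedged sketch of an alertmanager.yml; the webhook URL, channel, and intervals are placeholders. `repeat_interval` is what controls how often a still-firing alert is re-sent:

```yaml
route:
  receiver: slack-notifications
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h   # how often a still-firing alert is repeated

receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ   # placeholder
        channel: '#alerts'
        send_resolved: true
```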
On Wednesday, August 5, 2020 at 6:11:38 PM UTC+5:30, Christian Hoffmann
wrote:
>
> Hi,
>
> On 8/5/20 2:40 PM, Pachha Gopi wr
Hi, thanks for the reply. Yes, I had gone through the article and it was very useful.
On Wednesday, August 5, 2020 at 6:11:38 PM UTC+5:30, Christian Hoffmann
wrote:
>
> Hi,
>
> On 8/5/20 2:40 PM, Pachha Gopi wrote:
> > I am using Prometheus for my production servers; my question is: is there
> > any way
On Wednesday, 5 August 2020 14:29:16 UTC+2, Julien Pivotto wrote:
>
> On 05 Aug 05:24, Tom Kun wrote:
> > Hi folks,
> >
> > I'm trying to retrieve metrics from different Swarm clusters into a
> > Prometheus container which is deployed in another Swarm cluster dedicated
> > to the monitor
Hi,
On 8/5/20 2:40 PM, Pachha Gopi wrote:
> I am using Prometheus for my production servers; my question is: is there
> any way that we can configure different alerts for individual servers.
> For example, I have 3 servers: Server1 CPU usage is 20%, Server2 CPU
> usage is 30% and Server3 CPU usage i
Hi All,
I am using Prometheus for my production servers. My question is: is there
any way that we can configure different alerts for individual servers?
For example, I have 3 servers: Server1 CPU usage is 20%, Server2 CPU usage
is 30% and Server3 CPU usage is 90%. Now I need to get an alert if serv
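One common way to do this is a separate rule per server, matching on the `instance` label. A hedged sketch assuming node_exporter metrics; the instance name and threshold are made up:

```yaml
groups:
  - name: per-server-cpu
    rules:
      - alert: Server3HighCPU
        expr: |
          100 - (avg by (instance)
            (rate(node_cpu_seconds_total{mode="idle",instance="server3:9100"}[5m])) * 100) > 80
        for: 5m
        labels:
          severity: warning
```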
On 05 Aug 05:24, Tom Kun wrote:
> Hi folks,
>
> I'm trying to retrieve metrics from different Swarm clusters into a
> Prometheus container which is deployed in another Swarm cluster dedicated
> to the monitoring part of the entire infrastructure.
>
> I have actually set up the http through the
Yes, my Alertmanager is used together with Prometheus.
In prometheus.yml:
alerting:
  alertmanagers:
    - static_configs:
        - targets: ["x.x.x.x:9093"]
For every warning message I received, a resolved message arrived
immediately afterwards. This is not what I want.
Thanks,
Lei
On Wednesday, August 5, 2020 UTC
On 8/3/20 8:34 AM, 'Píer Bauer' via Prometheus Users wrote:
> Due to the fact that my query (in real world) contains several thousand
> rows of output, I would like to pursue a generic approach to avoid
> setting a separate PowerShell variable for each table cell data...
>
>
> But currently I don
On 8/5/20 10:21 AM, leiwa...@gmail.com wrote:
> rules.yml
> groups:
>   - name: network-delay
>     rules:
>       - alert: "network delay"
>         expr: probe_duration_seconds * 1000 > 3000
>         for: 1s
>         labels:
>           severity: warning
>           team: ops
>         annotations:
>           description: "{{$la
Hi,
On 8/4/20 12:24 PM, Vinod M V wrote:
>
> I am facing high memory usage with the Prometheus service, and am
> maintaining 30 days of data from Node exporter, Process exporter and JMX
> exporter for 95 servers in the Prometheus database.
>
> Grafana and Prometheus are running on the same no
Hi,
On 8/4/20 3:23 PM, jumble wrote:
> Latest prometheus, on RHEL8.
>
> Observed behavior: bound to |127.0.0.1:9090|
This sounds unexpected. Are you using the official binaries from
prometheus.io / github?
Can you share the exact logs from your experiments?
Is it possible that you've got multi
Hi,
On 8/4/20 10:54 AM, e huang wrote:
> ts=2020-08-04T05:41:58.646Z caller=main.go:169
> module=dns_eboss.enmonster.com target=10.208.100.9 level=debug
> msg="Error while sending a DNS query" err="read udp4
> 10.208.100.10:36709->10.208.100.9:53: i/o timeout"
> ts=2020-08-04T05:41:58.646Z caller
Hi,
On 8/4/20 2:21 PM, shiqi chai wrote:
> Hey guys, I have a problem with the configuration of resolve_timeout. As
> it means, a notification of resolved will be sent after the timeout.
> But actually the issue is still firing, and it disturbs the correct
> resolved notification. How can I prevent it?
Not su
Hi again.
Well, thinking about it, that makes sense. I guess I'll revisit our setup
in light of this aspect.
Thanks a ton!
---
Federico Buti
On Wed, 5 Aug 2020 at 11:02, Brian Brazil
wrote:
> On Wed, 5 Aug 2020 at 09:56, Federico Buti wrote:
>
>> Hi.
>>
>> Thanks for the reply Brian.
>> So o
On Wed, 5 Aug 2020 at 09:56, Federico Buti wrote:
> Hi.
>
> Thanks for the reply Brian.
> So one should not alert on absence of a metric? Never ever? Just on the
> upness of the targets?
>
Generally you should alert on the absence of up, as that indicates
something has either gone wrong with ser
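In rule form, alerting on `up` (rather than on the absence of an arbitrary metric) looks roughly like this; a sketch with placeholder timing and labels:

```yaml
- alert: TargetDown
  expr: up == 0
  for: 5m
  labels:
    severity: warning
  annotations:
    description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for 5 minutes."
```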
Hi.
Thanks for the reply Brian.
So one should not alert on absence of a metric? Never ever? Just on the
upness of the targets?
---
Federico Buti
On Wed, 5 Aug 2020 at 10:46, Brian Brazil
wrote:
> On Wed, 5 Aug 2020 at 09:32, Federico Buti wrote:
>
>> Hi all.
>>
>> A few months ago we introdu
On Wed, 5 Aug 2020 at 09:32, Federico Buti wrote:
> Hi all.
>
> A few months ago we introduced target down rules to keep track of targets
> that were missing. The rules are relatively simple being something like e.g.
>
> alert: target_down_slower_scraping_jobs
> expr: up{job=~"monitoring-script
On 05/08/2020 09:37, Liu Chang wrote:
[prometheus]# ./bin/prometheus --version
prometheus, version 2.19.2
./bin/prometheus --config.file=./conf/prometheus.yml
--web.listen-address="0.0.0.0:8089" --storage.tsdb.retention.size=200GB
*Killed process 203738 (prometheus) total-vm:241826816kB,
an
Mark
On Tuesday, August 4, 2020 at 6:24:38 PM UTC+8, Vinod M V wrote:
>
> Hi All ,
>
> I am facing Memory usage with Prometheus service and Maintaining
> 30 days of data from Node exporter, Process exporter and JMX exporter for
> 95 servers in Prometheus Database.
>
> Grafana
[prometheus]# ./bin/prometheus --version
prometheus, version 2.19.2
./bin/prometheus --config.file=./conf/prometheus.yml
--web.listen-address="0.0.0.0:8089" --storage.tsdb.retention.size=200GB
*Killed process 203738 (prometheus) total-vm:241826816kB,
anon-rss:185585992kB, file-rss:0kB*
We h
Hi all.
A few months ago we introduced target down rules to keep track of targets
that were missing. The rules are relatively simple being something like e.g.
alert: target_down_slower_scraping_jobs
expr: up{job=~"monitoring-scripts-5m|monitoring-scripts-hourly"} == 0
for: 13m
labels:
rules.yml
groups:
  - name: network-delay
    rules:
      - alert: "network delay"
        expr: probe_duration_seconds * 1000 > 3000
        for: 1s
        labels:
          severity: warning
          team: ops
        annotations:
          description: "{{$labels.instance}} : {{ $value }}"
alertmanager resolve_timeout is 5m
S
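With `for: 1s`, a single probe sample crossing the threshold fires the alert, and it resolves as soon as one sample drops back under it, which matches the immediate resolved messages described earlier in the thread. A hedged variant that requires the condition to hold for several minutes before firing:

```yaml
- alert: "network delay"
  expr: probe_duration_seconds * 1000 > 3000
  for: 5m   # require the condition to hold before firing
  labels:
    severity: warning
    team: ops
```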
You didn't provide any evidence of why you think it's not working. You need
to include more information.
On Tue, Aug 4, 2020, 17:25 Byungkwon Choi wrote:
> Hello,
>
> I want to collect the HTTP requests per second every second.
> To do so, I'm using Prometheus and Prometheus Adapter.
>
> I set t
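For reference, the usual way to express HTTP requests per second in PromQL is a `rate` over a counter; a hedged sketch where the metric name is a common convention, not confirmed from the post:

```promql
# per-second request rate, averaged over the last minute
rate(http_requests_total[1m])
```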