Re: [prometheus-developers] Java Prometheus Exporter for Traffic Metrics

2023-10-20 Thread Matthias Rampke
Hi,

I think this discussion is better suited for the -users mailing list,
moving it there.

Metric systems like Prometheus offer you a specific tradeoff: they allow
you to count a *large number* of events across a *limited* set of
dimensions. Fundamentally, a metric system tracks a *number* for each
combination of dimension values, and incrementing that number is very
cheap. This breaks down if you have too many dimensions, because you end up
with a huge number of series that each change only infrequently. In
practice, this means metric systems are not suited to tracking *all* the
metadata of an API request, and you will need to remove any dimension that
varies a lot from your labels. See this document for some recommendations:
https://prometheus.io/docs/practices/instrumentation/#do-not-overuse-labels

To give a concrete example, metrics are not well suited to tracking API
latency "by customer" (I assume this is what you mean by consumer name?).
You can use Prometheus to track overall latency and break it down by a few
low-cardinality dimensions such as the status code. For in-depth
breakdowns, record the events (logs) into a separate system that is
designed for this, such as a logging system. These have other tradeoffs,
notably that querying them is a lot slower and more expensive; typically
you would use metrics to tell you *that* there is a problem, and logs to
tell you *what* the problem is once you start narrowing it down. I hope
understanding these tradeoffs will help you design a viable observability
stack for your requirements :)

/Matthias

On Thu, Oct 19, 2023 at 2:47 AM 'Sidath Weerasinghe' via Prometheus
Developers  wrote:

> Hi Fabian and team,
>
> I'm using the API name, version, consumer name, response latency, status
> code, and other metadata as histogram labels. With live traffic, those
> labels take on many different values, and a lot of histogram child objects
> are created.
> That causes the JVM to OOM.
> I would like to know the best and recommended way to export those
> values.
>
>
> On Tuesday, 17 October 2023 at 17:16:52 UTC+5:30 Fabian Stäber wrote:
>
>> Hi Sidath,
>>
>> histograms have a limited number of buckets, they should not grow
>> indefinitely.
>>
>> The reason for your OOM might be "cardinality explosion": Maybe you
>> generate more and more different label values, each set of label values
>> adds a new histogram.
>>
>> If this is not the case, and you see increasing memory usage with a fixed
>> set of histograms, please open an issue on
>> https://github.com/prometheus/client_java, ideally with a way to
>> reproduce this.
>>
>> Fabian
>>
>> On Tue, Oct 17, 2023 at 1:31 PM 'Sidath Weerasinghe' via Prometheus
>> Developers  wrote:
>>
>>> Hi Team,
>>>
>>> I have written a custom Java Prometheus exporter to export API traffic
>>> details such as API name, version, consumer name, response latency, status
>>> code, and other metadata. For this, I have used counters and histograms.
>>> With heavy traffic in production, I'm getting OOMs on the client side
>>> because of the huge total size of the histogram objects.
>>> Prometheus pulls the data from the client every 3s.
>>>
>>> Do you have any other solution for this?
>>>
>>>
>>> Thank you
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Prometheus Developers" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to prometheus-devel...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/prometheus-developers/76bc4885-181c-46eb-9a9c-03c466607f21n%40googlegroups.com
>>> 
>>> .
>>>



Re: [prometheus-developers] [feature/proposal] Changing alert fingerprint calculation in prometheus/common

2023-06-23 Thread Matthias Rampke
For a very long time, Prometheus did not store alert state across restarts,
so the alert startsAt would update even though the condition had not
changed.

I don't think we ever considered this time to be very meaningful or stable,
partially due to the originally stateless implementation, but also due to
the HA synchronization issue you mentioned.

Can you explain more about the scenario where the current label-based
identity doesn't work? If I am reading it right, this is the first time
someone has asked for alerts to be *more* responsive to flapping; more
typically the desire is to reduce that, identifying successive alerts as
the same thing even if the alert condition didn't hold for a short
period of time.

/MR

On Tue, 20 Jun 2023, 15:14 'George Robinson' via Prometheus Developers, <
prometheus-developers@googlegroups.com> wrote:

> In prometheus/common the fingerprint of an alert is calculated as an
> fnv64a hash of its labels. The labels are first sorted, and then the label
> name, a separator, the label value, and another separator are added to the
> hash for each label before the final sum is calculated.
>
> I noticed that something missing from the fingerprint is the alert's
> StartsAt time. You could argue that an alert with labels a₁, a₂, …, aₙ that
> started at time t₁ and then resolved at time t₂ is a different alert than
> one also with labels a₁, a₂, …, aₙ that started at time t₃ - and so these two
> alerts should have different fingerprints.
>
> The fact that the fingerprint is constant over its labels has proven
> interesting while debugging cases of flapping alerts in Alertmanager.
>
> However, while I would like to add StartsAt to the fingerprint, I am
> concerned that adding the StartsAt timestamp to the fingerprint will break
> Prometheus rules when run in HA as I do not believe the StartsAt time is
> synchronised across rulers.
>
> I was wondering if there is some historical context for this? Perhaps the
> reasons mentioned above, but there could be others that I am also unaware
> of?
>
> Best regards
>
> George
>
>



Re: [prometheus-developers] [feature/proposal] Amazon EFA collector

2023-04-16 Thread Matthias Rampke
To clarify, you are asking about adding this to the node exporter?

I am torn between "this seems very specific" and "I guess it won't hurt
anyone who doesn't need it".

IMO adding support to the procfs package makes sense, whether it's then
consumed by node exporter or a more specific one.

/MR

On Tue, 28 Mar 2023, 18:44 Perif,  wrote:

> Hi,
>
> We wrote a collector for Amazon EFA, which is a
> high-speed network interface similar to Infiniband.
>
> This interface is used for tightly coupled applications in HPC (WRF, Ansys
> Fluent, Gromacs...) and distributed ML (think LLMs like BLOOM, OPT... or
> Diffusion based models like Stable diffusion). The metrics are used for
> optimization and troubleshooting of these computational workloads. The
> collector we wrote is based on the one used for Infiniband and involved
> changes to procfs as well, since EFA metrics are exposed similarly.
>
> *Would the team be open to us creating a PR to add a new collector for
> this network interface?*
>
> Thanks,
>
> Perif
>



Re: [prometheus-developers] Reconsider marshalling secrets in Prometheus libraries

2023-02-15 Thread Matthias Rampke
I agree that this should be possible.

My first intuition was to make this a separate API call, like
UnsafeMarshalYAML but I am not sure how well that would play with the YAML
infrastructure in Go? Maybe we could have a helper (.MarshalSecrets()) that
returns the struct with wrapped/aliased types that have a different
marshaling function?

What would the build-time option look like? How could a use case that
requires both options (say, show the safe version on a status page, write
the unsafe version to disk) work?

/MR

On Wed, Feb 15, 2023 at 11:10 AM Julien Pivotto 
wrote:

> Dear Prometheus developers,
>
> I'd like to request that we reconsider our policy regarding the marshalling
> of secrets in Prometheus libraries.
>
> Currently, our policy is to never marshal secrets back in clear text.
> When you marshal a secret, it is displayed as <secret>.
>
> However, I would like to suggest that we introduce some sort
> of code API that would enable library users to marshal such secrets
> programmatically, to generate Prometheus configurations from code.
>
> This issue has been brought up on several occasions, as you may be aware
> from the following links:
>
> https://github.com/prometheus/alertmanager/pull/1804
> https://github.com/prometheus/alertmanager/issues/1985
> https://github.com/prometheus/common/pull/259
>
> It was argued in the past that since common and types are an internal
> library, we should not be concerned with marshalling secrets. However, I
> believe that we have agreed to make Prometheus libraries more usable in
> the field. Therefore, I think it is time to introduce a flag in the
> library to marshal secrets in clear text.
>
> As for the implementation, I do not have a strong opinion on whether
> this should be a build-time flag or a runtime change. However, I do
> believe that a build-time flag might be a bit safer, although it
> adds more complexity for library users.
>
> Thanks.
>
>
> --
> Julien Pivotto
> @roidelapluie
>



Re: [prometheus-developers] How Prometheus uses go-restful ?

2023-02-11 Thread Matthias Rampke
I couldn't find any other reference except for the "// indirect" dependency
in go.mod. I think that line means we are forcing a newer but compatible
version of go-restful than we would naturally get through the kube client
dependency? I'm not very fluent in Go modules though, so if anyone else
knows…?

/MR

On Fri, 10 Feb 2023, 23:03 Jian Xue,  wrote:

> Thanks Matthias for the input, yes, I was expecting K8S client libraries
> would need go-restful, but if you look at the go-restful versions client-go
> and kube-openapi depend on, they are different from the one Prometheus uses,
> which is v2.16.0. It looks like Prometheus pulls it in somewhere under the
> hood, but I could not figure out that dependency chain yet.
>
> As to the trigger of this question, yeah, Tristan is right, a couple of
> vulnerabilities have been reported on go-restful, and I want to know whether
> Prometheus is affected or not.
>
> Thanks
>
> BRs
> /Gavin
>
>
>
> On 11 Feb 2023, at 02:10, Tristan Colgate  wrote:
>
> 
> This is probably due to GitHub dependabot currently flagging a security
> issue with go-restful (I hit the same issue yesterday).
>
> On Fri, 10 Feb 2023 at 09:13, Matthias Rampke 
> wrote:
>
>> You are on the right track with go mod graph: go-restful is a dependency
>> of k8s.io/client-go and k8s.io/kube-openapi, so colloquially "the
>> Kubernetes client library". Prometheus uses it for service discovery,
>> fetching information about pods, endpoints, and services. From a cursory
>> look through github.com/kubernetes,
>> it seems that it is only actually *called* on the kube-apiserver side,
>> so Prometheus should not encounter any of it, but don't take my word for it.
>>
>> I am curious now, can you share why you are interested in go-restful? 
>>
>> Best,
>> Matthias
>>
>>
>>
>> On Fri, Feb 10, 2023 at 9:59 AM Gavin  wrote:
>>
>>> Hello Prometheus team,
>>>
>>> May I have a question about how Prometheus uses go-restful ?
>>>
>>> We are using Prometheus 2.38.0 and from the binary, we can see
>>> go-restful is compiled.
>>>
>>> $ go version -m prometheus | grep go-restful
>>> dep github.com/emicklei/go-restful
>>> v2.16.0+incompatible h1:rgqiKNjTnFQA6kkhFe16D8epTksy9HQ1MyrbDXSdYhM=
>>>
>>> I did grep on the Prometheus source code, and failed to find where
>>> go-restful is invoked; 'go mod why', 'go mod graph' and 'go list' don't
>>> help much either.
>>>
>>> prometheus $ [v2.38.0] $ go mod graph | grep go-restful
>>>
>>> github.com/prometheus/prometheus
>>> github.com/emicklei/go-restful@v2.16.0+incompatible
>>>
>>> k8s.io/client-go@v0.24.3
>>> github.com/emicklei/go-restful@v2.9.5+incompatible
>>>
>>> k8s.io/kube-openapi@v0.0.0-20220328201542-3ee0da9b0b42
>>> github.com/emicklei/go-restful@v0.0.0-20170410110728-ff4f55a20633

Re: [prometheus-developers] How Prometheus uses go-restful ?

2023-02-10 Thread Matthias Rampke
You are on the right track with go mod graph: go-restful is a dependency of
k8s.io/client-go and k8s.io/kube-openapi, so colloquially "the Kubernetes
client library". Prometheus uses it for service discovery, fetching
information about pods, endpoints, and services. From a cursory look
through github.com/kubernetes,
it seems that it is only actually *called* on the kube-apiserver side, so
Prometheus should not encounter any of it, but don't take my word for it.

I am curious now, can you share why you are interested in go-restful? 

Best,
Matthias



On Fri, Feb 10, 2023 at 9:59 AM Gavin  wrote:

> Hello Prometheus team,
>
> May I have a question about how Prometheus uses go-restful ?
>
> We are using Prometheus 2.38.0 and from the binary, we can see go-restful
> is compiled.
>
> $ go version -m prometheus | grep go-restful
> dep github.com/emicklei/go-restful
> v2.16.0+incompatible h1:rgqiKNjTnFQA6kkhFe16D8epTksy9HQ1MyrbDXSdYhM=
>
> I did grep on the Prometheus source code, and failed to find where go-restful
> is invoked; 'go mod why', 'go mod graph' and 'go list' don't help much
> either.
>
> prometheus $ [v2.38.0] $ go mod graph | grep go-restful
>
> github.com/prometheus/prometheus
> github.com/emicklei/go-restful@v2.16.0+incompatible
>
> k8s.io/client-go@v0.24.3
> github.com/emicklei/go-restful@v2.9.5+incompatible
>
> k8s.io/kube-openapi@v0.0.0-20220328201542-3ee0da9b0b42
> github.com/emicklei/go-restful@v0.0.0-20170410110728-ff4f55a20633
>
> It would be highly appreciated if you could pinpoint why/where go-restful
> is used.
>
> Thanks!
>
> BRs
>
> /Gavin
>



Re: [prometheus-developers] Should Alertmanager be more tolerant of templating errors?

2023-02-09 Thread Matthias Rampke
I agree that silently sending *no* alert is the worst possible outcome. I
wonder what would be "nicer" in case a template fails - send the alert with
the fields that did not fail to render (possibly render the error *into*
the fields that failed to make it very obvious?), or (as proposed) fall
back to a "safe" template?

/MR

On Thu, Feb 9, 2023 at 6:44 PM Bjoern Rabenstein  wrote:

> On 07.02.23 05:57, 'George Robinson' via Prometheus Developers wrote:
> >
> > While I appreciate the responsibility of writing correct templates is on
> > the user, I have also been considering whether Alertmanager should be
> more
> > tolerant of template errors, and attempt to send some kind of
> notification
> > when this happens. For example, falling back to the default template
> that
> > we have high confidence of being correct.
>
> I think that makes sense. The fall-back template could call out very
> explicitly that the intended template failed to expand and therefore
> you get a replacement, maybe even with the error message of the
> attempt to expand the original template.
>
> But I'm not really an Alertmanager experts. And despite having a lot
> of historical context about Prometheus in general, I don't remember
> anything specific about error handling in alert templates.
>
> I only remember that trying out an alert "in production" is really
> hard since you need to trigger it. And if the moment you notice that
> your template doesn't work is also the moment when your alert is
> supposed to fire, that's really bad.
>
> So better test tooling might help here, but even if we had that, I
> think there should be a safe fall-back so that no alert is ever
> swallowed because of a templating error.
>
> --
> Björn Rabenstein
> [PGP-ID] 0x851C3DA17D748D03
> [email] bjo...@rabenste.in
>



[prometheus-developers] macOS DNS resolving change in Go 1.20

2023-02-03 Thread Matthias Rampke
Will this affect how DNS SD behaves on macOS?

https://danp.net/posts/macos-dns-change-in-go-1-20/



[prometheus-developers] FYI: CircleCI setup_remote_docker architecture change

2023-01-13 Thread Matthias Rampke
Hey all,

The influxdb_exporter master build is currently broken, even though it uses
the same configuration as other projects, and the build configuration did
not change when it broke. From the difference in build steps, I believe this
is due to the change in setup_remote_docker that CircleCI is rolling out.

I filed a support ticket with CircleCI. I'll let you know what comes out of
that.

/MR



Re: [prometheus-developers] Why do Info metrics have two Label Sets?

2022-12-08 Thread Matthias Rampke
I can't speak to the original thinking, but I can speculate. In text-format
info metrics, we conflate two things into labels: the identification of the
thing we are informing about, and the information itself.

For example, an info metric about hard disks would have a label that
identifies the disk (with more target labels added later), and "labels"
that hold the string valued information like the firmware version.

It gets more extreme with e.g. kube_pod_labels, where the set of info
labels itself is dynamic. At least in the past, kube-state-metrics had to
hack open the Prometheus client library to subvert label consistency checks
so that it could do this, since the set of (info) label keys is unknowable
ex ante.

By splitting the labels from the info like this, you can make assertions
about consistency within the *label* set without constraining the *info*
set too much.

/MR

On Wed, 7 Dec 2022, 21:31 'Fabian Stäber' via Prometheus Developers, <
prometheus-developers@googlegroups.com> wrote:

> Hi,
>
> in OpenMetrics, all metrics (metric == time series) are modelled like this:
>
> message Metric {
> repeated Label labels = 1;
> repeated MetricPoint metric_points = 2;
> }
>
> However, Info metrics have another set of Labels as the value of their
> MetricPoint:
>
> message InfoValue {
> repeated Label info = 1;
> }
>
> I first thought that might be a mistake, but then I found this cryptic
> statement in the spec:
>
> A MetricPoint of an Info Metric contains a LabelSet. An Info MetricPoint's
> LabelSet MUST NOT have a label name which is the same as the name of a
> label of the LabelSet of its Metric.
>
> I'm curious why Info metrics are modelled that way. Is that something we
> should simplify?
>
> Fabian
>



Re: [prometheus-developers] Changing consensus on HTTP headers

2022-12-07 Thread Matthias Rampke
In general, what is a foot gun to me can be a rocket shoe to you, so I am
in favor of providing them to those who require them, with clear labeling
of the dangers.

Specifically in this case, it has become more common ("beyond corp", "zero
trust") to use HTTPS over the public internet, combined with additional
layers of authentication. Always requiring users to layer even more proxies
on top is a drag on them and on Prometheus, so I am all in favor of this.

I had to read up on the semantics of specifying a header multiple times;
is this something to call out in the documentation?

/MR

On Wed, Dec 7, 2022 at 12:16 AM Bjoern Rabenstein 
wrote:

> On 06.12.22 23:15, Julien Pivotto wrote:
> >
> > https://github.com/prometheus/prometheus/issues/1724
> >
> > Quoting Brian in 2016:
> > > The question here is how complex do we want to allow scraping protocol
> > > to be, and how complex a knot are we willing to let users tie
> themselves
> > > in via the core configuration? Are we okay with making it easy for a
> > > scrape not to be quickly testable via a browser? At some point we have
> > > to tell users to use a proxy server to handle the more obscure use
> > > cases, rather than drawing their complexity into Prometheus.
> > >
> > > As far as I'm aware the use case here relates to a custom auth solution
> > > with a non-recommended network setup. It's not unlikely that the next
> > > request in this vein would be to make these relabelable, and as this is
> > > an auth-related request, per discussion on #1176 we're not going to do
> > > that. I think we'd need a stronger use case to justify adding this
> > > complexity.
> >
> > I do think that Brian's comments on authorization and security are still
> > valid, and I don't plan to add headers support to relabeling - such as I
> > don't plan to add relabeling for basic auth and other autorisation
> > methods.
>
> Thank you very much. Yes, this all makes sense. I.e. no plans for
> support via relabeling, but allow users to do their special thing in
> special cases via the config, even if that also opens up the
> possibility to build a foot gun. (BTW, I'm a fan of clearly
> documenting the dragons, so don't just add the config option, but put
> a warning sign next it describing the typical pitfalls, like creating
> metric endpoints that are inaccessible to browsers.)
>
> --
> Björn Rabenstein
> [PGP-ID] 0x851C3DA17D748D03
> [email] bjo...@rabenste.in
>



Re: [prometheus-developers] Gauge Exemplars

2022-12-07 Thread Matthias Rampke
At scrape time, how would I know which method was used? "always the total
value of the gauge" seems like the least surprising choice to me. If users
need to track separate exemplars for increment and decrement, they could
use two counters (basically, a non-native UpDownCounter), which
conceptually and physically preserves that the final value is the result of
repeated additions and removals.

/MR

On Wed, Dec 7, 2022 at 7:58 AM 'Fabian Stäber' via Prometheus Developers <
prometheus-developers@googlegroups.com> wrote:

> Thanks Julien. I guess there are examples for both, having Exemplars with
> the delta as well as the absolute value.
>
> Example 1) Number of Bytes in a queue. You would call myGauge.inc(nBytes)
> if a message is added to the queue, and myGauge.dec(nBytes) if a message is
> fetched from the queue. If adding / fetching is done via REST API, it would
> make sense to have Exemplars for the add/fetch operations.
>
> Example 2) An IoT device reaching out to a Web hook for reporting
> temperature. To track the current temperature, you'd call
> myGauge.set(temperature). If the IoT device is capable of adding trace
> headers when calling the Web hook it would be good to have Exemplars of
> these calls.
>
> The Java library supports automatic Exemplars, i.e. users don't call
> explicit API to add Exemplars, but the library figures out if a current
> trace context exists and samples Exemplars automatically under the hood.
> For that, we need a default behavior. So what would a good default be?
> Always the total value of the Gauge, or always the delta, or make it depend
> on whether inc(), dec(), or set() was called?
>
> Fabian
>
>
> On Tue, Dec 6, 2022 at 11:18 PM Julien Pivotto 
> wrote:
>
>> On 06 Dec 23:06, 'Fabian Stäber' via Prometheus Developers wrote:
>> > Hi,
>> >
>> > I'm experimenting with Exemplars for Gauge metrics in client_java
>> > (background: at the dev summit on 10 November 2022
>> > <
>> https://docs.google.com/document/d/11LC3wJcVk00l8w5P3oLQ-m3Y37iom6INAMEu2ZAGIIE/edit
>> >
>> > we decided that "Prometheus will ingest Exemplars on all time series").
>> >
>> > For comparison: A counter MUST have the following methods:
>> >
>> >- inc(): Increment the counter by 1
>> >- inc(double v): Increment the counter by the given amount. MUST
>> check
>> >that v >= 0.
>> >
>> > Exemplars will contain the increment, i.e. if inc() is called the
>> Exemplar
>> > will have the value 1.0, if inc(v) is called the Exemplar will have the
>> > value v.
>> >
>> > Now, a gauge MUST have the following methods:
>> >
>> >- inc(): Increment the gauge by 1
>> >- inc(double v): Increment the gauge by the given amount
>> >- dec(): Decrement the gauge by 1
>> >- dec(double v): Decrement the gauge by the given amount
>> >- set(double v): Set the gauge to the given value
>> >
>> > Which value should we choose for Gauge Exemplars?
>>
>> That is a good question, and my guess would be that it will depend on the
>> use case. Do you have examples of gauges you would attach exemplars
>> to?
>>
>> --
>> Julien Pivotto
>> @roidelapluie
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Prometheus Developers" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to prometheus-developers+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/prometheus-developers/CAPX310ipsufix2s4_P4PXOvzGr4MFDbHaaBmmJZXpnW-k%2Bas6g%40mail.gmail.com.
>



Re: [prometheus-developers] [VOTE] Promote Windows Exporter as an official exporter

2022-12-05 Thread Matthias Rampke
YES

On Mon, Dec 5, 2022 at 10:44 AM Julien Pivotto 
wrote:

> Dear Prometheans,
>
> As per our governance [1], "any matter that needs a decision [...] may
> be called to a vote by any member if they deem it necessary."
>
> I am therefore calling a vote to promote Prometheus-community's Windows
> Exporter [2] to Prometheus GitHub org, to make it an official exporter.
>
> Official exporters are exporters under the Prometheus github org, listed
> as official on Prometheus.io and available under the Downloads page.
>
> This would provide recognition and credibility to the exporter and its
> contributors, who have put in a large amount of work over the last few
> years and built a huge community.
>
> It would make it easier for users to find and use the exporter, as it
> would be listed on the Prometheus website and promoted on the other
> official channels - such as our announce mailing list.
>
> Anyone interested is encouraged to participate in this vote and this
> discussion. As per our governance, only votes from the team members will
> be counted.
>
> Vote is open for 1 week - until December 12.
>
> [1] https://prometheus.io/governance/
> [2] https://github.com/prometheus-community/windows_exporter
>
> --
> Julien Pivotto
> @roidelapluie
>



Re: [prometheus-developers] Ingesting OTLP

2022-11-13 Thread Matthias Rampke
Hmm, my assumption was that ingesting OTLP would be equivalent to running
the OTel Collector + remote write, but without the collector in the middle.
That implies using the same mappings as specified in the OTel spec /
implemented in the collector.

This is something that can happen relatively quickly.

On the other hand, there are a *lot* of open questions about extending the
metric naming. Aside from working out how to represent this in PromQL
without breaking compatibility, we *also* need to think about managing the
transition for existing users. That includes users of the OTel collector. I
see that adding another place where the translation happens, right in
Prometheus, adds to the complexity of that transition, but I don't think it
makes it *fundamentally* harder, since we need a solution for the
collector-remote write setup anyway.

Since "ingesting OTLP into Prometheus" is an existing problem with an
existing (albeit suboptimal) solution, I would prefer not to put one
tangible improvement on that solution on hold with no timeline for figuring
out the other, much more complex one.

/MR

On Thu, Nov 10, 2022 at 9:19 PM Julien Pivotto 
wrote:

> Hello,
>
> I have seen the dev summit notes.
>
> I would like to create a dependency between:
>
> > Goutham: Reconsider OTLP Ingest
>
> and
>
> > Goutham: . (dots) and slashes in metric and label names.
>
>
> I think we should FIRST address the special characters in the metric
> names and label names before ingesting OTLP.
>
> That way, when we implement the OTLP feature flag, we have a good user
> experience since the start, and we don't need to change it later on,
> confusing early adopters.
>
> Regards
>
> --
> Julien Pivotto
> @roidelapluie
>



Re: [prometheus-developers] Would tooling for PromQL formatting/manipulation be useful and where should it live?

2022-10-05 Thread Matthias Rampke
Re: drawing the line – I often feel like "specialized" tools that try to
solve all advanced use cases end up with very complex and hard to use
configuration (looking at you, relabeling). I often find it more pleasant
to express what I want to do as *code*. What would an API look like that
you or others could use to build their own tools for their specific needs,
without making it part of any particular CLI?
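Purely as a strawman (nothing like this exists in the Prometheus codebase; all names here are made up), such an API might let users express a transformation — say, adding a label matcher to every vector selector — as ordinary code over a small AST:

```python
from dataclasses import dataclass, field


@dataclass
class VectorSelector:
    """Toy stand-in for a PromQL vector selector node."""
    name: str
    matchers: dict = field(default_factory=dict)


@dataclass
class BinaryExpr:
    """Toy stand-in for a PromQL binary expression node."""
    op: str
    lhs: object
    rhs: object


def add_matcher(node, label, value):
    """Walk the (toy) AST and attach a label matcher to every selector."""
    if isinstance(node, VectorSelector):
        node.matchers[label] = value
    elif isinstance(node, BinaryExpr):
        add_matcher(node.lhs, label, value)
        add_matcher(node.rhs, label, value)
    return node


# errors_total / requests_total, scoped to one environment:
expr = BinaryExpr("/", VectorSelector("errors_total"),
                  VectorSelector("requests_total"))
add_matcher(expr, "env", "prod")
print(expr.lhs.matchers)  # {'env': 'prod'}
```

The point is the shape of the interface — a walkable tree plus small composable helpers — rather than any particular CLI surface.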

In general, is there a mock-up or reference of the interface you propose?

/MR

On Wed, Oct 5, 2022 at 4:06 PM r...@chronosphere.io 
wrote:

> Yes I realized that to manipulate the AST (and the AST will of course
> change as new functions and features are added) much like codemirror-promql
> moved into the Prometheus repository to get updates as they come to PromQL
> that somewhere in the Prometheus repo itself would be a good starting point.
>
> How would you all feel of adding the commands under a "--experimental"
> flag as David suggested? I'd be happy to make the "--experimental" flag
> addition too David if you like, also happy to wait too until that's
> available if that's preferential.
>
>
> On Wednesday, October 5, 2022 at 5:55:58 AM UTC-4 Julius Volz wrote:
>
>> The versioning aspect is a good point, I hadn't thought of that.
>>
>> If we make promtool's scope broader than what I proposed, it's IMO still
>> a question of where we draw the line in terms of niche specialized use
>> cases. The proposed features in
>> https://github.com/prometheus/prometheus/pull/11411 are kind of
>> borderline to me in that regard, but I also wouldn't be unhappy if they
>> went into promtool.
>>
>> On Wed, Oct 5, 2022 at 11:25 AM Julien Pivotto 
>> wrote:
>>
>>> I think the opposite - Prometheus contains PromQL, it's same codebase,
>>> same version. It makes sense to have those tools in promtool as well, so
>>> it is shipped to everyone, and has a known version.
>>>
>>> On 05 Oct 11:22, Julius Volz wrote:
>>> > I do feel that formatting entire rule files would be in scope for
>>> promtool,
>>> > but more specialized formatting and manipulations of individual PromQL
>>> > queries (while cool) should likely live in a separate tool. I see the
>>> scope
>>> > of promtool to be mostly a tool to interact with the Prometheus
>>> > server, its immediate configuration files, and its TSDB directory.
>>> >
>>> > On Wed, Oct 5, 2022 at 11:13 AM David Leadbeater  wrote:
>>> >
>>> > > Hi Rob,
>>> > >
>>> > > I wonder if PromQL related things fit in promtool given the use for
>>> > > PromQL is wider than just Prometheus. I can imagine something like a
>>> > > "promqltool", which might actually be backed by the promql language
>>> > > server (so people can get similar things in editors too).
>>> > >
>>> > > However that's clearly a larger discussion, I don't see an issue with
>>> > > adding some promql subcommands to promtool for now, particularly as
>>> > > the formatting one exercises the code in Prometheus and is useful for
>>> > > developers anyway.
>>> > >
>>> > > I do think it's important to get the interface right, while we don't
>>> > > guarantee complete stability in promtool, it is difficult to change
>>> > > without breaking people. To that end I'm thinking of adding a top
>>> > > level "--experimental" flag in promtool, which can then enable the
>>> > > promql subcommands. (We do have feature flags in promtool, but that
>>> > > feels wrong here, as feature flags are currently shared with
>>> > > prometheus.)
>>> > >
>>> > > David
>>> > >
>>> > > On Wed, 5 Oct 2022 at 07:58, Rob Skillington 
>>> wrote:
>>> > > >
>>> > > > Hey Prometheus team,
>>> > > >
>>> > > > Have noticed asks for tooling around reformatting/manipulating and
>>> > > generally refactoring sets of queries and rule definitions (where
>>> there is
>>> > > a high number of defined queries). Use cases include scenarios like
>>> > > "I want to duplicate a set of alerts to target different environments
>>> > > with different label combinations and also conditions".
>>> > > >
>>> > > > I opened a PR to add some basic commands given I had seen this
>>> earlier
>>> > > PR mention that there was intention for the PromQL AST pretty print
>>> > > formatting to be useable from promtool:
>>> > > > https://github.com/prometheus/prometheus/pull/10544
>>> > > >
>>> > > > I now realize it may have been better perhaps to raise the
>>> question of
>>> > > if/where it should live here before opening the PR. What would be the
>>> > > reception of housing these commands in promtool and/or if not there
>>> then
>>> > > where a good recommended place would be for these to live do people
>>> think?
>>> > > >
>>> > > > PR in question:
>>> > > > https://github.com/prometheus/prometheus/pull/11411
>>> > > >
>>> > > > Best,
>>> > > > Rob
>>> > > >
>>> > > >
>>> > > >

[prometheus-developers] Re: Governance Working Group

2022-09-28 Thread Matthias Rampke
There is now a mailing list for this effort:

https://groups.google.com/a/prometheus.io/g/governance-wg

Please request to join if you want to be part of it – and fill in the
Doodle! I will nail down a date later this week.

Cheers,
MR

On Tue, Sep 13, 2022 at 4:05 PM Matthias Rampke 
wrote:

> Hi,
>
> At the in-person Dev Summit in May there was a lot of interest in updating
> the Prometheus project governance <https://prometheus.io/governance>.
>
> I was volunteered to organize this discussion. To that end, I want to kick
> off a working group. If you are interested in the topic, please fill in this
> Doodle <https://doodle.com/meeting/participate/id/enrJkqDe> even if none
> of the dates work for you.
>
> A mailing list will follow shortly, I am still finalizing the setup for
> that.
>
> To keep track of the progress I have created this document
> <https://docs.google.com/document/d/1jVuMcf2uChxhcaksz2fQ-DNwjJOGelQVJm-VPlCJFm8/edit#>
> .
>
> Cheers,
> MR
>



[prometheus-developers] Governance Working Group

2022-09-13 Thread Matthias Rampke
Hi,

At the in-person Dev Summit in May there was a lot of interest in updating
the Prometheus project governance.

I was volunteered to organize this discussion. To that end, I want to kick
off a working group. If you are interested in the topic, please fill in this
Doodle even if none of the dates work for you.

A mailing list will follow shortly, I am still finalizing the setup for
that.

To keep track of the progress I have created this document.

Cheers,
MR



Re: [prometheus-developers] Return specific value if label regex not match

2022-08-12 Thread Matthias Rampke
Hi, this mailing list is for development of Prometheus and related
projects. Since your question is about usage, I'm moving the thread to the
prometheus-users mailing list.

To answer your question, in general a regular expression can have an
unbounded number of matches, so Prometheus cannot automatically determine
from the matcher alone that name2 should be there.

You can set up recording rules with all the names you expect to be there:

- record: probe_success:expected_name
  expr: 1
  labels:
    name: name1
- record: probe_success:expected_name
  expr: 1
  labels:
    name: name2
- record: probe_success:expected_name
  expr: 1
  labels:
    name: name3

and then use it in your query like

probe_success{name=~"name1|name2|name3"} or -1*probe_success:expected_name

I am using the value 1 for this metric because it is customary to do that
for "metadata metrics" like this – you can multiply it with the desired
value in the query like I did here.
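The effect of the `or` fallback above can be sketched outside PromQL; this toy function (names and values are illustrative, not the PromQL implementation) mirrors "use the real sample if present, otherwise the default":

```python
def with_fallback(actual, expected_names, default=-1):
    """Mimic `probe_success{...} or -1 * probe_success:expected_name`:
    for every expected name, keep the real sample if one exists,
    otherwise fall back to the default value."""
    return {name: actual.get(name, default) for name in expected_names}


# name2 produced no sample at this evaluation:
samples = {"name1": 1.0, "name3": 0.0}
result = with_fallback(samples, ["name1", "name2", "name3"])
print(result)  # {'name1': 1.0, 'name2': -1, 'name3': 0.0}
```

The `expected_name` recording rules play the role of `expected_names` here: they enumerate the series that must exist so the `or` branch has something to fall back to.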

Another thing about your query – you are matching __name__ but that is a
special label representing the metric name. Since your query specifies
probe_success as the metric name, the two are in conflict.

/MR



On Fri, Aug 12, 2022 at 8:35 AM Simon  wrote:

> Hello everyone,
> I have a query: probe_success{__name__=~"name1|name2|name3"}.
> Prometheus does not have the label __name__ = name2, and I want it to
> return -1 if Prometheus does not have that label value.
> How can I do that?
>



[prometheus-developers] No Dev Summit today and in August

2022-07-28 Thread Matthias Rampke
Hello,

because too many people are out for the summer, today's Dev Summit
unfortunately won't be happening. Unless unexpected things happen, this is
also true for the one in August.

Dev Summit will be back in September – see you then!

/MR



Re: [prometheus-developers] Understanding the structure of node_exporter

2022-07-28 Thread Matthias Rampke
Not all collectors are available on all platforms – in many cases, due to
the platform dependent code, they won't even compile. This structure allows
us to selectively compile the various collector files, and those that are
getting compiled register themselves.

Additionally, this structure allows third party software to re-use the
collectors, for example as part of a bundled "one binary to rule them all"
style uber-exporter.
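In node_exporter this happens in Go via `init()`-time registration in per-collector files selected by build tags; the pattern itself can be sketched in a few lines (illustrative names only, not the actual node_exporter API):

```python
collectors = {}  # name -> factory, filled in by each collector "file"


def register_collector(name):
    """Each collector module registers itself at import time,
    mirroring node_exporter's init()-based registration. Modules
    that don't compile on a platform simply never register."""
    def wrap(factory):
        collectors[name] = factory
        return factory
    return wrap


# In a real exporter each of these would live in its own file,
# compiled only on the platforms that support it (build tags in Go).
@register_collector("cpu")
def cpu_collector():
    return {"node_cpu_seconds_total": 1234.5}


@register_collector("filesystem")
def filesystem_collector():
    return {"node_filesystem_avail_bytes": 9.8e9}


def collect_all():
    """The generic collector iterates over whatever registered itself."""
    metrics = {}
    for factory in collectors.values():
        metrics.update(factory())
    return metrics


print(sorted(collect_all()))
```

The generic `collector.go` plays the role of `collect_all` here: it never needs to know the full collector list at compile time, which is what makes selective compilation and third-party reuse possible.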

/MR

On Wed, Jul 27, 2022, 10:18 Siddhant Gupta 
wrote:

> I am reading the code of node_exporter and I could not understand why do
> we have a generic collector.go that maintains all the various collectors
> and registers a generic Collector interface with prometheus?
>
> Couldn't we register all the collectors directly with Prometheus?
>



Re: [prometheus-developers] Quick Question: do tests fail on macos?

2022-05-06 Thread Matthias Rampke
Hi,

I'm trying this out – I can't reproduce the zookeeper failure (on main as
of today) but I ran into the same problem on the TSDB test.

I think what's happening is that the test actually takes too long. The test
loops 20 times, and in each loop writes out 20 blocks. I don't know what
macOS is doing there; it takes >1s on my (not old, not new) MacBook to write
one block.

I don't know if there is a more efficient way to run this test on macOS (or
a way to speed up tsdb on macOS in general?), but in the meantime I think we
can avoid hitting the timeout.

/MR

On Fri, Apr 22, 2022 at 8:16 PM Mayur R  wrote:

>
> *make build succeeds then I did following*
> MacBook-Pro-2:prometheus mraleras$ *make test*
> >> running all tests
> GO111MODULE=on go test -race  ./...
> ok  github.com/prometheus/prometheus/cmd/prometheus 111.481s
> ok  github.com/prometheus/prometheus/cmd/promtool   130.848s
> ok  github.com/prometheus/prometheus/config 2.780s
> ok  github.com/prometheus/prometheus/discovery  5.684s
> ?   github.com/prometheus/prometheus/discovery/aws  [no test files]
> ok  github.com/prometheus/prometheus/discovery/azure0.515s
> ok  github.com/prometheus/prometheus/discovery/consul   3.499s
> ok  github.com/prometheus/prometheus/discovery/digitalocean 1.938s
> ok  github.com/prometheus/prometheus/discovery/dns  1.441s
> ok  github.com/prometheus/prometheus/discovery/eureka   1.135s
> ok  github.com/prometheus/prometheus/discovery/file 3.167s
> ?   github.com/prometheus/prometheus/discovery/gce  [no test files]
> ok  github.com/prometheus/prometheus/discovery/hetzner  0.650s
> ok  github.com/prometheus/prometheus/discovery/http 1.572s
> ?   github.com/prometheus/prometheus/discovery/install  [no test
> files]
> ok  github.com/prometheus/prometheus/discovery/kubernetes   27.412s
> ok  github.com/prometheus/prometheus/discovery/legacymanager
>  3.735s
> ok  github.com/prometheus/prometheus/discovery/linode   0.618s
> ok  github.com/prometheus/prometheus/discovery/marathon 0.368s
> ok  github.com/prometheus/prometheus/discovery/moby 6.517s
> ok  github.com/prometheus/prometheus/discovery/openstack0.521s
> ok  github.com/prometheus/prometheus/discovery/puppetdb 1.199s
> ok  github.com/prometheus/prometheus/discovery/refresh  0.387s
> ok  github.com/prometheus/prometheus/discovery/scaleway 1.810s
> ok  github.com/prometheus/prometheus/discovery/targetgroup  0.261s
> ok  github.com/prometheus/prometheus/discovery/triton   0.494s
> ok  github.com/prometheus/prometheus/discovery/uyuni0.532s
> ok  github.com/prometheus/prometheus/discovery/xds  2.572s
> --- FAIL: TestNewDiscoveryError (0.07s)
> zookeeper_test.go:35: expected error, got nil
> FAIL
> FAILgithub.com/prometheus/prometheus/discovery/zookeeper0.369s
> ok
> github.com/prometheus/prometheus/documentation/examples/custom-sd/adapter
>   0.591s
> ?
> github.com/prometheus/prometheus/documentation/examples/custom-sd/adapter-usage
> [no test files]
> ?   github.com/prometheus/prometheus/model/exemplar [no test files]
> ok  github.com/prometheus/prometheus/model/labels   0.370s
> ok  github.com/prometheus/prometheus/model/relabel  0.251s
> ok  github.com/prometheus/prometheus/model/rulefmt  1.043s
> ok  github.com/prometheus/prometheus/model/textparse0.395s
> ?   github.com/prometheus/prometheus/model/timestamp[no test
> files]
> ?   github.com/prometheus/prometheus/model/value[no test files]
> ok  github.com/prometheus/prometheus/notifier   0.955s
> ?   github.com/prometheus/prometheus/plugins[no test files]
> ?   github.com/prometheus/prometheus/prompb [no test files]
> ok  github.com/prometheus/prometheus/promql 29.842s
> ok  github.com/prometheus/prometheus/promql/parser  8.617s
> ok  github.com/prometheus/prometheus/rules  44.925s
> ok  github.com/prometheus/prometheus/scrape 20.093s
> ok  github.com/prometheus/prometheus/storage1.526s
> ok  github.com/prometheus/prometheus/storage/remote 14.953s
> ok  github.com/prometheus/prometheus/template   0.438s
> ok  github.com/prometheus/prometheus/tracing0.516s
> level=info msg="Replaying on-disk memory mappable chunks if any"
> level=info msg="On-disk memory mappable chunks replay completed"
> duration=4.315µs
> level=info msg="Replaying WAL, this may take a while"
> level=warn msg="Unknown series references" samples=3803 exemplars=0
> level=info msg="WAL segment loaded" segment=0 maxSegment=1
> level=info msg="WAL segment loaded" segment=1 maxSegment=1
> level=info msg="WAL replay completed" checkpoint_replay_duration=235.938µs
> 

Re: [prometheus-developers] Update Prometheus Readme on docker hub

2022-02-04 Thread Matthias Rampke
Ah, I didn't realize. I would have kept even less information but if this
is not too much to keep up to date, I'm happy! Let's see how things develop
:)

/MR

On Fri, Feb 4, 2022 at 1:19 PM Julien Pivotto 
wrote:

> On 04 Feb 13:12, Matthias Rampke wrote:
> > I propose that we cut most of it, only keep a short paragraph about what
> > Prometheus is, and link to prometheus.io/docs and/or the README for
> > details? That way we only need to update it when things fundamentally
> > change. I don't think people come to Docker Hub to read extended
> > documentation, and following a link is not a burden in that case.
>
>
> That's mostly what I did, did you check the last version?
>
>
> >
> > /MR
> >
> > On Wed, Feb 2, 2022 at 6:27 PM Ben Kochie  wrote:
> >
> > > I looked into this a while back and didn't find a good API / tool to
> > > automatically push updates.
> > >
> > > Maybe things have improved since I last tried.
> > >
> > > On Wed, Feb 2, 2022, 17:01 Julien Pivotto 
> > > wrote:
> > >
> > >>
> > >> Hello,
> > >>
> > >> Someone pointed out on twitter that our Docker Hub readme was not
> > >> useful to run Prometheus in Docker. They also suggested a snippet from
> > >> our documentation to update it.
> > >> https://twitter.com/phil_eaton/status/1488895298239877128
> > >>
> > >> After analyzing the claim, the README of the docker image was very
> > >> outdated. I have therefore applied the user's advice.
> > >>
> > >> We can think further about how we can improve this, but I think it was
> > >> not reasonable to have an X-years-old readme on the Docker Hub.
> > >>
> > >> This is the new page:
> > >>
> > >> https://hub.docker.com/r/prom/prometheus
> > >>
> > >> Regards,
> > >>
> > >>
> > >> --
> > >> Julien Pivotto
> > >> @roidelapluie
> > >>
> >
>
> --
> Julien Pivotto
> @roidelapluie
>



Re: [prometheus-developers] Service Discovery for Oracle Cloud Infrastructure

2022-02-04 Thread Matthias Rampke
Hi Mayur,

Yes, this is the correct document.

Will you be able to support this service discovery mechanism long-term? In
the past we had service discovery mechanisms slowly break due to lack of
attention, and would like to avoid that in the future.

Best,
Matthias

On Thu, Jan 6, 2022 at 10:57 PM Mpr Testing 
wrote:

> Hello Prometheus Team,
> I am with OCI (*Oracle Cloud Infrastructure*). I am reaching out to you on
> behalf of joint OCI and Prometheus customers. These joint customers are
> asking for support for native Service Discovery in Prometheus for their OCI
> resources (similar to the ones listed here)
>
> We at OCI would really appreciate your advice and guidance about the next
> steps in this regard. Is this (
> https://github.com/prometheus/prometheus/tree/main/discovery#writing-an-sd-mechanism)
> the documentation that we can refer to?
>
> Thanks
> Mayur
>



Re: [prometheus-developers] Update Prometheus Readme on docker hub

2022-02-04 Thread Matthias Rampke
I propose that we cut most of it, only keep a short paragraph about what
Prometheus is, and link to prometheus.io/docs and/or the README for
details? That way we only need to update it when things fundamentally
change. I don't think people come to Docker Hub to read extended
documentation, and following a link is not a burden in that case.

/MR

On Wed, Feb 2, 2022 at 6:27 PM Ben Kochie  wrote:

> I looked into this a while back and didn't find a good API / tool to
> automatically push updates.
>
> Maybe things have improved since I last tried.
>
> On Wed, Feb 2, 2022, 17:01 Julien Pivotto 
> wrote:
>
>>
>> Hello,
>>
>> Someone pointed out on twitter that our Docker Hub readme was not
>> useful to run Prometheus in Docker. They also suggested a snippet from
>> our documentation to update it.
>> https://twitter.com/phil_eaton/status/1488895298239877128
>>
>> After analyzing the claim, the README of the docker image was very
>> outdated. I have therefore applied the user's advice.
>>
>> We can think further about how we can improve this, but I think it was not
>> reasonable to have an X-years-old readme on the Docker Hub.
>>
>> This is the new page:
>>
>> https://hub.docker.com/r/prom/prometheus
>>
>> Regards,
>>
>>
>> --
>> Julien Pivotto
>> @roidelapluie
>>



Re: [prometheus-developers] [VOTE] Rename blackbox_exporter to prober

2022-01-20 Thread Matthias Rampke
YES

On Thu, Jan 20, 2022, 14:59 Ben Kochie  wrote:

> YES
>
> On Thu, Jan 20, 2022 at 3:41 PM Julien Pivotto 
> wrote:
>
>> Dear Prometheans,
>>
>> As per our governance, I'd like to cast a vote to rename the Blackbox
>> Exporter to Prober.
>> This vote is based on the following thread:
>>
>> https://groups.google.com/g/prometheus-developers/c/advMjgmJ1E4/m/A0abCsUrBgAJ
>>
>> Any Prometheus team member is eligible to vote, and votes for the
>> community are welcome too, but do not formally count in the result.
>>
>> Here is the content of the vote:
>>
>> > We want to rename Blackbox Exporter to Prober.
>>
>> I explicitly leave the "how" out of this vote. If this vote passes,
>> a specific issue will be created in the blackbox exporter repository
>> explaining how I plan to work and communicate on this change. I will
>> make sure that enough time passes so that as many people as possible can
>> give their input on the "how".
>>
>> The vote is open until February 3rd. If the vote turns out positive before
>> next week's dev summit, the "how" can also be discussed during the dev
>> summit, and I would use that discussion as input for the previously
>> mentioned github issue.
>>
>> --
>> Julien Pivotto
>> @roidelapluie
>>



Re: [prometheus-developers] Add label to mysqld-exporter to show the mysql instance

2022-01-17 Thread Matthias Rampke
This is not supported in the exporter and we have no plans to add it. Most
exporters use a different approach, which we recommend for exporters in
general.

Deploy the exporter as a sidecar alongside the MySQL instance. In Kubernetes,
this means an additional container in the MySQL pod. This solves your
problem by making each MySQL+exporter pod its own scrape target with its
own instance label.

For most exporters, the way to think about them is not as a separate
service that somehow interacts with what it is translating for (in this
case, MySQL). Rather, see the exporter as an out-of-process plugin, paired
1:1 with each MySQL process. For the purposes of monitoring, they are one
unit, and when looking at metrics and alerts you don't need to worry
whether a piece of software supports Prometheus natively or through an
exporter.
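To make the sidecar pattern concrete, a pod might look roughly like the
following sketch. The image tags, the port, and the DATA_SOURCE_NAME handling
are illustrative assumptions, not a tested manifest; adapt them to your setup
and secret management.

```yaml
# Hypothetical sketch: MySQL with mysqld-exporter as a sidecar container.
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  containers:
    - name: mysql
      image: mysql:8.0
    - name: mysqld-exporter
      image: prom/mysqld-exporter
      env:
        - name: DATA_SOURCE_NAME
          # The exporter reaches MySQL over localhost because both
          # containers share the pod's network namespace.
          value: "exporter:password@(localhost:3306)/"
      ports:
        - name: metrics
          containerPort: 9104
```

Prometheus then scrapes each pod on the metrics port, so the instance label
identifies the MySQL instance directly.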

I hope this helps!
Matthias

On Mon, Jan 17, 2022, 13:01 ehsan karimi  wrote:

> I installed the mysqld-exporter on Kubernetes, and when I scrape it with
> Prometheus, the instance label shows the pod IP of the mysqld-exporter
> instance. When I see the MySqlIsDown alert, I don't know which MySQL
> instance it is for. I want to add a label to the exposed metrics to show
> the host of MySQL.
>



[prometheus-developers] Please welcome Fabian "fstab" Stäber to the Prometheus Team

2022-01-17 Thread Matthias Rampke
Dear all,

I am happy to announce that Fabian Stäber  is
joining the Prometheus Team. Fabian has been the maintainer of the official
Java client library for a while, and has now become a full team member.

Welcome to the team, Fabian!

Best,
Matthias



Re: [prometheus-developers] Proposal: Adopting rust-open-metrics-client as official Rust client

2022-01-13 Thread Matthias Rampke
For the public record – with "and" the sentence was even more confusing.
What I really meant to write was "official clients should be in the
official org".

+1 to the plan!

/MR

On Thu, Jan 13, 2022, 21:12 Matthias Rampke  wrote:

> > What does "amd" stand for?
>
> Just fat fingers  should have been "and"
>
> /MR
>
> On Thu, Jan 13, 2022, 21:07 Max Inden  wrote:
>
>> Late reply. Sorry about that.
>>
>>
>> > amd official clients should be in the community org with the implied
>> promise of support.
>>
>> I am not quite sure I follow Matthias. What does "amd" stand for?
>>
>>
>> > We could follow python and call it prometheus-client.
>>
>> > I would move the repository directly under
>> GitHub.com/Prometheus/client_rust.
>>
>> I like these two suggestions. I would proceed with the following:
>>
>> 1. Move https://github.com/mxinden/rust-open-metrics-client to
>> https://github.com/prometheus/client_rust
>>
>> 2. Move https://crates.io/crates/open-metrics-client to
>> https://crates.io/crates/prometheus-client
>>
>> Any objections? If not I will start the transition over the weekend.
>>
>>
>> On 28.11.21 00:12, Matthias Rampke wrote:
>> > +1 for putting it in the prometheus org, we should have an official
>> > client for all the popular languages, amd official clients should be in
>> > the community org with the implied promise of support.
>> >
>> > /MR
>> >
>> > On Sat, 27 Nov 2021, 18:12 Julien Pivotto, > > <mailto:roidelapl...@prometheus.io>> wrote:
>> >
>> > We could follow python and call it prometheus-client. In pypi,
>> > Prometheus is also a community made package.
>> >
>> > I would move the repository directly under
>> > GitHub.com/Prometheus/client_rust. I really would love to have a
>> > fully supported rust implementation and I do not think we need an
>> > intermediate step via Prometheus-community.
>> >
>> > Regards,
>> >
>> > Le sam. 27 nov. 2021, 17:09, Max Inden > > <mailto:inde...@gmail.com>> a écrit :
>> >
>> > Hi there,
>> >
>> >
>> > I would like to propose adopting
>> > https://github.com/mxinden/rust-open-metrics-client
>> > <https://github.com/mxinden/rust-open-metrics-client> as the
>> > official Rust
>> > Prometheus client library.
>> >
>> > Next to the source code on GitHub you may find detailed
>> > documentation
>> > along with various examples on
>> > https://docs.rs/open-metrics-client/
>> > <https://docs.rs/open-metrics-client/> as
>> > well as a comparison with the popular rust-prometheus library on
>> > https://github.com/tikv/rust-prometheus/issues/392
>> > <https://github.com/tikv/rust-prometheus/issues/392>.
>> >
>> > # Open Issues
>> >
>> > In case we reach general consensus on the above, the following
>> > issues
>> > still need to be discussed.
>> >
>> > ## Naming on crates.io <http://crates.io>
>> >
>> > Today the library is published on crates.io <http://crates.io>
>> > (the Rust package registry)
>> > as "open-metrics-client" [1]. The crate name "prometheus" is
>> already
>> > taken on crates.io <http://crates.io> [2].
>> >
>> > Would you want to keep the "open-metrics-client" name on
>> > crates.io <http://crates.io>?
>> >
>> > ## Code repository
>> >
>> > Should we keep the source code at github.com/mxinden
>> > <http://github.com/mxinden> for now? Or should
>> > we move the source code to github.com/prometheus-community
>> > <http://github.com/prometheus-community> as an
>> > intermediary step? Or would we want to move it to
>> > github.com/prometheus <http://github.com/prometheus>
>> > directly?
>> >
>> >
>> > Regards,
>> > Max
>> >

Re: [prometheus-developers] Proposal: Adopting rust-open-metrics-client as official Rust client

2022-01-13 Thread Matthias Rampke
> What does "amd" stand for?

Just fat fingers  should have been "and"

/MR

On Thu, Jan 13, 2022, 21:07 Max Inden  wrote:

> Late reply. Sorry about that.
>
>
> > amd official clients should be in the community org with the implied
> promise of support.
>
> I am not quite sure I follow Matthias. What does "amd" stand for?
>
>
> > We could follow python and call it prometheus-client.
>
> > I would move the repository directly under
> GitHub.com/Prometheus/client_rust.
>
> I like these two suggestions. I would proceed with the following:
>
> 1. Move https://github.com/mxinden/rust-open-metrics-client to
> https://github.com/prometheus/client_rust
>
> 2. Move https://crates.io/crates/open-metrics-client to
> https://crates.io/crates/prometheus-client
>
> Any objections? If not I will start the transition over the weekend.
>
>
> On 28.11.21 00:12, Matthias Rampke wrote:
> > +1 for putting it in the prometheus org, we should have an official
> > client for all the popular languages, amd official clients should be in
> > the community org with the implied promise of support.
> >
> > /MR
> >
> > On Sat, 27 Nov 2021, 18:12 Julien Pivotto,  > <mailto:roidelapl...@prometheus.io>> wrote:
> >
> > We could follow python and call it prometheus-client. In pypi,
> > Prometheus is also a community made package.
> >
> > I would move the repository directly under
> > GitHub.com/Prometheus/client_rust. I really would love to have a
> > fully supported rust implementation and I do not think we need an
> > intermediate step via Prometheus-community.
> >
> > Regards,
> >
> > Le sam. 27 nov. 2021, 17:09, Max Inden  > <mailto:inde...@gmail.com>> a écrit :
> >
> > Hi there,
> >
> >
> > I would like to propose adopting
> > https://github.com/mxinden/rust-open-metrics-client
> > <https://github.com/mxinden/rust-open-metrics-client> as the
> > official Rust
> > Prometheus client library.
> >
> > Next to the source code on GitHub you may find detailed
> > documentation
> > along with various examples on
> > https://docs.rs/open-metrics-client/
> > <https://docs.rs/open-metrics-client/> as
> > well as a comparison with the popular rust-prometheus library on
> > https://github.com/tikv/rust-prometheus/issues/392
> > <https://github.com/tikv/rust-prometheus/issues/392>.
> >
> > # Open Issues
> >
> > In case we reach general consensus on the above, the following
> > issues
> > still need to be discussed.
> >
> > ## Naming on crates.io <http://crates.io>
> >
> > Today the library is published on crates.io <http://crates.io>
> > (the Rust package registry)
> > as "open-metrics-client" [1]. The crate name "prometheus" is
> already
> > taken on crates.io <http://crates.io> [2].
> >
> > Would you want to keep the "open-metrics-client" name on
> > crates.io <http://crates.io>?
> >
> > ## Code repository
> >
> > Should we keep the source code at github.com/mxinden
> > <http://github.com/mxinden> for now? Or should
> > we move the source code to github.com/prometheus-community
> > <http://github.com/prometheus-community> as an
> > intermediary step? Or would we want to move it to
> > github.com/prometheus <http://github.com/prometheus>
> > directly?
> >
> >
> > Regards,
> > Max
> >

Re: [prometheus-developers] amtool: passwords passed in the command line

2021-12-03 Thread Matthias Rampke
That's fair, I only ask that we consider use cases when they come up :)

/MR

On Wed, Dec 1, 2021 at 12:38 PM Julien Pivotto 
wrote:

> What use case for amtool would not involve authorization or authentication?
> I don't think there is one.
>
> Le mer. 1 déc. 2021, 09:21, Matthias Rampke  a
> écrit :
>
>> I take a less hard line on that … I think it's good not to *accept
>> secrets* on the command line, but I think we should not categorically
>> exclude generic features (like headers on the command line) because someone
>> *might* put secrets there.
>>
>> I don't have a final opinion whether we should add more than the config
>> file in this case, but a feedback I hear a lot from users is that having to
>> generate files left and right is challenging in
>> post-configuration-management systems (think "I want to run this as a
>> one-off job on Kubernetes"). If our stance that secrets only go in files
>> causes someone to commit that file to source control, we've
>> "verschlimmbessert" (made things worse by improving them) the overall
>> situation.
>>
>> /MR
>>
>>
>> On Tue, Nov 30, 2021 at 9:09 AM Ben Kochie  wrote:
>>
>>> There are lots of ways to easily inject secrets into configs.
>>>
>>> Adding secrets/headers via config file is the safest way.
>>>
>>> While I'm all for allowing sharp edges in tools if they're not default,
>>> I'm strongly against having known unsafe things like secrets on the command
>>> line.
>>>
>>> On Tue, Nov 23, 2021 at 5:38 PM Augustin Husson <
>>> husson.augus...@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> I think having the http config file is a good idea and a safe one.
>>>> The fact that users rotate the credential only means the client has to
>>>> authenticate itself first to get a fresh session / token / credentials.
>>>> Maybe it's more sophisticated than that, but from my understanding it
>>>> shouldn't be.
>>>>
>>>> Kubernetes uses a config file for its kube client and it works nicely.
>>>> The token stored in the file expires every 24h and it's not hard to get
>>>> a fresh one.
>>>>
>>>> Best regards,
>>>> Augustin.
>>>>
>>>> Le mar. 23 nov. 2021 à 17:15, Julien Pivotto <
>>>> roidelapl...@prometheus.io> a écrit :
>>>>
>>>>> Hello -developers,
>>>>>
>>>>> In the past and still today, we have asked exporters not to use secrets
>>>>> on the command line.
>>>>>
>>>>> There is a pull request that wants to add secrets on the amtool
>>>>> command line:
>>>>> https://github.com/prometheus/alertmanager/pull/2764
>>>>>
>>>>> and there are user requests to pass arbitrary HTTP headers to amtool
>>>>> via the command line too. In the same way, users want to add arbitrary
>>>>> secrets in HTTP headers:
>>>>> https://github.com/prometheus/alertmanager/issues/2597
>>>>>
>>>>> I am personally opposed to allowing what we ask others not to do, but
>>>>> maybe I am stubborn, so I am asking the developer community here: what
>>>>> should we do?
>>>>>
>>>>> My proposal was to introduce an HTTP client configuration file to
>>>>> amtool, so we tackle the secret issue and enable all the other HTTP
>>>>> client options easily (oauth2, bearer token, proxy_url, ...). The
>>>>> community was not entirely keen on it:
>>>>>
>>>>> https://github.com/prometheus/alertmanager/issues/2597#issuecomment-974144389
>>>>>
>>>>> What do the large group of developers think about all this? Note that
>>>>> the solution we chose here could/should be applied to promtool and
>>>>> getool later.
>>>>>
>>>>> Thanks!
>>>>>
>>>>> --
>>>>> Julien Pivotto
>>>>> @roidelapluie
>>>>>

Re: [prometheus-developers] amtool: passwords passed in the command line

2021-12-01 Thread Matthias Rampke
I take a less hard line on that … I think it's good not to *accept secrets* on
the command line, but I think we should not categorically exclude generic
features (like headers on the command line) because someone *might* put
secrets there.

I don't have a final opinion whether we should add more than the config
file in this case, but a feedback I hear a lot from users is that having to
generate files left and right is challenging in
post-configuration-management systems (think "I want to run this as a
one-off job on Kubernetes"). If our stance that secrets only go in files
causes someone to commit that file to source control, we've
"verschlimmbessert" (made things worse by improving them) the overall
situation.
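For context, the HTTP client configuration file proposed below follows the
Prometheus-style HTTP client config format, so it might look something like
this. This is a sketch with Prometheus-style field names; the exact format
amtool would accept was still under discussion at this point.

```yaml
# Sketch of an HTTP client config file (Prometheus-style field names).
# Secrets stay off the command line and out of the process list.
basic_auth:
  username: alice
  # Reference a file rather than inlining the secret, so this config
  # can be shared without the credential itself.
  password_file: /etc/amtool/password
proxy_url: http://proxy.internal:3128
tls_config:
  ca_file: /etc/amtool/ca.crt
```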

/MR


On Tue, Nov 30, 2021 at 9:09 AM Ben Kochie  wrote:

> There are lots of ways to easily inject secrets into configs.
>
> Adding secrets/headers via config file is the safest way.
>
> While I'm all for allowing sharp edges in tools if they're not default,
> I'm strongly against having known unsafe things like secrets on the command
> line.
>
> On Tue, Nov 23, 2021 at 5:38 PM Augustin Husson 
> wrote:
>
>> Hello,
>>
>> I think having the http config file is a good idea and a safe one.
>> The fact users have a rotation in the credential used only means the
>> client has to authenticate themself first to get a fresher session / token
>> / credentials. Maybe it's more sophisticated than that, but from my
>> understanding it shouldn't be.
>>
>> Kubernetes is using a config file for it's kube client and it works
>> nicely. The token used and stored in the file expires every 24h  and it's
>> not so hard to have a fresher one.
>>
>> Best regards,
>> Augustin.
>>
>> Le mar. 23 nov. 2021 à 17:15, Julien Pivotto 
>> a écrit :
>>
>>> Hello -developers,
>>>
>>> In the past and still today, we have asked exporters not to use secrets
>>> on the command line.
>>>
>>> There is a pull request that wants to add secrets on the amtool command
>>> line:
>>> https://github.com/prometheus/alertmanager/pull/2764
>>>
>>> and there are user requests to pass arbitrary HTTP headers to amtool via
>>> the command line too. In the same way, users want to add arbitrary secrets
>>> in HTTP headers: https://github.com/prometheus/alertmanager/issues/2597
>>>
>>> I am personally opposed to allowing what we ask others not to do, but
>>> maybe I am stubborn, so I am asking the developer community here: what
>>> should we do?
>>>
>>> My proposal was to introduce an HTTP client configuration file to amtool,
>>> so we tackle the secret issue and enable all the other HTTP client
>>> options easily (oauth2, bearer token, proxy_url, ...). The community was
>>> not entirely keen on it:
>>>
>>> https://github.com/prometheus/alertmanager/issues/2597#issuecomment-974144389
>>>
>>> What do the large group of developers think about all this? Note that
>>> the solution we chose here could/should be applied to promtool and
>>> getool later.
>>>
>>> Thanks!
>>>
>>> --
>>> Julien Pivotto
>>> @roidelapluie
>>>


Re: [prometheus-developers] Requirements / Best Practices to use Prometheus Metrics for Serverless environments

2021-11-27 Thread Matthias Rampke
What properties would an ideal OpenMetrics push receiver have? In
particular, I am wondering:

- What tradeoff would it make when metric ingestion is slower than metric
production? Backpressure or drop data?
- What are the semantics of pushing a counter?
- Where would the data move from there, and how?
- How many of these receivers would you typically run? How much
coordination is necessary between them?

From observing the use of the statsd exporter, I see a few cases where it
covers ground that is not very compatible with the in-process aggregation
implied by the pull model. It has the downside of mapping through a
different metrics model, and its tradeoffs are informed by the ones statsd
made 10+ years ago. I wonder what it would look like, remade in 2022
starting from OpenMetrics.
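To make the counter question above concrete, here is a toy model (entirely
hypothetical, not a proposal for a real wire format) of the two semantics a
push receiver could offer: pushing absolute totals, where the receiver keeps
the last value per pusher, versus pushing deltas, where the receiver
accumulates increments from many pushers.

```python
# Toy model of two counter-push semantics; illustrative only.

class PushReceiver:
    def __init__(self):
        self.totals = {}   # series -> last absolute value pushed
        self.deltas = {}   # series -> running sum of pushed increments

    def push_total(self, series, value):
        # "Last write wins": the pusher owns the counter state, so a
        # restarting pusher resets the series, just like a scraped target.
        self.totals[series] = value

    def push_delta(self, series, increment):
        # The receiver owns the counter state, so many short-lived
        # pushers (e.g. serverless invocations) can share one series.
        self.deltas[series] = self.deltas.get(series, 0) + increment

r = PushReceiver()
for v in (5, 12):           # one long-lived pusher reporting totals
    r.push_total("requests_total", v)
for inc in (1, 1, 3):       # three short-lived pushers reporting deltas
    r.push_delta("requests_total", inc)
```

The delta variant is closer to what statsd does; the totals variant is closer
to the pull model, which is part of why the tradeoffs differ so much.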


/MR

On Sat, 27 Nov 2021, 12:50 Rob Skillington, 
wrote:

> Here’s the documentation for using M3 coordinator (with or without M3
> aggregator) with a backend that has a Prometheus Remote Write receiver:
> https://m3db.io/docs/how_to/any_remote_storage/
>
> Would be more than happy to do a call some time on this topic. The more
> we’ve looked at this, the clearer it is that this is primarily a client
> library issue, well before you consider the backend/receiver aspect (there
> are options out there for that, and they are fairly mechanical to overcome,
> vs the client library concerns, which have a lot of ergonomic and practical
> issues, especially in a serverless environment where you may need to wait
> for publishing before finishing your request - perhaps an async process,
> like publishing a message to a local serverless message queue such as SQS
> and having a reader consume it and use another client library to push that
> data out, is ideal - it would be more type safe and probably less lossy
> than writing logs and then reading and publishing them, but it would need
> good client library support for both the serverless producers and the
> readers/pushers).
>
> Rob
>
> On Sat, Nov 27, 2021 at 1:41 AM Rob Skillington 
> wrote:
>
>> FWIW we have been experimenting with users pushing OpenMetrics protobuf
>> payloads quite successfully, but only sophisticated exporters that can
>> guarantee no collisions of time series and generate their own monotonic
>> counters, etc are using this at this time.
>>
>> If you're looking for a solution that also involves aggregation support,
>> M3 Coordinator (either standalone or combined with M3 Aggregator) supports
>> Remote Write as a backend (and is thus compatible with Thanos, Cortex and
>> of course Prometheus itself too due to the PRW receiver).
>>
>> M3 Coordinator however does not have any nice support to publish to it
>> from a serverless environment (since the primary protocol it supports is
>> Prometheus Remote Write which has no metrics clients, etc I would assume).
>>
>> Rob
>>
>>
>> On Mon, Nov 15, 2021 at 9:54 PM Bartłomiej Płotka 
>> wrote:
>>
>>> Hi All,
>>>
>>> I would love to resurrect this thread. I think we are missing a good
>>> push-gateway-like product that would ideally live in Prometheus
>>> (repo/binary, or be recommended by us) and convert events to metrics
>>> cheaply, because that is what this is about when we talk about
>>> short-lived containers and serverless functions. What's the latest, Rob?
>>> I would be interested in some call for this if that is still on the
>>> table. (:
>>>
>>> I think we have some new options on the table, like supporting Otel
>>> metrics as such a potential high-cardinality event push, given there are
>>> more and more clients for that API. Potentially the Otel collector can
>>> work as such a "push gateway" proxy, but at this point it's extremely
>>> generic, so we might want to consider something more
>>> focused/efficient/easier to maintain. Let's see (: The other problem is
>>> that Otel metrics is yet another protocol. Users might want to use the
>>> push gateway API, remote write, or logs/traces as per @Tobias Schmidt  idea:
>>>
>>> Another service "loggateway" (or otherwise named) would then stream the
>>> logs, aggregate them and either expose them on the common /metrics
>>> endpoint or push them with remote write right away to a Prometheus
>>> instance hosted somewhere (like Grafana Cloud).
>>>
>>>
>>> Kind Regards,
>>> Bartek Płotka (@bwplotka)
>>>
>>>
>>> On Fri, Jun 25, 2021 at 6:11 AM Rob Skillington 
>>> wrote:
>>>
 With respect to OpenMetrics push, we had something very similar at
 $prevco that pushed something that looked very similar to the protobuf
 payload of OpenMetrics (but was Thrift snapshot of an aggregated set of
 metrics from in process) that was used by short running tasks (for Jenkins,
 Flink jobs, etc).

 I definitely agree it’s not ideal and ideally the platform provider can
 supply a collection point (there is something for Jenkins, a plug-in that
 can do this, but custom metrics is very hard / nigh impossible to make work
 with it, and this is a non-cloud provider environment that’s actually
 possible to make 

Re: [prometheus-developers] Proposal: Adopting rust-open-metrics-client as official Rust client

2021-11-27 Thread Matthias Rampke
+1 for putting it in the prometheus org, we should have an official client
for all the popular languages, amd official clients should be in the
community org with the implied promise of support.

/MR

On Sat, 27 Nov 2021, 18:12 Julien Pivotto, 
wrote:

> We could follow python and call it prometheus-client. In pypi, Prometheus
> is also a community made package.
>
> I would move the repository directly under
> GitHub.com/Prometheus/client_rust. I really would love to have a fully
> supported rust implementation and I do not think we need an intermediate
> step via Prometheus-community.
>
> Regards,
>
> Le sam. 27 nov. 2021, 17:09, Max Inden  a écrit :
>
>> Hi there,
>>
>>
>> I would like to propose adopting
>> https://github.com/mxinden/rust-open-metrics-client as the official Rust
>> Prometheus client library.
>>
>> Next to the source code on GitHub you may find detailed documentation
>> along with various examples on https://docs.rs/open-metrics-client/ as
>> well as a comparison with the popular rust-prometheus library on
>> https://github.com/tikv/rust-prometheus/issues/392.
>>
>> # Open Issues
>>
>> In case we reach general consensus on the above, the following issues
>> still need to be discussed.
>>
>> ## Naming on crates.io
>>
>> Today the library is published on crates.io (the Rust package registry)
>> as "open-metrics-client" [1]. The crate name "prometheus" is already
>> taken on crates.io [2].
>>
>> Would you want to keep the "open-metrics-client" name on crates.io?
>>
>> ## Code repository
>>
>> Should we keep the source code at github.com/mxinden for now? Or should
>> we move the source code to github.com/prometheus-community as an
>> intermediary step? Or would we want to move it to github.com/prometheus
>> directly?
>>
>>
>> Regards,
>> Max
>>



Re: [prometheus-developers] Welcoming Matthias Loibl as a new Prometheus team member

2021-11-26 Thread Matthias Rampke
Welcome to the other Matthias >:)

/MR

On Thu, Nov 25, 2021 at 2:11 PM Goutham Veeramachaneni 
wrote:

> Welcome Matthias! Looong overdue :)
>
> Thanks
> Goutham
>
> On Thu, Nov 25, 2021 at 2:31 PM Julien Pivotto 
> wrote:
>
>> Welcome!!
>>
>> On 25 Nov 12:32, Julius Volz wrote:
>> > Hi Prometheans,
>> >
>> > Please welcome Matthias Loibl as a new member to the Prometheus team!
>> > Matthias has been a friend of the Prometheus project for a long time and
>> > has helped out with a lot of community work (meetups, evangelism,
>> > contributor office hours, best practices around SLOs, and more).
>> >
>> > Cheers,
>> > Julius
>> >
>> --
>> Julien Pivotto
>> @roidelapluie
>>
>>
>



Re: [prometheus-developers] Enabling auto-merge

2021-11-26 Thread Matthias Rampke
Ah, I didn't understand at first that this is a per-pull-request thing.
What is the right way to enable this? Do I change the repository
configuration by hand?

/MR

On Thu, Nov 25, 2021 at 5:16 PM Julien Pivotto 
wrote:

> Hello,
>
> I have enabled "auto merge" in prometheus/prometheus with the following
> checks:
>
> build
> test_mixins
> test_windows
> test_go
> test_ui
> Fuzzing
> lint
>
> It's an experiment; we can revert if we wish. Also, it's opt-in per pull
> request, of course.
>
> On 25 Nov 09:44, Levi Harrison wrote:
> > Hi Augustin,
> >
> > That's one thing I was slightly confused about. In the official
> > documentation (
> >
> https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/automatically-merging-a-pull-request
> )
> > it says "when all required reviews are met and status checks have
> passed",
> > which makes me think that it doesn't take the "required status check"
> > distinction into consideration, instead just waiting for all of them to
> > pass, required or not. I agree with you that for auto-merge all jobs
> should
> > be mandatory.
> >
> > Thanks,
> > Levi
> >
> > On Thu, Nov 25, 2021 at 9:32 AM Augustin Husson <
> husson.augus...@gmail.com>
> > wrote:
> >
> > > Hi Levi,
> > >
> > > From my point of view, yeah it's safe to activate it. But before doing
> > > that, maybe we should take a look at the different jobs and figure out
> > > which ones are mandatory? Probably all jobs are mandatory (that's my
> > > feeling at least).
> > >
> > > Cheers,
> > > Augustin.
> > >
> > > Le jeu. 25 nov. 2021 à 15:24, Levi Harrison <
> levisamuelharri...@gmail.com>
> > > a écrit :
> > >
> > >> Hi all,
> > >>
> > >> Picking up the previous thread (
> > >>
> https://groups.google.com/g/prometheus-developers/c/tPLOmT9pnBw/m/kxLn0q59AgAJ
> ),
> > >> I'd like to re-propose allowing auto-merge as a merge option.
> > >>
> > >> Auto-merge is a merge option, to be used in conjunction with other
> merge
> > >> options like squash merge, that once "enabled" for a pull request
> (the same
> > >> action as clicking the merge button), will automatically merge the
> pull
> > >> request once all checks have passed and required approvals have been
> > >> submitted. If more commits are added by contributors without write
> access
> > >> in the period between when auto-merge is enabled and the PR is merged,
> > >> auto-merge will be canceled.
> > >>
> > >> Auto-merge won't just merge any PR once it's green, it requires the
> same
> > >> intentional action from a maintainer as regularly merging a PR does.
> > >> Auto-merge also doesn't treat an approval as a merge action, unless
> it is
> > >> one of the requirements to merge and auto-merge is enabled. I believe
> we
> > >> only require approvals on release branches, so only if a maintainer
> has
> > >> decided to enable auto-merge for a PR to a release branch would an
> approval
> > >> ever cause a PR to be auto-merged.
> > >>
> > >>
> > >>
> https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/automatically-merging-a-pull-request
> > >>
> > >> Personally, I could see this being useful when myself or a contributor
> > >> has just pushed to a PR and I want to merge it, but also can't sit
> around for
> > >> 20 minutes until all the checks pass.
> > >>
> > >> Thanks (and congratulations to all the new team members),
> > >> Levi
> > >>
> > >
> >
>
> --
> Julien Pivotto
> @roidelapluie
>
>


Re: [prometheus-developers] Option to disable security on Prometheus health endpoints, /-/healthy and /-/ready

2021-10-26 Thread Matthias Rampke
It seems to me that these are two different directions – locking down the
admin endpoints more vs. not locking down the health endpoints at all.

In what scenario would one want to have /-/healthy and /-/ready protected?

/MR


On Thu, Sep 23, 2021 at 6:11 PM Julien Pivotto 
wrote:

> On 23 Sep 07:57, 'Robin Wittler' via Prometheus Developers wrote:
> > Hello,
> >
> > I want to start a discussion if Prometheus should have config options to
> > disable security on the "/-/healthy" and "/-/ready" endpoints.
> >
> > Thanks to Amrit Pal Singh for bringing this to the GitHub issue list
> > first: https://github.com/prometheus/prometheus/issues/9166
> >
> > Running Prometheus with basic auth enabled on K8s requires some
> > workarounds to be able to use the liveness and/or readiness checks. One
> > would be the mentioned "httpHeaders" option, which requires putting
> > more or less plaintext credentials in the K8s definitions (which I
> > really do not want).
> >
> > Currently I've disabled basic auth in Prometheus and use an nginx in
> > front that takes care of auth on all endpoints, except for /-/ready and
> > /-/healthy. But I do not like this either. :)
> >
> > Julien Pivotto suggested talking about this on the dev mailing list, so
> > please add your thoughts about this. Thanks.
>
> Yes, I'd like to discuss how we could support other use cases:
>
> - Restricting prometheus admin endpoints to certain users.
> - Restricting certain Pushgateway users to certain paths (to force them
>   to only push their own metrics).
>
> I feel like we could either decide we do not want those use cases or find
> a solution that would fit them all.
>
>
> >
>
>
> --
> Julien Pivotto
> @roidelapluie
>
>



Re: [prometheus-developers] HA alertmanager clusters may merge into one if they run in the same flat network

2021-09-20 Thread Matthias Rampke
What should happen if the DNS resolution does not result in the expected
number of peers either? How would a deliberate shrinking or growing of a
cluster work?

Another solution I have seen (e.g. in Cassandra) is to have a cluster
identity, such as a cluster name. Instances would refuse to talk to other
instances if they announce the wrong cluster name.

There could be a default cluster name (or a special case for when it's
empty), so that it doesn't change anything for single-cluster use cases. It
should also support the transition from older versions, or no cluster name,
to a named cluster, with a rolling restart.

/MR

On Thu, Sep 9, 2021, 10:33 Андрей Еньшин  wrote:

> Hi prometheus folks,
>
> I have a question about alertmanager.
>
> Here is a one-year-old issue about merging of a few HA Alertmanager
> clusters into one big cluster over time:
> https://github.com/prometheus/alertmanager/issues/2250
>
> I managed to reproduce it on my local k8s kind cluster. It seems there is
> a small discrepancy between the list of peers reported by the gossip
> library and the list of peers from the Alertmanager config file.
>
> We can work around it by using a k8s network policy. However, a more
> proper fix would be on the Alertmanager side: keep an eye on the number of
> peers and compare it with the desired number. If there is an unexpected
> state, clear the table of peers, do DNS resolution once more, and form a
> new peer table. Maybe there is a better solution. What do you think?
>
> I could probably even introduce a PR if we can agree on a way to fix it
> and someone can support me with a review :)
>
>



Re: [prometheus-developers] Change language on blackbox vs whitebox to closedbox vs openbox

2021-09-09 Thread Matthias Rampke
*resurrecting thread*

I agree that we should rename it. I wince every time I need to call it by
name. I often just call it prober anyway and even unaffiliated people
immediately know what I'm talking about.

More broadly, I think "blackbox" / "whitebox" monitoring is not very
descriptive, and would propose using "probing" and "instrumentation"
respectively.

It would take a bit of transition planning, maybe with one "double release"
under both names.

What do the maintainers think?

/MR

On Thu, Jun 11, 2020, 18:49 Ben Kochie  wrote:

> +1 for "prober". We could call it simply "prober". It more directly
> describes what it's doing.
>
>
>
> On Thu, Jun 11, 2020 at 12:35 PM Julius Volz 
> wrote:
>
>> Btw. in case we do end up going for a rename, I think Probe Exporter or
>> Prober Exporter would be a nice and descriptive name for what it's doing
>> (active probing of things).
>>
>> On Thu, Jun 11, 2020, 12:03 Julius Volz  wrote:
>>
>>> Hi,
>>>
>>> Good motivation, personally I think that kind of replacement is a good
>>> thing to do with blacklist/whitelist (as is happening with flag names in
>>> the Node Exporter). But I don't know how people feel about blackbox /
>>> whitebox yet, as it doesn't carry the same negative/positive association
>>> with colors. It would mean more invasive changes, like renaming the entire
>>> repository and related documentation web pages, and potential Google
>>> findability confusion (old blog posts etc. mentioning the old name).
>>>
>>> I haven't heard blackbox/whitebox crop up in the same way as e.g.
>>> blacklist/whitelist yet, but would be interested in affected Black people's
>>> opinion on whether this seems troublesome as well. If this turns out to be
>>> offensive to affected people, that'd be a good argument for renaming it IMO.
>>>
>>> Julius
>>>
>>> On Thu, Jun 11, 2020 at 11:29 AM Frederic Branczyk 
>>> wrote:
>>>
 Hi all,

 I would like to propose to change all occurrences and namings within
 the Prometheus project of whitebox and blackbox monitoring. In itself the
 term Black box doesn't seem
 to come from a racist background, but I think it's problematic when the
 opposite is "white", in particular as this has a connotation in relation to
 whitelist/blacklist namings which are undoubtedly problematic. I would like
 to propose to replace them with open/closed box monitoring, which not only
 removes any potential of being offensive, it actually conveys much more
 clearly what is meant without having to explain.

 The biggest impact this would have would be renaming of the
 blackbox_exporter to
 closedbox_exporter, all other occurrences of this language seem to be
 limited to documentation.

 Best regards,
 Frederic


>>
>


Re: [prometheus-developers] consul_exporter - Expose health statuses as values?

2021-08-17 Thread Matthias Rampke
What would some common queries be that this affects, and how would they
look in the future? For example: "what fraction of nodes is down?" and
"which nodes have multiple services down?"

/MR
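
To make this concrete, here is a hedged sketch of how the second of those queries might look, today and under the value mapping proposed below (label values and thresholds are illustrative):

```promql
# Today: one series per status label; "down" means the critical series is 1.
count by (node) (consul_health_service_status{status="critical"} == 1) > 1

# Proposed: one series per check; "down" means the value is 0 (critical).
count by (node) (consul_health_service_status == 0) > 1
```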

On Mon, Aug 16, 2021, 22:31 Matt Russi  wrote:

> Currently, the consul_exporter exposes 4 series per health_node and
> health_service status check, each with a label indicating the status
> (maintenance, warning, critical, or passing). In larger environments, this
> creates quite a few extra series.
>
> As somewhat of a precedent, the status is already being mapped to a value
> for the consul_serf_lan_member_status metric (as Consul's API provides this
> mapping).
> # HELP consul_serf_lan_member_status Status of member in the cluster.
> 1=Alive, 2=Leaving, 3=Left, 4=Failed.
>
> I wanted to get some thoughts around this before pursuing a PR.
>
> In my example, I used -2=maintenance, -1=warning, 0=critical, and
> 1=passing to fall in line with the Prometheus paradigm of up=0 (down) and
> up=1 (up). Since we have two additional values, the negative numbers play
> more nicely when trying to do a value mapping in Grafana. Not married to
> the values themselves though. :)
>
> Present Example:
> consul_health_node_status{check="serfHealth",node="example_node",status="critical"}
> 0
> consul_health_node_status{check="serfHealth",node="example_node",status="maintenance"}
> 0
> consul_health_node_status{check="serfHealth",node="example_node",status="passing"}
> 1
> consul_health_node_status{check="serfHealth",node="example_node",status="warning"}
> 0
>
> consul_health_service_status{check="service:10.0.0.1_443",node="example_node",service_id="10.0.0.1_443",service_name="auth_service",status="critical"}
> 0
> consul_health_service_status{check="service:10.0.0.1_443",node="example_node",service_id="10.0.0.1_443",service_name="auth_service",status="maintenance"}
> 0
> consul_health_service_status{check="service:10.0.0.1_443",node="example_node",service_id="10.0.0.1_443",service_name="auth_service",status="passing"}
> 1
> consul_health_service_status{check="service:10.0.0.1_443",node="example_node",service_id="10.0.0.1_443",service_name="auth_service",status="warning"}
> 0
>
> Proposed Example:
> # HELP consul_health_node_status Status of health checks associated with a
> node. -2=maintenance, -1=warning, 0=critical, 1=passing
> consul_health_node_status{check="serfHealth",node="example_node"} 1
>
> # HELP consul_health_service_status Status of health checks associated
> with a service. -2=maintenance, -1=warning, 0=critical, 1=passing
> consul_health_service_status{check="service:10.0.0.1_443",node="example_node",service_id="10.0.0.1_443",service_name="auth_service"}
> 1
>
>



Re: [prometheus-developers] Deprecating https://github.com/prometheus/nagios_plugins (?)

2021-08-17 Thread Matthias Rampke
I think the no-magic route is better. You can also archive the repo[0] to
make it clear that it's read-only. (With this GitHub feature, do we still
need to graveyard anything ourselves?)

/MR



[0]
https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/archiving-a-github-repository/archiving-repositories

On Fri, Aug 13, 2021, 17:57 Bjoern Rabenstein  wrote:

> Hi,
>
> More than a year ago, I added a pointer from
> https://github.com/prometheus/nagios_plugins (the "old repo") to its
> fork https://github.com/magenta-aps/check_prometheus_metric (the "new
> repo"), see https://github.com/prometheus/nagios_plugins/pull/26 .
>
> I've never heard any complaints about the new plugin, so I think it's
> about time to properly deprecate the old repo.
>
> First of all: Does anyone have any objections?
>
> Assuming we can go forward with it: What do you think is the best
> procedure? Ideally, we would redirect from the old to the new
> repo. However, that's not as easy as it looks. So far, I think this
> would require the following gymnastics:
>
> - Delete the new repo.
> - Transfer the ownership of the old repo to magenta-aps with
>   the same name as the (deleted) new repo.
> - Replay all the commits that happened in the new repo to the
>   transferred repo to make it appear like the new repo did before,
>   just not as a fork.
>
> Does anyone have a better idea?
>
> And if not, should we really do that, or would it be better to apply less
> magic: just put a big, fat deprecation warning on the old repo and
> graveyard it after another half year or so?
>
> Any feedback welcome.
> --
> Björn Rabenstein
> [PGP-ID] 0x851C3DA17D748D03
> [email] bjo...@rabenste.in
>
>



Re: [prometheus-developers] Python Multiprocess

2021-04-09 Thread Matthias Rampke
Would you mind making a PR to improve the documentation? As the expert, it
is easy to write documentation that is *technically correct* but not
helpful to an unsuspecting user; you are now in the best position to make
this clear to the next person reading it.

Best,
MR

On Wed, Apr 7, 2021 at 6:14 PM Esau Rodriguez  wrote:

> Hi Chris,
> thanks a lot for your response. I definitively didn't understand that line
> of the documentation.
>
> I just tested the solution on the simple test and it works as expected.
>
> Thanks a lot,
> Esau.
> On Wednesday, April 7, 2021 at 5:00:11 PM UTC+1 csmarc...@gmail.com wrote:
>
>> Hello,
>>
>> It appears that there is a subtle bug/misunderstanding in the code that
>> is linked, though that is possibly due to the multiprocess documentation
>> not being clear enough. When the code specifies a registry for each metric (
>> example
>> ),
>> it is causing both a process-local metric, and the multi process metrics to
>> be registered, so depending on which process handles the request you will
>> get different responses for the metric with "Request latency" in the HELP
>> text. In the multiprocess documentation
>> 
>> this is what is meant by "Registries can not be used as normal, all
>> instantiated metrics are exported". If you remove the registry=registry
>> lines in the example you will see just the multiprocess output as expected.
>>
>> You could also move the registry and MultiProcessCollector code into the
>> request handler to make it clear that the registry used by the
>> MultiProcessCollector should not have anything registered to it, as seen in
>> the example in the multiprocess documentation I linked above.
>>
>> Let me know if that was unclear or you have more questions,
>> Chris
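
For illustration, here is a minimal sketch of the pattern Chris describes, assuming the prometheus_client package. The metric name and labels mirror the linked demo; all other names, and setting the environment variable in code, are purely illustrative:

```python
# PROMETHEUS_MULTIPROC_DIR must be set before prometheus_client is imported;
# in a real deployment it comes from the environment or the gunicorn config,
# not from application code as done here for the demo.
import os
import tempfile

os.environ.setdefault("PROMETHEUS_MULTIPROC_DIR", tempfile.mkdtemp())

from prometheus_client import CollectorRegistry, Histogram, generate_latest
from prometheus_client import multiprocess

# No registry=... argument: the metric registers only with the default global
# registry, which is never exposed, so only the merged multiprocess view is
# exported from the scrape endpoint below.
REQUEST_LATENCY = Histogram(
    "request_latency_seconds", "Request latency",
    ["app_name", "endpoint"],
)

def metrics_endpoint() -> bytes:
    # Build a fresh registry per scrape; MultiProcessCollector merges the
    # per-process mmap files into a single exposition.
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    return generate_latest(registry)

REQUEST_LATENCY.labels("webapp", "/").observe(0.1)
print(metrics_endpoint().decode())
```

Because the scrape-time registry contains only the MultiProcessCollector, each metric appears exactly once, regardless of which worker handles the request.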
>>
>> On Wed, Apr 7, 2021 at 8:12 AM Esau Rodriguez  wrote:
>>
>>> Hi all,
>>> I'm seeing a behaviour with the Python client in multiprocess mode,
>>> using gunicorn and Flask, and I'm not sure whether I'm missing something
>>> or there's a bug.
>>>
>>> When I hit the endpoint producing the Prometheus text to be scraped, I'm
>>> seeing 2 versions of the same metrics with different help texts. I would
>>> expect to see only one metric (the multiprocess one).
>>>
>>> I thought I had something wrong in my setup, so I tried it with a pretty
>>> simple project that I found here
>>> https://github.com/amitsaha/python-prometheus-demo/tree/master/flask_app_prometheus_multiprocessing
>>> (not my code).
>>>
>>> I hit a random url and then the `/metrics` endpoint
>>>
>>> You can see in the raw response down here we have 2 entries for each
>>> metric, with different `types` and `help` texts. In this example there
>>> really wasn't any processes but in the real example in prod we have several
>>> processes and we see the prometheus scraper `picks` a different value
>>> depending on the order of the response.
>>>
>>> Am I missing something or is there a bug there?
>>>
>>> The raw response was:
>>>
>>> 
>>> % curl --location --request GET 'http://localhost:5000/metrics'
>>> # HELP request_latency_seconds Multiprocess metric
>>> # TYPE request_latency_seconds histogram
>>> request_latency_seconds_sum{app_name="webapp",endpoint="/metrics"}
>>> 0.00040912628173828125
>>> request_latency_seconds_sum{app_name="webapp",endpoint="/"}
>>> 0.0001652240753173828
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.005"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.01"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.025"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.05"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.075"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.1"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.25"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.5"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="0.75"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="1.0"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="2.5"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="5.0"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="7.5"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="10.0"}
>>> 1.0
>>> request_latency_seconds_bucket{app_name="webapp",endpoint="/metrics",le="+Inf"}
>>> 1.0
>>> request_latency_seconds_count{app_name="webapp",endpoint="/metrics"} 

Re: [prometheus-developers] Re: [VOTE] Allow environment variable expansion on external label values

2021-03-26 Thread Matthias Rampke
YES

On Thu, Mar 25, 2021 at 10:09 PM Julien Pivotto 
wrote:

> On 25 Mar 23:08, Julien Pivotto wrote:
> > On 25 Mar 23:06, Julien Pivotto wrote:
> > > Hereby I am calling a vote to allow the expansion on environment
> > > variables in the prometheus configuration file.
> > > Because it can be seen as an override of a previous vote[1], I am
> calling a
> > > new vote for this specific part.
> > >
> > > The consensus in the dev summit is:
> > >
> > > We will allow substitution of ENV variables into label values in the
> > > external_label configuration block only, behind an experimental feature
> > > flag.
> >
> > For full clarity, the vote is to give this dev-summit consensus sentence
> > the force of a vote.
>
> YES
>
> >
> > >
> > > The vote is open for a week (until April 2nd), or until we have 9 ayes
> or 9 noes.
> > > Any Prometheus team member is eligible to vote[2].
> > >
> > > 1:
> https://groups.google.com/g/prometheus-developers/c/tSCa4ukhtUw/m/J-j0bSEYCQAJ
> > > 2: https://prometheus.io/governance/
> > >
> > > --
> > > Julien Pivotto
> > > @roidelapluie
> >
> > --
> > Julien Pivotto
> > @roidelapluie
>
> --
> Julien Pivotto
> @roidelapluie
>
>



Re: [prometheus-developers] Add collector for database/sql#DBStats

2021-03-23 Thread Matthias Rampke
Ah, even before following the link I had the same questions as Björn: Could
this be a separate thing? Does it benefit significantly from being part of
the standard client_golang, or could it just as well be something people
pull in on demand?

I see at least two libraries that already do this; do we need a third?
What is different about your approach?

/MR

On Mon, Mar 22, 2021 at 11:06 AM Mitsuo Heijo 
wrote:

> Hi Prometheus Developers.
>
> I was guided here from
> https://github.com/prometheus/client_golang/pull/848
>
> There are several agenda items.
>
> 1. Whether client_golang should include DBStatsCollector.
> 2. Should the metrics be prefixed with go_... ? While the database/sql
> package is Go-specific, those metrics aren't really coming from the Go
> runtime.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Prometheus Developers" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to prometheus-developers+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/prometheus-developers/ecb8b635-8e88-4cd2-880a-c1f00a76d41an%40googlegroups.com
> 
> .
>



Re: [prometheus-developers] Mount Point missing alarm

2021-03-05 Thread Matthias Rampke
Moving this to prometheus-users where it fits better.

Try using the "unless" operator to compare against a metric that is present
for all instances that should have this mountpoint. Assuming that is the
case for all targets under this job:

up{job="XX"} unless on(instance)
node_filesystem_readonly{fstype="nfs2",mountpoint="/NFS2"}

(or something like that.) The idea is that any instance of this job is
suspect, unless it has the mount-point-specific metric.
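
Wrapped into a generic alerting rule, a hedged sketch could look like this (the job name, fstype, and mountpoint are placeholders carried over from the example, and the rule and alert names are made up):

```yaml
groups:
  - name: mountpoint-missing
    rules:
      - alert: MountPointMissing
        expr: up{job="XX"} unless on(instance) node_filesystem_readonly{fstype="nfs2",mountpoint="/NFS2"}
        for: 5m
        annotations:
          summary: "Mount point /NFS2 is missing on {{ $labels.instance }}"
```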

/MR


On Sun, Feb 28, 2021, 10:56 s.saurab...@gmail.com <
s.saurabhjain...@gmail.com> wrote:

>
> Hi Everyone,
>
> I have specific requirement from the client that prometheus should
> generate alert in case any mount point on the server goes missing.
>
> For Eg: If server has 3 mount points like /data1 /NFS1 /NFS2 and if by any
> reason ,/NFS2 gets delinked from the server in that case prometheus should
> generate alert.
>
> When I tried with below query,it is working fine(as this metric goes
> missing when /NFS2 got delinked from the server)
>
> absent(node_filesystem_readonly{device="XX:/NFS2",fstype="nfs2",hostname="EAST_WB_XX",instance="XX:9100",job="XX",mountpoint="/NFS2"})
> == 1
>
> However there are 800 servers which are required to get monitor therefore
> it is not possible to add 800 rules for each IP in the rules.yml.
>
> When I add below rule,it didn't generate the missing alert.
>
> absent(node_filesystem_readonly{mountpoint="/NFS2"}) == 1
>
> Please advice if we can achieve this with some tweaking in the query so
> that it can be generic for all servers.
>
> Looking forward for your response.
>
> Thanks,
> Saurabh
>



Re: [prometheus-developers] Lazy consensus: Merging options

2020-12-03 Thread Matthias Rampke
… which we used to encourage inside the team.

I prefer real merging too; I like git and don't feel the need to make
history linear.

I don't use rebase merging from PRs but I also don't think we should remove
that while allowing squash merging. I am okay with every maintainer having
their own style.

/MR

On Thu, Dec 3, 2020, 18:50 Sylvain Rabot  wrote:

> Squashing and rebasing also sucks for people who sign their commits.
>
> On Thu, 3 Dec 2020 at 16:55, Sylvain Rabot  wrote:
>
>> I also don’t like squashing.
>>
>> I don’t count the hours lost because I thought a commit referenced in a
>> merged PR was not in a tag because the squash generated a new commit id.
>>
>> On 3 Dec 2020, at 15:27, Frederic Branczyk  wrote:
>>
>> 
>> I don’t like squash merging, don’t think I’ve ever used rebase merging
>> but don’t feel too strongly about it. Merge commit is my preference.
>>
>> On Thu 3. Dec 2020 at 15:06, Julien Pivotto 
>> wrote:
>>
>>> On 03 Dec 14:59, Bartłomiej Płotka wrote:
>>> > I am ok with this proposal.
>>> >
>>> > Long term I would even vote for squash only, but we discussed this in
>>> the
>>> > past.
>>>
>>> How would you merge release branches in master?
>>>
>>> >
>>> > Kind Regards,
>>> > Bartek Płotka (@bwplotka)
>>> >
>>> >
>>> > On Thu, 3 Dec 2020 at 14:20, Brian Brazil <
>>> brian.bra...@robustperception.io>
>>> > wrote:
>>> >
>>> > > On Thu, 3 Dec 2020 at 13:15, Ben Kochie  wrote:
>>> > >
>>> > >> I'd like to adjust our defaults for GitHub merging settings:
>>> > >>
>>> > >> Right now, we allow all three modes for PR merges.
>>> > >> * Merge commits
>>> > >> * Squash merging
>>> > >> * Rebase merging
>>> > >>
>>> > >> Proposal: Remove rebase merging (aka fast-forward merges) so that we
>>> > >> stick to merge/squash and merge.
>>> > >>
>>> > >
>>> > > I use rebase merges sometimes to keep the history clean from
>>> > > unnecessary merge commits, so I'd like it to hang around.
>>> > >
>>> > > Brian
>>> > >
>>> > >
>>> > >>
>>> > >> [image: image.png]
>>> > >>
>>> > >
>>> > >
>>> > > --
>>> > > Brian Brazil
>>> > > www.robustperception.io
>>> > >
>>> >
>>>
>>>
>>>
>>> --
>>> Julien Pivotto
>>> @roidelapluie
>>>
>>
>
> 

Re: [prometheus-developers] How to deal with outdated series in alerts

2020-11-15 Thread Matthias Rampke
Restart the application that produces metrics.

Generally, the client libraries will remember metrics even if they are not
being incremented anymore (they cannot know that they won't be incremented
again). Restarting clears the "seen" label set in each process.

Side note: this can also bite you the other way – if an endpoint is
unexpectedly never being requested after a restart (say, it wasn't hooked
up correctly in the code), this alert will not detect it, because generic
instrumentation of HTTP calls cannot know which valid paths there *should*
be.

In some cases, we alert like this because we want to check if a specific
business action has occurred. Instead of relying on the automatic HTTP
metrics, we can separately instrument the code with another metric that we
control, and where we can "increment by zero" for all possible label
combinations on startup.
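
To illustrate the "increment by zero" idea, here is a toy sketch of how a
labeled counter behaves in a client library. This is not the real
prometheus_client or client_golang API, just a minimal model of the mechanism:
a child series is created on first use, stays in the exposition until restart,
and pre-initializing known label values at startup makes a "rate == 0" alert
reliable:

```python
# Toy model of a client library's labeled counter. Real client
# libraries behave similarly: a child series appears on first use and
# is exported on every scrape until the process restarts.
class CounterVec:
    def __init__(self, name):
        self.name = name
        self.children = {}  # label value -> running total

    def inc(self, label, amount=1):
        # First use creates the child; later calls just add to it.
        self.children[label] = self.children.get(label, 0) + amount

    def expose(self):
        # Everything ever seen is exported, even if it stopped changing.
        return {f'{self.name}{{path="{label}"}}': value
                for label, value in self.children.items()}


requests = CounterVec("requests_total")

# Pre-initialize ("increment by zero") every path we care about at
# startup, so each series exists before any real traffic arrives ...
for path in ("/checkout", "/login"):
    requests.inc(path, 0)

# ... then only /login ever gets traffic:
requests.inc("/login")

# Both series are exposed; /checkout sits at 0 instead of being absent,
# so an alert on "no requests" can still fire for it.
print(requests.expose())
```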

/MR

On Sun, Nov 15, 2020, 17:00 Mounya A  wrote:

> Hello all,
>I have a question - how do we deal with  labels that are no longer
> there (intentionally stopped) in alerts. Will there be any threshold time
> to consider it as stale or unwanted.
>I have configured an alert when rate(requests[1m]) == 0. It is
> firing alerts for labels, that didn't show up in the past 7 days. I have
> intentionally stopped and don't want to alert in such conditions. How to
> deal with this.
>
>   Thanks in advance.
>



Re: [prometheus-developers] Alertmanager HA

2020-11-15 Thread Matthias Rampke
The listen address is the one that alertmanager binds to. Generally, this
can be 0.0.0.0 (all interfaces), I believe that is the default and thus
sometimes omitted.

It will try to guess which address it can be reached at by other AMs (which
address it should advertise to them). In some circumstances this doesn't
work right. A Kubernetes pod probably isn't one of those circumstances, but
specifying the address explicitly doesn't hurt either.

So in short, I would expect everything to work out of the box in a standard
Kubernetes pod, but you can be more explicit if you want to be.
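
For illustration, an explicit HA configuration in a Kubernetes pod spec might
look like the fragment below. The port, peer addresses, and the use of the
Downward API to inject POD_IP are assumptions for the example, not something
prescribed in this thread:

```yaml
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
args:
  # Bind on all interfaces inside the pod.
  - --cluster.listen-address=0.0.0.0:9094
  # Advertise the pod's own routable IP to the other Alertmanagers.
  - --cluster.advertise-address=$(POD_IP):9094
  # Hypothetical peers, e.g. via a headless service.
  - --cluster.peer=alertmanager-0.alertmanager:9094
  - --cluster.peer=alertmanager-1.alertmanager:9094
```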

/MR


On Sat, Nov 14, 2020, 20:41 Dudi Cohen  wrote:

> Hi all, I would like to configure Alertmanager for high availability in
> k8s.
> Can anyone please explain the difference
> between `cluster.advertise-address` and `cluster.listen-address`? I've seen
> either one used in different examples, sometimes with `0.0.0.0` and
> sometimes with `$(POD_IP)` so i'm also not sure what the address should be.
> The documentation is a bit unclear regarding the differences:
> "The cluster.advertise-address flag is required if the instance doesn't
> have an IP address that is part of RFC 6890
>  with a default route."
> Thanks!
>



Re: [prometheus-developers] Changing `master` to `main` across the org?

2020-07-03 Thread Matthias Rampke
I'm going to file an issue in the mysql exporter to discuss how we handle
this.

/MR

On Fri, Jul 3, 2020 at 9:00 AM Ben Kochie  wrote:

> Nice.
>
> On Thu, Jul 2, 2020 at 12:21 PM Julien Pivotto 
> wrote:
>
>> MySQL had decided
>>
>> https://mysqlhighavailability.com/mysql-terminology-updates/
>>
>>
>> Le mer. 1 juil. 2020 à 09:54, Matthias Rampke  a
>> écrit :
>>
>>> +1 for changing the default branch, +1 to seeing what GitHub does to
>>> help us, but if there is no movement on that we can also work out a plan
>>> ourselves.
>>>
>>> For the MySQL case, *if* we are willing to break compatibility and
>>> consistency, we could change the metric names even without waiting for the
>>> underlying commands to change. A risk is that we choose one terminology,
>>> and upstream ends up choosing another in a year. A chance is that we set a
>>> precedent and upstream eventually follows :)
>>>
>>> /MR
>>>
>>> On Wed, Jun 24, 2020 at 4:38 PM Frederic Branczyk 
>>> wrote:
>>>
>>>> Tobias’ idea sounds great, and I’m +1 with this!
>>>>
>>>> On Wed 24. Jun 2020 at 17:17, Tobias Schmidt  wrote:
>>>>
>>>>> +1
>>>>>
>>>>> As Github seems to be working on it already, I'd wait to see what they
>>>>> can provide to simplify the transition. Would it make sense to tweet from
>>>>> our Twitter account to let them know we're interested in such 
>>>>> functionality?
>>>>>
>>>>> I looked at other problematic terminologies across our code bases, and
>>>>> it'll be hard to do much about it until third-parties have changed it on
>>>>> their side, e.g.
>>>>> https://github.com/search?q=org%3Aprometheus+slave=Code
>>>>>
>>>>> On Wed, Jun 24, 2020 at 4:15 PM Bartłomiej Płotka 
>>>>> wrote:
>>>>>
>>>>>> +1 on this from Prometheus side, but also cc Thanos Team, I think we
>>>>>> should do that everywhere.
>>>>>>
>>>>>> Kind Regards
>>>>>> Bartek
>>>>>>
>>>>>> On Wed, 24 Jun 2020 at 16:06, Richard Hartmann <
>>>>>> richih.mailingl...@gmail.com> wrote:
>>>>>>
>>>>>>> Dear all,
>>>>>>>
>>>>>>> I talked about this with a few of you already and the general feeling
>>>>>>> of the room was "this is worthwhile, but it carries an opportunity
>>>>>>> cost". So: What do you think? Should this be a goal?
>>>>>>>
>>>>>>> CNCF is on board and assigned tech writer resources to switching
>>>>>>> over,
>>>>>>> and I suggested making CI/CD migrations etc part of Community Bridge
>>>>>>> outreach as it's a great intern project.
>>>>>>>
>>>>>>> It definitely makes sense to wait for GitHub to decide on a name and
>>>>>>> to provide tooling to minimize toil.
>>>>>>>
>>>>>>> Thoughts?
>>>>>>>
>>>>>>>
>>>>>>> Richard
>>>>>>>

Re: [prometheus-developers] Changing `master` to `main` across the org?

2020-07-01 Thread Matthias Rampke
+1 for changing the default branch, +1 to seeing what GitHub does to help
us, but if there is no movement on that we can also work out a plan
ourselves.

For the MySQL case, *if* we are willing to break compatibility and
consistency, we could change the metric names even without waiting for the
underlying commands to change. A risk is that we choose one terminology,
and upstream ends up choosing another in a year. A chance is that we set a
precedent and upstream eventually follows :)

/MR

On Wed, Jun 24, 2020 at 4:38 PM Frederic Branczyk 
wrote:

> Tobias’ idea sounds great, and I’m +1 with this!
>
> On Wed 24. Jun 2020 at 17:17, Tobias Schmidt  wrote:
>
>> +1
>>
>> As Github seems to be working on it already, I'd wait to see what they
>> can provide to simplify the transition. Would it make sense to tweet from
>> our Twitter account to let them know we're interested in such functionality?
>>
>> I looked at other problematic terminologies across our code bases, and
>> it'll be hard to do much about it until third-parties have changed it on
>> their side, e.g.
>> https://github.com/search?q=org%3Aprometheus+slave=Code
>>
>> On Wed, Jun 24, 2020 at 4:15 PM Bartłomiej Płotka 
>> wrote:
>>
>>> +1 on this from Prometheus side, but also cc Thanos Team, I think we
>>> should do that everywhere.
>>>
>>> Kind Regards
>>> Bartek
>>>
>>> On Wed, 24 Jun 2020 at 16:06, Richard Hartmann <
>>> richih.mailingl...@gmail.com> wrote:
>>>
 Dear all,

 I talked about this with a few of you already and the general feeling
 of the room was "this is worthwhile, but it carries an opportunity
 cost". So: What do you think? Should this be a goal?

 CNCF is on board and assigned tech writer resources to switching over,
 and I suggested making CI/CD migrations etc part of Community Bridge
 outreach as it's a great intern project.

 It definitely makes sense to wait for GitHub to decide on a name and
 to provide tooling to minimize toil.

 Thoughts?


 Richard





Re: [prometheus-developers] [VOTE] Allow Kelvin as temperature unit in some cases

2020-06-02 Thread Matthias Rampke
YES

Kelvin is the standard unit.

On Fri, May 29, 2020 at 9:12 AM Tom Wilkie  wrote:

> YES
>
> On Fri, May 29, 2020 at 8:13 AM Ben Kochie  wrote:
>
>> YES
>>
>> On Thu, May 28, 2020 at 8:52 PM Bjoern Rabenstein 
>> wrote:
>>
>>> Dear Prometheans,
>>>
>>> So far, we have recommended Celsius as the base unit for temperatures,
>>> despite Kelvin being the SI unit. That was well justified by the
>>> overwhelming majority of use cases, where Kelvin would be just
>>> weird. I'd really like to see more scientific usage of Prometheus, so
>>> I was never super happy with that recommendation, but since it was
>>> just a recommendation, I could live with it.
>>>
>>> Now Matt Layher came up with another, more technical use case: color
>>> temperature. Here, using Celsius would be even weirder. So there is a
>>> case where you clearly do not want to follow the suggestion of the
>>> linter, which is more in line with typical Prometheus use cases than
>>> my arguably somewhat far fetched time series for low-temperature
>>> experiments.
>>>
>>> Therefore, Matt suggested to make the metrics linter not complain
>>> about kelvin.
>>>
>>> I think this is a clearly defined problem with clear arguments and a
>>> clear disagreement between Brian Brazil on the one side and Matt and
>>> myself on the other side. The appropriate amount of effort has been
>>> spent to find a consensus. All arguments can be found in
>>> https://github.com/prometheus/client_golang/pull/761 and
>>> https://github.com/prometheus/docs/pull/1648 .
>>>
>>> I hereby call a vote for the following proposal:
>>>
>>> Allow Kelvin as a base unit in certain cases and update our
>>> documented recommendation and the linter code accordingly.
>>>
>>>
>>> (The changes may take the form of the two PRs out there, but the vote
>>> in about the general idea above, not the implementation detail.)
>>>
>>>
>>> The vote closes on 2020-06-04 20:00 UTC.
>>> --
>>> Björn Rabenstein
>>> [PGP-ID] 0x851C3DA17D748D03
>>> [email] bjo...@rabenste.in
>>>



Re: [prometheus-developers] [VOTE] Allow listing non-SNMP exporters for devices that can already be monitored via the SNMP Exporter

2020-05-29 Thread Matthias Rampke
YES

Looking beyond SNMP, I regularly encounter cases where it is technically
possible to monitor something using the graphite or statsd exporters. Where
this is painful I do guide users into a different integration that is more
special-purpose.
My stance is that we should promote what works best, not the lowest number
of somethings. There are cases where different options work best for
different environments.

/MR

On Fri, May 29, 2020 at 3:01 PM Bjoern Rabenstein 
wrote:

> On 28.05.20 21:30, Julius Volz wrote:
> >
> > I therefore call a vote for the following proposal:
> >
> > Allow adding exporters to
> https://prometheus.io/docs/instrumenting/exporters/
> >  although the devices or applications that they export data for can
> already be
> > monitored via SNMP (and thus via the SNMP Exporter). This proposal does
> not
> > affect other criteria that we may use in deciding whether to list an
> exporter
> > or not.
>
> YES
>
> It would obviously be better if those exporter listing decisions would
> "just work" with best judgement and we didn't need to vote about
> individual guidelines. However, the discussion in
> https://github.com/prometheus/docs/pull/1640 circled back to the SNMP
> Exporter argument multiple times. The single person on the one side of
> the argument explained their concerns, they were considered, but
> failed to convince. With the room leaning so obviously to the other
> side, one might ask why that circling back had to happen. The vote can
> help here to prune at least one branch of the meandering
> discussion. In particular with the often used reasoning that "that's
> how we did it before", it's good to know if perhaps "that's not how we
> want to do it in the future".
>
> Having said that, I do believe that we should have a more fundamental
> discussion about revising "our" criteria of accepting exporter
> listings. My impression is that the way it is done right now doesn't
> represent our collective intentions very well. Even worse, I am fairly
> certain that the process is partially defeating its purpose. In
> particular, instead of encouraging the community to join efforts, we
> are causing even more fragmentation. Which is really tragic, given how
> much time and effort Brian invests in the review work. Kickstarting
> such a discussion has been on my agenda for a long time, but given how
> my past attempts to move the needle went, it appeared to be a quite
> involved effort, for which I'm lacking the capacity. (Others told me
> similar things, which reminds me of the "capitulation" topic in
> RFC7282, where people cease to express their point of view because
> "they don't have the energy to argue against it". Votes, like this
> particular one, might then just be an attempt to get out of the many
> branches and loops created by persistently upholding objections that
> most of the room considers addressed already.)
>
>
> --
> Björn Rabenstein
> [PGP-ID] 0x851C3DA17D748D03
> [email] bjo...@rabenste.in
>



Re: [prometheus-developers] Chef Server SD

2020-05-04 Thread Matthias Rampke
As Ben said, write out file SD files, usually using chef-client, but you
can also script something else up if you want to decouple the SD lifecycle
from chef-client runs. The file SD is just JSON, so that's very easy to
generate in Ruby/Chef.

Because Chef queries are so slow, and Prometheus SD runs on every scrape,
you need to decouple them. The file SD mechanism already provides that.

/MR

On Sat, May 2, 2020 at 6:44 PM José Antonio Abraham Palomo <
abraham...@gmail.com> wrote:

> I think you are right, but the goal is to be able to easily enumerate
> hosts, as with the Azure SD.
>
> Metrics don't necessarily need to be scraped every 30 seconds; by default
> the interval could be longer.
>
> Do you know about a good practice for this?
>
>
>
> El sábado, 2 de mayo de 2020, 14:33:12 (UTC-4), Ben Kochie escribió:
>>
>> For Chef users, people typically implement Chef search based templates
>> and file_sd_configs.
>>
>> IMO, there's no real point into adding Chef support directly into
>> Prometheus. Chef searches are too slow/expensive to add directly to
>> something that might send multiple queries every 30-60 seconds.
>>
>> On Sat, May 2, 2020 at 8:00 PM José Antonio Abraham Palomo <
>> abrah...@gmail.com> wrote:
>>
>>> Hi, I hope all people here be fine, in my current job we are working
>>> with *chef server*, and I want to develop that Service Discovery for
>>> prometheus, but before I want to ask the community if there is a PR or
>>> someone working on it.
>>>
>>> Thanks regards.
>>>
>



Re: [prometheus-developers] Re: Call for Consensus: node_exporter 1.0.0 release

2020-04-23 Thread Matthias Rampke
I agree: if we plan on releasing 1.0 and already have an RC, a security
review of a feature marked experimental doesn't need to hold things up. We
should make it clear when we consider TLS "ready for serious use", but
that's a matter for another release.

/MR

On Thu, Apr 23, 2020 at 11:47 AM Richard Hartmann <
richih.mailingl...@gmail.com> wrote:

> Yes
>
> On Thu, Apr 23, 2020 at 1:40 PM Richard Hartmann
>  wrote:
> >
> > Dear all,
> >
> > This is a call for consensus within Prometheus-team on releasing
> > node_exporter 1.0.0 as-is.
> >
> > node_exporter 1.0.0-rc.0 has been cut on 2020-02-20[1]. It features
> > experimental TLS support[2]. We are planning to use this TLS support
> > as a template for all other exporters within and outside of Prometheus
> > proper. To make sure we didn’t build a footgun nor that we’re holding
> > it wrong, CNCF is sponsoring an external security review by Cure53. We
> > have not been giving a clear timeline but work should start in week 22
> > (May 25th) at the latest with no time to completion stated.
> >
> > There are two positions:
> > * Wait for the security review to finish before cutting 1.0.0
> > * Release ASAP, given that this feature is clearly marked as
> > experimental and it will not see wider testing until we cut 1.0.0
> >
> > I am asking Prometheus-team to establish rough consensus with a hum.
> >
> > Should the maintainers (Ben & Fish) be allowed to release without
> > waiting for the audit to finish?
> >
> >
> > Best,
> > Richard
> >
> > [1] https://github.com/prometheus/node_exporter/releases/tag/v1.0.0-rc.0
> > [2] https://github.com/prometheus/node_exporter/pull/1277
>
>
>
> --
> Richard
>



Re: [prometheus-developers] match a metric path and split it on a colon (:)

2020-04-07 Thread Matthias Rampke
This question is better suited to the prometheus-users mailing list, I'm
moving it there.

From the config I take it you are trying to achieve this in the graphite
exporter?

What you are trying to do is not possible with the (faster) glob match
type. You are on the right track with match_type: regex; your regex is just
not quite right.

Try

mappings:
- match: 'panos\.(.*):(.*)'
  match_type: regex
  name: test_graphite_metric
  labels:
device: $1
job: test_graphite
entity: $2

With this it works for me:

[I] ~/s/g/p/graphite_exporter (master|✔) $ echo "panos.device1:entity1 1234 "(date +%s) | nc localhost 9109
[I] ~/s/g/p/graphite_exporter (master|✔) $ curl -sSf http://127.0.0.1:9108/metrics | fgrep test
# HELP test_graphite_metric Graphite metric test_graphite_metric
# TYPE test_graphite_metric gauge
test_graphite_metric{device="device1",entity="entity1",job="test_graphite"} 1234

Hope that helps!
MR

On Tue, Apr 7, 2020 at 11:05 AM Panem78  wrote:

> Hello everyone,
> I am looking to match a metric path such as
> "*panos.device1:entity1*"
> in order to be translated into prometheus labels such as :
>
> test_graphite_metric{device="device1",entity="entity1",instance="localhost:9108",job="test_graphite",tsdb="prometheus"}
>
> Can I achieve this somehow with a regex, or do '*'s act as wildcards ONLY
> for dot-separated metrics?
>
> Some of my unsuccessful attempts include the following mapping
> (mentioning it here to make it somewhat clearer what I want to achieve)
>
> - match: panos.*\\:*
>   match_type: regex
>   name: test_graphite_metric
>   labels:
> device: $1
> job: test_graphite
> entity: $2
>
> Thanks in advance!
>
>


Re: [prometheus-developers] Extending tsdb tests

2020-03-22 Thread 'Matthias Rampke' via Prometheus Developers
I am a little concerned about "tests that are okay to fail" … especially
since merging one PR in that state means we have no signal for subsequent PRs.
Meanwhile corthanos might adjust their usage, breaking older branches that
don't even touch this (solvable with rebasing but adds additional round
trips).

How could we mark these tests as less important? How could we mark expected
failures?

Are there other testing strategies that would maintain the "this breaks
interfaces" signal without coupling things too closely?

/MR

On Sat, 21 Mar 2020, 22:02 Julien Pivotto,  wrote:

> Hi there,
>
> Would you think that it would be valuable to test the pull requests with
> the third parties that use our codebase (corthanos)?
>
> What I imagine would be that those tests would run tests against
> corthanos master, update the prometheus dependencies to the commit and
> the PR, and report back test results.
>
> I don't think failures over interface changes should prevent us from
> merging, but would they bring extra coverage and information regarding
> our code? Would that be valuable?
>
> --
>  (o-Julien Pivotto
>  //\Open-Source Consultant
>  V_/_   Inuits - https://www.inuits.eu
>


Re: [prometheus-developers] Prometheus Alert handle/resolve handling

2020-03-03 Thread 'Matthias Rampke' via Prometheus Developers
I think it helps to think about Alertmanager webhooks differently.

Alertmanager does not notify about individual alerts but about *groups* of
alerts. These groups come into being, and the number of alert instances in
them potentially changes over time. Subsequent webhooks about the same group
are updates about its status, not separate instances of anything. The
resolution notification closes the group.

By design, you cannot rely on 1 webhook call = 1 alert. To ensure delivery,
Alertmanager will err on the side of notifying more than needed if it is
not sure what you already got. This is especially the case with clustered
Alertmanager.

The webhooks contain a groupKey field. Use this field to identify which
group a notification is about, and update that one if you already have it
in your UI. That way, there is only one thing to close as well.


/MR

On Tue, 3 Mar 2020, 16:54 EnthuDeveloper,  wrote:

> Hi ,
> I am just curious to know if we have a custom webhook implemented to
> receive alerts from Prometheus Alertmanager then what should be the
> considerations for implementing the custom logic for resolved alerts ?
> My question is more from the standpoint that if repeat_interval is
> configured to a frequent interval, then our system might have received
> multiple alerts with firing status for the same alert. So when Prometheus
> identifies that the alert should be resolved, would it just send one alert
> with resolved status?
>
> My confusion is: shouldn't we mark all the existing alert instances with a
> matching alert name as resolved in our custom logic once Prometheus sees an
> alert transitioning from firing to resolved status?
>
> Note : This logic is needed on our custom framework side to avoid listing
> resolved alerts on the alert dashboard.
>
>
> Any input would be greatly appreciated.
>
>
> Thanks.
>


Re: [prometheus-developers] Checking if NFS is hanged or not using node_exporter.

2020-03-03 Thread 'Matthias Rampke' via Prometheus Developers
The trouble is that the only sure way to know if NFS hangs is to try and
use it. For one, the node exporter is not a health prober per se, but more
importantly, when NFS does hang, this leaves a thread permanently stuck in
an uninterruptible syscall. Very soon the exporter would run out of threads
and stop working altogether. For the same reason, do not put textfile
metrics files on NFS.

I believe the only way to do this safely is a separate check script/loop.
Don't start the script with cron; you need to make sure that once it is
stuck, it stops trying until it is unstuck again, or you will fill up your
process table with stuck processes that cannot be killed.

/MR


On Tue, 3 Mar 2020, 16:54 Yagyansh S. Kumar, 
wrote:

> Hello experts.
> I want to check if the NFS is hung (i.e. whether it is accessible from the
> server or not, and if so, what response time it is getting). I
> have already enabled the nfs and nfsd collectors, but haven't found any
> metric that can accurately tell me every time the NFS hangs. Any help
> would be appreciated.
> Thanks in advance.
>


Re: [prometheus-developers] Official Prometheus video playlist

2020-03-02 Thread 'Matthias Rampke' via Prometheus Developers
Do we have a somewhat up-to-date curated list of Prometheus-related talks /
recordings that are already around?

/MR

On Sun, Mar 1, 2020 at 8:45 AM Julius Volz  wrote:

> I always wanted to create some YouTube videos covering certain topics, but
> have never done so, because I have certain quality expectations and it costs
> a lot of (free and unpaid) time and energy to create good videos. Videos
> have some upsides like being able to visualize and quickly demonstrate
> things via screencasts, animations, etc., or just by more casual
> explanation than in a documentation.
>
> But they also have a bunch of downsides, like:
>
> - It's almost impossible to keep them up to date unless you have a
> professional video recording team re-recording things whenever something
> changes.
> - It's harder with videos to just skim a topic quickly and find just the
> information you care about (for that, written documentation is usually
> easier).
> - They generally just take a large effort to do properly, and the joy and
> use you get from them as a learner really depends a lot on the quality of
> the instructor and the video production quality.
>
> I'd still love to explain certain topics via video at some point if I find
> the time and energy, but it would probably never be able to be a
> replacement for documentation, just explaining certain key concepts or
> demoing how to set up Prometheus in various ways, etc. So more of a
> complementary resource.
>
> On Sat, Feb 29, 2020 at 10:25 PM apoorve kalot 
> wrote:
>
>> Hello all,
>>
>> I have some ideas which I want to work on under Prometheus. I don't
>> know if they would be of that level, but I need the opinion of you all.
>>
>> The idea is, *developing a series of official YouTube tutorials which
>> will be part of official Documentation* for working and playing with
>> Prometheus, like from basics of node exporter to writing custom exporter in
>> custom language, or working with 3rd party exporters,
>>
>> *Why it is necessary/should be done*: In the community IRC, there are
>> times when people ask for information which might be redundant, or
>> might be useful to other users at a later point in time, and these
>> tutorials, or videos about basic misconceptions, would help to clear up
>> all these related problems.
>>
>> *Example*: Some of the other open source communities have these
>> playlists as part of their official documentation, which helps the
>> community as well as people who want to use/contribute to the product
>> itself, like Kubernetes, TensorFlow, etc.
>>
>> *Advantages of using it*:
>> * Reduces redundant discussion of basic usage of Prometheus and other
>> related tools.
>> * Would indirectly help in extending support to the community itself, and
>> thus would also help in getting new members/contributors who might turn
>> into individual maintainers as well.
>> * Learning through videos is more efficient compared to documentation or
>> written material [not everyone might agree with this opinion, but everyone
>> would agree that the former is faster than the latter]. For serious bugs
>> and problems, one can always refer back to and ask for help from the
>> Prometheus community IRC.
>>
>> These videos, other than basic implementation, would also cover basic
>> practices involved in using Prometheus and its services.
>>
>> Note: I haven't selected the specific topics which should be included in
>> these videos yet, but I would be asking other Developers/Maintainers of
>> Prometheus as well as regular users of Prometheus. [I asked some of the
>> maintainers on IRC and they liked the idea, so I posted it here with some
>> courage :)]
>>
>> Hoping to get your responses/criticism on this. Thank you.
>>

Re: [prometheus-developers] [VOTE] New governance document

2020-02-20 Thread 'Matthias Rampke' via Prometheus Developers
Yes

On Thu, 20 Feb 2020, 06:32 Julien Pivotto,  wrote:

> Yes
>
> - Original Message -
> From: Richard Hartmann 
> To: Prometheus Developers 
> Sent: Wed, 19 Feb 2020 21:43:52 +0100 (CET)
> Subject: [prometheus-developers] [VOTE] New governance document
>
> Dear all,
>
> I am hereby calling for a vote on merging
> https://github.com/prometheus/docs/pull/1552 at commit
> de2266c36d8a2ea1f139f97632808e12b354bb76.
>
> References and discussion can be found in said PR and in the email
> thread "Pre-vote feedback on proposed governance changes" on
> prometheus-team (not visible to the public).
>
> This vote will run until 2020-02-26 23:59 UTC or when supermajority
> has been reached, whichever comes first.
>
>
> Please note that while we are voting in public, only Prometheus team
> members are eligible to vote.
>
>
> Best,
> Richard
>


Re: [prometheus-developers] Moving "official" JIRA Alertmanager integration (github.com/free/jiralert) to prometheus-community Organization.

2020-02-17 Thread 'Matthias Rampke' via Prometheus Developers
+1. Given the prominence of "warning alert = ticket" in SRE lore, having a
building block available for this in -community is a good thing.

/MR

On Sun, Feb 16, 2020 at 11:08 AM Bartłomiej Płotka 
wrote:

> Hi,
>
> As per https://github.com/prometheus-community/community/issues/6 I would
> like to propose moving the https://github.com/free/jiralert project to the
> prometheus-community organization.
>
> I am a maintainer of the Jiralert and I am happy to continue those duties
> once moved to prometheus-community org. Initial author, Alin
>  approved the idea as well. Of course, if anyone
> wants to help us in maintaining Jiralert, let us know. (:
>
> I believe it makes sense to host it in our common community org, as this is
> the official JIRA integration for Alertmanager as per
> https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver.
> Reasons are: more visibility and collaboration with other community
> projects on best maintenance practices, etc.
>
> Any objections? (:
>
> Kind Regards,
> Bartek
>


Re: [prometheus-developers] prometheus/prometheus Changelog Management

2020-02-14 Thread 'Matthias Rampke' via Prometheus Developers
How do I make it so there is no entry?

On Fri, 14 Feb 2020, 18:07 Simon Pasquier,  wrote:

> Correct, the PR is in the promu repository (I've updated it just now
> to address comments from Brian though it should have been done long
> ago):
> https://github.com/prometheus/promu/pull/170
>
> Right now, it leverages the PR labels to classify the change (BUGFIX,
> CHANGE, ...) and it uses the PR's title as the changelog entry. It
> wouldn't be hard to mimic what Kubernetes is doing and lookup the
> changelog entry in the PR description as Matthias suggested. If
> nothing is found it can always fallback to the title.
> I agree that asking every PR to include the changelog update might not
> be convenient (both for the contributor and the maintainer).
>
> On Fri, Feb 14, 2020 at 8:10 AM Frederic Branczyk 
> wrote:
> >
> > I recall Simon having a tool that would largely generate the changelog
> automatically, that worked pretty well last time I was release shepherd.
> Otherwise I'm also happy to discuss a process like in Kubernetes where the
> changelog item is written into the PR. On Thanos we have in the PR template
> that people have ensured that the changelog item was added respective to
> the change. Seems like there are options, I personally would favor
> something that would be done at contribution time, so not all the
> responsibility falls on the release shepherd as it does today, and more
> generally it seems like the person contributing the change probably is also
> a good candidate to describe it in the changelog.
> >
> > On Fri, 14 Feb 2020 at 08:05, Callum Styan 
> wrote:
> >>
> >> Hi all,
> >>
> >> I'd like to start a discussion around changing how we manage the
> prometheus/prometheus changelog, specifically the fact that the changelog
> is generated manually by the release shepherd as part of the release
> process.
> >>
> >> We can discuss options for what the new process would look like, such
> as requiring PR's to include changelog entries before merging or the next
> release shepherd periodically updating the changelog prior to the release,
> in more detail later. However I'd first like to get a sense of whether
> anyone else feels strongly about either changing or not changing this part
> of the release process.
> >>
> >> Thanks,
> >> Callum.
> >>


Re: [prometheus-developers] prometheus/prometheus Changelog Management

2020-02-14 Thread 'Matthias Rampke' via Prometheus Developers
The friction is real – DCO is not a submission quality issue but a
roundtrip one. This would be even more difficult with wording.

I agree that in *many* cases contributors can write the changelog entry;
having the field in the PR template would encourage them to do so
proactively.

/MR

On Fri, Feb 14, 2020 at 1:12 PM Ben Kochie  wrote:

> DCO is a strawman argument. We're always going to have issues with
> submission quality.
>
> I've had very good luck asking for changelog entries on the node_exporter.
>
> On Fri, Feb 14, 2020 at 8:22 AM Brian Brazil <
> brian.bra...@robustperception.io> wrote:
>
>> On Fri, 14 Feb 2020 at 07:10, Frederic Branczyk 
>> wrote:
>>
>>> I recall Simon having a tool that would largely generate the changelog
>>> automatically, that worked pretty well last time I was release shepherd.
>>> Otherwise I'm also happy to discuss a process like in Kubernetes where the
>>> changelog item is written into the PR. On Thanos we have in the PR template
>>> that people have ensured that the changelog item was added respective to
>>> the change. Seems like there are options,
>>>
>>
>>
>>
>>> I personally would favor something that would be done at contribution
>>> time, so not all the responsibility falls on the release shepherd as it
>>> does today, and more generally it seems like the person contributing the
>>> change probably is also a good candidate to describe it in the changelog.
>>>
>>
>> This is additional friction to contributions, we already have enough fun
>> getting the DCO signed. It's also an additional burden on every single PR,
>> we need to individually figure out if it's worth mentioned in the changelog
>> (many PRs aren't) and then get it in the right category, with good wording,
>> and handling the regular conflicts as everyone would be touching the same
>> lines in the same file.
>>
>> Even with all that the release shepard would still need to go through all
>> the commits and double check that nothing was missed, plus fixing poor
>> wording. I don't think saving 2-3 minutes off a release is worth all these
>> downsides.
>>
>> Brian
>>
>>
>>>
>>> On Fri, 14 Feb 2020 at 08:05, Callum Styan 
>>> wrote:
>>>
 Hi all,

 I'd like to start a discussion around changing how we manage the
 prometheus/prometheus changelog, specifically the fact that the changelog
 is generated manually by the release shepherd as part of the release
 process.

 We can discuss options for what the new process would look like, such
 as requiring PR's to include changelog entries before merging or the next
 release shepherd periodically updating the changelog prior to the release,
 in more detail later. However I'd first like to get a sense of whether
 anyone else feels strongly about either changing or not changing this part
 of the release process.

 Thanks,
 Callum.

>>
>>
>> --
>> Brian Brazil
>> www.robustperception.io
>>

Re: [prometheus-developers] prometheus/prometheus Changelog Management

2020-02-14 Thread 'Matthias Rampke' via Prometheus Developers
No – I mean to explicitly *not* use commit messages or anything that
requires the contributor to change. I want to keep it in the PR
*description* that is editable through the GitHub UI.

/MR

On Fri, Feb 14, 2020 at 1:05 PM Bartłomiej Płotka 
wrote:

> I think I like this idea of reusing a commit message for this! We can
> definitely build some automation around this and it looks like such
> workflow would be a huge improvement!
>
> Thanks Matthias.
>
> Kind Regards,
> Bartek
>
> On Fri, 14 Feb 2020 at 13:02, 'Matthias Rampke' via Prometheus Developers <
> prometheus-developers@googlegroups.com> wrote:
>
>> In the exporters that I maintain I specifically ask contributors not to
>> fill in the changelog. I want to keep a somewhat editorial voice there. I
>> often rephrase changes to highlight what the change means for users, and
>> usually provide extra remarks like upgrade instructions or deprecation
>> notices.
>>
>> Having changelog entries added as part of PR commits also leads to
>> endless merge conflicts.
>>
>> I usually update the changelog right after merging. I would appreciate
>> building this into the PR flow in a way where I can write the changelog
>> entry without having to use a command line.
>>
>> In Kubernetes, this seems to be done automatically by bots based on a
>> section in the PR description. A big benefit of that is that as committers,
>> we can edit it during review.
>>
>> My ideal flow would be:
>>
>> - the PR template has an empty entry for the changelog. A comment
>> encourages contributors to fill it in but notes that the maintainers
>> will take care of it
>> - it also has an optional entry for additional changelog remarks (we can
>> leave this out if it's too much)
>> - as the maintainer, if I want to change or edit it, I edit the PR
>> description
>> - if we don't want an entry for the PR, we delete it or leave it empty
>> - once I hit merge, an automatic mechanism adds both to the changelog
>> (can CircleCI commit?)
>> - when creating the release, the shepherd only looks over the changelog,
>> possibly adds or consolidates notes about an overarching theme (say, if
>> multiple PRs together introduce a change)
>>
>> This allows users to contribute the changelog entry, but we can edit it

Re: [prometheus-developers] prometheus/prometheus Changelog Management

2020-02-14 Thread 'Matthias Rampke' via Prometheus Developers
In the exporters that I maintain I specifically ask contributors not to
fill in the changelog. I want to keep a somewhat editorial voice there. I
often rephrase changes to highlight what the change means for users, and
usually provide extra remarks like upgrade instructions or deprecation
notices.

Having changelog entries added as part of PR commits also leads to endless
merge conflicts.

I usually update the changelog right after merging. I would appreciate
building this into the PR flow in a way where I can write the changelog
entry without having to use a command line.

In Kubernetes, this seems to be done automatically by bots based on a
section in the PR description. A big benefit of that is that as committers,
we can edit it during review.

My ideal flow would be:

- the PR template has an empty entry for the changelog. a note
encourages contributors to fill it in but notes that the maintainers will
take care of it
- it also has an optional entry for additional changelog remarks (we can
leave this out if it's too much)
- as the maintainer, if I want to change or edit it, I edit the PR
description
- if we don't want an entry for the PR, we delete it or leave it empty
- once I hit merge, an automatic mechanism adds both to the changelog (can
CircleCI commit?)
- when creating the release, the shepherd only looks over the changelog,
possibly adds or consolidates notes about an overarching theme (say, if
multiple PRs together introduce a change)

This allows users to contribute the changelog entry, but we can edit it
without the back-and-forth of changing commits. It splits the
responsibility between the committer (to edit the changelog entry, if one
is desired, for the concrete change), and the release shepherd (to make
sure the changelog as a whole is good). The release shepherd would no
longer need to look at every merge since the last release. Having a "field"
in the description makes it easy for committers to edit, but keeps the
distinction between "what does this PR do" and "what does this mean for
users".
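
To make this concrete, such a PR-template field might look like the sketch below. This is an illustration only: the section names and the idea that release tooling copies the entry are assumptions modeled on the Kubernetes release-note flow, not an existing Prometheus mechanism.

```markdown
## Changelog entry
<!-- The release tooling would copy this line into the changelog.
     Maintainers may edit it during review; leave it empty if this
     PR should not get an entry. -->
- [ENHANCEMENT] ...

## Additional changelog remarks (optional)
<!-- Upgrade instructions, deprecation notices, etc. -->
```

Because the entry lives in the PR description rather than in a committed file, editing it never causes merge conflicts.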

/MR


On Fri, 14 Feb 2020, 08:22 Brian Brazil, 
wrote:

> On Fri, 14 Feb 2020 at 07:10, Frederic Branczyk 
> wrote:
>
>> I recall Simon having a tool that would largely generate the changelog
>> automatically, that worked pretty well last time I was release shepherd.
>> Otherwise I'm also happy to discuss a process like in Kubernetes where the
>> changelog item is written into the PR. On Thanos, the PR template asks
>> contributors to confirm that a changelog item was added for the change.
>> Seems like there are options.
>>
>
>
>
>> I personally would favor something that would be done at contribution
>> time, so not all the responsibility falls on the release shepherd as it
>> does today, and more generally it seems like the person contributing the
>> change probably is also a good candidate to describe it in the changelog.
>>
>
> This is additional friction to contributions, we already have enough fun
> getting the DCO signed. It's also an additional burden on every single PR,
> we need to individually figure out if it's worth mentioning in the changelog
> (many PRs aren't) and then get it in the right category, with good wording,
> and handling the regular conflicts as everyone would be touching the same
> lines in the same file.
>
> Even with all that the release shepherd would still need to go through all
> the commits and double check that nothing was missed, plus fixing poor
> wording. I don't think saving 2-3 minutes off a release is worth all these
> downsides.
>
> Brian
>
>
>>
>> On Fri, 14 Feb 2020 at 08:05, Callum Styan  wrote:
>>
>>> Hi all,
>>>
>>> I'd like to start a discussion around changing how we manage the
>>> prometheus/prometheus changelog, specifically the fact that the changelog
>>> is generated manually by the release shepherd as part of the release
>>> process.
>>>
>>> We can discuss options for what the new process would look like, such as
>>> requiring PRs to include changelog entries before merging or the next
>>> release shepherd periodically updating the changelog prior to the release,
>>> in more detail later. However, I'd first like to get a sense of whether
>>> anyone else feels strongly about either changing or not changing this part
>>> of the release process.
>>>
>>> Thanks,
>>> Callum.
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Prometheus Developers" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to prometheus-developers+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/prometheus-developers/CAN2d5OTjOrCfpRF_NXGcQB5nOz%3DVPgnz3LdEk15ucV4PFz%2B4BQ%40mail.gmail.com
>>> 
>>> .
>>>

Re: [prometheus-developers] How to delay de-duplication of alerts in AlertManager

2020-02-12 Thread 'Matthias Rampke' via Prometheus Developers
Hey,

this is a new question, and not really development related. You'll more
likely get an answer if you open a new thread on the prometheus-users
mailing list.

Thank you!
Matthias

On Tue, Feb 11, 2020 at 10:52 PM Dhiman Barman  wrote:

> Hi,
>
> We have the following metrics in our Prometheus instances. There are three
> instances of Prometheus and a similar number of Alertmanager instances. The
> AM instances form a mesh. Each Prometheus instance sends an alert to all the
> AM instances.  The metrics that we have in Prometheus are as follows:
>
> metric_name{label1=a, label2=b}
> metric_name{label1=b, label2=a}
>
> Is it possible to de-dup two alerts based on the above (label, value)
> combinations? The metric name is the same, and we want to create one JIRA
> ticket where label1 and label2 have values which are related.
> label1 could be "from" and label2 could be "to".
> If it is possible to do it in Alertmanager without having to generate a
> combined label in Prometheus, can someone show an example configuration?
>
> Thanks,
> Dhiman
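
For illustration only: Alertmanager itself has no built-in way to treat label-swapped alerts as one, so the question above effectively asks for an order-independent key. The idea amounts to sorting the two label values before comparing, sketched here in Python (the `label1`/`label2` names come from the example metrics; any bridge between Alertmanager and JIRA doing this is hypothetical):

```python
def dedup_key(metric_name: str, labels: dict) -> tuple:
    """Order-independent de-dup key for alerts whose two labels
    (e.g. "from"/"to") may appear with their values swapped."""
    # Sorting makes (a, b) and (b, a) produce the same key.
    pair = tuple(sorted((labels["label1"], labels["label2"])))
    return (metric_name, *pair)

# Both orderings map to the same key, so a webhook receiver sitting
# between Alertmanager and JIRA could collapse them into one ticket.
k1 = dedup_key("metric_name", {"label1": "a", "label2": "b"})
k2 = dedup_key("metric_name", {"label1": "b", "label2": "a"})
assert k1 == k2 == ("metric_name", "a", "b")
```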
>
>
>
>
>
>
>
>
> On Wed, Aug 14, 2019 at 1:40 PM Matthias Rampke  wrote:
>
>> The second notification is regularly only sent after the group interval,
>> which defaults to 5 minutes. If you're getting duplicates in under a
>> minute, it's caused by a failure in clustering. By design, if the
>> Alertmanager instances can't communicate that they sent the notification,
>> the next in the cluster will.
>>
>> This should be traceable from the Alertmanager logs if you set verbosity
>> high enough.
>>
>> The main question then is why they can't communicate reliably. It could
>> be something in your environment.
>>
>> /MR
>>
>> On Wed, 14 Aug 2019, 20:14 Dhiman Barman,  wrote:
>>
>>> Hi,
>>>
>>> We have multiple instances of Prometheus running and the same number of
>>> AlertManager instances forming a peer-mesh.
>>> We are observing that in production 15-20% of de-dup alerts are failing
>>> - that is, JIRA is creating new tickets. This happens
>>> when duplicate alerts are sent by AlertManager in quick succession.
>>>
>>> Is there a way to configure AlertManager so that the first alert is sent to
>>> JIRA immediately and subsequent duplicate alerts can be delayed
>>> by a configurable amount of time?
>>>
>>> Yes, it's possible that the first and duplicate alerts may be served by
>>> different instances of AlertManager.
>>>
>>>
>>> Thanks,
>>> Dhiman
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Prometheus Developers" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to prometheus-developers+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/prometheus-developers/a0fab027-3e73-47c5-a330-6a9a2c64ad11%40googlegroups.com
>>> <https://groups.google.com/d/msgid/prometheus-developers/a0fab027-3e73-47c5-a330-6a9a2c64ad11%40googlegroups.com?utm_medium=email_source=footer>
>>> .
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-developers+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-developers/CAFU3N5V_EYUffSsoDhrqbG9Bo2fzru1cVhj7mdjPcC_ZDMSNhg%40mail.gmail.com.