The application publishes metrics to a remote-write endpoint on a 
Prometheus shard at the moment. In the future we plan to migrate to the 
pull model as much as possible, once we have built service discovery for 
our native deployments, but for backward compatibility we are taking this 
approach for now. 
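
For context, the push path looks roughly like the following: the receiving 
shard runs Prometheus with the remote-write receiver enabled, and the 
senders post to its /api/v1/write endpoint. A minimal sketch with 
placeholder hostnames (the exact flag depends on the Prometheus version):

  # On the receiving shard (recent Prometheus versions):
  prometheus --config.file=prometheus.yml --web.enable-remote-write-receiver

  # On a sender that speaks remote write (e.g. a Prometheus agent), in prometheus.yml:
  remote_write:
    - url: "http://prom-shard-1.internal.example:9090/api/v1/write"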



On Sunday, August 7, 2022 at 3:48:44 PM UTC-4 [email protected] wrote:

> To put it another way: if you can read every event raw from a log line, 
> e.g. every request logs "took X milliseconds", there are better ways to 
> reconstruct metrics for your use case.
>
> On Sun, Aug 7, 2022 at 9:46 PM Ben Kochie <[email protected]> wrote:
>
>> Right, but more basically: how do you get this information from the 
>> application right now? Are you reading logs? Does it emit statsd data? 
>>
>> You're saying what, but not how.
>>
>> On Sun, Aug 7, 2022 at 7:15 PM Johny <[email protected]> wrote:
>>
>>> The gauge contains the most recent value of a metric, sampled every 
>>> minute or so and exported by a user application, e.g. a latency sampled 
>>> at 1-minute intervals by a client application. Let's presume this time 
>>> series (scraped by Prometheus or sent via remote write) is complete, 
>>> containing all the information we need for calculating derived 
>>> statistics. In the most raw form, you can fetch the data points, sort 
>>> them, and calculate a percentile. Incidentally, the legacy backend has 
>>> efficient mechanisms to calculate percentiles by scanning and reducing 
>>> data with map-reduce. 
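>>>
>>> For a single series, that raw-form computation maps onto something like 
>>> this in PromQL (a rough sketch; the quantile and window are only 
>>> examples):
>>>
>>>   quantile_over_time(0.99, http_duration_milliseconds_gauge{instance="instance1:port1"}[1h])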
>>>
>>>
>>>  
>>> On Sunday, August 7, 2022 at 7:49:05 AM UTC-4 [email protected] wrote:
>>>
>>>> So, let's take a step back and find out some more information, because 
>>>> this question is sounding a lot like an XY Problem.
>>>>
>>>> How are the current applications generating their metrics right now?
>>>> How are you getting the data to create these histograms?
>>>>
>>>> On Sun, Aug 7, 2022 at 9:23 AM Johny <[email protected]> wrote:
>>>>
>>>>> We are migrating our telemetry backend from a legacy database to 
>>>>> Prometheus and need to estimate percentiles on gauge metrics published 
>>>>> by user applications. Estimating percentiles on a gauge metric in 
>>>>> Prometheus is not feasible, and for a number of reasons the client 
>>>>> applications will be difficult to modify to start publishing 
>>>>> histograms. 
>>>>>
>>>>> I am exploring the feasibility of creating a histogram via a recording 
>>>>> rule in Prometheus based on the metrics published by users (a rough 
>>>>> sketch of what I mean follows the examples below). The partial work 
>>>>> put in so far seems inefficient and also illegible. Is there a 
>>>>> recommended approach to solve this problem? As stated earlier, it will 
>>>>> be extremely hard to solve this on the client side, so I am looking 
>>>>> for a solution within Prometheus.
>>>>>
>>>>> *Current metric is a gauge with values representing request latency.*
>>>>> http_duration_milliseconds_gauge{instance="instance1:port1"}[1h]
>>>>> 1659752188  100
>>>>> 1659752068  120
>>>>> ..
>>>>> 1659751708  150
>>>>> 1659751588  160
>>>>>
>>>>> *Desired histogram after conversion -*
>>>>> http_duration_milliseconds_hist_bucket{instance="instance1:port1", le="100"}  133
>>>>> http_duration_milliseconds_hist_bucket{instance="instance1:port1", le="120"}  222
>>>>> http_duration_milliseconds_hist_bucket{instance="instance1:port1", le="140"}  311
>>>>> http_duration_milliseconds_hist_bucket{instance="instance1:port1", le="160"}  330
>>>>> http_duration_milliseconds_hist_bucket{instance="instance1:port1", le="180"}  339
>>>>> http_duration_milliseconds_hist_bucket{instance="instance1:port1", le="200"}  340
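>>>>>
>>>>> To make the question concrete, the kind of per-bucket recording rule I 
>>>>> mean is roughly this (a sketch only; the bucket boundaries, the 1h 
>>>>> window and the 1m subquery step are illustrative, and the counts are 
>>>>> approximate because the subquery resamples the gauge rather than using 
>>>>> the raw scrape timestamps):
>>>>>
>>>>> groups:
>>>>>   - name: gauge_to_histogram
>>>>>     rules:
>>>>>       - record: http_duration_milliseconds_hist_bucket
>>>>>         expr: sum_over_time((http_duration_milliseconds_gauge <= bool 100)[1h:1m])
>>>>>         labels:
>>>>>           le: "100"
>>>>>       - record: http_duration_milliseconds_hist_bucket
>>>>>         expr: sum_over_time((http_duration_milliseconds_gauge <= bool 120)[1h:1m])
>>>>>         labels:
>>>>>           le: "120"
>>>>>       # ... one rule per bucket boundary, ending with le: "+Inf"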
>>>>>
>>
