As Stuart says, use the Pushgateway and have Prometheus scrape it every minute. When you query, you will get the latest value pushed. That is the standard way to handle this.
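As a minimal sketch of the push side (the metric names, the `daily_report` job label, and the Pushgateway address are made up for illustration, not taken from this thread):

```shell
# Build an exposition-format payload for the daily job's metrics.
# Includes a success-timestamp metric, per the pattern Stuart describes below.
payload="$(printf 'daily_report_rows 12345\nlast_success_timestamp_seconds %s\n' "$(date +%s)")"
echo "$payload"

# Push it to a Pushgateway (adjust host:port for your deployment):
# echo "$payload" | curl --data-binary @- http://localhost:9091/metrics/job/daily_report
```

The Pushgateway keeps the last pushed value for the `daily_report` group, so Prometheus can scrape it every minute even though the job only runs once a day.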
On Tue, Nov 17, 2020 at 1:53 PM kiran <[email protected]> wrote:

> So there is no way to store data in Prometheus where the frequency is once
> a day?
>
> On Monday, November 16, 2020, Stuart Clark <[email protected]> wrote:
>
>> On 16/11/2020 04:06, kiran wrote:
>>
>> Thank you, Stuart.
>> If I push the data using the Pushgateway once every 24 hours, is querying
>> still a problem (you mentioned the metric is marked as stale and most
>> queries won't work)?
>> If I have data and want the most recent metric older than 2 minutes (the
>> recommended maximum scrape interval), let's say within 24 hours of the
>> query time, what query can I use? Depending on the query time, the most
>> recent metric could be anywhere from a couple of minutes to at most 24
>> hours older than the current time.
>>
>> If you use the Pushgateway you'd have Prometheus scrape it every 2
>> minutes, so there would be no issues with staleness.
>>
>> On Fri, Nov 13, 2020 at 4:03 AM Stuart Clark <[email protected]> wrote:
>>
>>> On 13/11/2020 02:47, kiran wrote:
>>> > Hello all
>>> >
>>> > I have a use case where a metric arrives once every 24 hours, and that
>>> > time varies per team. For each team we want the most recent value of
>>> > that metric. The issue is that I don't know the offset or time at which
>>> > that metric was last updated. So do I need to use offset 24h?
>>> > Does using offset match samples more than 24 hours before the current
>>> > time, or the latest metric within the past 24 hours? I can't work out
>>> > the definition of offset from the documentation.
>>>
>>> Offset just looks at 24 hours before now (or whatever time is specified
>>> for the query).
>>>
>>> As with all metrics in Prometheus you need to ensure they are
>>> successfully scraped regularly, with the maximum recommended scrape
>>> interval being about 2 minutes. Finding the latest value is then a very
>>> simple query.
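To make that concrete, the queries involved might look like this (the metric name `my_daily_metric` is made up for illustration):

```promql
# Latest value -- works when the metric is scraped regularly:
my_daily_metric

# `offset 24h` does NOT search backwards for the latest sample; it simply
# evaluates the expression at a point exactly 24 hours before the query time:
my_daily_metric offset 24h

# In newer Prometheus versions (2.26+), this returns the most recent sample
# within the past 24 hours, even across staleness gaps:
last_over_time(my_daily_metric[24h])
```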
>>> If you scrape less frequently you will end up with a metric which is
>>> regularly marked as stale, and therefore most queries won't work - they
>>> just won't find any valid data and will return nothing.
>>>
>>> If the source of this daily change can't be scraped directly every 2
>>> minutes, this could be a use case for the Pushgateway or the textfile
>>> collector of the node exporter. Your daily process would publish the
>>> metrics to either, where they are kept so they can be regularly scraped.
>>> One common pattern is to also include a metric whose value is the
>>> timestamp the process started or finished, to allow you to detect and
>>> alert on failures of this daily process.

> --
> You received this message because you are subscribed to the Google Groups
> "Prometheus Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/prometheus-users/CAOnWYZWmcswRQLTH%3Ds%3DRTfPJHAD54RkTN5N3jArCgpSgjAetyA%40mail.gmail.com
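The timestamp-metric pattern Stuart mentions can be wired up roughly as follows (two separate file fragments; job names, file names, and the 25-hour threshold are illustrative):

```yaml
# prometheus.yml (fragment) -- scrape the Pushgateway itself:
scrape_configs:
  - job_name: pushgateway
    honor_labels: true    # keep the job/instance labels set at push time
    static_configs:
      - targets: ['localhost:9091']

# rules.yml (fragment) -- fire if the daily job hasn't reported success
# in over 25 hours (metric name matches the hypothetical push example):
groups:
  - name: daily-jobs
    rules:
      - alert: DailyJobMissed
        expr: time() - last_success_timestamp_seconds > 25 * 3600
        for: 10m
```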

