Hi Goutham,
Is this feature in prod?
Can we backfill data now?

Thanks and Regards
Saket

On Thursday, September 13, 2018 at 8:36:12 PM UTC+5:30, Goutham 
Veeramachaneni wrote:
>
> Hi Dhiman,
>
> It'll tackle backfilling. The data need not be contiguous.
>
> Thanks,
> Goutham.
>
> On Sep 13 2018, at 8:27 am, [email protected] wrote:
>
>
> Hi Goutham,
>
> Thanks for your response. I want to distinguish the backfilling 
> requirement from the gap-avoidance requirement.
>
> In backfilling, one needs to insert old data into the TSDB. The data is 
> old w.r.t. the current Prometheus timestamp, and it is not necessarily 
> contiguously available on the timeline, even in the past.
>
> In gap-avoidance, one needs to insert old but contiguous data into the 
> TSDB; there is no gap in the data stream. As I mentioned in my original 
> post, suppose Prometheus acts as a Kafka consumer. When Prometheus 
> connects to the Kafka bus, data from earlier times might already be on 
> the bus, so Prometheus has to read the old data before consuming new 
> data.
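The catch-up behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not Prometheus code: a plain list stands in for the Kafka bus, and `drain_then_follow` is a made-up name.

```python
# Hypothetical sketch of the gap-avoidance scenario: a consumer that
# drains the backlog already on the bus (old samples) before switching
# to new data. A plain list of dicts stands in for the Kafka bus.

def drain_then_follow(bus, connect_time):
    """Yield old samples first (in timestamp order), then new ones."""
    backlog = [s for s in bus if s["ts"] < connect_time]
    live = [s for s in bus if s["ts"] >= connect_time]
    # Old data must be ingested before new data so the TSDB sees
    # timestamps in order and no gap appears in the stream.
    for sample in sorted(backlog, key=lambda s: s["ts"]):
        yield sample
    for sample in live:
        yield sample

bus = [{"ts": t, "value": float(t)} for t in (100, 50, 200, 150)]
ordered = list(drain_then_follow(bus, connect_time=150))
# Backlog (ts < 150) is yielded first, sorted: 50, 100; then 200, 150
```

The point of the sketch is only the ordering constraint: the backlog must be fully consumed, oldest first, before live data is appended.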
>
> Is the work in progress related to backfilling or to gap-avoidance?
>
> Thanks,
> Dhiman
>
>
>
>
> On Wednesday, September 12, 2018 at 2:34:31 AM UTC-7, Goutham 
> Veeramachaneni wrote:
>
> Hi Dhiman,
>
>
> Backfilling is currently a work in progress with no firm ETA; hopefully 
> it lands in the next release or two. The groundwork for that is here: 
> https://github.com/prometheus/tsdb/pull/370. Once that is in, adding an 
> API to Prometheus would be simple.
>
>
> Thanks,
> Goutham.
>
> On Wednesday, September 12, 2018 at 12:41:40 AM UTC+5:30, Dhiman Barman 
> wrote:
>
> Hi,
>
>
>
> I would like to know the behavior of Prometheus (in newer versions) with 
> respect to ingestion of old data. Is there a hard limit on the time 
> window such that any samples outside this window are dropped?
>
>
>
> I have been testing Prometheus to observe this behavior. I have seen 
> Prometheus ingest data that is 25-30 minutes old without complaint. 
> However, if the data is 4 hours old, the samples are not accepted and 
> are dropped, with error messages like:
>
>
>
> msg="Error on ingesting samples that are too old or are too far into the 
> future" num_dropped=47190
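The rejections above are consistent with the TSDB accepting samples only within a bounded time window behind its newest sample. A rough sketch of such an acceptance check (the window size and the `ingest` helper are illustrative assumptions, not Prometheus's actual limit or API):

```python
# Illustrative sketch of an out-of-order acceptance window, like the one
# implied by the "too old or too far into the future" error above.
# The 1-hour window is a made-up example, not Prometheus's real limit.

WINDOW_MS = 60 * 60 * 1000  # accept samples within 1h of the newest seen

def ingest(samples, window_ms=WINDOW_MS):
    """Given (timestamp_ms, value) pairs, return (accepted, num_dropped)."""
    accepted, num_dropped = [], 0
    max_ts = None
    for ts, value in samples:
        if max_ts is not None and ts < max_ts - window_ms:
            num_dropped += 1  # too old relative to the newest sample: reject
            continue
        accepted.append((ts, value))
        max_ts = ts if max_ts is None else max(max_ts, ts)
    return accepted, num_dropped

now = 10 * 60 * 60 * 1000  # an arbitrary "current" timestamp, in ms
samples = [
    (now, 1.0),                        # current sample
    (now - 30 * 60 * 1000, 2.0),       # 30 minutes old: inside the window
    (now - 4 * 60 * 60 * 1000, 3.0),   # 4 hours old: outside, dropped
]
accepted, num_dropped = ingest(samples)
```

This mirrors the observed behavior: 25-30-minute-old data is accepted, while 4-hour-old data falls outside the window and is counted in `num_dropped`.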
>
>
>
> Regarding this requirement, the Prometheus roadmap page, 
> https://prometheus.io/docs/introduction/roadmap/, says:
>
>
>
> “Backfilling will permit bulk loads of data in the past. This will allow 
> for retroactive rule evaluations, and transferring old data from other 
> monitoring systems.”
>
>
>
> Are any of these functionalities implemented in Prometheus today?
>
>
>
> I am testing Prometheus behavior/performance by making Prometheus 
> consume data from a Kafka topic. If Prometheus does not scrape REST 
> endpoints, then we also need to handle situations where Prometheus has 
> to catch up on old data from Kafka before consuming new data, so knowing 
> Prometheus's behavior becomes important. Is there any alternative or 
> better way to backfill old data into Prometheus? Are there any APIs to 
> push old bulk data into Prometheus TSDB?
>
>
>
> Thanks,
>
> Dhiman
>
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Prometheus Developers" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To post to this group, send email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/prometheus-developers/900d3a75-51dd-4b3b-9b31-5e93b65fcb95%40googlegroups.com
> .
> For more options, visit https://groups.google.com/d/optout.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-developers/42d2c5a2-21d4-4d50-907e-380c0f60d2f7%40googlegroups.com.
