I have an application where I am trying to aggregate metrics from hosts 
that have spotty connectivity. I want metrics to be recorded locally while 
a host is offline, then synced to a central server once it reconnects. 
Another complication is that these remote hosts sometimes get rebooted. 
Ideally, each remote host would delete its local data after some amount of 
time, leaving plenty of time for everything to be synced to the central 
server first.

It sounds like I could run Prometheus on all my remote hosts with 
appropriate retention settings, and then have my central Prometheus server 
pull from them all periodically via remote_read? Will I be able to get ALL 
the data to the central server that way? All the hosts are connected over 
a VPN, so there are no firewall issues.
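
To make this concrete, here is roughly what I had in mind. The hostnames, 
ports, and retention window are just placeholders, and I'm not sure 
remote_read is the right mechanism here, which is part of what I'm asking.

On each remote host, run Prometheus with a bounded retention window so old 
data ages out after it has had plenty of time to be pulled:

    prometheus --config.file=prometheus.yml --storage.tsdb.retention.time=30d

And on the central server, something like this in prometheus.yml, pointing 
at each remote host's remote-read endpoint over the VPN:

    remote_read:
      - url: "http://remote-host-1.vpn:9090/api/v1/read"
        read_recent: true
      - url: "http://remote-host-2.vpn:9090/api/v1/read"
        read_recent: true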

Does this sound like an appropriate use of Prometheus?

Thank you!
