By the way, someone also made another suggestion that may be useful for people with the 
same scenario. Instead of running an agent to scrape data for every 
tenant, we can use one Prometheus to scrape the data, and write a remote 
write adapter to receive that data, split it by tenant, and route it to a different 
remote storage per tenant. That also seems very lightweight and 
simple.
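
A minimal sketch of what such an adapter could look like in Go, assuming each 
scraped series already carries a "tenant" label (e.g. added via relabeling). The 
tenantEndpoints map, the /receive path and port 9201 are all made up for 
illustration, not anything Prometheus defines:

package main

import (
	"bytes"
	"io"
	"log"
	"net/http"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
)

// Hypothetical mapping from tenant label value to that tenant's remote storage URL.
var tenantEndpoints = map[string]string{
	"tenant-a": "http://storage-a:8480/api/v1/write",
	"tenant-b": "http://storage-b:8480/api/v1/write",
}

// handleWrite receives one remote write request, splits the series by the
// "tenant" label and forwards each slice to the matching storage endpoint.
func handleWrite(w http.ResponseWriter, r *http.Request) {
	compressed, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	raw, err := snappy.Decode(nil, compressed)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	var req prompb.WriteRequest
	if err := proto.Unmarshal(raw, &req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Group the incoming series by the tenant label.
	perTenant := map[string][]prompb.TimeSeries{}
	for _, ts := range req.Timeseries {
		tenant := ""
		for _, l := range ts.Labels {
			if l.Name == "tenant" {
				tenant = l.Value
				break
			}
		}
		perTenant[tenant] = append(perTenant[tenant], ts)
	}

	// Re-encode each tenant's slice and forward it to that tenant's storage.
	for tenant, series := range perTenant {
		url, ok := tenantEndpoints[tenant]
		if !ok {
			log.Printf("no endpoint for tenant %q, dropping %d series", tenant, len(series))
			continue
		}
		out, err := proto.Marshal(&prompb.WriteRequest{Timeseries: series})
		if err != nil {
			log.Printf("marshal for tenant %q: %v", tenant, err)
			continue
		}
		fwd, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(snappy.Encode(nil, out)))
		if err != nil {
			log.Printf("build request for tenant %q: %v", tenant, err)
			continue
		}
		fwd.Header.Set("Content-Type", "application/x-protobuf")
		fwd.Header.Set("Content-Encoding", "snappy")
		fwd.Header.Set("X-Prometheus-Remote-Write-Version", "0.1.0")
		resp, err := http.DefaultClient.Do(fwd)
		if err != nil {
			log.Printf("forward to tenant %q: %v", tenant, err)
			continue
		}
		resp.Body.Close()
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/receive", handleWrite)
	log.Fatal(http.ListenAndServe(":9201", nil))
}

The single Prometheus would then point its remote_write config at the adapter's 
/receive endpoint. This is only a sketch; a real adapter would also need batching, 
retries and per-tenant authentication.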

On Monday, November 23, 2020 at 10:33:28 PM UTC+8, jun min wrote:

> Awesome, I'll give it a shot.
>
> On Monday, November 23, 2020 at 7:28:22 PM UTC+8, [email protected] wrote:
>
>> Having one prometheus server writing to thousands of different remote 
>> write endpoints doesn't sound like a sensible way to work.
>>
>> Maybe you want a proper multi-tenant solution, like Cortex 
>> <https://cortexmetrics.io/>, or the cluster/multi-tenant version of 
>> VictoriaMetrics 
>> <https://victoriametrics.github.io/Cluster-VictoriaMetrics.html>.
>>
>> A simpler option would be a separate prometheus instance per tenant doing 
>> the scraping. Even more lightweight, look at the vmagent 
>> <https://victoriametrics.github.io/vmagent.html> part of 
>> VictoriaMetrics, which can be used for scraping and remote write without a 
>> local TSDB.
>>
>
