Can we solve this issue with a load balancer?
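For reference, the per-Pod volume approach described in the quoted message is usually achieved with a StatefulSet and `volumeClaimTemplates`, so each replica gets its own PersistentVolumeClaim instead of a shared NFS mount. A minimal, untested sketch (all names, the image tag, and the storage size are illustrative):

```yaml
# Hypothetical sketch: 2 Prometheus replicas, each with its own volume.
# volumeClaimTemplates makes Kubernetes create one PVC per replica
# (prometheus-data-prometheus-0, prometheus-data-prometheus-1).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
spec:
  serviceName: prometheus
  replicas: 2
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:v2.27.1
          args:
            - --storage.tsdb.path=/prometheus
          volumeMounts:
            - name: prometheus-data
              mountPath: /prometheus
  volumeClaimTemplates:
    - metadata:
        name: prometheus-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
```

Note this only gives each replica independent durable storage; it does not by itself make the two replicas' data identical, since each scrapes and writes on its own schedule.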

On Wednesday, June 2, 2021 at 6:01:38 PM UTC+8 nina guo wrote:

> So the better solution would be to mount separate storage (rather than a 
> shared NFS volume) to each Pod.
> For example, with 2 Prometheus Pods running on 2 separate volumes: if one 
> of the Pods goes down (while some data is still only in memory), Kubernetes 
> will automatically start a replacement Pod. The data that was in memory is 
> lost, which causes inconsistency, because the other running Pod has 
> probably already written that data to its persistent volume.
>
> On Wednesday, June 2, 2021 at 4:39:16 PM UTC+8 Stuart Clark wrote:
>
>> On 02/06/2021 09:22, nina guo wrote:
>> > If Prometheus deploys in k8s with multiple Pods, the Prometheus Pods 
>> > are running independently, am I right?
>> That is correct.
>>
>> -- 
>> Stuart Clark
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/e5a07521-29a7-4f47-930d-a722eb5d188dn%40googlegroups.com.
