I came up with an idea: the following 2 Pods are scraping the same metrics.
Pod 1 - current load 50m - over the most recent 1m (not sure if that duration 
makes sense), the load is 55m.
Pod 2 - current load 50m - over the most recent 1m (not sure if that duration 
makes sense), the load is 55m.

The rule is: if the load metric reaches 50m, autoscaling is triggered.
Pod 1 - current load 50m - over the most recent 1m, the load is 55m -> a new 
Pod 1-1 is created to absorb the extra 5m of load.
Pod 2 - current load 50m - over the most recent 1m, the load is 55m -> a new 
Pod 2-1 is created to absorb the extra 5m of load.

So is it feasible for autoscaling to happen on the 2 Pods at the same time?
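
From what I understand, the HPA doesn't create an extra Pod per overloaded Pod; 
it scales the whole Deployment, using the documented formula 
desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue), 
where the metric value is averaged across the Pods. So with both Pods at 55m 
against a 50m target, it would go from 2 to ceil(2 * 55/50) = 3 replicas. A rough, 
untested sketch of such an HPA (my-app and http_requests_per_second are only 
placeholder names, not from a real setup):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # custom metric served by prometheus-adapter
      target:
        type: AverageValue
        averageValue: "50"               # per-Pod target; placeholder value
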
On Monday, June 7, 2021 at 5:20:46 PM UTC+8 nina guo wrote:

>  I still have a question - is there any conflict between autoscaling and High 
> Availability?
> As I understand it, if the current solution uses autoscaling, the multiple 
> Pods may scrape different metrics, so that situation does not give HA. 
> But if the solution uses HA, the multiple Pods scrape exactly the same 
> metrics, and if autoscaling then kicks in, it will break HA.
>
> On Monday, June 7, 2021 at 4:51:56 PM UTC+8 nina guo wrote:
>
>> Many thanks for your detailed answers Julius.
>>
>> On Friday, June 4, 2021 at 6:12:02 PM UTC+8 [email protected] wrote:
>>
>>> Hi Nina,
>>>
>>> if you run multiple HA replicas of Prometheus and one of them becomes 
>>> unavailable for some reason and you query that broken replica, the queries 
>>> will indeed fail. You could either load-balance (with dead backend 
>>> detection) between the replicas to avoid this, or use something like Thanos 
>>> (https://thanos.io/) to aggregate over multiple HA replicas and merge / 
>>> deduplicate their data intelligently, even if one of the replicas is dead.
>>>
>>> Regarding data consistency: two HA replicas do not talk to each other 
>>> (in terms of clustering) and just independently scrape the same data, but 
>>> at slightly different phases, so they will never contain 100% the same 
>>> data, just conceptually the same. Thus if you naively load-balance between 
>>> two HA replicas without any further logic, you will see your e.g. Grafana 
>>> graphs jump around a tiny bit, depending on which replica you are currently 
>>> hitting through the load balancer, and when exactly that replica scraped a 
>>> given target. But other than that, you shouldn't really care, both replicas are 
>>> "correct", so to say.
>>>
>>> For autoscaling on Kubernetes, take a look at the Prometheus Adapter (
>>> https://github.com/kubernetes-sigs/prometheus-adapter), which you can 
>>> use together with the Horizontal Pod Autoscaler to do autoscaling based on 
>>> Prometheus metrics.
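>>>
>>> A minimal adapter rule could look roughly like this (sketch only; the metric 
>>> name is a placeholder), exposing a per-pod requests-per-second metric that 
>>> the HPA can then consume:
>>>
>>> rules:
>>> - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
>>>   resources:
>>>     overrides:
>>>       namespace: {resource: "namespace"}
>>>       pod: {resource: "pod"}
>>>   name:
>>>     matches: "^(.*)_total$"
>>>     as: "${1}_per_second"
>>>   metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'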
>>>
>>> Regards,
>>> Julius
>>>
>>> On Fri, Jun 4, 2021 at 9:25 AM nina guo <[email protected]> wrote:
>>>
>>>> Thank you very much.
>>>> If I deploy multiple Prometheus Pods and mount a separate volume to 
>>>> each Pod:
>>>> 1. If one of the k8s nodes goes down, is there a chance that a query 
>>>> currently hitting the crashed node will fail?
>>>> 2. If multiple Pods are running in the k8s cluster, is there any data 
>>>> inconsistency issue? (They scrape the same targets.)
>>>>
>>>> On Friday, June 4, 2021 at 1:40:05 AM UTC+8 [email protected] 
>>>> wrote:
>>>>
>>>>> Hi Nina,
>>>>>
>>>>> No, by default, the Prometheus Operator uses an emptyDir for the 
>>>>> Prometheus storage, which gets lost when the pod is rescheduled.
>>>>>
>>>>> This explains how to add persistent volumes: 
>>>>> https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/storage.md
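>>>>>
>>>>> In short, you add a volumeClaimTemplate to the Prometheus custom resource, 
>>>>> roughly like this (sketch; storage class and size are placeholders for 
>>>>> whatever your cluster provides):
>>>>>
>>>>> apiVersion: monitoring.coreos.com/v1
>>>>> kind: Prometheus
>>>>> metadata:
>>>>>   name: example
>>>>> spec:
>>>>>   replicas: 2
>>>>>   storage:
>>>>>     volumeClaimTemplate:
>>>>>       spec:
>>>>>         storageClassName: standard
>>>>>         resources:
>>>>>           requests:
>>>>>             storage: 50Gi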
>>>>>
>>>>> Regards,
>>>>> Julius
>>>>>
>>>>> On Thu, Jun 3, 2021 at 9:08 AM nina guo <[email protected]> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> If Prometheus is installed in a k8s cluster using the Prometheus 
>>>>>> Operator, will the data PV be created automatically or not?
>>>>>>
>>>>>
>>>>>
>>>>> -- 
>>>>> Julius Volz
>>>>> PromLabs - promlabs.com
>>>>>
>>>
>>>
>>> -- 
>>> Julius Volz
>>> PromLabs - promlabs.com
>>>
>>

