Hi,
A few clarifications on the HA setup for a K8s multi-master setup.
I just want to know what happens to the worker nodes of a master that
failed for some reason.
Will jobs run on those worker nodes through another master node?
Regards,
Basanta
What do you mean by "those jobs"? Which ones?
Multi-master avoids having a problem when a master fails (or repairs the
situation quickly), so I'd say yes.
It also depends on how you manage etcd, so you have consensus and all that.
But yeah, it won't be an issue.
On Monday, August 27, 2018, Basanta
Nodes are not assigned to specific coordinators in an HA setup. They should
just go through the other one and continue working just fine.
/MR
On Mon, Aug 27, 2018 at 7:03 AM Basanta Kumar Panda
wrote:
> Hi,
> A few clarifications on the HA setup for a K8s multi-master setup.
> I just want to know
I'm trying to access a .NET Web API which I dockerized and deployed to a
Kubernetes cluster on Microsoft Azure.
The application works fine on the local Docker machine. The cluster is
running, my deployment is correct, and the pods were created. Everything
I check is fine, but I cannot access my
We are currently containerizing the Airflow application. Below is the
configuration for the same:
Master pod
1. Airflow scheduler
2. Airflow webserver
3. Airflow flower
Worker pod
1. Airflow worker
Redis pod
1. Redis
MariaDB pod
1. MariaDB
We have a default airflow.cfg that has the broker and
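For a layout like that, the broker and result-backend settings in airflow.cfg typically look something like the sketch below. This is an illustrative guess, not the poster's actual config: the host names `redis` and `mariadb` assume those pods are exposed via Kubernetes Services with those names, and the credentials are placeholders.

```
[core]
executor = CeleryExecutor
sql_alchemy_conn = mysql://airflow:airflow@mariadb:3306/airflow

[celery]
# Redis pod reached through a Service named "redis"
broker_url = redis://redis:6379/0
# MariaDB pod reached through a Service named "mariadb"
result_backend = db+mysql://airflow:airflow@mariadb:3306/airflow
```

The scheduler, webserver, and workers all need to see the same broker_url and result_backend so the Celery workers in the worker pod pick up tasks queued by the scheduler in the master pod.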
Instead of `exec`ing around, is it possible to run the backup command from
another container in the same pod? Possibly by mounting enough volumes into
both? Then you could just run the backup "cron" as part of the Gitlab pod,
using cron itself, some lightweight alternative, or even just a shell
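A minimal sketch of that sidecar idea, with illustrative names and images throughout (the PVC names, mount paths, and the plain `tar` loop are assumptions; a real GitLab backup would more likely invoke gitlab-rake than tar):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gitlab                      # illustrative name
spec:
  volumes:
    - name: gitlab-data
      persistentVolumeClaim:
        claimName: gitlab-data      # assumed existing PVC
    - name: backups
      persistentVolumeClaim:
        claimName: gitlab-backups   # assumed existing PVC
  containers:
    - name: gitlab
      image: gitlab/gitlab-ce:latest
      volumeMounts:
        - name: gitlab-data
          mountPath: /var/opt/gitlab
    - name: backup
      image: alpine:3.8
      command: ["/bin/sh", "-c"]
      args:
        - |
          # a shell loop standing in for cron: one archive per day
          while true; do
            tar czf /backup/gitlab-$(date +%F).tgz -C /data . ;
            sleep 86400;
          done
      volumeMounts:
        - name: gitlab-data
          mountPath: /data
          readOnly: true            # backup container only reads the data
        - name: backups
          mountPath: /backup
```

Because both containers are in the same pod, the sidecar sees the same volume without any `kubectl exec`, and it restarts together with the main container.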
This has been asked several times; there are really lengthy answers with
different trade-offs.
I do it differently: I value testing Kubernetes upgrades in my staging/QA envs.
On Monday, August 27, 2018, Gabriel Sousa
wrote:
> hello
>
> What is the best approach?
> One Kubernetes cluster for all envs?
> or
Hi Amatzia, you can make a request that returns several resource types in
one call. Just type: "kubectl get pods,svc,...,... --all-namespaces"
Thanks
Best Regards
Jaroslav Vojtek
On Mon, 27 Aug 2018 at 15:50, Amatzia Brandwein wrote:
> Hello Guys
>
> Is there any way to perform an API
Thanks for the reply.
You mean the worker nodes of the failed master node will be used by the
other masters for running the jobs?
But as per my understanding, the worker nodes are joined to the masters
with each master's token. In that case each worker node is attached to a
specific master.
Hello guys,
Is there any way to perform an API request which returns all resources
(like running the command 'kubectl get all')?
I know I can get pods/services/etc. by themselves, but I am looking for a
way to get everything in one call
instead of running multiple API calls to fetch each.
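For reference, the comma-separated form suggested in the replies looks like this (the resource types listed are just examples). One caveat worth knowing: kubectl still issues one REST call per resource type under the hood, since the Kubernetes API has no single endpoint that returns every resource kind at once.

```
# One command covering several resource types across all namespaces
kubectl get pods,services,deployments,ingresses --all-namespaces

# Or everything kubectl considers part of "all", in every namespace
kubectl get all --all-namespaces
```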
Basically, if the first master node fails, then the second master node takes
control. Will the jobs execute on the worker nodes of the failed master?
On Monday, August 27, 2018 at 2:40:14 PM UTC+5:30, Matthias Rampke wrote:
>
> Nodes are not assigned to specific coordinators in an HA
hello
What is the best approach?
One Kubernetes cluster for all envs?
Or separate PROD from UAT/QA?
--
You received this message because you are subscribed to the Google Groups
"Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to