As I said before, after running the command "kubectl apply -f
my-deployment.yaml" several times (changing the image version inside the
yaml from time to time), I noticed that Kubernetes never deploys 2 pods on the same node.
I tested this behavior many times so yes it's working as I need :)
If I had
I think that the situation is more complicated if we start looking at machine
prices.
Let me use some real data:
1) I have to use a db machine like gcloud n1-standard-16 ---> kubernetes
cluster with 1 node for $500/month
2) I have to use 9 web servers like n1-standard-2 ---> kubernetes cluster
Hi all!
I would like to know if there is a way to force Kubernetes, during a deploy, to
use every node in the cluster.
The question comes from some attempts I made, where I noticed a situation
like this:
- a cluster of 3 nodes
- I update a deployment with a command like: kubectl set
I'm reading the documentation and it's just what I was looking for.
Many thanks!
But is there a way to create a single yaml deployment file to ensure that every
pod will be deployed on a separate node?
So a single file to be executed, and not 2 different yaml files as in the example
At
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
there is an example (perfect for my case) where 2 yaml files are used: one for
redis-cache and the other for web-store.
Anyway I'll try to concatenate them.
Thanks
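For the record, the two example files can indeed be collapsed into one: a Deployment whose pods declare a required podAntiAffinity against their own label is never scheduled twice on the same node. A minimal sketch (the name, label, and image are illustrative, not from the thread):

```yaml
apiVersion: apps/v1beta1      # as available on Kubernetes 1.8
kind: Deployment
metadata:
  name: web-deployment        # illustrative name
spec:
  replicas: 3                 # should not exceed the number of nodes
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web
            topologyKey: kubernetes.io/hostname   # at most one pod per hostname
      containers:
      - name: web
        image: my-registry/web:v1   # illustrative image
```

Note that with a required rule, a 4th replica on a 3-node cluster would simply stay Pending.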
Sorry but now I'm facing another problem :-(
The deployment with the podAntiAffinity/podAffinity options is working, but when
I try to update the deployment with the command:
kubectl set image deployment/apache-deployment apache-container=xx:v2.1.2
then I get this error:
Today during a deploy I got a pod with 2 containers -,-
I can confirm that the best solution to make sure you have only one pod per
node is to use a DaemonSet.
Unfortunately, the approach of reapplying the deployment yaml does not
guarantee that after deployment each node has only a single pod.
With a DaemonSet everything is now working properly.
Anyway, I'm using Kubernetes 1.8.1-gke.1
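For anyone landing here later, a DaemonSet that runs exactly one pod on every node looks roughly like this (names and image are illustrative; extensions/v1beta1 is the version that exists on 1.8):

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: apache-daemonset        # illustrative name
spec:
  updateStrategy:
    type: RollingUpdate         # so "kubectl set image" rolls pods one by one
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache-container
        image: my-registry/apache:v2.1.2   # illustrative image
```

Without updateStrategy, a DaemonSet in extensions/v1beta1 defaults to OnDelete, meaning pods only pick up a new image after you delete them manually.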
--
You received this message because you are subscribed to the Google Groups
"Kubernetes user discussion and Q" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to
I have a situation like this:
- a cluster of web machines
- a cluster of db machines and other services
The question is how to put the 2 clusters in communication, in order to use some
hostnames in /etc/hosts of the web machines.
To protect your data, is it safe to create an ingress service to make
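One pattern that may fit, sketched under the assumption that the db machines have fixed IPs reachable from the web cluster: a selector-less Service plus a manually managed Endpoints object gives the external db machines a cluster-internal DNS name (e.g. db.default.svc.cluster.local), which replaces the /etc/hosts entries. All names, ports, and IPs below are made up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db              # hypothetical name; becomes the DNS hostname
spec:
  ports:
  - port: 3306          # hypothetical db port
---
apiVersion: v1
kind: Endpoints
metadata:
  name: db              # must match the Service name exactly
subsets:
- addresses:
  - ip: 10.132.0.10     # hypothetical IP of a db machine
  ports:
  - port: 3306
```

Because the Service has no selector, Kubernetes never overwrites the Endpoints; you manage the IP list yourself.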
I tried some solutions, and one that is working at the moment is simply based
on changing my deployment.yaml every time.
I mean:
1) I have to deploy my application for the first time; it is based on a pod with some
containers. These pods should be deployed on every cluster node (I have 3
nodes). I did the
So is re-applying the deployment.yaml an acceptable solution, considering that my
only requirement is to have one pod per node?
Unfortunately I have a very close due date, so I would like to find the
fastest, simplest, and reasonably stable solution to do a code upgrade :)
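If the deployment uses a required podAntiAffinity, it is worth checking the rolling update strategy: by default the update creates one extra ("surge") pod before removing an old one, and on a cluster where every node already runs a pod, the surge pod has no node it is allowed to land on. Telling the Deployment to remove an old pod first avoids that deadlock. A sketch of the relevant fragment:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0          # never create an extra pod during the update
      maxUnavailable: 1    # free one node, then schedule the replacement there
```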
Hello,
I'm writing because I found a workaround to deploy minor
updates without restarting the containers. Maybe this idea
could be useful for others.
The workaround is based on adding some extra business logic to the start
script inside the docker image.
In my Dockerfile,
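If I understood the workaround correctly, the start script pulls the latest code before launching the server, so restarting the pods deploys minor updates without building a new image. A sketch of such a docker-entrypoint.sh (the paths, repository URL, and final command are all hypothetical):

```shell
#!/bin/sh
set -e

# Hypothetical locations; adjust to your image layout.
CODE_DIR=/var/www/app
REPO_URL=https://svn.example.com/app/trunk

# On every container start, fetch the latest minor changes.
if [ -d "$CODE_DIR/.svn" ]; then
    svn update "$CODE_DIR"
else
    svn checkout "$REPO_URL" "$CODE_DIR"
fi

# Hand control to the real server process (hypothetical command).
exec apache2-foreground
```

The obvious trade-off: pods started at different times may briefly run different revisions, and a Kubernetes rollback no longer rolls back the code, as pointed out later in the thread.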
Hi,
I don't know why (I'm going crazy trying to understand the reason) but from
today I have a problem with my Locust master that always returns this error:
[2018-01-09 14:17:26,379] locust-master-deployment-262643481-9kfw8/INFO/locust.main: Starting web monitor at *:8089
[2018-01-09
[Locust Dockerfile]
FROM python:3.6.2
# Install packages
COPY requirements.txt /tmp/
RUN pip install --requirement /tmp/requirements.txt
# Add tasks directory
COPY locust-tasks /locust-tasks
# Set the entrypoint
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
Hello,
tonight some strange things happened in my cluster:
1) some process tried to re-create a load balancer (that already existed)
Here are some logs from Stackdriver:
I Ensured load balancer
I Deleted load balancer
I Ensuring load balancer
I Ensuring load balancer
I NGINX
Hi,
I have to update my ssl certificate for my (ingress) https load balancer.
When I created the cluster I executed these commands:
> kubectl create secret tls mysecret --key mykey.key --cert mycert.crt
> kubectl apply -f ./ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
Hi,
thanks for your suggestion.
I can confirm that this procedure is working:
1) create another secret with the new ssl certificate:
> kubectl create secret tls mynewsecret --key mynewkey.key --cert mynewcert.crt
2) edit ingress.yaml file in order to change the secretName:
apiVersion:
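For completeness, after step 2 the relevant part of ingress.yaml would look roughly like this (the ingress name and backend are illustrative; only the secretName line actually changes):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress            # illustrative name
spec:
  tls:
  - secretName: mynewsecret   # was: mysecret
  backend:
    serviceName: my-service   # illustrative backend service
    servicePort: 80
```

After kubectl apply, the GCE load balancer can take several minutes to start serving the new certificate; the old secret can be deleted once it does.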
> And fundamentally, why not rebuild on SVN changes? You can
automate that. Take into account that if you don't have different
images with the code, you can't use Kubernetes to rollback either.
Or you should check in some other way which pod had which svn
revision at any moment
Thanks for these suggestions!
But do these solutions use a persistent disk?
In my case a persistent disk is a necessary requirement, because in certain
rare situations the pods restart. Therefore it is necessary to use a persistent
disk so that the code does not change in case of a reboot.
Just
Hi all,
I have a Kubernetes cluster in my production environment that is composed of 6
pods.
At the moment, when I have to make a new deploy, I create a new docker image on
my local machine, where I execute an svn update.
Then I push the new image to GCE and finally I can execute a rolling update.
On Monday, April 23, 2018 at 16:52:20 UTC+2, Rodrigo Campos wrote:
> Sorry, there are different parts that I don't follow. Why daemon set?
No problem.
So why a daemon set? Because I have a cluster with 6 nodes (but in the future
this number could be greater), and to ensure that every node