
Thank you.
Laszlo


On 07.10.2017 02:47, 'David Oppenheimer' via Kubernetes user discussion and Q&A 
wrote:
If you tell me your Github username I can @ mention you on the issue.


On Fri, Oct 6, 2017 at 4:46 PM, Laszlo Budai <laszlo.bu...@gmail.com> wrote:

    The worst part of the story is that if we run cluster components such as
kube-proxy and the network plugin as DaemonSets (as kubeadm does), then a user
setting a NoSchedule taint on a node renders that node close to useless, because
the taint evicts the DaemonSet-managed Pods ...
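    For what it's worth, DaemonSet Pods can be made to survive arbitrary taints by
giving the Pod template a blanket toleration. This is only a sketch (an empty key
with operator Exists matches every taint), not something kubeadm configures by
default:

        # Sketch: blanket toleration, added under .spec.template.spec
        # of the DaemonSet. An empty key with operator "Exists"
        # tolerates all taints, including user-added NoSchedule ones.
        tolerations:
        - operator: Exists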

    Can I follow the GitHub issue somehow?

    Kind regards,
    Laszlo


    On Saturday, October 7, 2017 at 2:37:17 AM UTC+3, David Oppenheimer wrote:

        Yeah, sorry, I forgot about DaemonSet. It tolerates certain system-added
taints by default (you can see the list here
<https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/daemon/daemon_controller.go#L1147>),
but user-added taints will cause eviction. The part that seems less than ideal is
that even user-added NoSchedule taints (not just NoExecute ones) cause eviction.
I don't know whether this was intentional or accidental, but I will file a GitHub
issue.
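        For context, the controller-injected tolerations look roughly like this;
the exact keys differ between releases, so treat this as a sketch and check the
linked source for your version:

            # Sketch: default tolerations added by the DaemonSet controller
            # (1.7-era key names; these vary across Kubernetes releases).
            tolerations:
            - key: node.alpha.kubernetes.io/notReady
              operator: Exists
              effect: NoExecute
            - key: node.alpha.kubernetes.io/unreachable
              operator: Exists
              effect: NoExecute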

        On Fri, Oct 6, 2017 at 4:32 PM, Laszlo Budai <laszlo...@gmail.com> wrote:

            I have just repeated the test, this time with a Deployment as well. The
Pods belonging to the Deployment were not affected; only the DaemonSet Pod on the
tainted node was, as you can see below:

            root@master1 $> kubectl get po -o wide
            NAME                                READY     STATUS    RESTARTS   AGE       IP          NODE
            nginx-deployment-4234284026-pbt8m   1/1       Running   0          42s       10.32.0.4   worker1
            nginx-deployment-4234284026-spp7j   1/1       Running   0          13s       10.47.0.3   worker2
            nginx-deployment-4234284026-v479b   1/1       Running   0          42s       10.44.0.3   worker3
            pod1                                1/1       Running   2          2h        10.44.0.1   worker3
            taint-ds-test-1d9d7                 1/1       Running   0          10m       10.32.0.3   worker1
            taint-ds-test-5x20k                 1/1       Running   0          10m       10.47.0.2   worker2
            taint-ds-test-j5g69                 1/1       Running   0          1m        10.44.0.2   worker3

            root@master1 $> kubectl taint nodes worker3 mykey=myvalue:NoSchedule
            node "worker3" tainted

            root@master1 $> kubectl get po -o wide
            NAME                                READY     STATUS        RESTARTS   AGE       IP          NODE
            nginx-deployment-4234284026-pbt8m   1/1       Running       0          1m        10.32.0.4   worker1
            nginx-deployment-4234284026-spp7j   1/1       Running       0          41s       10.47.0.3   worker2
            nginx-deployment-4234284026-v479b   1/1       Running       0          1m        10.44.0.3   worker3
            pod1                                1/1       Running       2          2h        10.44.0.1   worker3
            taint-ds-test-1d9d7                 1/1       Running       0          10m       10.32.0.3   worker1
            taint-ds-test-5x20k                 1/1       Running       0          10m       10.47.0.2   worker2
            taint-ds-test-j5g69                 0/1       Terminating   0          1m        <none>      worker3
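            For completeness, the taint can be double-checked on the node itself;
the output below is an illustrative sketch trimmed to the relevant line:

            root@master1 $> kubectl describe node worker3 | grep -i taints
            Taints:             mykey=myvalue:NoSchedule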


            Kind regards,
            Laszlo


            On Saturday, October 7, 2017 at 2:22:32 AM UTC+3, Laszlo Budai 
wrote:

                It seems the taint does cause Pod eviction if the Pod was created
by a DaemonSet ... I just tested it with the following steps:

                1. created a test daemonset:

                root@master1 $> cat ds1.yaml
                apiVersion: extensions/v1beta1
                kind: DaemonSet
                metadata:
                  name: taint-ds-test
                spec:
                  selector:
                    matchLabels:
                      app: nginx
                  template:
                    metadata:
                      labels:
                        app: nginx
                    spec:
                      containers:
                      - name: nginx
                        image: nginx:1.7.9
                        ports:
                        - containerPort: 80


                root@master1 $> kubectl create -f ds1.yaml
                daemonset "taint-ds-test" created

                root@master1 $> kubectl get pod -o wide
                NAME                  READY     STATUS    RESTARTS   AGE       IP          NODE
                pod1                  1/1       Running   2          2h        10.44.0.1   worker3
                taint-ds-test-1d9d7   1/1       Running   0          16s       10.32.0.3   worker1
                taint-ds-test-5x20k   1/1       Running   0          16s       10.47.0.2   worker2
                taint-ds-test-99z5q   1/1       Running   0          16s       10.44.0.2   worker3
                root@master1 $>

                2. added a taint to worker3

                root@master1 $> kubectl taint nodes worker3 mykey=myvalue:NoSchedule
                node "worker3" tainted

                3. tested the pods again:

                root@master1 $> kubectl get pod -o wide
                NAME                  READY     STATUS        RESTARTS   AGE       IP          NODE
                pod1                  1/1       Running       2          2h        10.44.0.1   worker3
                taint-ds-test-1d9d7   1/1       Running       0          1m        10.32.0.3   worker1
                taint-ds-test-5x20k   1/1       Running       0          1m        10.47.0.2   worker2
                taint-ds-test-99z5q   0/1       Terminating   0          1m        <none>      worker3

                and again later on:

                root@master1 $> kubectl get pod -o wide
                NAME                  READY     STATUS    RESTARTS   AGE       IP          NODE
                pod1                  1/1       Running   2          2h        10.44.0.1   worker3
                taint-ds-test-1d9d7   1/1       Running   0          5m        10.32.0.3   worker1
                taint-ds-test-5x20k   1/1       Running   0          5m        10.47.0.2   worker2
                root@master1 $>


                So pod1 (created directly as a Pod) was not affected by the
NoSchedule taint, while the DaemonSet-managed Pod on the tainted node was
evicted ...

                Am I missing something related to taints and DaemonSets?
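                (For the record: the taint can be removed again with
"kubectl taint nodes worker3 mykey:NoSchedule-", and a DaemonSet Pod should
survive this particular taint if its template carries a matching toleration.
The snippet below is an untested sketch:)

                    # Sketch: toleration matching the mykey=myvalue:NoSchedule
                    # taint, added under the DaemonSet's .spec.template.spec
                    tolerations:
                    - key: mykey
                      operator: Equal
                      value: myvalue
                      effect: NoSchedule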

                Kind regards,
                Laszlo




                On 07.10.2017 01:46, 'David Oppenheimer' via Kubernetes user 
discussion and Q&A wrote:
                 > Adding a NoSchedule taint to a node should not cause any pod 
evictions. Something else must be going on.
                 >




