It seems adding the taint causes pod eviction if the pod was created by a DaemonSet ...
I just tested it with the following steps:

1. created a test daemonset:

root@master1 $> cat ds1.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: taint-ds-test
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80


root@master1 $> kubectl create -f ds1.yaml
daemonset "taint-ds-test" created

root@master1 $> kubectl get pod -o wide
NAME                  READY     STATUS    RESTARTS   AGE       IP          NODE
pod1                  1/1       Running   2          2h        10.44.0.1   worker3
taint-ds-test-1d9d7   1/1       Running   0          16s       10.32.0.3   worker1
taint-ds-test-5x20k   1/1       Running   0          16s       10.47.0.2   worker2
taint-ds-test-99z5q   1/1       Running   0          16s       10.44.0.2   worker3
root@master1 $>

2. added a taint to worker3

root@master1 $> kubectl taint nodes worker3 mykey=myvalue:NoSchedule
node "worker3" tainted

3. tested the pods again:

root@master1 $> kubectl get pod -o wide
NAME                  READY     STATUS        RESTARTS   AGE       IP          NODE
pod1                  1/1       Running       2          2h        10.44.0.1   worker3
taint-ds-test-1d9d7   1/1       Running       0          1m        10.32.0.3   worker1
taint-ds-test-5x20k   1/1       Running       0          1m        10.47.0.2   worker2
taint-ds-test-99z5q   0/1       Terminating   0          1m        <none>      worker3

and again later on:

root@master1 $> kubectl get pod -o wide
NAME                  READY     STATUS    RESTARTS   AGE       IP          NODE
pod1                  1/1       Running   2          2h        10.44.0.1   worker3
taint-ds-test-1d9d7   1/1       Running   0          5m        10.32.0.3   worker1
taint-ds-test-5x20k   1/1       Running   0          5m        10.47.0.2   worker2
root@master1 $>


so pod1 (created directly as a pod) was not affected by the NoSchedule taint, 
while the DaemonSet-managed pod on the tainted node was evicted ...
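One possible explanation (my assumption, not confirmed anywhere above): in Kubernetes 1.6 the DaemonSet controller places its own pods rather than going through the default scheduler, and it removes DaemonSet pods from nodes whose NoSchedule taints the pods do not tolerate, whereas an already-running plain pod is only checked at scheduling time. If that is the cause, adding a matching toleration to the DaemonSet's pod template should keep the pods on the tainted node. A sketch, reusing the mykey=myvalue taint from step 2:

```yaml
# ds1.yaml with a toleration for the taint added in step 2.
# The tolerations block is the only change; key, value, and effect
# must match the taint on worker3 for the pod to stay there.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: taint-ds-test
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      tolerations:
      - key: "mykey"
        operator: "Equal"
        value: "myvalue"
        effect: "NoSchedule"
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

A toleration with `operator: "Exists"` and no value would also tolerate any taint with that key, which may be simpler if the value is not significant.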

Am I missing something related to taints and daemonsets?

Kind regards,
Laszlo




On 07.10.2017 01:46, 'David Oppenheimer' via Kubernetes user discussion and Q&A wrote:
Adding a NoSchedule taint to a node should not cause any pod evictions. 
Something else must be going on.


On Fri, Oct 6, 2017 at 3:03 PM, Budai Laszlo <laszlo.bu...@gmail.com> wrote:

    Dear All,

    I have a question related to taints.
    I'm testing taints on a K8s 1.6.3 cluster with the weave-net network plug-in and kube-proxy deployed as DaemonSets.

    When I add a taint like mykey=myvalue:NoSchedule to a node, I see that the weave-net pod and the kube-proxy pod are deleted from that node.

    Why is that happening?

    According to the documentation, the `NoSchedule` effect should leave pods that are already running on a node untouched, and I can see that other pods do remain there (for example kube-dns, or a test pod I created).

    Thank you in advance for any suggestion.

    Kind regards,
    Laszlo

-- You received this message because you are subscribed to the Google Groups "Kubernetes user discussion and Q&A" group.
    To unsubscribe from this group and stop receiving emails from it, send an email 
to kubernetes-users+unsubscr...@googlegroups.com 
<mailto:kubernetes-users%2bunsubscr...@googlegroups.com>.
    To post to this group, send email to kubernetes-users@googlegroups.com 
<mailto:kubernetes-users@googlegroups.com>.
    Visit this group at https://groups.google.com/group/kubernetes-users 
<https://groups.google.com/group/kubernetes-users>.
    For more options, visit https://groups.google.com/d/optout 
<https://groups.google.com/d/optout>.

