Hello community,

here is the log from the commit of package kubernetes-salt for openSUSE:Factory 
checked in at 2018-05-11 09:18:02
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/kubernetes-salt (Old)
 and      /work/SRC/openSUSE:Factory/.kubernetes-salt.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "kubernetes-salt"

Fri May 11 09:18:02 2018 rev:19 rq:606230 version:4.0.0+git_r761_6b2cce7

Changes:
--------
--- /work/SRC/openSUSE:Factory/kubernetes-salt/kubernetes-salt.changes  2018-05-10 15:50:49.754120664 +0200
+++ /work/SRC/openSUSE:Factory/.kubernetes-salt.new/kubernetes-salt.changes  2018-05-11 09:18:05.498914112 +0200
@@ -1,0 +2,77 @@
+Wed May  9 16:15:31 UTC 2018 - [email protected]
+
+- Commit e286f9b by Flavio Castelli [email protected]
+ Make crictl handling more robust
+ 
+ Some of our states now depend on the `crictl` tool. All these states have
+ to depend on the `kubelet` `service.running` state, otherwise the
+ `crictl` socket won't be available and the state will fail.
+ 
+ Also, with these changes, the blame for a failure should point directly at
+ the guilty party (the `kubelet` service not running for whatever reason)
+ instead of falling on the `haproxy` state.
+ 
+ Finally, the check looking for the `crictl` socket has been changed to ensure
+ that the socket file exists and that the service is actually listening.
+ 
+ This will help with bugs like bsc#1091419
+ 
+ Signed-off-by: Flavio Castelli <[email protected]>
+
+
+-------------------------------------------------------------------
+Wed May  9 16:12:15 UTC 2018 - [email protected]
+
+- Commit bcf5415 by Flavio Castelli [email protected]
+ kubelet: allow resource reservation
+ 
+ Allow kubelet to take into account resource reservation and eviction
+ threshold.
+ 
+ == Resource reservation ==
+ 
+ It's possible to reserve resources for the `kube` and the `system`
+ components.
+ 
+ The `kube` component covers the kubernetes components (api server,
+ controller manager, scheduler, proxy, kubelet) and the container engine
+ components (docker, containerd, cri-o, runc).
+ 
+ The `system` component is the `system.slice`, basically all the system
+ services: sshd, cron, logrotate,...
+ 
+ By default no resource reservation is specified. Note well: when resource
+ reservations are in place, kubelet will reduce the amount of resources
+ allocatable by the node. However, **no** enforcement will be done on either
+ the `kube.slice` or the `system.slice`.
+ 
+ The enforcement is not done because:
+ 
+ * Resource enforcement is done using cgroups.
+ * The slices are created by systemd.
+ * systemd doesn't manage all the available cgroups yet.
+ * kubelet tries to manage cgroups that are not handled by systemd,
+ resulting in the kubelet failing at startup.
+ * Changing the cgroup driver to `systemd` doesn't fix the issue.
+ 
+ Moreover, enforcing limits on the `system` and `kube` slices can lead to
+ resource starvation of core system components. As the official kubernetes
+ docs advise, this is something only expert users should do, and only after
+ extensive profiling of their nodes.
+ 
+ Finally, even if we wanted to enforce the limits, the right place would be
+ systemd (by tuning the slice settings).
+ 
+ For more information see the official documentation:
+ https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
+ 
+ == Eviction threshold ==
+ 
+ By default no eviction threshold is set.
+ 
+ bsc#1086185
+ 
+ Signed-off-by: Flavio Castelli <[email protected]>
+
+
+-------------------------------------------------------------------

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ kubernetes-salt.spec ++++++
--- /var/tmp/diff_new_pack.DPVnbC/_old  2018-05-11 09:18:06.350883244 +0200
+++ /var/tmp/diff_new_pack.DPVnbC/_new  2018-05-11 09:18:06.358882954 +0200
@@ -32,7 +32,7 @@
 
 Name:           kubernetes-salt
 %define gitrepo salt
-Version:        4.0.0+git_r757_3c2b52a
+Version:        4.0.0+git_r761_6b2cce7
 Release:        0
 BuildArch:      noarch
 Summary:        Production-Grade Container Scheduling and Management

++++++ master.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/pillar/params.sls new/salt-master/pillar/params.sls
--- old/salt-master/pillar/params.sls   2018-05-08 16:14:27.000000000 +0200
+++ new/salt-master/pillar/params.sls   2018-05-09 18:15:58.000000000 +0200
@@ -112,6 +112,22 @@
 
 kubelet:
   port:           '10250'
+  compute-resources:
+    kube:
+      cpu: ''
+      memory: ''
+      ephemeral-storage: ''
+      # example:
+      # cpu: 100m
+      # memory: 100M
+      # ephemeral-storage: 1G
+    system:
+      cpu: ''
+      memory: ''
+      ephemeral-storage: ''
+    eviction-hard: ''
+    # example:
+    # eviction-hard: memory.available<500M
 
 proxy:
   http:           ''
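
As a rough illustration of how these new pillar keys are consumed (the
values are only the examples from the comments above, not defaults):
setting cpu: 100m and memory: 100M under kubelet:compute-resources:kube
would make the new caasp_pillar.get_kubelet_reserved_resources('kube')
helper added below return 'cpu=100m,memory=100M', which the kubelet
template below then passes as --kube-reserved=cpu=100m,memory=100M.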
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_modules/caasp_cri.py new/salt-master/salt/_modules/caasp_cri.py
--- old/salt-master/salt/_modules/caasp_cri.py  2018-05-08 16:14:27.000000000 +0200
+++ new/salt-master/salt/_modules/caasp_cri.py  2018-05-09 18:15:58.000000000 +0200
@@ -197,14 +197,18 @@
     at bootstrap time, where the CRI is not yet running
     but some state interacting with it is applied.
     '''
-
-    socket = cri_runtime_endpoint()
     timeout = int(__salt__['pillar.get']('cri:socket_timeout', '20'))
     expire = time.time() + timeout
 
     while time.time() < expire:
-        if os.path.exists(socket):
+        cmd = "crictl --runtime-endpoint {socket} info".format(
+                socket=cri_runtime_endpoint())
+        result = __salt__['cmd.run_all'](cmd,
+                                         output_loglevel='trace',
+                                         python_shell=False)
+        if result['retcode'] == 0:
             return
+
         time.sleep(0.3)
 
     raise CommandExecutionError(
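
For illustration, a minimal standalone sketch of the same readiness check
outside of Salt (the socket path and timeout below are assumptions; the
real module reads them from cri_runtime_endpoint() and the
cri:socket_timeout pillar value):

    import subprocess
    import time

    def wait_for_cri(endpoint='unix:///var/run/crio/crio.sock', timeout=20):
        # Poll `crictl info` until the CRI answers or the timeout expires;
        # a zero exit code means the socket exists and the runtime listens.
        expire = time.time() + timeout
        while time.time() < expire:
            result = subprocess.run(
                ['crictl', '--runtime-endpoint', endpoint, 'info'],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            if result.returncode == 0:
                return True
            time.sleep(0.3)
        raise RuntimeError('CRI not ready after {} seconds'.format(timeout))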
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_modules/caasp_nodes.py new/salt-master/salt/_modules/caasp_nodes.py
--- old/salt-master/salt/_modules/caasp_nodes.py        2018-05-08 16:14:27.000000000 +0200
+++ new/salt-master/salt/_modules/caasp_nodes.py        2018-05-09 18:15:58.000000000 +0200
@@ -500,3 +500,15 @@
             return master
 
     return ''
+
+
+def is_admin_node():
+    '''
+    Returns true if the node has the 'admin' and/or the 'ca'
+    roles.
+
+    Returns false otherwise
+    '''
+
+    node_roles = __salt__['grains.get']('roles', [])
+    return any(role in ('admin', 'ca') for role in node_roles)
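
In the state files below this helper is consumed from Jinja, e.g.
{% if not salt.caasp_nodes.is_admin_node() %} ... {% endif %}, replacing
the previous salt.caasp_cri.needs_docker() checks; once the custom modules
are synced it can also be queried ad hoc with
`salt-call caasp_nodes.is_admin_node`.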
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_modules/caasp_pillar.py new/salt-master/salt/_modules/caasp_pillar.py
--- old/salt-master/salt/_modules/caasp_pillar.py       2018-05-08 16:14:27.000000000 +0200
+++ new/salt-master/salt/_modules/caasp_pillar.py       2018-05-09 18:15:58.000000000 +0200
@@ -33,3 +33,31 @@
             return False
 
     return res
+
+
+def get_kubelet_reserved_resources(component):
+    '''
+    Returns the kubelet cli argument specifying the
+    reserved computational resources of the specified component.
+
+    Returns an empty string if no reservations are in place for the specified
+    component.
+
+    Example values for `component`: `kube`, `system`
+
+    See https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
+
+    '''
+    reservations = []
+
+    for resource in ('cpu', 'memory', 'ephemeral-storage'):
+        quantity = get(
+                'kubelet:compute-resources:{component}:{resource}'.format(
+                    component=component,
+                    resource=resource))
+        if quantity:
+            reservations.append('{resource}={quantity}'.format(
+                resource=resource,
+                quantity=quantity))
+
+    return ','.join(reservations)
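
A minimal sketch of what this formatting produces, with the pillar lookup
replaced by a plain dict holding the example values from params.sls above:

    # Hypothetical standalone version of the loop in
    # get_kubelet_reserved_resources(); only the pillar lookup differs.
    compute_resources = {'cpu': '100m', 'memory': '100M',
                         'ephemeral-storage': ''}

    reservations = []
    for resource in ('cpu', 'memory', 'ephemeral-storage'):
        quantity = compute_resources[resource]
        if quantity:
            reservations.append('{resource}={quantity}'.format(
                resource=resource, quantity=quantity))

    print(','.join(reservations))  # prints: cpu=100m,memory=100M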
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/container-feeder/init.sls new/salt-master/salt/container-feeder/init.sls
--- old/salt-master/salt/container-feeder/init.sls      2018-05-08 16:14:27.000000000 +0200
+++ new/salt-master/salt/container-feeder/init.sls      2018-05-09 18:15:58.000000000 +0200
@@ -19,7 +19,7 @@
   service.running:
     - enable: True
     - require:
-      {% if not salt.caasp_cri.needs_docker() %}
+      {% if not salt.caasp_nodes.is_admin_node() %}
       # the admin node uses docker as CRI, requiring its state
       # will cause the docker daemon to be restarted, which will
       # lead to the premature termination of the orchestration.
@@ -32,7 +32,7 @@
       - file: /etc/sysconfig/container-feeder
       - file: /etc/container-feeder.json
     - watch:
-      {% if not salt.caasp_cri.needs_docker() %}
+      {% if not salt.caasp_nodes.is_admin_node() %}
       - service: {{ pillar['cri'][salt.caasp_cri.cri_name()]['service'] }}
       {% endif %}
       - file: /etc/containers/storage.conf
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/haproxy/init.sls new/salt-master/salt/haproxy/init.sls
--- old/salt-master/salt/haproxy/init.sls       2018-05-08 16:14:27.000000000 +0200
+++ new/salt-master/salt/haproxy/init.sls       2018-05-09 18:15:58.000000000 +0200
@@ -1,7 +1,13 @@
-{% if not salt.caasp_cri.needs_docker() %}
+{% if not salt.caasp_nodes.is_admin_node() %}
+# This state is also executed on the admin node. On the admin
+# node we cannot require the kubelet state, otherwise the node would
+# join the kubernetes cluster and some system workloads might be
+# scheduled there. All these services would then fail because the
+# network is not configured properly, which would lead to slow and
+# always-failing orchestrations.
 include:
-  - {{ salt['pillar.get']('cri:chosen', 'docker') }}
   - kubelet
+  - {{ salt['pillar.get']('cri:chosen', 'docker') }}
   - container-feeder
 {% endif %}
 
@@ -62,8 +68,9 @@
     - timeout: 60
     - onchanges:
       - file: /etc/caasp/haproxy/haproxy.cfg
-{% if not salt.caasp_cri.needs_docker() %}
+{% if not salt.caasp_nodes.is_admin_node() %}
     - require:
+      - service: kubelet
       - service: container-feeder
 {% endif %}
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/kubelet/kubelet.jinja new/salt-master/salt/kubelet/kubelet.jinja
--- old/salt-master/salt/kubelet/kubelet.jinja  2018-05-08 16:14:27.000000000 +0200
+++ new/salt-master/salt/kubelet/kubelet.jinja  2018-05-09 18:15:58.000000000 +0200
@@ -17,6 +17,21 @@
 
 # Add your own!
 KUBELET_ARGS="\
+    --cgroups-per-qos \
+    --cgroup-driver=cgroupfs \
+    --cgroup-root=/ \
+    --kube-reserved-cgroup=podruntime.slice \
+{% if salt.caasp_pillar.get_kubelet_reserved_resources('kube') -%}
+    --kube-reserved={{ salt.caasp_pillar.get_kubelet_reserved_resources('kube') }} \
+{% endif -%}
+    --system-reserved-cgroup=system \
+{% if salt.caasp_pillar.get_kubelet_reserved_resources('system') -%}
+    --system-reserved={{ salt.caasp_pillar.get_kubelet_reserved_resources('system') }} \
+{% endif -%}
+    --enforce-node-allocatable=pods \
+{% if pillar['kubelet']['compute-resources']['eviction-hard'] -%}
+    --eviction-hard={{ pillar['kubelet']['compute-resources']['eviction-hard'] }} \
+{% endif -%}
     --cluster-dns={{ pillar['dns']['cluster_ip'] }} \
     --cluster-domain={{ pillar['dns']['domain'] }} \
     --node-ip={{ salt.caasp_net.get_primary_ip() }} \
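
Assuming the example pillar values from params.sls above (cpu: 100m and
memory: 100M reserved for the kube component, eviction-hard:
memory.available<500M), the added template lines would render into
KUBELET_ARGS roughly as --kube-reserved=cpu=100m,memory=100M and
--eviction-hard=memory.available<500M; components with empty reservations
and an empty eviction-hard contribute no flag at all.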
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/top.sls new/salt-master/salt/top.sls
--- old/salt-master/salt/top.sls        2018-05-08 16:14:27.000000000 +0200
+++ new/salt-master/salt/top.sls        2018-05-09 18:15:58.000000000 +0200
@@ -7,7 +7,13 @@
     - ca-cert
     - cri
     - container-feeder
-    {% if not salt.caasp_cri.needs_docker() %}
+    {% if not salt.caasp_nodes.is_admin_node() %}
+      # the admin node uses docker as CRI, requiring its state
+      # will cause the docker daemon to be restarted, which will
+      # lead to the premature termination of the orchestration.
+      # Hence let's not require docker on the admin node.
+      # This is not a big deal because docker on the admin node has
+      # already been running since boot time.
     - {{ salt['pillar.get']('cri:chosen', 'docker') }}
     {% endif %}
     - swap

