[
https://issues.apache.org/jira/browse/YUNIKORN-334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175910#comment-17175910
]
Wilfred Spiegelenburg commented on YUNIKORN-334:
------------------------------------------------
Based on the k8s behaviour there is not much we can do about this. The
ConfigMap update is what I would call random within a 2 minute time frame at
best. Moving forward we are adding a REST call to update the config. I think
that is a better solution than any workaround trying to get the config map
update to become predictable.
There is an open issue on the k8s side:
https://github.com/kubernetes/kubernetes/issues/30189
That points to a change they made in the documentation to explain the how, what
and why via: https://github.com/kubernetes/website/pull/18082/files
This is the documentation comment they added:
{code}
When a ConfigMap already being consumed in a volume is updated, projected keys
are eventually updated as well. Kubelet is checking whether the mounted
ConfigMap is fresh on every periodic sync. However, it is using its local
ttl-based cache for getting the current value of the ConfigMap. As a result,
the total delay from the moment when the ConfigMap is updated to the moment
when new keys are projected to the pod can be as long as kubelet sync period (1
minute by default) + ttl of ConfigMaps cache (1 minute by default) in kubelet.
You can trigger an immediate refresh by updating one of the pod's annotations.
{code}
In other words: we cannot rely on the change to be made in an instant when we
use the ConfigMap.
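As an interim workaround, the annotation refresh mentioned in that documentation snippet can be triggered from the command line. A sketch only: the pod name, namespace and annotation key below are illustrative, not anything YuniKorn defines.
{code}
# Touch an annotation on a pod that mounts the ConfigMap so the kubelet
# refreshes the projected keys on its next sync instead of waiting for
# the ttl-based cache to expire. Pod name, namespace and annotation key
# are made up for this example.
kubectl annotate pod yunikorn-scheduler-0 -n yunikorn \
  config-refresh="$(date +%s)" --overwrite
{code}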
> Configmap updates are not consumed and updated for queues
> ---------------------------------------------------------
>
> Key: YUNIKORN-334
> URL: https://issues.apache.org/jira/browse/YUNIKORN-334
> Project: Apache YuniKorn
> Issue Type: Bug
> Components: core - scheduler
> Reporter: Ayub Pathan
> Assignee: Wilfred Spiegelenburg
> Priority: Major
>
> Update the ConfigMap with a new property (for example: switching to the
> state-aware application sort policy, or vice versa) and check that it is
> applied to the queues correctly.
> * Apply the below config
> {noformat}
> kubectl describe configmaps -n yunikorn yunikorn-configs
> Name:         yunikorn-configs
> Namespace:    yunikorn
> Labels:       app=yunikorn
>               chart=yunikorn-0.9.0
>               heritage=Helm
>               release=yunikorn
> Annotations:  helm.sh/hook: pre-install
>               helm.sh/hook-weight: 2
> Data
> ====
> queues.yaml:
> ----
> partitions:
>   -
>     name: default
>     placementrules:
>       - name: tag
>         value: namespace
>         create: true
>     queues:
>       - name: root
>         submitacl: '*'
> Events:  <none>
> {noformat}
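> For reference, the property being toggled in this test is the queue sort
> policy. A sketch of the updated queues.yaml (the property name matches what
> the queues API reports below; the placement of the property and its value
> are illustrative):
> {noformat}
> partitions:
>   -
>     name: default
>     placementrules:
>       - name: tag
>         value: namespace
>         create: true
>     queues:
>       - name: root
>         submitacl: '*'
>         properties:
>           application.sort.policy: fifo
> {noformat}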
> * kubectl create namespace dev7
> * Create an app under this queue
> {noformat}
> apiVersion: v1
> kind: Pod
> metadata:
>   labels:
>     app: sleep-157
>     applicationId: sleep-157
>   name: sleep-157-2
>   namespace: dev7
> spec:
>   schedulerName: yunikorn
>   restartPolicy: Never
>   containers:
>     - name: sleep-60s-1
>       image: "alpine:latest"
>       command: ["sleep", "60"]
>       resources:
>         requests:
>           cpu: "300m"
>           memory: "300M"
> {noformat}
> * Verify the queues API response.
> {noformat}
> {
>     queuename: "dev7",
>     status: "Active",
>     capacities: {
>         capacity: "[]",
>         maxcapacity: "[]",
>         usedcapacity: "[memory:300 vcore:300]",
>         absusedcapacity: "[]"
>     },
>     queues: null,
>     properties: {
>         application.sort.policy: "stateaware"
>     }
> }
> {noformat}
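> The queues API above can be queried by port-forwarding the scheduler's web
> service; the service name and port here are the Helm chart defaults and may
> differ in your deployment:
> {noformat}
> kubectl port-forward svc/yunikorn-service 9080:9080 -n yunikorn
> curl -s http://localhost:9080/ws/v1/queues
> {noformat}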
--
This message was sent by Atlassian Jira
(v8.3.4#803005)