Hey, I'm curious if any best practices have been established for something
I've run into where applications (particularly Java NIO-based ones) try to
configure themselves based on the number of CPUs available and how that
interacts with resource requests/limits.

It is common for Java async/non-blocking frameworks (such as Netty) to
derive defaults, such as the number of workers in a thread pool, from
`Runtime.getRuntime().availableProcessors()`.
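As a minimal sketch of that pattern (a generic fixed-size pool, not Netty's
actual default logic):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DefaultPool {
    public static void main(String[] args) {
        // The JVM reports the processor count it believes it can use;
        // inside a container this may be the host's count, not the
        // pod's CPU limit.
        int workers = Runtime.getRuntime().availableProcessors();

        // Size the worker pool from that count, as many frameworks do.
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        System.out.println("workers = " + workers);
        pool.shutdown();
    }
}
```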

As an example, if I create a pod with its CPU request and limit set to 2,
and it is scheduled on a node with 16 CPUs, then code that relies on
`Runtime.getRuntime().availableProcessors()` to set a default number of
workers will create 16 workers.

In recent versions of Java
<https://bugs.openjdk.java.net/browse/JDK-6515172>,
`Runtime.getRuntime().availableProcessors()` will return the number of CPUs
in the taskset, but since a pod's CPU request and limit are translated
into --cpu-shares and --cpu-quota (not a cpuset), a container in the
example pod described above will still see a value of 16 CPUs even with
this change.
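One workaround I've seen is to read the CFS quota that --cpu-quota ends up
in and derive an effective CPU count from it. This is just a sketch under
the assumption of cgroup v1 mounted at the usual Linux paths; it falls back
to the JVM's count when those files aren't readable:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class CgroupCpus {
    // Derive an effective CPU count from the cgroup v1 CFS quota, which
    // is where a pod's CPU limit (--cpu-quota) is applied. Paths assume
    // cgroup v1 on Linux; anywhere else this falls back to the JVM count.
    static int effectiveCpus() {
        try {
            long quota = Long.parseLong(Files.readString(
                Path.of("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")).trim());
            long period = Long.parseLong(Files.readString(
                Path.of("/sys/fs/cgroup/cpu/cpu.cfs_period_us")).trim());
            if (quota > 0 && period > 0) {
                // e.g. quota=200000us per period=100000us -> 2 CPUs
                return (int) Math.max(1, quota / period);
            }
        } catch (Exception e) {
            // No cgroup v1 CPU controller visible; fall through.
        }
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("effective CPUs = " + effectiveCpus());
    }
}
```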

How do people tend to deal with this? Is it best to provide configuration
to the container to override the default logic of "how many CPUs does the
machine have?" with an explicit count that is the same as the resource
request or limit?
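For what it's worth, the explicit-count approach can be as simple as an
environment-variable override, with the value injected from the pod spec
(e.g. via the downward API). WORKER_THREADS is a made-up name here, not a
Kubernetes or Netty convention:

```java
public class WorkerCount {
    // Prefer an explicit override (set to match the pod's CPU request or
    // limit) over the JVM-reported processor count, which may reflect the
    // whole node. WORKER_THREADS is a hypothetical variable name.
    static int workerCount() {
        String override = System.getenv("WORKER_THREADS");
        if (override != null && !override.isEmpty()) {
            return Integer.parseInt(override);
        }
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("workers = " + workerCount());
    }
}
```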

Thanks!
Matt

-- 
You received this message because you are subscribed to the Google Groups 
"Kubernetes user discussion and Q&A" group.