No one with 2 nodes and 11 namespaces needs 20GB.
Up to a few hundred pods and ten nodes, the OpenShift processes themselves
shouldn't be using more than a few hundred megs. If you plan on growing or
haven't upgraded to etcd3 you obviously want to leave some wiggle room, but
even very large
Firstly, leaving swap enabled is an anti-pattern in general [0], as
OpenShift is then unable to recognize OOM conditions until performance is
thoroughly degraded. Secondly, we generally recommend that our customers
have at least 20GB [1] for masters. I've seen many customers go
far past
You can hit the master Prometheus endpoint to see what is going on (or run
Prometheus from the release-3.6 branch in examples/prometheus). Running

  oc get --raw /metrics

as an admin will dump the apiserver Prometheus metrics for that server.
You can look at (going from memory here)
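The output of /metrics is plain Prometheus text exposition format, so you can grep out the series you care about. A minimal sketch of what that looks like — the metric names (process_resident_memory_bytes, go_goroutines) are standard metrics the apiserver exposes, but the sample values here are made up for illustration:

```shell
# Sample of what the /metrics dump looks like (values invented).
sample_metrics=$(cat <<'EOF'
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.72371e+08
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 1243
EOF
)

# Against a live cluster you would pipe the real output instead:
#   oc get --raw /metrics | grep -E '^(process_resident_memory_bytes|go_goroutines) '
echo "$sample_metrics" | grep -E '^(process_resident_memory_bytes|go_goroutines) '
```

Resident memory in the high hundreds of MB or a steadily climbing goroutine count is the kind of thing worth a closer look.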
Hi Clayton,
We’re running 3.6.1 I believe. It was installed a few weeks ago using
openshift-ansible on the release-3.6 branch.
We’re running 11 namespaces, 2 nodes, 7 pods, so it’s pretty minimal.
I’ve never run this prune.
What version are you running? How many nodes, pods, and namespaces?
Excessive memory use can be caused by not running prune or having an
automated process that creates lots of an object. Excessive CPU use can be
caused by an errant client or component stuck in a hot loop repeatedly
taking the
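For anyone following along who, like the original poster, has never run prune: the commands being referred to are along these lines (flag values here are illustrative retention choices, not recommendations; `oc adm prune` is a dry run unless you pass --confirm):

```shell
# Dry-run by default; review the output, then re-run with --confirm to delete.
oc adm prune builds --orphans --keep-complete=5 --keep-failed=1
oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1

# Image pruning additionally needs a user with access to the registry.
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m
```

On a cluster this small (2 nodes, 7 pods) prune is unlikely to free much, but an automated build or deploy process can accumulate objects surprisingly fast.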