I was able to free cache memory for the Jenkins master pod's docker container. As per the docker image's behaviour, it takes its resources from the k8s node where it is deployed.
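For reference, this is roughly how the cache is cleared on the node itself; a minimal sketch assuming root/SSH access to the node (the guard, the function name, and the fallback message are mine, not from the original steps):

```shell
#!/bin/sh
# Sketch: drop the page cache on the k8s node (run ON THE NODE, not in the pod).
# Writing 3 to drop_caches drops the page cache plus dentries and inodes; root only.
drop_node_caches() {
    if [ -w /proc/sys/vm/drop_caches ]; then
        sync                                # flush dirty pages to disk first
        echo 3 > /proc/sys/vm/drop_caches   # ask the kernel to drop its caches
        echo "dropped caches"
    else
        echo "not writable: run this as root on the node itself"
    fi
}

drop_node_caches
```

Because only the kernel's caches are dropped, the Jenkins JVM keeps running and no service downtime is needed.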
You can verify memory usage from inside the docker container with the following commands:

bash-4.4$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes
bash-4.4$ cat /sys/fs/cgroup/memory/memory.max_usage_in_bytes
bash-4.4$ cat /sys/fs/cgroup/memory/memory.stat | grep cache

Solution: *clear the cache memory on the k8s node where your Jenkins master pod's docker container is running*. No downtime is required for the Jenkins service.

The following document helped me: https://www.tecmint.com/clear-ram-memory-cache-buffer-and-swap-space-on-linux/

Thanks,

On Thursday, January 30, 2020 at 2:57:30 AM UTC+5:30, James Nord wrote:
>
> Unfortunately there is no quick answer.
> I have seen much bigger instances work flawlessly with 4GB and much
> smaller instances need 32GB.
>
> The big difference is which plugins you have installed, especially
> around report visualisation/transformation, and, if you are using
> pipelines, whether you are following best practices and not putting
> build logic into the pipeline but only flow.
>
> As with any service in production, I would recommend a monitoring
> service that shows you the pod's consumed memory/CPU and JVM heap/off-heap
> memory along with GC logs, and tune based on your actual workload.
> With k8s I would recommend that you specify a request and limit for CPU
> and memory to avoid any surprises; the memory will need some headroom over
> what Jenkins itself uses, as it will often spawn processes for SCM
> integration/polling.

--
You received this message because you are subscribed to the Google Groups "Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-users/69bf1bc8-e7f8-4181-adfa-b70900128021%40googlegroups.com.
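P.S. The `cache` value in memory.stat is in bytes, which is hard to eyeball; a small sketch that converts it to MiB (the `cache_mib` helper name and the sample values in the heredoc are mine — on a real node you would feed it /sys/fs/cgroup/memory/memory.stat):

```shell
#!/bin/sh
# Print the page-cache size from a cgroup v1 memory.stat stream, in MiB.
cache_mib() {
    awk '$1 == "cache" { printf "%d\n", $2 / (1024 * 1024); exit }'
}

# On a real node: cache_mib < /sys/fs/cgroup/memory/memory.stat
# Here a heredoc with made-up values stands in for that file; prints 512.
cache_mib <<'EOF'
cache 536870912
rss 104857600
EOF
```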
