GitHub user lhotari added a comment to the discussion: Pulsar upgrade to 3.0.5 
causes prometheus metrics timeouts on brokers

The jstack didn't reveal any deadlocks. What is odd is the high number of 
threads: there seem to be 36 threads per thread pool in many cases. This could 
waste memory in Netty buffer pools, which might matter if your memory is 
limited, and thread locals will also waste a lot of memory at such high thread 
counts.
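To double-check the per-pool thread counts, something like this works against a saved jstack dump (the file name `jstack.txt` is just an example): it extracts thread names, strips the trailing numeric suffix, and counts threads per pool.

```shell
# Extract thread names from a jstack dump (lines start with "name"),
# drop the trailing "-<n>" index, and count threads per pool prefix.
sed -n 's/^"\([^"]*\)".*/\1/p' jstack.txt \
  | sed 's/-[0-9]*$//' \
  | sort | uniq -c | sort -rn | head
```

Pools with unexpectedly large counts (e.g. 36) will show up at the top of the output.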

What k8s resource settings and JVM args do you have? You could run `cat 
/proc/1/cmdline` in the pod to see the full command line.
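Note that `/proc/1/cmdline` separates arguments with NUL bytes, so piping it through `tr` makes it readable (the pod name below is only a placeholder):

```shell
# Inside the pod: print the JVM command line, translating NUL separators to spaces.
cat /proc/1/cmdline | tr '\0' ' '; echo

# Or from outside the pod ("pulsar-broker-0" is an example pod name):
# kubectl exec pulsar-broker-0 -- sh -c "cat /proc/1/cmdline | tr '\0' ' '"
```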

What k8s version and flavour are you using? How about broker configuration?


GitHub link: 
https://github.com/apache/pulsar/discussions/22897#discussioncomment-9778711
