Weird one. I have a Go application that, on occasion, will suddenly jump 
from very low CPU usage (1%-10%) to 400% (it's a 4-core system). I run this 
app at elevated priority, and it can nearly lock up the machine (a Pi CM4).

I can kill it with SIGKILL and I'm able to get a stack trace by sending 
SIGQUIT. (Here's a gist 
<https://gist.github.com/ssokol/b168de8b4546efd9b43a9d6af8538de9> of the 
stack trace.)

The trace seems to indicate that everything is idle - virtually all of the 
goroutines are parked in runtime.gopark, and none of them ever come out of 
it. I've tried adding a watchdog timer, but it locks up along with all the 
other goroutines, so it never fires once the runaway event starts.
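
The watchdog is roughly this shape (a simplified sketch - the names are 
made up and the real one logs more detail before panicking):

package main

import (
	"log"
	"sync/atomic"
	"time"
)

// startWatchdog launches a goroutine that panics (and therefore dumps all
// goroutine stacks) if the returned heartbeat function isn't called at
// least once per timeout.
func startWatchdog(timeout time.Duration) (heartbeat func()) {
	var last atomic.Int64
	last.Store(time.Now().UnixNano())
	go func() {
		tick := time.NewTicker(timeout / 4)
		defer tick.Stop()
		for range tick.C {
			if time.Since(time.Unix(0, last.Load())) > timeout {
				panic("watchdog: no heartbeat for over " + timeout.String())
			}
		}
	}()
	return func() { last.Store(time.Now().UnixNano()) }
}

func main() {
	heartbeat := startWatchdog(10 * time.Second)
	for {
		// Real work goes here; the forwarding loop calls heartbeat()
		// every time it makes progress.
		heartbeat()
		log.Println("still alive")
		time.Sleep(time.Second)
	}
}

Once the runaway event starts, even that ticker goroutine never runs again.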

The app itself isn't anything special. It listens for data on a set of 
about 16 Redis pubsub channels and forwards anything it receives to a very 
limited number of clients over UDP. Throughput ranges from 34 to 38 KB/sec, 
and the message counts are in the hundreds. Most of the time it eats about 
5% of one core on the CM4.
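
For context, the forwarding side is roughly this shape (heavily simplified 
sketch - the channel names, addresses, and function names are placeholders, 
not the real ones):

package main

import (
	"log"
	"net"

	"github.com/gomodule/redigo/redis"
)

// forward subscribes to the given pubsub channels and writes each message
// it receives to every UDP client connection.
func forward(redisAddr string, channels []interface{}, clients []*net.UDPConn) error {
	c, err := redis.Dial("tcp", redisAddr)
	if err != nil {
		return err
	}
	defer c.Close()

	psc := redis.PubSubConn{Conn: c}
	if err := psc.Subscribe(channels...); err != nil {
		return err
	}

	for {
		switch v := psc.Receive().(type) {
		case redis.Message:
			for _, cl := range clients {
				if _, err := cl.Write(v.Data); err != nil {
					log.Printf("udp write to %v: %v", cl.RemoteAddr(), err)
				}
			}
		case redis.Subscription:
			log.Printf("subscribed to %s (count %d)", v.Channel, v.Count)
		case error:
			return v
		}
	}
}

func main() {
	addr, err := net.ResolveUDPAddr("udp", "192.168.1.50:4000") // example client
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.DialUDP("udp", nil, addr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	channels := []interface{}{"traffic", "gps", "ahrs"} // ~16 in the real app
	log.Fatal(forward("localhost:6379", channels, []*net.UDPConn{conn}))
}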

I guess this could be an error on my part, but there's nothing in my code 
that seems likely to cause this - no obvious place for a multi-threaded 
tight loop. For a while I thought it might be some sort of issue in Redigo 
(the Go Redis client I'm using), but from what the stack trace shows, all 
the Redis listeners look happy and healthy, stuck waiting in gopark along 
with all the other goroutines.

Has anyone seen this kind of weirdness before?

Thanks,

-S
