Hello Bg,

I faced a similar problem, and after searching everywhere and checking every 
source I could, it turned out to be a faulty URL: an encoding/decoding 
problem that, when the URL passed through Express, caused an infinite loop 
somewhere inside the Express library. One way to check for this is to trace 
(log) all the URLs handled by your servers and, when the 100% spike happens, 
look at the last 10-20 URLs each server received (in my case everything 
stopped when the spike hit, including logging), then try to replay those 
URLs on your development/staging environment.

As for getting enough sleep: as a temporary workaround you can write a 
ten-line bash script that automatically restarts a node process (using ps 
or any other CPU-stats command) when its CPU usage exceeds 90%, or whatever 
threshold you choose, for more than a minute, so that you don't have to 
wake up and restart the node processes manually. Note that "forever" and 
the other npm packages didn't handle the restart for me, because of the 
100% CPU.
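To make the URL-tracing idea concrete, here is a minimal sketch of an 
Express middleware that keeps the last 20 request URLs in a small ring 
buffer and appends each entry to a file synchronously, so the trail isn't 
lost in an unflushed async buffer if the event loop later wedges at 100% 
CPU. The file path and names are my own examples, not from the thread:

```javascript
// Hypothetical middleware: remember the last MAX request URLs and write
// each one synchronously so the log survives an event-loop lockup.
const fs = require('fs');

const MAX = 20;
const recentUrls = [];

function traceUrls(req, res, next) {
  const entry = `${new Date().toISOString()} ${req.method} ${req.url}`;
  recentUrls.push(entry);
  if (recentUrls.length > MAX) recentUrls.shift(); // drop the oldest
  // Synchronous append: slower per request, but never lost on a hang.
  fs.appendFileSync('/tmp/last-urls.log', entry + '\n');
  next();
}

module.exports = { traceUrls, recentUrls };
```

You would register it before your routes with `app.use(traceUrls)`, then 
inspect the tail of `/tmp/last-urls.log` after a spike.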
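And here is a rough sketch of that watchdog script, assuming Linux ps and a 
supervisor (systemd, upstart, etc.) that respawns a killed node process. 
The threshold and the kill line are examples; the kill is left commented 
out so you can dry-run it first. To approximate "for more than 1 minute", 
run it from cron every minute, or only act after N consecutive passes:

```shell
#!/usr/bin/env bash
# Hypothetical watchdog: flag/restart node processes pegged above a
# CPU threshold. Paths, names, and thresholds are examples only.

THRESHOLD=${THRESHOLD:-90}   # %CPU that counts as "pegged"

# Succeed if the integer part of $1 (a %CPU value like "97.3")
# is at or above the threshold $2.
over_threshold() {
  [ "${1%%.*}" -ge "$2" ]
}

# One pass over all node processes: report any over the threshold.
# Run this from cron every minute so a hit means ~1 minute pegged.
check_once() {
  while read -r pid cpu; do
    if over_threshold "$cpu" "$THRESHOLD"; then
      echo "node pid $pid pegged at ${cpu}% CPU"
      # kill "$pid"   # uncomment for real use; your supervisor respawns it
    fi
  done < <(ps -C node -o pid=,%cpu= 2>/dev/null)
}

check_once
```

A plain SIGTERM may not be enough here: a process stuck in a tight loop 
never reaches its signal handlers' JS callbacks, so you may need 
`kill -9` for the truly wedged case.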

On Monday, October 10, 2016 at 3:53:03 PM UTC+2, Bgsosh wrote:
> Hi,
> I'm having a tough time tracking down an issue we currently have in 
> production.  Our node processes will sometimes suddenly spike in CPU usage, 
> and then stay pegged at 100% until restarted.
> I'm not able to reproduce on development machine.  Could anyone offer any 
> tips for tracking this down?  Any advice would be appreciated as I'm 
> currently not getting enough sleep!
> Thank you
> Bg

Job board: http://jobs.nodejs.org/
You received this message because you are subscribed to the Google Groups 
"nodejs" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to nodejs+unsubscr...@googlegroups.com.
To post to this group, send email to nodejs@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
