Re: high CPU usage on tomcat 7

2012-10-06 Thread Mark Thomas


Kirill Kireyev kir...@instagrok.com wrote:

Thanks for all your thorough advice Shanti! (and everyone else)

Here are my findings so far:
0) My servlet sessions store a large amount (~tens of MB) of data in RAM.

This is by design, to optimize performance. I can also have ~3K active 
sessions at any one time. Hence a large heap size.
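(For rough scale: if each session really does hold on the order of 10 MB, then 3,000 active 
sessions alone account for about 3,000 x 10 MB = 30 GB, so sessions that fail to expire can 
fill even a 25 GB heap on their own.)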
1) When I (1) manually expire inactive sessions through the Tomcat web 
interface and (2) manually hit "Perform GC" through the jvisualvm console 
attached to the Tomcat process, everything works great: the memory is 
successfully reclaimed, and the used heap size drops back to what it 
should be when the application initializes.
2) However, this doesn't seem to work automatically. More specifically:
 a) Sessions are not expiring unless I expire them manually (I can see 
the number growing in the Tomcat web interface), even though my 
conf/web.xml says:
 <session-config>
     <session-timeout>20</session-timeout>
 </session-config>
b) A full garbage collection is not being performed unless I do it 
manually. I'm attaching the GC records from my logs/catalina.out.

Any insights?

Session expiration is performed by the background processing thread. What is 
that thread doing (a thread dump - well several over time - will tell you).
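For example, a couple of dumps taken a little apart will show whether that thread (normally 
named something like "ContainerBackgroundProcessor[StandardEngine[Catalina]]") is alive and 
what it is busy doing; a minimal sketch, assuming jstack can attach and TOMCAT_PID holds the 
Tomcat process id:

# Take three dumps roughly 10 seconds apart, then look at the background thread.
for i in 1 2 3; do
  /usr/bin/jstack -l "$TOMCAT_PID" > "threaddump.$i.txt"
  sleep 10
done
grep -A 20 "ContainerBackgroundProcessor" threaddump.*.txt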

Fix the session expiration issue and the GC issue will be solved. Note the JVM 
may well not perform a full GC unless it absolutely has to. In fact, the JVM 
may not perform any GC if it doesn't have to.

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: high CPU usage on tomcat 7

2012-10-06 Thread Jeff MAURY
Aren't your clients polling the server? That may cause the session not to
expire even if the user is not using the UI.

Jeff


On Sat, Oct 6, 2012 at 9:42 PM, Mark Thomas ma...@apache.org wrote:



 Kirill Kireyev kir...@instagrok.com wrote:

 Thanks for all your thorough advice Shanti! (and everyone else)
 
 Here are my findings so far:
 0) My servlet sessions store a large number (~10s of M) or data in RAM.
 
 This is by design, to optimize performance. I can also have ~3K active
 sessions at any one time. Hence a large heap size.
 1) When I (1) manually expire inactive sessions through Tomcat web
 interface and (2) manually hit Perform GC through jvisualvm console
 attached to the tomcat process, everything works great, and the memory
 is successfully reclaimed, and the used heap  size drops back to what
 it
 should be when the application initializes.
 2) However, this doesn't seem to work automatically. More specifically:
  a) Sessions are not expiring without manually doing it (I can see
 the number growing in the Tomcat web interface). Even though my
 conf/web.xml says:
  session-config
  session-timeout20/session-timeout
  /session-config
 b) A full garbage collection is not being performed unless I do it
 manually. I'm attaching the GC records from my logs/catalina.out.
 
 Any insights?

 Session expiration is performed by the background processing thread. What
 is that thread doing (a thread dump - well several over time - will tell
 you).

 Fix the session expiration issue and the GC issue will be solved. Note the
 JVM may well not perform a full GC unless it absolutely has to. In fact,
 the JVM may not perform any GC if it doesn't have to.

 Mark

 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




-- 
Jeff MAURY


Legacy code often differs from its suggested alternative by actually
working and scaling.
 - Bjarne Stroustrup

http://www.jeffmaury.com
http://riadiscuss.jeffmaury.com
http://www.twitter.com/jeffmaury


Re: high CPU usage on tomcat 7

2012-10-05 Thread Shanti Suresh
Hi Kirill,

Do you happen to have a QA or a test environment by any chance?

25GB is very big for a JVM heap size.

Some questions I have are:
(1) What does the application do?   How many concurrent users do you have?
(2) What was the heap set at before you increased it to 25GB?
(3) Why did you think you needed 25GB?  Is this just an arbitrary size
because your machine has 32GB RAM total and you wanted to give the JVM
the maximum available?
(4) What symptoms did you face before you increased the heap to 25 GB?
(5) What size are the heapdumps that the earlier script generated?  It
would be a challenge to troubleshoot a heapdump of 20 - 25 GB.

Suggestions for diagnostic steps:
(1) The first step, I think, is to look at the JConsole memory use graph
after application restart and see if the memory is rising linearly
throughout the application's use.  You seem to have a memory leak like
Charles pointed out.
(2) Restart QA.  Bring up JConsole.  Have only one user use the system.
Study JConsole Overview panel - Memory, CPU, threads, #loaded classes.
(3) Take a heapdump.
(4) Have two users use QA.  Observe graphs.  Take heapdump.
(5) Have three users use QA.  Observe graphs.  Take heapdump.
(6) Open each heapdump using Eclipse Memory Analyzer.  MAT has useful
features.  One of them is to generate a leak suspect report.  By just
viewing the available graphs, you'll get an idea of what classes are
increasing in size and not being reclaimed.
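A rough sketch of steps (3) through (6), assuming jmap can attach to the Tomcat process and 
TOMCAT_PID holds its process id (the file names are arbitrary):

# Capture one heap dump per load step, then open the .hprof files in Eclipse MAT
# and run its "leak suspects" report on each.
/usr/bin/jmap -dump:format=b,file=/tmp/heap_1user.hprof  "$TOMCAT_PID"
/usr/bin/jmap -dump:format=b,file=/tmp/heap_2users.hprof "$TOMCAT_PID"
/usr/bin/jmap -dump:format=b,file=/tmp/heap_3users.hprof "$TOMCAT_PID"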

Others may have other useful tips.

Thanks.

   -Shanti

On Thu, Oct 4, 2012 at 2:08 PM, Jeff MAURY jeffma...@gmail.com wrote:

 On 4 Oct. 2012 14:38, Caldarale, Charles R chuck.caldar...@unisys.com
 wrote:
 
   From: Kirill Kireyev [mailto:kir...@instagrok.com]
   Subject: Re: high CPU usage on tomcat 7
 
   Perhaps what I need is to have the JVM do garbage collections more
   frequently, so that they don't become a huge CPU-hogging ordeals
   when they do happen.
 
  That's not how it works.  The amount of time spent in a GC operation is
 dependent almost entirely on the number of live objects in the heap, not
 the amount of space being reclaimed.  Having a smaller heap might improve
 your cache hit rate, but it will have little effect on the time used by GC.
 
  It is highly likely that your webapps have a memory leak.  You need to
 use a profiler and find out what's eating up all that memory.  If you set
 the heap size to a smaller value, you should be able to provoke the
 situation earlier, but it certainly won't fix it.
 +1

 Jeff
 
   - Chuck
 
 
  THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY
 MATERIAL and is thus for use only by the intended recipient. If you
 received this in error, please contact the sender and delete the e-mail and
 its attachments from all computers.
 
 
  -
  To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
  For additional commands, e-mail: users-h...@tomcat.apache.org
 



RE: high CPU usage on tomcat 7

2012-10-04 Thread Caldarale, Charles R
 From: Kirill Kireyev [mailto:kir...@instagrok.com] 
 Subject: Re: high CPU usage on tomcat 7

 Perhaps what I need is to have the JVM do garbage collections more 
 frequently, so that they don't become huge CPU-hogging ordeals 
 when they do happen.

That's not how it works.  The amount of time spent in a GC operation is 
dependent almost entirely on the number of live objects in the heap, not the 
amount of space being reclaimed.  Having a smaller heap might improve your 
cache hit rate, but it will have little effect on the time used by GC.

It is highly likely that your webapps have a memory leak.  You need to use a 
profiler and find out what's eating up all that memory.  If you set the heap 
size to a smaller value, you should be able to provoke the situation earlier, 
but it certainly won't fix it.
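For instance, GC activity can be made visible while experimenting with a smaller heap by 
adding standard HotSpot options to CATALINA_OPTS; a sketch only, with the sizes picked 
purely as placeholders:

# e.g. in $CATALINA_HOME/bin/setenv.sh
export CATALINA_OPTS="$CATALINA_OPTS \
  -Xms4g -Xmx8g \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
  -Xloggc:$CATALINA_BASE/logs/gc.log \
  -XX:+HeapDumpOnOutOfMemoryError"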

 - Chuck 


THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY 
MATERIAL and is thus for use only by the intended recipient. If you received 
this in error, please contact the sender and delete the e-mail and its 
attachments from all computers.


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: high CPU usage on tomcat 7

2012-10-04 Thread Jeff MAURY
On 4 Oct. 2012 14:38, Caldarale, Charles R chuck.caldar...@unisys.com
wrote:

  From: Kirill Kireyev [mailto:kir...@instagrok.com]
  Subject: Re: high CPU usage on tomcat 7

  Perhaps what I need is to have the JVM do garbage collections more
  frequently, so that they don't become a huge CPU-hogging ordeals
  when they do happen.

 That's not how it works.  The amount of time spent in a GC operation is
dependent almost entirely on the number of live objects in the heap, not
the amount of space being reclaimed.  Having a smaller heap might improve
your cache hit rate, but it will have little effect on the time used by GC.

 It is highly likely that your webapps have a memory leak.  You need to
use a profiler and find out what's eating up all that memory.  If you set
the heap size to a smaller value, you should be able to provoke the
situation earlier, but it certainly won't fix it.
+1

Jeff

  - Chuck


 THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY
MATERIAL and is thus for use only by the intended recipient. If you
received this in error, please contact the sender and delete the e-mail and
its attachments from all computers.


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org



Re: high CPU usage on tomcat 7

2012-10-03 Thread Shanti Suresh
 webinf top_all_threads.log 1 top_user_webinf_threads.log');

 # Get the thread dump
 my @output=`/usr/bin/jstack -l ${preview_pid}`;
 open (my $file, '', 'preview_threaddump.txt') or die Could not open file:
 $!;
 print $file @output;
 close $file;

 open LOG, top_user_webinf_threads.log or die $!;
 open (STDOUT, | tee -ai top_cpu_preview_threads.log);
 print PID\tCPU\tMem\tJStack Info\n;
 while ($l = LOG) {
 chop $l;
 $pid = $l;
 $pid =~ s/webinf.*//g;
 $pid =~ s/ *//g;
 ##  Hex PID is available in the Sun HotSpot Stack Trace */
 $hex_pid = sprintf(%#x, $pid);
 @values = split(/\s+/, $l);
 $pct = $values[8];
 $mem = $values[9];
 # Debugger breakpoint:
 $DB::single = 1;

 # Find the Java thread that corresponds to the thread-id from the TOP output
 for my $j (@output) {
 chop $j;
 ($j =~ m/nid=$hex_pid/)print $hex_pid . \t . $pct . \t .
 $mem . \t .  $j . \n;
 }
 }

 close (STDOUT);

 close LOG;


 --end of script --

 Thanks.

   -Shanti


 On Thu, Sep 27, 2012 at 2:11 PM, Bill Miller 
 millebi.subscripti...@gmail.com wrote:


  I agree; we have reproducible instances where PermGen is not set to our
 requirements on the Tomcat startup parameters and it will cause a lockup
 every time. Do some JMX monitoring and you may discover a memory spike
 that's killing Tomcat.

 Bill
 -Original Message-
 From: Jeff MAURY [mailto:jeffma...@gmail.com jeffma...@gmail.com]
 Sent: September-27-2012 2:01 PM
 To: Tomcat Users List
 Subject: Re: high CPU usage on tomcat 7

 This is probably due to out of memory, I have the same problem on my ubuntu
 ci machine Did you monitor your tomcat with jmx ?

 Jeff


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org



 --
 *Kirill Kireyev, PhD*
 Founder/CTO instaGrok.com http://www.instagrok.com
 kir...@instagrok.com
 Twitter: @instaGrok http://twitter.com/InstaGrok
 FB: facebook.com/instagrok http://www.facebook.com/instagrok
  http://www.instagrok.com



Re: high CPU usage on tomcat 7

2012-10-03 Thread Shanti Suresh
/sudo /bin/kill -3 $preview_pid);
 }

 # Capture Preview thread dump
 system (/usr/bin/jmap
 -dump:format=b,file=$preview_diag_dir/preview_heapdump.hprof $preview_pid);

 # Gather the top threads; keep around for reference on what other threads
 are running
 @top_cmd = (/usr/bin/top,  -H, -n1, -b);
 @sort_cmd = (/bin/sort, -r, -n, -k, 9,9);
 @sed_cmd = (/bin/sed, -n, '8,$p');
 system(@top_cmd 1 top_all_threads.log);

 # Get your tomcat user's threads, i.e. threads of user, webinf
 system('/usr/bin/tail -n+6 top_all_threads.log | /bin/sort -r -n -k 9,9 |
 /bin/grep webinf top_all_threads.log 1 top_user_webinf_threads.log');

 # Get the thread dump
 my @output=`/usr/bin/jstack -l ${preview_pid}`;
 open (my $file, '', 'preview_threaddump.txt') or die Could not open file:
 $!;
 print $file @output;
 close $file;

 open LOG, top_user_webinf_threads.log or die $!;
 open (STDOUT, | tee -ai top_cpu_preview_threads.log);
 print PID\tCPU\tMem\tJStack Info\n;
 while ($l = LOG) {
 chop $l;
 $pid = $l;
 $pid =~ s/webinf.*//g;
 $pid =~ s/ *//g;
 ##  Hex PID is available in the Sun HotSpot Stack Trace */
 $hex_pid = sprintf(%#x, $pid);
 @values = split(/\s+/, $l);
 $pct = $values[8];
 $mem = $values[9];
 # Debugger breakpoint:
 $DB::single = 1;

 # Find the Java thread that corresponds to the thread-id from the TOP output
 for my $j (@output) {
 chop $j;
 ($j =~ m/nid=$hex_pid/)print $hex_pid . \t . $pct . \t .
 $mem . \t .  $j . \n;
 }
 }

 close (STDOUT);

 close LOG;


 --end of script --

 Thanks.

   -Shanti


 On Thu, Sep 27, 2012 at 2:11 PM, Bill Miller 
 millebi.subscripti...@gmail.com wrote:


  I agree; we have reproducible instances where PermGen is not set to our
 requirements on the Tomcat startup parameters and it will cause a lockup
 every time. Do some JMX monitoring and you may discover a memory spike
 that's killing Tomcat.

 Bill
 -Original Message-
 From: Jeff MAURY [mailto:jeffma...@gmail.com jeffma...@gmail.com]
 Sent: September-27-2012 2:01 PM
 To: Tomcat Users List
 Subject: Re: high CPU usage on tomcat 7

 This is probably due to out of memory, I have the same problem on my ubuntu
 ci machine Did you monitor your tomcat with jmx ?

 Jeff


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org



 --
 *Kirill Kireyev, PhD*
 Founder/CTO instaGrok.com http://www.instagrok.com
 kir...@instagrok.com
 Twitter: @instaGrok http://twitter.com/InstaGrok
 FB: facebook.com/instagrok http://www.facebook.com/instagrok
  http://www.instagrok.com





Re: high CPU usage on tomcat 7

2012-10-03 Thread Shanti Suresh
($preview_diag_dir) or die Can't chdir into $preview_diag_dir $!\n;

 # Capture Preview thread dump
 my $process_pattern = preview;
 my $preview_pid = `/usr/bin/pgrep -f $process_pattern`;
 my $login = getpwuid($) ;
 if (kill 0, $preview_pid){
 #Possible to send a signal to the Preview Tomcat - either webinf
 or root
 my $count = kill 3, $preview_pid;
 }else {
 # Not possible to send a signal to the VCM - use sudo
 system (/usr/bin/sudo /bin/kill -3 $preview_pid);
 }

 # Capture Preview thread dump
 system (/usr/bin/jmap
 -dump:format=b,file=$preview_diag_dir/preview_heapdump.hprof $preview_pid);

 # Gather the top threads; keep around for reference on what other threads
 are running
 @top_cmd = (/usr/bin/top,  -H, -n1, -b);
 @sort_cmd = (/bin/sort, -r, -n, -k, 9,9);
 @sed_cmd = (/bin/sed, -n, '8,$p');
 system(@top_cmd 1 top_all_threads.log);

 # Get your tomcat user's threads, i.e. threads of user, webinf
 system('/usr/bin/tail -n+6 top_all_threads.log | /bin/sort -r -n -k 9,9 |
 /bin/grep webinf top_all_threads.log 1 top_user_webinf_threads.log');

 # Get the thread dump
 my @output=`/usr/bin/jstack -l ${preview_pid}`;
 open (my $file, '', 'preview_threaddump.txt') or die Could not open file:
 $!;
 print $file @output;
 close $file;

 open LOG, top_user_webinf_threads.log or die $!;
 open (STDOUT, | tee -ai top_cpu_preview_threads.log);
 print PID\tCPU\tMem\tJStack Info\n;
 while ($l = LOG) {
 chop $l;
 $pid = $l;
 $pid =~ s/webinf.*//g;
 $pid =~ s/ *//g;
 ##  Hex PID is available in the Sun HotSpot Stack Trace */
 $hex_pid = sprintf(%#x, $pid);
 @values = split(/\s+/, $l);
 $pct = $values[8];
 $mem = $values[9];
 # Debugger breakpoint:
 $DB::single = 1;

 # Find the Java thread that corresponds to the thread-id from the TOP output
 for my $j (@output) {
 chop $j;
 ($j =~ m/nid=$hex_pid/)print $hex_pid . \t . $pct . \t .
 $mem . \t .  $j . \n;
 }
 }

 close (STDOUT);

 close LOG;


 --end of script --

 Thanks.

   -Shanti


 On Thu, Sep 27, 2012 at 2:11 PM, Bill Miller 
 millebi.subscripti...@gmail.com wrote:


  I agree; we have reproducible instances where PermGen is not set to our
 requirements on the Tomcat startup parameters and it will cause a lockup
 every time. Do some JMX monitoring and you may discover a memory spike
 that's killing Tomcat.

 Bill
 -Original Message-
 From: Jeff MAURY [mailto:jeffma...@gmail.com jeffma...@gmail.com]
 Sent: September-27-2012 2:01 PM
 To: Tomcat Users List
 Subject: Re: high CPU usage on tomcat 7

 This is probably due to out of memory, I have the same problem on my ubuntu
 ci machine Did you monitor your tomcat with jmx ?

 Jeff


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org



 --
 *Kirill Kireyev, PhD*
 Founder/CTO instaGrok.com http://www.instagrok.com
 kir...@instagrok.com
 Twitter: @instaGrok http://twitter.com/InstaGrok
 FB: facebook.com/instagrok http://www.facebook.com/instagrok
  http://www.instagrok.com






Re: high CPU usage on tomcat 7

2012-10-03 Thread Kirill Kireyev
ser_webinf_threads.log');

# Get the thread dump
my @output=`/usr/bin/jstack -l ${preview_pid}`;
open (my $file, '', 'preview_threaddump.txt') or die "Could not open file:
$!";
print $file @output;
close $file;

open LOG, "top_user_webinf_threads.log" or die $!;
open (STDOUT, "| tee -ai top_cpu_preview_threads.log");
print "PID\tCPU\tMem\tJStack Info\n";
while ($l = LOG) {
chop $l;
$pid = $l;
$pid =~ s/webinf.*//g;
    $pid =~ s/ *//g;
##  Hex PID is available in the Sun HotSpot Stack Trace */
$hex_pid = sprintf("%#x", $pid);
@values = split(/\s+/, $l);
$pct = $values[8];
$mem = $values[9];
# Debugger breakpoint:
$DB::single = 1;

# Find the Java thread that corresponds to the thread-id from the TOP output
for my $j (@output) {
chop $j;
($j =~ m/nid=$hex_pid/)print $hex_pid . "\t" . $pct . "\t" .
$mem . "\t" .  $j . "\n";
}
}

close (STDOUT);

close LOG;


--end of script --

Thanks.

  -Shanti


On Thu, Sep 27, 2012 at 2:11 PM, Bill Miller 
millebi.subscripti...@gmail.com wrote:


  
I agree; we have reproducible instances where PermGen is not set to our
requirements on the Tomcat startup parameters and it will cause a "lockup"
every time. Do some JMX monitoring and you may discover a memory spike
that's killing Tomcat.

Bill
-Original Message-
From: Jeff MAURY [mailto:jeffma...@gmail.com]
Sent: September-27-2012 2:01 PM
To: Tomcat Users List
Subject: Re: high CPU usage on tomcat 7

This is probably due to out of memory, I have the same problem on my ubuntu
ci machine Did you monitor your tomcat with jmx ?

Jeff


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



  



  


  -- 
Kirill Kireyev, PhD
Founder/CTO instaGrok.com
kir...@instagrok.com
Twitter: @instaGrok
FB: facebook.com/instagrok
  
  
  

  
  






Re: high CPU usage on tomcat 7

2012-10-02 Thread Kirill Kireyev
tem("@top_cmd 1 top_all_threads.log");

# Get your tomcat user's threads, i.e. threads of user, "webinf"
system('/usr/bin/tail -n+6 top_all_threads.log | /bin/sort -r -n -k "9,9" |
/bin/grep webinf top_all_threads.log 1 top_user_webinf_threads.log');

# Get the thread dump
my @output=`/usr/bin/jstack -l ${preview_pid}`;
open (my $file, '', 'preview_threaddump.txt') or die "Could not open file:
$!";
print $file @output;
close $file;

open LOG, "top_user_webinf_threads.log" or die $!;
open (STDOUT, "| tee -ai top_cpu_preview_threads.log");
print "PID\tCPU\tMem\tJStack Info\n";
while ($l = LOG) {
chop $l;
$pid = $l;
$pid =~ s/webinf.*//g;
$pid =~ s/ *//g;
##  Hex PID is available in the Sun HotSpot Stack Trace */
$hex_pid = sprintf("%#x", $pid);
@values = split(/\s+/, $l);
$pct = $values[8];
$mem = $values[9];
# Debugger breakpoint:
$DB::single = 1;

# Find the Java thread that corresponds to the thread-id from the TOP output
for my $j (@output) {
chop $j;
($j =~ m/nid=$hex_pid/)print $hex_pid . "\t" . $pct . "\t" .
$mem . "\t" .  $j . "\n";
}
}

close (STDOUT);

close LOG;


--end of script --

Thanks.

  -Shanti


On Thu, Sep 27, 2012 at 2:11 PM, Bill Miller 
millebi.subscripti...@gmail.com wrote:


  
I agree; we have reproducible instances where PermGen is not set to our
requirements on the Tomcat startup parameters and it will cause a "lockup"
every time. Do some JMX monitoring and you may discover a memory spike
that's killing Tomcat.

Bill
-Original Message-
From: Jeff MAURY [mailto:jeffma...@gmail.com]
Sent: September-27-2012 2:01 PM
To: Tomcat Users List
Subject: Re: high CPU usage on tomcat 7

This is probably due to out of memory, I have the same problem on my ubuntu
ci machine Did you monitor your tomcat with jmx ?

Jeff


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



  
  




-- 
  Kirill Kireyev, PhD
  Founder/CTO instaGrok.com
  kir...@instagrok.com
  Twitter: @instaGrok
  FB: facebook.com/instagrok
  
  



Re: high CPU usage on tomcat 7

2012-09-30 Thread Jeff MAURY
I don't think a CPU loop will make Tomcat stop responding to requests;
it will make it very slow to respond.
But a shortage of memory is hard to recover from.

Jeff

On Friday, 28 September 2012, mailingl...@j-b-s.de wrote:

 Maybe an infinite loop? We observed something similar due to a bug in the
 java regex impl and certain user input causes this regex looping behaviour.
 As this locked one core but to the user it simply looked like server was
 not responding, guess what happened? Right: they press refresh page and next
 core gone :-)
 So running state might be evil, too.
 Switch on GC logging, maybe its just related to a full gc. In case this
 happens again take a thread dump and search similarities. Take multiple
 dumps in a 5 sec interval. Try to find long running threads (in our case we
 noticed the regex bug)

 Jens

 Sent from my iPhone

 On 27.09.2012, at 22:05, Kirill Kireyev kir...@instagrok.com wrote:

  Thanks for all the advice everyone! There is a possibility that the CPU
 is caused by an app thread - I am looking into that possibility. Will let
 you know when I find out more.
 
  Thanks,
  Kirill
 
  On 9/27/12 12:17 PM, Shanti Suresh wrote:
  Hi Kirill,
 
  Like Mark, Bill and Jeff said, those threads are normal
 request-processing
  threads.  I have included a script that might help with isolating high
 CPU
  issues with Tomcat.
 
  Also, I think it might be helpful to see how the Java heap is
 performing as
  well.
  Please bring up Jconsole and let it run over the week.  Inspect the
 graphs
  for Memory, CPU and threads.  Since you say that high CPU occurs
  intermittently several times during the week and clears itself, I
 wonder if
  it is somehow related with the garbage collection options you are using
 for
  the server.  Or it may be a code-related problem.
 
  Things to look at may include:
 
  (1) Are high CPU times related to Java heap reductions happening at the
  same time?  == GC possibly needs tuning
  (2) Are high CPU times related to increase in thread usage?  ==
 possible
  livelock in looping code?
  (3) how many network connections come into the Tomcat server during
  high-CPU times?Possible overload-related?
 
  Here is the script.  I made a couple of small changes, for e.g.,
 changing
  the username.  But didn't test it after the change.  During high-CPU
 times,
  invoke the script a few times, say 30 seconds apart.  And then compare
 the
  thread-dumps.  I like to use TDA for thread-dump analysis of Tomcat
  thread-dumps.
 
  Mark, et al, please feel free to help me refine this script.  I would
 like
  to have a script to catch STUCK threads too :-)  Let me know if anyone
 has
  a script already.  Thanks.
 
  --high_cpu_diagnostics.pl:-
  #!/usr/bin/perl
  #
 
  use Cwd;
 
  # Make a dated directory for capturing current diagnostics
  my ($sec,$min,$hour,$mday,$mon,$year,
$wday,$yday,$isdst) = localtime time;
  $year += 1900;
  $mon += 1;
  my $pwd = cwd();
  my $preview_diag_dir =
 /tmp/Preview_Diag.$year-$mon-$mday-$hour:$min:$sec;
  print $preview_diag_dir\n;
  mkdir $preview_diag_dir, 0755;
  chdir($preview_diag_dir) or die Can't chdir into $preview_diag_dir
 $!\n;
 
  # Capture Preview thread dump
  my $process_pattern = preview;
  my $preview_pid = `/usr/bin/pgrep -f $process_pattern`;
  my $login = getpwuid($) ;
  if (kill 0, $preview_pid){
  #Possible to send a signal to the Preview Tomcat - either
 webinf
  or root
  my $count = kill 3, $preview_pid;
  }else {
  # Not possible to send a signal to the VCM - use sudo
  system (/usr/bin/sudo /bin/kill -3 $preview_pid);
  }
 
  # Capture Preview thread dump
  system (/usr/bin/jmap
  -dump:format=b,file=$preview_diag_dir/preview_heapdump.hprof
 $preview_pid);
 
  # Gather the top threads; keep around for reference on what other
 threads
  are running
  @top_cmd = (/usr/bin/top,  -H, -n1, -b);
  @sort_cmd = (/bin/sort, -r, -n, -k, 9,9);
  @sed_cmd = (/bin/sed, -n, '8,$p');
  system(@top_cmd 1 top_all_threads.log);
 
  # Get your tomcat user's threads, i.e. threads of user, webinf
  system('/usr/bin/tail -n+6 top_all_threads.log | /bin/sort -r -n -k
 9,9 |
  /bin/grep webinf top_all_threads.log 1 top_user_webinf_threads.log');
 
  # Get the thread dump



-- 
Jeff MAURY


Legacy code often differs from its suggested alternative by actually
working and scaling.
 - Bjarne Stroustrup

http://www.jeffmaury.com
http://riadiscuss.jeffmaury.com
http://www.twitter.com/jeffmaury


Re: high CPU usage on tomcat 7

2012-09-30 Thread mailingl...@j-b-s.de
Well, if you have 4 cores and all cores are looping, Tomcat definitely will not 
respond any more...

Sent from my iPad

On 30.09.2012 at 12:42, Jeff MAURY jeffma...@jeffmaury.com wrote:

 I don't think a cpu loop will make tomcat stopping responding to requests
 I will make it very slow to respond
 But a shortage on memory is hard to recover
 
 Jeff
 
 On Friday, 28 September 2012, mailingl...@j-b-s.de wrote:
 
 Maybe an infinite loop? We observed something similar due to a bug in the
 java regex impl and certain user input causes this regex looping behaviour.
 As this locked one core but to the user it simply looked like server was
 not responding, guess what happend? Right: they press refresh page and next
 core gone :-)
 So running state might be evil, too.
 Switch on GC logging, maybe its just related to a full gc. In case this
 happens again take a thread dump and search similarities. Take multiple
 dumps in a 5 sec interval. Try to find long running threads (in our case we
 noticed the regex bug)
 
 Jens
 
 Sent from my iPhone
 
 On 27.09.2012, at 22:05, Kirill Kireyev kir...@instagrok.com wrote:
 
 Thanks for all the advice everyone! There is a possibility that the CPU
 is caused by an app thread - I am looking into that possibility. Will let
 you know when I find out more.
 
 Thanks,
 Kirill
 
 On 9/27/12 12:17 PM, Shanti Suresh wrote:
 Hi Kirill,
 
 Like Mark, Bill and Jeff said, those threads are normal
 request-processing
 threads.  I have included a script that might help with isolating high
 CPU
 issues with Tomcat.
 
 Also, I think it might be helpful to see how the Java heap is
 performing as
 well.
 Please bring up Jconsole and let it run over the week.  Inspect the
 graphs
 for Memory, CPU and threads.  Since you say that high CPU occurs
 intermittently several times during the week and clears itself, I
 wonder if
 it is somehow related with the garbage collection options you are using
 for
 the server.  Or it may be a code-related problem.
 
 Things to look at may include:
 
 (1) Are high CPU times related to Java heap reductions happening at the
 same time?  == GC possibly needs tuning
 (2) Are high CPU times related to increase in thread usage?  ==
 possible
 livelock in looping code?
 (3) how many network connections come into the Tomcat server during
 high-CPU times?Possible overload-related?
 
 Here is the script.  I made a couple of small changes, for e.g.,
 changing
 the username.  But didn't test it after the change.  During high-CPU
 times,
 invoke the script a few times, say 30 seconds apart.  And then compare
 the
 thread-dumps.  I like to use TDA for thread-dump analysis of Tomcat
 thread-dumps.
 
 Mark, et al, please feel free to help me refine this script.  I would
 like
 to have a script to catch STUCK threads too :-)  Let me know if anyone
 has
 a script already.  Thanks.
 
 --high_cpu_diagnostics.pl:-
 #!/usr/bin/perl
 #
 
 use Cwd;
 
 # Make a dated directory for capturing current diagnostics
 my ($sec,$min,$hour,$mday,$mon,$year,
  $wday,$yday,$isdst) = localtime time;
 $year += 1900;
 $mon += 1;
 my $pwd = cwd();
 my $preview_diag_dir =
 /tmp/Preview_Diag.$year-$mon-$mday-$hour:$min:$sec;
 print $preview_diag_dir\n;
 mkdir $preview_diag_dir, 0755;
 chdir($preview_diag_dir) or die Can't chdir into $preview_diag_dir
 $!\n;
 
 # Capture Preview thread dump
 my $process_pattern = preview;
 my $preview_pid = `/usr/bin/pgrep -f $process_pattern`;
 my $login = getpwuid($) ;
 if (kill 0, $preview_pid){
#Possible to send a signal to the Preview Tomcat - either
 webinf
 or root
my $count = kill 3, $preview_pid;
 }else {
# Not possible to send a signal to the VCM - use sudo
system (/usr/bin/sudo /bin/kill -3 $preview_pid);
 }
 
 # Capture Preview thread dump
 system (/usr/bin/jmap
 -dump:format=b,file=$preview_diag_dir/preview_heapdump.hprof
 $preview_pid);
 
 # Gather the top threads; keep around for reference on what other
 threads
 are running
 @top_cmd = (/usr/bin/top,  -H, -n1, -b);
 @sort_cmd = (/bin/sort, -r, -n, -k, 9,9);
 @sed_cmd = (/bin/sed, -n, '8,$p');
 system(@top_cmd 1 top_all_threads.log);
 
 # Get your tomcat user's threads, i.e. threads of user, webinf
 system('/usr/bin/tail -n+6 top_all_threads.log | /bin/sort -r -n -k
 9,9 |
 /bin/grep webinf top_all_threads.log 1 top_user_webinf_threads.log');
 
 # Get the thread dump
 
 
 
 -- 
 Jeff MAURY
 
 
 Legacy code often differs from its suggested alternative by actually
 working and scaling.
 - Bjarne Stroustrup
 
 http://www.jeffmaury.com
 http://riadiscuss.jeffmaury.com
 http://www.twitter.com/jeffmaury

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: high CPU usage on tomcat 7

2012-09-28 Thread mailingl...@j-b-s.de
:11 PM, Bill Miller 
 millebi.subscripti...@gmail.com wrote:
 
 I agree; we have reproducible instances where PermGen is not set to our
 requirements on the Tomcat startup parameters and it will cause a lockup
 every time. Do some JMX monitoring and you may discover a memory spike
 that's killing Tomcat.
 
 Bill
 -Original Message-
 From: Jeff MAURY [mailto:jeffma...@gmail.com]
 Sent: September-27-2012 2:01 PM
 To: Tomcat Users List
 Subject: Re: high CPU usage on tomcat 7
 
 This is probably due to out of memory, I have the same problem on my ubuntu
 ci machine Did you monitor your tomcat with jmx ?
 
 Jeff
 
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org
 
 
 
 
 
 -- 
 Kirill Kireyev, PhD
 Founder/CTO instaGrok.com
 kir...@instagrok.com
 Twitter: @instaGrok
 FB: facebook.com/instagrok


high CPU usage on tomcat 7

2012-09-27 Thread Kirill Kireyev

Hi!

I'm periodically getting unduly high (100%) CPU usage by the Tomcat 
process on my server. This problem happens intermittently, several 
times a week. When the server goes into this high-CPU state it does not come 
back (and becomes unresponsive to new requests), and the only recourse 
is to restart the Tomcat process.


I'm using Tomcat 7.0.30, with APR and Apache web server, on a Ubuntu 
11.10 server with 32g of RAM / 8 CPUs.


I've done several jstack stack traces when this occurs, and what I 
consistently see, are the connector threads in the RUNNABLE state every 
time, i.e.:


ajp-apr-8009-Acceptor-0 daemon prio=10 tid=0x010a1000 nid=0x539 
runnable [0x7f9364f8e000]

   java.lang.Thread.State: RUNNABLE
at org.apache.tomcat.jni.Socket.accept(Native Method)
at 
org.apache.tomcat.util.net.AprEndpoint$Acceptor.run(AprEndpoint.java:1013)

at java.lang.Thread.run(Thread.java:722)

http-apr-8443-Acceptor-0 daemon prio=10 tid=0x0109b800 
nid=0x535 runnable [0x7f936551]

   java.lang.Thread.State: RUNNABLE
at org.apache.tomcat.jni.Socket.accept(Native Method)
at 
org.apache.tomcat.util.net.AprEndpoint$Acceptor.run(AprEndpoint.java:1013)

at java.lang.Thread.run(Thread.java:722)

http-apr-8080-Acceptor-0 daemon prio=10 tid=0x015ab000 
nid=0x531 runnable [0x7f9365a92000]

   java.lang.Thread.State: RUNNABLE
at org.apache.tomcat.jni.Socket.accept(Native Method)
at 
org.apache.tomcat.util.net.AprEndpoint$Acceptor.run(AprEndpoint.java:1013)

at java.lang.Thread.run(Thread.java:722)

Other threads are in RUNNABLE too in different cases, but these are the 
ones that are always there when the high CPU occurs. That's why I'm 
starting to think it has something to do with Tomcat.


Can anyone shed some light on this? My current Connector configurations 
in server.xml are:
 <Connector port="8080"
    protocol="org.apache.coyote.http11.Http11AprProtocol"
    connectionTimeout="2"
    maxThreads="500" minSpareThreads="10" maxSpareThreads="20"
    redirectPort="8443"
    pollTime="10" />
...
<Connector port="8443"
    protocol="org.apache.coyote.http11.Http11AprProtocol"
    maxThreads="200" scheme="https" secure="true"
    SSLEnabled="true"
    SSLCACertificateFile=
    SSLCertificateKeyFile=
    SSLCertificateFile=***
    enableLookups="false" clientAuth="false" sslProtocol="TLS"
    pollTime="10" />
...
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
    acceptCount="100" connectionTimeout="5000"
    keepAliveTimeout="2"
    disableUploadTimeout="true" enableLookups="false"
    maxHttpHeaderSize="8192"
    maxSpareThreads="75" maxThreads="150"
    minSpareThreads="25"
    executor="default" />

Thanks a lot!
-Kirill

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: high CPU usage on tomcat 7

2012-09-27 Thread Mark Thomas


Kirill Kireyev kir...@instagrok.com wrote:

Hi!

I'm periodically getting unduly high (100%) CPU usage by the tomcat 
process on my server. This problems happens intermittently, several 
times a week. When the server goes into this high CPU it does not come 
back (and becomes unresponsive to new requests), and the only recourse 
is to restart the tomcat process.

I'm using Tomcat 7.0.30, with APR and Apache web server, on a Ubuntu 
11.10 server with 32g of RAM / 8 CPUs.

I've done several jstack stack traces when this occurs, and what I 
consistently see, are the connector threads in the RUNNABLE state every

time, i.e.:

ajp-apr-8009-Acceptor-0 daemon prio=10 tid=0x010a1000
nid=0x539 
runnable [0x7f9364f8e000]
java.lang.Thread.State: RUNNABLE
 at org.apache.tomcat.jni.Socket.accept(Native Method)
 at 
org.apache.tomcat.util.net.AprEndpoint$Acceptor.run(AprEndpoint.java:1013)
 at java.lang.Thread.run(Thread.java:722)

http-apr-8443-Acceptor-0 daemon prio=10 tid=0x0109b800 
nid=0x535 runnable [0x7f936551]
java.lang.Thread.State: RUNNABLE
 at org.apache.tomcat.jni.Socket.accept(Native Method)
 at 
org.apache.tomcat.util.net.AprEndpoint$Acceptor.run(AprEndpoint.java:1013)
 at java.lang.Thread.run(Thread.java:722)

http-apr-8080-Acceptor-0 daemon prio=10 tid=0x015ab000 
nid=0x531 runnable [0x7f9365a92000]
java.lang.Thread.State: RUNNABLE
 at org.apache.tomcat.jni.Socket.accept(Native Method)
 at 
org.apache.tomcat.util.net.AprEndpoint$Acceptor.run(AprEndpoint.java:1013)
 at java.lang.Thread.run(Thread.java:722)

Other threads are in RUNNBLE too in different cases, but these are the 
one that are always there when the high CPU occurs. That's why I'm 
starting to think it has something to do with Tomcat.

Those threads look OK to me. As acceptor threads, that is what I would expect.

Can anyone shed some light on this?

With the information you have provided? Very unlikely.

What you need to do is use ps to look at CPU usage per thread (not per process) 
and then match the offending thread ID to the thread ID in the thread dump.
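For example (the pid and the 0x539 value are just placeholders; the exact column layout of 
ps varies a little between distributions):

# 1. list this JVM's threads by CPU; LWP is the per-thread id
ps -L -o lwp,pcpu,comm -p "$TOMCAT_PID" --sort=-pcpu | head -20

# 2. convert the hottest thread id to hex, e.g. 1337 -> 0x539
printf '%#x\n' 1337

# 3. find that thread in the thread dump via its nid= field
/usr/bin/jstack -l "$TOMCAT_PID" | grep -A 15 'nid=0x539'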

Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: high CPU usage on tomcat 7

2012-09-27 Thread Jeff MAURY
This is probably due to out of memory; I have the same problem on my Ubuntu
CI machine.
Did you monitor your Tomcat with JMX?
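For example, remote JMX access for JConsole can be switched on with the standard 
com.sun.management properties; a minimal sketch with an arbitrary port and with 
authentication/SSL disabled, which is only acceptable on a trusted network:

# e.g. in $CATALINA_HOME/bin/setenv.sh
export CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"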

Jeff
On 27 Sept. 2012 17:39, Kirill Kireyev kir...@instagrok.com wrote:

 Hi!

 I'm periodically getting unduly high (100%) CPU usage by the tomcat
 process on my server. This problems happens intermittently, several times a
 week. When the server goes into this high CPU it does not come back (and
 becomes unresponsive to new requests), and the only recourse is to restart
 the tomcat process.

 I'm using Tomcat 7.0.30, with APR and Apache web server, on a Ubuntu 11.10
 server with 32g of RAM / 8 CPUs.

 I've done several jstack stack traces when this occurs, and what I
 consistently see, are the connector threads in the RUNNABLE state every
 time, i.e.:

 ajp-apr-8009-Acceptor-0 daemon prio=10 tid=0x010a1000 nid=0x539
 runnable [0x7f9364f8e000]
java.lang.Thread.State: RUNNABLE
 at org.apache.tomcat.jni.Socket.**accept(Native Method)
 at org.apache.tomcat.util.net.**AprEndpoint$Acceptor.run(**
 AprEndpoint.java:1013)
 at java.lang.Thread.run(Thread.**java:722)

 http-apr-8443-Acceptor-0 daemon prio=10 tid=0x0109b800 nid=0x535
 runnable [0x7f936551]
java.lang.Thread.State: RUNNABLE
 at org.apache.tomcat.jni.Socket.**accept(Native Method)
 at org.apache.tomcat.util.net.**AprEndpoint$Acceptor.run(**
 AprEndpoint.java:1013)
 at java.lang.Thread.run(Thread.**java:722)

 http-apr-8080-Acceptor-0 daemon prio=10 tid=0x015ab000 nid=0x531
 runnable [0x7f9365a92000]
java.lang.Thread.State: RUNNABLE
 at org.apache.tomcat.jni.Socket.**accept(Native Method)
 at org.apache.tomcat.util.net.**AprEndpoint$Acceptor.run(**
 AprEndpoint.java:1013)
 at java.lang.Thread.run(Thread.**java:722)

 Other threads are in RUNNBLE too in different cases, but these are the one
 that are always there when the high CPU occurs. That's why I'm starting to
 think it has something to do with Tomcat.

 Can anyone shed some light on this? My current Connector configurations in
 server.xml are:
  Connector port=8080 protocol=org.apache.coyote.**
 http11.Http11AprProtocol
connectionTimeout=2
maxThreads=500 minSpareThreads=10 maxSpareThreads=20
redirectPort=8443
pollTime=10 /
 ...
 Connector port=8443 protocol=org.apache.coyote.**
 http11.Http11AprProtocol
 maxThreads=200 scheme=https secure=true SSLEnabled=true
 SSLCACertificateFile=**
 SSLCertificateKeyFile=**
 SSLCertificateFile=***
 enableLookups=false clientAuth=false sslProtocol=TLS
 pollTime=10 /
 ...
 Connector port=8009 protocol=AJP/1.3 redirectPort=8443
acceptCount=100 connectionTimeout=5000
 keepAliveTimeout=2
disableUploadTimeout=true enableLookups=false
maxHttpHeaderSize=8192
maxSpareThreads=75 maxThreads=150
minSpareThreads=25
executor=default /

 Thanks a lot!
 -Kirill

 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




RE: high CPU usage on tomcat 7

2012-09-27 Thread Bill Miller
I agree; we have reproducible instances where PermGen is not set to our
requirements on the Tomcat startup parameters and it will cause a lockup
every time. Do some JMX monitoring and you may discover a memory spike
that's killing Tomcat.

Bill
-Original Message-
From: Jeff MAURY [mailto:jeffma...@gmail.com] 
Sent: September-27-2012 2:01 PM
To: Tomcat Users List
Subject: Re: high CPU usage on tomcat 7

This is probably due to out of memory, I have the same problem on my ubuntu
ci machine Did you monitor your tomcat with jmx ?

Jeff
On 27 Sept. 2012 17:39, Kirill Kireyev kir...@instagrok.com wrote:

 Hi!

 I'm periodically getting unduly high (100%) CPU usage by the tomcat 
 process on my server. This problems happens intermittently, several 
 times a week. When the server goes into this high CPU it does not come 
 back (and becomes unresponsive to new requests), and the only recourse 
 is to restart the tomcat process.

 I'm using Tomcat 7.0.30, with APR and Apache web server, on a Ubuntu 
 11.10 server with 32g of RAM / 8 CPUs.

 I've done several jstack stack traces when this occurs, and what I 
 consistently see, are the connector threads in the RUNNABLE state 
 every time, i.e.:

 ajp-apr-8009-Acceptor-0 daemon prio=10 tid=0x010a1000 
 nid=0x539 runnable [0x7f9364f8e000]
java.lang.Thread.State: RUNNABLE
 at org.apache.tomcat.jni.Socket.**accept(Native Method)
 at org.apache.tomcat.util.net.**AprEndpoint$Acceptor.run(**
 AprEndpoint.java:1013)
 at java.lang.Thread.run(Thread.**java:722)

 http-apr-8443-Acceptor-0 daemon prio=10 tid=0x0109b800 
 nid=0x535 runnable [0x7f936551]
java.lang.Thread.State: RUNNABLE
 at org.apache.tomcat.jni.Socket.**accept(Native Method)
 at org.apache.tomcat.util.net.**AprEndpoint$Acceptor.run(**
 AprEndpoint.java:1013)
 at java.lang.Thread.run(Thread.**java:722)

 http-apr-8080-Acceptor-0 daemon prio=10 tid=0x015ab000 
 nid=0x531 runnable [0x7f9365a92000]
java.lang.Thread.State: RUNNABLE
 at org.apache.tomcat.jni.Socket.**accept(Native Method)
 at org.apache.tomcat.util.net.**AprEndpoint$Acceptor.run(**
 AprEndpoint.java:1013)
 at java.lang.Thread.run(Thread.**java:722)

 Other threads are in RUNNBLE too in different cases, but these are the 
 one that are always there when the high CPU occurs. That's why I'm 
 starting to think it has something to do with Tomcat.

 Can anyone shed some light on this? My current Connector 
 configurations in server.xml are:
  Connector port=8080 protocol=org.apache.coyote.** 
 http11.Http11AprProtocol
connectionTimeout=2
maxThreads=500 minSpareThreads=10 maxSpareThreads=20
redirectPort=8443
pollTime=10 /
 ...
 Connector port=8443 protocol=org.apache.coyote.** 
 http11.Http11AprProtocol
 maxThreads=200 scheme=https secure=true
SSLEnabled=true
 SSLCACertificateFile=**
 SSLCertificateKeyFile=**
 SSLCertificateFile=***
 enableLookups=false clientAuth=false sslProtocol=TLS
 pollTime=10 /
 ...
 Connector port=8009 protocol=AJP/1.3 redirectPort=8443
acceptCount=100 connectionTimeout=5000
 keepAliveTimeout=2
disableUploadTimeout=true enableLookups=false
maxHttpHeaderSize=8192
maxSpareThreads=75 maxThreads=150
minSpareThreads=25
executor=default /

 Thanks a lot!
 -Kirill

 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: high CPU usage on tomcat 7

2012-09-27 Thread Shanti Suresh
Hi Kirill,

Like Mark, Bill and Jeff said, those threads are normal request-processing
threads.  I have included a script that might help with isolating high CPU
issues with Tomcat.

Also, I think it might be helpful to see how the Java heap is performing as
well.
Please bring up Jconsole and let it run over the week.  Inspect the graphs
for Memory, CPU and threads.  Since you say that high CPU occurs
intermittently several times during the week and clears itself, I wonder if
it is somehow related to the garbage collection options you are using for
the server.  Or it may be a code-related problem.

Things to look at may include:

(1) Are high CPU times related to Java heap reductions happening at the
same time?  == GC possibly needs tuning
(2) Are high CPU times related to increase in thread usage?  == possible
livelock in looping code?
(3) How many network connections come into the Tomcat server during
high-CPU times?  Possibly overload-related?

Here is the script.  I made a couple of small changes, for e.g., changing
the username.  But didn't test it after the change.  During high-CPU times,
invoke the script a few times, say 30 seconds apart.  And then compare the
thread-dumps.  I like to use TDA for thread-dump analysis of Tomcat
thread-dumps.

Mark, et al, please feel free to help me refine this script.  I would like
to have a script to catch STUCK threads too :-)  Let me know if anyone has
a script already.  Thanks.

--high_cpu_diagnostics.pl:-
#!/usr/bin/perl
#

use Cwd;

# Make a dated directory for capturing current diagnostics
my ($sec,$min,$hour,$mday,$mon,$year,
  $wday,$yday,$isdst) = localtime time;
$year += 1900;
$mon += 1;
my $pwd = cwd();
my $preview_diag_dir = "/tmp/Preview_Diag.$year-$mon-$mday-$hour:$min:$sec";
print "$preview_diag_dir\n";
mkdir $preview_diag_dir, 0755;
chdir($preview_diag_dir) or die "Can't chdir into $preview_diag_dir $!\n";

# Capture Preview thread dump
my $process_pattern = "preview";
my $preview_pid = `/usr/bin/pgrep -f $process_pattern`;
my $login = getpwuid($<) ;
if (kill 0, $preview_pid){
# Possible to send a signal to the Preview Tomcat - either "webinf" or "root"
my $count = kill 3, $preview_pid;
}else {
# Not possible to send a signal to the VCM - use "sudo"
system ("/usr/bin/sudo /bin/kill -3 $preview_pid");
}

# Capture Preview heap dump
system ("/usr/bin/jmap -dump:format=b,file=$preview_diag_dir/preview_heapdump.hprof $preview_pid");

# Gather the top threads; keep around for reference on what other threads are running
@top_cmd = ("/usr/bin/top",  "-H", "-n1", "-b");
@sort_cmd = ("/bin/sort", "-r", "-n", "-k", "9,9");
@sed_cmd = ("/bin/sed", "-n", "'8,$p'");
system("@top_cmd 1> top_all_threads.log");

# Get your tomcat user's threads, i.e. threads of user, "webinf"
system('/usr/bin/tail -n+6 top_all_threads.log | /bin/sort -r -n -k "9,9" |
/bin/grep webinf top_all_threads.log 1> top_user_webinf_threads.log');

# Get the thread dump
my @output=`/usr/bin/jstack -l ${preview_pid}`;
open (my $file, '>', 'preview_threaddump.txt') or die "Could not open file: $!";
print $file @output;
close $file;

open LOG, "top_user_webinf_threads.log" or die $!;
open (STDOUT, "| tee -ai top_cpu_preview_threads.log");
print "PID\tCPU\tMem\tJStack Info\n";
while ($l = <LOG>) {
chop $l;
$pid = $l;
$pid =~ s/webinf.*//g;
$pid =~ s/ *//g;
##  Hex PID is available in the Sun HotSpot Stack Trace
$hex_pid = sprintf("%#x", $pid);
@values = split(/\s+/, $l);
$pct = $values[8];
$mem = $values[9];
# Debugger breakpoint:
$DB::single = 1;

# Find the Java thread that corresponds to the thread-id from the TOP output
for my $j (@output) {
chop $j;
($j =~ m/nid=$hex_pid/) && print $hex_pid . "\t" . $pct . "\t" .
$mem . "\t" .  $j . "\n";
}
}

close (STDOUT);

close LOG;


--end of script --

Thanks.

  -Shanti


On Thu, Sep 27, 2012 at 2:11 PM, Bill Miller 
millebi.subscripti...@gmail.com wrote:

 I agree; we have reproducible instances where PermGen is not set to our
 requirements on the Tomcat startup parameters and it will cause a lockup
 every time. Do some JMX monitoring and you may discover a memory spike
 that's killing Tomcat.

 Bill
 -Original Message-
 From: Jeff MAURY [mailto:jeffma...@gmail.com]
 Sent: September-27-2012 2:01 PM
 To: Tomcat Users List
 Subject: Re: high CPU usage on tomcat 7

 This is probably due to out of memory, I have the same problem on my ubuntu
 ci machine Did you monitor your tomcat with jmx ?

 Jeff


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org




Re: high CPU usage on tomcat 7

2012-09-27 Thread Shanti Suresh
Hi Kirill,

I was mistaken in saying that the CPU issue clears itself.  Sorry.  It may or
may not be related to garbage-collection settings then.

   -Shanti

On Thu, Sep 27, 2012 at 2:17 PM, Shanti Suresh sha...@umich.edu wrote:

 Hi Kirill,

 Like Mark, Bill and Jeff said, those threads are normal request-processing
 threads.  I have included a script that might help with isolating high CPU
 issues with Tomcat.

 Also, I think it might be helpful to see how the Java heap is performing
 as well.
 Please bring up Jconsole and let it run over the week.  Inspect the graphs
 for Memory, CPU and threads.  Since you say that high CPU occurs
 intermittently several times during the week and clears itself, I wonder if
 it is somehow related with the garbage collection options you are using for
 the server.  Or it may be a code-related problem.

 Things to look at may include:

 (1) Are high CPU times related to Java heap reductions happening at the
 same time?  == GC possibly needs tuning
 (2) Are high CPU times related to increase in thread usage?  == possible
 livelock in looping code?
 (3) how many network connections come into the Tomcat server during
 high-CPU times?Possible overload-related?

 Here is the script.  I made a couple of small changes, for e.g., changing
 the username.  But didn't test it after the change.  During high-CPU times,
 invoke the script a few times, say 30 seconds apart.  And then compare the
 thread-dumps.  I like to use TDA for thread-dump analysis of Tomcat
 thread-dumps.

 Mark, et al, please feel free to help me refine this script.  I would like
 to have a script to catch STUCK threads too :-)  Let me know if anyone has
 a script already.  Thanks.

 --high_cpu_diagnostics.pl:-
 #!/usr/bin/perl
 #

 use Cwd;

 # Make a dated directory for capturing current diagnostics
 my ($sec,$min,$hour,$mday,$mon,$year,
   $wday,$yday,$isdst) = localtime time;
 $year += 1900;
 $mon += 1;
 my $pwd = cwd();
 my $preview_diag_dir =
 /tmp/Preview_Diag.$year-$mon-$mday-$hour:$min:$sec;
 print $preview_diag_dir\n;
 mkdir $preview_diag_dir, 0755;
 chdir($preview_diag_dir) or die Can't chdir into $preview_diag_dir $!\n;

 # Capture Preview thread dump
 my $process_pattern = preview;
 my $preview_pid = `/usr/bin/pgrep -f $process_pattern`;
 my $login = getpwuid($) ;
 if (kill 0, $preview_pid){
 #Possible to send a signal to the Preview Tomcat - either webinf
 or root
 my $count = kill 3, $preview_pid;
 }else {
 # Not possible to send a signal to the VCM - use sudo
 system (/usr/bin/sudo /bin/kill -3 $preview_pid);
 }

 # Capture Preview thread dump
 system (/usr/bin/jmap
 -dump:format=b,file=$preview_diag_dir/preview_heapdump.hprof $preview_pid);

 # Gather the top threads; keep around for reference on what other threads
 are running
 @top_cmd = (/usr/bin/top,  -H, -n1, -b);
 @sort_cmd = (/bin/sort, -r, -n, -k, 9,9);
 @sed_cmd = (/bin/sed, -n, '8,$p');
 system(@top_cmd 1 top_all_threads.log);

 # Get your tomcat user's threads, i.e. threads of user, webinf
 system('/usr/bin/tail -n+6 top_all_threads.log | /bin/sort -r -n -k 9,9
 | /bin/grep webinf top_all_threads.log 1 top_user_webinf_threads.log');

 # Get the thread dump
 my @output=`/usr/bin/jstack -l ${preview_pid}`;
 open (my $file, '', 'preview_threaddump.txt') or die Could not open
 file: $!;
 print $file @output;
 close $file;

 open LOG, top_user_webinf_threads.log or die $!;
 open (STDOUT, | tee -ai top_cpu_preview_threads.log);
 print PID\tCPU\tMem\tJStack Info\n;
 while ($l = LOG) {
 chop $l;
 $pid = $l;
 $pid =~ s/webinf.*//g;
 $pid =~ s/ *//g;
 ##  Hex PID is available in the Sun HotSpot Stack Trace */
 $hex_pid = sprintf(%#x, $pid);
 @values = split(/\s+/, $l);
 $pct = $values[8];
 $mem = $values[9];
 # Debugger breakpoint:
 $DB::single = 1;

 # Find the Java thread that corresponds to the thread-id from the TOP
 output
 for my $j (@output) {
 chop $j;
 ($j =~ m/nid=$hex_pid/)print $hex_pid . \t . $pct . \t .
 $mem . \t .  $j . \n;
 }
 }

 close (STDOUT);

 close LOG;


 --end of script --

 Thanks.

   -Shanti


 On Thu, Sep 27, 2012 at 2:11 PM, Bill Miller 
 millebi.subscripti...@gmail.com wrote:

 I agree; we have reproducible instances where PermGen is not set to our
 requirements on the Tomcat startup parameters and it will cause a lockup
 every time. Do some JMX monitoring and you may discover a memory spike
 that's killing Tomcat.

 Bill
 -Original Message-
 From: Jeff MAURY [mailto:jeffma...@gmail.com]
 Sent: September-27-2012 2:01 PM
 To: Tomcat Users List
 Subject: Re: high CPU usage on tomcat 7

 This is probably due to out of memory, I have the same problem on my
 ubuntu
 ci machine Did you monitor your tomcat with jmx ?

 Jeff


 -
 To unsubscribe, e-mail: users

Re: high CPU usage on tomcat 7

2012-09-27 Thread Kirill Kireyev

  
  
Thanks for all the advice everyone! There is a possibility that the CPU
is caused by an app thread - I am looking into that possibility. Will let
you know when I find out more.

Thanks,
Kirill
  On 9/27/12 12:17 PM, Shanti Suresh wrote:


  Hi Kirill,

Like Mark, Bill and Jeff said, those threads are normal request-processing
threads.  I have included a script that might help with isolating high CPU
issues with Tomcat.

Also, I think it might be helpful to see how the Java heap is performing as
well.
Please bring up Jconsole and let it run over the week.  Inspect the graphs
for Memory, CPU and threads.  Since you say that high CPU occurs
intermittently several times during the week and clears itself, I wonder if
it is somehow related with the garbage collection options you are using for
the server.  Or it may be a code-related problem.

Things to look at may include:

(1) Are high CPU times related to Java heap reductions happening at the
same time?  == GC possibly needs tuning
(2) Are high CPU times related to increase in thread usage?  == possible
livelock in looping code?
(3) how many network connections come into the Tomcat server during
high-CPU times?Possible overload-related?

Here is the script.  I made a couple of small changes, for e.g., changing
the username.  But didn't test it after the change.  During high-CPU times,
invoke the script a few times, say 30 seconds apart.  And then compare the
thread-dumps.  I like to use TDA for thread-dump analysis of Tomcat
thread-dumps.

Mark, et al, please feel free to help me refine this script.  I would like
to have a script to catch STUCK threads too :-)  Let me know if anyone has
a script already.  Thanks.

--high_cpu_diagnostics.pl:-
#!/usr/bin/perl
#

use Cwd;

# Make a dated directory for capturing current diagnostics
my ($sec,$min,$hour,$mday,$mon,$year,
  $wday,$yday,$isdst) = localtime time;
$year += 1900;
$mon += 1;
my $pwd = cwd();
my $preview_diag_dir = "/tmp/Preview_Diag.$year-$mon-$mday-$hour:$min:$sec";
print "$preview_diag_dir\n";
mkdir $preview_diag_dir, 0755;
chdir($preview_diag_dir) or die "Can't chdir into $preview_diag_dir $!\n";

# Capture Preview thread dump
my $process_pattern = "preview";
my $preview_pid = `/usr/bin/pgrep -f $process_pattern`;
my $login = getpwuid($) ;
if (kill 0, $preview_pid){
#Possible to send a signal to the Preview Tomcat - either "webinf"
or "root"
my $count = kill 3, $preview_pid;
}else {
# Not possible to send a signal to the VCM - use "sudo"
system ("/usr/bin/sudo /bin/kill -3 $preview_pid");
}

# Capture Preview thread dump
system ("/usr/bin/jmap
-dump:format=b,file=$preview_diag_dir/preview_heapdump.hprof $preview_pid");

# Gather the top threads; keep around for reference on what other threads
are running
@top_cmd = ("/usr/bin/top",  "-H", "-n1", "-b");
@sort_cmd = ("/bin/sort", "-r", "-n", "-k", "9,9");
@sed_cmd = ("/bin/sed", "-n", "'8,$p'");
system("@top_cmd 1 top_all_threads.log");

# Get your tomcat user's threads, i.e. threads of user, "webinf"
system('/usr/bin/tail -n+6 top_all_threads.log | /bin/sort -r -n -k "9,9" |
/bin/grep webinf top_all_threads.log 1 top_user_webinf_threads.log');

# Get the thread dump
my @output=`/usr/bin/jstack -l ${preview_pid}`;
open (my $file, '', 'preview_threaddump.txt') or die "Could not open file:
$!";
print $file @output;
close $file;

open LOG, "top_user_webinf_threads.log" or die $!;
open (STDOUT, "| tee -ai top_cpu_preview_threads.log");
print "PID\tCPU\tMem\tJStack Info\n";
while ($l = LOG) {
chop $l;
$pid = $l;
$pid =~ s/webinf.*//g;
$pid =~ s/ *//g;
##  Hex PID is available in the Sun HotSpot Stack Trace */
$hex_pid = sprintf("%#x", $pid);
@values = split(/\s+/, $l);
$pct = $values[8];
$mem = $values[9];
# Debugger breakpoint:
$DB::single = 1;

# Find the Java thread that corresponds to the thread-id from the TOP output
for my $j (@output) {
chop $j;
($j =~ m/nid=$hex_pid/)print $hex_pid . "\t" . $pct . "\t" .
$mem . "\t" .  $j . "\n";
}
}

close (STDOUT);

close LOG;


--end of script --

Thanks.

  -Shanti


On Thu, Sep 27, 2012 at 2:11 PM, Bill Miller 
millebi.subscripti...@gmail.com wrote:


  
I agree; we have reproducible instances where PermGen is not set to our
requirements on the Tomcat startup parameters and it will cause a "lockup"
every time. Do some JMX monitoring and you may discover a m

Re: High CPU usage in Tomcat 7

2012-06-27 Thread Christopher Schultz

James,

On 6/20/12 12:27 PM, James Lampert wrote:
 We just had a report of extremely high CPU usage from the Tomcat
 job on one of our customer installations. A WRKACTJOB screen shot
 from before we forcibly shut Tomcat down and restarted it shows:
 
 Subsystem/Job   Type  CPU %  FunctionStatus CATALINA
 BCH  .0  CMD-QSH TIMW QP0ZSPWT  BCI   112.2
 JVM-org.apache  TIMW (QP0ZSPWT being the system-generated job
 that's doing the actual work for the CATALINA job.)
 
 Of particular interest is that, at least at the moment the screen
 shot was taken, the QP0ZSPWT job was taking up what appears to be
 more than an entire processor, even though it's in a time-wait
 state.
 
 Based on a Google search on tomcat 7 high cpu usage, I'm
 suspecting a previously unknown tightloop in our application (which
 was what I suspected even before I did the Google search). The
 pages I looked at also said something about profiling and thread
 dumps, to find the offending thread, but since the job has been
 terminated and restarted, and is not currently malfunctioning, I
 wouldn't be able to do so even if I knew how (which at present I
 don't).
 
 I've passed on the log files generated by our application itself
 to someone better equipped to deal with them than I, and I've asked
 the Java-400 List at Midrange.com about AS/400-specific steps to
 track down the offending thread if the problem is observed again,
 but I would also value any insights this list might offer.

The advice you got about thread dumps was spot-on: get yourself a
thread dump [1] whenever you think your process is using too much CPU
time. Better yet, take a few of them and compare. If you do have a
tight loop, you'll probably be able to see it because one thread will
be stuck in the same method for a while.
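A minimal sketch of that, assuming jstack can attach to the JVM in question (where jstack is 
not available, "kill -3 <pid>" makes the JVM print the same dump to its stdout, e.g. 
catalina.out); PID is a placeholder:

for i in 1 2 3; do
  jstack -l "$PID" > "td.$i.txt"
  sleep 10
done
# Stack frames that recur across all dumps hint at where a looping thread is spending
# its time; this is only a rough first pass before reading the dumps by hand.
grep -h "at " td.*.txt | sort | uniq -c | sort -rn | head -20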

Taking a thread dump *should* be easy (not sure on AS/400) and it
doesn't take a long time to get one. That means you don't disturb
current users like taking a heap dump would (heap dumps in my
experience tend to pause the entire JVM). I suppose you're about to
take-down the JVM so user inconvenience isn't a huge deal.

You might also consider that high CPU usage isn't necessarily bad,
unless it's impacting the operation of one or more services. Assuming
that your suspected-tight-loop finally completes, it might be better
to just let it finish rather than taking-down the JVM entirely.

- -chris

[1]
http://wiki.apache.org/tomcat/HowTo#How_do_I_obtain_a_thread_dump_of_my_running_webapp_.3F

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



High CPU usage in Tomcat 7

2012-06-20 Thread James Lampert
We just had a report of extremely high CPU usage from the Tomcat job on 
one of our customer installations. A WRKACTJOB screen shot from before 
we forcibly shut Tomcat down and restarted it shows:


Subsystem/Job   Type   CPU %   Function         Status
  CATALINA      BCH       .0   CMD-QSH          TIMW
  QP0ZSPWT      BCI     112.2  JVM-org.apache   TIMW
(QP0ZSPWT being the system-generated job that's doing the actual work 
for the CATALINA job.)


Of particular interest is that, at least at the moment the screen shot 
was taken, the QP0ZSPWT job was taking up what appears to be more than 
an entire processor, even though it's in a time-wait state.


Based on a Google search on "tomcat 7 high cpu usage", I'm suspecting a 
previously unknown tight loop in our application (which was what I 
suspected even before I did the Google search). The pages I looked at 
also said something about profiling and thread dumps, to find the 
offending thread, but since the job has been terminated and restarted, 
and is not currently malfunctioning, I wouldn't be able to do so even if 
I knew how (which at present I don't).


I've passed on the log files generated by our application itself to 
someone better equipped to deal with them than I, and I've asked the 
Java-400 List at Midrange.com about AS/400-specific steps to track down 
the offending thread if the problem is observed again, but I would also 
value any insights this list might offer.


--
JHHL

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org