Re: problem in GC tuning

2006-05-30 Thread charly
But the impact of GC logging on performance should be low outside of the
GC cycles, shouldn't it?

Karl-heinz


- Original Message - 
From: Michael Echerer [EMAIL PROTECTED]

To: Tomcat Users List users@tomcat.apache.org
Sent: Friday, May 26, 2006 4:44 PM
Subject: Re: problem in GC tuning



> 2)  Does writing logs at higher loads have some impact on
> performance?


You should definitely disable GC logging in production if you no
longer need it for diagnosis.

Cheers,
Michael









-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: problem in GC tuning

2006-05-28 Thread Prashant kumar
I require that much memory to avoid GC.
Is there any other way around it?



-Original Message-
From: Michael Echerer [mailto:[EMAIL PROTECTED] 
Sent: Friday, May 26, 2006 8:14 PM
To: Tomcat Users List
Subject: Re: problem in GC tuning

Prashant kumar wrote:
> Hi All,
>
> 1)  Is there any means by which GC takes less CPU? I mean, can it
> take a longer time to complete one cycle (as memory is not a
> constraint)?

You've set quite a lot of heap (3400m). Do you really need that much
memory? The more you set, the more the GC needs to scan. If it's
unused, however, try reducing -Xmx and -Xms to values that are
sufficient for your app.
Then try to size the generations so that smaller chunks are scanned,
and check the survivor ratios.

Give the incremental GC a try. It might get worse, but it will
probably work.

http://java.sun.com/docs/hotspot/gc1.4.2
  
> 2)  Does writing logs at higher loads have some impact on
> performance?
  
You should definitely disable GC logging in production if you no
longer need it for diagnosis.

Cheers,
Michael



-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: problem in GC tuning

2006-05-26 Thread Michael Echerer
Prashant kumar wrote:
> Hi All,
>
> 1)  Is there any means by which GC takes less CPU? I mean, can it
> take a longer time to complete one cycle (as memory is not a
> constraint)?

You've set quite a lot of heap (3400m). Do you really need that much
memory? The more you set, the more the GC needs to scan. If it's
unused, however, try reducing -Xmx and -Xms to values that are
sufficient for your app.
Then try to size the generations so that smaller chunks are scanned,
and check the survivor ratios.

Give the incremental GC a try. It might get worse, but it will
probably work.

http://java.sun.com/docs/hotspot/gc1.4.2
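
For illustration, the smaller-heap and generation-sizing advice above
might look like this in a startup script (the values here are purely
hypothetical; measure and tune against your own app):

```shell
# Hypothetical sizing example: a smaller heap plus an explicit young
# generation and survivor ratio, so each collection scans less memory.
export MEM_OPTS="-Xmx1024m -Xms1024m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:SurvivorRatio=8"
export GC_OPTS="${MEM_OPTS} -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
```

This follows the same pattern as the poster's own GC_OPTS; whether
incremental CMS actually helps depends on the workload.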
  
> 2)  Does writing logs at higher loads have some impact on
> performance?
  
You should definitely disable GC logging in production if you no
longer need it for diagnosis.

Cheers,
Michael



-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



problem in GC tuning

2006-05-25 Thread Prashant kumar
Hi All,
 
I'm working on a SIP application server and want to increase its
performance. At max load, the CPU of my server machine is around 45-50%
free. The issue I am facing is that for a short period of time (around
7-10 sec), free CPU goes below 25%. My server has a constraint that it
must have at least 25% CPU free at all times, so when the CPU goes
below 25%, my server starts dropping requests. This limits my server's
performance.
 
The reason the CPU goes below 25% is the garbage collector: whenever a
GC occurs, free CPU drops below 25%.
 
My queries are,
 
 1)  Is there any means by which GC takes less CPU? I mean, can it
take a longer time to complete one cycle (as memory is not a
constraint)?
 
2)  Does writing logs at higher loads have some impact on
performance?
 
3)  As memory has not been a constraint so far, are there options
that can exploit this fact to improve performance?
 
 
I'm using the following garbage collector and heap options in my
application server:
 
export MEM_OPTS="-Xmx3400m -Xms3400m -XX:NewSize=194m -XX:MaxNewSize=194m -XX:+UseTLAB"
 
export GC_OPTS="${MEM_OPTS} -XX:+UseParNewGC -XX:+UseConcMarkSweepGC"
export GC_OPTS="${GC_OPTS} -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing"
export GC_OPTS="${GC_OPTS} -Xloggc:logFileName -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
 
The typical GC output looks like:
 
7760.617: [CMS-concurrent-mark-start]
7765.387: [GC 7765.387: [ParNew: 198272K->0K(198464K), 0.1787951 secs]
996415K->812092K(3481408K) icms_dc=10 , 0.1790065 secs]
7774.972: [GC 7774.972: [ParNew: 198272K->0K(198464K), 0.1910697 secs]
1010364K->827429K(3481408K) icms_dc=10 , 0.1913241 secs]
7784.586: [GC 7784.586: [ParNew: 198272K->0K(198464K), 0.1910507 secs]
1025701K->842794K(3481408K) icms_dc=10 , 0.1912954 secs]
7792.251: [GC 7792.251: [ParNew: 198272K->0K(198464K), 0.1595620 secs]
1041066K->854987K(3481408K) icms_dc=10 , 0.1597866 secs]
7801.843: [GC 7801.843: [ParNew: 198272K->0K(198464K), 0.195 secs]
1053259K->870350K(3481408K) icms_dc=10 , 0.1902646 secs]
7806.146: [CMS-concurrent-mark: 5.022/45.529 secs]
7806.147: [CMS-concurrent-preclean-start]
7811.467: [GC 7811.467: [ParNew: 198272K->0K(198464K), 0.1943446 secs]
1068622K->885726K(3481408K) icms_dc=10 , 0.1945964 secs]
7820.015: [GC 7820.015: [ParNew: 198272K->0K(198464K), 0.1667043 secs]
1083998K->898767K(3481408K) icms_dc=10 , 0.1669592 secs]
7828.807: [GC 7828.807: [ParNew: 198272K->0K(198464K), 0.1982581 secs]
1097039K->914071K(3481408K) icms_dc=10 , 0.1984625 secs]
7838.324: [GC 7838.324: [ParNew: 198272K->0K(198464K), 0.2005505 secs]
1112343K->929384K(3481408K) icms_dc=10 , 0.2008112 secs]
7842.719: [CMS-concurrent-preclean: 4.244/36.572 secs]
7842.719: [GC 7843.155: [weak refs processing, 0.0032912 secs] [1
CMS-remark: 929384K(3282944K)] 1021754K(3481408K), 0.4391301 secs]
7843.159: [CMS-concurrent-sweep-start]
7847.798: [GC 7847.798: [ParNew: 198272K->0K(198464K), 0.1952376 secs]
1113089K->929939K(3481408K) icms_dc=10 , 0.1954940 secs]
7855.450: [GC 7855.450: [ParNew: 198272K->0K(198464K), 0.1753630 secs]
1108874K->922848K(3481408K) icms_dc=10 , 0.1756148 secs]
7865.105: [GC 7865.105: [ParNew: 198272K->0K(198464K), 0.2015532 secs]
1092061K->909155K(3481408K) icms_dc=10 , 0.2018162 secs]
7874.678: [GC 7874.678: [ParNew: 198272K->0K(198464K), 0.1859961 secs]
1058045K->875118K(3481408K) icms_dc=10 , 0.1862536 secs]
7883.184: [GC 7883.184: [ParNew: 198272K->0K(198464K), 0.1593874 secs]
1003006K->818082K(3481408K) icms_dc=10 , 0.1596041 secs]
7891.866: [GC 7891.866: [ParNew: 198272K->0K(198464K), 0.1977762 secs]
963983K->781405K(3481408K) icms_dc=10 , 0.1979563 secs]
7896.965: [CMS-concurrent-sweep: 5.086/53.806 secs]
7896.965: [CMS-concurrent-reset-start]
7901.469: [GC 7901.469: [ParNew: 198272K->0K(198464K), 0.1865064 secs]
924647K->741740K(3481408K) icms_dc=10 , 0.1867165 secs]
7905.936: [CMS-concurrent-reset: 0.356/8.971 secs]
7911.426: [GC 7911.426: [ParNew: 198272K->0K(198464K), 0.1802738 secs]
940012K->757357K(3481408K) icms_dc=10 , 0.1805271 secs]
7913.587: [GC [1 CMS-initial-mark: 757357K(3282944K)] 846861K(3481408K),
0.2955865 secs]
7913.883: [CMS-concurrent-mark-start]
7918.611: [GC 7918.611: [ParNew: 198272K->0K(198464K), 0.1589979 secs]
955629K->768902K(3481408K) icms_dc=10 , 0.1592427 secs]
7928.066: [GC 7928.066: [ParNew: 198272K->0K(198464K), 0.1978113 secs]
967174K->784280K(3481408K) icms_dc=10 , 0.1980187 secs]
7937.656: [GC 7937.656: [ParNew: 198272K->0K(198464K), 0.1959725 secs]
982552K->799658K(3481408K) icms_dc=10 , 0.1962251 secs]
7945.307: [GC 7945.307: [ParNew: 198272K->0K(198464K), 0.1633872 secs]
997930K->811865K(3481408K) icms_dc=10 , 0.1636053 secs]
7954.925: [GC 7954.925: [ParNew: 198272K->0K(198464K), 0.1972658 secs]
1010137K->827248K(3481408K) icms_dc=10 , 0.1974907 secs]
7964.609: [GC 7964.610: [ParNew: 198272K->0K(198464K), 0.2009350 secs]
1025520K->842638K(3481408K) icms_dc=10 , 0.2011559 secs]
7968.873: [CMS-concurrent-mark: 5.773/54.990 secs]
7968.873: [CMS-concurrent-preclean-start]
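
A quick way to get a feel for the stop-the-world cost in a log like
this is to pull out the ParNew pause times with a small script (a
sketch only; it assumes the log format shown above, and "gc.log" is a
placeholder for the file named by -Xloggc:logFileName):

```shell
# Summarize ParNew stop-the-world pauses from a -Xloggc file.
# "gc.log" is a placeholder for the file named by -Xloggc:logFileName.
awk -F', ' '/ParNew:/ {
    pause = $2                 # second comma-separated field: "0.1787951 secs] ..."
    sub(/ secs.*/, "", pause)  # keep just the number
    pause += 0                 # force numeric comparison
    total += pause; n++
    if (pause > max) max = pause
}
END { printf "pauses=%d total=%.3fs max=%.4fs\n", n, total, max }' gc.log
```

On an excerpt like the one above, the individual ParNew pauses are all
well under a quarter second, so a sustained multi-second CPU dip is
more likely to line up with the long concurrent CMS phases than with
any single young-generation pause.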