Correction: the patch leaves the RemoteManager importing and declaring a
TimeoutWatchdogFactory that is not included in the list of files.
However, the RemoteManager does not actually use it, so the import and
declaration can be commented out.
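
For reference, the workaround is just two commented-out lines in
RemoteManager.java; the package path and field name below are only my
guess for illustration, not copied from the patch:

    // TimeoutWatchdogFactory is not in the checked-in file list and is
    // unused here, so both the import and the declaration stay commented out.
    // import org.apache.james.util.watchdog.TimeoutWatchdogFactory;  // hypothetical path
    // private TimeoutWatchdogFactory theWatchdogFactory;             // hypothetical field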

Steve

> -----Original Message-----
> From: Steve Short 
> Sent: Monday, October 21, 2002 11:45 AM
> To: James Developers List
> Subject: RE: Latest diffs, source code
> 
> 
> Peter,
> 
> The class TimeoutWatchdogFactory is missing from this list.
> 
> Regards
> Steve
> 
> > -----Original Message-----
> > From: Peter M. Goldstein [mailto:peter_m_goldstein@yahoo.com]
> > Sent: Saturday, October 19, 2002 2:01 PM
> > To: 'James Developers List'
> > Subject: Latest diffs, source code
> > 
> > 
> > 
> > All,
> > 
> > Attached is a set of source files and diffs that should allow
> > you to reproduce the code base that Noel and I have been 
> > using for our latest testing.
> > 
> > The test ran for over seven hours, with a rate that started at
> > roughly 3000 messages/minute and stabilized at roughly 2500 
> > messages/minute.  We don't have a good explanation for this drop, 
> > but the system was under a great deal of stress 
> > (logging, TCP/IP, etc.) that may account for the 
> > gradual degradation.
> > 
> > Total number of threads was stable at 112 for the first hour
> > or two, but grew to 162 threads after that and stabilized.  
> > Analysis of the logs (~750 MB of them) confirmed that there 
> > was no thread leak (all worker threads were reused as 
> > expected over the course of the test run).  My suspicion is 
> > that the growth in the thread pool size was a result of a 
> > thread scheduler statistical anomaly (disposed but not exited 
> > connection handler/watchdog threads) and is not an actual 
> > cause for concern. Unfortunately this is difficult to confirm 
> > or refute, but it matches the observed behavior.
> > 
> > Java heap size grew slowly over the course of the test, from
> > about 4.5 MB to about 5 MB.  We don't see this as a real cause 
> > for concern, considering the number of open resources, etc., 
> > that the product was managing.
> > 
> > The code that dispatches work to threads from the thread pool is
> > now wrapped in try/catch/finally blocks which ensure that a 
> > temporarily empty thread pool is not treated as a 
> > fatal error.  If no thread is available, the connection is 
> > simply refused.  Obviously, if the thread pool is too 
> > small (i.e., no threads are ever available to service 
> > connections), the server will be unresponsive. But the current 
> > configuration has proved more than adequate for our test.
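> > 
> > As an illustration of the pattern only (this is not the actual
> > James code; the class names and the java.util.concurrent types
> > below are modern stand-ins chosen for a self-contained example),
> > the dispatch looks roughly like this:
> > 
> >     import java.net.ServerSocket;
> >     import java.net.Socket;
> >     import java.util.concurrent.ExecutorService;
> >     import java.util.concurrent.RejectedExecutionException;
> >     import java.util.concurrent.SynchronousQueue;
> >     import java.util.concurrent.ThreadPoolExecutor;
> >     import java.util.concurrent.TimeUnit;
> > 
> >     public class RefuseWhenPoolEmpty {
> >         public static void main(String[] args) throws Exception {
> >             // Bounded pool with no request queue: execute() throws
> >             // RejectedExecutionException when every worker is busy.
> >             ExecutorService pool = new ThreadPoolExecutor(
> >                     4, 4, 0L, TimeUnit.MILLISECONDS,
> >                     new SynchronousQueue<Runnable>());
> >             try (ServerSocket server = new ServerSocket(2525)) {
> >                 while (true) {
> >                     Socket socket = server.accept();
> >                     try {
> >                         // Hand the connection to a pooled worker thread.
> >                         pool.execute(() -> handle(socket));
> >                     } catch (RejectedExecutionException e) {
> >                         // Pool temporarily empty: refuse this connection
> >                         // rather than treating it as a fatal error.
> >                         try { socket.close(); } catch (Exception ignored) {}
> >                     }
> >                 }
> >             } finally {
> >                 pool.shutdown();
> >             }
> >         }
> > 
> >         private static void handle(Socket socket) {
> >             // Placeholder for the real SMTP handler logic.
> >             try { socket.close(); } catch (Exception ignored) {}
> >         }
> >     }
> > 
> > The essential part is only the try/catch around the hand-off: a
> > refused connection costs one accept/close rather than bringing
> > the service down.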
> > 
> > Noel ended the test when he saw a notable performance
> > degradation (a drop to <1000 messages/minute).  He later 
> > discovered that some other work was being done on the same 
> > server at that time, which may well have been responsible 
> > for the drop.  So we don't actually know what caused this 
> > issue, and it may not be an issue at all.  If Noel and I can 
> > find the time, we will run another test to settle this one 
> > way or the other.
> > 
> > So here's the summary:
> > 
> > i) James' SMTP handler can process 2500-3000 messages/minute
> > consistently on a PII 400 MHz Celeron running Linux.
> > 
> > ii) Over the course of seven hours, over 1,000,000 messages
> > were pushed through the James SMTP service.
> > 
> > iii) There was no sign of the catastrophic memory leaks that
> > plagued earlier versions of James.
> > 
> > iv) There may be some outstanding "glitches" but they are
> > orders of magnitude less severe than the previous issues.
> > 
> > --Peter
> > 
> > 
> > 
> > 
> 
