Hello

Josip Almasi wrote:
> 
> rafael.munoz wrote:
>> Hello
>> 
>> I have been doing some stress and performance tests on James 2.3.1 and I
>> think I have found a bottleneck on the James spool storing code (I am
>> using 
>> the filesystem spool). 
>> 
>> I have configured James to behave as a simple SMTPServer and do almost
>> nothing more than receiving mails and storing it in the spool ("<mailet
>> match="All" class="Null"/>"). 
> 
> Eh, Null?
> 

I was only measuring James's input, so I was simply destroying every incoming
message after retrieving it from the spool. 'Null' refers to the NullMailet
(http://james.apache.org/mailet/standard/mailet-report.html#Null).


Josip Almasi wrote:
> 
> ...
>> So, summarizing:
> ...
>> 2. Anyone knows why the FileOutputStream object creation takes more and
>> more
>> when James is stress out? The underlying OS is not reporting any problems
>> with the filesystem or the file descriptors.
> 
> Most filesystems store directory entries as lists. Lists are read each 
> time from beginning so to access Nth entry you'll access N-1 entries, 
> IOW N*(N-1)/2 complexity.
> In fact, AFAIK only FS that doesn't do that is ReiserFS, and I doubt you 
> can get that on solaris, so better switch to database storage.
> Databases use balanced trees meaning IIRC max N*log(N) avg log(N) 
> complexity.
> 
> 

Umm... interesting, I didn't know that. I will check the number of entries
in the spool directory when I start to see huge FileOutputStream creation
times (which, as you imply, are almost certainly linked to slow file
creation on the filesystem). As for the database suggestion, I'm afraid it
is not an option in our application :(.
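As a rough way to check whether directory size is what drives the creation
cost, one can time `new FileOutputStream(...)` as files accumulate in a
single directory. This is a standalone benchmark sketch, not James code; the
scratch directory name and file count are arbitrary choices:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class SpoolDirBench {
    public static void main(String[] args) throws IOException {
        // Hypothetical scratch directory; adjust the path as needed.
        File dir = new File("spool-bench");
        dir.mkdirs();

        int total = 10_000;   // number of files to create
        int batch = 2_500;    // report average every 'batch' creates
        long batchNanos = 0;

        for (int i = 1; i <= total; i++) {
            long t0 = System.nanoTime();
            // Creating the file forces a directory-entry insert,
            // which is the operation suspected of degrading.
            FileOutputStream out = new FileOutputStream(new File(dir, "msg-" + i));
            out.close();
            batchNanos += System.nanoTime() - t0;

            if (i % batch == 0) {
                System.out.printf("files=%d  avg create time=%d us%n",
                        i, batchNanos / batch / 1_000);
                batchNanos = 0;
            }
        }
    }
}
```

If the average per-file creation time grows as the file count increases, the
filesystem's directory data structure (linear list vs. hashed/tree index) is
the likely culprit, matching the behavior described above.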

Thanks for your answer!

regards,
Rafael Munoz

-- 
View this message in context: 
http://www.nabble.com/Bottleneck-on-James-spool-storing-code-tp22719950p22740379.html
Sent from the James - Users mailing list archive at Nabble.com.