Hello,

If I boil down what all these logs point me to,
these are the lines that remain:

On 21.08.2007 at 05:56, Jayan Chirayath Kurian wrote:

> (1) The Catalina.log shows the following
>
> INFO: The Apache Tomcat Native library which allows optimal  
> performance in production environments was not found on the  
> java.library.path:
Good hint. Go to

<http://tomcat.apache.org/tomcat-6.0-doc/apr.html#Windows>
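On Windows this boils down to making tcnative-1.dll visible to the JVM. A rough sketch, not a definitive recipe (the path is just an example based on your "C:\Tomcat 6.0" install):

```shell
rem Either copy tcnative-1.dll into a directory that is already on the
rem java.library.path (the Tomcat bin directory usually works), or point
rem the JVM at it explicitly when starting Tomcat:
set CATALINA_OPTS=-Djava.library.path="C:\Tomcat 6.0\bin"
```

The warning is harmless, though: without the native library Tomcat just falls back to the pure-Java connectors.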

> (2) localhost.log shows the following
>
>              at  
> org.dspace.app.webui.servlet.CommunityListServlet.doDSGet 
> (CommunityListServlet.java:108)
This is the end of the stack trace, where the page gets displayed.
Is there an extraordinarily large list of (nested) communities
to process? My guess is that this page just happened to be the
one being rendered when memory ran out while processing all
the requests.

> Aug 20, 2007 4:25:30 PM  
> org.apache.catalina.core.StandardWrapperValve invoke
>
> WARNING: Servlet.service() for servlet submit threw exception
>
> java.io.IOException: Lock obtain timed out: [EMAIL PROTECTED]:\Tomcat 
> 6.0\temp 
> \lucene-e575723f586b663d697cde458e2fa736-write.lock
>
>             at org.apache.lucene.store.Lock.obtain(Lock.java:56)
>
>             at org.apache.lucene.index.IndexWriter.<init> 
> (IndexWriter.java:254)
...
>             at org.dspace.search.DSIndexer.indexContent 
> (DSIndexer.java:92)
>
>             at org.dspace.content.InstallItem.installItem 
> (InstallItem.java:149)
...
>             at  
> org.dspace.app.webui.servlet.SubmitServlet.processLicense 
> (SubmitServlet.java:1618)

This one is interesting. It reveals something about how
the ingestion process in DSpace works: submitting a new
item in the processLicense step triggers the indexing.
While this keeps the index always up to date, it incurs
an obvious performance penalty. But I bet your problem
is not a performance problem. To reiterate:

 > Lock obtain timed out: [EMAIL PROTECTED]:\Tomcat 6.0\temp\lucene- 
e575723f586b663d697cde458e2fa736-write.lock

Why can't it obtain a write lock at that moment? I don't know
about the inner workings of Lucene, but the file name looks
as if it were created per thread. So this is probably not a
concurrency issue caused by submitting two items with the same
name or metadata at the same time, is it? How do you
generate test data? Do you make sure that the data to be
submitted does not already exist in your test instance? Do
you use random values generated on the fly, or a file with
preconfigured test data? Do these files contain different
data for each test client, or does the controller instance
take care to provide each test client with different data?
If these requirements are not met, you might run into
a problem during testing that never happens in real life.
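If the test data is the culprit, generating it on the fly is easy to make collision-free. A minimal sketch of the idea (the method and field names are invented for illustration, this is not the DSpace submission API): derive each submission's metadata from a random UUID, so no two test clients can ever submit the same title.

```java
import java.util.UUID;

// Sketch: give every simulated submitter guaranteed-unique metadata, so two
// test clients can never submit the same title at the same time.
// (The names here are made up for illustration, not DSpace's API.)
public class TestDataGenerator {
    public static String uniqueTitle(int clientId) {
        // UUID.randomUUID() is unique for all practical purposes, and the
        // client id makes the origin of each test record obvious in the logs.
        return "load-test-item-" + clientId + "-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        String a = uniqueTitle(1);
        String b = uniqueTitle(1);
        System.out.println(a);
        // Two calls never collide, even for the same client:
        System.out.println(a.equals(b));
    }
}
```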

But OK, as I mentioned, I don't think that is the point.
My guess is that there was too much disk activity at that moment.
Is there a limit on open files in NTFS? Was that limit
reached during your test? My bet is: yes, there is a limit,
and no, it was not reached. The disk is just too slow.
Is it the same hardware that will be used in production?
Is this an old disk? Small old disks are much slower than
large disks; as far as I can tell, this difference matters
more than the one between SCSI and ATA. So if you
thought that a small disk would be sufficient for testing,
this might have something to do with it. The Task Manager
should also give you more information about the time spent
waiting for I/O.

There might also be interference with memory swapping
in case you set the -Xmx value too high.
Try lowering it by only a small amount, let's say from
-Xmx1000m to -Xmx800m, and find out whether this makes
things better. Restart the whole machine before running
another test to get a baseline for memory swapping.
You might find that swapping only starts after several
days of uptime, and rebooting the machine every two days
or so automatically might be a dirty but effective
solution.
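The heap setting usually goes into the environment that starts Tomcat. A sketch for your Windows setup (CATALINA_OPTS is the standard Tomcat variable; the value is just the example from above):

```shell
rem In the batch file (or shell) that launches Tomcat, before calling
rem catalina: lower the heap in small steps and compare test runs.
set CATALINA_OPTS=-Xmx800m
```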

Bye, Christian


_______________________________________________
DSpace-tech mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/dspace-tech
