What is your deployment model? Is this a multiprocess deployment? What database are you using?

There are various load tests for each database, which do far more than 7000 documents. I am concerned that you are seeing this because of some kind of cross-process synchronization issue, which might occur (for instance) if you are using a multiprocess environment with a single-process properties.xml file.

Karl

On Tue, Oct 9, 2012 at 9:12 AM, Maciej Liżewski <[email protected]> wrote:
> Ok... it is not a getMaxDocumentRequest issue, because I was able to
> reproduce it even with getMaxDocumentRequest=1. It seems to occur when
> indexing large sets of documents (in my case ~7000). It also happened
> once for the CIFS connector (with a Samba share)...
>
> The result looks like this:
>
> Name    Status   Start Time                     End Time  Documents  Active  Processed
> Mantis  Running  Tue Oct 09 13:56:59 CEST 2012            5689       1600    4400
>
> The "active" document count has been stuck at 1600 for about an hour now,
> but there is no server load and nothing changes... it seems to be hanging
> somewhere inside the ManifoldCF core.
>
> Also - when hitting Abort, nothing happens (the job remains in the
> "aborting" state)...
>
> The problem is that it happens irregularly (sometimes 10 documents,
> sometimes 1600, and sometimes all documents are indexed). I tried to
> reproduce it locally, but on the first pass everything went fine...
> really strange...
>
>
> 2012/10/3 Karl Wright <[email protected]>:
>> Hi Maciej,
>>
>> It sounds like your loop condition must be somehow incorrect. You may
>> not receive the full number of documents specified by
>> getMaxDocumentRequest(), but rather a number less than that.
>>
>> We have a number of connectors that use document batches > 1, e.g. the
>> LiveLink connector, so this is likely not the problem.
>>
>> I'd recommend adding System.out.println() diagnostics to see exactly
>> what is happening inside both getDocumentVersions() and
>> processDocuments().
>>
>> Karl
>>
>>
>> On Wed, Oct 3, 2012 at 4:30 PM, Maciej Liżewski
>> <[email protected]> wrote:
>>> Hi,
>>>
>>> I have noticed a strange problem with a connector (a new one I am
>>> developing right now) and the getMaxDocumentRequest parameter.
>>> When it returns 1 (the default) everything seems fine, but when I set
>>> it to anything higher (5, 10, 20) the indexing job does not end but
>>> hangs when there are only getMaxDocumentRequest documents left (when
>>> it should process 5 documents in a row, 5 documents stay "active").
>>> All the document-related functions seem to be written correctly (they
>>> all iterate through the passed arrays), and there are no exceptions
>>> thrown (at least I do not see any in the console).
>>>
>>> What can be wrong, and what should I look at? Any ideas?
>>>
>>> By the way - the new connector is for the Mantis bug tracker, to index issues.
>>>
>>> TIA
>>> Redguy
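[Editor's note] Karl's quoted advice about the loop condition can be sketched as follows. This is a minimal, self-contained illustration, not the real ManifoldCF connector API: the class and method here are hypothetical, and the actual processDocuments() signature involves framework types such as IProcessActivity. The point is only that a batch may hold fewer identifiers than getMaxDocumentRequest(), so loops must be bounded by the array actually passed in.

```java
// Sketch of Karl's advice (hypothetical class, not ManifoldCF API):
// a batch handed to the connector may contain FEWER identifiers than
// getMaxDocumentRequest(), so every loop must be bounded by the length
// of the array actually received, never by the configured batch size.
public class BatchLoopSketch {

    // Analogous to the connector's getMaxDocumentRequest() setting.
    static final int MAX_DOCUMENT_REQUEST = 5;

    // Correct pattern: iterate over exactly the identifiers received.
    static int processActualBatch(String[] documentIdentifiers) {
        int processed = 0;
        for (int i = 0; i < documentIdentifiers.length; i++) {
            // The kind of System.out.println() diagnostic Karl suggests.
            System.out.println("processing " + documentIdentifiers[i]);
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        // A short batch: only 3 documents arrive although up to 5 were requested.
        // A loop written as "for (i = 0; i < MAX_DOCUMENT_REQUEST; i++)" would
        // read past the end of this array; bounding by length handles it.
        String[] shortBatch = {"doc1", "doc2", "doc3"};
        int n = processActualBatch(shortBatch);
        System.out.println("processed " + n + " of a possible " + MAX_DOCUMENT_REQUEST);
    }
}
```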

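[Editor's note] Karl's opening questions point at the deployment model: a multiprocess deployment must use a properties.xml configured for cross-process synchronization. As a rough sketch only, a file-based multiprocess setup sets a shared synchronization directory; the exact property name below is recalled from the ManifoldCF deployment documentation and should be verified against your version:

```xml
<!-- Sketch of a multiprocess properties.xml fragment (verify property
     names against the ManifoldCF deployment docs for your release). -->
<configuration>
  <!-- Shared directory used for file-based cross-process locking;
     a single-process properties.xml omits this, which can cause the
     kind of hang Karl describes when multiple processes are running. -->
  <property name="org.apache.manifoldcf.synchdirectory" value="/var/manifoldcf/synch"/>
</configuration>
```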