Hi Marcel,
I've tried to find out more about my issue but didn't succeed.
There is a small application which demonstrates the problem:
http://217.11.254.42/download/index-problem.zip
Steps to reproduce:
Prerequisites:
1. The test is resource consuming - it needs a computer with at least 512 MB
of RAM, better 1 GB.
2. The package contains all needed libs and also the source code (10 kB) - I
was not able to create a smaller example, sorry for that.
The test can be run using the script "run-test" or "run-test.bat" - the path
to Java is set at the beginning of each file.
The whole test tries to create 10000 nodes in batches:
- Phase 1: Create 20 nodes
- Phase 2: Query for the last 20 created nodes
The problem is that on faster machines the test fails around the
3000th-5000th document - it is not possible to locate the file. The query
against the repository is based on the attribute values.
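For clarity, the loop in the attached test is roughly the following (a
simplified sketch only, not the actual code from the zip; the "doc" node
name and the "docId" property are placeholders, and the Session is assumed
to be opened elsewhere):

import javax.jcr.Node;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class BatchTestSketch {

    // Phase 1: create a batch of 20 nodes, each with a unique "docId" attribute
    static void createBatch(Session session, int batchStart) throws Exception {
        Node root = session.getRootNode();
        for (int i = 0; i < 20; i++) {
            Node n = root.addNode("doc" + (batchStart + i), "nt:unstructured");
            n.setProperty("docId", batchStart + i);
        }
        session.save();
    }

    // Phase 2: query the 20 nodes just created back by their attribute value
    static void queryBatch(Session session, int batchStart) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        for (int i = 0; i < 20; i++) {
            Query q = qm.createQuery(
                    "//element(*, nt:unstructured)[@docId = " + (batchStart + i) + "]",
                    Query.XPATH);
            QueryResult result = q.execute();
            if (!result.getNodes().hasNext()) {
                // this is the failure I see around the 3000th-5000th document
                System.err.println("docId " + (batchStart + i) + " not found");
            }
        }
    }

    // run both phases until 10000 nodes have been created
    public static void run(Session session) throws Exception {
        for (int batchStart = 0; batchStart < 10000; batchStart += 20) {
            createBatch(session, batchStart);
            queryBatch(session, batchStart);
        }
    }
}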
Maybe the problem is in my code... I'd really appreciate it if you could
take a look at it.
Thanks, Petr
Marcel Reutegger wrote:
Hi Petr,
Can you please provide the source code for your test case so that we can
reproduce the behaviour? Thanks
regards
marcel
Petr Pytelka wrote:
Hi all,
I'm testing the performance of Jackrabbit and have found one issue.
Test case:
1. Insert N items
2. Search for each of the inserted items (using a Query)
3. Drop the inserted item
When N is lower than approx. 1000 items, everything works fine. But when N
is higher than approx. 1000 items, Jackrabbit (Lucene) somehow "loses" the
indexes and I'm not able to find the required item (usually I'm able to find
the first 300 items).
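To be concrete, the test loop looks roughly like this (a simplified sketch
only, not my real code; the "item" node name and the "itemId" property are
placeholders, the Session is assumed to be opened elsewhere, and the three
steps are sketched per item here for brevity):

import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

public class InsertSearchDropSketch {

    // 1. insert N items, 2. search for each one via a query, 3. drop it again
    static void run(Session session, int n) throws Exception {
        Node root = session.getRootNode();
        QueryManager qm = session.getWorkspace().getQueryManager();
        for (int i = 0; i < n; i++) {
            // 1. insert
            Node item = root.addNode("item" + i, "nt:unstructured");
            item.setProperty("itemId", "item-" + i);
            session.save();

            // 2. search for the item that was just inserted
            Query q = qm.createQuery(
                    "//element(*, nt:unstructured)[@itemId = 'item-" + i + "']",
                    Query.XPATH);
            NodeIterator hits = q.execute().getNodes();
            if (!hits.hasNext()) {
                System.err.println("item-" + i + " was not found by the query");
            }

            // 3. drop
            item.remove();
            session.save();
        }
    }
}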
If I delete the index directory and re-run the search, Jackrabbit builds new
indexes during startup and I'm able to find all items.
Lucene: version 1.4.3
DB-backend: Derby
Any idea where the problem could be?
Thanks a lot, Petr Pytelka