Hi Erik,

I would say it helps when the partitions of your index sit on different
hard drives and those drives are not part of the same RAID set. For
example, when your index is spread over 4 drives on the same (or
different) SCSI controllers, I would expect the ParallelMultiSearcher to
be faster.

Multiple CPUs combined with multiple hard drives (say 2 CPUs and 4
drives) would help, but it is not a necessity.

With a single hard drive, whether there is one CPU or several, I don't
immediately see how it could be beneficial, but I may be wrong.

I think it should also be (much) faster when searching over multiple
RemoteSearchables, i.e. when the index is spread across multiple
machines, which was the original intent.
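
Roughly, that multi-machine setup looks something like the sketch below
(going from memory, so take the details with a grain of salt; the host
names, index path and field name are just placeholders):

    // On each machine that holds a partition: export it through RMI.
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.RemoteSearchable;
    import java.rmi.Naming;
    import java.rmi.registry.LocateRegistry;

    public class SearchServer {
      public static void main(String[] args) throws Exception {
        LocateRegistry.createRegistry(1099);                    // default RMI port
        IndexSearcher local = new IndexSearcher("/data/index"); // placeholder path
        Naming.rebind("//localhost/Searchable", new RemoteSearchable(local));
      }
    }

    // On the client: look up each remote partition and search them in parallel.
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.*;
    import java.rmi.Naming;

    public class SearchClient {
      public static void main(String[] args) throws Exception {
        Searchable a = (Searchable) Naming.lookup("//host1/Searchable"); // placeholder hosts
        Searchable b = (Searchable) Naming.lookup("//host2/Searchable");
        Searcher searcher = new ParallelMultiSearcher(new Searchable[] { a, b });
        Hits hits = searcher.search(new TermQuery(new Term("contents", "lucene")));
        System.out.println(hits.length() + " hits");
      }
    }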

Note that it creates a new thread for each Searchable, so 26 of them in
your case, which may be why you see such a slowdown. Using a thread pool
should bring that figure much closer to the original 20ms.
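
To illustrate the thread-pool idea (just a sketch, not the actual
ParallelMultiSearcher code; partitionSearch below stands in for whatever
work is done against each sub-index, and I'm using java.util.concurrent
only to show the shape of it):

    import java.util.*;
    import java.util.concurrent.*;

    public class PooledSearchSketch {
      // Placeholder for the real per-partition search work.
      static String partitionSearch(int partition) {
        return "results from partition " + partition;
      }

      public static void main(String[] args) throws Exception {
        int partitions = 26;
        // 4 worker threads shared across all 26 partitions,
        // instead of 26 new threads per query.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<String>> results = new ArrayList<Future<String>>();
        for (int i = 0; i < partitions; i++) {
          final int p = i;
          results.add(pool.submit(new Callable<String>() {
            public String call() { return partitionSearch(p); }
          }));
        }
        for (Future<String> f : results) {
          System.out.println(f.get()); // merging the partial hits would happen here
        }
        pool.shutdown();
      }
    }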

KR,

Jean-Francois Halleux


-----Original Message-----
From: Erik Hatcher [mailto:[EMAIL PROTECTED]
Sent: Saturday, 6 March 2004 17:20
To: Lucene List
Subject: ParallelMultiSearcher vs. MultiSearcher


Under what conditions would a ParallelMultiSearcher be faster than a
MultiSearcher?  In my brief tests (26 partitioned indexes, all running
local), ParallelMultiSearcher is a factor of 2 or 3 times slower (we're
still talking 20ms versus 60ms though).  I'm searching a partitioned
set of indexes of around 40,000 documents through a RemoteSearchable.

I'm just curious under what circumstances you'd use the new
ParallelMultiSearcher instead of a MultiSearcher?  I saw mention of
multiple hard disk drives being a reason.  Has anyone used it in this
environment and shown that there was a benefit?

Thanks,
        Erik

