Okay, so I'm trying to find the sweet spot for how many index segments I
should have.

I have 47 million records of contact data (Name + Address). I used 7
machines to build indexes, which resulted in the following spread of
individual index sizes (record counts):

1503000
1500000
1497000
5604750
5379750
1437000
1458000
1446000
1422000
1425000
1425000
1404000
1413000
1404000
4893750
4689750
4519500
4497750
46919250 Total Records
(The faster machines built the bigger indexes)
I also joined all these indexes together into one large 47 million
record index, and ran my query pounder against both data sets: one using
a ParallelMultiSearcher over the multiple indexes, and one using a normal
IndexSearcher against the large index.
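For reference, the two setups look roughly like this (a simplified sketch, not my actual query pounder; the index paths, field name, and term value are placeholders, and I'm using the plain Searchable/Hits API):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ParallelMultiSearcher;
    import org.apache.lucene.search.Searchable;
    import org.apache.lucene.search.TermQuery;

    public class SearcherSetup {
        public static void main(String[] args) throws Exception {
            // One big merged index behind a plain IndexSearcher
            IndexSearcher big = new IndexSearcher("/indexes/all");

            // Many smaller indexes behind a ParallelMultiSearcher
            Searchable[] shards = new Searchable[] {
                new IndexSearcher("/indexes/part01"),
                new IndexSearcher("/indexes/part02"),
                // ... one IndexSearcher per segment index
            };
            ParallelMultiSearcher multi = new ParallelMultiSearcher(shards);

            TermQuery q = new TermQuery(new Term("firstName", "JOHN"));
            Hits bigHits = big.search(q);     // searches the single large index
            Hits multiHits = multi.search(q); // searches the shards in parallel, merges results
            System.out.println(bigHits.length() + " / " + multiHits.length());
        }
    }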
What I found was that for queries with one term (First Name), the large
index beat the multiple indexes hands down (280 queries/second vs.
170 q/s). But for queries with multiple terms (Address), the multiple
indexes beat out the large index (26 q/s vs. 16 q/s).
By the way, I'm running these on a 2-processor box with 16 GB of RAM.

So what I'm trying to determine is whether there is some equation out
there that can help me find the sweet spot for splitting my indexes.
Most queries are going to be multi-term, and the big O of the single-term
search clearly appears to be O(log n). (I verified with 470 million
records: the single-term search returns at 140 QPS, consistent with what
I believe about search algorithms.) The equation I'm missing is the
big O for the union of the result sets that match particular terms. I'm
assuming (I haven't looked at the source yet) that Lucene finds all the
documents that match the first term, and all the documents that match
each subsequent term, and then finds the union of all the sets. Is
this correct? Anybody have any ideas on how to work out an equation for
this?
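To make the assumption concrete, here is the model I have in my head for the multi-term (OR) case, written as a toy sketch rather than anything from Lucene's source; the doc-ID lists and field values are made up:

    import java.util.TreeSet;

    // Toy model: each term has a sorted list of matching doc IDs, and the
    // query result is the union of those lists. Under this model the cost
    // is roughly proportional to the total number of postings touched
    // (sum of the per-term match counts, times a log factor for the set),
    // not to log n of the overall index size.
    public class UnionSketch {
        public static TreeSet<Integer> union(int[][] postings) {
            TreeSet<Integer> result = new TreeSet<Integer>();
            for (int[] termDocs : postings) {
                for (int doc : termDocs) {
                    result.add(doc);  // duplicates across terms collapse automatically
                }
            }
            return result;
        }

        public static void main(String[] args) {
            int[][] postings = {
                { 1, 4, 9, 12 },   // docs matching the first address term
                { 4, 7, 12, 20 },  // docs matching the second address term
            };
            System.out.println(union(postings)); // [1, 4, 7, 9, 12, 20]
        }
    }

If that model is right, the multi-term cost is dominated by the sizes of the per-term result sets, which would explain why it behaves so differently from the single-term O(log n) case.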

Ryan
