I've been running some scalability tests on Lucene over the past couple of weeks.  
While there may be some flaws in my methods, I think the results will be useful for 
people who want an idea of how Lucene will scale.  If anyone has questions about 
what I did, or wants clarification on something, I'll be happy to provide it.

I'll start by filling out the form:


      Hardware Environment
    * Dedicated machine for indexing: yes
    * CPU: 1 x 2.53 GHz Pentium 4
    * RAM: Self-explanatory
    * Drive configuration: 100 GB 7200 RPM IDE, 80 GB 7200 RPM IDE

      Software Environment
    * Java Version: java version "1.3.1"
        Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.1)
        Classic VM (build 1.3.1, J2RE 1.3.1 IBM Windows 32 build cn131-20020403 (JIT enabled: jitc))
    * OS Version: Win XP SP1
    * Location of index: Local File Systems

      Lucene indexing variables
    * Number of source documents: 43,779,000
    * Total filesize of source documents: ~350 GB -- never stored (documents were randomly generated)
    * Average filesize of source documents: 8 KB
    * Source documents storage location: generated while indexing, never written to disk
    * File type of source documents: text
    * Parser(s) used, if any: None
    * Analyzer(s) used: Standard Analyzer
    * Number of fields per document: 2
    * Type of fields: text, Unstored
    * Index persistence: FSDirectory

      Figures
    * Time taken (in ms/s as an average of at least 3 indexing runs): See notes below
    * Time taken / 1000 docs indexed: 6.5 seconds per 1,000 documents, not counting 
optimization time; 15 seconds per 1,000 documents when optimizing every 100,000 
documents while building an index up to ~5 million documents.  Above 5 million 
documents, optimization took too much time.  See notes below.
    * Memory consumption: ~200 MB
    * Index size: 70.7 GB

      Notes
    * Notes: The documents were randomly generated on the fly as part of the indexing 
process from a list of ~100,000 words whose average length was 7 characters.  Each 
document had 3 words in the title and 500 words in the body.
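
For anyone who wants to reproduce something similar, here's a minimal sketch of the 
kind of indexing loop I'm describing.  This isn't my exact code; it assumes the 
Lucene 1.x API that was current at the time (Field.Text / Field.UnStored, the 
IndexWriter(String, Analyzer, boolean) constructor), and the word list here is a 
tiny stand-in for the real ~100,000-word list:

    import java.util.Random;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    public class ScaleIndexer {

        public static void main(String[] args) throws Exception {
            String[] words = loadWordList();
            Random rand = new Random();

            // Passing a path string makes IndexWriter use an FSDirectory underneath
            IndexWriter writer = new IndexWriter("D:/index", new StandardAnalyzer(), true);

            for (int i = 1; i <= 5000000; i++) {
                Document doc = new Document();
                doc.add(Field.Text("title", randomWords(words, 3, rand)));      // tokenized + stored
                doc.add(Field.UnStored("body", randomWords(words, 500, rand))); // tokenized, not stored
                writer.addDocument(doc);

                // Re-optimize every 100,000 documents to avoid running out of file handles
                if (i % 100000 == 0) writer.optimize();
            }
            writer.optimize();
            writer.close();
        }

        static String randomWords(String[] words, int n, Random rand) {
            StringBuffer sb = new StringBuffer();
            for (int i = 0; i < n; i++) {
                sb.append(words[rand.nextInt(words.length)]).append(' ');
            }
            return sb.toString();
        }

        static String[] loadWordList() {
            // Stand-in only; the real list had ~100,000 words averaging 7 characters
            return new String[] { "lucene", "index", "scale", "random", "query", "merge" };
        }
    }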

While I was trying to build this index, the biggest limitation of Lucene I ran into 
was optimization.  Optimization kills the indexer's performance once you get between 
3 and 5 million documents in an index.  On my Windows XP box, I had to re-optimize 
every 100,000 documents to keep from running out of file handles.  While I could 
build a 5 million document index in 24 hours, I could only add about another million 
over the next 24 hours, because the optimizer kept recopying the entire index (about 
10 GB at that point) over and over again, and it would only get worse from there.  
So, to build an index this large, I built several ~5 million document indexes and 
then merged them at the end into a single index.  The second issue (though not 
really a problem) is that while building the index you need at least double the disk 
space that the finished index will occupy.  I could have kept building the index 
bigger, but I ran out of disk space.
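
In case it's useful to anyone, the merge step at the end amounted to something like 
the following (again a sketch rather than my exact code, and the paths are made up; 
IndexWriter.addIndexes(Directory[]) does the copying and merging):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class MergeIndexes {
        public static void main(String[] args) throws Exception {
            // Each directory holds one of the ~5 million document sub-indexes
            Directory[] parts = new Directory[] {
                FSDirectory.getDirectory("D:/index1", false),
                FSDirectory.getDirectory("D:/index2", false),
                FSDirectory.getDirectory("E:/index3", false)
            };

            IndexWriter writer = new IndexWriter("E:/merged", new StandardAnalyzer(), true);
            writer.addIndexes(parts);  // merges the sub-indexes into one optimized index
            writer.close();
        }
    }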

When I was done building indexes, I ran some queries against them to see how search 
performance varied with the size of the index.  Following are my results for various 
index sizes.

Index Size (GB)                 ms per query
4.53                            83
7.92                            83
10                              89
12.7                            112
52.5                            694
70.7                            944


These numbers are an average of 3 runs of 500 randomly generated queries tossed at 
the index (single threaded) on the same hardware that built the index.  About 50% of 
the queries had 0 results; the other 50% had 1 or more results.
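
The timing loop itself was nothing fancy; roughly the following (a sketch, not my 
exact code -- the query generation here is a trivial stand-in, and the path is made 
up):

    import java.util.Random;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;

    public class QueryBench {
        public static void main(String[] args) throws Exception {
            IndexSearcher searcher = new IndexSearcher("E:/merged");

            String[] queries = randomQueries(500);
            long start = System.currentTimeMillis();
            for (int i = 0; i < queries.length; i++) {
                Query q = QueryParser.parse(queries[i], "body", new StandardAnalyzer());
                Hits hits = searcher.search(q);
                hits.length();  // force the search to complete before timing stops
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("avg ms/query: " + (elapsed / (double) queries.length));

            searcher.close();
        }

        static String[] randomQueries(int n) {
            // Trivial stand-in; the real queries drew on the same ~100,000 word list
            Random rand = new Random();
            String[] words = { "lucene", "index", "scale", "random", "query", "merge" };
            String[] out = new String[n];
            for (int i = 0; i < n; i++) out[i] = words[rand.nextInt(words.length)];
            return out;
        }
    }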

I was happy to see that these numbers make a nice linear plot (attached).  I'm not 
sure what other comments to add here, other than to thank the authors of Lucene for 
their great design and implementation.

If anyone has anything else they would like me to test on this index before I dump 
it... speak up quickly; I have to pull out one of the hard drives this weekend to 
pass it on to its real owner.

Dan

<<attachment: gb mv speed (ms).gif>>
