http://wiki.apache.org/solr/EmbeddedSolr
 
Following the example on connecting to the index directly without using
HTTP, I tried to optimize by passing true for the optimize flag to the
CommitUpdateCommand.
 
When optimizing an index with Lucene directly, the index temporarily
doubles in size and then the old, now-merged segments are deleted.
Here, instead, the old segments were still there after the optimize.
Calling optimize a second time did remove them.
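One plausible explanation (an assumption on my part, not confirmed by the Solr docs): the currently registered searcher still has the pre-optimize segment files open, and on POSIX filesystems a deleted file's data and disk space stick around until the last open handle is closed. The old segments would then only disappear once something closes or reopens that searcher. The sketch below uses plain Java I/O (no Solr or Lucene involved) to demonstrate that unlink-while-open behavior:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class OpenFileDelete {
    public static void main(String[] args) throws IOException {
        // A temp file stands in for an old, pre-optimize segment file.
        Path p = Files.createTempFile("segment", ".dat");
        Files.write(p, "old segment data".getBytes());

        // An open stream plays the role of the searcher still holding
        // the old segment open.
        InputStream held = Files.newInputStream(p);

        Files.delete(p); // "optimize" removes the old segment
        System.out.println("deleted: " + !Files.exists(p));

        // ...but the data (and its disk space) remains until the old
        // handle is closed -- which would explain why the old segments
        // appear to survive the first optimize.
        byte[] buf = new byte[3];
        int n = held.read(buf);
        System.out.println("still readable: " + (n == 3));
        held.close(); // only now is the space actually reclaimed
    }
}
```

If this is the cause, the second optimize would "work" only as a side effect of a new searcher being opened and the old one released, and any call that cycles the searcher (or closes the core) would do the same.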
 
In Lucene it's usually:
writer.optimize();
writer.close();
 
So what method call do I need to make afterwards so that I don't have to
call optimize a second time through the Solr API?
 
public void index() {
    //do stuff 
    while (loop) {
      //add millions of documents and commit at intervals
    }
    optimize(); // optimize to reduce file handles
    optimize(); // clean up old segments which still existed
    // WHAT SHOULD BE HERE INSTEAD OF ANOTHER OPTIMIZE?
}
 
public void commit() throws IOException {
    commit(false);
}
 
public void optimize() throws IOException {
    logger.info("Optimizing an index temporarily doubles size of index,"
        + " but reduces number of files");
    commit(true);
}
 
private static void commit(boolean optimize) throws IOException {
    UpdateHandler updateHandler = core.getUpdateHandler(); 
    CommitUpdateCommand commitcmd = new CommitUpdateCommand(optimize);
    updateHandler.commit(commitcmd);
}
 
Paul Sundling
