Hello all,

I have integrated Solr into my project successfully. I use a DataImportHandler 
to do the initial import, mapping the fields to my schema.xml, and I use SolrJ 
to query the data, including faceting. Works great.

The question I have now is a general one about updating the index and how it 
works. Right now I have a thread that runs a couple of times a day to update 
the index. My index is composed of about 20,000 documents, and when this thread 
runs it pulls the data for those 20,000 documents from the db, creates a 
SolrInputDocument for each, and then uses the following code to send them to the index.

SolrServer server = new 
CommonsHttpSolrServer("http://localhost:8080/apache-solr-1.4.1/");
Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();

for (Document document : documents) {
    // map each db document onto my schema fields
    SolrInputDocument solrDoc = SolrUtils.createDocsSolrDocument(document);
    docs.add(solrDoc);
}

UpdateRequest req = new UpdateRequest();
req.setAction(UpdateRequest.ACTION.COMMIT, false, false);
req.add(docs);
UpdateResponse rsp = req.process(server);

server.optimize();
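For reference, here is roughly the same thing using the plain add/commit calls 
instead of an UpdateRequest (just a sketch, not what I actually run; Document and 
SolrUtils.createDocsSolrDocument are my own helper classes):

import java.util.ArrayList;
import java.util.Collection;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class ReindexSketch {
    public static void reindex(Collection<Document> documents) throws Exception {
        SolrServer server =
            new CommonsHttpSolrServer("http://localhost:8080/apache-solr-1.4.1/");

        Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
        for (Document document : documents) {
            // my own mapping helper, same as in the code above
            docs.add(SolrUtils.createDocsSolrDocument(document));
        }

        server.add(docs);   // a doc with an existing uniqueKey replaces the old one
        server.commit();    // make the newly added documents visible to searchers
        server.optimize();
    }
}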

This process takes 19 seconds, which is 10 seconds faster than my old 
solution using Compass (another open source search project we used). Is this the 
best way to update the index? If I understand correctly, an update is actually 
a delete in the index followed by an add. During those 19 seconds, will the index be 
locked only for the document being updated, or could the whole index be locked? I 
am not in production with this solution yet, so I want to make sure my update 
process makes sense. Thanks

Greg
