[
https://issues.apache.org/jira/browse/SOLR-906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12657882#action_12657882
]
Ryan McKinley commented on SOLR-906:
------------------------------------
I suppose... though I don't see it as a strict requirement -- if you need full
error handling, use a different SolrServer implementation.
I think a more reasonable error API would be a callback rather than polling --
the error could occur outside your loop (assuming you break at some point).
That callback could easily be converted to a polling API if desired.
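As a rough sketch (the names here are hypothetical, not a final API), the
callback and the polling conversion could look something like this:

  // Hypothetical hook, invoked from the background sender thread when a
  // buffered request fails -- possibly long after the document was added.
  public interface StreamingErrorListener {
    void handleError(Throwable ex);
  }

  // Converting the callback into a polling API: queue the errors and let
  // the client drain them whenever it likes.
  public class PollingErrorListener implements StreamingErrorListener {
    private final java.util.Queue<Throwable> errors =
        new java.util.concurrent.ConcurrentLinkedQueue<Throwable>();

    public void handleError(Throwable ex) { errors.add(ex); }

    public Throwable poll() { return errors.poll(); }
  }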
The big thing to note with this API is that calling:
solr.add( doc )
just adds the document to the queue and processes it in the background. It is a
BlockingQueue, so once it hits the max size the client will block before it can
add more -- but that should be transparent to the client.
The error caused by adding that doc may happen much later in time.
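To illustrate the pattern (a minimal, self-contained sketch only -- the real
class queues SolrInputDocuments and writes them to an open HTTP connection):

  import java.util.concurrent.ArrayBlockingQueue;
  import java.util.concurrent.BlockingQueue;

  // Minimal sketch of the queue + background-thread pattern described above.
  public class QueueingSketch {
    private final BlockingQueue<String> queue =
        new ArrayBlockingQueue<String>(100);

    public QueueingSketch() {
      Thread runner = new Thread(new Runnable() {
        public void run() {
          try {
            while (true) {
              String doc = queue.take(); // drained in the background, so a
              send(doc);                 // failure here surfaces long after
            }                            // add() has already returned
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        }
      });
      runner.setDaemon(true);
      runner.start();
    }

    // add() only enqueues; it blocks while the queue is full, otherwise it
    // returns immediately.
    public void add(String doc) throws InterruptedException {
      queue.put(doc);
    }

    private void send(String doc) {
      // placeholder: write the document to the open HTTP connection
    }
  }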
I'll go ahead and add that callback...
> Buffered / Streaming SolrServer implementation
> ---------------------------------------------
>
> Key: SOLR-906
> URL: https://issues.apache.org/jira/browse/SOLR-906
> Project: Solr
> Issue Type: New Feature
> Components: clients - java
> Reporter: Ryan McKinley
> Assignee: Shalin Shekhar Mangar
> Fix For: 1.4
>
> Attachments: SOLR-906-StreamingHttpSolrServer.patch,
> SOLR-906-StreamingHttpSolrServer.patch,
> SOLR-906-StreamingHttpSolrServer.patch, StreamingHttpSolrServer.java
>
>
> While indexing lots of documents, the CommonsHttpSolrServer add(
> SolrInputDocument ) call is less than optimal: it makes a new request for each
> document.
> With a "StreamingHttpSolrServer", documents are buffered and then written to
> a single open Http connection.
> For related discussion see:
> http://www.nabble.com/solr-performance-tt9055437.html#a20833680
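For context on the quoted description, a minimal SolrJ sketch of the
per-document pattern the streaming server is meant to replace (the URL and
field names below are illustrative):

  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
  import org.apache.solr.common.SolrInputDocument;

  public class PerDocumentIndexing {
    public static void main(String[] args) throws Exception {
      SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
      for (int i = 0; i < 1000; i++) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", Integer.toString(i));
        server.add(doc); // one HTTP request per document with this client
      }
      server.commit();
    }
  }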