Hi all,

We observe that Solr query time increases significantly with the number of
rows requested, even when all we retrieve for each document is just
fl=id,score.  After some debugging, we found that most of the extra time is
spent in BinaryResponseWriter, converting each Lucene Document into a
SolrDocument.


Inside convertLuceneDocToSolrDoc():


https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L182


   for (IndexableField f : doc.getFields())


I am a bit puzzled why we need to iterate through all the fields in the
document. Why can’t we just iterate through the requested fields in fl?
Specifically:



https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L156


If we change  sdoc = convertLuceneDocToSolrDoc(doc,
rctx.getSearcher().getSchema())  to


        sdoc = convertLuceneDocToSolrDoc(doc, rctx.getSearcher().getSchema(),
fnames)


and iterate only over fnames inside convertLuceneDocToSolrDoc(), we see a
significant performance boost in our case: the query-time gap between
rows=128 and rows=500 becomes much smaller.  Am I missing something here?
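For concreteness, here is a self-contained sketch of the difference between the two iteration strategies. This is plain Java, not the actual Solr code: documents are modeled as simple Map<String,String>, so IndexableField, schema handling, and multivalued fields are all simplified away.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class FieldIterationSketch {

    // Current behavior: walk every stored field on the document and
    // keep only those the caller asked for (fnames == null means "all").
    static Map<String, String> convertAllFields(Map<String, String> doc,
                                                Set<String> fnames) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> f : doc.entrySet()) {   // O(total stored fields)
            if (fnames == null || fnames.contains(f.getKey())) {
                out.put(f.getKey(), f.getValue());
            }
        }
        return out;
    }

    // Proposed behavior: walk only the requested field names and look
    // each one up on the document directly.
    static Map<String, String> convertRequestedFields(Map<String, String> doc,
                                                      Set<String> fnames) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String name : fnames) {                           // O(requested fields)
            String v = doc.get(name);
            if (v != null) {
                out.put(name, v);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // A document with 100 stored fields, of which only two are requested.
        Map<String, String> doc = new LinkedHashMap<>();
        for (int i = 0; i < 100; i++) doc.put("field" + i, "value" + i);
        Set<String> fl = new LinkedHashSet<>(Arrays.asList("field3", "field42"));

        System.out.println(convertAllFields(doc, fl));
        System.out.println(convertRequestedFields(doc, fl));
        // Both print {field3=value3, field42=value42}; the second never
        // touches the other 98 stored fields.
    }
}
```

With a small fl and documents carrying many stored fields, the work per document drops from proportional-to-all-fields to proportional-to-requested-fields, which matches the rows=128 vs rows=500 behavior we measured.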


Thanks,

Wei
