Good point Erick, I will try it today, but I already use cursorMark in my
query for deep pagination.
I also noticed that my CPU usage is pretty high: with 8 cores, usage is over
700%. I am not sure an SSD disk would help in that case.




On Sunday, June 30, 2019, 2:57 PM, Erick Erickson <erickerick...@gmail.com> 
wrote:

Well, the first thing I’d do is see whether it’s querying or updating that’s
taking the time. It should be easy enough to comment out whatever it is that
sends docs to Solr.

If it’s querying, it sounds like you’re paging through your entire data set and 
may be hitting the “deep paging” problem. Use cursorMark in that case.
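
For reference, a minimal cursorMark loop with SolrJ might look like the
sketch below (the URL, collection name and batch size are placeholders; the
sort must include the uniqueKey field):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.params.CursorMarkParams;

    public class CursorPageExample {
      public static void main(String[] args) throws Exception {
        try (HttpSolrClient client = new HttpSolrClient.Builder(
            "http://localhost:8983/solr/mycollection").build()) {
          SolrQuery q = new SolrQuery("*:*");
          q.setRows(1000);
          q.setFields("id");
          // cursorMark requires a stable sort on the uniqueKey field.
          q.setSort(SolrQuery.SortClause.asc("id"));
          String cursor = CursorMarkParams.CURSOR_MARK_START;
          while (true) {
            q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
            QueryResponse rsp = client.query(q);
            // ... process rsp.getResults() here ...
            String next = rsp.getNextCursorMark();
            if (cursor.equals(next)) break; // cursor did not move: done
            cursor = next;
          }
        }
      }
    }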

Best,
Erick

> On Jun 30, 2019, at 9:12 AM, Alexandre Rafalovitch <arafa...@gmail.com> wrote:
> 
> The only thing I can think of is to check whether you can do in-place
> rather than atomic updates:
> https://lucene.apache.org/solr/guide/8_1/updating-parts-of-documents.html#in-place-updates
> But the conditions are quite restrictive: the field must be non-indexed
> (indexed="false"), non-stored (stored="false"), single-valued
> (multiValued="false"), and a numeric docValues (docValues="true") field.
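> 
> For example, an in-place "set" with SolrJ could look like this sketch (the
> field name "popularity_i" is illustrative and would need the schema
> properties above):
> 
>    import java.util.Collections;
>    import org.apache.solr.client.solrj.impl.HttpSolrClient;
>    import org.apache.solr.common.SolrInputDocument;
> 
>    public class InPlaceUpdateExample {
>      public static void main(String[] args) throws Exception {
>        try (HttpSolrClient client = new HttpSolrClient.Builder(
>            "http://localhost:8983/solr/mycollection").build()) {
>          SolrInputDocument doc = new SolrInputDocument();
>          doc.addField("id", "doc-1");
>          // "set" (or "inc") on a docValues-only numeric field is applied
>          // in place, without rewriting the whole document.
>          doc.addField("popularity_i", Collections.singletonMap("set", 42));
>          client.add(doc);
>          client.commit();
>        }
>      }
>    }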
> 
> The other option may be to use an external value field and not update
> Solr documents at all:
> https://lucene.apache.org/solr/guide/8_1/working-with-external-files-and-processes.html
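> 
> Roughly, the setup would look like this (field and file names are
> illustrative):
> 
>    <!-- in the schema -->
>    <fieldType name="extFile" keyField="id" defVal="0" stored="false"
>               indexed="false" class="solr.ExternalFileField"/>
>    <field name="entryRank" type="extFile"/>
> 
>    # a file named external_entryRank in the index data directory,
>    # one "key=value" float per line:
>    doc-1=42.0
>    doc-2=17.5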
> 
> Regards,
>  Alex.
> 
> On Sun, 30 Jun 2019 at 10:53, derrick cui
> <derrickcui...@yahoo.ca.invalid> wrote:
>> 
>> Thanks Alex,
>> My usage is:
>> 1. Execute a query and get the results, returning id only
>> 2. Add a value to a dynamic field
>> 3. Save to Solr with batch size 1000
>> I have defined 50 queries and run them in parallel. Also, I have disabled
>> hard commit and use a soft commit every 1000 docs.
>> 
>> I am wondering whether any configuration can speed it up.
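>> 
>> Roughly, each of the 50 parallel workers does something like this (the
>> dynamic field "tag_s", the tag value, and the client setup are
>> placeholders):
>> 
>>    import java.util.ArrayList;
>>    import java.util.Collections;
>>    import java.util.List;
>>    import org.apache.solr.client.solrj.SolrClient;
>>    import org.apache.solr.common.SolrDocument;
>>    import org.apache.solr.common.SolrInputDocument;
>> 
>>    class TagWorker {
>>      // For each query hit, "set" a value on a dynamic field via an
>>      // atomic update, sending batches of 1000 and committing only once
>>      // at the end.
>>      void tagResults(SolrClient client, List<SolrDocument> hits)
>>          throws Exception {
>>        List<SolrInputDocument> batch = new ArrayList<>(1000);
>>        for (SolrDocument hit : hits) {
>>          SolrInputDocument update = new SolrInputDocument();
>>          update.addField("id", hit.getFieldValue("id"));
>>          update.addField("tag_s", Collections.singletonMap("set", "myTag"));
>>          batch.add(update);
>>          if (batch.size() == 1000) {
>>            client.add(batch);
>>            batch.clear();
>>          }
>>        }
>>        if (!batch.isEmpty()) {
>>          client.add(batch);
>>        }
>>        client.commit();
>>      }
>>    }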
>> 
>> 
>> 
>> 
>> 
>> 
>> On Sunday, June 30, 2019, 10:39 AM, Alexandre Rafalovitch 
>> <arafa...@gmail.com> wrote:
>> 
>> Indexing new documents is just adding additional segments.
>> 
>> Adding a new field to a document means:
>> 1) Reading the existing document (may not always be possible, depending on
>> field configuration)
>> 2) Marking the existing document as deleted
>> 3) Creating a new document with the reconstructed fields plus the new ones
>> 4) Possibly triggering a merge if a lot of documents have been updated
>> 
>> Perhaps the above is a contributing factor. But I also feel that maybe
>> there is some detail in your question I did not fully understand.
>> 
>> Regards,
>>  Alex.
>> 
>> On Sun, 30 Jun 2019 at 10:33, derrick cui
>> <derrickcui...@yahoo.ca.invalid> wrote:
>>> 
>>> I have 400k documents. Indexing is pretty fast, taking only 10 minutes,
>>> but adding a dynamic field to all documents according to query results is
>>> very slow, taking about 1.5 hours.
>>> Does anyone know what the reason could be?
>>> Thanks
>>> 
>>> 
>>> 
>> 
>> 
>> 


