Hi Karan,
clean=false will not delete existing documents in the index, but if you reimport
documents with the same ID they will be overwritten. If you see the same doc
with an updated timestamp, it probably means that you did a full import of docs
with the same file name.
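(The overwrite happens on whatever field the schema declares as the uniqueKey. A typical definition looks like this; the field name "id" is an assumption for illustration:)

```xml
<!-- In managed-schema / schema.xml: documents sharing this key overwrite each other -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```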
HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
Hi Emir,
There is one behavior I noticed while performing the incremental import: I
added a new field to managed-schema.xml to test the incremental nature of
using clean=false.
Now xtimestamp gets a new value on every DIH import, even with clean=false.
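A field defined along these lines would show exactly that behavior (the type and attributes here are assumptions): because clean=false still overwrites re-imported documents by ID, the default="NOW" is re-applied on every import, so the stored value changes each time.

```xml
<!-- Assumed definition of the xtimestamp field.
     default="NOW" is evaluated at index time, so each overwrite
     of the same document ID produces a fresh timestamp. -->
<field name="xtimestamp" type="pdate" indexed="true" stored="true"
       default="NOW" multiValued="false"/>
```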
If you need to make a request to Solr that has a lot of custom
parameters and values, you can create an additional request handler
definition and add all those parameters there, instead of
hardcoding them on the client side. See solrconfig.xml; there are lots
of examples there.
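For instance, a DIH handler in solrconfig.xml can carry the parameters as defaults so the client only has to hit the handler URL (the handler name, config file name, and parameter values below are illustrative):

```xml
<!-- Defaults declared here apply to every request to /dataimport,
     so clients need not pass clean/commit on each call -->
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
    <str name="clean">false</str>
    <str name="commit">true</str>
  </lst>
</requestHandler>
```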
Regards,
Hi Karan,
Glad it worked for you.
I am not sure how to do it in the C# client, but adding the clean=false
parameter to the URL should do the trick.
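From the command line, the request would look like this (a sketch only: the host, port, and core name "mycore" are assumptions, not the actual SolrNet call):

```shell
# Build the DIH full-import URL with clean=false so existing docs are kept
SOLR_URL="http://localhost:8983/solr/mycore"
PARAMS="command=full-import&clean=false&commit=true"
echo "${SOLR_URL}/dataimport?${PARAMS}"
# To actually trigger the import:
# curl "${SOLR_URL}/dataimport?${PARAMS}"
```

A C# client could issue the same GET request with HttpClient if SolrNet does not expose the parameter directly.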
Thanks,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
Thanks Emir :-) . Setting the property *clean=false* worked for me.
Is there a way I can selectively clean a particular index from the
C#.NET code using the SolrNet API?
Please suggest.
Kind regards,
Karan
On 29 January 2018 at 16:49, Emir Arnautović
wrote:
Hi Karan,
Did you try running full import with clean=false?
Emir
> On 29 Jan 2018, at 11:18, Karan Saini wrote:
Hi folks,
Please suggest a solution for importing and indexing PDF files
*incrementally*. My requirement is to pull the PDF files remotely from a
network folder path. This network folder will receive new sets of PDF
files at certain intervals (say, every 20 seconds). The folder will be forced