Hi Karl,

I was starting the job manually for testing purposes, but even when I schedule it with job invocation "Complete" and "Scan every document once", documents whose IDs are missing from the database are not deleted from my Solr index (no trace of any 'document deletion' event in the history). I should mention that I only use the 'Seeding query' and the 'Data query', and I am not using the $(STARTTIME) and $(ENDTIME) variables in my seeding query.
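For context, this is roughly the shape of the configuration being discussed: a seeding query that returns all document IDs, without the $(STARTTIME)/$(ENDTIME) window variables the JDBC connector supports for incremental seeding. The table and column names below are hypothetical, shown only to illustrate the two variants:

```sql
-- Full seeding query (no time window): returns every ID on each crawl.
-- $(IDCOLUMN) is the ManifoldCF JDBC connector's required alias for the ID column.
SELECT doc_id AS $(IDCOLUMN) FROM my_documents

-- Incremental variant using the connector's time-window variables
-- (hypothetical modified_date column; $(STARTTIME)/$(ENDTIME) are supplied
-- by ManifoldCF as the bounds of the crawl interval):
SELECT doc_id AS $(IDCOLUMN)
  FROM my_documents
 WHERE modified_date > $(STARTTIME) AND modified_date <= $(ENDTIME)
```

Note that with the incremental variant, rows deleted from the database are never returned by the seeding query at all, which is why deletion detection depends on the job's crawl mode rather than on the query itself.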
Julien

On 26.04.2017 16:05, Karl Wright wrote:
> Hi Julien,
>
> How are you starting the job? If you use "Start minimal", deletion would not
> take place. If your job is a continuous one, this is also the case.
>
> Thanks,
> Karl
>
> On Wed, Apr 26, 2017 at 9:52 AM, <[email protected]> wrote:
>
>> Hi the MCF community,
>>
>> I am using MCF 2.6 with the JDBC connector to crawl an Oracle database and
>> index the data into a Solr server, and it works very well. However, when I
>> perform a delta re-crawl, the new IDs are correctly retrieved from the
>> database, but those that have been deleted are not detected by the connector
>> and thus are still present in my Solr index.
>> I would like to know whether this should normally work and I have perhaps
>> missed something in the job configuration, or whether it is not implemented.
>> The only way I have found to solve this issue is to reset the seeding of the
>> job, but that is very time- and resource-consuming.
>>
>> Best regards,
>> Julien Massiera
