Hi Karl,

I can't speak for other MCF users, but we have many use cases where we need to crawl several million documents from different kinds of repositories. With those, we sometimes struggle when crawl jobs suddenly stop because of problematic files, whose only workaround is to filter them out so the job doesn't abort.

From past discussions on the mailing list, I gather that from your point of view it is preferable to stop a job when it encounters an unknown and/or unexpected issue (or after several failed retries), so that the issue is surfaced and can be fixed.

Although I can understand your point of view, I do not think it covers every behavior expected of MCF in production. We have repeatedly encountered scenarios where customers prefer that the crawl keep moving, while still leaving us the possibility to investigate any file that was skipped. (One argument is that jobs are sometimes started on Friday evenings; if one aborts during the weekend, we lose up to 60 hours of crawling before an admin can check the job status.)

Yet as of now, this is not feasible: jobs end up aborting when they encounter problematic files that cannot be clearly identified.

We have brainstormed internally, and we have a proposal that we think can satisfy both your view and ours, and that we hope you will find acceptable:

Whenever a job encounters an error that is not clearly identified:
1. It immediately retries one time.
2. If the retry succeeds, the crawl moves on as usual.
3. If it fails, the job moves this document to the current end of the processing pipeline, sets the document's retry counter to 2, and crawls the remaining documents.
4. When the job encounters this document again, it tries once more. If it succeeds, the crawl moves on as usual. If it fails, the job again moves the document to the current end of the processing pipeline, increments the counter by 1, and doubles the delay before the next retry.
5. We iterate until the maximum number of retries for the problematic document has been reached. If the document still fails at that point, the crawl aborts.

With this behavior, a job is still ultimately aborted on critical errors, but at least we can crawl as many non-problematic documents as possible before the failure (see the sketch after this list).
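
To make the proposal concrete, here is a minimal, self-contained Java sketch of the retry scheme. It is only an illustration under assumptions: it uses none of the actual ManifoldCF APIs, and the names (Doc, attempt(), MAX_RETRIES, INITIAL_DELAY_MS) and the concrete limits are hypothetical.

import java.util.ArrayDeque;
import java.util.Deque;

public class RetryQueueSketch {

    static final int MAX_RETRIES = 5;          // assumed per-job maximum
    static final long INITIAL_DELAY_MS = 1000; // assumed initial back-off

    static class Doc {
        final String id;
        int retries = 0;                 // attempts made so far
        long delayMs = INITIAL_DELAY_MS; // current back-off delay
        long notBefore = 0;              // earliest time of the next retry
        Doc(String id) { this.id = id; }
    }

    // Stand-in for the real fetch/index call; returns false on failure.
    static boolean attempt(Doc doc) {
        // ... connector-specific processing would go here ...
        return true;
    }

    public static void crawl(Deque<Doc> queue) throws InterruptedException {
        while (!queue.isEmpty()) {
            Doc doc = queue.pollFirst();
            long wait = doc.notBefore - System.currentTimeMillis();
            if (wait > 0) Thread.sleep(wait); // honor the back-off delay

            if (doc.retries == 0) {
                // Steps 1-2: first attempt plus one immediate retry.
                if (attempt(doc) || attempt(doc)) continue;
                doc.retries = 2; // step 3: two attempts have now failed
            } else {
                // Step 4: a later pass over the requeued document.
                if (attempt(doc)) continue;
                doc.retries++;
            }

            if (doc.retries >= MAX_RETRIES) {
                // Step 5: retries exhausted -> abort the whole job.
                throw new IllegalStateException("Aborting job on " + doc.id);
            }

            // Steps 3-4: requeue at the tail and double the back-off delay.
            doc.delayMs *= 2;
            doc.notBefore = System.currentTimeMillis() + doc.delayMs;
            queue.addLast(doc);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Deque<Doc> queue = new ArrayDeque<>();
        queue.addLast(new Doc("doc-1"));
        crawl(queue);
    }
}

The key point of the design is that a failing document is requeued at the tail instead of blocking the crawl, so its back-off delay grows while the rest of the corpus keeps being processed.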

Another, more "direct" approach could be to simply add an optional parameter to a job: a "skip errors" checkbox. This parameter would tell the job to skip any error it encounters, assuming we properly log the errors in the log files and/or in the simple history so we can debug them later on.
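
As a rough illustration of what that option could look like (the skipErrors flag and the logging target are hypothetical, not existing ManifoldCF parameters):

import java.util.logging.Logger;

public class SkipErrorsSketch {
    private static final Logger LOG = Logger.getLogger("crawl");

    // Hypothetical job-level option, read from the proposed checkbox.
    static boolean skipErrors = true;

    static void handleFailure(String docId, Exception e) {
        if (skipErrors) {
            // Record the failure so it can be investigated later
            // (stand-in for a log line / simple history entry),
            // then let the crawl continue with the remaining documents.
            LOG.warning("Skipping document " + docId + ": " + e);
        } else {
            // Without the option: the error propagates and the job aborts.
            throw new RuntimeException("Aborting job on " + docId, e);
        }
    }
}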

We would gladly welcome your thoughts on these two approaches.

Regards,
Julien
