[ 
https://issues.apache.org/jira/browse/NUTCH-963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13008421#comment-13008421
 ] 

Markus Jelsma commented on NUTCH-963:
-------------------------------------

Solr deduplication computes its own (fuzzy) hashes over one or more fields, and 
separate signature algorithms on different fields can be combined. It does not 
take the score of a document into account, if by that you mean the index-time 
boost on the document. But if there is a separate score (or boost) field, then 
a combined signature over body, title and boost will work.
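As a rough sketch of what that looks like on the Solr side, an update processor chain in solrconfig.xml can attach a fuzzy signature to incoming documents; the field names (title, content, boost) and the signature field name below are assumptions for illustration, not taken from the patch:

```xml
<!-- Hypothetical dedupe chain: combines several fields into one fuzzy signature.
     TextProfileSignature is the fuzzy implementation (ported from Nutch);
     overwriteDupes=true makes Solr replace documents with the same signature. -->
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">signature</str>
    <bool name="overwriteDupes">true</bool>
    <!-- fields combined into the signature; adjust to the actual schema -->
    <str name="fields">title,content,boost</str>
    <str name="signatureClass">solr.processor.TextProfileSignature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
```

The signatureField must of course exist in the schema, and the update handler has to be pointed at this chain for deduplication to take effect.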

That aside, I agree we should go for a single Nutch command for cleaning an 
index, doing dedup and/or 404 cleaning in one go.
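For the 404 side of such a command, the deletes it would send to Solr are plain XML update messages; the URL below is a made-up example of a document id for a page that now returns 404:

```xml
<!-- Hypothetical delete message for a STATUS_DB_GONE url,
     posted to Solr's /update handler and followed by a <commit/> -->
<delete>
  <id>http://example.com/removed-page.html</id>
</delete>
```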

I'll re-review this patch, do further testing, and won't forget CHANGES.txt. 
After that I believe we can create a new related issue for the new 
deduplication.

> Add support for deleting Solr documents with STATUS_DB_GONE in CrawlDB (404 
> urls)
> ---------------------------------------------------------------------------------
>
>                 Key: NUTCH-963
>                 URL: https://issues.apache.org/jira/browse/NUTCH-963
>             Project: Nutch
>          Issue Type: New Feature
>          Components: indexer
>    Affects Versions: 2.0
>            Reporter: Claudio Martella
>            Assignee: Markus Jelsma
>            Priority: Minor
>             Fix For: 1.3, 2.0
>
>         Attachments: NUTCH-963-command-and-log4j.patch, Solr404Deleter.java, 
> SolrClean.java
>
>
> When issuing recrawls it can happen that certain urls have expired (i.e. URLs 
> that don't exist anymore and return 404).
> This patch creates a new command in the indexer that scans the crawldb 
> looking for these urls and issues delete commands to SOLR.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
