Yes, absolutely.

The only optimization we could make here would be to send Solr updates only
for documents we know for sure have changed (e.g. based on digests, as the
deduplication code does). I'm not sure how Solr behaves if you send an
update with no change in the document; it probably performs much the same
check internally, so what we'd save is mainly the transmission of the
update.
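As a rough sketch of that digest-based check (the names here are illustrative, not Nutch's actual API): keep the digest recorded at the previous crawl, recompute it on the new content, and only send the document to Solr when the two differ.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class DigestCheck {
    // Digests recorded at the previous crawl, keyed by URL (illustrative
    // stand-in for what the CrawlDb/segment data would provide).
    static Map<String, String> previousDigests = new HashMap<>();

    // Hex MD5 of the page content.
    static String digest(String content) throws Exception {
        byte[] hash = MessageDigest.getInstance("MD5")
                .digest(content.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : hash) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // True only when the content changed since the last crawl, i.e. when an
    // update actually needs to be sent to Solr; also records the new digest.
    static boolean needsUpdate(String url, String content) throws Exception {
        String d = digest(content);
        String old = previousDigests.put(url, d);
        return !d.equals(old);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(needsUpdate("http://example.com/", "hello"));  // first crawl: true
        System.out.println(needsUpdate("http://example.com/", "hello"));  // unchanged: false
        System.out.println(needsUpdate("http://example.com/", "hello2")); // changed: true
    }
}
```

This only saves the transmission and re-indexing of unchanged documents; Solr would still overwrite the old document correctly either way.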

On 1/26/11 9:16 PM, Markus Jelsma wrote:
> This is default behaviour. If pages are scheduled for fetching they will show 
> up in the next segment. If you index that segment, the old document in Solr is 
> overwritten.
>
>> But we also need to detect modified documents in order to trigger an
>> update command to Solr (an improvement of SolrIndexer). I was planning
>> to open a Jira issue on this missing functionality this week.
>>
>> Erlend
>>
>> On 26.01.11 18.12, Claudio Martella wrote:
>>> Today I had a look at the code and wrote this class. It works here on my
>>> test cluster.
>>>
>>> It scans the crawldb for entries carrying the STATUS_DB_GONE status and
>>> issues a delete to Solr for those entries.
>>>
>>> Is that what you guys have in mind? Should I file a JIRA?
>>>
>>> On 1/24/11 10:26 AM, Markus Jelsma wrote:
>>>> Each item in the CrawlDB carries a status field. Reading the CrawlDB
>>>> will return this information as well, the same goes for a complete dump
>>>> with which you could create the appropriate delete statements for your
>>>> Solr instance.
>>>>
>>>>     /** Page no longer exists. */
>>>>     public static final byte STATUS_DB_GONE = 0x03;
>>>>
>>>> http://svn.apache.org/viewvc/nutch/branches/branch-1.3/src/java/org/apache/nutch/crawl/CrawlDatum.java?view=markup
>>>>
>>>>> Where is that information stored? It could then easily be used to issue
>>>>> deletes on Solr.
>>>>>
>>>>> On 1/23/11 10:32 PM, Markus Jelsma wrote:
>>>>>> Nutch can detect 404s by recrawling existing URLs. The mutation,
>>>>>> however, is not pushed to Solr at the moment.
>>>>>>
>>>>>>> As far as I know, Nutch can only discover new URLs to crawl and send
>>>>>>> the parsed content to Solr. But what about maintaining the index?
>>>>>>> Say that you have a daily Nutch script that fetches/parses the web
>>>>>>> and updates the Solr index. After one month, several web pages have
>>>>>>> been modified and some have also been deleted. In other words, the
>>>>>>> Solr index is out of sync.
>>>>>>>
>>>>>>> Is it possible to detect such changes in order to send update/delete
>>>>>>> commands to Solr?
>>>>>>>
>>>>>>> It looks like the Aperture crawler has a workaround for this, since
>>>>>>> its crawler handler has methods such as objectChanged(...):
>>>>>>> http://sourceforge.net/apps/trac/aperture/wiki/Crawlers
>>>>>>>
>>>>>>> Erlend
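
A minimal sketch of the idea behind that class (names are illustrative; the real version would read the CrawlDb as a MapReduce job and push the deletes through SolrJ): collect the URLs whose status is STATUS_DB_GONE and hand that list to Solr's delete-by-id.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SolrClean {
    /** Page no longer exists (value taken from CrawlDatum). */
    static final byte STATUS_DB_GONE = 0x03;
    /** Page was fetched successfully (value taken from CrawlDatum). */
    static final byte STATUS_DB_FETCHED = 0x02;

    // Given url -> status entries read from the CrawlDb, return the URLs
    // that should be deleted from the Solr index. The real job would then
    // call e.g. solr.deleteById(gone) followed by a commit.
    static List<String> urlsToDelete(Map<String, Byte> crawlDb) {
        List<String> gone = new ArrayList<>();
        for (Map.Entry<String, Byte> e : crawlDb.entrySet()) {
            if (e.getValue() == STATUS_DB_GONE) {
                gone.add(e.getKey());
            }
        }
        return gone;
    }

    public static void main(String[] args) {
        Map<String, Byte> db = new LinkedHashMap<>();
        db.put("http://example.com/alive", STATUS_DB_FETCHED);
        db.put("http://example.com/removed", STATUS_DB_GONE);
        System.out.println(urlsToDelete(db)); // prints [http://example.com/removed]
    }
}
```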


-- 
Claudio Martella
Digital Technologies
Unit Research & Development - Analyst

TIS innovation park
Via Siemens 19 | Siemensstr. 19
39100 Bolzano | 39100 Bozen
Tel. +39 0471 068 123
Fax  +39 0471 068 129
[email protected] http://www.tis.bz.it

