Hi Marco

Nutch now delegates indexing and searching to SOLR; all the steps you
described (tokenization, lowercasing, etc.) are implemented there, and
Nutch does nothing special about them. From Nutch's point of view,
indexing consists of gathering data from various sources (crawldb,
segments, linkdb), applying some simple transformations (indexing filters),
then sending the documents to SOLR.
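Just to illustrate what those analysis steps amount to, here is a rough standalone sketch in plain Java (a hypothetical example for clarity, not actual Nutch or SOLR code; SOLR's real analyzers are configurable chains doing the same kind of thing):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of a basic analysis chain:
// tokenization, lowercasing, stopword removal.
public class SimpleAnalyzer {

    // Toy stopword list for illustration only.
    private static final Set<String> STOPWORDS =
            Set.of("the", "a", "an", "and", "of", "to");

    public static List<String> analyze(String text) {
        return Arrays.stream(text.split("\\W+")) // naive tokenization on non-word chars
                .filter(t -> !t.isEmpty())
                .map(String::toLowerCase)              // lowercasing
                .filter(t -> !STOPWORDS.contains(t))   // stopword removal
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(analyze("The Quick Brown Fox and the Dog"));
    }
}
```

In SOLR you would declare an equivalent chain in the schema (tokenizer plus filter factories) rather than code it by hand, which is why Nutch leaves all of this to SOLR.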
You could of course write a custom map/reduce function with SOLR embedded,
but that's not what we do in Nutch. Have a look at the SOLR mailing lists;
you'll probably find more info there.

HTH

Julien

PS: (shameless self-promotion for one of my pet projects) Behemoth (
https://github.com/jnioche/behemoth) is about doing large-scale text
processing on Hadoop. There is a component which delegates the indexing of
documents to SOLR, but it could be modified to do what you described and run
SOLR instances within the map/reduce functions.

On 8 February 2011 10:06, Marco Didonna <[email protected]> wrote:

> Hi everyone,
> I've built a little Hadoop program to build an inverted index from a
> text collection. It performs basic analysis: tokenization,
> lowercasing, stopword removal. I was wondering whether I could reuse some
> Nutch components, since I assume they've undergone more intense
> tuning and are therefore more efficient. I looked in the javadoc
> (org.apache.nutch.indexer package) for some hints but didn't find
> any helpful material... I hope someone can point me to the right
> place with - hopefully - some example code.
> I would like to underline that I don't need anything but the indexing
> capabilities of Nutch - no crawling or other stuff - and I need the
> whole thing to work on Hadoop :)
>
> Thanks for your time
>
> MD
>



-- 
Open Source Solutions for Text Engineering

http://digitalpebble.blogspot.com/
http://www.digitalpebble.com
