* merge 80 segments into 1. A lot of I/O involved... and you have to repeat it from time to time. Ugly.
I agree.

* implement a search server as a map task. Several challenges: it needs to partition the Lucene index, and it has to copy all parts of the segments and indexes from the DFS to local storage, otherwise performance will suffer. However, the number of open files per machine would be reduced, because (ideally) each machine would deal with few parts, or a single part of a segment and a single part of an index...
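
To make the partitioning step concrete, here is a rough sketch (names made up, not Nutch code) of how each document could be assigned to one of N search-server map tasks by hashing its id, so every machine ends up serving one part of the index and the matching segment parts:

```java
// Hypothetical sketch of index partitioning across N search-server map
// tasks. IndexPartitioner and partitionFor are illustrative names; the
// actual Nutch partitioning scheme may differ.
public class IndexPartitioner {
    private final int numPartitions;

    public IndexPartitioner(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    // Deterministically map a document id to a partition.
    public int partitionFor(String docId) {
        // Mask off the sign bit so the result is always in [0, numPartitions).
        return (docId.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```

Because the assignment is deterministic, the same partitioning can be used both when building the per-machine indexes and when routing queries.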

Well, I played around and already have a kind of prototype.
I saw the following problems:

+ having a kind of repository of active search servers
Possibility A: find all tasktrackers running a specific task (already discussed on the hadoop mailing list).
Possibility B: run an RPC server in the JVM that runs the search server, add the hostname to the jobconf, and, similar to the task - jobtracker heartbeat, have the search server announce itself via heartbeat to the search server 'repository'.
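
Possibility B could look roughly like the following sketch (SearchServerRegistry, heartbeat, and liveServers are made-up names, not Hadoop or Nutch APIs): servers heartbeat into a central repository, and entries that miss the timeout are considered dead.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical repository of active search servers (possibility B).
// Each server announces itself via periodic heartbeats, analogous to
// the task -> jobtracker heartbeat; stale entries are filtered out.
public class SearchServerRegistry {
    private final long timeoutMillis;
    // hostname -> timestamp of last heartbeat
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    public SearchServerRegistry(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Called by a search server on every heartbeat.
    public void heartbeat(String hostname, long now) {
        lastSeen.put(hostname, now);
    }

    // Clients ask the repository for the currently live servers.
    public java.util.List<String> liveServers(long now) {
        java.util.List<String> live = new java.util.ArrayList<>();
        for (Map.Entry<String, Long> e : lastSeen.entrySet()) {
            if (now - e.getValue() <= timeoutMillis) {
                live.add(e.getKey());
            }
        }
        return live;
    }
}
```

The upside over possibility A is that the repository needs no knowledge of tasktracker internals; the downside is one more process to keep alive.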

+ having the index locally and the segment in the dfs.
++ Adding to NutchBean init one DFS for the index and one for the segments could fix this, or, more generally, add support for stream handlers like dfs:// vs. file://. (very long term)
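
The stream-handler idea could be sketched like this (FsHandler and SchemeResolver are illustrative names, not Nutch APIs): pick a filesystem implementation based on the URI scheme, so the index can live on local disk while segments stay in the DFS.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical scheme-based dispatch: dfs:// paths go to a DFS handler,
// file:// (or bare) paths go to the local filesystem handler.
interface FsHandler {
    String name();
}

public class SchemeResolver {
    private final Map<String, FsHandler> handlers = new HashMap<>();

    public void register(String scheme, FsHandler handler) {
        handlers.put(scheme, handler);
    }

    // Resolve a path like "dfs://host/segments/part-0" to its handler;
    // paths without a scheme default to the local filesystem.
    public FsHandler resolve(String path) {
        int i = path.indexOf("://");
        String scheme = (i < 0) ? "file" : path.substring(0, i);
        FsHandler h = handlers.get(scheme);
        if (h == null) {
            throw new IllegalArgumentException("no handler for scheme: " + scheme);
        }
        return h;
    }
}
```

With something like this in place, NutchBean would not need two separately configured filesystems; the path itself says where the data lives.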

This seems most interesting to me, as working around the issues of a distributed index (while still getting reasonable performance) seems tricky. But being able to build a local index from NDFS, and to access the segment data via NDFS when needed for summaries & cached files, would seem fairly straightforward.
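
That flow could be sketched as follows. This is only an illustration (the in-memory maps stand in for NDFS and local disk, and all names are made up): the index is copied local once at startup for fast random access, while segment data stays remote and is fetched only when a summary or cached page is needed.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: local index, remote segments.
public class LocalIndexRemoteSegments {
    private final Map<String, byte[]> ndfs;                     // stand-in for NDFS
    private final Map<String, byte[]> local = new HashMap<>();  // stand-in for local disk

    public LocalIndexRemoteSegments(Map<String, byte[]> ndfs) {
        this.ndfs = ndfs;
    }

    // One-time copy of the index files to local storage at startup.
    public void copyIndexLocal(String indexPrefix) {
        for (Map.Entry<String, byte[]> e : ndfs.entrySet()) {
            if (e.getKey().startsWith(indexPrefix)) {
                local.put(e.getKey(), e.getValue());
            }
        }
    }

    public boolean indexIsLocal(String file) {
        return local.containsKey(file);
    }

    // Segment data is read from NDFS on demand (summaries, cached files).
    public byte[] fetchSegmentData(String file) {
        return ndfs.get(file);
    }
}
```

Queries then hit only local disk; the slower NDFS reads happen once per displayed result rather than once per index lookup.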

Though every time I think I understand Nutch I'm wrong - or the code changes :)

-- Ken
--
Ken Krugler
Krugle, Inc.
+1 530-210-6378
"Find Code, Find Answers"
