Hello,

I've just released a modified version of Nutch 0.7.1 and Tomcat 5.0 that
runs off a CD-ROM or a local hard drive, cross-platform:

http://sf.net/projects/vicaya

My ambitions are not 'the whole web' but a small, static collection of
pages. I intend to let users run Nutch offline, with occasional online
content and index updates (via RSS, Java Web Start, and/or Subversion).
Please let me know if such questions are out of scope.

I have found that reading the segments from CD-ROM is the biggest
performance bottleneck. However, I do not want to require the user to
copy the entire segments directory to disk. Is it possible to separate
some of the data, for instance keeping the inverted index apart from the
other fields? Would that require changes to Lucene's or Nutch's source
code?
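
To make the question concrete, here is a rough sketch of what I have in
mind, assuming the Lucene 1.4.x API bundled with Nutch 0.7.1 (the index
path and the query field/term are placeholders for my setup): copy only
the merged Lucene index into a RAMDirectory at startup and search it
there, so per-query seeks never touch the disc, while the much larger
segment data stays on the CD.

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TermQuery;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.RAMDirectory;

    public class RamIndexSearch {
        public static void main(String[] args) throws Exception {
            // Merged Lucene index on the CD-ROM (placeholder path).
            String indexOnCd = "/media/cdrom/crawl/index";

            // Copy just the inverted index into memory; the segments
            // directory stays on the CD and is not read per query.
            RAMDirectory ramIndex =
                new RAMDirectory(FSDirectory.getDirectory(indexOnCd, false));

            IndexSearcher searcher = new IndexSearcher(ramIndex);
            Hits hits = searcher.search(
                new TermQuery(new Term("content", "nutch")));
            System.out.println(hits.length() + " hits");
            searcher.close();
        }
    }

Copying the index/ directory to a local temp dir instead of RAM would
serve the same purpose; the point is only to keep query-time I/O off the
CD.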

I am considering importing the content and index segments into an SVN
repository so that users can receive periodic updates. Will the segments
directory lend itself well to SVN patches? I have mostly experimented
with intranet search, but I've noticed that whole-web search creates
dated indices. Might it be a matter of adding new crawl segments since
the last update?
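
To sketch what I mean by 'adding new crawl segments', assuming plain
Lucene 1.4.x rather than any Nutch-level merge tool, and with all paths
as placeholders: after an svn update delivers a new segment directory
carrying its own per-segment index, that index could be folded into the
user's local master index roughly like this:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class MergeNewSegmentIndex {
        public static void main(String[] args) throws Exception {
            // Local, writable master index (placeholder path); 'false'
            // means open the existing index rather than create a new one.
            IndexWriter writer = new IndexWriter(
                "/home/user/vicaya/index", new StandardAnalyzer(), false);

            // Per-segment index that arrived with the latest update
            // (placeholder path).
            Directory newSegmentIndex = FSDirectory.getDirectory(
                "/home/user/vicaya/segments/20060401/index", false);

            // Merge the new segment's postings into the master index.
            writer.addIndexes(new Directory[] { newSegmentIndex });
            writer.close();
        }
    }

That still leaves open whether the binary segment files themselves will
diff well under SVN, which is the other half of my question.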

Thanks,
Alex
--
Those who can make you believe absurdities can make you commit atrocities
-- François Marie Arouet (Voltaire)
http://cph.blogsome.com
http://genaud.org/alex/key.asc
--
CCC7 D19D D107 F079 2F3D BF97 8443 DB5A 6DB8 9CE1


