wangxu wrote:
> Has anybody thought of replacing CrawlDb with some kind of relational
> DB, MySQL for example?
> 
> CrawlDb is so difficult to manipulate.
> I often need to edit several entries in the crawldb,
> but that costs too much time waiting for the MapReduce jobs.
> 

Once, when I was young and restless, I went down that path with a
relational db. It kind of worked with a few million records. I am not
trying it anymore.

Perhaps your problem is that you process too few records at a time?
Quite often I see examples where people fetch a few hundred or a few
thousand pages at a time. That might be a good amount for small crawls,
but if your goal is bigger, you need bigger segments to get there.
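
For example, instead of many tiny fetch lists you can ask the Generator
for one large segment and run the whole cycle on it. Something along
these lines, assuming the usual crawl/ directory layout (the paths and
the -topN value are just placeholders for your own setup):

  bin/nutch generate crawl/crawldb crawl/segments -topN 500000
  bin/nutch fetch crawl/segments/<new-segment>
  bin/nutch updatedb crawl/crawldb crawl/segments/<new-segment>

One big segment means one round of MapReduce jobs instead of many small
ones, so the per-job overhead you are waiting on matters much less.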

--
 Sami Siren


