> Please make the following test using your favorite relational DB:
>
> * create a table with 300 million rows and 10 columns of mixed type
> * select 1 million rows, sorted by some value
> * update 1 million rows to different values
>
> If you find that these operations take less time than with the current
> crawldb then we will have to revisit this issue. :)

That is so funny.

I think the original question and the above answer show the big
difference in the ways that Nutch is being used. For a small niche
search engine with fewer than a few million pages, it would probably be
performant to use a relational DB. I have a webdb with 5 million
records, and usually fetch 20k pages at a time. It takes me about 1
hour to do an updatedb. To inject just a few dozen new URLs takes about
20 minutes. On a relational DB, I know the injecting would be *much*
faster, and I think the updatedb step would be too.

Also, for smaller engines, raw throughput doesn't matter as much, and
other considerations like robustness and flexibility can be more
important. With a relational DB, I could recover from a crashed crawl
with a simple SQL update, or remove a set of bogus URLs from the db
just as easily. Now, when I want to tweak the webdb in an unanticipated
way, I have to write a custom piece of Java to do it.

Just thought I'd throw in a perspective from a niche search guy.

Howie
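For concreteness, here is roughly what those one-off tweaks would look like if the webdb lived in a SQL table. This is only a sketch using SQLite with a made-up `webdb(url, status)` schema, not Nutch's actual format: recovering a crashed crawl is one UPDATE, and purging bogus URLs is one DELETE.

```python
import sqlite3

# Hypothetical schema: if the webdb were a SQL table, a one-off tweak
# would be a couple of statements instead of a custom piece of Java.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE webdb (url TEXT PRIMARY KEY, status TEXT)")
con.executemany(
    "INSERT INTO webdb VALUES (?, ?)",
    [
        ("http://example.com/a", "fetched"),
        ("http://example.com/b", "fetching"),    # left in-flight by a crashed crawl
        ("http://spam.example/x", "unfetched"),  # bogus host to clean out
    ],
)

# Recover from a crashed crawl: reset in-flight pages so they are retried.
con.execute("UPDATE webdb SET status = 'unfetched' WHERE status = 'fetching'")

# Remove a set of bogus URLs just as easily.
con.execute("DELETE FROM webdb WHERE url LIKE 'http://spam.example/%'")

for row in con.execute("SELECT url, status FROM webdb ORDER BY url"):
    print(row)
```

With the real webdb, each of these requires writing and running a custom Java tool against the on-disk format.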