I'm using Nutch 0.9-dev to crawl the web on a single Linux server. With the
default Hadoop configuration (local file system, no distributed crawling),
the Generator and Fetcher spend a disproportionate amount of time on
map-reduce operations. For example:

2006-11-01 20:32:44,074 INFO  crawl.Generator - Generator: segment:
crawl/segments/20061101203244
... (map and reduce run for 2 hours)
2006-11-01 22:28:11,102 INFO  fetcher.Fetcher - Fetcher: segment:
crawl/segments/20061101203244
... (fetching for 12 hours)
2006-11-02 11:15:10,590 INFO  mapred.LocalJobRunner - 175383 pages, 16583
errors, 3.8 pages/s, 687 kb/s,
2006-11-02 11:17:24,039 INFO  mapred.LocalJobRunner - reduce > sort
... (but reduce > sort and reduce > reduce then run for another 8 hours)
2006-11-02 19:13:38,882 INFO  crawl.CrawlDb - CrawlDb update: segment:
crawl/segments/20061101203244
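
To be clear about "default configuration": I haven't overridden anything in
conf/hadoop-site.xml, so everything runs in-process with the local runner.
As far as I understand, that is roughly equivalent to:

  <property>
    <name>fs.default.name</name>
    <value>local</value>            <!-- local file system, no DFS -->
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>local</value>            <!-- LocalJobRunner, single process -->
  </property>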

Is there any configuration setting that would reduce the time spent in
map-reduce? I need to improve crawl performance, and would appreciate any
suggestions on how to optimize Nutch running on a single server.
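
For instance, would raising the sort buffer and merge factor in
conf/hadoop-site.xml, or the number of fetcher threads in
conf/nutch-site.xml, make a real difference? Something along these lines
(the values below are only guesses on my part):

  <!-- hadoop-site.xml: more memory and wider merges for the local sort -->
  <property>
    <name>io.sort.mb</name>
    <value>200</value>              <!-- default is 100 -->
  </property>
  <property>
    <name>io.sort.factor</name>
    <value>50</value>               <!-- default is 10 -->
  </property>

  <!-- nutch-site.xml: more concurrent fetcher threads -->
  <property>
    <name>fetcher.threads.fetch</name>
    <value>50</value>               <!-- default is 10 -->
  </property>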

Thanks,
--
AJ Chen, PhD
http://web2express.org