Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Nutch Wiki" for change 
notification.

The following page has been changed by JakeVanderdray:
http://wiki.apache.org/nutch/NutchTutorial

------------------------------------------------------------------------------
  
  For example, a typical call might be:
  
- bin/nutch crawl urls -dir crawl -depth 3 -topN 50
+  {{{ bin/nutch crawl urls -dir crawl -depth 3 -topN 50 }}}
+ 
  Typically one starts testing one's configuration by crawling at shallow 
depths, sharply limiting the number of pages fetched at each level (-topN), and 
watching the output to check that desired pages are fetched and undesirable 
pages are not. Once one is confident of the configuration, then an appropriate 
depth for a full crawl is around 10. The number of pages per level (-topN) for 
a full crawl can be from tens of thousands to millions, depending on your 
resources.
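+ 
+ For illustration only, a full crawl along those lines might then be launched with something like:
+ 
+  {{{ bin/nutch crawl urls -dir crawl -depth 10 -topN 50000 }}}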
  
  Once crawling has completed, one can skip to the Searching section below.
  
- Whole-web Crawling
+ == Whole-web Crawling ==
+ 
  Whole-web crawling is designed to handle very large crawls which may take 
weeks to complete, running on multiple machines.
  
- Whole-web: Concepts
+ === Whole-web: Concepts ===
+ 
  Nutch data is composed of (an example on-disk layout follows this list):
  
- The crawl database, or crawldb. This contains information about every url 
known to Nutch, including whether it was fetched, and, if so, when.
+  1. The crawl database, or crawldb. This contains information about every url 
known to Nutch, including whether it was fetched, and, if so, when.
- The link database, or linkdb. This contains the list of known links to each 
url, including both the source url and anchor text of the link.
+  1. The link database, or linkdb. This contains the list of known links to 
each url, including both the source url and anchor text of the link.
- A set of segments. Each segment is a set of urls that are fetched as a unit. 
Segments are directories with the following subdirectories:
+  1. A set of segments. Each segment is a set of urls that are fetched as a 
unit. Segments are directories with the following subdirectories:
- a crawl_generate names a set of urls to be fetched
+    * a ''crawl_generate'' names a set of urls to be fetched
- a crawl_fetch contains the status of fetching each url
+    * a ''crawl_fetch'' contains the status of fetching each url
- a content contains the content of each url
+    * a ''content'' contains the content of each url
- a parse_text contains the parsed text of each url
+    * a ''parse_text'' contains the parsed text of each url
- a parse_data contains outlinks and metadata parsed from each url
+    * a ''parse_data'' contains outlinks and metadata parsed from each url
- a crawl_parse contains the outlink urls, used to update the crawldb
+    * a ''crawl_parse'' contains the outlink urls, used to update the crawldb
- The indexesare Lucene-format indexes.
+  1. The indexes are Lucene-format indexes.
+ 
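+ For illustration, a crawl directory built by the steps below might end up laid out roughly like this (the segment name here is a made-up timestamp):
+ 
+ {{{ crawl/crawldb/
+ crawl/linkdb/
+ crawl/indexes/
+ crawl/segments/20060101120000/crawl_generate/
+ crawl/segments/20060101120000/crawl_fetch/
+ crawl/segments/20060101120000/content/
+ crawl/segments/20060101120000/parse_text/
+ crawl/segments/20060101120000/parse_data/
+ crawl/segments/20060101120000/crawl_parse/ }}}
+ 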
- Whole-web: Boostrapping the Web Database
+ === Whole-web: Bootstrapping the Web Database ===
+ 
  The injector adds urls to the crawldb. Let's inject URLs from the DMOZ Open 
Directory. First we must download and uncompress the file listing all of the 
DMOZ pages. (This is a 200+Mb file, so this will take a few minutes.)
  
- wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
+ {{{ wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
- gunzip content.rdf.u8.gz
+ gunzip content.rdf.u8.gz }}}
+ 
  Next we select a random subset of these pages. (We use a random subset so 
that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ 
contains around three million URLs. We select one out of every 5000, so that we 
end up with around 1000 URLs:
  
- mkdir dmoz
+ {{{ mkdir dmoz
- bin/nutch org.apache.nutch.crawl.DmozParser content.rdf.u8 -subset 5000 > 
dmoz/urls
+ bin/nutch org.apache.nutch.crawl.DmozParser content.rdf.u8 -subset 5000 > 
dmoz/urls }}}
+ 
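+ As an optional sanity check, one can count the sampled URLs (the parser writes one URL per line):
+ 
+  {{{ wc -l dmoz/urls }}}
+ 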
  The parser also takes a few minutes, as it must parse the full file. Finally, 
we initialize the crawl db with the selected urls.
  
- bin/nutch inject crawl/crawldb dmoz
+ {{{ bin/nutch inject crawl/crawldb dmoz }}}
+ 
  Now we have a web database with around 1000 as-yet unfetched URLs in it.
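+ 
+ To verify, the crawldb statistics can be dumped with the readdb tool (assuming it is available in your Nutch version):
+ 
+  {{{ bin/nutch readdb crawl/crawldb -stats }}}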
  
- Whole-web: Fetching
+ === Whole-web: Fetching ===
+ 
  To fetch, we first generate a fetchlist from the database:
  
- bin/nutch generate crawl/crawldb crawl/segments
+ {{{ bin/nutch generate crawl/crawldb crawl/segments }}}
+ 
  This generates a fetchlist for all of the pages due to be fetched. The 
fetchlist is placed in a newly created segment directory. The segment directory 
is named by the time it's created. We save the name of this segment in the 
shell variable s1:
  
- s1=`ls -d crawl/segments/2* | tail -1`
+ {{{ s1=`ls -d crawl/segments/2* | tail -1`
- echo $s1
+ echo $s1 }}}
+ 
  Now we run the fetcher on this segment with:
  
- bin/nutch fetch $s1
+ {{{ bin/nutch fetch $s1 }}}
+ 
  When this is complete, we update the database with the results of the fetch:
  
- bin/nutch updatedb crawl/crawldb $s1
+ {{{ bin/nutch updatedb crawl/crawldb $s1 }}}
+ 
  Now the database has entries for all of the pages referenced by the initial 
set.
  
  Now we fetch a new segment with the top-scoring 1000 pages:
