Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Nutch Wiki" for change 
notification.

The "RunningNutchAndSolr" page has been changed by LewisJohnMcgibbney:
http://wiki.apache.org/nutch/RunningNutchAndSolr?action=diff&rev1=70&rev2=71

  </property>
  }}}
   * mkdir -p urls
-  * create a file nutch under /urls with the following content (1 url per line 
for each site you want Nutch to crawl).
+  * create a text file named nutch under the urls/ directory with the following content (one URL per line for each site you want Nutch to crawl).
  {{{
  http://nutch.apache.org/
  }}}
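+ Equivalently, you can create that seed file straight from the shell (just a convenience; it gives the same result as creating the file by hand):
+ 
+ {{{
+ echo 'http://nutch.apache.org/' > urls/nutch
+ }}}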
@@ -67, +67 @@

  }}} 
  
  This will include any url in the domain nutch.apache.org.
+ 
+ === 3.1 Using the Crawl Command ===
  
  Now we are ready to initiate a crawl. Use the following parameters:
  
@@ -89, +91 @@

  {{{
  bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 3 -topN 5
  }}}
- If not then please read on for how to set up your Solr instance and index 
your crawl data.
+ If not, then please skip ahead to [[#4. Setup Solr for search|here]] for instructions on how to set up your Solr instance and index your crawl data.
+ 
+ Typically you start by testing your configuration with crawls at shallow depth, sharply limiting the number of pages fetched at each level (-topN), and watching the output to check that desired pages are fetched and undesirable pages are not. Once you are confident in the configuration, an appropriate depth for a full crawl is around 10. The number of pages per level (-topN) for a full crawl can range from tens of thousands to millions, depending on your resources.
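+ 
+ For example, reusing the command form shown above, a full crawl along those lines might look like the following (the -topN value of 50000 is purely illustrative and depends entirely on your resources):
+ 
+ {{{
+ bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 10 -topN 50000
+ }}}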
+ 
+ === 3.2 Using Individual Commands for Whole-web Crawling ===
+ 
+ Whole-web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines. It also permits more control over the crawl process, as well as incremental crawling. It is important to note that whole-web crawling does not necessarily mean crawling the entire world wide web: we can limit a whole-web crawl to just the list of URLs we want to crawl. This is done with a URL filter, just like the one we used for the crawl command above.
+ 
+ ==== Step-by-Step: Concepts ====
+ Nutch data is composed of:
+ 
+  1. The crawl database, or crawldb. This contains information about every url 
known to Nutch, including whether it was fetched, and, if so, when.
+  2. The link database, or linkdb. This contains the list of known links to 
each url, including both the source url and anchor text of the link.
+  3. A set of segments. Each segment is a set of urls that are fetched as a 
unit. Segments are directories with the following subdirectories:
+   * ''crawl_generate'' names a set of urls to be fetched
+   * ''crawl_fetch'' contains the status of fetching each url
+   * ''content'' contains the raw content retrieved from each url
+   * ''parse_text'' contains the parsed text of each url
+   * ''parse_data'' contains outlinks and metadata parsed from each url
+   * ''crawl_parse'' contains the outlink urls, used to update the crawldb
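+ 
+ For example, once a segment has been fetched and parsed in the steps below, listing its directory should show exactly those six subdirectories (the timestamp-style segment name here is only illustrative):
+ 
+ {{{
+ ls segments/20120602121256
+ }}}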
+ 
+ ==== Step-by-Step: Seeding the Crawl DB with a list of URLS ====
+ ===== Option 1: Bootstrapping from the DMOZ database =====
+ The injector adds urls to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)
+ 
+ {{{
+ wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
+ gunzip content.rdf.u8.gz
+ }}}
+ Next we select a random subset of these pages. (We use a random subset so 
that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ 
contains around three million URLs. We select one out of every 5000, so that we 
end up with around 1000 URLs:
+ 
+ {{{
+ mkdir dmoz
+ bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls
+ }}}
+ The parser also takes a few minutes, as it must parse the full file. Finally, 
we initialize the crawl db with the selected urls.
+ 
+ {{{
+ bin/nutch inject crawldb dmoz
+ }}}
+ 
+ Now we have a web database with around 1000 as-yet unfetched URLs in it.
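+ 
+ If you want to verify this, the readdb tool prints crawldb statistics, including how many urls are in each status (this assumes the same crawldb path used in the inject command above):
+ 
+ {{{
+ bin/nutch readdb crawldb -stats
+ }}}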
+ 
+ ===== Option 2: Bootstrapping from an initial seed list =====
+ This option reuses the seed list created earlier, as covered [[#3. Crawl your first website|here]].
+ 
+ {{{ bin/nutch inject crawldb urls }}}
+ 
+ ==== Step-by-Step: Fetching ====
+ To fetch, we first generate a fetch list from the database:
+ 
+ {{{ bin/nutch generate crawldb segments }}}
+ 
+ This generates a fetch list for all of the pages due to be fetched. The fetch 
list is placed in a newly created segment directory. The segment directory is 
named by the time it's created. We save the name of this segment in the shell 
variable {{{s1}}}:
+ 
+ {{{
+ s1=`ls -d segments/2* | tail -1`
+ echo $s1
+ }}}
+ Now we run the fetcher on this segment with:
+ 
+ {{{ bin/nutch fetch $s1 }}}
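+ 
+ If fetching is slow, you can raise the number of fetcher threads on the command line; the -threads option is part of the standard fetcher usage, and the value of 10 below is only an example:
+ 
+ {{{
+ bin/nutch fetch $s1 -threads 10
+ }}}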
+ 
+ When this is complete, we update the database with the results of the fetch:
+ 
+ {{{ bin/nutch updatedb crawldb $s1 }}}
+ 
+ Now the database contains updated entries for all the initial pages, as well as new entries that correspond to newly discovered pages linked from the initial set.
+ 
+ Then we parse the entries:
+ 
+ {{{ bin/nutch parse $s1 }}}
+ 
+ Now we generate and fetch a new segment containing the top-scoring 1000 pages:
+ 
+ {{{
+ bin/nutch generate crawldb segments -topN 1000
+ s2=`ls -d segments/2* | tail -1`
+ echo $s2
+ 
+ bin/nutch fetch $s2
+ bin/nutch updatedb crawldb $s2
+ bin/nutch parse $s2
+ }}}
+ Let's fetch one more round:
+ 
+ {{{
+ bin/nutch generate crawldb segments -topN 1000
+ s3=`ls -d segments/2* | tail -1`
+ echo $s3
+ 
+ bin/nutch fetch $s3
+ bin/nutch updatedb crawldb $s3
+ bin/nutch parse $s3
+ }}}
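+ 
+ Such rounds are easy to script; the sketch below simply wraps the same generate/fetch/updatedb/parse commands from above in a shell loop (three rounds is arbitrary):
+ 
+ {{{
+ for round in 1 2 3; do
+   bin/nutch generate crawldb segments -topN 1000
+   s=`ls -d segments/2* | tail -1`   # newest segment from this round
+   bin/nutch fetch $s
+   bin/nutch updatedb crawldb $s
+   bin/nutch parse $s
+ done
+ }}}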
+ By this point we've fetched a few thousand pages. Let's index them!
+ 
+ ==== Step-by-Step: Invertlinks ====
+ Before indexing we first invert all of the links, so that we may index 
incoming anchor text with the pages.
+ 
+ {{{ bin/nutch invertlinks linkdb -dir segments }}}
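+ 
+ To spot-check the inverted links for a single page, the readlinkdb tool can print the known incoming links for a given url; a small example using the seed url from earlier:
+ 
+ {{{
+ bin/nutch readlinkdb linkdb -url http://nutch.apache.org/
+ }}}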
+ 
+ We are now ready to search with Apache Solr. 
  
  == 4. Setup Solr for search ==
  
