Hi,

I've followed the NutchTutorial at
http://wiki.apache.org/nutch/NutchTutorial to the letter, twice.

The first time, the crawl worked correctly.
This time, the crawl ran, but no crawldb, linkdb, or segments
directories were created.

My first attempt has already been wiped, so I can't compare configurations
to see where I went wrong.
I'm not getting any error messages.

I've reviewed my configuration and everything appears to be in order.
Here's what happens when I run the crawl...

root@myserver:~/nutch# bin/crawl urls/seed.txt testcrawl -dir crawl -depth 3 -topN 50
Injector: starting at 2014-07-04 14:27:57
Injector: crawlDb: testcrawl/crawldb
Injector: urlDir: urls/seed.txt
Injector: Converting injected urls to crawl db entries.
Injector: total number of urls rejected by filters: 0
Injector: total number of urls injected after normalization and filtering: 1
Injector: Merging injected urls into crawl db.
Injector: overwrite: false
Injector: update: false
Injector: finished at 2014-07-04 14:28:05, elapsed: 00:00:07
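
For what it's worth, this is how I'm checking for the output. The two paths are taken from what I ran and from the log above: I passed "-dir crawl", but the injector line says "crawlDb: testcrawl/crawldb", so I'm listing both candidate locations (this is just a sanity check, nothing Nutch-specific):

```shell
# List both candidate output locations.
# "crawl" comes from my -dir argument; "testcrawl" from the injector log.
for d in crawl testcrawl; do
  echo "== $d =="
  ls -l "$d" 2>/dev/null || echo "(no $d directory)"
done
```
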

I'm not looking for someone to fix this for me, just a pointer in the
right direction so I can figure it out myself. I really want to learn
Nutch/Solr inside and out. TIA!



--
View this message in context: 
http://lucene.472066.n3.nabble.com/NutchTutorial-Followed-Crawldb-Not-Created-tp4145668.html
Sent from the Nutch - User mailing list archive at Nabble.com.
