Hi people,

I'm running a script to update the crawl database twice a week, and
today it failed.

"readdb crawl/crawldb -stats" still works fine, but searching using
the web application now returns 0 results.
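
(If it helps to rule out the web app itself, I can also try the command-line searcher, e.g. "../bin/nutch org.apache.nutch.searcher.NutchBean sometestterm" with searcher.dir pointed at the crawl directory.)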

I'm looking for the problem in the crawl script, but I'd also like to be
able to rebuild the index from what's still on disk.
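
If the crawldb and segments are still intact, I assume a rebuild would just mean re-running the tail end of the script by hand (the same commands it already uses, with the same relative paths), roughly:

../bin/nutch invertlinks crawl/linkdb -dir crawl/segments
../bin/nutch index crawl/indexes crawl/crawldb crawl/linkdb crawl/segments/*
../bin/nutch dedup crawl/indexes
../bin/nutch merge crawl crawl/indexes
rm -r crawl/index
mv crawl/merge-output crawl/index

but I'm not sure whether the segments survived, so please correct me if that's the wrong approach.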

I'll post the script, and then the output from where it started to fail.

___start script___

#!/bin/bash

crawl_dir="crawl"
crawl_db="$crawl_dir/crawldb"
link_db="$crawl_dir/linkdb"
index_dir="$crawl_dir/index"
new_indexes_dir="$crawl_dir/indexes"
segments="$crawl_dir/segments"
merged_segment="$crawl_dir/merged"
nutch_bin="../bin"
url_dir="urls"
threads="50"
depth="1"
days="4" # i.e. run twice a week; NOTE: this can't be a float

if [ ! -d $crawl_dir ]
then
        echo "initial crawl starting"
        $nutch_bin/nutch crawl $url_dir -dir $crawl_dir -threads $threads -depth $depth

        #this should have been done by now anyway, but meh
        rm -r $new_indexes_dir
        echo "initial crawl finished"
        exit
fi

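# one generate/fetch/updatedb round per depth level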
for ((i=1; i <= depth ; i++))
do
        $nutch_bin/nutch generate $crawl_db $segments -adddays $days
        segment=`ls -d $segments/* | tail -1`
        $nutch_bin/nutch fetch $segment
        $nutch_bin/nutch updatedb $crawl_db $segment
done

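# merge all segments into one, then swap the merged output in for the old segments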
$nutch_bin/nutch mergesegs $merged_segment -dir $segments

rm -r $segments/*
mv $merged_segment/* $segments/
rm -r $merged_segment

$nutch_bin/nutch invertlinks $link_db -dir $segments

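# build fresh indexes from the crawldb, linkdb and merged segment, then dedup and merge them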
$nutch_bin/nutch index $new_indexes_dir $crawl_db $link_db $segments/*

$nutch_bin/nutch dedup $new_indexes_dir

$nutch_bin/nutch merge $crawl_dir $new_indexes_dir

rm -r $new_indexes_dir

#for some reason this is missing
if [[ -d $index_dir ]]
then
        rm -r $index_dir
fi

mv $crawl_dir/merge-output $index_dir

___end script___

___start logs___

CrawlDb update: done
Merging 2 segments to crawl/merged/20070915160114
SegmentMerger:   adding crawl/segments/20070912154605
SegmentMerger:   adding crawl/segments/20070915150016
SegmentMerger: using segment data from: content crawl_generate crawl_fetch crawl_parse parse_data parse_text
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:604)
        at org.apache.nutch.segment.SegmentMerger.merge(SegmentMerger.java:590)
        at org.apache.nutch.segment.SegmentMerger.main(SegmentMerger.java:638)
mv: cannot stat `crawl/merged/*': No such file or directory
rm: cannot remove `crawl/merged': No such file or directory
LinkDb: starting
LinkDb: linkdb: crawl/linkdb
LinkDb: URL normalize: true
LinkDb: URL filter: true
LinkDb: java.io.IOException: No input paths specified in input
        at org.apache.hadoop.mapred.InputFormatBase.validateInput(InputFormatBase.java:99)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:326)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:543)
        at org.apache.nutch.crawl.LinkDb.invert(LinkDb.java:232)
        at org.apache.nutch.crawl.LinkDb.run(LinkDb.java:377)
        at org.apache.hadoop.util.ToolBase.doMain(ToolBase.java:189)
        at org.apache.nutch.crawl.LinkDb.main(LinkDb.java:333)

Indexer: starting
Indexer: linkdb: crawl/linkdb
Indexer: adding segment: crawl/segments/*
Indexer: org.apache.hadoop.mapred.InvalidInputException: Input Pattern /home/ceims/ceims01_project/nutch-0.9/TEST1/crawl/segments/*/crawl_fetch matches 0 files
Input Pattern /home/ceims/ceims01_project/nutch-0.9/TEST1/crawl/segments/*/parse_data matches 0 files
Input Pattern /home/ceims/ceims01_project/nutch-0.9/TEST1/crawl/segments/*/parse_text matches 0 files
        at org.apache.hadoop.mapred.InputFormatBase.validateInput(InputFormatBase.java:138)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:326)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:543)
        at org.apache.nutch.indexer.Indexer.index(Indexer.java:273)
        at org.apache.nutch.indexer.Indexer.run(Indexer.java:295)
        at org.apache.hadoop.util.ToolBase.doMain(ToolBase.java:189)
        at org.apache.nutch.indexer.Indexer.main(Indexer.java:278)

Dedup: starting
Dedup: adding indexes in: crawl/indexes
DeleteDuplicates: org.apache.hadoop.mapred.InvalidInputException: Input path doesnt exist : /home/ceims/ceims01_project/nutch-0.9/TEST1/crawl/indexes
        at org.apache.hadoop.mapred.InputFormatBase.validateInput(InputFormatBase.java:138)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:326)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:543)
        at org.apache.nutch.indexer.DeleteDuplicates.dedup(DeleteDuplicates.java:439)
        at org.apache.nutch.indexer.DeleteDuplicates.run(DeleteDuplicates.java:506)
        at org.apache.hadoop.util.ToolBase.doMain(ToolBase.java:189)
        at org.apache.nutch.indexer.DeleteDuplicates.main(DeleteDuplicates.java:490)

merging indexes to: crawl
done merging
rm: cannot remove `crawl/indexes': No such file or directory

___end logs___
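
From the log it looks like the real failure is the SegmentMerger IOException; the script then went ahead and ran "rm -r crawl/segments/*" regardless, which would explain why invertlinks, index and dedup all complain about missing input. I guess the merge step should at least be guarded before anything gets deleted, e.g. something like

$nutch_bin/nutch mergesegs $merged_segment -dir $segments || exit 1

but that still leaves the question of why the merge job failed in the first place, and whether anything can be rebuilt from what's left.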
