[ https://issues.apache.org/jira/browse/NUTCH-1113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13917087#comment-13917087 ]

Sebastian Nagel commented on NUTCH-1113:
----------------------------------------

Hi [~markus17], the junit tests in TestSegmentMergerCrawlDatums fail on my machine:
# testRandomizedSequences() must not use fetch_retry and fetch_notmodified as 
expected statuses, because both are now explicitly excluded and will never 
remain in a merged segment. See the attached patch for a fix.
# The second error happens on every run of testSingleRandomSequence() and 
sometimes in testRandomizedSequences() and 
testRandomTestSequenceWithRedirects(): only one value (that of the first 
segment) reaches the reduce function. This is hardly a bug in SegmentMerger 
itself but rather something related to Hadoop or to the set-up of the test. I 
added some debug logging to SegmentMerger:
{code}
2014-03-01 15:29:50,160 INFO  mapred.Merger - Merging 256 sorted segments
2014-03-01 15:29:50,216 INFO  mapred.Merger - Down to the last merge-pass, with 
1 segments left of total size: 92 bytes
2014-03-01 15:29:50,217 INFO  mapred.LocalJobRunner - 
2014-03-01 15:29:50,235 INFO  segment.SegmentMerger - http://nutch.apache.org/ 
[0] 0000000/crawl_fetch: Version: 7
Status: 37 (fetch_gone)
Fetch time: Sat Mar 01 15:29:35 CET 2014
Modified time: Thu Jan 01 01:00:00 CET 1970
Retries since fetch: 0
Retry interval: 0 seconds (0 days)
Score: 0.0
Signature: null
Metadata: 
2014-03-01 15:29:50,241 INFO  mapred.Task - 
Task:attempt_local1912959616_0001_r_000000_0 is done. And is in the process of 
commiting
{code}
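For what it's worth, here is one way to reproduce this symptom outside Hadoop, with purely illustrative names (this is not Nutch or Hadoop code, just a sketch of my assumption): if the comparator used to group map output never reports two keys as equal, every value lands in a group of its own, and reduce() is handed only one value per call.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.TreeMap;

// Illustrative sketch only: a grouping step that never considers two
// keys equal produces singleton groups, so each reduce call sees just
// one value -- matching the observed behaviour.
public class GroupingSketch {

    // Group (key, value) pairs the way a shuffle would, using cmp.
    static TreeMap<String, List<String>> group(List<String[]> pairs,
                                               Comparator<String> cmp) {
        TreeMap<String, List<String>> grouped = new TreeMap<>(cmp);
        for (String[] p : pairs) {
            grouped.computeIfAbsent(p[0], k -> new ArrayList<>()).add(p[1]);
        }
        return grouped;
    }

    public static void main(String[] args) {
        List<String[]> pairs = List.of(
                new String[]{"http://nutch.apache.org/", "segment1"},
                new String[]{"http://nutch.apache.org/", "segment2"});

        // Sane comparator: one group containing both values.
        System.out.println(group(pairs, String::compareTo));

        // Broken comparator that never returns 0: two singleton groups.
        System.out.println(group(pairs, (a, b) -> 1).size());
    }
}
```

This is only a hypothesis about where the lost values go; the actual cause in the test set-up may be different.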
Can anyone reproduce this second problem?
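Back on the first point, a minimal sketch (illustrative names, not the actual SegmentMerger code) of why the excluded statuses can never show up as expected values in a merged segment:

```java
import java.util.EnumSet;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only: a merge step that explicitly drops
// fetch_retry and fetch_notmodified entries can never leave those
// statuses in the merged segment, so a randomized test must not pick
// them as expected statuses.
public class MergeStatusSketch {

    enum Status { FETCH_SUCCESS, FETCH_GONE, FETCH_RETRY, FETCH_NOTMODIFIED }

    static final EnumSet<Status> EXCLUDED =
            EnumSet.of(Status.FETCH_RETRY, Status.FETCH_NOTMODIFIED);

    // Keep only entries whose status can survive the merge.
    static List<Status> merge(List<Status> entries) {
        return entries.stream()
                .filter(s -> !EXCLUDED.contains(s))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(merge(List.of(
                Status.FETCH_SUCCESS, Status.FETCH_RETRY,
                Status.FETCH_NOTMODIFIED, Status.FETCH_GONE)));
    }
}
```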

> Merging segments causes URLs to vanish from crawldb/index?
> ----------------------------------------------------------
>
>                 Key: NUTCH-1113
>                 URL: https://issues.apache.org/jira/browse/NUTCH-1113
>             Project: Nutch
>          Issue Type: Bug
>    Affects Versions: 1.3
>            Reporter: Edward Drapkin
>            Assignee: Markus Jelsma
>            Priority: Blocker
>             Fix For: 1.8
>
>         Attachments: NUTCH-1113-junit.patch, NUTCH-1113-junit.patch, 
> NUTCH-1113-junit.patch, NUTCH-1113-junit.patch, NUTCH-1113-junit.patch, 
> NUTCH-1113-junit.patch, NUTCH-1113-trunk-junit-fail.patch, 
> NUTCH-1113-trunk-junit-final.patch, NUTCH-1113-trunk.patch, 
> NUTCH-1113-trunk.patch, merged_segment_output.txt, unmerged_segment_output.txt
>
>
> When I run Nutch, I use the following steps:
> nutch inject crawldb/ url.txt
> repeated 3 times:
> nutch generate crawldb/ segments/ -normalize
> nutch fetch `ls -d segments/* | tail -1`
> nutch parse `ls -d segments/* | tail -1`
> nutch update crawldb `ls -d segments/* | tail -1`
> nutch mergesegs merged/ -dir segments/
> nutch invertlinks linkdb/ -dir merged/
> nutch index index/ crawldb/ linkdb/ -dir merged/ (I forward-ported the Lucene 
> indexing code from Nutch 1.1).
> When I crawl with merging segments, I lose about 20% of the URLs that wind up 
> in the index vs. when I crawl without merging the segments.  Somehow the 
> segment merger causes me to lose ~20% of my crawl database!
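
For reference, the quoted steps can be sketched as a single loop. The commands mirror the report verbatim; the DRY_RUN guard is my addition so the sketch can be exercised without a Nutch installation:

```shell
#!/bin/sh
# Sketch of the reported crawl sequence; DRY_RUN=1 (the default) only
# echoes each command instead of running it.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

run nutch inject crawldb/ url.txt
for i in 1 2 3; do
  run nutch generate crawldb/ segments/ -normalize
  seg=$(ls -d segments/* 2>/dev/null | tail -1)
  run nutch fetch "$seg"
  run nutch parse "$seg"
  run nutch update crawldb "$seg"
done
run nutch mergesegs merged/ -dir segments/
run nutch invertlinks linkdb/ -dir merged/
```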



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
