Nancy Snyder wrote:
Previously I was using nutch-0.7.2 and was able to use SegmentReader
to read a specific segment
and then use the next() method to read values from the open readers
(FetcherOutput, Content, ParseText, and ParseData).
segmentReader = new SegmentReader(nfs, allSegmentFiles[i],
                                  true, true, true, true);
while (segmentReader.next(fo, co, pt, pd)) {
    ...
}
Using fo (FetcherOutput) I could get the fetch date (fo.getFetchDate()),
using pd (ParseData) I could get the title (pd.getTitle()),
and using pt (ParseText) I could get the parsed text (pt.getText()).
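That is, the loop body looked roughly like this (a sketch from memory that
continues the snippet above; the value objects are created empty and filled
in by next(), and the return types are as I recall them):

FetcherOutput fo = new FetcherOutput();
Content co = new Content();
ParseText pt = new ParseText();
ParseData pd = new ParseData();
while (segmentReader.next(fo, co, pt, pd)) {
    long fetchDate = fo.getFetchDate();  // when the page was fetched
    String title   = pd.getTitle();      // page title from ParseData
    String text    = pt.getText();       // parsed text from ParseText
}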
Now I am trying to upgrade to nutch 0.8.x and have downloaded nutch-0.8.1.
I am looking at the API and do not see how to read the data from the
crawled segments.
SegmentReader has changed and no longer has a next() method. I see a
get(Path segment, UTF8 key, Writer writer, Map results) method
but don't have a code example.
I want to loop through the records (or documents) in a segment and get
the data (url, title, parsed text).
Can anyone show me how to do this?
SegmentReader is itself an example of how to use this API, although most
operations are not performed directly; instead, the data is submitted to
a map-reduce job. The get() method that you mention, however, does
illustrate how to do it without running a map-reduce job.
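Below is a minimal sketch (not the official API; the class name, the
part-00000 path and the choice of parse_text are just illustrative
assumptions) of looping through one segment part directly with Hadoop's
SequenceFile.Reader. In the 0.8.x layout each segment has subdirectories
such as crawl_fetch, content, parse_data and parse_text, and for the mapped
parts the records live in part-XXXXX/data files, which are plain
SequenceFiles keyed by the page URL (UTF8):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.UTF8;
import org.apache.nutch.parse.ParseText;
import org.apache.nutch.util.NutchConfiguration;

// Hypothetical helper class, not part of Nutch - dumps url + parsed text
// from one segment part without running a map-reduce job.
public class DumpParseText {
  public static void main(String[] args) throws Exception {
    Configuration conf = NutchConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    // args[0] is the segment directory; parse_text is a MapFile directory,
    // and its records are stored in part-XXXXX/data (a plain SequenceFile).
    Path dataFile = new Path(args[0], "parse_text/part-00000/data");
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, dataFile, conf);
    UTF8 key = new UTF8();              // the page URL
    ParseText value = new ParseText();  // the parsed text
    while (reader.next(key, value)) {
      System.out.println(key + "\t" + value.getText());
    }
    reader.close();
  }
}

The same pattern should work for the other mapped parts (crawl_fetch with
CrawlDatum, content with Content, parse_data with ParseData) - only the
value class and the subdirectory name change.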
Please also see http://issues.apache.org/jira/browse/HADOOP-175 for
utilities that allow you to read data directly from each segment part,
without running a map-reduce job.
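(Not the HADOOP-175 utilities themselves, just an illustration:) if you
only need a single record rather than a full scan, a MapFile.Reader lookup
does roughly what get() does for each mapped part. Another sketch, again
assuming a single part-00000 and using parse_data to fetch the title:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.UTF8;
import org.apache.nutch.parse.ParseData;
import org.apache.nutch.util.NutchConfiguration;

// Hypothetical helper class, not part of Nutch - looks up one URL in the
// parse_data part of a segment and prints its title.
public class LookupParseData {
  public static void main(String[] args) throws Exception {
    Configuration conf = NutchConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    // args[0] = segment directory, args[1] = url to look up
    Path part = new Path(args[0], "parse_data/part-00000");  // a MapFile dir
    MapFile.Reader reader = new MapFile.Reader(fs, part.toString(), conf);
    ParseData value = new ParseData();
    if (reader.get(new UTF8(args[1]), value) != null) {
      System.out.println("title: " + value.getTitle());
    } else {
      System.out.println("not found: " + args[1]);
    }
    reader.close();
  }
}

With more than one part-XXXXX directory you would need to check each part's
reader (or compute the right partition for the key).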
--
Best regards,
Andrzej Bialecki <><
___. ___ ___ ___ _ _ __________________________________
[__ || __|__/|__||\/| Information Retrieval, Semantic Web
___|||__|| \| || | Embedded Unix, System Integration
http://www.sigram.com Contact: info at sigram dot com