[
https://issues.apache.org/jira/browse/NUTCH-2463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268625#comment-16268625
]
Hudson commented on NUTCH-2463:
-------------------------------
SUCCESS: Integrated in Jenkins build Nutch-trunk #3469 (See
[https://builds.apache.org/job/Nutch-trunk/3469/])
NUTCH-2463 - Enable sampling CrawlDB (github:
[https://github.com/apache/nutch/commit/65651b5cce54736978356ba1a8dea8a10f405d3c])
* (edit) src/java/org/apache/nutch/crawl/CrawlDbReader.java
> Enable sampling CrawlDB
> -----------------------
>
> Key: NUTCH-2463
> URL: https://issues.apache.org/jira/browse/NUTCH-2463
> Project: Nutch
> Issue Type: Improvement
> Components: crawldb
> Reporter: Yossi Tamari
> Priority: Minor
> Fix For: 1.14
>
>
> CrawlDB can grow to contain billions of records. When that happens, *readdb
> -dump* is impractical, and *readdb -topN* can run for ages (and does not
> produce a statistically valid sample).
> We should add a *-sample* parameter to *readdb -dump*, followed by a
> fraction between 0 and 1; only that fraction of the records in the CrawlDB
> will be processed.
> The sample should be statistically random, and all the other filters should
> be applied to the sampled records.
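A minimal sketch of how the proposed *-sample* fraction could be applied per record: keep each record independently with the given probability (Bernoulli sampling), which yields a statistically random sample regardless of key order. The class and method names below are illustrative, not the actual CrawlDbReader code.

```java
import java.util.Random;

// Hypothetical sketch of per-record Bernoulli sampling for readdb -dump;
// names are illustrative, not the actual Nutch API.
public class SampleFilter {

    private final float fraction; // value of the proposed -sample option, in (0, 1]
    private final Random random;

    public SampleFilter(float fraction, long seed) {
        if (fraction <= 0f || fraction > 1f) {
            throw new IllegalArgumentException("-sample must be in (0, 1]");
        }
        this.fraction = fraction;
        this.random = new Random(seed);
    }

    // Each record is kept independently with probability `fraction`,
    // so the expected output size is fraction * totalRecords.
    public boolean accept() {
        return random.nextFloat() < fraction;
    }

    public static void main(String[] args) {
        SampleFilter filter = new SampleFilter(0.1f, 42L);
        int kept = 0;
        int total = 1_000_000;
        for (int i = 0; i < total; i++) {
            if (filter.accept()) {
                kept++;
            }
        }
        // Roughly 10% of records should survive the sample.
        System.out.println(kept + " of " + total + " records kept");
    }
}
```

Because the decision is made independently per record, the other dump filters (status, regex, min/max score, etc.) can simply be applied after the sampling check, as the issue requests.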
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)