[ https://issues.apache.org/jira/browse/NUTCH-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12508900 ]

Andrzej Bialecki  commented on NUTCH-392:
-----------------------------------------

Excellent work, Doğacan - thank you. The numbers for RECORD compression 
probably depend on some sweet spot in the environment: CPU usage, how the OS 
pulls data from the disk and its buffers, the size of the hard drive cache, the 
size of internal memory buffers in Hadoop, etc. I would venture a guess that 
compression NONE is raw disk I/O bound, whereas BLOCK compression suffers from 
the poor performance of seeking in compressed data.
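
For reference, the three modes under discussion map onto Hadoop's SequenceFile 
compression types. A minimal sketch of writing a file with an explicit 
compression type follows (Hadoop API of this era; the output path and the Text 
key/value classes are placeholders, not what Nutch uses for every segment part):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.io.Text;

public class CompressionTypeSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path out = new Path("benchmark/part-00000");  // placeholder path

    // CompressionType.NONE   - raw records, typically raw disk I/O bound
    // CompressionType.RECORD - each record compressed on its own
    // CompressionType.BLOCK  - many records compressed together; a seek has to
    //                          decompress a whole block, the likely cause of
    //                          the poor random-access numbers
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, out, Text.class, Text.class, CompressionType.RECORD);
    writer.append(new Text("key"), new Text("value"));
    writer.close();
  }
}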

I agree with your conclusions regarding the type of compression to use for each 
segment part.

Re: Nutch not doing any internal compression for Content and ParseText: Content 
is a versioned writable, so we can change its implementation and provide 
compatibility code to read older data. The same goes for ParseText.
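
A minimal sketch of what such compatibility code could look like (illustrative 
only - the class name, VERSION constant, and contentType field below are 
hypothetical, not the actual Content implementation):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class VersionedContent implements Writable {

  private static final int VERSION = 2;  // bump whenever the on-disk layout changes

  private String url = "";
  private String contentType = "";       // hypothetical field added in VERSION 2

  public void write(DataOutput out) throws IOException {
    out.writeInt(VERSION);               // always write the current version
    Text.writeString(out, url);
    Text.writeString(out, contentType);
  }

  public void readFields(DataInput in) throws IOException {
    int version = in.readInt();          // version the data was written with
    url = Text.readString(in);
    if (version >= 2) {
      contentType = Text.readString(in); // newer field, absent in old segments
    } else {
      contentType = "";                  // sensible default for old data
    }
  }
}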

> OutputFormat implementations should pass on Progressable
> --------------------------------------------------------
>
>                 Key: NUTCH-392
>                 URL: https://issues.apache.org/jira/browse/NUTCH-392
>             Project: Nutch
>          Issue Type: New Feature
>          Components: fetcher
>            Reporter: Doug Cutting
>            Assignee: Andrzej Bialecki 
>             Fix For: 1.0.0
>
>         Attachments: NUTCH-392.patch, ParseTextBenchmark.java
>
>
> OutputFormat implementations should pass the Progressable they are passed to 
> underlying SequenceFile implementations.  This will keep reduce tasks from 
> timing out when block writes are slow.  This issue depends on 
> http://issues.apache.org/jira/browse/HADOOP-636.
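
For context, the change the issue asks for is roughly the following shape - a 
sketch against the old org.apache.hadoop.mapred API, with Text placeholder 
key/value types and a hypothetical class name:

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.util.Progressable;

// Sketch of an OutputFormat that forwards its Progressable to the SequenceFile.
public class ProgressableOutputFormat extends FileOutputFormat<Text, Text> {

  public RecordWriter<Text, Text> getRecordWriter(FileSystem fs, JobConf job,
      String name, Progressable progress) throws IOException {

    Path file = FileOutputFormat.getTaskOutputPath(job, name);

    // Passing 'progress' lets the writer report progress during slow block
    // writes, so the reduce task is not killed for inactivity.
    final SequenceFile.Writer out = SequenceFile.createWriter(
        fs, job, file, Text.class, Text.class,
        SequenceFile.CompressionType.BLOCK, progress);

    return new RecordWriter<Text, Text>() {
      public void write(Text key, Text value) throws IOException {
        out.append(key, value);
      }
      public void close(Reporter reporter) throws IOException {
        out.close();
      }
    };
  }
}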

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
