[ https://issues.apache.org/jira/browse/NUTCH-392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12508812 ]

Doğacan Güney commented on NUTCH-392:
-------------------------------------

OK, I have done a bit of testing on compression but I'm stuck. Here it is:

* I changed Content to be a regular Writable instead of a CompressedWritable 
and turned on BLOCK compression. The results were pretty impressive: content 
size went down from ~1GB to ~500MB. Unfortunately, I haven't figured out how 
we can change Content in a backward-compatible way. Reading the first byte as 
a version won't work, because the first byte is not a version; the first 
thing written is the size of the compressed data as an int.
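The incompatibility can be seen with a minimal, self-contained sketch (class and method names here are illustrative, not from the Nutch code): in the old CompressedWritable layout the first serialized field is the compressed length as a 4-byte int, so the leading byte of any small record is 0, and a new format that put a version byte first could not be told apart reliably.

```java
import java.io.*;

public class VersionByteDemo {
    // What the first serialized byte looks like in the old CompressedWritable
    // layout, where the compressed length (an int) precedes everything else.
    static byte firstByte(int compressedLength) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(compressedLength);   // old format: 4-byte length first
        // ... compressed payload would follow here ...
        return buf.toByteArray()[0];
    }

    public static void main(String[] args) throws IOException {
        // For any payload under 16 MB the leading byte is 0, so a new format
        // whose version byte is 0 collides with every small old record.
        System.out.println("first byte for length 517: " + firstByte(517)); // -> 0
    }
}
```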

* This is where it gets strange. I was trying to test the performance impact of 
BLOCK compression (when generating summaries). I fetched a sample 250,000-URL 
segment (a subset of dmoz). Then I made a small modification to 
ParseOutputFormat so that it outputs parse_text in all three compression 
formats ( http://www.ceng.metu.edu.tr/~e1345172/comp_parse.patch ). After 
parsing, the segment looks like this:

828M    crawl/segments/20070626163143/content
35M     crawl/segments/20070626163143/crawl_fetch
23M     crawl/segments/20070626163143/crawl_generate
345M    crawl/segments/20070626163143/crawl_parse
196M    crawl/segments/20070626163143/parse_data
244M    crawl/segments/20070626163143/parse_text # NONE
232M    crawl/segments/20070626163143/parse_text_block # BLOCK
246M    crawl/segments/20070626163143/parse_text_record # RECORD

Not only is parse_text_record larger than parse_text, and parse_text_block 
only slightly smaller, but crawl_parse is larger than any of them!
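One possible factor (a sketch, not a diagnosis): per-record DEFLATE carries fixed overhead (zlib header plus checksum) that can swamp small values such as individual parse_text entries, while batching many values amortizes it and exploits cross-record redundancy. A minimal demonstration with plain java.util.zip (not the Hadoop codec path; the single-buffer "batch" here is only an analogy for RECORD vs. BLOCK):

```java
import java.util.Arrays;
import java.util.zip.Deflater;

public class DeflateOverheadDemo {
    // Size of data after DEFLATE (zlib) compression, as RECORD compression
    // applies it to each value individually.
    static int deflatedSize(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        byte[] out = new byte[Math.max(64, data.length)];
        int n = 0;
        while (!d.finished()) {
            if (n == out.length) out = Arrays.copyOf(out, out.length * 2);
            n += d.deflate(out, n, out.length - n);
        }
        d.end();
        return n;
    }

    public static void main(String[] args) {
        // A short, non-redundant value: fixed overhead makes the "compressed"
        // record come out larger than the raw one.
        byte[] small = "abcdefghijklmnopqrstuvwxyz0123456789".getBytes();
        System.out.println("small: raw=" + small.length
                + " deflated=" + deflatedSize(small));

        // Many such values batched together: redundancy across records is
        // exploited and the ratio improves dramatically.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) sb.append("abcdefghijklmnopqrstuvwxyz0123456789");
        byte[] big = sb.toString().getBytes();
        System.out.println("big: raw=" + big.length
                + " deflated=" + deflatedSize(big));
    }
}
```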

I probably messed up somewhere and I can't see it. Any help would be welcome.

> OutputFormat implementations should pass on Progressable
> --------------------------------------------------------
>
>                 Key: NUTCH-392
>                 URL: https://issues.apache.org/jira/browse/NUTCH-392
>             Project: Nutch
>          Issue Type: New Feature
>          Components: fetcher
>            Reporter: Doug Cutting
>            Assignee: Andrzej Bialecki 
>             Fix For: 1.0.0
>
>         Attachments: NUTCH-392.patch
>
>
> OutputFormat implementations should pass the Progressable they are passed to 
> underlying SequenceFile implementations.  This will keep reduce tasks from 
> timing out when block writes are slow.  This issue depends on 
> http://issues.apache.org/jira/browse/HADOOP-636.
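The behavior the issue asks for can be sketched with a toy Progressable (all names below are illustrative; the real interface is org.apache.hadoop.util.Progressable and the real writers live in SequenceFile): the writer calls progress() while performing slow block writes so the framework knows the reduce task is still alive.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ProgressDemo {
    // Mirrors org.apache.hadoop.util.Progressable for this sketch.
    interface Progressable { void progress(); }

    // A toy writer that reports progress once per block written, so slow
    // writes don't look like a hung task to the framework.
    static int writeBlocks(int blocks, Progressable reporter) {
        int bytes = 0;
        for (int i = 0; i < blocks; i++) {
            bytes += 64 * 1024;   // pretend to write a 64 KB block (slow in real life)
            reporter.progress();  // keep the reduce task from timing out
        }
        return bytes;
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        int written = writeBlocks(5, calls::incrementAndGet);
        System.out.println("progress calls = " + calls.get() + ", bytes = " + written);
    }
}
```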

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


_______________________________________________
Nutch-developers mailing list
Nutch-developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nutch-developers
