[ https://issues.apache.org/jira/browse/NUTCH-506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12512011 ]
Doğacan Güney commented on NUTCH-506:
-------------------------------------

For some reason, crawl_generate is not compressed, even though crawldb, crawl_parse and crawl_fetch are compressed. I tried "readseg -dump"ing an older 2000-url segment with this patch, and the dump worked without problems.

> Nutch should delegate compression to Hadoop
> -------------------------------------------
>
>                 Key: NUTCH-506
>                 URL: https://issues.apache.org/jira/browse/NUTCH-506
>             Project: Nutch
>          Issue Type: Improvement
>            Reporter: Doğacan Güney
>             Fix For: 1.0.0
>
>         Attachments: compress.patch, NUTCH-506.patch
>
>
> Some data structures within Nutch (such as Content and ParseText) handle their own compression. We should delegate all compression to Hadoop.
> Also, Nutch should respect the io.seqfile.compression.type setting. Currently, even if io.seqfile.compression.type is BLOCK or RECORD, Nutch overrides it for some structures and sets it to NONE. (However, IMO, ParseText should always be compressed as RECORD for performance reasons.)
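For illustration, here is a minimal sketch of what delegating compression to Hadoop could look like when opening segment writers. The SegmentWriters helper and its method names are hypothetical and not taken from the attached patches; SequenceFile.getCompressionType(conf) is the Hadoop call that reads io.seqfile.compression.type, and ParseText is pinned to RECORD compression as the description suggests.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.io.Text;
import org.apache.nutch.crawl.CrawlDatum;
import org.apache.nutch.parse.ParseText;

public class SegmentWriters {

  // Honor io.seqfile.compression.type instead of hard-coding NONE:
  // SequenceFile.getCompressionType(conf) reads that property.
  public static SequenceFile.Writer openCrawlWriter(FileSystem fs,
      Configuration conf, Path part) throws IOException {
    CompressionType type = SequenceFile.getCompressionType(conf);
    return SequenceFile.createWriter(fs, conf, part,
        Text.class, CrawlDatum.class, type);
  }

  // ParseText is always written RECORD-compressed, regardless of the
  // global setting, per the suggestion in the issue description.
  public static MapFile.Writer openParseTextWriter(FileSystem fs,
      Configuration conf, String dir) throws IOException {
    return new MapFile.Writer(conf, fs, dir,
        Text.class, ParseText.class, CompressionType.RECORD);
  }
}

With writers opened this way, setting io.seqfile.compression.type to BLOCK or RECORD in hadoop-site.xml would take effect for all segment parts except ParseText, which stays RECORD-compressed.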