[ https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15543699#comment-15543699 ]

churro morales edited comment on HADOOP-13578 at 10/3/16 11:10 PM:
-------------------------------------------------------------------

Ran the MapReduce jobs with:
{noformat}
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar wordcount \
  -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec \
  -Dmapreduce.map.output.compress=true \
  -Dmapreduce.output.fileoutputformat.compress=true \
  -Dmapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec \
  wcin wcout-zst

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar wordcount \
  -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.ZStandardCodec \
  wcout-zst wcout-zst2
{noformat}
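For reference, the same flags can also be set programmatically in a driver. This is only a minimal sketch: it assumes the codec class name org.apache.hadoop.io.compress.ZStandardCodec from this patch and leaves the mapper/reducer setup to whatever job is being run.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ZstdJobDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Same properties as the -D flags above; the codec class name assumes
    // the ZStandardCodec added by this patch is on the classpath.
    conf.setBoolean("mapreduce.map.output.compress", true);
    conf.set("mapreduce.map.output.compress.codec",
        "org.apache.hadoop.io.compress.ZStandardCodec");
    conf.setBoolean("mapreduce.output.fileoutputformat.compress", true);
    conf.set("mapreduce.output.fileoutputformat.compress.codec",
        "org.apache.hadoop.io.compress.ZStandardCodec");

    // Mapper/Reducer setup omitted; the wordcount example's classes (or any
    // identity job) can be plugged in here.
    Job job = Job.getInstance(conf, "wordcount-zstd");
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
{code}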

* Fixed the warnings in ZStandardDecompressor.c
* Updated BUILDING.txt
* Used the IO_COMPRESSION_CODEC_ZSTD_LEVEL_DEFAULT constant and fixed the default compression level (see the sketch after this list)
* Sorted out the TODO for the compression overhead
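A minimal sketch of what I mean by the level handling; the key string and default value below are assumptions modeled on the existing codec configuration keys, not necessarily the exact values in the patch:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ZstdLevelExample {
  // Hypothetical names mirroring the CommonConfigurationKeys style.
  static final String IO_COMPRESSION_CODEC_ZSTD_LEVEL_KEY =
      "io.compression.codec.zstd.level";
  static final int IO_COMPRESSION_CODEC_ZSTD_LEVEL_DEFAULT = 3;

  /** Resolve the zstd compression level, falling back to the constant. */
  static int getCompressionLevel(Configuration conf) {
    return conf.getInt(IO_COMPRESSION_CODEC_ZSTD_LEVEL_KEY,
        IO_COMPRESSION_CODEC_ZSTD_LEVEL_DEFAULT);
  }
}
{code}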

[~jlowe] what do you think about adding a test that goes through all the codecs and runs the M/R job you used as your example on a MiniMRCluster? Would that be worthwhile? A rough sketch of what I have in mind follows.
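This is only a sketch; the mini-cluster helper, codec list, and input setup are assumptions, and the real test would reuse whatever harness fits best:

{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.MiniMRClientCluster;
import org.apache.hadoop.mapred.MiniMRClientClusterFactory;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class TestCodecsWithMiniMRCluster {

  // Codecs to exercise; ZStandardCodec assumes the class added by this patch.
  private static final List<String> CODECS = Arrays.asList(
      "org.apache.hadoop.io.compress.DefaultCodec",
      "org.apache.hadoop.io.compress.GzipCodec",
      "org.apache.hadoop.io.compress.ZStandardCodec");

  @Test
  public void testJobWithEachCodec() throws Exception {
    MiniMRClientCluster cluster = MiniMRClientClusterFactory.create(
        TestCodecsWithMiniMRCluster.class, 1, new Configuration());
    try {
      int i = 0;
      for (String codec : CODECS) {
        Configuration conf = new Configuration(cluster.getConfig());
        conf.setBoolean("mapreduce.map.output.compress", true);
        conf.set("mapreduce.map.output.compress.codec", codec);
        conf.setBoolean("mapreduce.output.fileoutputformat.compress", true);
        conf.set("mapreduce.output.fileoutputformat.compress.codec", codec);

        // Tiny input so each iteration runs a real map and reduce.
        FileSystem fs = FileSystem.get(conf);
        Path in = new Path("/codec-test/in-" + i);
        Path out = new Path("/codec-test/out-" + i++);
        try (FSDataOutputStream os = fs.create(new Path(in, "part0"))) {
          os.writeBytes("hello codec hello zstd\n");
        }

        // Identity map/reduce is enough to push data through the codec paths.
        Job job = Job.getInstance(conf, "codec-" + codec);
        FileInputFormat.addInputPath(job, in);
        FileOutputFormat.setOutputPath(job, out);
        assertTrue("Job failed for codec " + codec, job.waitForCompletion(true));
      }
    } finally {
      cluster.stop();
    }
  }
}
{code}

If a full job per codec is too heavyweight, the same loop could instead just round-trip a buffer through each codec's compressor and decompressor.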





> Add Codec for ZStandard Compression
> -----------------------------------
>
>                 Key: HADOOP-13578
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13578
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: churro morales
>            Assignee: churro morales
>         Attachments: HADOOP-13578.patch, HADOOP-13578.v1.patch
>
>
> ZStandard (https://github.com/facebook/zstd) has now been used in production at Facebook for 6 months, and v1.0 was recently released. Create a codec for this library.


