[ https://issues.apache.org/jira/browse/HADOOP-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Arun C Murthy updated HADOOP-1926:
----------------------------------
Attachment: HADOOP-1926_2_20071002.patch
Fixed the javadoc oversight.
> Design/implement a set of compression benchmarks for the map-reduce framework
> -----------------------------------------------------------------------------
>
> Key: HADOOP-1926
> URL: https://issues.apache.org/jira/browse/HADOOP-1926
> Project: Hadoop
> Issue Type: Improvement
> Components: mapred
> Reporter: Arun C Murthy
> Assignee: Arun C Murthy
> Fix For: 0.15.0
>
> Attachments: HADOOP-1926_1_20071002.patch,
> HADOOP-1926_2_20071002.patch
>
>
> It would be nice to benchmark the various compression codecs available for
> use in Hadoop (existing codecs like zlib and lzo, and in future bzip2 etc.)
> and run these benchmarks along with our nightlies or weeklies.
> Here are some steps:
> a) Fix HADOOP-1851 (Map output compression codec cannot be set independently
> of job output compression codec).
> b) Implement a random-text-writer along the lines of examples/randomwriter to
> generate large amounts of synthetic textual data for use in sort. One way to
> do this is to pick words at random from {{/usr/share/dict/words}} until we
> have emitted enough bytes per map. To be safe, we could store a snapshot of
> those words as an array of Strings in examples/RandomTextWriter.java (see the
> sketch after this list).
> c) Take a dump of Wikipedia (http://download.wikimedia.org/enwiki/) and/or
> the ebooks from Project Gutenberg (http://www.gutenberg.org/MIRRORS.ALL) and
> use them as non-synthetic data to run sort/wordcount against.
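>
> For b), here is a minimal sketch of the per-map generation loop, assuming a
> hypothetical WORDS array snapshotted from {{/usr/share/dict/words}} and an
> illustrative bytes-per-map target; the real mapper would emit each line via
> output.collect() rather than just counting it:
> {code}
> import java.util.Random;
>
> // Hypothetical sketch of a random-text-writer's per-map loop; WORDS would be
> // a static snapshot of /usr/share/dict/words kept in RandomTextWriter.java.
> public class RandomTextSketch {
>   private static final String[] WORDS = { "alpha", "bravo", "charlie" };
>   private static final Random RANDOM = new Random();
>
>   // Build a "sentence" of numWords randomly chosen words.
>   static String randomSentence(int numWords) {
>     StringBuilder sb = new StringBuilder();
>     for (int i = 0; i < numWords; i++) {
>       sb.append(WORDS[RANDOM.nextInt(WORDS.length)]).append(' ');
>     }
>     return sb.toString();
>   }
>
>   public static void main(String[] args) {
>     long bytesPerMap = 64L * 1024 * 1024;      // e.g. 64MB of text per map
>     long written = 0;
>     while (written < bytesPerMap) {
>       String line = randomSentence(100);       // a real mapper would call
>       written += line.length();                // output.collect(key, value)
>     }
>     System.out.println("generated " + written + " bytes");
>   }
> }
> {code}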
> For both b) and c) we should set up nightly/weekly benchmark runs with
> different codecs for the reduce-outputs and the map-outputs (shuffle), and
> track each combination; a configuration sketch follows.
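>
> As a rough illustration of one such combination (lzo for the map outputs,
> zlib for the reduce outputs), and assuming the map-output setters HADOOP-1851
> adds and the mapred.output.* property names end up roughly as shown, the
> per-run configuration could look like:
> {code}
> import org.apache.hadoop.io.compress.CompressionCodec;
> import org.apache.hadoop.io.compress.DefaultCodec;   // zlib
> import org.apache.hadoop.io.compress.LzoCodec;
> import org.apache.hadoop.mapred.JobConf;
>
> JobConf conf = new JobConf();
>
> // Intermediate (shuffle) compression, independent of the job output codec.
> conf.setCompressMapOutput(true);
> conf.setMapOutputCompressorClass(LzoCodec.class);
>
> // Final reduce-output compression (property names illustrative).
> conf.setBoolean("mapred.output.compress", true);
> conf.setClass("mapred.output.compression.codec",
>               DefaultCodec.class, CompressionCodec.class);
> {code}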
> Thoughts?