[
https://issues.apache.org/jira/browse/HDFS-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15003102#comment-15003102
]
Hudson commented on HDFS-8968:
------------------------------
FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #663 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/663/])
HDFS-8968. Erasure coding: a comprehensive I/O throughput benchmark (zhz: rev 7b00c8e20ee62885097c5e63f110b9eece8ce6b3)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodeBenchmarkThroughput.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ErasureCodeBenchmarkThroughput.java
> Erasure coding: a comprehensive I/O throughput benchmark tool
> -------------------------------------------------------------
>
> Key: HDFS-8968
> URL: https://issues.apache.org/jira/browse/HDFS-8968
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: erasure-coding, test
> Affects Versions: 3.0.0
> Reporter: Kai Zheng
> Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HDFS-8968-HDFS-7285.1.patch,
> HDFS-8968-HDFS-7285.2.patch, HDFS-8968.3.patch, HDFS-8968.4.patch,
> HDFS-8968.5.patch
>
>
> We need a new benchmark tool to measure client write and read throughput,
> covering the following cases and factors:
> * 3-replica or striped layout;
> * write or read, stateful read or positional read;
> * which erasure coder is used;
> * striping cell size;
> * concurrent readers/writers, using processes or threads.
> The tool should be easy to use and should avoid unnecessary impact from the
> local environment, such as local disk I/O.
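
As a rough illustration of the kind of measurement described above, here is a minimal sketch of a client-side throughput probe in Java. It is not the ErasureCodeBenchmarkThroughput tool committed in this issue; the class name, test path, buffer size, and data size below are illustrative assumptions, and it only exercises three of the listed cases (sequential write, stateful read, positional read) against whatever layout the target directory's erasure coding policy dictates.

// Illustrative sketch only -- not the actual ErasureCodeBenchmarkThroughput
// tool from HDFS-8968. Assumes fs.defaultFS points at the target cluster.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SimpleThroughputBench {
  private static final int BUF_SIZE = 1024 * 1024;          // 1 MB I/O buffer
  private static final long DATA_SIZE = 256L * 1024 * 1024; // 256 MB per file

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("hdfs:///"), conf);
    // Hypothetical path; whether writes are 3-replica or striped depends on
    // the erasure coding policy set on the parent directory.
    Path file = new Path("/benchmarks/ec-bench/test-file");
    byte[] buf = new byte[BUF_SIZE];

    // Write phase: sequential write, timed end to end including close.
    long start = System.nanoTime();
    try (FSDataOutputStream out = fs.create(file, true)) {
      for (long written = 0; written < DATA_SIZE; written += BUF_SIZE) {
        out.write(buf);
      }
    }
    report("write", DATA_SIZE, System.nanoTime() - start);

    // Stateful read: one open stream, read sequentially to EOF.
    start = System.nanoTime();
    try (FSDataInputStream in = fs.open(file)) {
      while (in.read(buf) > 0) {
        // data discarded; only throughput is of interest
      }
    }
    report("stateful read", DATA_SIZE, System.nanoTime() - start);

    // Positional read: explicit offsets via pread, no stream position state.
    start = System.nanoTime();
    try (FSDataInputStream in = fs.open(file)) {
      for (long pos = 0; pos < DATA_SIZE; pos += BUF_SIZE) {
        in.readFully(pos, buf, 0, BUF_SIZE);
      }
    }
    report("positional read", DATA_SIZE, System.nanoTime() - start);

    fs.delete(file, false);
  }

  private static void report(String op, long bytes, long nanos) {
    double mbPerSec = (bytes / (1024.0 * 1024.0)) / (nanos / 1e9);
    System.out.printf("%s: %.1f MB/s%n", op, mbPerSec);
  }
}

Because the probe only streams data between the client and the DataNodes (no local files are written), it sidesteps local-disk effects, which matches the "avoid unnecessary local environment impact" goal; concurrency and coder/cell-size selection would still need to be layered on top.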
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)