[
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260848#comment-15260848
]
Mingliang Liu commented on HADOOP-13065:
----------------------------------------
Per offline discussion, moved from the HDFS project to the Hadoop Common project.
> add per-operation stats to FileSystem.Statistics
> ------------------------------------------------
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Reporter: Ram Venkatesh
> Assignee: Mingliang Liu
> Attachments: HDFS-10175.000.patch, HDFS-10175.001.patch,
> HDFS-10175.002.patch, HDFS-10175.003.patch, HDFS-10175.004.patch,
> HDFS-10175.005.patch, HDFS-10175.006.patch, TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks.
> The logic within DfsClient that maps operations to these counters can be
> confusing; for instance, mkdirs counts as a writeOp.
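
For context, here is a minimal sketch of reading the existing aggregate counters, assuming the Hadoop 2.x FileSystem.Statistics API (getAllStatistics() and the per-counter getters):

{code:java}
import org.apache.hadoop.fs.FileSystem;

public class PrintFsStats {
  public static void main(String[] args) {
    // One Statistics object is registered per FileSystem scheme (hdfs, file, ...).
    for (FileSystem.Statistics stats : FileSystem.getAllStatistics()) {
      System.out.println(stats.getScheme()
          + " bytesRead=" + stats.getBytesRead()
          + " bytesWritten=" + stats.getBytesWritten()
          + " readOps=" + stats.getReadOps()
          + " largeReadOps=" + stats.getLargeReadOps()
          + " writeOps=" + stats.getWriteOps());
    }
  }
}
{code}
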
> Proposed enhancement:
> Add a statistic for each DfsClient operation, including create, append,
> createSymlink, delete, exists, mkdirs, and rename, and expose them as new
> properties on the Statistics object. The operation-specific counters can be
> used for analyzing the load imposed by a particular job on HDFS.
> For example, we can use them to identify jobs that end up creating a large
> number of files.
> Once this information is available in the Statistics object, application
> frameworks like MapReduce can expose it as additional counters to be
> aggregated and recorded as part of the job summary.
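
To make the shape of the proposal concrete, below is a hypothetical sketch of per-operation counters keyed by an operation enum. The class, enum, and member names are illustrative assumptions only and are not taken from the attached HDFS-10175 patches:

{code:java}
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch only; names and structure are illustrative.
public class PerOperationStatistics {

  /** Illustrative subset of DfsClient operations to count individually. */
  public enum OpType {
    CREATE, APPEND, CREATE_SYMLINK, DELETE, EXISTS, MKDIRS, RENAME
  }

  // One atomic counter per operation type.
  private final Map<OpType, AtomicLong> counters = new EnumMap<>(OpType.class);

  public PerOperationStatistics() {
    for (OpType op : OpType.values()) {
      counters.put(op, new AtomicLong(0));
    }
  }

  /** Called each time the client issues the given operation. */
  public void increment(OpType op) {
    counters.get(op).incrementAndGet();
  }

  /** Read a single per-operation counter, e.g. to publish as a job counter. */
  public long get(OpType op) {
    return counters.get(op).get();
  }
}
{code}

Under this sketch, a client wrapper would call increment(OpType.MKDIRS) on each mkdirs call, and a framework like MapReduce could read get(OpType.MKDIRS) when building the job summary.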
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)