[
https://issues.apache.org/jira/browse/HDFS-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704817#comment-16704817
]
Steve Loughran commented on HDFS-14084:
---------------------------------------
We also have the StorageStatistics stats collection, which is intended to
provide an extensible set of stats with a standard set of names across
filestores, plus the option for implementations to add store-specific
statistics of their own.
The Azure stores (wasb, abfs) were the first clients to take the server-side
metrics client side; S3AStorageStatistics pulled that into the
StorageStatistics API.
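For reference, reading those stats back on the client side looks roughly like
the sketch below; it's a minimal example assuming the stock
FileSystem.getStorageStatistics() accessor, and the statistic names you get
back depend entirely on what the store chooses to publish.
{code:java}
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageStatistics;
import org.apache.hadoop.fs.StorageStatistics.LongStatistic;

public class DumpStorageStatistics {
  public static void main(String[] args) throws Exception {
    // Bind to whatever filesystem the supplied path resolves to (hdfs://, s3a://, abfs://, ...).
    FileSystem fs = FileSystem.get(new Path(args[0]).toUri(), new Configuration());
    // Each FileSystem instance publishes its own StorageStatistics; standard
    // names are shared across stores, store-specific ones come as extras.
    StorageStatistics stats = fs.getStorageStatistics();
    for (Iterator<LongStatistic> it = stats.getLongStatistics(); it.hasNext();) {
      LongStatistic s = it.next();
      System.out.println(s.getName() + " = " + s.getValue());
    }
  }
}
{code}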
Why use that instead of implementing your own methods?
# shipping APIs for clients to use. The S3A committers collect these stats and
aggregate them over entire jobs.
# consistent across filesystems
# transitive access through wrapper filesystems
Can I also point you at org.apache.hadoop.fs.s3a.commit.DurationInfo, which
collects & logs the duration of operations without all the copy-and-paste;
it'd be straightforward to do something similar for metrics collection.
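To make that concrete, here's a minimal sketch of the try-with-resources
pattern; it assumes DurationInfo's constructor takes an SLF4J logger plus a
format string and arguments, and that the elapsed time gets logged when the
block closes.
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.commit.DurationInfo;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TimedRename {
  private static final Logger LOG = LoggerFactory.getLogger(TimedRename.class);

  // The duration of everything inside the try block is measured and logged
  // when the DurationInfo is closed; no hand-written timing code is needed.
  void rename(FileSystem fs, Path src, Path dst) throws IOException {
    try (DurationInfo d = new DurationInfo(LOG, "rename %s to %s", src, dst)) {
      fs.rename(src, dst);
    }
  }
}
{code}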
> Need for more stats in DFSClient
> --------------------------------
>
> Key: HDFS-14084
> URL: https://issues.apache.org/jira/browse/HDFS-14084
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 3.0.0
> Reporter: Pranay Singh
> Assignee: Pranay Singh
> Priority: Minor
> Attachments: HDFS-14084.001.patch
>
>
> The usage of HDFS has changed: from being a map-reduce filesystem, it is
> becoming more of a general-purpose filesystem. In most cases the issues are
> with the Namenode, so we have metrics to know the workload or stress on the
> Namenode.
> However, there is a need to collect more statistics for the different
> operations/RPCs in DFSClient, to know which RPC operations are taking a long
> time and how frequently each operation is called. These statistics can be
> exposed to the users of the DFS Client, who can periodically log them or do
> some sort of flow control if responses are slow. This will also help to
> isolate HDFS issues in a mixed environment where, say, Spark, HBase and
> Impala run together on a node. We can check the throughput of different
> operations across clients and isolate problems caused by a noisy neighbor,
> network congestion or a shared JVM.
> We have dealt with several problems from the field for which there is no
> conclusive evidence as to what caused them. If we had metrics or stats in
> DFSClient we would be better equipped to solve such complex problems.
> List of jiras for reference:
> -------------------------
> HADOOP-15538, HADOOP-15530 (client-side deadlock)