[ https://issues.apache.org/jira/browse/HADOOP-14973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16222441#comment-16222441 ]
Sean Mackrory commented on HADOOP-14973:
----------------------------------------
Yeah, let me take a closer look at how well metrics2 would work for what I'm
trying to do. Essentially I've gotten numerous requests for cluster-wide data
on S3, to answer questions like "how much are we even using S3?" and "how has
S3 performance been in general, not just in the workload I just ran?". Compute
engines can, should, and I'm sure eventually will get better about retrieving
these metrics and storing them in the job context as well, and I agree that's
the best solution for job owners looking at this. This is all from more of a
cluster administrator's perspective: aggregating an S3-specific metrics log
from 20 processes is a whole lot better than getting every user of every
framework to edit every job to retrieve this.
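To make that concrete, here's a very rough sketch of the kind of per-process
dump I have in mind (not a patch; the class and logger names are made up, and
it only leans on the existing StorageStatistics / GlobalStorageStatistics
API): write every counter to a dedicated logger at a configurable interval and
once more on close(), and let the admin's ordinary log aggregation handle the
rest.

import java.util.Iterator;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.fs.GlobalStorageStatistics;
import org.apache.hadoop.fs.StorageStatistics;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: periodically dumps every registered StorageStatistics
// instance (s3a's included) to a dedicated logger so that plain log
// aggregation can collect the numbers cluster-wide. Class and logger
// names are hypothetical.
public class StorageStatisticsLogger {
  private static final Logger LOG =
      LoggerFactory.getLogger("org.apache.hadoop.fs.s3a.statistics");

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  // Start dumping at a fixed, configurable interval (in seconds).
  public void start(long intervalSeconds) {
    scheduler.scheduleAtFixedRate(this::dump,
        intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
  }

  // One log line per counter: statistics source, counter name, value.
  public void dump() {
    Iterator<StorageStatistics> sources =
        GlobalStorageStatistics.INSTANCE.iterator();
    while (sources.hasNext()) {
      StorageStatistics stats = sources.next();
      Iterator<StorageStatistics.LongStatistic> counters =
          stats.getLongStatistics();
      while (counters.hasNext()) {
        StorageStatistics.LongStatistic c = counters.next();
        LOG.info("{} {}={}", stats.getName(), c.getName(), c.getValue());
      }
    }
  }

  // Final dump plus shutdown, intended to be hooked into close().
  public void stop() {
    dump();
    scheduler.shutdown();
  }
}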
> [s3a] Log StorageStatistics
> ---------------------------
>
> Key: HADOOP-14973
> URL: https://issues.apache.org/jira/browse/HADOOP-14973
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.0.0-beta1, 2.8.1
> Reporter: Sean Mackrory
> Assignee: Sean Mackrory
>
> S3A is currently storing much more detailed metrics via StorageStatistics
> than are logged in a MapReduce job. Eventually, it would be nice to get
> Spark, MapReduce and other workloads to retrieve and store these metrics, but
> it may be some time before they all do that. I'd like to consider having S3A
> publish the metrics itself in some form. This is tricky, as S3A has no daemon
> but lives inside various other processes.
> Perhaps writing to a log file at some configurable interval and on close()
> would be the best we could do. Other ideas would be welcome.