[
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=492016&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-492016
]
ASF GitHub Bot logged work on HADOOP-16830:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 28/Sep/20 15:04
Start Date: 28/Sep/20 15:04
Worklog Time Spent: 10m
Work Description: steveloughran commented on pull request #2323:
URL: https://github.com/apache/hadoop/pull/2323#issuecomment-700065247
> For DurationTrackers in IOStatisticsStore(), if we add a tracker in a try
> block, what happens to it in case of failure should be looked at, to avoid
> inaccurate values for the trackers.
I was thinking about failure reporting myself:
- we may want to count failures
- any failure with a longer or shorter duration than successful
operations will skew the results. Example: network failures -> long durations;
auth failures -> short ones.
At the same time, try-with-resources is nice. What to do?
For each set of duration stats, we'd add counter/mean/min/max statistics for
failures; on a failure, those statistics are updated instead.
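A minimal sketch of that idea, assuming separate success/failure entries keyed by name (class and field names here are hypothetical, not the actual IOStatisticsStore API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: per-operation duration statistics, with failures
// routed to a separate "<name>.failures" entry so they don't skew the
// mean/min/max of successful operations.
public class DurationStatsSketch {
    public static class Stats {
        public long count, sum, max;
        public long min = Long.MAX_VALUE;

        public void update(long millis) {
            count++;
            sum += millis;
            min = Math.min(min, millis);
            max = Math.max(max, millis);
        }

        public double mean() {
            return count == 0 ? 0.0 : (double) sum / count;
        }
    }

    private final Map<String, Stats> stats = new HashMap<>();

    /** Record one duration; failures update the ".failures" variant instead. */
    public void record(String name, long millis, boolean success) {
        String key = success ? name : name + ".failures";
        stats.computeIfAbsent(key, k -> new Stats()).update(millis);
    }

    public Stats get(String name) {
        return stats.getOrDefault(name, new Stats());
    }
}
```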
Issue: how best to record a failure, given we can't get at the
try-with-resources classes in catch or finally? I'd initially thought we could
set it in the catch(), but it'd be out of scope.
1. Pessimistic: assume all attempts are failures; make the last operation in
every try clause set a success flag. Ugly.
1. Move construction out of try-with-resources and use explicit catch
and finally blocks instead. Differently ugly.
Fancy lambda-expression wrapper thing? Doable.
```
object = DurationTrackerFactory.track("statistic", () ->
s3.listObjects());
```
Then inside that wrapper we'd put the code of option #2.
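A sketch of what that wrapper could do internally, with simple counters standing in for the real per-statistic duration trackers (names hypothetical): run the callable, record the failure statistics on exception, rethrow.

```java
import java.io.IOException;

// Hypothetical sketch of the wrapper: time the callable, update failure
// statistics on exception and rethrow, success statistics otherwise.
public class TrackSketch {
    public interface CallableRaisingIOE<T> {
        T apply() throws IOException;
    }

    // Simplified stand-ins for the real per-statistic duration stats.
    public static long successes, failures;

    public static <T> T track(String statistic, CallableRaisingIOE<T> op)
            throws IOException {
        long start = System.nanoTime();
        try {
            T result = op.apply();
            long elapsed = System.nanoTime() - start;
            successes++;  // would update "<statistic>" stats with elapsed
            return result;
        } catch (IOException | RuntimeException e) {
            long elapsed = System.nanoTime() - start;
            failures++;   // would update "<statistic>.failures" instead
            throw e;
        }
    }
}
```

The caller never sees the catch block; the wrapper rethrows, so the failure still propagates as before.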
Fancy curried-function-Haskell-elitism option: the duration tracker takes a
function and returns a new one.
```
FunctionRaisingIOE<A, B> track(String, FunctionRaisingIOE<A, B> inner)
```
you'd get a function back which you could then apply at leisure.
```
DurationTrackerFactory.track("statistic", () ->
s3.listObjects()).apply();
```
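A sketch of the curried form under the same assumptions (simplified counters, hypothetical names): wrap a FunctionRaisingIOE so that every later invocation is timed, with failures recorded separately.

```java
import java.io.IOException;

// Hypothetical sketch of the curried wrapper: returns a new function
// that times each invocation of the inner one.
public class CurriedTrackSketch {
    public interface FunctionRaisingIOE<A, B> {
        B apply(A a) throws IOException;
    }

    // Simplified stand-ins for the real per-statistic duration stats.
    public static long successes, failures;

    public static <A, B> FunctionRaisingIOE<A, B> track(
            String statistic, FunctionRaisingIOE<A, B> inner) {
        return a -> {
            long start = System.nanoTime();
            try {
                B result = inner.apply(a);
                successes++;  // would update "<statistic>" with elapsed time
                return result;
            } catch (IOException | RuntimeException e) {
                failures++;   // would update "<statistic>.failures" instead
                throw e;
            }
        };
    }
}
```

The returned function can be stored and applied repeatedly; each application updates the statistics.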
Maybe worth doing both. I could also look at adding this into the S3A Invoker
code, as on every iteration of a retried operation we'd want the statistic
updated.
At the same time: this gets complex fast. Could we make the design of this a
followup?
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 492016)
Time Spent: 5h 10m (was: 5h)
> Add public IOStatistics API
> ---------------------------
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs, fs/s3
> Affects Versions: 3.3.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Time Spent: 5h 10m
> Remaining Estimate: 0h
>
> Applications like to collect statistics on what specific operations cost,
> by collecting exactly those operations performed during the execution of FS
> API calls by their individual worker threads, and returning these to their
> job driver
> * S3A has a statistics API for some streams, but it's a non-standard one;
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread,
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update the
> thread-local value of the instigator thread
> My initial PoC doesn't address that issue, but it shows what I'm thinking of
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]