[
https://issues.apache.org/jira/browse/HADOOP-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran resolved HADOOP-17461.
-------------------------------------
Fix Version/s: 3.3.9
Resolution: Fixed
> Add thread-level IOStatistics Context
> -------------------------------------
>
> Key: HADOOP-17461
> URL: https://issues.apache.org/jira/browse/HADOOP-17461
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs, fs/azure, fs/s3
> Affects Versions: 3.3.1
> Reporter: Steve Loughran
> Assignee: Mehakmeet Singh
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.9
>
> Time Spent: 11h 20m
> Remaining Estimate: 0h
>
> For effective reporting of the iostatistics of individual worker threads, we
> need a thread-level context which IO components update.
> * this context needs to be passed into background threads performing work on
> behalf of a task.
> * IO components (streams, iterators, filesystems) need to update this
> context's statistics as they perform work
> * without double counting anything.
> I imagine a ThreadLocal IOStatisticsContext which will be updated in the
> FileSystem API calls. This context MUST be passed into the background threads
> used by a task, so that IO is correctly aggregated.
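The thread-local context plus explicit hand-off to worker threads could look like the following minimal sketch. All names here (IOStatsContext, setCurrent, etc.) are illustrative assumptions, not the actual Hadoop API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical thread-level IO statistics context: each thread gets its own
// by default, and a task can install its context in a worker thread so that
// background IO aggregates into the task's statistics.
final class IOStatsContext {
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    private static final ThreadLocal<IOStatsContext> CURRENT =
        ThreadLocal.withInitial(IOStatsContext::new);

    static IOStatsContext current() { return CURRENT.get(); }

    // Called by a background thread before doing work on behalf of a task.
    static void setCurrent(IOStatsContext ctx) { CURRENT.set(ctx); }

    void increment(String key, long value) {
        counters.computeIfAbsent(key, k -> new LongAdder()).add(value);
    }

    long lookup(String key) {
        LongAdder a = counters.get(key);
        return a == null ? 0 : a.sum();
    }
}

public class ContextDemo {
    public static void main(String[] args) throws InterruptedException {
        IOStatsContext taskCtx = IOStatsContext.current();
        taskCtx.increment("stream_read_bytes", 100);

        // Background thread performing work on behalf of the task: it installs
        // the task's context first, so its IO is aggregated with the task's.
        Thread worker = new Thread(() -> {
            IOStatsContext.setCurrent(taskCtx);
            IOStatsContext.current().increment("stream_read_bytes", 50);
        });
        worker.start();
        worker.join();

        System.out.println(taskCtx.lookup("stream_read_bytes")); // prints 150
    }
}
```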
> I don't want streams, listIterators &c to do the updating as there is more
> risk of double counting. However, we need to see their statistics if we want
> to know things like "bytes discarded in backwards seeks". And I don't want to
> be updating a shared context object on every read() call.
> If all we want is store IO (HEAD, GET, DELETE, list performance etc) then the
> FS is sufficient.
> If we do want the stream-specific detail, then I propose
> * caching the context in the constructor
> * updating it only in close() or unbuffer() (as we do from S3AInputStream to
> S3AInstrumentation)
> * excluding those we know the FS already collects.
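The constructor-caching and merge-on-close pattern above can be sketched as below. This is an assumption-laden illustration, not the real S3AInputStream code; an AtomicLong stands in for the shared context, and draining the local tally on merge is what prevents double counting when both unbuffer() and close() run:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stream that caches the shared context in its constructor,
// tallies bytes in a plain field per read(), and merges into the shared
// context only in unbuffer()/close().
final class StatisticsTrackingStream implements AutoCloseable {
    private final AtomicLong sharedBytesRead;  // stands in for the thread context
    private long localBytesRead;               // cheap per-stream tally

    StatisticsTrackingStream(AtomicLong sharedBytesRead) {
        this.sharedBytesRead = sharedBytesRead; // cached at construction time
    }

    int read(byte[] buf) {
        // ... the real read would happen here; we only track the count.
        localBytesRead += buf.length;           // no shared-state update per read()
        return buf.length;
    }

    void unbuffer() { drain(); }

    @Override
    public void close() { drain(); }

    private void drain() {
        sharedBytesRead.addAndGet(localBytesRead);
        localBytesRead = 0;                     // drained, so a later drain adds nothing
    }
}

public class StreamDemo {
    public static void main(String[] args) {
        AtomicLong context = new AtomicLong();
        try (StatisticsTrackingStream s = new StatisticsTrackingStream(context)) {
            s.read(new byte[64]);
            s.read(new byte[64]);
            s.unbuffer();                       // merges 128 bytes
            s.read(new byte[32]);
        }                                       // close() merges the remaining 32
        System.out.println(context.get());      // prints 160, counted exactly once
    }
}
```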
--
This message was sent by Atlassian Jira
(v8.20.10#820010)