[ 
https://issues.apache.org/jira/browse/HADOOP-17461?focusedWorklogId=791466&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-791466
 ]

ASF GitHub Bot logged work on HADOOP-17461:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 15/Jul/22 14:58
            Start Date: 15/Jul/22 14:58
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on PR #4566:
URL: https://github.com/apache/hadoop/pull/4566#issuecomment-1185628638

   
   Here are the stats for a magic run where we also measure list calls.
   
   The fact that hsync is being called shows that the terasort output is also
   being picked up; that is not intentional.
   
   This raises a big issue: should these committers reset the context before
   task/job commit?
   
   ```
    commit.AbstractCommitITest (AbstractCommitITest.java:printStatistics(118)) 
- Aggregate job statistics counters=((action_executor_acquired=7)
   (committer_bytes_committed=800042)
   (committer_commit_job=10)
   (committer_commits_completed=14)
   (committer_jobs_completed=10)
   (committer_magic_marker_put=7)
   (committer_materialize_file=14)
   (object_list_request=17)
   (object_multipart_initiated=7)
   (op_hsync=6)
   (stream_write_block_uploads=7)
   (stream_write_bytes=400021)
   (stream_write_total_data=400021));
   
   gauges=();
   
   minimums=((action_executor_acquired.min=0)
   (committer_commit_job.min=301)
   (committer_magic_marker_put.min=70)
   (committer_materialize_file.min=129)
   (object_list_request.min=64)
   (object_multipart_initiated.min=74));
   
   maximums=((action_executor_acquired.max=11)
   (committer_commit_job.max=643)
   (committer_magic_marker_put.max=127)
   (committer_materialize_file.max=478)
   (object_list_request.max=319)
   (object_multipart_initiated.max=97));
   
   means=((action_executor_acquired.mean=(samples=14, sum=65, mean=4.6429))
   (committer_commit_job.mean=(samples=10, sum=4974, mean=497.4000))
   (committer_magic_marker_put.mean=(samples=7, sum=641, mean=91.5714))
   (committer_materialize_file.mean=(samples=14, sum=3171, mean=226.5000))
   (object_list_request.mean=(samples=17, sum=2098, mean=123.4118))
   (object_multipart_initiated.mean=(samples=7, sum=598, mean=85.4286)));
   ```
   
   This is going to overreport, but I don't think the committers should be
   interfering with the context, which should really be managed by the application.
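   
   A minimal sketch of what such a reset could look like, assuming the
   IOStatisticsContext API in this PR ends up exposing
   getCurrentIOStatisticsContext(), snapshot() and reset(); illustrative only,
   not part of the patch.
   
   ```java
   // Illustrative only: assumes IOStatisticsContext exposes
   // getCurrentIOStatisticsContext(), snapshot() and reset() as proposed here.
   import org.apache.hadoop.fs.statistics.IOStatisticsContext;
   import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot;
   
   public class CommitContextReset {
   
     /**
      * Capture what the current thread has accumulated so far, then reset
      * the context so that committer IO is reported in a clean window rather
      * than mixed in with the task's (e.g. terasort) output statistics.
      */
     public static IOStatisticsSnapshot snapshotAndReset() {
       IOStatisticsContext ctx =
           IOStatisticsContext.getCurrentIOStatisticsContext();
       IOStatisticsSnapshot soFar = ctx.snapshot();   // stats before commit
       ctx.reset();                                   // start a new window
       return soFar;
     }
   }
   ```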




Issue Time Tracking
-------------------

    Worklog Id:     (was: 791466)
    Time Spent: 4h 40m  (was: 4.5h)

> Add thread-level IOStatistics Context
> -------------------------------------
>
>                 Key: HADOOP-17461
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17461
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs, fs/azure, fs/s3
>    Affects Versions: 3.3.1
>            Reporter: Steve Loughran
>            Assignee: Mehakmeet Singh
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> For effective reporting of the iostatistics of individual worker threads, we 
> need a thread-level context which IO components update.
> * this context needs to be passed into background threads performing work on
> behalf of a task.
> * IO components (streams, iterators, filesystems) need to update this context's
> statistics as they perform work
> * without double counting anything.
> I imagine a ThreadLocal IOStatisticsContext which will be updated in the
> FileSystem API calls. This context MUST be passed into the background threads
> used by a task, so that IO is correctly aggregated.
> I don't want streams, listIterators &c to do the updating as there is more 
> risk of double counting. However, we need to see their statistics if we want 
> to know things like "bytes discarded in backwards seeks". And I don't want to 
> be updating a shared context object on every read() call.
> If all we want is store IO (HEAD, GET, DELETE, list performance, etc.), then
> the FS is sufficient.
> If we do want the stream-specific detail, then I propose
> * caching the context in the constructor
> * updating it only in close() or unbuffer() (as we do from S3AInputStream to
> S3AInstrumentation)
> * excluding those we know the FS already collects.
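
As a rough illustration of the "passed into background threads" point above,
here is a sketch of handing the owning thread's context to a worker thread; it
assumes the eventual API exposes getCurrentIOStatisticsContext() and
setThreadIOStatisticsContext(), which may differ from what the patch finally
ships.

```java
// Sketch only: getCurrentIOStatisticsContext() and
// setThreadIOStatisticsContext() are assumed from the proposal text and
// may not match the final API.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.fs.statistics.IOStatisticsContext;

public class TaskContextPropagation {

  public static void main(String[] args) throws Exception {
    // context of the task thread which owns the work
    final IOStatisticsContext taskContext =
        IOStatisticsContext.getCurrentIOStatisticsContext();

    ExecutorService pool = Executors.newFixedThreadPool(2);
    try {
      pool.submit(() -> {
        // bind the worker thread to the task's context so its IO is
        // aggregated with the task rather than with the pool thread
        IOStatisticsContext.setThreadIOStatisticsContext(taskContext);
        // ... perform filesystem IO on behalf of the task here ...
      }).get();
    } finally {
      pool.shutdown();
    }
  }
}
```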



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
