[ https://issues.apache.org/jira/browse/HADOOP-18106?focusedWorklogId=782523&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-782523 ]

ASF GitHub Bot logged work on HADOOP-18106:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 17/Jun/22 19:25
            Start Date: 17/Jun/22 19:25
    Worklog Time Spent: 10m 
      Work Description: mukund-thakur commented on PR #4427:
URL: https://github.com/apache/hadoop/pull/4427#issuecomment-1159168468

   > +1. good to go
   
   Thanks @steveloughran. Note, though, that the rebased patch is 
https://github.com/apache/hadoop/pull/4445




Issue Time Tracking
-------------------

    Worklog Id:     (was: 782523)
    Time Spent: 2.5h  (was: 2h 20m)

> Handle memory fragmentation in S3 Vectored IO implementation.
> -------------------------------------------------------------
>
>                 Key: HADOOP-18106
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18106
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>            Reporter: Mukund Thakur
>            Assignee: Mukund Thakur
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Since the S3AInputStream implementation of the vectored IO API merges 
> ranges, it can lead to memory fragmentation. An example (see the sketch 
> below):
>  
> Suppose a client requests 3 ranges: 0-500, 700-1000 and 1200-1500.
> Because of merging, all of these ranges are combined into one, so we 
> allocate a single large byte buffer covering 0-1500 but return sliced byte 
> buffers for the requested ranges.
> Once the client has finished reading all the ranges, it can only free the 
> memory of the requested ranges; the memory of the gaps (here 500-700 and 
> 1000-1200) is never released.
>  
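
To make the retention behaviour concrete, below is a minimal, self-contained 
Java sketch (not the actual S3AInputStream code) of why the gaps cannot be 
reclaimed: the slices handed back for the requested ranges all share the 
single backing array of the merged buffer, so that array stays reachable 
until the client drops the last slice.

    import java.nio.ByteBuffer;

    public class SliceRetentionDemo {

        // Zero-copy slice [start, end) of buf; the slice shares buf's
        // backing array rather than owning its own memory.
        static ByteBuffer sliceRange(ByteBuffer buf, int start, int end) {
            ByteBuffer dup = buf.duplicate();
            dup.position(start);
            dup.limit(end);
            return dup.slice();
        }

        public static void main(String[] args) {
            // One large buffer covering the merged range 0-1500.
            ByteBuffer merged = ByteBuffer.allocate(1500);

            // Slices returned to the client for the requested ranges.
            ByteBuffer r1 = sliceRange(merged, 0, 500);
            ByteBuffer r2 = sliceRange(merged, 700, 1000);
            ByteBuffer r3 = sliceRange(merged, 1200, 1500);

            // Dropping the merged reference frees nothing: each slice
            // still points at the same 1500-byte array, so the gap bytes
            // (500-700 and 1000-1200) stay reachable until the last
            // slice is garbage collected.
            merged = null;
            System.out.println("r1 = " + r1.capacity() + " bytes");  // 500
            System.out.println("r2 = " + r2.capacity() + " bytes");  // 300
            System.out.println("r3 = " + r3.capacity() + " bytes");  // 300
        }
    }

The same holds for direct buffers: a slice keeps the whole parent allocation 
alive, so one way to release the gaps would be to copy the requested ranges 
out of the merged buffer, at the cost of an extra copy per range.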



