[
https://issues.apache.org/jira/browse/HADOOP-18106?focusedWorklogId=781389&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-781389
]
ASF GitHub Bot logged work on HADOOP-18106:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 14/Jun/22 21:04
Start Date: 14/Jun/22 21:04
Worklog Time Spent: 10m
Work Description: mukund-thakur commented on code in PR #4427:
URL: https://github.com/apache/hadoop/pull/4427#discussion_r897321545
##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java:
##########
@@ -273,6 +273,7 @@ public boolean hasCapability(String capability) {
// new capabilities.
switch (capability.toLowerCase(Locale.ENGLISH)) {
case StreamCapabilities.IOSTATISTICS:
+ case StreamCapabilities.VECTOREDIO:
Review Comment:
thx
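For context, a caller is expected to probe this capability on the opened stream before relying on vectored reads. A minimal sketch, assuming the FileRange/readVectored() API from the vectored IO feature branch (the class name and file path below are made up for illustration):

{code:java}
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StreamCapabilities;

public class VectoredReadProbe {
  public static void main(String[] args) throws Exception {
    Path path = new Path("file:///tmp/data.bin");   // illustrative local path
    FileSystem fs = path.getFileSystem(new Configuration());
    try (FSDataInputStream in = fs.open(path)) {
      // With this change the raw local stream advertises the capability;
      // streams that do not can still fall back to the default readVectored().
      if (in.hasCapability(StreamCapabilities.VECTOREDIO)) {
        List<FileRange> ranges = Arrays.asList(
            FileRange.createFileRange(0, 500),
            FileRange.createFileRange(700, 300),
            FileRange.createFileRange(1200, 300));
        in.readVectored(ranges, ByteBuffer::allocate);
        for (FileRange range : ranges) {
          ByteBuffer data = range.getData().join();   // block for each range
          System.out.println("read " + data.remaining() + " bytes");
        }
      }
    }
  }
}
{code}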
Issue Time Tracking
-------------------
Worklog Id: (was: 781389)
Time Spent: 50m (was: 40m)
> Handle memory fragmentation in S3 Vectored IO implementation.
> -------------------------------------------------------------
>
> Key: HADOOP-18106
> URL: https://issues.apache.org/jira/browse/HADOOP-18106
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Mukund Thakur
> Assignee: Mukund Thakur
> Priority: Major
> Labels: pull-request-available
> Time Spent: 50m
> Remaining Estimate: 0h
>
> Since the S3AInputStream implementation of the vectored IO API merges nearby
> ranges, it can lead to memory fragmentation. An example:
>
> Suppose the client requests 3 ranges: 0-500, 700-1000 and 1200-1500.
> Because of merging, all of the above ranges get combined into one, so we
> allocate a single large byte buffer covering 0-1500 but return sliced byte
> buffers for the requested ranges.
> Once the client is done reading all the ranges, it can only release the
> memory of the requested ranges; the memory of the gaps (here 500-700 and
> 1000-1200) is never released, because each slice keeps the whole backing
> buffer reachable.
>
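The gap retention described above follows from how java.nio.ByteBuffer slices share a single backing array. A standalone, JDK-only sketch (class and method names are illustrative, not from the S3A code):

{code:java}
import java.nio.ByteBuffer;

public class SliceRetentionDemo {
  public static void main(String[] args) {
    // One merged buffer covering 0-1500, as the merged-range read would allocate.
    ByteBuffer merged = ByteBuffer.allocate(1500);

    // Slices handed back to the client for the requested ranges.
    ByteBuffer r1 = sliceRange(merged, 0, 500);
    ByteBuffer r2 = sliceRange(merged, 700, 300);
    ByteBuffer r3 = sliceRange(merged, 1200, 300);

    // Each slice still references the full 1500-byte backing array, so the
    // bytes of the gaps (500-700 and 1000-1200) stay on the heap as long as
    // any one of the slices remains reachable.
    System.out.println(r1.capacity() + " " + r2.capacity() + " " + r3.capacity());
  }

  // Returns a view of [offset, offset + length) backed by the same array.
  static ByteBuffer sliceRange(ByteBuffer buffer, int offset, int length) {
    ByteBuffer dup = buffer.duplicate();
    dup.position(offset).limit(offset + length);
    return dup.slice();
  }
}
{code}

Until every slice becomes unreachable, the garbage collector cannot reclaim any part of the 1500-byte array, including the 400 bytes belonging to the gaps.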