[ 
https://issues.apache.org/jira/browse/HADOOP-18876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761752#comment-17761752
 ] 

Anmol Asrani commented on HADOOP-18876:
---------------------------------------

Many of our Hadoop distro partners have set this configuration to bytebuffer as 
the default, and we have not seen any escalation reporting OOM. One customer who 
was spinning up a cluster directly from the OSS stack (hence the config was set 
to disk) raised an escalation that their disk space ran out. 

Could you please share the details of the scenario where OOM was observed with 
bytebuffer, which led to disk being chosen as the default? We are open to making 
updates to handle that case as part of this change.
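
For reference, a minimal sketch (not part of this change) of how a user can pin 
the buffering mode explicitly rather than rely on the default, assuming the 
fs.azure.data.blocks.buffer key and the "disk"/"bytebuffer" values named in this 
issue; the container URI below is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class AbfsBufferModeExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Pin the data-block buffering mode explicitly instead of relying on
            // the cluster default: "bytebuffer" keeps upload blocks in memory,
            // while "disk" spills them to local temporary storage.
            conf.set("fs.azure.data.blocks.buffer", "bytebuffer");
            FileSystem fs = FileSystem.get(
                new java.net.URI("abfss://container@account.dfs.core.windows.net/"),
                conf);
            System.out.println("Buffer mode: " + conf.get("fs.azure.data.blocks.buffer"));
            fs.close();
        }
    }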

> ABFS: Change default from disk to bytebuffer for fs.azure.data.blocks.buffer
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-18876
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18876
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: build
>    Affects Versions: 3.3.6
>            Reporter: Anmol Asrani
>            Assignee: Anmol Asrani
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.6
>
>
> Change default from disk to bytebuffer for fs.azure.data.blocks.buffer.
> Gathered from multiple workload runs, the presented data underscores a 
> noteworthy enhancement in performance. The adoption of ByteBuffer for 
> *read operations* exhibited a remarkable improvement of approximately 
> *64.83%* when compared to traditional disk-based reading. Similarly, the 
> implementation of ByteBuffer for *write operations* yielded a substantial 
> efficiency gain of about *60.75%*. These findings underscore the 
> consistent and substantial advantages of integrating ByteBuffer across a 
> range of workload scenarios.



