[
https://issues.apache.org/jira/browse/HADOOP-18876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17761843#comment-17761843
]
Steve Loughran commented on HADOOP-18876:
-----------------------------------------
This has been seen more with the s3a client, which is where the buffering was taken
from (HADOOP-13560 and links); then there was HADOOP-17195.
If we can be confident about memory consumption even on a system with 64 Spark
threads then I'll be happy, but we need to make sure that the max number of
queued requests is kept under control.
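For anyone worried about memory pressure here, a minimal sketch of how a job
could keep the old behaviour and bound queued blocks, assuming the option names
from the hadoop-azure documentation (fs.azure.data.blocks.buffer for the buffer
type, fs.azure.block.upload.active.blocks for the per-stream queue limit; verify
both against the release in use):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class AbfsBufferTuning {
  public static Configuration memoryConsciousConf() {
    Configuration conf = new Configuration();
    // buffer upload blocks on local disk instead of heap/direct memory
    conf.set("fs.azure.data.blocks.buffer", "disk");
    // cap how many blocks a single output stream may queue for upload;
    // worst-case memory is roughly threads x active blocks x block size
    conf.setInt("fs.azure.block.upload.active.blocks", 4);
    return conf;
  }
}
{code}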
This would be a really good time to add support for IOStatisticsContext in the
abfs input and output streams: the s3a manifest committers will collect some of
this (HADOOP-17461), though I've never quite been successful wiring it all the
way through Spark. Now that Spark is on Hadoop 3.3.5+ we can actually do this
without playing reflection games.
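As a sketch of what picking that up could look like on the application side
(using the IOStatisticsContext API from HADOOP-17461 as shipped in Hadoop
3.3.5+; the path argument and read loop are purely illustrative):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.statistics.IOStatisticsContext;
import org.apache.hadoop.fs.statistics.IOStatisticsLogging;

public class AbfsStreamStatsProbe {
  public static void main(String[] args) throws Exception {
    // reset the current thread's context so only this work is counted
    IOStatisticsContext ctx = IOStatisticsContext.getCurrentIOStatisticsContext();
    ctx.reset();

    Path path = new Path(args[0]);  // e.g. an abfs:// path
    try (FileSystem fs = path.getFileSystem(new Configuration());
         FSDataInputStream in = fs.open(path)) {
      byte[] buffer = new byte[8192];
      while (in.read(buffer) >= 0) {
        // drain the stream; streams that support IOStatistics aggregate
        // into the thread-level context when they are closed
      }
    }

    // dump whatever the streams reported into this thread's context
    System.out.println(
        IOStatisticsLogging.ioStatisticsToPrettyString(ctx.snapshot()));
  }
}
{code}
Whether the abfs streams actually propagate anything into the thread context is
exactly the gap this comment is about; today that snapshot may simply come back
empty for abfs.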
> ABFS: Change default from disk to bytebuffer for fs.azure.data.blocks.buffer
> ----------------------------------------------------------------------------
>
> Key: HADOOP-18876
> URL: https://issues.apache.org/jira/browse/HADOOP-18876
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: build
> Affects Versions: 3.3.6
> Reporter: Anmol Asrani
> Assignee: Anmol Asrani
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.6
>
>
> Change the default for fs.azure.data.blocks.buffer from disk to bytebuffer.
> Data gathered from multiple workload runs shows a notable performance
> improvement. Using ByteBuffer for *read operations* was approximately
> *64.83%* faster than traditional disk-based reading, and ByteBuffer for
> *write operations* yielded an efficiency gain of about *60.75%*. These gains
> were consistent across the range of workload scenarios tested.