[
https://issues.apache.org/jira/browse/HADOOP-16022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16742380#comment-16742380
]
Steve Loughran commented on HADOOP-16022:
-----------------------------------------
That's a pretty old test suite there; you can look at it and see it's time for a
rework.
# Maybe ask one of the original authors like [~chris.douglas].
# What if the magic numbers were retained, but upped to 8K and the tests driven
off that constant? (A rough sketch of that idea follows below.)
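A minimal sketch of what that could look like. The class and constant names
here are placeholders, not the actual layout of Compression.java: keep the
named constants so no bare magic numbers appear at call sites, raise both to
8K, and have the tests size their payloads off the same constants.
{code:java}
// Hypothetical sketch: named constants raised to 8 KiB; tests reference
// these values instead of hard-coding 1K/4K payload sizes of their own.
public final class BufferSizes {

  /** Data input buffer size to absorb small reads from the application. */
  public static final int DATA_IBUF_SIZE = 8 * 1024;

  /** Data output buffer size to absorb small writes from the application. */
  public static final int DATA_OBUF_SIZE = 8 * 1024;

  private BufferSizes() {
    // constants holder, not instantiable
  }
}
{code}
A test can then build its payload as, say, {{new byte[BufferSizes.DATA_OBUF_SIZE * 3]}},
so it keeps exercising the "larger than the buffer" case even if the size is
tuned again later.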
> Increase Compression Buffer Sizes - Remove Magic Numbers
> --------------------------------------------------------
>
> Key: HADOOP-16022
> URL: https://issues.apache.org/jira/browse/HADOOP-16022
> Project: Hadoop Common
> Issue Type: Improvement
> Components: io
> Affects Versions: 2.10.0, 3.2.0
> Reporter: BELUGA BEHR
> Assignee: BELUGA BEHR
> Priority: Minor
> Attachments: HADOOP-16022.1.patch
>
>
> {code:java|title=Compression.java}
> // data input buffer size to absorb small reads from application.
> private static final int DATA_IBUF_SIZE = 1 * 1024;
> // data output buffer size to absorb small writes from application.
> private static final int DATA_OBUF_SIZE = 4 * 1024;
> {code}
> These hard-coded buffer sizes exist in the Compression code. Instead, use the
> JVM default sizes which, these days, are usually 8K.
>
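For context on the "JVM default sizes" mentioned above: there is no single
JVM-wide setting, but the java.io buffered streams allocate an 8192-byte
buffer when no explicit size is passed, which is presumably the 8K the
description has in mind. A small illustration (the file name is just an
example):
{code:java}
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class DefaultBufferSizeExample {
  public static void main(String[] args) throws IOException {
    // The no-size BufferedOutputStream constructor allocates an 8192-byte
    // buffer, i.e. the "8K" default, versus the 1K/4K values hard-coded above.
    try (OutputStream out =
        new BufferedOutputStream(new FileOutputStream("example.dat"))) {
      // Writes at least as large as the buffer are passed straight through
      // to the underlying stream; smaller writes are absorbed by the buffer.
      out.write(new byte[16 * 1024]);
    }
  }
}
{code}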