[
https://issues.apache.org/jira/browse/HADOOP-18246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683352#comment-17683352
]
ASF GitHub Bot commented on HADOOP-18246:
-----------------------------------------
sauraank commented on PR #5120:
URL: https://github.com/apache/hadoop/pull/5120#issuecomment-1413577047
Hey @steveloughran, please review this PR, which reduces the lower limit on the
prefetching/caching block size to 1 byte.
Thanks.
> Remove lower limit on s3a prefetching/caching block size
> --------------------------------------------------------
>
> Key: HADOOP-18246
> URL: https://issues.apache.org/jira/browse/HADOOP-18246
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 3.4.0
> Reporter: Daniel Carl Jones
> Assignee: Ankit Saurabh
> Priority: Minor
> Labels: pull-request-available
>
> The minimum allowed block size currently is {{PREFETCH_BLOCK_DEFAULT_SIZE}}
> (8MB).
> {code:java}
> this.prefetchBlockSize = intOption(
>     conf, PREFETCH_BLOCK_SIZE_KEY,
>     PREFETCH_BLOCK_DEFAULT_SIZE, PREFETCH_BLOCK_DEFAULT_SIZE);{code}
> [https://github.com/apache/hadoop/blob/3aa03e0eb95bbcb066144706e06509f0e0549196/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L487-L488]
> Why is this the case and should we lower or remove it?
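The snippet above passes {{PREFETCH_BLOCK_DEFAULT_SIZE}} as both the default and the minimum, so any configured block size below 8MB is rejected. A minimal, self-contained sketch of that validation pattern (not the actual Hadoop code; the class, method body, and messages here are illustrative, mirroring only the shape of {{intOption(conf, key, defVal, min)}}):

```java
// Sketch: how a minimum bound on the configured prefetch block size
// rejects small values, and what lowering that bound to 1 changes.
public class PrefetchBlockSizeCheck {
    static final int PREFETCH_BLOCK_DEFAULT_SIZE = 8 * 1024 * 1024; // 8 MB

    // Illustrative stand-in for intOption(conf, key, defVal, min):
    // returns the configured value (or the default if unset),
    // failing if it falls below the minimum.
    static int intOption(int configuredValue, int defVal, int min) {
        int v = configuredValue > 0 ? configuredValue : defVal;
        if (v < min) {
            throw new IllegalArgumentException(
                "Block size " + v + " is below the minimum " + min);
        }
        return v;
    }

    public static void main(String[] args) {
        // Current behaviour: minimum == default, so 1 MB is rejected.
        try {
            intOption(1024 * 1024,
                PREFETCH_BLOCK_DEFAULT_SIZE, PREFETCH_BLOCK_DEFAULT_SIZE);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        // With the minimum lowered to 1 byte, the same 1 MB value passes.
        System.out.println("accepted: "
            + intOption(1024 * 1024, PREFETCH_BLOCK_DEFAULT_SIZE, 1));
    }
}
```

With the minimum lowered to 1, small block sizes become usable, e.g. for tests that want tiny blocks without multi-megabyte test data.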
--
This message was sent by Atlassian Jira
(v8.20.10#820010)