[ https://issues.apache.org/jira/browse/HADOOP-18246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17620918#comment-17620918 ]
Daniel Carl Jones commented on HADOOP-18246:
--------------------------------------------

Is there a more realistic number we should use for the lower bound? I'd initially suggested we could let users configure it as low as a single byte (which makes no sense to me, but why stop them), but is there a lower bound that no one would ever realistically need? Do you have any thoughts on this, [~ste...@apache.org]?

> Remove lower limit on s3a prefetching/caching block size
> --------------------------------------------------------
>
>                 Key: HADOOP-18246
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18246
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.4.0
>           Reporter: Daniel Carl Jones
>          Priority: Minor
>              Labels: pull-request-available
>
> The minimum allowed block size is currently {{PREFETCH_BLOCK_DEFAULT_SIZE}} (8MB).
> {code:java}
> this.prefetchBlockSize = intOption(
>     conf, PREFETCH_BLOCK_SIZE_KEY,
>     PREFETCH_BLOCK_DEFAULT_SIZE, PREFETCH_BLOCK_DEFAULT_SIZE);
> {code}
> [https://github.com/apache/hadoop/blob/3aa03e0eb95bbcb066144706e06509f0e0549196/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L487-L488]
> Why is this the case, and should we lower or remove it?

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
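For context, the clamping behaviour under discussion can be sketched as below. This is a self-contained approximation of the option-reading pattern the quoted snippet relies on (an `intOption(conf, key, defVal, min)` helper that rejects values below a minimum); it is not the actual Hadoop implementation, and the plain `Map` stands in for Hadoop's `Configuration`. Because the call passes `PREFETCH_BLOCK_DEFAULT_SIZE` as both the default and the minimum, any configured value below 8MB is rejected, which is exactly the limit this issue proposes to relax.

```java
import java.util.HashMap;
import java.util.Map;

public class PrefetchBlockSizeSketch {
    // Hypothetical stand-ins for the constants referenced in the snippet.
    static final String PREFETCH_BLOCK_SIZE_KEY = "fs.s3a.prefetch.block.size";
    static final int PREFETCH_BLOCK_DEFAULT_SIZE = 8 * 1024 * 1024; // 8MB

    /** Read an int option, falling back to defVal and rejecting values below min. */
    static int intOption(Map<String, String> conf, String key, int defVal, int min) {
        String raw = conf.get(key);
        int value = (raw == null) ? defVal : Integer.parseInt(raw);
        if (value < min) {
            throw new IllegalArgumentException(
                "Value of " + key + ": " + value + " is below the minimum " + min);
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();

        // Unset: falls back to the 8MB default.
        int size = intOption(conf, PREFETCH_BLOCK_SIZE_KEY,
            PREFETCH_BLOCK_DEFAULT_SIZE, PREFETCH_BLOCK_DEFAULT_SIZE);
        System.out.println("unset -> " + size);

        // A 1MB setting is rejected because min == default == 8MB.
        conf.put(PREFETCH_BLOCK_SIZE_KEY, String.valueOf(1024 * 1024));
        try {
            intOption(conf, PREFETCH_BLOCK_SIZE_KEY,
                PREFETCH_BLOCK_DEFAULT_SIZE, PREFETCH_BLOCK_DEFAULT_SIZE);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Removing the lower bound would amount to passing a smaller minimum (or no minimum) as the last argument, independent of the default.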