[
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825914#comment-17825914
]
ASF GitHub Bot commented on HADOOP-19102:
-----------------------------------------
saxenapranav commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1522580484
##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##########
@@ -322,28 +434,52 @@ private void testPartialReadWithNoData(final FileSystem fs,
@Test
public void testPartialReadWithSomeData() throws Exception {
- for (int i = 0; i <= 4; i++) {
- for (int j = 0; j <= 2; j++) {
- int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
- int footerReadBufferSize = (int) Math.pow(2, j) * 256 * ONE_KB;
- final AzureBlobFileSystem fs = getFileSystem(true,
- fileSize, footerReadBufferSize);
- String fileName = methodName.getMethodName() + i;
- byte[] fileContent = getRandomBytesArray(fileSize);
- Path testFilePath = createFileWithContent(fs, fileName, fileContent);
- testPartialReadWithSomeData(fs, testFilePath,
- fileSize - AbfsInputStream.FOOTER_SIZE,
- AbfsInputStream.FOOTER_SIZE,
- fileContent, footerReadBufferSize);
+ int fileIdx = 0;
+ List<Future<?>> futureList = new ArrayList<>();
+ for (int fileSize : FILE_SIZES) {
+ final int fileId = fileIdx++;
+ futureList.add(executorService.submit(() -> {
+ try (AzureBlobFileSystem spiedFs = createSpiedFs(
+ getRawConfiguration())) {
+ String fileName = methodName.getMethodName() + fileId;
+ byte[] fileContent = getRandomBytesArray(fileSize);
+ Path testFilePath = createFileWithContent(spiedFs, fileName,
+ fileContent);
+ testPartialReadWithSomeData(spiedFs, fileSize, testFilePath,
+ fileContent);
+ } catch (Exception ex) {
+ throw new RuntimeException(ex);
+ }
+ }));
+ }
+ for (Future<?> future : futureList) {
+ future.get();
+ }
+ }
+
+ private void testPartialReadWithSomeData(final AzureBlobFileSystem spiedFs,
+ final int fileSize, final Path testFilePath, final byte[] fileContent)
+ throws IOException {
+ for (int readBufferSize : READ_BUFFER_SIZE) {
+ for (int footerReadBufferSize : FOOTER_READ_BUFFER_SIZE) {
+ changeFooterConfigs(spiedFs, true,
+ fileSize, footerReadBufferSize, readBufferSize);
+
+ testPartialReadWithSomeData(spiedFs, testFilePath,
+ fileSize - AbfsInputStream.FOOTER_SIZE,
+ AbfsInputStream.FOOTER_SIZE,
+ fileContent, footerReadBufferSize, readBufferSize);
}
}
}
private void testPartialReadWithSomeData(final FileSystem fs,
Review Comment:
Taken. Refactored the names of non-test-entry methods.
> [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
> ----------------------------------------------------------------------
>
> Key: HADOOP-19102
> URL: https://issues.apache.org/jira/browse/HADOOP-19102
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Pranav Saxena
> Assignee: Pranav Saxena
> Priority: Major
> Labels: pull-request-available
>
> The method `optimisedRead` creates a buffer array of size `readBufferSize`.
> If `footerReadBufferSize` is greater than `readBufferSize`, ABFS will attempt
> to read more data than the buffer array can hold, which causes an exception.
> Change: to avoid this, we will keep footerReadBufferSize =
> min(readBufferSizeConfig, footerReadBufferSizeConfig)
>
>
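The clamping described in the issue can be sketched as below. This is an illustrative sketch only: the class, method, and variable names are assumptions for demonstration, not the actual AbfsInputStream implementation.

```java
// Hypothetical sketch of the fix: clamp the configured footer read size
// so a footer read can never exceed the buffer allocated at readBufferSize.
public class FooterReadBufferSizeDemo {

    static int effectiveFooterReadBufferSize(int readBufferSize,
            int configuredFooterReadBufferSize) {
        // Keep footerReadBufferSize = min(readBufferSize, configured value).
        return Math.min(readBufferSize, configuredFooterReadBufferSize);
    }

    public static void main(String[] args) {
        int readBufferSize = 256 * 1024;        // 256 KB buffer allocation
        int footerReadBufferSize = 512 * 1024;  // larger, misconfigured value
        // Without clamping, reading 512 KB into a 256 KB array would
        // overflow the buffer; the clamp caps the read at 256 KB.
        System.out.println(
            effectiveFooterReadBufferSize(readBufferSize, footerReadBufferSize));
    }
}
```

Taking the minimum of the two configured values means a user who lowers readBufferSize never has to remember to lower footerReadBufferSize as well.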
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]