[ https://issues.apache.org/jira/browse/HADOOP-11867?focusedWorklogId=710492&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-710492 ]

ASF GitHub Bot logged work on HADOOP-11867:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/Jan/22 13:14
            Start Date: 18/Jan/22 13:14
    Worklog Time Spent: 10m 
      Work Description: mehakmeet commented on a change in pull request #3499:
URL: https://github.com/apache/hadoop/pull/3499#discussion_r786719358



##########
File path: hadoop-tools/hadoop-aws/src/test/resources/log4j.properties
##########
@@ -52,7 +52,7 @@ log4j.logger.org.apache.hadoop.ipc.Server=WARN
 
 # for debugging low level S3a operations, uncomment these lines
 # Log all S3A classes
-#log4j.logger.org.apache.hadoop.fs.s3a=DEBUG
+log4j.logger.org.apache.hadoop.fs.s3a=DEBUG

Review comment:
       Maybe enable this only while testing; it shouldn't be DEBUG by default, I assume?

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractVectoredReadTest.java
##########
@@ -0,0 +1,291 @@
+package org.apache.hadoop.fs.contract;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileRange;
+import org.apache.hadoop.fs.FileRangeImpl;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.impl.FutureIOSupport;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutionException;
+import java.util.function.IntFunction;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import static org.apache.hadoop.fs.contract.ContractTestUtils.createFile;
+
+@RunWith(Parameterized.class)
+public abstract class AbstractContractVectoredReadTest extends AbstractFSContractTestBase {
+
+  private static final Logger LOG = LoggerFactory.getLogger(AbstractContractVectoredReadTest.class);
+
+  public static final int DATASET_LEN = 1024;
+  private static final byte[] DATASET = ContractTestUtils.dataset(DATASET_LEN, 'a', 32);
+  private static final String VECTORED_READ_FILE_NAME = "vectored_file.txt";
+  private static final String VECTORED_READ_FILE_1MB_NAME = "vectored_file_1M.txt";
+  private static final byte[] DATASET_MB = ContractTestUtils.dataset(1024 * 1024, 'a', 256);
+
+
+  private final IntFunction<ByteBuffer> allocate;
+
+  private final String bufferType;
+
+  @Parameterized.Parameters

Review comment:
       Use "name" in Parameters, for better readability of the test on IDE. 
Something like 
   ```
   @Parameterized.Parameters(name = "Buffer type : {0}")
   ```
   Would be helpful in realizing which test is running direct or array 
bufferType.
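
For illustration, a minimal sketch of how the named parameters might look on the reviewed class; the params() factory method and the "direct"/"array" values are assumptions based on the bufferType and allocate fields shown in the diff, not the PR's actual code:

```
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;
import java.util.function.IntFunction;

import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public abstract class AbstractContractVectoredReadTest extends AbstractFSContractTestBase {

  // With "name", runs show up as "Buffer type : direct" / "Buffer type : array"
  // instead of the default "[0]" / "[1]" indices.
  @Parameterized.Parameters(name = "Buffer type : {0}")
  public static List<String> params() {
    return Arrays.asList("direct", "array");
  }

  private final String bufferType;
  private final IntFunction<ByteBuffer> allocate;

  protected AbstractContractVectoredReadTest(String bufferType) {
    this.bufferType = bufferType;
    // Direct buffers exercise the off-heap path, heap buffers the on-heap path.
    this.allocate = "direct".equals(bufferType)
        ? ByteBuffer::allocateDirect
        : ByteBuffer::allocate;
  }
}
```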

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/localfs/TestLocalFSConractVectoredRead.java
##########
@@ -0,0 +1,17 @@
+package org.apache.hadoop.fs.contract.localfs;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+public class TestLocalFSConractVectoredRead extends AbstractContractVectoredReadTest {

Review comment:
       typo: "Contract"

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -46,8 +51,17 @@
 import java.io.EOFException;
 import java.io.IOException;
 import java.net.SocketTimeoutException;
+import java.nio.ByteBuffer;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutorService;

Review comment:
       nit: Not used, could be removed.

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -793,6 +810,208 @@ public void readFully(long position, byte[] buffer, int offset, int length)
     }
   }
 
+  /**
+   * {@inheritDoc}
+   * Vectored read implementation for S3AInputStream.
+   * @param ranges the byte ranges to read.
+   * @param allocate the function to allocate ByteBuffer.
+   * @throws IOException IOE if any.
+   */
+  @Override
+  public void readVectored(List<? extends FileRange> ranges,
+                           IntFunction<ByteBuffer> allocate) throws IOException {
+    // TODO   Cancelling of vectored read calls.
+    // No upfront cancelling is supported right now but all runnable
+    // tasks will be aborted when threadpool will shutdown during S3AFS.close();
+    // TODO   unbounded corner cases like starvation etc.
+    // think of creating a separate thread pool
+    // for vectored read api such that its bad usage doesn't cause
+    // starvation for other api's like list, delete etc.
+    // TODO: what if combined range becomes so big that memory can't be allocated.
+    checkNotClosed();
+    for (FileRange range : ranges) {
+      validateRangeRequest(range);
+      CompletableFuture<ByteBuffer> result = new CompletableFuture<>();
+      range.setData(result);
+    }
+
+    if (isOrderedDisjoint(ranges, 1, minSeekForVectorReads())) {

Review comment:
       Add debug logs indicating that the ranges are ordered and disjoint and will not be merged, and likewise when they do get merged.
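
As a rough illustration of where such logging could go in the branch shown above; the mergeSortedRanges call is only a placeholder for whatever merging the PR actually performs, not a confirmed method name:

```
    if (isOrderedDisjoint(ranges, 1, minSeekForVectorReads())) {
      LOG.debug("Ranges are ordered and disjoint; reading each range directly without merging");
      // ... issue one asynchronous read per range ...
    } else {
      // hypothetical merge step; the real method name/signature may differ
      List<? extends FileRange> merged = mergeSortedRanges(ranges);
      LOG.debug("Merged {} input ranges into {} combined ranges",
          ranges.size(), merged.size());
      // ... issue one asynchronous read per merged range and slice the results ...
    }
```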

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -100,6 +114,7 @@
   private S3ObjectInputStream wrappedStream;
   private final S3AReadOpContext context;
   private final InputStreamCallbacks client;
+  private final ThreadPoolExecutor unboundedThreadPool;

Review comment:
       A comment describing that this threadpool is used for vectored reads would be helpful.
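
For example, a field-level comment along these lines would do (the wording is only a suggestion):

```
  /**
   * Thread pool used to run the asynchronous per-range reads issued by the
   * vectored read API ({@link #readVectored(List, IntFunction)}).
   */
  private final ThreadPoolExecutor unboundedThreadPool;
```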

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/MoreAsserts.java
##########
@@ -57,10 +62,28 @@
     Iterator<T> ita = actual.iterator();
     int i = 0;
     while (ite.hasNext() && ita.hasNext()) {
-      Assert.assertEquals("Element "+ i +" for "+s, ite.next(), ita.next());
+      Assert.assertEquals("Element " + i + " for " + s, ite.next(), ita.next());
     }
     Assert.assertTrue("Expected more elements", !ite.hasNext());
     Assert.assertTrue("Expected less elements", !ita.hasNext());
   }
 
+
+  public static <T> void assertFutureCompletedSuccessfully(CompletableFuture<T> future) {
+    Assertions.assertThat(future.isDone())
+            .describedAs("This future is supposed to be " +
+                    "completed successfully")
+            .isTrue();
+    Assertions.assertThat(future.isCompletedExceptionally())
+            .describedAs("This future is supposed to be " +
+                    "completed successfully")
+            .isFalse();
+  }
+
+  public static <T> void assertFutureFailedExceptionaly(CompletableFuture<T> future) {

Review comment:
       typo: "Exceptionally"
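
For context, a sketch of how these new helpers might be exercised from a test; the test class and method names are illustrative, and it uses the corrected "Exceptionally" spelling suggested above:

```
import java.io.EOFException;
import java.nio.ByteBuffer;
import java.util.concurrent.CompletableFuture;

import org.junit.Test;

import org.apache.hadoop.test.MoreAsserts;

public class TestFutureAsserts {

  @Test
  public void testFutureAssertHelpers() {
    // A future that has already completed successfully.
    CompletableFuture<ByteBuffer> ok =
        CompletableFuture.completedFuture(ByteBuffer.allocate(8));
    MoreAsserts.assertFutureCompletedSuccessfully(ok);

    // A future that failed with an exception.
    CompletableFuture<ByteBuffer> failed = new CompletableFuture<>();
    failed.completeExceptionally(new EOFException("read past end of file"));
    MoreAsserts.assertFutureFailedExceptionally(failed);
  }
}
```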






Issue Time Tracking
-------------------

    Worklog Id:     (was: 710492)
    Time Spent: 10.5h  (was: 10h 20m)

> FS API: Add a high-performance vectored Read to FSDataInputStream API
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-11867
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11867
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs, fs/azure, fs/s3, hdfs-client
>    Affects Versions: 3.0.0
>            Reporter: Gopal Vijayaraghavan
>            Assignee: Mukund Thakur
>            Priority: Major
>              Labels: performance, pull-request-available
>          Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> The most effective way to read from a filesystem efficiently is to let the FileSystem implementation handle the seek behaviour underneath the API so that it can be as efficient as possible.
> A better approach to the seek problem is to provide a sequence of read locations as part of a single call, while letting the system schedule/plan the reads ahead of time.
> This is exceedingly useful for seek-heavy readers on HDFS, since it allows the seek-gaps to potentially be optimized away within the FSDataInputStream implementation.
> For seek+read systems with even more latency than locally-attached disks, something like a {{readFully(long[] offsets, ByteBuffer[] chunks)}} would take care of the seeks internally while reading chunk.remaining() bytes into each chunk (which may be {{slice()}}ed off a bigger buffer).
> The base implementation can stub this in as a sequence of seeks + read() into ByteBuffers, without forcing each FS implementation to override it in any way.
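
A minimal sketch of the kind of default implementation the last paragraph describes, assuming a FileRange exposing getOffset()/getLength()/setData() (as used in the PR's diff above) and a stream that already provides a positioned readFully; this is an illustration, not the PR's actual code:

```
  /**
   * Naive default: one positioned readFully per requested range,
   * completing each range's future with its own ByteBuffer.
   */
  public void readVectored(List<? extends FileRange> ranges,
                           IntFunction<ByteBuffer> allocate) throws IOException {
    for (FileRange range : ranges) {
      CompletableFuture<ByteBuffer> result = new CompletableFuture<>();
      range.setData(result);
      try {
        ByteBuffer buffer = allocate.apply(range.getLength());
        byte[] tmp = new byte[range.getLength()];
        // PositionedReadable.readFully: the stream handles the seek + read.
        readFully(range.getOffset(), tmp, 0, tmp.length);
        buffer.put(tmp);
        buffer.flip();
        result.complete(buffer);
      } catch (IOException e) {
        result.completeExceptionally(e);
      }
    }
  }
```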


