[ 
https://issues.apache.org/jira/browse/HADOOP-15245?focusedWorklogId=741461&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-741461
 ]

ASF GitHub Bot logged work on HADOOP-15245:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 15/Mar/22 06:58
            Start Date: 15/Mar/22 06:58
    Worklog Time Spent: 10m 
      Work Description: mehakmeet commented on a change in pull request #3927:
URL: https://github.com/apache/hadoop/pull/3927#discussion_r826616149



##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -781,6 +781,46 @@ public void readFully(long position, byte[] buffer, int offset, int length)
     }
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * This implements a more efficient method for skip. It calls lazy seek
+   * which will either make a new get request or do a default skip.
+   * If lazy seek fails, try doing a default skip.
+   *
+   * @param n Number of bytes to be skipped
+   * @return Number of bytes skipped
+   * @throws IOException on any problem
+   */
+  @Override
+  @Retries.OnceTranslated
+  public long skip(final long n) throws IOException {
+
+    if (n <= 0) {
+      return 0;
+    }
+
+    checkNotClosed();
+    streamStatistics.skipOperationStarted();
+
+    long targetPos = pos + n;

Review comment:
       use getPos() instead of pos.
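
       A minimal sketch of the suggested change (assuming getPos() reports the
same logical position that the pos field tracks):

           long targetPos = getPos() + n;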

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -781,6 +781,46 @@ public void readFully(long position, byte[] buffer, int offset, int length)
     }
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * This implements a more efficient method for skip. It calls lazy seek
+   * which will either make a new get request or do a default skip.
+   * If lazy seek fails, try doing a default skip.
+   *
+   * @param n Number of bytes to be skipped
+   * @return Number of bytes skipped
+   * @throws IOException on any problem
+   */
+  @Override
+  @Retries.OnceTranslated
+  public long skip(final long n) throws IOException {
+
+    if (n <= 0) {
+      return 0;
+    }
+
+    checkNotClosed();
+    streamStatistics.skipOperationStarted();
+
+    long targetPos = pos + n;
+    long skipped;
+
+    try {
+      lazySeek(targetPos, 1);
+      skipped = n;

Review comment:
       What would happen if we have already seeked or read part of the file and
then try to skip further than the fileSize? As written, we return the number of
bytes we want to skip rather than the number of bytes we actually skipped.
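
       One way to make the return value reflect what was actually skipped is to
clamp against the bytes remaining before seeking. A rough sketch only (it
assumes contentLength is the length field the stream already tracks; this is
not the PR's code):

           long remaining = contentLength - getPos();  // bytes left in the object
           long toSkip = Math.min(n, Math.max(remaining, 0));
           if (toSkip == 0) {
             return 0;
           }
           lazySeek(getPos() + toSkip, 1);
           skipped = toSkip;                           // report the actual skip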

##########
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AInputStreamPerformance.java
##########
@@ -467,6 +467,35 @@ public void testRandomIORandomPolicy() throws Throwable {
         0, streamStatistics.getAborted());
   }
 
+  @Test
+  public void testSkip() throws Throwable {

Review comment:
       Nice test to show the functionality. We could add a little more
verification that we actually skipped bytes by asserting on the content read
after skipping "n" bytes, and by adding seeks/reads between two skips to check
that we skip from the current position. A sketch of what that could look like
is below.
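
       Something like the following, for illustration (openTestFile() and
requireCSVTestData() are assumed to be the existing helpers in this test class,
and the exact byte values read are not asserted because they depend on the
test data file):

         @Test
         public void testSkipThenRead() throws Throwable {
           requireCSVTestData();
           try (FSDataInputStream in = openTestFile()) {
             long skipped = in.skip(1024);
             assertEquals("bytes skipped", 1024, skipped);
             assertEquals("position after skip", 1024, in.getPos());
             // read a byte so the next skip starts from a position reached by read()
             int firstByte = in.read();
             assertTrue("stream exhausted too early", firstByte >= 0);
             // a second skip must advance relative to the current position
             long skippedAgain = in.skip(2048);
             assertEquals("bytes skipped on second call", 2048, skippedAgain);
             assertEquals("position after second skip", 1024 + 1 + 2048, in.getPos());
           }
         }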

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -781,6 +781,46 @@ public void readFully(long position, byte[] buffer, int offset, int length)
     }
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * This implements a more efficient method for skip. It calls lazy seek
+   * which will either make a new get request or do a default skip.
+   * If lazy seek fails, try doing a default skip.
+   *
+   * @param n Number of bytes to be skipped
+   * @return Number of bytes skipped
+   * @throws IOException on any problem
+   */
+  @Override
+  @Retries.OnceTranslated
+  public long skip(final long n) throws IOException {
+
+    if (n <= 0) {
+      return 0;
+    }
+
+    checkNotClosed();
+    streamStatistics.skipOperationStarted();
+
+    long targetPos = pos + n;
+    long skipped;
+
+    try {
+      lazySeek(targetPos, 1);
+      skipped = n;
+    } catch (EOFException e) {

Review comment:
       Maybe we can also LOG the reason the lazySeek failed?
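
       For example (LOG is assumed to be the class's existing SLF4J logger, and
the fallback to super.skip(n) is an assumption about what the catch body does):

           } catch (EOFException e) {
             // record why the lazy seek failed before falling back to the default skip
             LOG.debug("lazySeek({}) failed, falling back to the default skip()",
                 targetPos, e);
             skipped = super.skip(n);
           }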

##########
File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
##########
@@ -781,6 +781,46 @@ public void readFully(long position, byte[] buffer, int offset, int length)
     }
   }
 
+  /**
+   * {@inheritDoc}
+   *
+   * This implements a more efficient method for skip. It calls lazy seek
+   * which will either make a new get request or do a default skip.
+   * If lazy seek fails, try doing a default skip.
+   *
+   * @param n Number of bytes to be skipped
+   * @return Number of bytes skipped
+   * @throws IOException on any problem
+   */
+  @Override
+  @Retries.OnceTranslated
+  public long skip(final long n) throws IOException {
+
+    if (n <= 0) {
+      return 0;
+    }
+
+    checkNotClosed();
+    streamStatistics.skipOperationStarted();
+
+    long targetPos = pos + n;

Review comment:
       What if targetPos is larger than fileSize? We could either cap targetPos
at fileSize or return early.
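
       Something along these lines, perhaps (contentLength is an assumption
about the stream's existing length field; illustrative only):

           // cap the target at end-of-object so we never try to seek past EOF
           long targetPos = Math.min(getPos() + n, contentLength);
           if (targetPos <= getPos()) {
             return 0;   // already at or past the end, nothing to skip
           }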






Issue Time Tracking
-------------------

    Worklog Id:     (was: 741461)
    Time Spent: 2h 20m  (was: 2h 10m)

> S3AInputStream.skip() to use lazy seek
> --------------------------------------
>
>                 Key: HADOOP-15245
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15245
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.1.0
>            Reporter: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> the default skip() reads and discards every byte, no matter how far ahead the
> skip is. This is very inefficient when skip() is done on S3A random IO, though
> it is less clear exactly what to do when in sequential mode.
> Proposed: 
> * add an optimized version of S3AInputStream.skip() which does a lazy seek,
> which itself will decide when to skip() vs issue a new GET.
> * add some more instrumentation to measure how often this gets used


