danielcweeks commented on code in PR #4608:
URL: https://github.com/apache/iceberg/pull/4608#discussion_r857028327


##########
aws/src/main/java/org/apache/iceberg/aws/s3/S3InputStream.java:
##########
@@ -111,6 +113,35 @@ public int read(byte[] b, int off, int len) throws IOException {
     return bytesRead;
   }
 
+  @Override
+  public void readFully(long position, byte[] buffer, int offset, int length) throws IOException {
+    Preconditions.checkPositionIndexes(offset, offset + length, buffer.length);
+
+    String range = String.format("bytes=%s-%s", position, position + length - 1);
+
+    IOUtil.readFully(readRange(range), buffer, offset, length);
+  }
+
+  @Override
+  public void readTail(byte[] buffer, int offset, int length) throws IOException {
+    Preconditions.checkPositionIndexes(offset, offset + length, buffer.length);
+
+    String range = String.format("bytes=-%s", length);

Review Comment:
   @rdblue and @electrum I tested this, and S3 does respect a suffix range read from the end of the file, but there are a few things to note:
   
   - if you request more bytes than the object size, S3 returns only the object's contents (no error, just fewer bytes than requested)
   - that will not work with how we implement readFully, but there's a way to read the file more flexibly (see the sketch below)
   - however, the S3 mock we use for testing does not support this type of range read, so we'll need another approach there
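   
   For illustration, here is a minimal sketch of that more flexible idea, assuming the signature from the diff above and that `readRange` returns an `InputStream` (this is just a sketch, not this PR's implementation): instead of delegating to `IOUtil.readFully`, which fails on a short read, the tail read could accept however many bytes S3 actually returns for the suffix range.
   
   ```java
   @Override
   public void readTail(byte[] buffer, int offset, int length) throws IOException {
     Preconditions.checkPositionIndexes(offset, offset + length, buffer.length);
   
     // Suffix range: the last `length` bytes of the object.
     String range = String.format("bytes=-%s", length);
   
     try (InputStream stream = readRange(range)) {
       int totalRead = 0;
       int bytesRead;
       // S3 may return fewer than `length` bytes when the object is smaller than
       // the requested suffix range, so stop at end-of-stream instead of treating
       // a short read as an error.
       while (totalRead < length &&
           (bytesRead = stream.read(buffer, offset + totalRead, length - totalRead)) != -1) {
         totalRead += bytesRead;
       }
     }
   }
   ```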



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

