steveloughran commented on code in PR #1010:
URL: https://github.com/apache/parquet-mr/pull/1010#discussion_r1022697746


##########
parquet-common/src/main/java/org/apache/parquet/io/InputFile.java:
##########
@@ -41,4 +41,16 @@ public interface InputFile {
    */
   SeekableInputStream newStream() throws IOException;
 
+  /**
+   * Open a new {@link SeekableInputStream} for the underlying data file,
+   * in the range of '[offset, offset + length)'
+   *
+   * @param offset the offset in the file to read from
+   * @param length the total number of bytes to read
+   * @return a new {@link SeekableInputStream} to read the file
+   * @throws IOException if the stream cannot be opened
+   */
+  default SeekableInputStream newStream(long offset, long length) throws IOException {

Review Comment:
   You should go with the Hadoop https://issues.apache.org/jira/browse/HADOOP-16202 options; the s3a filesystem now reads them, and it lines up abfs/gcs to do the same. You can declare the split start/end as well as the file length, so that:
   * file length => the client can skip existence probes, since it already knows the file limit
   * split range => prefetchers know not to read past the end of the split
   * read policy => a standard set of policies, plus a parse policy of "csv list of policies - pick the first one you recognise"; again, usable by all the stores



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@parquet.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
