rustyconover commented on code in PR #7267:
URL: https://github.com/apache/iceberg/pull/7267#discussion_r1157692510


##########
parquet/src/main/java/org/apache/iceberg/parquet/ParquetUtil.java:
##########
@@ -222,6 +222,24 @@ private static MessageType getParquetTypeWithIds(
     return ParquetSchemaUtil.addFallbackIds(type);
   }
 
+  /**
+   * Returns a list of offsets in ascending order determined by the starting position of the row
+   * groups.
+   */
+  public static List<Long> getSplitOffsets(InputFile file) {

Review Comment:
   I'm adding files directly to a table without using a writer: I already 
   have existing Parquet files on S3.
   
   To add those files I'm using `DataFile.builder()` combined with 
   `table.newAppend().appendFile().commit()`. The problem is that without a 
   writer there is no way to obtain the split offsets short of importing the 
   Hadoop dependency just to read them from the footer. Adding this method 
   lets me avoid the Hadoop dependency, saving about 300 MB in the JAR file.
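   
   For reference, here is a minimal sketch of that use case, assuming the 
   proposed `ParquetUtil.getSplitOffsets(InputFile)` from this PR. The class 
   name, the `appendExisting` helper, and its parameters are illustrative, 
   not part of this change; a real caller would also attach column metrics.
   
   ```java
   import java.util.List;
   
   import org.apache.iceberg.DataFile;
   import org.apache.iceberg.DataFiles;
   import org.apache.iceberg.FileFormat;
   import org.apache.iceberg.Table;
   import org.apache.iceberg.io.InputFile;
   import org.apache.iceberg.parquet.ParquetUtil;
   
   public class AppendExistingParquetFile {
   
     // Registers an existing Parquet file with the table. `path` and
     // `recordCount` are placeholders for values the caller already knows.
     static void appendExisting(Table table, String path, long recordCount) {
       InputFile inputFile = table.io().newInputFile(path);
   
       // The method proposed in this PR: read the row-group start positions
       // from the Parquet footer without pulling in the Hadoop dependency.
       List<Long> splitOffsets = ParquetUtil.getSplitOffsets(inputFile);
   
       DataFile dataFile =
           DataFiles.builder(table.spec())
               .withPath(path)
               .withFormat(FileFormat.PARQUET)
               .withFileSizeInBytes(inputFile.getLength())
               .withRecordCount(recordCount)
               .withSplitOffsets(splitOffsets)
               .build();
   
       table.newAppend().appendFile(dataFile).commit();
     }
   }
   ```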



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

