RussellSpitzer commented on code in PR #15239:
URL: https://github.com/apache/iceberg/pull/15239#discussion_r2854863050
##########
spark/v4.1/spark/src/main/java/org/apache/iceberg/spark/SparkReadOptions.java:
##########
@@ -84,6 +84,11 @@ private SparkReadOptions() {}
public static final String STREAMING_MAX_ROWS_PER_MICRO_BATCH =
"streaming-max-rows-per-micro-batch";
+ // Controls whether streaming checkpoint operations use table FileIO or Hadoop FileSystem
+ public static final String STREAMING_CHECKPOINT_USE_TABLE_IO =
+ "streaming-checkpoint-use-table-io";
Review Comment:
I'm not sure whether this should be `use-table-io` or `use-hdfs`. Either is
probably fine, but I slightly prefer `use-hdfs`, since then it's clear the
opposite setting means using the table IO.
An enum may be overkill here, but maybe it's cleaner all around:
streaming-checkpoint = {table-io, hdfs}
streaming-checkpoint-default = table-io
?
Wdyt?
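A minimal sketch of what the enum-style option could look like; the option key, enum name, and parsing helper below are hypothetical illustrations, not names from this PR or from Iceberg's actual API:

```java
// Hypothetical sketch of an enum-based read option for the checkpoint IO
// backend. All names here (CheckpointIO, STREAMING_CHECKPOINT, fromOption)
// are assumptions for illustration only.
public class SparkReadOptionsSketch {

  // The two checkpoint IO backends under discussion:
  // the table's FileIO vs the Hadoop FileSystem.
  public enum CheckpointIO {
    TABLE_IO,
    HDFS;

    // Parse a lowercase-hyphenated option value, e.g. "table-io" -> TABLE_IO
    static CheckpointIO fromOption(String value) {
      return CheckpointIO.valueOf(value.toUpperCase().replace('-', '_'));
    }
  }

  public static final String STREAMING_CHECKPOINT = "streaming-checkpoint";
  public static final CheckpointIO STREAMING_CHECKPOINT_DEFAULT = CheckpointIO.TABLE_IO;

  public static void main(String[] args) {
    System.out.println(CheckpointIO.fromOption("table-io")); // prints TABLE_IO
    System.out.println(CheckpointIO.fromOption("hdfs"));     // prints HDFS
  }
}
```

One advantage of the enum over a boolean flag is that a third backend could be added later without deprecating the option key.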
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]