rdblue commented on a change in pull request #3292:
URL: https://github.com/apache/iceberg/pull/3292#discussion_r729976427
##########
File path: spark3/src/main/java/org/apache/iceberg/spark/actions/Spark3BinPackStrategy.java
##########
@@ -61,9 +61,15 @@ public Table table() {
SparkSession cloneSession = spark.cloneSession();
cloneSession.conf().set(SQLConf.ADAPTIVE_EXECUTION_ENABLED().key(), false);
+ long targetReadSize = splitSize(inputFileSize(filesToRewrite));
+ // Ideally this would be the row-group size, but the row group size is not guaranteed to be consistent
Review comment:
The row group size is a table setting. Can we use the table setting and assume it is accurate, instead of just assuming that there are 4 row groups in each file?
##########
File path: spark3/src/main/java/org/apache/iceberg/spark/source/SparkFilesScan.java
##########
@@ -37,9 +37,10 @@
class SparkFilesScan extends SparkBatchScan {
private final String taskSetID;
- private final long splitSize;
- private final int splitLookback;
- private final long splitOpenFileCost;
+ private final Long readSplitSize;
+ private final Long planTargetSize;
+ private final Integer splitLookback;
Review comment:
Why did the type of `splitLookback` and `splitOpenFileCost` change?
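Not from the PR, but to illustrate the trade-off being questioned: one common reason for moving from `long`/`int` to the boxed `Long`/`Integer` is that `null` can then mean "not set" and trigger a fallback to the table defaults. A hedged sketch of that pattern, with illustrative names only:

```java
import java.util.Map;
import org.apache.iceberg.TableProperties;
import org.apache.iceberg.util.PropertyUtil;

class SplitOptionSketch {
  // With a primitive long there is no way to represent "unset", so boxing lets a
  // null override fall back to the table property instead of silently using 0.
  static long resolveOpenFileCost(Map<String, String> tableProps, Long override) {
    if (override != null) {
      return override;
    }
    return PropertyUtil.propertyAsLong(
        tableProps,
        TableProperties.SPLIT_OPEN_FILE_COST,           // "read.split.open-file-cost"
        TableProperties.SPLIT_OPEN_FILE_COST_DEFAULT);  // 4 MB default
  }
}
```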
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]