kbendick commented on a change in pull request #3400:
URL: https://github.com/apache/iceberg/pull/3400#discussion_r739429030
##########
File path: core/src/main/java/org/apache/iceberg/BaseMetadataTable.java
##########
@@ -34,7 +34,7 @@
  * using a {@link StaticTableOperations}. This way no Catalog related calls are needed when reading the table data after
  * deserialization.
  */
-abstract class BaseMetadataTable implements Table, HasTableOperations, Serializable {
+public abstract class BaseMetadataTable implements Table, HasTableOperations, Serializable {

Review comment:
   Throwing out an idea, but could you add a `public static boolean isBaseMetadataTable` helper inside this package and use that for the metadata split size, to avoid making this class `public`? (A rough sketch of what I have in mind is at the end of this comment.)

##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/source/SparkBatchQueryScan.java
##########
@@ -74,17 +100,31 @@
       throw new IllegalArgumentException("Cannot only specify option end-snapshot-id to do incremental scan");
     }
 
-    // look for split behavior overrides in options
-    this.splitSize = Spark3Util.propertyAsLong(options, SparkReadOptions.SPLIT_SIZE, null);
-    this.splitLookback = Spark3Util.propertyAsInt(options, SparkReadOptions.LOOKBACK, null);
-    this.splitOpenFileCost = Spark3Util.propertyAsLong(options, SparkReadOptions.FILE_OPEN_COST, null);
+    this.splitSize = table instanceof BaseMetadataTable ? readConf.metadataSplitSize() : readConf.splitSize();
+    this.splitLookback = readConf.splitLookback();
+    this.splitOpenFileCost = readConf.splitOpenFileCost();
+    this.runtimeFilterExpressions = Lists.newArrayList();
   }
 
-  @Override
-  protected List<CombinedScanTask> tasks() {
-    if (tasks == null) {
+  private Set<Integer> specIds() {
+    if (specIds == null) {
+      Set<Integer> specIdSet = Sets.newHashSet();
+      for (FileScanTask file : files()) {
+        specIdSet.add(file.spec().specId());
+      }
+      this.specIds = specIdSet;
+    }
+
+    return specIds;
+  }
+
+  private List<FileScanTask> files() {

Review comment:
   Yeah, I agree with that. I had to open the file and scroll for a while to confirm that we return the files as-is when they are non-null. I'd personally go with the early return when the files are already present, since the `// lazy cache of files` comment above makes the non-null case easy to follow (rough sketch below). But I don't have that strong of an opinion either way.
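   For the first comment, a rough sketch of the kind of helper I'm suggesting. The class it lives on and the method name are just placeholders; any public class in `org.apache.iceberg` would work, the point is only that callers outside the package get a boolean check instead of needing `BaseMetadataTable` to be public:

   ```java
   // Hypothetical helper in the org.apache.iceberg package (class name made up),
   // so that BaseMetadataTable itself can stay package-private.
   package org.apache.iceberg;

   public class MetadataTableChecks {

     private MetadataTableChecks() {
     }

     // True if the table is one of the metadata tables backed by BaseMetadataTable.
     // Callers like SparkBatchQueryScan can use this to pick the metadata split size
     // instead of doing an `instanceof BaseMetadataTable` check on a public class.
     public static boolean isBaseMetadataTable(Table table) {
       return table instanceof BaseMetadataTable;
     }
   }
   ```

   The call site would then become something like `this.splitSize = MetadataTableChecks.isBaseMetadataTable(table) ? readConf.metadataSplitSize() : readConf.splitSize();`.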
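   And for the `files()` discussion, the early-return shape I'd lean toward. This is only a sketch: it assumes the class keeps the lazily cached `files` field with its `// lazy cache of files` comment and plans from a `scan` field, so the exact field names here are assumptions:

   ```java
   // lazy cache of files
   private List<FileScanTask> files = null;

   private List<FileScanTask> files() {
     if (files != null) {
       // early return: once planned, the cached tasks are returned as-is
       return files;
     }

     // plan the scan once and cache the resulting tasks
     try (CloseableIterable<FileScanTask> filesIterable = scan.planFiles()) {
       this.files = Lists.newArrayList(filesIterable);
     } catch (IOException e) {
       throw new UncheckedIOException("Failed to close table scan: " + scan, e);
     }

     return files;
   }
   ```

   The early return keeps the planning branch at the top level of the method, which reads well next to the cache comment, but as said above I'm fine either way.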