SreeramGarlapati commented on a change in pull request #2611:
URL: https://github.com/apache/iceberg/pull/2611#discussion_r635010805
##########
File path: spark3/src/main/java/org/apache/iceberg/spark/source/SparkScanBuilder.java
##########
@@ -159,8 +159,14 @@ private Schema schemaWithMetadataColumns() {
   @Override
   public Scan build() {
-    return new SparkBatchQueryScan(
-        spark, table, caseSensitive, schemaWithMetadataColumns(), filterExpressions, options);
+    // TODO: understand how to differentiate that this is a spark streaming microbatch scan.
+    if (false) {
Review comment:
@aokolnychyi / @RussellSpitzer / @holdenk
Spark3 provides the ScanBuilder abstraction for defining all types of Scans (Batch, MicroBatch & Continuous), but the current implementation / class modelling has SparkBatchScan as the only Scan implementation.
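For context, the contract here is Spark 3's `org.apache.spark.sql.connector.read.Scan` interface, paraphrased below (the method names and signatures match the real API; the bodies are simplified, though the real defaults also just throw):

```java
import org.apache.spark.sql.connector.read.Batch;
import org.apache.spark.sql.connector.read.streaming.ContinuousStream;
import org.apache.spark.sql.connector.read.streaming.MicroBatchStream;
import org.apache.spark.sql.types.StructType;

// Paraphrase of Spark 3's Scan interface: one Scan object, three possible
// execution modes. Spark invokes exactly one of the to*() conversions,
// chosen by the query type, on the Scan returned from ScanBuilder.build().
public interface ScanContract {
  StructType readSchema();

  default Batch toBatch() {
    throw new UnsupportedOperationException("batch scan not supported");
  }

  default MicroBatchStream toMicroBatchStream(String checkpointLocation) {
    throw new UnsupportedOperationException("micro-batch scan not supported");
  }

  default ContinuousStream toContinuousStream(String checkpointLocation) {
    throw new UnsupportedOperationException("continuous scan not supported");
  }
}
```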
Looking at the concerns SparkBatchScan carries, from maintaining the state of a single snapshotId to read from, to the asOfTimestamp option, to features like vectorized reads, none of them seem relevant to streaming scans. So I feel we should split streaming scans out into a separate class. Does this thought process make sense?
If we go this route, do you folks know how to hand different Scan objects to Spark based on Batch vs Streaming?
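Here is a minimal sketch of the direction I mean; `SketchScan` and its method bodies are hypothetical, only `Scan`, `Batch`, and `MicroBatchStream` are the real Spark interfaces. The key point is that `build()` itself never learns the mode: Spark calls `toBatch()` for batch queries and `toMicroBatchStream(...)` for streaming ones, so the split can live behind those two methods rather than behind an `if` inside `build()`:

```java
import org.apache.spark.sql.connector.read.Batch;
import org.apache.spark.sql.connector.read.Scan;
import org.apache.spark.sql.connector.read.streaming.MicroBatchStream;
import org.apache.spark.sql.types.StructType;

// Hypothetical sketch, not part of this PR.
class SketchScan implements Scan {
  private final StructType schema;

  SketchScan(StructType schema) {
    this.schema = schema;
  }

  @Override
  public StructType readSchema() {
    return schema;
  }

  // Batch queries land here: the batch-only concerns (single snapshotId
  // state, asOfTimestamp, vectorized reads) would be confined to whatever
  // class gets constructed in this method.
  @Override
  public Batch toBatch() {
    throw new UnsupportedOperationException("sketch: build the batch scan here");
  }

  // Streaming queries land here instead, so a dedicated streaming class
  // could be constructed without carrying any of the batch state.
  @Override
  public MicroBatchStream toMicroBatchStream(String checkpointLocation) {
    throw new UnsupportedOperationException("sketch: build the streaming scan here");
  }
}
```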