HeartSaVioR commented on code in PR #36963:
URL: https://github.com/apache/spark/pull/36963#discussion_r904529825
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala:
##########
@@ -113,7 +114,9 @@ class MicroBatchExecution(
v1.get.asInstanceOf[StreamingRelation].dataSource.createSource(metadataPath)
nextSourceId += 1
logInfo(s"Using Source [$source] from DataSourceV2 named '$srcName' $dsStr")
- StreamingExecutionRelation(source, output)(sparkSession)
+ // We don't have a catalog table but may have a table identifier. Given this is about
Review Comment:
Actually, this else branch handles an edge case where the data source is
based on DSv2 and implements continuous read but does not implement
microbatch read. It's a really rare case.
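As a rough sketch of the branch selection being discussed (using hypothetical, simplified stand-ins for Spark's DSv2 capability model, not the real `org.apache.spark.sql.connector` API), the else branch only fires for sources that advertise continuous read without microbatch read:

```scala
// Hypothetical capability model, simplified for illustration only.
sealed trait ReadCapability
case object MicroBatchRead extends ReadCapability
case object ContinuousRead extends ReadCapability

final case class Source(name: String, capabilities: Set[ReadCapability])

// Prefer the microbatch relation; fall back to the else branch only for
// the rare continuous-only DSv2 source described in the review comment.
def chooseBranch(src: Source): String =
  if (src.capabilities.contains(MicroBatchRead)) "microbatch-relation"
  else "continuous-only-fallback"

// A source supporting both capabilities takes the microbatch path;
// a continuous-only source takes the fallback else branch.
println(chooseBranch(Source("rate", Set(MicroBatchRead, ContinuousRead))))
println(chooseBranch(Source("continuous-only", Set(ContinuousRead))))
```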
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
For additional commands, e-mail: [email protected]