rdblue commented on a change in pull request #23702: [SPARK-26785][SQL] data source v2 API refactor: streaming write
URL: https://github.com/apache/spark/pull/23702#discussion_r258168593
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala
##########
@@ -513,13 +514,16 @@ class MicroBatchExecution(
     val triggerLogicalPlan = sink match {
       case _: Sink => newAttributePlan
-      case s: StreamingWriteSupportProvider =>
-        val writer = s.createStreamingWriteSupport(
-          s"$runId",
-          newAttributePlan.schema,
-          outputMode,
-          new DataSourceOptions(extraOptions.asJava))
-        WriteToDataSourceV2(new MicroBatchWrite(currentBatchId, writer), newAttributePlan)
+      case s: SupportsStreamingWrite =>
+        // TODO: we should translate OutputMode to concrete write actions like truncate, but
+        // the truncate action is being developed in SPARK-26666.
Review comment:
Okay, thanks for the clarification. I have no problem keeping this as-is in
v1. I just want to make sure we fix those problems when we introduce the
features in v2. Given that we have a proposal for how to do that, I think we
should use it instead of continuing to add code that uses an unreliable
mechanism.
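
For what it's worth, here is a minimal sketch of the kind of translation that TODO points at, once SPARK-26666 lands. The names WriteBuilder, SupportsTruncate, and buildForStreaming() are placeholders for the proposal, not APIs introduced by this PR; only OutputMode is the existing public API.

// Hypothetical sketch only: WriteBuilder, SupportsTruncate and buildForStreaming()
// stand in for the SPARK-26666 proposal and are not part of this PR.
import org.apache.spark.sql.streaming.OutputMode

trait StreamingWrite
trait WriteBuilder {
  def buildForStreaming(): StreamingWrite
}
trait SupportsTruncate extends WriteBuilder {
  // Ask the sink to truncate existing data before writing the new batch.
  def truncate(): WriteBuilder
}

// Translate the declarative OutputMode into a concrete write action up front,
// instead of passing the mode through and relying on each sink to interpret it.
def configureStreamingWrite(builder: WriteBuilder, outputMode: OutputMode): StreamingWrite = {
  if (outputMode == OutputMode.Complete()) {
    builder match {
      case t: SupportsTruncate => t.truncate().buildForStreaming()
      case _ => throw new IllegalArgumentException(
        s"Sink does not support Complete mode (truncate): $builder")
    }
  } else {
    // Append (and Update, in this simplified sketch) just add new rows per batch.
    builder.buildForStreaming()
  }
}

With something like this, MicroBatchExecution could fail fast when a sink cannot satisfy the requested mode, rather than leaving that check to each sink's interpretation of OutputMode.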