Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/20369#discussion_r163533890
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala ---
@@ -281,11 +281,9 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
           trigger = trigger)
       } else {
         val ds = DataSource.lookupDataSource(source, df.sparkSession.sessionState.conf)
-        val sink = (ds.newInstance(), trigger) match {
-          case (w: ContinuousWriteSupport, _: ContinuousTrigger) => w
-          case (_, _: ContinuousTrigger) => throw new UnsupportedOperationException(
-            s"Data source $source does not support continuous writing")
-          case (w: MicroBatchWriteSupport, _) => w
+        val disabledSources = df.sparkSession.sqlContext.conf.disabledV2StreamingWriters.split(",")
--- End diff --
is this option created for data sources that implement both v1 and v2 APIs?
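A minimal sketch of the gating such a comma-separated disabled-writers list enables, with hypothetical trait and source names standing in for Spark's actual write-support interfaces (this is an illustration of the mechanism, not the PR's implementation):

```scala
// Illustrative only: a source implementing both a v1 and a v2 write API,
// and a selector that falls back to v1 when the source's name appears in
// a comma-separated "disabled v2 writers" config string.
object SinkSelection {
  // Hypothetical markers for the v1 and v2 write APIs.
  trait V1Sink
  trait V2Writer

  // A source that implements both APIs.
  case class DualApiSource(name: String) extends V1Sink with V2Writer

  def chooseApi(source: DualApiSource, disabledV2Writers: String): String = {
    // Parse the comma-separated config value into a set of disabled names.
    val disabled =
      disabledV2Writers.split(",").map(_.trim).filter(_.nonEmpty).toSet
    if (disabled.contains(source.name)) "v1" else "v2"
  }
}
```

Under this sketch, `chooseApi(DualApiSource("kafka"), "kafka,console")` selects the v1 path, while a source not named in the config keeps the v2 path.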
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]