Github user jose-torres commented on a diff in the pull request:
https://github.com/apache/spark/pull/20097#discussion_r159497524
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala ---
@@ -167,6 +167,24 @@ final class DataStreamReader private[sql](sparkSession: SparkSession) extends Logging
className = source,
options = extraOptions.toMap)
ds match {
+ case s: MicroBatchReadSupport =>
+ val tempReader = s.createMicroBatchReader(
+ java.util.Optional.ofNullable(userSpecifiedSchema.orNull),
+ Utils.createTempDir(namePrefix = s"temporaryReader").getCanonicalPath,
+ options)
+ // Generate the V1 node to catch errors thrown within generation.
+ try {
+ StreamingRelation(v1DataSource)
+ } catch {
+ case e: UnsupportedOperationException
--- End diff ---
On reflection, there's actually a better way to do this that doesn't need to
use exceptions as control flow. I didn't notice it before because
lookupDataSource returns Class[_] for some reason.
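
To make the contrast concrete, here is a self-contained sketch using
hypothetical traits (Source, V1Capable) rather than Spark's real interfaces;
it places the exception-driven style from the quoted diff next to the
type-match alternative being suggested:

    // Hypothetical traits standing in for the real source interfaces;
    // a source without a V1 path throws when asked for its V1 node.
    trait Source {
      def v1Node(): String = throw new UnsupportedOperationException("no V1 path")
    }
    trait V1Capable extends Source {
      override def v1Node(): String = "StreamingRelation"
    }

    // Style in the quoted diff: build the node and treat the exception
    // as the "this source has no V1 path" signal.
    def viaException(src: Source): Option[String] =
      try Some(src.v1Node())
      catch { case _: UnsupportedOperationException => None }

    // Suggested alternative: decide up front by matching on the
    // instance's type, so no exception is thrown or caught.
    def viaMatch(src: Source): Option[String] = src match {
      case v1: V1Capable => Some(v1.v1Node())
      case _             => None
    }

The match makes the capability check explicit in the types instead of relying
on what the node construction happens to throw, and a new source type that
lacks V1 support simply falls through to None.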
---