Github user jose-torres commented on a diff in the pull request:
https://github.com/apache/spark/pull/20243#discussion_r162132681
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousExecution.scala ---
@@ -69,7 +69,7 @@ class ContinuousExecution(
           ContinuousExecutionRelation(source, extraReaderOptions, output)(sparkSession)
         })
       case StreamingRelationV2(_, sourceName, _, _, _) =>
-        throw new AnalysisException(
+        throw new UnsupportedOperationException(
--- End diff ---
I think there's an argument that it is - you're asking the data source (which is correct in the sense that it's a real, existing source) to do a type of read/write it doesn't support.
The primary motivation is that the existing code has already made the choice to throw an UnsupportedOperationException when you try to stream from a source that only supports batch reads.
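For illustration, a minimal sketch of that convention; the `SourceV2` trait and `supportsContinuous` flag here are hypothetical stand-ins, not the actual Spark DataSourceV2 API:

```scala
// Hedged sketch of the convention described above: the source resolved
// correctly, so the failure is about an unsupported capability rather
// than an analysis error. Names below are illustrative only.
object ContinuousSupportCheck {
  trait SourceV2 {
    def sourceName: String
    def supportsContinuous: Boolean
  }

  def validateContinuous(source: SourceV2): Unit = {
    if (!source.supportsContinuous) {
      // Only the requested execution mode is unsupported, hence
      // UnsupportedOperationException rather than AnalysisException.
      throw new UnsupportedOperationException(
        s"Data source ${source.sourceName} does not support continuous processing")
    }
  }
}
```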