rdblue edited a comment on issue #24129: [SPARK-27190][SQL] add table capability for streaming
URL: https://github.com/apache/spark/pull/24129#issuecomment-482707941
 
 
   The check you linked to runs after the plan is analyzed because it is written as a rule that [transforms the `analyzedPlan`](https://github.com/apache/spark/blob/v2.4.1/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MicroBatchExecution.scala#L83). I think an analyzer rule would catch problems earlier, but only slightly.
   
   But the main point is not when this is caught; it is to avoid scattering rules and validations throughout the codebase. Certainly, we should validate that the execution mode is compatible with the plan at the point where the execution mode is determined. But we also need to check that the plan is internally consistent to the extent possible, because that is what the analyzer guarantees.
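   To illustrate the distinction, here is a minimal sketch of the analyzer-rule approach. It uses a toy plan model, not Spark's real `LogicalPlan` or capability API (the class and method names are hypothetical), and shows a single validation that walks the analyzed plan once and fails fast on an incompatible source, instead of scattering the same check across execution entry points:

```scala
// Toy stand-ins for Spark's plan nodes (hypothetical, for illustration only).
sealed trait LogicalPlan {
  def children: Seq[LogicalPlan] = Nil
}

// A leaf node representing a streaming source and whether it supports
// the requested execution mode (here simplified to micro-batch reads).
case class StreamingRelation(table: String, supportsMicroBatch: Boolean) extends LogicalPlan

// A unary node, just enough structure to make the tree walk meaningful.
case class Project(child: LogicalPlan) extends LogicalPlan {
  override def children: Seq[LogicalPlan] = Seq(child)
}

// Analyzer-style validation: one recursive pass over the plan that
// rejects any source incompatible with the execution mode.
object CheckStreamingCapabilities {
  def apply(plan: LogicalPlan): Unit = plan match {
    case StreamingRelation(name, false) =>
      throw new IllegalArgumentException(
        s"Table $name does not support micro-batch reads")
    case other =>
      other.children.foreach(apply)
  }
}
```

Because the check is a single rule applied to the whole tree, any new plan node is covered automatically, which is the internal-consistency guarantee the analyzer is meant to provide.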

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
