cloud-fan commented on issue #24129: [SPARK-27190][SQL] add table capability for streaming
URL: https://github.com/apache/spark/pull/24129#issuecomment-482792336
 
 
   > The check you linked to is done after the plan is analyzed
   
   The whole process is: the logical plan is first analyzed by the batch analyzer; then some transformations run, such as the streaming scan capability check and adding the streaming write logical plan; finally the plan goes through the streaming analyzer/optimizer/planner to produce the physical plan.
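   To make the ordering concrete, here is a minimal, hypothetical sketch of such a capability check. The enum and class names loosely echo DSv2's `TableCapability` idea, but they are illustrative only, not Spark's actual internals: the point is that the check can only run once the execution mode is known to be streaming.

```java
import java.util.*;

// Hypothetical sketch of a streaming scan capability check, run after
// batch analysis once the query is known to be streaming. Names are
// modeled on the spirit of DSv2's TableCapability, not Spark's real API.
public class StreamingCapabilityCheck {

    public enum Capability { BATCH_READ, MICRO_BATCH_READ, CONTINUOUS_READ, STREAMING_WRITE }

    public static class Table {
        final String name;
        final Set<Capability> capabilities;
        public Table(String name, Set<Capability> capabilities) {
            this.name = name;
            this.capabilities = capabilities;
        }
    }

    // Every source table must support at least one streaming read mode;
    // returns the names of tables that support neither.
    public static List<String> unsupportedStreamingSources(List<Table> tables) {
        List<String> bad = new ArrayList<>();
        for (Table t : tables) {
            boolean streamable = t.capabilities.contains(Capability.MICRO_BATCH_READ)
                              || t.capabilities.contains(Capability.CONTINUOUS_READ);
            if (!streamable) {
                bad.add(t.name);
            }
        }
        return bad;
    }
}
```

   A check like this cannot live in the batch analyzer (the streaming mode is not known yet) and would be too late in a streaming analyzer rule, which is why it sits between the two phases.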
   
   We can't move the check into a streaming analyzer rule, as that is too late. But it's also hard to move it into a batch analyzer rule, because the streaming execution mode has not been determined at that point.
   
   > The point is to avoid rules and validations scattered throughout the 
codebase.
   
   Again, I totally agree, but there is still a long way to go to fix the problem completely. For example, the batch scan capability check is done in `DataFrameReader`, not in an analyzer rule. This PR doesn't make that problem worse.
   
   If you have ideas that would centralize the checks a bit more, I'll be happy to implement them. Otherwise, I'd like to move forward and fix this problem further in another PR.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
