Github user stczwd commented on the issue:

    https://github.com/apache/spark/pull/21306
  
    > @stczwd, thanks for taking a look at this. What are the differences 
between batch and stream DDL that you think will come up?
    
    1. A source needs to be defined for a stream table.
    2. A stream table requires a special flag to indicate that it is a stream table.
    3. Users and programs need to be aware of whether a table is a stream table.
    4. What would we do if the user wants to convert a stream table to a batch table, or a batch table to a stream table?
    5. What does the stream table metadata you define look like? What is the difference between batch table metadata and stream table metadata?
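    To make points 1 and 2 concrete, a stream table DDL could look something like the sketch below. Note that the `STREAM` keyword and the Kafka options shown are illustrative assumptions on my part, not existing Spark SQL syntax:

    ```sql
    -- Hypothetical DDL: the STREAM keyword and the kafka source options
    -- below are assumptions for illustration, not actual Spark syntax.
    CREATE STREAM TABLE user_events (
      user_id BIGINT,
      action  STRING,
      ts      TIMESTAMP
    )
    USING kafka                                   -- point 1: source must be defined
    OPTIONS (
      'kafka.bootstrap.servers' = 'host:9092',
      'subscribe' = 'user_events'
    );
    -- point 2: the STREAM keyword (or an equivalent table property)
    -- would be the flag marking this as a stream table in the catalog.
    ```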
    I defined the stream table based on DataSource V1 (see [Support SQLStreaming in Spark](https://github.com/apache/spark/pull/22575)), but found that the problems above cannot be completely solved with the catalog API.
    How would you solve these in the new Catalog?

