Github user jose-torres commented on a diff in the pull request:
    --- Diff: 
    @@ -27,6 +28,9 @@
      * Streaming queries are divided into intervals of data called epochs, with a monotonically
      * increasing numeric ID. This writer handles commits and aborts for each successive epoch.
    + *
    + * Note that StreamWriter implementations should provide instances of
    + * {@link StreamingDataWriterFactory}.
    --- End diff ---
    That wouldn't be compatible with SupportsWriteInternalRow. We could add a 
StreamingSupportsWriteInternalRow, but that seems much more confusing both for 
Spark developers and for data source implementers.
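For illustration, the relationship described in the javadoc diff can be sketched with simplified stand-ins. The names StreamWriter, StreamingDataWriterFactory, and DataWriter come from the discussion above, but the method signatures below are assumptions for the sketch, not the actual Spark DataSourceV2 API; commit/abort coordination, InternalRow handling, and serialization are omitted.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: simplified stand-ins for the interfaces under
// discussion; the real DataSourceV2 signatures differ.
public class EpochWriterSketch {

    // Per-partition writer for records of type T.
    interface DataWriter<T> {
        void write(T record);
    }

    // Factory that creates a writer for a given partition and epoch.
    interface StreamingDataWriterFactory<T> {
        DataWriter<T> createWriter(int partitionId, long epochId);
    }

    // Epoch-level writer: provides factories, then commits or aborts
    // each monotonically increasing epoch by ID.
    interface StreamWriter<T> {
        StreamingDataWriterFactory<T> createWriterFactory();
        void commit(long epochId);
        void abort(long epochId);
    }

    // Toy implementation that buffers records and commits them per epoch.
    static class InMemoryStreamWriter implements StreamWriter<String> {
        final Map<Long, List<String>> committed = new HashMap<>();
        private final List<String> pending = new ArrayList<>();

        public StreamingDataWriterFactory<String> createWriterFactory() {
            // Every writer the factory hands out feeds the pending buffer.
            return (partitionId, epochId) -> pending::add;
        }

        public void commit(long epochId) {
            committed.put(epochId, new ArrayList<>(pending));
            pending.clear();
        }

        public void abort(long epochId) {
            pending.clear();
        }
    }

    public static void main(String[] args) {
        InMemoryStreamWriter w = new InMemoryStreamWriter();
        DataWriter<String> writer = w.createWriterFactory().createWriter(0, 1L);
        writer.write("a");
        writer.write("b");
        w.commit(1L);
        System.out.println(w.committed.get(1L)); // prints [a, b]
    }
}
```

Under this shape, a streaming sink only ever exposes factories through createWriterFactory(), which is why a separate streaming variant of the row-conversion trait would add a second, parallel hierarchy.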

