Input datasets that represent an input data stream only support appending
new rows, since the stream is modeled as an unbounded table where new data
arriving in the stream corresponds to new rows being appended to the table.
For transformed datasets generated from the input dataset, rows can be
updated and removed as new data arrives.
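The unbounded-table model described above can be sketched with a toy snippet. This is plain Python, not Spark code: `append_batch` and `word_counts` are illustrative names, not Spark API. The input table only ever grows, while the rows of the derived (aggregated) table change as input accumulates.

```python
# Toy model of Structured Streaming's unbounded table (not Spark API).
from collections import Counter

input_table = []  # append-only: models the unbounded input stream


def append_batch(rows):
    """New stream data only ever appends rows to the input table."""
    input_table.extend(rows)


def word_counts():
    """Transformed dataset: its rows are updated as the input grows."""
    counts = Counter()
    for line in input_table:
        counts.update(line.split())
    return dict(counts)


append_batch(["spark streaming", "spark sql"])
r1 = word_counts()  # {'spark': 2, 'streaming': 1, 'sql': 1}

append_batch(["structured streaming"])
r2 = word_counts()  # the 'streaming' row is updated: 1 -> 2
```

The point of the sketch is the asymmetry: appends are the only change ever made to the input table, yet the aggregated result sees its existing rows revised on every new batch, which is why output semantics for transformed datasets need more than append.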
Hello,
I am interested in using the new Structured Streaming feature of Spark SQL
and am currently running some experiments against code at HEAD. I would like
have a better understanding of how deletion should be handled in a
structured streaming setting.
Given some incremental query computing an arb