We have a stream of products, each identified by an ID, and each product's
price may be updated over time.
We want a running count of the products whose current price exceeds £30.
Schema: Product(productID: Int, price: Int)
To handle these updates, we currently have…
——
val products =
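The actual query is cut off above, so as a point of reference, here is a minimal plain-Scala sketch (hypothetical names, not the Spark Structured Streaming query itself) of the intended semantics: each incoming record replaces the product's current price, and the answer is the count of products whose latest price exceeds £30.

```scala
// Hypothetical sketch of the desired semantics: fold a stream of
// (productID, price) updates into a map of latest prices, then count
// the products whose current price exceeds £30.
case class Product(productID: Int, price: Int)

def countOver30(updates: Seq[Product]): Int = {
  // Later updates overwrite earlier ones for the same productID.
  val latest: Map[Int, Int] =
    updates.foldLeft(Map.empty[Int, Int]) { (acc, p) =>
      acc + (p.productID -> p.price)
    }
  latest.values.count(_ > 30)
}

// Example: product 1 is updated from 25 to 35; product 2 stays at 40.
val updates = Seq(Product(1, 25), Product(2, 40), Product(1, 35))
println(countOver30(updates)) // prints 2
```

In Structured Streaming terms this would be a keyed "latest value" step followed by a filtered count, which is where the one-aggregation-per-query limit discussed below bites.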
I had the same question too. My use case is to take a streaming source,
perform a few steps (some aggregations and transformations), and send the
result to multiple output sinks.
On Fri, Dec 16, 2016 at 3:58 AM, Michael Armbrust wrote:
What is your use case?
On Thu, Dec 15, 2016 at 10:43 AM, ljwagerfield wrote:
The current version of Spark (2.0.2) only supports one aggregation per
structured stream (and will throw an exception if multiple aggregations are
applied).
Roughly when will Spark support multiple aggregations?
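To make the limitation concrete: the queries Spark 2.0.2 rejects are ones that chain a second aggregation on top of a first, e.g. a per-key aggregation followed by a global one. The plain-Scala sketch below (illustrative only, not Spark code) shows the shape of such a two-level computation; expressed as two chained streaming aggregations, Spark 2.0.2 would reject it with an AnalysisException.

```scala
// Two-level aggregation sketch in plain Scala (not Spark), to show the
// kind of query the streaming planner rejects: aggregate per key, then
// aggregate the per-key results again.
val sales = Seq(("a", 10), ("a", 20), ("b", 5))

// First aggregation: total per product.
val perProduct: Map[String, Int] =
  sales.groupBy(_._1).map { case (k, vs) => k -> vs.map(_._2).sum }

// Second aggregation: maximum of the per-product totals.
val maxTotal = perProduct.values.max
println(maxTotal) // prints 30
```

On a static Dataset both steps are fine; it is only the streaming plan that is limited to a single aggregation.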