Hi guys,

Is it possible to maintain a single stream RDD that is updated with the contents of each new batch RDD?

I know we can use updateStateByKey for aggregation,
but here I just want to keep track of all the historical original content.
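To make the idea concrete: I could in principle use updateStateByKey with a state that accumulates every value seen so far, rather than an aggregate. A minimal sketch of such an update function (the DStream wiring in the comments, the checkpoint path, and the String value type are just hypothetical illustrations):

```scala
// Update function for updateStateByKey: instead of folding new values
// into an aggregate, append them to the full history kept in state.
def keepHistory(newValues: Seq[String], state: Option[Seq[String]]): Option[Seq[String]] =
  Some(state.getOrElse(Seq.empty[String]) ++ newValues)

// Wiring inside a StreamingContext (stateful ops need a checkpoint dir):
//   ssc.checkpoint("hdfs:///tmp/checkpoints")            // hypothetical path
//   val history = pairs.updateStateByKey(keepHistory _)  // pairs: DStream[(K, String)]

// Plain-Scala demonstration of how the state would grow batch by batch:
val batch1 = keepHistory(Seq("a", "b"), None)
val batch2 = keepHistory(Seq("c"), batch1)
println(batch2.get.mkString(","))  // prints "a,b,c"
```

But with this approach the state (and so the memory and checkpoint footprint) grows without bound, hence my question whether there is a better streaming-native way.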

I also noticed that we could save the content to Redis or another storage system,
but can we make this happen using Spark Streaming's own mechanisms?

Thanks for any suggestions.

Regards,
Hawk
