Jianneng,
On Wed, Oct 8, 2014 at 8:44 AM, Jianneng Li wrote:
>
> I understand that Spark Streaming uses micro-batches to implement
> streaming, while traditional streaming systems use the record-at-a-time
> processing model. The performance benefit of the former is throughput, and
> that of the latter is latency. How much work would it take to implement
> record-at-a-time for Spark Streaming? Would it be something that is
> feasible to prototype in one or two months?
> Thanks,
> Jianneng
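
For context, here is a minimal sketch (my own illustration, not code from the thread) of the micro-batch model the question refers to: the batch interval passed to StreamingContext sets the micro-batch granularity, so all records arriving within one interval are processed together as a single RDD, and end-to-end latency cannot drop below that interval. The local master, socket source, host, and port are arbitrary choices for the example.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object MicroBatchSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("MicroBatchSketch")
          .setMaster("local[2]")

        // The batch interval (1 second here, chosen arbitrarily) is the
        // micro-batch granularity: each interval's records become one RDD.
        val ssc = new StreamingContext(conf, Seconds(1))

        // Hypothetical text source; every second, the lines received in
        // that window are processed as one small batch job.
        val lines = ssc.socketTextStream("localhost", 9999)
        val counts = lines
          .flatMap(_.split(" "))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
        counts.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }

A record-at-a-time system would instead hand each incoming record to an operator as it arrives, trading the per-batch scheduling and fault-tolerance machinery above for lower latency.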
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Record-at-a-time-model-for-Spark-Streaming-tp15885.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.