On Sun, Nov 9, 2014 at 1:51 AM, Tathagata Das <tathagata.das1...@gmail.com>
wrote:

> This causes a scalability vs. latency tradeoff - if your limit is 1000
> tasks per second (simplifying from 1500), you could either configure
> it to use 100 receivers at 100 ms batches (10 blocks/sec), or 1000
> receivers at 1 second batches.
>
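The arithmetic behind TD's two configurations can be sketched as follows (a minimal illustration, assuming the simple case of one task per block per batch):

```python
# Tasks per second generated by receivers, assuming each block
# becomes one task (the simplified 1000-tasks/sec budget above).
def tasks_per_sec(receivers, blocks_per_sec_per_receiver):
    return receivers * blocks_per_sec_per_receiver

# 100 receivers emitting blocks every 100 ms (10 blocks/sec each):
print(tasks_per_sec(100, 10))   # 1000

# 1000 receivers emitting one block per 1-second batch:
print(tasks_per_sec(1000, 1))   # 1000
```

Both configurations saturate the same task budget; the tradeoff is only in receiver count versus batch latency.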

This raises an interesting question, TD.

Do we have a benchmark for Spark Streaming that tests it at the extreme on
some key metric, say processed messages per second per node? Something that
would stress Spark's ability to schedule and process tasks quickly enough.

Given such a benchmark, it would probably be interesting to see how -- if
at all -- Sparrow has an impact on Spark Streaming's performance.

Nick
