Too bad Nick, I don't have anything immediately ready that tests Spark
Streaming with those extreme settings. :)

On Mon, Nov 10, 2014 at 9:56 AM, Nicholas Chammas
<nicholas.cham...@gmail.com> wrote:
> On Sun, Nov 9, 2014 at 1:51 AM, Tathagata Das <tathagata.das1...@gmail.com>
> wrote:
>>
>> This causes a scalability vs. latency tradeoff - if your limit is 1000
>> tasks per second (simplifying from 1500), you could either configure
>> it to use 100 receivers at 100 ms batches (10 blocks/sec), or 1000
>> receivers at 1 second batches.
>
>
> This raises an interesting question, TD.
>
> Do we have a benchmark for Spark Streaming that tests it at the extreme for
> some key metric, perhaps processed messages per second per node? Something
> that would stress Spark's ability to schedule and process tasks quickly enough.
>
> Given such a benchmark, it would probably be interesting to see how -- if at
> all -- Sparrow has an impact on Spark Streaming's performance.
>
> Nick
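
To make the arithmetic in the quoted tradeoff concrete, here is a rough
sketch (my own illustration, not anything in Spark itself). It assumes each
block a receiver produces turns into one task per batch, so the scheduler
sees roughly receivers * blocks-per-receiver-per-second tasks per second:

  // Back-of-the-envelope model of the receivers vs. batch-interval tradeoff.
  // Assumption (illustrative, not from Spark's code): one task per block,
  // so tasks/sec ~= numReceivers * blocks produced per receiver per second.
  object ReceiverTaskRateSketch {
    def tasksPerSecond(numReceivers: Int, blocksPerReceiverPerSec: Double): Double =
      numReceivers * blocksPerReceiverPerSec

    def main(args: Array[String]): Unit = {
      // 100 receivers, 100 ms batches, 10 blocks/sec each -> ~1000 tasks/sec
      println(tasksPerSecond(100, 10))
      // 1000 receivers, 1 second batches, 1 block/sec each -> ~1000 tasks/sec
      println(tasksPerSecond(1000, 1))
    }
  }

Both configurations hit the same ~1000 tasks/sec scheduling ceiling; the
difference is whether you spend it on lower latency (short batches, fewer
receivers) or on more receivers (higher ingest parallelism, longer batches).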
