It just limits the maximum number of records a given executor needs to
deal with in a given batch.

Typical usage is when you're starting a stream from the beginning of a
Kafka log, or after a long downtime, and don't want ALL of the messages
in the first batch.
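As a rough sketch: since the setting is a rate in records per second per partition, the cap on each batch works out to maxRatePerPartition × number of partitions × batch interval. A back-of-envelope helper using the numbers from the question below (the 200 records/sec figure is purely illustrative, not a recommendation):

```python
def max_records_per_batch(rate_per_partition, num_partitions, batch_interval_s):
    """Upper bound on records pulled in one batch when
    spark.streaming.maxRatePerPartition (records/sec/partition) is set."""
    return rate_per_partition * num_partitions * batch_interval_s

# 48 partitions, 2-second batches, illustrative rate of 200 rec/sec/partition:
print(max_records_per_batch(200, 48, 2))  # 19200 records per batch at most
```

If that cap comfortably exceeds your steady-state volume (~15k tuples per batch here), the setting only kicks in during catch-up scenarios like the ones above.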



On Thu, Aug 13, 2015 at 8:50 AM, allonsy <luke1...@gmail.com> wrote:

> Hello everyone,
>
> in the new Kafka Direct API, what are the benefits of setting a value for
> *spark.streaming.maxRatePerPartition*?
>
> In my case, I have 2 seconds batches consuming ~15k tuples from a topic
> split into 48 partitions (4 workers, 16 total cores).
>
> Is there any particular value I should be setting the parameter to, in
> order to achieve better performance? And what happens if I don't set the
> value at all?
>
> I could not find any detailed explanation about this.
>
> Thank you!
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/spark-streaming-maxRatePerPartition-parameter-what-are-the-benefits-tp24241.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
