If your object is larger than 10 MB, you may need to increase spark.akka.frameSize.

What is your spark.akka.timeout set to?

Did you change spark.akka.heartbeat.interval?
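
For reference, a minimal sketch of how those three settings can be raised programmatically (assuming a 0.9-style SparkConf; the values are placeholders to tune for your object size and cluster, not recommendations):

  import org.apache.spark.{SparkConf, SparkContext}

  // Placeholder values -- adjust for your workload.
  val conf = new SparkConf()
    .setAppName("broadcast-tuning-sketch")
    // Max Akka frame size in MB; raise it above the 10 MB default
    // if the broadcast object is bigger than that.
    .set("spark.akka.frameSize", "64")
    // Akka communication timeout, in seconds.
    .set("spark.akka.timeout", "300")
    // Heartbeat interval, in seconds, so busy executors are not
    // declared dead while a large broadcast is still in flight.
    .set("spark.akka.heartbeat.interval", "100")

  val sc = new SparkContext(conf)

On older versions the same properties can be passed as system properties or in spark-env.sh instead.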

BTW, since a large object is being broadcast across 25 nodes, you may
want to reconsider how often you perform that transfer and evaluate
alternative patterns.
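
For example (a hypothetical sketch, not based on your code): broadcast the object once per application and reuse the handle across transformations, instead of re-shipping it:

  // Hypothetical sketch: broadcast the large lookup table once and
  // read it on the executors through the handle's .value.
  val lookupTable: Map[String, Double] = loadLookupTable()  // assumed helper
  val lookupBc = sc.broadcast(lookupTable)

  val scores = sc.textFile("hdfs:///input")
    .map(line => lookupBc.value.getOrElse(line, 0.0))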




On Tue, Jan 7, 2014 at 12:55 AM, Sebastian Schelter <[email protected]> wrote:

> Spark repeatedly fails to broadcast a large object on a cluster of 25
> machines for me.
>
> I get log messages like this:
>
> [spark-akka.actor.default-dispatcher-4] WARN
> org.apache.spark.storage.BlockManagerMasterActor - Removing BlockManager
> BlockManagerId(3, cloud-33.dima.tu-berlin.de, 42185, 0) with no recent
> heart beats: 134689ms exceeds 45000ms
>
> Is there something wrong with my config? Do I have to increase some
> timeout?
>
> Thx,
> Sebastian
>
