What's the size of your large object to be broadcast?

On Tue, Jan 7, 2014 at 8:55 AM, Sebastian Schelter <[email protected]> wrote:

> Spark repeatedly fails to broadcast a large object on a cluster of 25
> machines for me.
>
> I get log messages like this:
>
> [spark-akka.actor.default-dispatcher-4] WARN
> org.apache.spark.storage.BlockManagerMasterActor - Removing BlockManager
> BlockManagerId(3, cloud-33.dima.tu-berlin.de, 42185, 0) with no recent
> heart beats: 134689ms exceeds 45000ms
>
> Is there something wrong with my config? Do I have to increase some
> timeout?
>
> Thx,
> Sebastian
>
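For context, the timeout in that log line is the block manager's slave timeout. In Spark of this era (0.8/0.9) it could be raised via configuration; a minimal sketch, assuming the `spark.storage.blockManagerSlaveTimeoutMs` property (whose 45000 ms default matches the value in the log) and illustrative values:

```
# spark-defaults.conf (or set on SparkConf) -- values are illustrative
# Raise the master's tolerance for missed block manager heartbeats
spark.storage.blockManagerSlaveTimeoutMs   300000
```

Note that a missed-heartbeat warning during a large broadcast is often a symptom (e.g. long GC pauses on loaded executors) rather than the root cause, so raising the timeout may only mask the underlying pressure.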
