Yes. The partitions that overflow memory will be persisted to local disk. As a
result, performance will degrade, but the application will continue to run.
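
For example, a minimal sketch (the input path here is hypothetical): persisting
an RDD with the MEMORY_AND_DISK storage level tells Spark to spill partitions
that don't fit in memory to local disk and read them back from disk as needed,
rather than failing the job.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object SpillExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("SpillExample"))

        // Hypothetical input larger than cluster memory.
        val rdd = sc.textFile("hdfs:///some/large/input")
          // Partitions that don't fit in memory spill to local disk
          // instead of being dropped or failing the application.
          .persist(StorageLevel.MEMORY_AND_DISK)

        // Spilled partitions are transparently read back from disk,
        // so the job completes, just more slowly.
        println(rdd.count())
        sc.stop()
      }
    }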


On Thu, Jan 30, 2014 at 6:20 AM, David Thomas <[email protected]> wrote:

> How does Spark handle the situation where the RDD does not fit into the
> memory of all the machines in the cluster together?
>
