I don't have any explicit object broadcasting in my code.
I do use broadcast join hints (df1.join(broadcast(df2))).
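For reference, a minimal sketch of the broadcast-hint pattern I mean (the DataFrame names and local master are placeholders, just for illustration):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

// Sketch only: a local session to demonstrate the hint.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("broadcast-join-sketch")
  .getOrCreate()
import spark.implicits._

val df1 = Seq((1, "a"), (2, "b")).toDF("id", "value")
val df2 = Seq((1, "x")).toDF("id", "tag")

// broadcast(df2) hints that df2 is small enough to ship to every
// executor, so Spark plans a broadcast hash join instead of a shuffle join.
val joined = df1.join(broadcast(df2), Seq("id"))
joined.show()

spark.stop()
```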
I tried starting and stopping the Spark context for every test (instead of
once per suite),
and that made the OOM errors stop, so I'm guessing there is no leakage after
the context is stopped.
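Roughly, the per-test lifecycle I switched to looks like this (a sketch using ScalaTest; the class and test names are made up for illustration):

```scala
import org.apache.spark.sql.SparkSession
import org.scalatest.BeforeAndAfterEach
import org.scalatest.funsuite.AnyFunSuite

// Sketch: create and stop the session around each test,
// rather than once for the whole suite.
class BroadcastJoinSpec extends AnyFunSuite with BeforeAndAfterEach {
  private var spark: SparkSession = _

  override def beforeEach(): Unit = {
    spark = SparkSession.builder()
      .master("local[*]")
      .appName("per-test-session")
      .getOrCreate()
  }

  override def afterEach(): Unit = {
    spark.stop()  // tear the context down after every test
  }

  test("broadcast join runs") {
    import spark.implicits._
    val small = Seq((1, "x")).toDF("id", "tag")
    val big = spark.range(100).toDF("id")
    assert(big.join(org.apache.spark.sql.functions.broadcast(small), "id").count() == 1)
  }
}
```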
als
Did you unpersist the broadcast objects?
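(For context, a minimal sketch of what unpersisting an explicitly created broadcast variable looks like; `sc` is an assumed existing SparkContext:)

```scala
// Assumes `sc` is an existing SparkContext.
val lookup = sc.broadcast(Map(1 -> "a", 2 -> "b"))

// ... use lookup.value inside transformations ...

lookup.unpersist()  // drop cached copies on executors (lazily rebroadcast if reused)
lookup.destroy()    // release all state on driver and executors (cannot be reused)
```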
On Mon, Oct 17, 2016 at 10:02 AM lev wrote:
> Hello,
>
> I'm in the process of migrating my application to Spark 2.0.1,
> and I think there are some memory leaks related to broadcast joins.
>
> The application has many unit tests,
> and each individual tes