How long does it get stuck for? That is a common sign of the OS thrashing
under memory pressure. If you keep it running longer, does it throw an
error?

Depending on how large your other RDDs are (and your join operation), memory
pressure may not be the problem at all. It could be that spilling your
shuffles to disk is slowing you down (though that probably shouldn't hang
your application). For the 5-RDD case, what happens if you set
spark.shuffle.spill to false?
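
For that test, a minimal sketch of how you could set it, assuming the
standard SparkConf API (the app name here is just a placeholder):

  import org.apache.spark.{SparkConf, SparkContext}

  // Disable shuffle spilling so shuffle data must stay in memory;
  // if the job then OOMs instead of hanging, spilling was masking
  // the memory pressure. (spark.shuffle.spill defaults to true.)
  val conf = new SparkConf()
    .setAppName("join-test")                // hypothetical app name
    .set("spark.shuffle.spill", "false")
  val sc = new SparkContext(conf)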


2014-06-17 5:59 GMT-07:00 MEETHU MATHEW <meethu2...@yahoo.co.in>:

>
>  Hi all,
>
> I want to do a recursive leftOuterJoin between an RDD (created from a
> file) with 9 million rows (the file is 100MB) and 30 other RDDs (created
> from 30 different files, one in each iteration of a loop) varying from 1
> to 6 million rows.
> When I run it for 5 RDDs, it runs successfully in 5 minutes. But when I
> increase it to 10 or 30 RDDs, it gradually slows down and finally gets
> stuck without showing any warning or error.
>
> I am running in standalone mode with 2 workers of 4GB each and a total of
> 16 cores.
>
> Are any of you facing similar problems with JOIN, or is it a problem with
> my configuration?
>
> Thanks & Regards,
> Meethu M
>
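
For reference, this is the shape of loop I'm assuming from your description
(a minimal sketch: parseLine, the file paths, and how the joined values are
merged are all placeholders for your own code):

  import org.apache.spark.SparkContext
  import org.apache.spark.SparkContext._  // pair-RDD functions in Spark 1.x

  // Placeholder parser: split each line into a (key, value) pair.
  def parseLine(line: String): (String, String) = {
    val Array(k, v) = line.split(",", 2)
    (k, v)
  }

  def run(sc: SparkContext): Unit = {
    // Base RDD: ~9 million (key, value) rows from the 100MB file.
    var joined = sc.textFile("base.txt").map(parseLine)

    for (i <- 1 to 30) {
      val other = sc.textFile(s"part_$i.txt").map(parseLine)
      // Every iteration adds another shuffle and grows the lineage.
      joined = joined.leftOuterJoin(other).mapValues {
        case (v, opt) => v  // placeholder: merge v and opt as you need
      }
    }
    joined.count()  // force evaluation
  }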
