This behavior depends largely on the job you are running. Usually
increasing the number of partitions sorts out this kind of issue. It would
be good if you could paste the code snippet or explain what type of
operations you are doing.
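
For example, if the stage that blows up is a wide operation, repartitioning
before the shuffle is worth trying. This is only a minimal sketch of the idea,
assuming an RDD-style job; the input/output paths and the partition count of
400 are placeholders, not values from your setup:

import org.apache.spark.{SparkConf, SparkContext}

object RepartitionSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("repartition-sketch"))

    // Placeholder input path; replace with your own data source.
    val lines = sc.textFile("hdfs:///data/input")

    // Spreading the data over more partitions before the shuffle keeps each
    // task's working set smaller, which tends to ease memory pressure.
    val counts = lines
      .repartition(400)                       // illustrative partition count
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1L))
      .reduceByKey(_ + _)

    counts.saveAsTextFile("hdfs:///data/output")
    sc.stop()
  }
}

You could also set spark.default.parallelism in the job's SparkConf to get a
similar default for shuffle operations without changing every RDD call.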

Thanks
Best Regards

On Mon, Sep 28, 2015 at 11:37 AM, Saurav Sinha <sauravsinh...@gmail.com>
wrote:

> Hi Spark Users,
>
> I am running some Spark jobs every hour. After running for 12 hours, the
> master is getting killed with the exception:
>
> *java.lang.OutOfMemoryError: GC overhead limit exceeded*
>
> It looks like there is some memory issue in the Spark master.
> The Spark master issue is a blocker. Can anyone please suggest anything?
>
>
> I noticed the same kind of issue with the Spark history server.
>
> In my job I have to monitor whether the job completed successfully, so I am
> hitting curl to get the status. But once the number of jobs increased to more
> than 80 apps, the history server started responding with a delay, taking more
> than 5 minutes to return the status of a job.
>
> I am running Spark 1.4.1 in standalone mode on a 5-machine cluster.
>
> Kindly suggest a solution for the memory issue; it is a blocker.
>
> Thanks,
> Saurav Sinha
>
> --
> Thanks and Regards,
>
> Saurav Sinha
>
> Contact: 9742879062
>
