On Tue, Feb 13, 2018 at 4:43 PM, akshay naidu <akshaynaid...@gmail.com>
wrote:

> Hello,
> I'm trying to run multiple Spark jobs on a cluster running on YARN.
> The master is a 24GB server, and there are 6 slaves with 12GB each.
>
> My fairscheduler.xml settings are:
> <pool name="default">
>     <schedulingMode>FAIR</schedulingMode>
>     <weight>10</weight>
>     <minShare>2</minShare>
> </pool>
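>
> For context, here is a minimal Scala sketch of how a job attaches to
> that pool; the app name and allocation-file path are placeholders, not
> my actual setup:
>
> import org.apache.spark.sql.SparkSession
>
> // Enable Spark's FAIR scheduler and point it at the fairscheduler.xml
> // shown above (the path is a placeholder).
> val spark = SparkSession.builder()
>   .appName("fair-pool-sketch")
>   .config("spark.scheduler.mode", "FAIR")
>   .config("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")
>   .getOrCreate()
>
> // Jobs submitted from this thread run in the "default" pool above.
> spark.sparkContext.setLocalProperty("spark.scheduler.pool", "default")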
>
> I am running 8 jobs simultaneously. The jobs run in parallel, but not
> all of them: at any given time only 7 of them run, while the 8th sits
> in the queue WAITING for a job to stop.
>
> Also, out of the 7 running jobs, 4 run much faster than the remaining
> three (maybe resources are not distributed properly).
>
> I want to run n jobs at a time and make them run faster. Right now,
> one job takes more than three minutes while processing at most 1GB of
> data.
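>
> For reference, a sketch of the explicit executor sizing that bounds
> how many applications fit on the cluster at once; the numbers below
> are illustrative assumptions, not my actual settings:
>
> // Illustrative sizing: with 6 x 12GB NodeManagers, each 2GB executor
> // plus YARN memory overhead takes roughly 2.4GB, so about 5 such
> // containers fit per node. 8 jobs of 3 executors each need ~24
> // containers cluster-wide, leaving some headroom for the application
> // masters.
> val spark = SparkSession.builder()
>   .appName("sized-job-sketch")
>   .config("spark.executor.instances", "3")
>   .config("spark.executor.memory", "2g")
>   .config("spark.executor.cores", "2")
>   .getOrCreate()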
>
> Kindly assist me. What am I missing?
>
> Thanks.
>
