Resources belong to the application, not to each job, so the latter: every job started from the same SparkContext shares the executors allocated at submit time.
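
To make that concrete, here's a minimal sketch in Java (class and variable
names are hypothetical, not from your application): both actions below run as
separate jobs, but they execute on the same executors the application received
from YARN, e.g. the 3 x 5g requested on the spark-submit command line.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import java.util.Arrays;

    public class MultiJobExample {
        public static void main(String[] args) {
            // One SparkContext per application; the executors requested at
            // submit time (e.g. --num-executors 3 --executor-memory 5g)
            // are allocated once and reused by every job below.
            SparkConf conf = new SparkConf().setAppName("MultiJobExample");
            JavaSparkContext sc = new JavaSparkContext(conf);

            JavaRDD<Integer> numbers =
                sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));

            // Job 1: each action triggers a separate job...
            long count = numbers.count();

            // Job 2: ...but both jobs run on the same executors; YARN does
            // not allocate a fresh set of containers per job.
            int sum = numbers.reduce((a, b) -> a + b);

            System.out.println("count=" + count + " sum=" + sum);
            sc.stop();
        }
    }
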

On Wed, Nov 4, 2015 at 9:24 AM, Nisrina Luthfiyati
<nisrina.luthfiy...@gmail.com> wrote:
> Hi all,
>
> I'm running some spark jobs in java on top of YARN by submitting one
> application jar that starts multiple jobs.
> My question is, if I'm setting some resource configurations, either when
> submitting the app or in spark-defaults.conf, would these configs apply to
> each job or to the entire application?
>
> For example, if I launch it with:
>
> spark-submit --class org.some.className \
>     --master yarn-client \
>     --num-executors 3 \
>     --executor-memory 5g \
>     someJar.jar
>
> would the 3 executors x 5g memory be allocated to each job, or would all
> jobs share the resources?
>
> Thank you!
> Nisrina
>



-- 
Marcelo
