Re: Are the resources specified in configuration shared by all jobs?
Resources belong to the application, not each job, so the latter.

On Wed, Nov 4, 2015 at 9:24 AM, Nisrina Luthfiyati wrote:
> Hi all,
>
> I'm running some Spark jobs in Java on top of YARN by submitting one
> application jar that starts multiple jobs.
> My question is, if I'm setting some resource configurations, either when
> submitting the app or in spark-defaults.conf, would these configs apply to
> each job or to the entire application?
>
> For example, if I launch it with:
>
> spark-submit --class org.some.className \
>   --master yarn-client \
>   --num-executors 3 \
>   --executor-memory 5g \
>   someJar.jar
>
> would the 3 executors x 5g memory be allocated to each job, or would all
> jobs share the resources?
>
> Thank you!
> Nisrina

--
Marcelo
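For illustration, here is a minimal Java driver in the spirit of the question (the class and variable names are made up, not taken from the thread). Both actions below trigger separate Spark jobs, but they run on the same executors that were requested once for the whole application (e.g. --num-executors 3 --executor-memory 5g):

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class MultiJobDriver {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("multi-job-example");
        JavaSparkContext sc = new JavaSparkContext(conf);

        List<Integer> data = Arrays.asList(1, 2, 3, 4, 5);
        JavaRDD<Integer> rdd = sc.parallelize(data);

        long n = rdd.count();                   // job 1: runs on the shared executors
        int sum = rdd.reduce((a, b) -> a + b);  // job 2: same executors, no new allocation

        System.out.println("count=" + n + ", sum=" + sum);
        sc.stop();
    }
}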
Re: Are the resources specified in configuration shared by all jobs?
Hi Nisrina,

The resources you specify are shared by all jobs that run inside the application.

-Sandy
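To add a bit beyond what was asked (so treat this as a sketch, not something from the thread): if several of those jobs are submitted concurrently from different driver threads, setting spark.scheduler.mode to FAIR controls how they divide the shared executors; "reports" below is just a hypothetical pool name:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class FairSharingExample {
    public static void main(String[] args) {
        // Executors are still requested once for the whole application;
        // FAIR scheduling only changes how concurrent jobs divide them.
        SparkConf conf = new SparkConf()
                .setAppName("fair-sharing-example")
                .set("spark.scheduler.mode", "FAIR");  // default is FIFO
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Jobs issued by this driver thread go to a named pool.
        sc.setLocalProperty("spark.scheduler.pool", "reports");

        sc.parallelize(Arrays.asList(1, 2, 3)).count();  // a job in that pool
        sc.stop();
    }
}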
Are the resources specified in configuration shared by all jobs?
Hi all,

I'm running some Spark jobs in Java on top of YARN by submitting one application jar that starts multiple jobs.
My question is, if I'm setting some resource configurations, either when submitting the app or in spark-defaults.conf, would these configs apply to each job or to the entire application?

For example, if I launch it with:

spark-submit --class org.some.className \
  --master yarn-client \
  --num-executors 3 \
  --executor-memory 5g \
  someJar.jar

would the 3 executors x 5g memory be allocated to each job, or would all jobs share the resources?

Thank you!
Nisrina
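For reference, the same resource settings could equivalently be placed in spark-defaults.conf; the property names below are the standard equivalents of the command-line flags above, and the values simply mirror the command:

# spark-defaults.conf
spark.master              yarn-client
spark.executor.instances  3
spark.executor.memory     5g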
Re: Are the resources specified in configuration shared by all jobs?
Got it. Thanks!

On Nov 5, 2015 12:32 AM, "Sandy Ryza" wrote:
> Hi Nisrina,
>
> The resources you specify are shared by all jobs that run inside the
> application.
>
> -Sandy