Sometimes it's convenient to start a spark-shell on the cluster, like:
./spark/bin/spark-shell --master yarn --deploy-mode client --num-executors 100 \
  --executor-memory 15g --executor-cores 4 --driver-memory 10g --queue myqueue
However, with a command like this, the allocated resources stay occupied until
the console exits.

Just wondering if it is possible to start a spark-shell with
dynamicAllocation enabled? If it is, how do I specify the configs? Can anyone
give a quick example? Thanks!
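
Something along these lines is what I have in mind (just a sketch, I'm not
sure these are the right settings; my understanding is that dynamic allocation
on YARN also needs the external shuffle service running on the NodeManagers):

./spark/bin/spark-shell --master yarn --deploy-mode client \
  --executor-memory 15g --executor-cores 4 --driver-memory 10g --queue myqueue \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=100 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s

i.e. dropping --num-executors and letting min/maxExecutors bound the pool, so
idle executors get released while the shell sits open. Is that roughly right?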
