Hi

I am using Spark 1.6 and have a question about the fine-grained mode in Spark
on Mesos. I have a simple Spark application which transforms A -> B. It's a
single-stage application that starts with 48 partitions. When the program
starts running, the Mesos UI shows 48 tasks and 48 CPUs allocated to the job.
As tasks complete, the number of active tasks decreases. However, the number
of CPUs does not decrease proportionally. When the job was about to finish,
there was a single remaining task, yet the CPU count was still 20.
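
For reference, the job is roughly the following (a minimal sketch; the Mesos
master URL, the paths, and the transform body are placeholders, not my real
values):

  import org.apache.spark.{SparkConf, SparkContext}

  object AToB {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf()
        .setAppName("A-to-B")
        .setMaster("mesos://zk://host:2181/mesos") // placeholder master URL
        .set("spark.mesos.coarse", "false")        // fine-grained mode (the default in 1.6)

      val sc = new SparkContext(conf)

      // Single stage: read A, apply a narrow map, write B.
      // 48 input partitions => 48 tasks in the one stage.
      sc.textFile("hdfs:///path/to/A", minPartitions = 48)
        .map(transform)
        .saveAsTextFile("hdfs:///path/to/B")

      sc.stop()
    }

    // Placeholder for the actual A -> B record transformation
    def transform(line: String): String = line
  }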

My question is: why is there no one-to-one mapping between tasks and CPUs in
fine-grained mode? And how can these CPUs be released when the tasks are done,
so that other jobs can start?


Regards
Sumit Chawla
