I have a small application configured to use 6 CPU cores, running on a
standalone cluster. With this configuration only 6 tasks can be active at
any moment, and if all of them are waiting (on IO, for example) then the
CPU is not fully utilized.

My questions:
1. Is it true that the number of active tasks per executor equals the number
of cores available to that executor? If so, then by increasing parallelism
we only get smaller tasks, not a larger number of running threads. By
active tasks I mean the number visible in the web UI under Executors. Or
are waiting threads perhaps not counted as "active tasks"?
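For reference, here is a sketch of how I understand the scheduling-slot
arithmetic: concurrent tasks per executor = executor cores divided by
spark.task.cpus. The master URL, jar name, and core counts below are
illustrative placeholders, not my real setup:

```shell
# Sketch: how core settings map to concurrent task slots.
# Concurrent tasks per executor = spark.executor.cores / spark.task.cpus.
spark-submit \
  --master spark://master:7077 \
  --total-executor-cores 6 \
  --conf spark.task.cpus=1 \
  myapp.jar
# With 6 cores and spark.task.cpus=1, at most 6 tasks run concurrently;
# raising spark.default.parallelism only splits work into smaller tasks.
```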

2. Is it good practice to overcommit CPU cores if we know that waiting
makes up a significant part of our tasks?
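To make the overcommit idea concrete: since a standalone worker treats its
core count as scheduling slots rather than physical cores, I assume one way
to do this is to advertise more cores than the machine actually has. The
factor of 2 below is just an illustrative guess:

```shell
# Sketch (standalone mode): advertise more scheduling slots than physical
# cores so IO-bound tasks can overlap. Set in conf/spark-env.sh on each
# worker; the value 12 (2x physical cores) is an assumed example.
export SPARK_WORKER_CORES=12
```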



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Not-maximum-CPU-usage-tp22794.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
