We are implementing a matrix multiplication algorithm on Spark that was
originally designed in the traditional MPI style. It assumes that every core
in the grid computes a submatrix product in parallel.
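
To make the setup concrete, here is a rough sketch of the block-multiply idea
(not our exact code; it assumes the matrices are already split into dense
Breeze blocks, and names like blocksA/blocksB are just illustrative):

    import org.apache.spark.SparkContext._
    import org.apache.spark.rdd.RDD
    import breeze.linalg.DenseMatrix

    // blocksA holds ((i, k), A_ik) pairs, blocksB holds ((k, j), B_kj) pairs
    def blockMultiply(blocksA: RDD[((Int, Int), DenseMatrix[Double])],
                      blocksB: RDD[((Int, Int), DenseMatrix[Double])])
        : RDD[((Int, Int), DenseMatrix[Double])] = {
      val left  = blocksA.map { case ((i, k), a) => (k, (i, a)) }
      val right = blocksB.map { case ((k, j), b) => (k, (j, b)) }
      left.join(right)                                      // pair A_ik with B_kj on the shared index k
          .map { case (_, ((i, a), (j, b))) => ((i, j), a * b) } // one submatrix product per record
          .reduceByKey(_ + _)                               // sum the partial products into C_ij
    }

Each of those submatrix products is what I would like one core to execute
exactly once.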

In our development environment, each executor node has 16 cores, and I assign
16 tasks to each executor node, hoping that every core performs exactly one
submatrix multiplication. But from the logs and the monitoring web UI, I can
see that some cores perform one submatrix multiplication, some perform two,
and some never run any. This is not what I expected, which was for every core
to do exactly one multiplication.
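
For reference, the way I control the task count is essentially the following
(the cluster sizes here are illustrative, not our real ones):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch of the current setup: 4 executor nodes with 16 cores each, so the
    // stage is split into exactly 4 * 16 tasks, one intended per core.
    object TaskCountSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("block-mm"))
        val numNodes     = 4
        val tasksPerNode = 16
        // each element stands in for one submatrix product in the real job
        val work = sc.parallelize(0 until numNodes * tasksPerNode,
                                  numSlices = numNodes * tasksPerNode)
        println(work.count())   // 64 tasks in total, but not one per core in practice
        sc.stop()
      }
    }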

Is there any way to increase the concurrency?

Moreover, when I decrease the value of --total-executor-cores so that each
executor has fewer working cores, the 16 tasks per node no longer launch
simultaneously. The official Tuning Spark doc says: /In general, we recommend
2-3 tasks per CPU core in your cluster./ So I would like to know: why are
2-3 tasks per CPU core recommended?
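
If it helps, the configuration I am experimenting with looks roughly like the
sketch below (64 total cores is an assumed example; spark.cores.max is the
property that --total-executor-cores sets on a standalone cluster):

    import org.apache.spark.SparkConf

    // Illustrative configuration for a cluster with 64 cores in total:
    // following the doc's 2-3 tasks per core advice would mean a default
    // parallelism of roughly 128-192 instead of the 64 I use now.
    val conf = new SparkConf()
      .setAppName("block-mm")
      .set("spark.cores.max", "64")            // same effect as --total-executor-cores 64
      .set("spark.default.parallelism", "128") // about 2 tasks per core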


