Thanks a lot for clarifying it.
I have some follow-up questions:
1. If we normally have one executor per machine, what happens in a cluster
with different hardware capacities? For example: one 8-core worker and one
4-core worker (ignoring the driver machine). If we set executor-cores = 4,
then
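For reference, these per-executor resource settings are usually passed at submit time. A minimal sketch of such a submission (the flag names are standard spark-submit options; the values and application jar name are illustrative only):

```shell
# Illustrative spark-submit invocation: each executor gets 4 cores,
# so an 8-core worker can host two executors and a 4-core worker one.
spark-submit \
  --master spark://master-host:7077 \
  --executor-cores 4 \
  --executor-memory 4g \
  --class com.example.MyApp \
  my-app.jar
```

With `--executor-cores 4`, the scheduler simply packs executors wherever 4 free cores exist, so heterogeneous workers end up with different executor counts.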
An executor is specific to a Spark application, just as a mapper is
specific to a MapReduce job. So a machine will usually be running many
executors, and each is a JVM.
A mapper is single-threaded; an executor can run many tasks (possibly
from different jobs within the application) at once. Yes, 5
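Since an executor runs up to executor-cores tasks concurrently, cluster-wide parallelism is just the sum over executors. A back-of-envelope sketch (the executor layout below is illustrative, not from this thread):

```python
# Back-of-envelope: how many tasks can run at once across the cluster.
# Each executor runs up to its allocated core count in concurrent tasks,
# so cluster-wide task parallelism is the sum over all executors.

def concurrent_task_slots(executor_cores):
    """executor_cores: list with the core count of each executor."""
    return sum(executor_cores)

# Illustrative: with executor-cores = 4, an 8-core worker hosts two
# executors and a 4-core worker hosts one, giving three executors total.
slots = concurrent_task_slots([4, 4, 4])
print(slots)  # 12 concurrent task slots cluster-wide
```

This is why a single-threaded mapper and a multi-threaded executor are not one-to-one comparable: one executor JVM covers what several mapper processes would.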
Hi All,
I am trying to clarify some executor behavior in Spark. Because I come from
a Hadoop background, I am comparing it to the mapper (or reducer) in
Hadoop.
1. Each node can have multiple executors, each running in its own process?
Is this the same as a mapper process?
2. I thought the sp