Thanks for the explanation, Steve. 

I don't want to control where the work is done. What I wanted to understand
is whether Spark can take advantage of the underlying architecture's features.
For example, if the CPUs on the nodes support improved vector instructions,
can Spark jobs (if they do a lot of vector operations) benefit from this? If
so, where does that happen: inside Spark itself, or in the JVM that the job
JAR runs on?
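To make the question concrete, here is a minimal sketch of the kind of
vector-heavy job I have in mind (the job name and sizes are made up). My
current understanding is that Spark would not emit SIMD instructions itself,
so any use of AVX/SSE would have to come from the executor JVM's JIT
auto-vectorizing the inner loop -- please correct me if that's wrong:

import org.apache.spark.sql.SparkSession

object VectorSumJob {
  def main(args: Array[String]): Unit = {
    // master("local[*]") is just for a quick local test; drop it with spark-submit
    val spark = SparkSession.builder
      .appName("VectorSumJob")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // each record is a dense float vector; the squared-norm loop below is the
    // part a SIMD-capable CPU could speed up, if the JIT vectorizes it
    val vectors = sc.parallelize(Seq.fill(1000)(Array.fill(1024)(1.0f)))

    val norms = vectors.map { v =>
      var acc = 0.0f
      var i = 0
      while (i < v.length) {  // simple counted loop: a typical auto-vectorization candidate
        acc += v(i) * v(i)
        i += 1
      }
      acc
    }

    println(norms.reduce(_ + _))
    spark.stop()
  }
}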

Also, for the GPU part you mentioned: labeling the GPU nodes and scheduling
work onto those GPU-enabled systems does not by itself mean the GPU's compute
power will be utilized, right? The user has to provide GPU code
(OpenCL/CUDA/etc.) and somehow link it into the job. Is my understanding
correct?
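For instance, I imagine it would have to look roughly like the sketch below.
This is purely hypothetical: NativeSaxpy and the saxpy_cuda library are things
the user would have to write and ship with the job, not anything Spark
provides. Spark only places the task on the GPU node; the GPU work happens
entirely inside the user-supplied native code:

import org.apache.spark.sql.SparkSession

// hypothetical JNI binding to a hand-written CUDA kernel, shipped by the user
// alongside the job (e.g. a libsaxpy_cuda.so built against the CUDA runtime)
object NativeSaxpy {
  System.loadLibrary("saxpy_cuda")
  @native def saxpy(a: Float, x: Array[Float], y: Array[Float]): Array[Float]
}

object GpuSaxpyJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("GpuSaxpyJob").getOrCreate()
    val sc = spark.sparkContext

    val xs = sc.parallelize(Seq.fill(8)(Array.fill(1 << 20)(1.0f)))

    // mapPartitions keeps the native call coarse-grained (one batch of vectors
    // per partition); the GPU is used only because the user's native code calls
    // CUDA, not because the node carries a GPU label
    val ys = xs.mapPartitions { iter =>
      iter.map(x => NativeSaxpy.saxpy(2.0f, x, new Array[Float](x.length)))
    }

    println(ys.count())
    spark.stop()
  }
}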


Thanks,
Boric


