Hi Lang,
If the Linux kernel on those machines recognizes all the cores, then Spark
will use them all naturally with no extra work. Are you seeing otherwise?
Andrew
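A minimal sketch of the point above, assuming a standalone deployment: the worker advertises its core count via conf/spark-env.sh, and by default that is every core the kernel reports (the memory value below is a hypothetical example):

```shell
# conf/spark-env.sh on each worker node (Spark standalone mode).
# If SPARK_WORKER_CORES is left unset, the worker offers all cores the
# Linux kernel reports, so nothing here is strictly required -- the
# explicit setting just makes the default visible.
export SPARK_WORKER_CORES=$(nproc)   # every core the kernel sees
export SPARK_WORKER_MEMORY=16g       # hypothetical per-node memory
```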
On Oct 9, 2014 2:00 PM, Lang Yu lysubscr...@gmail.com wrote:
Hi,
Currently all the workloads are run on CPUs. Is it possible that
Hi Lang,
What special features of the Xeon Phi do you want Spark to take advantage
of?
On Thu, Oct 9, 2014 at 4:50 PM, Lang Yu lysubscr...@gmail.com wrote:
Hi,
I have set up Spark 1.0.2 on the cluster using standalone mode and the input is
managed by HDFS. One node of the cluster has Intel Xeon Phi 5110P coprocessor.
Is there any possibility that Spark could be aware of Phi and run jobs on the
Xeon Phi? Do I have to modify the scheduler code?
What are the specific features of the Intel Xeon Phi that can be utilized by
Spark?
2014-10-03 18:09 GMT+08:00 余 浪 yulan...@gmail.com:
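For what it is worth, a quick check shows why this needs more than a scheduler change: the Phi 5110P is a PCIe coprocessor running its own embedded Linux under Intel MPSS, so its cores are not host CPUs and a host-side Spark worker never counts them. A hedged sketch of verifying this on the Phi-equipped node (the /sys/class/mic path assumes Intel's MPSS stack is installed):

```shell
# Cores the host kernel actually sees -- the Phi's cores live on the
# card's own OS and will not appear in this count.
echo "host cores visible to the kernel: $(nproc)"

# With Intel MPSS installed the card appears under /sys/class/mic,
# not as extra host CPUs; no entries here means no usable Phi.
ls /sys/class/mic 2>/dev/null || echo "no mic sysfs entries on this host"
```

So making Spark 1.0.2 "aware" of the Phi would mean offloading the executor compute itself (e.g. through native code on the card), not just teaching the scheduler about extra cores.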