Hi Lang,

If the Linux kernel on those machines recognizes all the cores, then Spark
will use them all naturally with no extra work. Are you seeing otherwise?
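One quick way to sanity-check this (a sketch: by default a standalone worker offers all cores the OS reports, and `SPARK_WORKER_CORES` in conf/spark-env.sh can override that; the value 16 below is just an illustrative number):

```shell
# How many logical cores does the kernel see on this node?
# A Xeon Phi's cores only show up here if the OS schedules on them directly.
nproc
grep -c ^processor /proc/cpuinfo

# Optionally pin the worker's advertised core count in conf/spark-env.sh
# (illustrative value; omit this to let the worker use everything nproc reports)
export SPARK_WORKER_CORES=16
```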

Andrew
On Oct 9, 2014 2:00 PM, "Lang Yu" <lysubscr...@gmail.com> wrote:

> Hi,
>
> Currently all the workloads are run on CPUs. Is it possible that Spark
> could recognize Phi as one worker and run workloads on it?
>
> Thanks
>
> On Oct 10, 2014, at 4:54 AM, Andrew Ash <and...@andrewash.com> wrote:
>
> Hi Lang,
>
> What special features of the Xeon Phi do you want Spark to take advantage
> of?
>
> On Thu, Oct 9, 2014 at 4:50 PM, Lang Yu <lysubscr...@gmail.com> wrote:
>
>> Hi,
>>
>> I have set up Spark 1.0.2 on the cluster using standalone mode and the
>> input is managed by HDFS. One node of the cluster has Intel Xeon Phi 5110P
>> coprocessor. Is there any possibility that Spark could be aware of the
>> existence of the Phi and run jobs on it, or recognize the Phi as an
>> individual worker? Do I have to modify the scheduler code?
>>
>> Thanks!
>>
>> Lang Yu
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>
>
