Hi Allen,
Thank you for your interest.

For a quick start, I have prepared a new "Quick Start" page at 
https://github.com/kiszk/spark-gpu/wiki/Quick-Start. You can install the 
package with two lines and run a sample program with one line.

By "off-loading" we mean exploiting the GPU to execute Spark tasks. This 
requires mapping a task onto GPU kernels (the current version requires the 
programmer to write CUDA code; future versions will generate GPU code from 
a Spark program automatically). Executing GPU kernels requires copying 
data between the CPU and the GPU. To reduce this copy overhead, our 
prototype keeps data in a binary, column-format representation inside the 
RDD.

The current version does not provide a command-line option to specify the 
number of CUDA cores for a job. There are two ways to specify GPU 
resources:
1) Specify the number of GPU cards by setting CUDA_VISIBLE_DEVICES in 
conf/spark-env.sh (see 
http://devblogs.nvidia.com/parallelforall/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/
).
2) Specify the number of CUDA threads used to process a partition in your 
program, as in 
https://github.com/kiszk/spark-gpu/blob/dev/examples/src/main/scala/org/apache/spark/examples/SparkGPULR.scala#L89
(sorry, this is not documented yet).
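For option 1, the setting in conf/spark-env.sh looks like the following 
(a minimal sketch; the device indices 0,1 are only an illustration — use 
the IDs of the cards you want Spark to see):

```shell
# conf/spark-env.sh
# Expose only GPU cards 0 and 1 to processes launched by Spark on this node.
# CUDA_VISIBLE_DEVICES takes a comma-separated list of device indices;
# any card not listed is invisible to CUDA programs.
export CUDA_VISIBLE_DEVICES=0,1
```

CUDA_VISIBLE_DEVICES is read by the CUDA runtime itself, so this works 
without any Spark-specific support; it simply limits which cards the GPU 
kernels can be scheduled on.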

We would be glad to support requested features, and we look forward to 
receiving pull requests.
 
Best Regards,
Kazuaki Ishizaki



From:   "Allen Zhang" <allenzhang...@126.com>
To:     Kazuaki Ishizaki/Japan/IBM@IBMJP
Cc:     dev@spark.apache.org
Date:   2016/01/04 13:29
Subject:        Re:Support off-loading computations to a GPU



Hi Kazuaki,

I am looking at http://kiszk.github.io/spark-gpu/ ; can you point me to 
the kick-start scripts so that I can give it a go?

To be more specific, what does *"off-loading"* mean? Does it aim to reduce 
the copy overhead between CPU and GPU?
I am a newbie to GPUs; how can I specify how many GPU cores I want to use 
(like --executor-cores)?





At 2016-01-04 11:52:01, "Kazuaki Ishizaki" <ishiz...@jp.ibm.com> wrote:
Dear all,

We reopened the existing JIRA entry 
https://issues.apache.org/jira/browse/SPARK-3785 to support off-loading 
computations to a GPU by adding a description of our prototype. We are 
working to exploit GPUs on Spark effectively and easily at 
http://github.com/kiszk/spark-gpu. Please also visit our project page at 
http://kiszk.github.io/spark-gpu/.

For now, we have added a new format for a partition in an RDD: a 
column-based structure held in arrays, in addition to the current 
Iterator[T] format with Seq[T]. This reduces data 
serialization/deserialization and copy overhead between the CPU and GPU.

Our prototype achieved more than a 3x performance improvement for a simple 
logistic regression program using an NVIDIA K40 card.

This JIRA entry (SPARK-3785) includes a link to a design document. We 
would be very glad to hear your feedback, suggestions, and comments, and 
to have great discussions about exploiting GPUs in Spark.

Best Regards,
Kazuaki Ishizaki


 

