Dear POCL developers,

I have been following your project for a while, and I think you are
doing great work! I have been using GPUs for scientific computation for
over five years, and I look forward to the day when the open-source
implementations of OpenCL for multi-core CPUs and GPUs become ready
for deployment on computational clusters.

While the Gallium framework for computation on NVIDIA and AMD GPUs is
still a work in progress, would it be feasible to develop a device
backend for POCL that builds on the LLVM NVPTX backend and the CUDA
driver API [1]?

[1] http://docs.nvidia.com/cuda/cuda-driver-api/
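
To make the feasibility question more concrete, below is a rough,
untested sketch of the driver API calls such a backend would essentially
have to wrap (error checking omitted; "kernel.ptx" and "vec_scale" are
placeholders for whatever the OpenCL-to-PTX compilation step produces):

#include <cuda.h>

int main(void)
{
    CUdevice    dev;
    CUcontext   ctx;
    CUmodule    mod;
    CUfunction  fun;
    CUdeviceptr buf;
    size_t n = 1024;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    /* clBuildProgram would roughly correspond to emitting PTX via
       LLVM NVPTX and then loading it as a module: */
    cuModuleLoad(&mod, "kernel.ptx");
    cuModuleGetFunction(&fun, mod, "vec_scale");

    /* clCreateBuffer -> cuMemAlloc, clEnqueueWriteBuffer -> cuMemcpyHtoD */
    cuMemAlloc(&buf, n * sizeof(float));

    /* clEnqueueNDRangeKernel -> cuLaunchKernel */
    void *args[] = { &buf };
    cuLaunchKernel(fun,
                   (unsigned)(n / 256), 1, 1,   /* work-groups (grid)  */
                   256, 1, 1,                   /* work-group   (block) */
                   0, NULL, args, NULL);
    cuCtxSynchronize();

    cuMemFree(buf);
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}

Asynchronous command queues would presumably map onto CUDA streams
(cuStreamCreate) and events (cuEventRecord), but I have not looked into
the details.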

In case you are familiar with the OpenCL support in the NVIDIA driver:
it has not improved with the recent CUDA 5.0 release. In my experience,
their OpenCL implementation appears to serialize operations internally,
which makes it impossible to schedule work across multiple GPUs, or to
overlap kernel execution with memory transfers, from a single host
thread. In the worst case, I fear that OpenCL support will eventually be
dropped entirely.

A PTX device backend for POCL would allow users to develop portable
OpenCL (1.2) code now and run it on existing NVIDIA GPU installations
through the CUDA driver, until mature open-source support for AMD and
NVIDIA GPUs arrives in the distributions.

What do you think about this idea? Does the CUDA driver API expose
enough functionality to implement OpenCL? How difficult would it
be to use the NVPTX backend of LLVM for compilation?
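
On the last point, my (possibly naive) impression is that once a kernel
has been lowered to LLVM IR, the existing NVPTX backend can already emit
PTX text, roughly along the lines of

  llc -mtriple=nvptx64-nvidia-cuda -mcpu=sm_20 kernel.ll -o kernel.ptx

so I would guess most of the work lies in mapping the OpenCL built-ins
and address spaces onto what NVPTX expects, but I may well be missing
complications.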

Regards,
Peter
