Hi,

My name is Gregory Diamos. I am a PhD student at the Georgia Institute of Technology, currently working on compiler backends for performing general-purpose computation on GPU architectures. My research group at Georgia Tech (CASL) develops and maintains the Ocelot backend compiler for CUDA (http://code.google.com/p/gpuocelot/). We are currently characterizing and comparing different SIMD and multicore architectures, and we already have a detailed simulation and compilation infrastructure built around CUDA for NVIDIA GPUs, AMD GPUs, and Intel/AMD CPUs.

We are particularly interested in evaluating the Intel GPU architecture because of the tight integration between the CPU and GPU in the GMA 3150 and on the Sandy Bridge roadmap (http://en.wikipedia.org/wiki/Sandy_Bridge_%28microarchitecture%29). After looking over the documentation in the programming guide, it seems that there is enough information about the ISA and buffer commands to build a CUDA backend that generates an Intel binary for each CUDA program and uses buffer commands to set up memory, load the binary, and execute it.
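
To make the plan concrete, here is a minimal sketch of the submission flow I have in mind, written against libdrm's intel_bufmgr API. The buffer names are placeholders, the media pipeline commands are omitted, and I am assuming drm_intel_bo_exec is the intended way for userspace to hand a batch to the kernel driver:

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <intel_bufmgr.h>   /* from libdrm */

/* MI_BATCH_BUFFER_END: opcode 0x0A in bits 28:23, per the PRM. */
#define MI_BATCH_BUFFER_END (0x0A << 23)

int launch_kernel(const uint32_t *isa, size_t isa_size)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    drm_intel_bufmgr *bufmgr = drm_intel_bufmgr_gem_init(fd, 4096);

    /* Buffer object holding the Gen binary produced by our backend. */
    drm_intel_bo *code = drm_intel_bo_alloc(bufmgr, "cuda kernel",
                                            isa_size, 4096);
    drm_intel_bo_subdata(code, 0, isa_size, isa);

    /* Batch buffer: the media pipeline state setup and kernel dispatch
     * commands from the programming guide would go here (omitted),
     * followed by the mandatory batch terminator. */
    uint32_t cmds[64];
    unsigned n = 0;
    /* ... MEDIA_* state and dispatch commands referencing 'code' ... */
    cmds[n++] = MI_BATCH_BUFFER_END;
    if (n & 1)
        cmds[n++] = 0;               /* pad to an even dword count */

    drm_intel_bo *batch = drm_intel_bo_alloc(bufmgr, "batch",
                                             sizeof(cmds), 4096);
    drm_intel_bo_subdata(batch, 0, n * 4, cmds);

    /* Hand the batch to the kernel driver for execution. */
    int ret = drm_intel_bo_exec(batch, n * 4, NULL, 0, 0);

    drm_intel_bo_unreference(batch);
    drm_intel_bo_unreference(code);
    drm_intel_bufmgr_destroy(bufmgr);
    close(fd);
    return ret;
}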

I would appreciate it if people who have worked with the open source Linux driver could provide the following information, so that I can gauge the feasibility of this project and the level of effort it would require.

1. Does the driver expose an interface for writing directly to the GPU ring buffer? If not, would it be straightforward to add one?
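
For context, my current understanding from reading i915_drm.h is that userspace does not touch the ring directly; instead, it submits command batches through the execbuffer ioctl, roughly like this (a sketch; the GEM handle and lengths are placeholders, and I am assuming DRM_IOCTL_I915_GEM_EXECBUFFER2 is the current entry point):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <i915_drm.h>   /* from libdrm */

/* Sketch: ask the kernel to execute the batch commands stored in the
 * GEM object 'handle'. flags == 0 selects the default (render) ring. */
static int exec_batch(int fd, uint32_t handle, uint32_t batch_len)
{
    struct drm_i915_gem_exec_object2 obj;
    struct drm_i915_gem_execbuffer2 execbuf;

    memset(&obj, 0, sizeof(obj));
    obj.handle = handle;                  /* GEM handle of the batch */

    memset(&execbuf, 0, sizeof(execbuf));
    execbuf.buffers_ptr = (uintptr_t)&obj;
    execbuf.buffer_count = 1;
    execbuf.batch_len = batch_len;        /* bytes of valid commands */

    return ioctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf);
}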

2. Are there any restrictions (security, DRM, etc.) on loading and executing binaries on the media pipeline?

3. How modular is the driver? Is there an interface that lets user-level applications issue hardware commands in a way that is isolated from, and time-shared with, commands from the graphics driver? If not, would it be straightforward to disable the driver's graphics components and retain only an interface for passing commands to the device?

4. How is memory sharing between the CPU and GPU handled? Is it possible to map memory from the CPU's address space into the GPU's address space? Sorry if this is already documented somewhere; I might have missed it.
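
To illustrate the kind of sharing I mean, here is a sketch using libdrm's intel_bufmgr API. It stages host data into a GEM object through a CPU-side mapping (drm_intel_bo_map and the bo's 'virtual' pointer are from intel_bufmgr.h; everything else is a placeholder). What I cannot tell from the documentation is whether a zero-copy mapping of existing CPU pages into the GPU's address space is also possible:

#include <string.h>
#include <intel_bufmgr.h>   /* from libdrm */

/* Sketch: make host data visible to the GPU by copying it into a GEM
 * object through a CPU mapping of that object. */
int upload(drm_intel_bufmgr *bufmgr, const void *src, size_t size,
           drm_intel_bo **out)
{
    drm_intel_bo *bo = drm_intel_bo_alloc(bufmgr, "device data",
                                          size, 4096);
    if (!bo)
        return -1;
    if (drm_intel_bo_map(bo, 1 /* writable */)) {
        drm_intel_bo_unreference(bo);
        return -1;
    }
    memcpy(bo->virtual, src, size);   /* CPU-side view of the object */
    drm_intel_bo_unmap(bo);
    *out = bo;            /* ready to be referenced from a batch */
    return 0;
}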

I appreciate any help that you can offer.

Regards,

Gregory Diamos