[Intel-gfx] CUDA port for intel graphics

2010-06-23 Thread Gregory Diamos

Hi,

My name is Gregory Diamos.  I am a PhD student at the Georgia Institute
of Technology, currently working on compiler backends for performing
general-purpose computation on GPU architectures.  My research group at
Georgia Tech (CASL) develops and maintains the Ocelot backend compiler
for CUDA (http://code.google.com/p/gpuocelot/).  We are currently trying
to characterize and compare different SIMD and multicore architectures,
and we already have a detailed simulation and compilation infrastructure
built around CUDA for NVIDIA GPUs, AMD GPUs, and Intel/AMD CPUs.  We are
particularly interested in evaluating the Intel GPU architecture because
of the tight CPU/GPU integration in the GMA 3150 and in the roadmaps for
Sandy Bridge
(http://en.wikipedia.org/wiki/Sandy_Bridge_%28microarchitecture%29).
After looking over the documentation in the programming guide, it seems
that there is enough information on the ISA and buffer commands to build
a CUDA backend: generate an Intel binary for each CUDA program, then use
buffer commands to set up memory, load the binary, and execute it.


I would appreciate it if people who have worked with the open-source
Linux driver could provide the following information, so that I can
gauge the feasibility of this project and the level of effort it would
require.


1. Is there an interface for writing directly to the GPU ring buffer 
that is exposed by the driver?  If not, would it be straightforward to 
add such an interface?


2. Are there any restrictions (security, DRM, etc.) for loading/executing 
binaries on the media pipeline?


3. How modular is the driver?  Is there an interface for user-level 
applications to issue hardware commands in a way that is isolated and
time-shared with commands from the graphics driver?  If not, would it be 
straightforward to modify the driver to disable the graphics components 
of the driver and retain only an interface for passing commands to the 
device?


4. How is memory sharing between the CPU and GPU handled? Is it possible 
to map memory from the CPU's address space into the GPU's address space? 
Sorry if this is already documented somewhere; I might have missed it.


I appreciate any help that you can offer.

Regards,

Gregory Diamos


Re: [Intel-gfx] CUDA port for intel graphics

2010-06-23 Thread Chris Wilson
Hi Gregory,

I think most of your questions can be answered by reading the [interface]
design document for GEM - the Graphics Execution Manager.

http://lwn.net/Articles/283798/

That will give you a better idea of the separation between execution and
memory management, which is performed by the kernel, and how it is
controlled from userspace. All userspace clients are [more or less] equal
and submit batch buffers to the kernel to be scheduled for execution. Each
batch is a list of buffer objects [your textures, command streams, vertex
buffers, etc.] which the kernel maps into the graphics aperture, performing
relocations upon the command streams. In this way the GPU is shared
between multiple independent clients. If you want to perform a privileged
operation, such as modifying the ring buffer or registers prior to the
execution of your batch, you will need to extend the GEM interface to
allow you to do so.
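
To make that concrete, a rough [untested] sketch of batch submission
through libdrm's drm_intel_* wrappers is below. The drm_intel_* calls are
the real wrappers around the GEM ioctls, but the buffer names and the
trivial command stream are made up purely for illustration, and it assumes
an already-open, authenticated DRM file descriptor:

  #include <stdint.h>
  #include <intel_bufmgr.h>

  static void submit_trivial_batch(int fd)
  {
      /* The bufmgr wraps the kernel's GEM object and execbuffer ioctls. */
      drm_intel_bufmgr *bufmgr = drm_intel_bufmgr_gem_init(fd, 4096);

      /* A batch is itself just a buffer object filled with commands. */
      drm_intel_bo *batch = drm_intel_bo_alloc(bufmgr, "batch", 4096, 4096);

      uint32_t cmds[2] = {
          0x00000000,        /* MI_NOOP */
          0x0A << 23,        /* MI_BATCH_BUFFER_END */
      };
      drm_intel_bo_subdata(batch, 0, sizeof(cmds), cmds);

      /* Commands that reference other objects (surfaces, compute kernels,
       * ...) record a relocation instead of a hard address, e.g. with
       * drm_intel_bo_emit_reloc(); the kernel patches in the real graphics
       * address at execution time. */

      /* Hand the batch, and every object it relocates against, to the
       * kernel, which maps them into the aperture and schedules the batch
       * alongside other clients' work. */
      drm_intel_bo_exec(batch, sizeof(cmds), NULL, 0, 0);

      drm_intel_bo_unreference(batch);
      drm_intel_bufmgr_destroy(bufmgr);
  }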

Hope this helps, and you have a lot of fun programming with the GPU
directly.
-- 
Chris Wilson, Intel Open Source Technology Centre


Re: [Intel-gfx] CUDA port for intel graphics

2010-06-23 Thread Keith Packard
On Wed, 23 Jun 2010 00:14:39 -0700, Gregory Diamos
<gregory.dia...@gatech.edu> wrote:

> 1. Is there an interface for writing directly to the GPU ring buffer
> that is exposed by the driver?  If not, would it be straightforward to
> add such an interface?

Take a look at 'drm' in the mesa git repository; that contains a library
(libdrm) which provides the ability to execute arbitrary commands on the
GPU with in-kernel memory management. This is used by the X 2D driver,
the Mesa GL driver and the VA-API library.

git clone git://anongit.freedesktop.org/git/mesa/drm

> 2. Are there any restrictions (security, DRM, etc.) for loading/executing
> binaries on the media pipeline?

Nope. You need permission to open the device, which is currently managed
by having a connection to the X server, so for now you'd have to be
running under X. There's no requirement to deal with X other than
authenticating access to the device though. It would be useful to
fix this so that arbitrary programs could use the GPU without needing to
be authenticated through the window system.
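
For illustration, a rough [untested] sketch of that dance follows; only
open(), drmGetMagic() and drmAuthMagic() are real calls here, everything
else is a placeholder:

  #include <fcntl.h>
  #include <xf86drm.h>

  /* Open and authenticate against the DRM device node. Assumes the X
   * server is the current DRM master; error handling is minimal. */
  static int open_gpu(void)
  {
      int fd = open("/dev/dri/card0", O_RDWR);  /* filesystem permission */
      if (fd < 0)
          return -1;

      /* Non-master clients fetch a magic cookie from the kernel... */
      drm_magic_t magic;
      if (drmGetMagic(fd, &magic) != 0)
          return -1;

      /* ...and pass it to the DRM master (today the X server, via the
       * DRI2Authenticate request), which calls drmAuthMagic() on the
       * client's behalf. Removing this X round trip is the fix mentioned
       * above. */
      return fd;
  }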

> 3. How modular is the driver?  Is there an interface for user-level
> applications to issue hardware commands in a way that is isolated and
> time-shared with commands from the graphics driver?  If not, would it be
> straightforward to modify the driver to disable the graphics components
> of the driver and retain only an interface for passing commands to the
> device?

Yes, that's precisely what the DRM infrastructure provides -- the ability
to execute arbitrary commands on the hardware from multiple programs at
the same time.

> 4. How is memory sharing between the CPU and GPU handled? Is it possible
> to map memory from the CPU's address space into the GPU's address space?
> Sorry if this is already documented somewhere; I might have missed it.

As above, the kernel is used to manage memory objects, and the driver
knows how to deal with the lack of hardware cache coherency with help
from the application.
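
For example [again just an untested sketch; the buffer name is arbitrary],
getting CPU-written data visible to the GPU looks roughly like this:

  #include <string.h>
  #include <intel_bufmgr.h>

  /* Allocate a GEM buffer object, map it into the CPU's address space,
   * and fill it. The kernel tracks the CPU write domain and flushes
   * caches before the GPU reads the object. */
  static drm_intel_bo *upload(drm_intel_bufmgr *bufmgr,
                              const void *src, unsigned long len)
  {
      drm_intel_bo *bo = drm_intel_bo_alloc(bufmgr, "cuda data", len, 4096);

      drm_intel_bo_map(bo, 1);           /* 1 = we intend to write */
      memcpy(bo->virtual, src, len);
      drm_intel_bo_unmap(bo);

      /* The GPU-side address is assigned by the kernel when a batch
       * referencing this object (via a relocation) is executed. */
      return bo;
  }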

-- 
keith.pack...@intel.com

