Thanks for all your answers.

> As we use the runtime version of CUDA it is absolutely necessary to include
> a class which encapsulates CUDA programs. However, if you just want to add a
> simple CUDA operation, use an osgCompute::LaunchCallback with your kernels,
> similar to the callbacks for OpenGL calls within OSG. We tried many times to
> allow a developer to use CUDA everywhere in a module. The point is that OSG
> initializes an OpenGL context during the first traversal of the scene graph
> (see the osgViewer class). However, the constructor of a module is called
> beforehand. One solution would be to write a new viewer class which handles
> the OpenGL contexts differently (we thought about it but currently have no
> time). If you have another solution for the problem, that would be great.
> Another thing is that you can call osgViewer::Viewer::realize() and
> osgCompute::GLMemory::bindToContext( viewerContext ) with that new context
> of the root camera. After this you can call your CUDA code everywhere.
> Please see the following source files:
> http://www.cg.informatik.uni-siegen.de/data/Downloads/svt/osgCUDAEverywhere.zip

I see the difficulty now. Although it is not the cleanest solution, I can live
with the 'four-lines solution' you provided in your code. Perhaps it would be
an idea to add a static function like setupOsgCompute(Viewer &v) or
initOsgCompute(Viewer &v), which executes the 'four-lines solution' on the
viewer passed by reference. This way you hide the low-level details from the
user. My suggestion might be a problem for people who want to use multiple GL
contexts (I personally have never used multiple GL contexts). You could also
rename the function to setupSingleContextOsgCompute(Viewer &v) (a bit verbose,
but clear).
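To make the suggestion concrete, such a helper could look roughly like the
sketch below. This is only the shape of the idea: the two calls are the ones
you quoted, but the function name is my suggestion (not an existing API), and
I am guessing at how to reach the root camera's context and at the exact
signature of bindToContext:

```cpp
// Hypothetical convenience wrapper for the 'four-lines solution'.
// setupSingleContextOsgCompute is a suggested name, not an existing function.
void setupSingleContextOsgCompute( osgViewer::Viewer& viewer )
{
    // Create the OpenGL context up front instead of during the first frame.
    viewer.realize();

    // Fetch the context of the root camera (assumed accessor chain) ...
    osg::GraphicsContext* context = viewer.getCamera()->getGraphicsContext();

    // ... and make it the context osgCompute uses for GL interoperability.
    osgCompute::GLMemory::bindToContext( *context );
}
```

After this, the user only writes one line instead of four, and CUDA calls
should work everywhere, as in your osgCUDAEverywhere example.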

Editing the Viewer class would indeed be the best solution. Perhaps some 
cooperation between the maintainer of the Viewer class and you guys might solve 
this issue?

As for the LaunchCallback solution you suggest, I never thought it could be
done like that. When I read the documentation of that class, I thought its
purpose was to change the launch order of the modules. Since I only use one
module, I did not look into that class any further. Well, it seems it can be
used for other things as well. Thanks for the suggestion.
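If I now understand the intended use correctly, one would subclass it and put
the plain CUDA calls in the launch hook, something like the sketch below. The
base-class interface (method name and parameters) is entirely a guess on my
part; only the general shape is meant:

```cpp
// Rough sketch; the LaunchCallback interface shown here is assumed, not
// taken from the osgCompute headers.
class MyKernelCallback : public osgCompute::LaunchCallback
{
public:
    virtual void launch( osgCompute::Computation& computation )
    {
        // Call CUDA kernels directly here instead of writing a full Module,
        // e.g. myKernel<<< blocks, threads >>>( ... );
    }
};
```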



> Would it be helpful to rename it to osgCompute::Program? Please make a 
> suggestion. 

The documentation says the following about Module and Computation:
"A module is the base class to implement application specific parallel 
algorithms working on resources. Modules implement a strategy design pattern in 
connection with a computation node. Think of a module as a separated algorithm 
executed once in each frame like osg::Program objects are executed during 
rendering. However, modules are much more flexible as execution is handed over 
to the module."
"A computation is a container where you can add your osgCompute::Module 
objects, just like osg::Program is a container for osg::Shader objects"
The Module class is where the actual calls to CUDA kernels reside and where we
allocate and set CUDA memory, so I would call it Computation. Program is also
okay, because this class sets up the CUDA memory and launches the CUDA
computation/program.
The Computation class is currently a container class for Modules. If you want
to keep your analogy with shader programs (where a program contains vertex,
geometry, or fragment shaders), you could call this class Program. In that
case, a Program is a container for Computations. However, if you prefer to
rename your Module to Program, then you could rename your Computation to
ProgramContainer (or ProgramCollection). Then the user instantly sees that
those two classes belong together and how they are connected to each other.



> I think that is an application-specific concern and one should not try to
> encapsulate it in a general memory handling system. The user always has to
> define how its memory is organized. When you write an algorithm you have to
> be sure that subsequent operations can work with the resulting data (e.g.
> your result is stored as float4). If you already know that it is a
> GLBufferObject, then it is clear that your Module receives the required
> information from the accompanying osg::Geometry object. Please tell me if I
> understood your question wrong.

I think you understood me wrong. The user does not know how an
osgCuda::Geometry object organizes its memory. For example, I construct a
TestModule tm and an osgCuda::Geometry geo and set geo's 3D vertices, 3D
normals, and 2D texture coordinates. I attach the identifier "thisIsAGeo" to
geo and then call tm->addModule(geo->getMemory()). In the TestModule class
there is an acceptResource, which is called with geo as argument. Now I want
to extract the vertices, normals, and texture coordinates from this geo
object. How do I do this? The following code demonstrates what I mean:

Code:
void TestModule::acceptResource(osgCompute::Resource& resource) {
    if (resource.isIdentifiedBy("thisIsAGeo")) {
        osgCompute::Memory* memory = dynamic_cast<osgCompute::Memory*>(&resource);
        unsigned char* base = static_cast<unsigned char*>(memory->map());
        //Is it first positions, then normals, then texCoords?
        positions = reinterpret_cast<Vec3f*>(base);
        normals   = reinterpret_cast<Vec3f*>(base + numVertices * sizeof(Vec3f));
        texCoords = reinterpret_cast<Vec2f*>(base + 2 * numVertices * sizeof(Vec3f));
        //Or is it first positions, then texCoords, then normals?
        positions = reinterpret_cast<Vec3f*>(base);
        texCoords = reinterpret_cast<Vec2f*>(base + numVertices * sizeof(Vec3f));
        normals   = reinterpret_cast<Vec3f*>(base + numVertices * (sizeof(Vec3f) + sizeof(Vec2f)));
        //Or is it first normals, then positions, then texCoords?
        normals   = reinterpret_cast<Vec3f*>(base);
        positions = reinterpret_cast<Vec3f*>(base + numVertices * sizeof(Vec3f));
        texCoords = reinterpret_cast<Vec2f*>(base + 2 * numVertices * sizeof(Vec3f));
        //Or is it....
        //I don't know, because I don't know how osgCuda::Geometry
        //organizes its memory
    }
}





> I do not understand what you mean here. To disable a module means that you 
> can turn off the functionality of a module

Will this stop the module only from calling launch each frame (i.e. disable
the callback only), so that if I call launch manually whenever I want, it
still executes this function and hence my CUDA code? Or will the module stop
working completely: ignore launch hints, no longer accept resources, remove
resources, get resources, etc.?


Based on your statements about the new version of osgCompute, I am curious
about it and looking forward to it. Is there a (rough) release date for this
new version?

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=43272#43272




