Re: [osg-users] Graphics Context and Window Handle

2013-09-19 Thread Robert Osfield
Hi Sajjadul,

The general approach looks fine.  I haven't worked with OpenCL yet, and
haven't seen the rest of your window/OpenCL setup so can't say what might
be amiss.

The only thing I can suggest is that you need to make sure that the
GraphicsWindow is realized before you start using the window handles.

Robert.


On 18 September 2013 23:26, Sajjadul Islam dosto.wa...@gmail.com wrote:

 Hi forum,

 I think Graphics Context actually encapsulates information on the way in
 which scene objects are drawn and rendering states are applied. OSG uses
 the osg::GraphicsContext class to represent the abstract graphics context.

 One of the functions I am writing now takes an osg::GraphicsContext object
 as a parameter, and from it I need to derive an OS-specific window handle.
 I am not sure how to do this, and I did not find any example of it either.

 Right now I am doing it in the following manner, and I think it is not
 correct.


 Code:


 #ifdef _WIN32
 #include <osgViewer/api/Win32/GraphicsHandleWin32>
 #elif defined(__GNUC__)
 #include <osgViewer/api/X11/GraphicsHandleX11>
 #elif defined(__APPLE__)
 #include <osgViewer/api/Carbon/GraphicsHandleCarbon>
 #endif
 ..
 ..

 bool Context::setupDeviceAndContext(osg::GraphicsContext& ctx)
 {
 ...
 #if defined(_WIN32)
   osgViewer::GraphicsHandleWin32 *windowsContext = NULL;
 #elif defined(__GNUC__)
   osgViewer::GraphicsHandleX11 *linuxContext = NULL;
 #elif defined(__APPLE__)
   osgViewer::GraphicsHandleCarbon *osxContext = NULL;
 #endif

   //platform dependent casting for the OpenCL context creation
 #if defined(_WIN32)
   windowsContext = dynamic_cast<osgViewer::GraphicsHandleWin32*>(&ctx);

   if(NULL == windowsContext)
   {
      osg::notify(osg::FATAL) << "Win32 Graphics Context Casting is "
         << "unsuccessful" << std::endl;
      return false;
   }
 #elif defined(__GNUC__)
   linuxContext = dynamic_cast<osgViewer::GraphicsHandleX11*>(&ctx);

   if(NULL == linuxContext)
   {
      osg::notify(osg::FATAL) << "X11 Graphics Context Casting is "
         << "unsuccessful" << std::endl;
      return false;
   }
 #elif defined(__APPLE__)
   osxContext = dynamic_cast<osgViewer::GraphicsHandleCarbon*>(&ctx);

   if(NULL == osxContext)
   {
      osg::notify(osg::FATAL) << "MACOSX Graphics Context Casting is "
         << "unsuccessful." << std::endl;
      return false;
   }
 #endif
 ...
 }
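 The platform branch above is just a checked downcast; whether it succeeds
 depends entirely on the dynamic type of the context object. A minimal,
 self-contained sketch of the pattern with hypothetical stand-in classes
 (not the real OSG types):

```cpp
// Hypothetical stand-ins for osg::GraphicsContext and the platform
// handle classes that derive from it.
struct GraphicsContext { virtual ~GraphicsContext() {} };
struct GraphicsHandleX11   : GraphicsContext { int display = 42; };
struct GraphicsHandleWin32 : GraphicsContext { int hdc     = 7;  };

// Returns the X11 display id, or -1 if ctx is not an X11 handle.
int getX11Display(GraphicsContext& ctx)
{
    // dynamic_cast on a *pointer* yields a null pointer on failure
    // instead of throwing, which is why the code casts &ctx, not ctx.
    GraphicsHandleX11* x11 = dynamic_cast<GraphicsHandleX11*>(&ctx);
    if (x11 == nullptr)
        return -1;  // wrong platform handle type
    return x11->display;
}
```

 Casting the reference itself, dynamic_cast<GraphicsHandleX11&>(ctx), would
 instead throw std::bad_cast on failure.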




 .
 I need this OS specific Window Handle to create a context property for
 OpenCL device as follows:


 Code:

  cl_context_properties contextProperties[] =
 {
 #if defined(_WIN32)
    CL_CONTEXT_PLATFORM, (cl_context_properties) _m_clPlatform,
    CL_GL_CONTEXT_KHR,   (cl_context_properties) windowsContext->getWGLContext(),
    CL_WGL_HDC_KHR,      (cl_context_properties) windowsContext->getHDC(),
 #elif defined(__GNUC__)
    CL_CONTEXT_PLATFORM, (cl_context_properties) _m_clPlatform,
    CL_GL_CONTEXT_KHR,   (cl_context_properties) linuxContext->getContext(),
    CL_GLX_DISPLAY_KHR,  (cl_context_properties) linuxContext->getDisplay(),
 #elif defined(__APPLE__)
    // Note: these two calls are statements, so they cannot appear inside
    // the array initializer; they must be executed before this declaration:
    //   CGLContextObj glContext = CGLGetCurrentContext();
    //   CGLShareGroupObj shareGroup = CGLGetShareGroup(glContext);
    CL_CONTEXT_PROPERTY_USE_CGL_SHAREGROUP_APPLE,
    (cl_context_properties) shareGroup,
 #endif
    0
 };





 You do not need to worry about the OpenCL stuff here. I just need to know
 how to derive OS-specific window handles from an osg::GraphicsContext.

 Is the way I am doing it right now the proper one? I am having trouble
 because the OpenCL context creation is giving me undefined behavior.

 If it is correct, then I have to concentrate somewhere else in the code.

 I would really appreciate your hint on this issue.


 Cheers,
 Sajjadul

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=56407#56407






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Graphics Context and Window Handle

2013-09-19 Thread Sajjadul Islam
Thanks Robert,

I am providing you with more details on this. I think I am realizing the
graphics window before using the handle. Still, to make sure, I am providing
the code for your kind review. Please check the following code snippet:


Code:

int main(int argc, char *argv[])
{
   osg::setNotifyLevel( osg::FATAL );

   //load the model
   osg::ref_ptr<osg::Node> model = osgDB::readNodeFile("cow.osg");

   //
   // SETUP VIEWER //
   //
   osgViewer::Viewer viewer;
   viewer.addEventHandler(new osgViewer::StatsHandler);
   viewer.addEventHandler(new osgOpenCL::StatsHandler);
   viewer.addEventHandler(new osgViewer::HelpHandler);
   viewer.setUpViewInWindow( 50, 50, 640, 480 );
   viewer.getCamera()->setClearColor( osg::Vec4(0.15, 0.15, 0.15, 1.0) );

   //create a singleton opencl context to make sure that
   //we are only working on one context
   osgOpenCL::Singleton<osgOpenCL::Context>::init();

   osgOpenCL::Singleton<osgOpenCL::Context>::getPtr()->setupOsgOpenCLAndViewer(viewer);

   viewer.setSceneData(setupScene());

   return viewer.run();
}
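osgOpenCL::Singleton<osgOpenCL::Context> is presumably there to guarantee a
single shared Context instance. As a rough illustration of what such a wrapper
typically does (my own simplified sketch, not the osgOpenCL implementation,
and assuming single-threaded use as the code above enforces):

```cpp
// Minimal lazy singleton; assumes T is default-constructible and that
// init()/getPtr() are only called from one thread.
template <typename T>
class Singleton
{
public:
    static void init()   { if (!_ptr) _ptr = new T; }
    static T*   getPtr() { init(); return _ptr; }  // lazy init as a fallback
private:
    static T* _ptr;
};

template <typename T>
T* Singleton<T>::_ptr = nullptr;

// Example payload standing in for osgOpenCL::Context.
struct DemoContext { int deviceCount = 1; };
```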



 

osgOpenCL::Singleton<osgOpenCL::Context>::getPtr()->setupOsgOpenCLAndViewer(viewer)
does the OpenCL context creation from the OpenGL context. As I mentioned
before, you do not need to worry about the OpenCL-related stuff. Now let's get
inside the function setupOsgOpenCLAndViewer(viewer):


Code:

   bool Context::setupOsgOpenCLAndViewer(osgViewer::ViewerBase& viewer,
                                         int ctxID /*= -1 */)
   {
      // You must use the single threaded version since osgCompute currently
      // only supports single threaded applications.
      viewer.setThreadingModel( osgViewer::ViewerBase::SingleThreaded );

      // Create a single OpenGL context
      // which is not released at the end of a frame, to secure
      // CUDA launches everywhere
      viewer.setReleaseContextAtEndOfFrameHint(false);

      // Create the current OpenGL context and make it current
      if( !viewer.isRealized() )
         viewer.realize();

      osgViewer::ViewerBase::Contexts ctxs;
      //return a list of all active contexts
      viewer.getContexts( ctxs, true );

      //make sure that we are not getting an empty context list
      if( ctxs.empty() )
      {
         osg::notify(osg::FATAL) << __FUNCTION__
            << ": no valid OpenGL context is found." << std::endl;
         return false;
      }

      osg::GraphicsContext* ctx = NULL;

      if( ctxID != -1 )
      {  // Find context with ctxID and make it current.
         for( unsigned int c=0; c<ctxs.size(); ++c )
         {
            if( ctxs[c]->getState()->getContextID() == ctxID )
            {
               ctx = ctxs[c];
            }
         }
      }
      else
      {
         //make the very first context in the list
         //the current context
         ctx = ctxs.front();
      }

      //make sure that the context is not NULL
      if( NULL == ctx )
      {
         osg::notify(osg::FATAL) << __FUNCTION__
            << ": cannot find valid OpenGL context." << std::endl;
         return false;
      }

      if(!setupDeviceAndContext(*ctx))
      {
         osg::notify(osg::FATAL) << __FUNCTION__
            << ": cannot setup OpenGL with OpenCL." << std::endl;
         return false;
      }
      return true;
   }
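The context-selection logic in the middle of this function (pick the context
whose ID matches, or fall back to the first one when ctxID is -1) can be
tested in isolation. Here is a standalone sketch, with a plain struct standing
in for osg::GraphicsContext:

```cpp
#include <vector>

// Stand-in for osg::GraphicsContext carrying only a context ID.
struct Ctx { unsigned int id; };

// Mirrors the selection in setupOsgOpenCLAndViewer(): wantedID == -1
// means "take the first"; otherwise return the matching context, or
// a null pointer if no context has that ID (the caller must check).
Ctx* selectContext(std::vector<Ctx>& ctxs, int wantedID)
{
    if (ctxs.empty())
        return nullptr;
    if (wantedID == -1)
        return &ctxs.front();
    for (unsigned int c = 0; c < ctxs.size(); ++c)
        if (ctxs[c].id == static_cast<unsigned int>(wantedID))
            return &ctxs[c];
    return nullptr;
}
```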




Now, inside setupDeviceAndContext() I am doing the OpenCL initialization, and
only part of the snippet is relevant for your review. Here it goes:


Code:

   bool Context::setupDeviceAndContext(osg::GraphicsContext& ctx)
   {
      if( ctx.getState() == NULL )
      {
         osg::notify(osg::WARN)
            << __FUNCTION__ << ": osg::GraphicsContext must have a valid state."
            << std::endl;

         return false;
      }

      // Use first context to be found and make it current.
      ctx.makeCurrent();

      if( NULL != osgCompute::GLMemory::getContext() &&
          osgCompute::GLMemory::getContext()->getState()->getContextID() !=
          ctx.getState()->getContextID() )
      {
         osg::notify(osg::WARN)
            << __FUNCTION__ << ": osgOpenCL can handle only a single context."
            << " However multiple contexts are detected."
            << " Please make sure to share a GL context among all windows."
            << std::endl;

         return false;
      }

      // Bind context to osgCompute::GLMemory
      if( osgCompute::GLMemory::getContext() == NULL )
         osgCompute::GLMemory::bindToContext( ctx );
      else
         return false;


#if defined(_WIN32)
      osgViewer::GraphicsHandleWin32 *windowsContext = NULL;
#elif defined(__GNUC__)
      osgViewer::GraphicsHandleX11 *linuxContext = NULL;
#elif defined(__APPLE__)
      osgViewer::GraphicsHandleCarbon *osxContext = NULL;
#endif

      //platform dependent casting for the OpenCL context creation
#if defined(_WIN32)
      windowsContext = dynamic_cast<osgViewer::GraphicsHandleWin32*>(&ctx);

      if(NULL == windowsContext)
      {
         osg::notify(osg::FATAL) << "Win32 Graphics Context Casting is "
            << "unsuccessful" << std::endl;
         return false;
      }
#elif defined(__GNUC__)
      linuxContext =

[osg-users] Weird ui methods in image class?

2013-09-19 Thread Remo Eichenberger
Hi,

I've just found weird methods in osg::Image

virtual bool sendFocusHint(bool /*focus*/) { return false; }
virtual bool sendPointerEvent(int /*x*/, int /*y*/, int /*buttonMask*/) { 
return false; }
virtual bool sendKeyEvent(int /*key*/, bool /*keyDown*/) { return false; }

Are they really needed in a core image class ? 

Cheers,
Remo

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=56412#56412







Re: [osg-users] Weird ui methods in image class?

2013-09-19 Thread Robert Osfield
Hi Remo,

These methods are there to help implement features like interactive browser,
PDF and VNC textured geometry in your scene. There are various plugins that
subclass osg::Image to provide the backend implementations for these
third-party ways to render and interact.

Robert.




[osg-users] graphics context capabilities in cull phase

2013-09-19 Thread PCJohn
Hi Robert,

I noticed one missing feature in OSG and I would be glad to discuss the 
solution with you before making further steps.

The problem:

- In the cull phase, it would be useful to be able to find out which OpenGL 
extensions are supported and which GLSL version is available in the graphics 
context. Why might anybody need this? The programmer might want to cull 
different scene graphs for contexts with different capabilities. For example, 
if the target platform supports two-sided stencil, it might cull one scene 
graph (a faster and simpler graph), while providing another scene graph if 
only single-sided stenciling is available. The same goes for geometry shaders: 
they might be supported through the GL_ARB_geometry_shader4 extension, 
requiring their own shader code; through OpenGL 3.2, with different shader 
code; or the programmer might provide yet another graph for OpenGL 4.0 
geometry shaders, profiting from the multiple-invocations capability 
introduced in GLSL 4.0. However, to make this work, we need to get the 
graphics context capabilities in the cull phase.

Proposed solutions (for you to judge and possibly suggest the best approach):

- To avoid extensive changes to OSG, I think we can use the existing approach 
of the Extensions nested class, like Drawable::Extensions, Stencil::Extensions, 
etc. The advantage of this approach is that the user may detect whatever he 
may want or think of inside Extensions::setupGLExtensions(), and users are 
already familiar with this concept. The problem with the Extensions class is 
that it is not initialized until the first getExtensions() call inside the 
draw phase. Usually this means that we cannot get a valid Extensions instance 
in the first frame's cull phase. And it is still not guaranteed to be 
initialized for the second frame.

- My idea, and I ask your counsel, was to auto-initialize all *::Extensions 
instances for any new graphics context that is realized in the application. If 
this might be overkill for some reason, we might provide a per-graphics-context 
flag so that the user can choose which contexts should use this functionality 
and which should not.

- To implement such functionality, we might use a proxy approach and register 
all *::Extensions classes in some global list. Then we might, for instance, 
go through the list and call setupGLExtensions() for all registered 
*::Extensions classes. This procedure would be done whenever a context is 
realized.

- Another approach might be to give the user the possibility to register a 
GraphicsContext::realizeCallback (something like swapCallback, but for 
realize). This way, the user may initialize the required *::Extensions classes 
himself. The disadvantage of this approach is that the user would be required 
to write some code (the initialization code and callback registration). The 
proxy approach mentioned above would do the job for him automatically.
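The proxy approach can be sketched as a global list of per-extension setup
functions: a file-scope proxy object registers its function at program start,
and one call runs every registered function when a context is realized. A
simplified standalone illustration (all names are mine, not a proposed OSG
API):

```cpp
#include <cstddef>
#include <vector>

typedef void (*SetupFunc)(unsigned int /*contextID*/);

// Function-local static avoids static-initialization-order problems
// between the registry and the proxies that use it.
static std::vector<SetupFunc>& registry()
{
    static std::vector<SetupFunc> r;
    return r;
}

// A proxy registers one setup function from its constructor, so a
// file-scope proxy instance registers its extension automatically.
struct ExtensionsProxy
{
    explicit ExtensionsProxy(SetupFunc f) { registry().push_back(f); }
};

// Called once per realized context (e.g. from a realize callback).
static void setupAllExtensions(unsigned int contextID)
{
    for (std::size_t i = 0; i < registry().size(); ++i)
        registry()[i](contextID);
}

// Example "extension module":
static int g_stencilSetupCalls = 0;
static void setupStencilExtensions(unsigned int) { ++g_stencilSetupCalls; }
static ExtensionsProxy s_stencilProxy(&setupStencilExtensions);
```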

Before discussing implementation details, do you think that the proposal is 
reasonable, or would you prefer a different approach to the problem? 
Essentially, it is about the ability to know the graphics context capabilities 
during the cull phase, to be able to cull the scene graph that is best suited 
for the graphics context.

Thanks for the answer and keeping OSG growing.
John
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] graphics context capabilities in cull phase

2013-09-19 Thread Robert Osfield
Hi John,

I don't believe it's necessary to initialize all extensions together. If you
do have code paths or scene graph subgraph selection that depends on the
presence of an extension/version, it will be for specific extensions/versions,
so you could initialize just the ones of interest for your application within
a viewer realize operation; see osgvolume for an example of this. So without
any changes to the OSG you can achieve what you want with a little bit of
coding.

Initializing all extensions on realize is no bad thing though. One would need
to have a registry of extension objects and a proxy object for each one to
register itself with the extension registry. One could then get the supported
extensions by calling the registry. However, this type of scheme would add
extra complexity, and increase both the memory footprint of the OSG and the
initialization time, as you'd need to register all extensions, even the ones
that your application doesn't need.

Robert.






[osg-users] Geode getDisplaySize functionality?

2013-09-19 Thread Zach Basanese
Hello,

I am wondering if there is any way to get the display size of a geode on 
screen. Specifically, I would like to get the on-screen display size of a 
Cylinder, our sun, so that I can make some calculations. I'm using the number 
of visible samples given by an Occlusion Query Node to see how much of the sun 
is visible. The maximum number of samples changes depending on the on-screen 
size of the sun, so I would like to be able to find that number at any given 
time. For example, the max when zoomed out is 1, while the max when zoomed in 
is much larger.
Any help is much appreciated.

Thanks in advance!

Zach
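One hedged way to estimate this, assuming a symmetric perspective projection:
a bounding sphere of radius r whose center is at eye-space distance d projects
to roughly r / (d * tan(fovy/2)) * (viewportHeight/2) pixels of radius, so the
maximum sample count scales with the square of that value. A standalone sketch
of the estimate (my own helper, not an OSG API; in OSG you would take r and
the center from the node's bounding sphere and transform the center by the
camera's view matrix to get d):

```cpp
#include <cmath>

// Approximate on-screen radius, in pixels, of a bounding sphere under a
// symmetric perspective projection. fovyRadians is the full vertical
// field of view; eyeDistance is the eye-space distance to the sphere's
// center. Only meaningful while the sphere is well in front of the
// camera (eyeDistance considerably larger than radius).
double projectedRadiusPixels(double radius, double eyeDistance,
                             double fovyRadians, double viewportHeight)
{
    double halfScreenWorld = eyeDistance * std::tan(fovyRadians / 2.0);
    return radius / halfScreenWorld * (viewportHeight / 2.0);
}
```

The projected disc area, roughly pi times the square of this radius, gives a
ballpark for the occlusion query's maximum sample count (assuming one sample
per pixel, i.e. no multisampling).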

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=56417#56417




