Re: [osg-users] [build] Build problem

2010-01-04 Thread Leif Delgass
On Mon, Jan 4, 2010 at 12:26 PM, Andy Garrison garrison_tho...@bah.com wrote:
 Hi,

 I have been trying to build OpenSceneGraph on RedHat Linux 5 64-bit and get 
 an error about missing glu.h and glut.h files. They are supposed to reside in the 
 /usr/local/include/GL directory. I have installed all the packages with 
 RedHat and those files do not exist. I had OSG on a 32-bit version with 
 RedHat 4 and it worked fine. Has anyone seen this problem, and where and how do 
 I go about resolving this to get it installed?

 Thank you!

 Cheers,
 Andy

Hi Andy,

You need to install 2 packages:
% yum install mesa-libGLU-devel freeglut-devel

This will give you /usr/include/GL/glu.h and /usr/include/GL/glut.h,
along with the 64-bit libraries for GLU and glut in /usr/lib64.

Note that the mesa libGLU package is independent of the Mesa/DRI
OpenGL hardware drivers, so it won't cause any conflict with the
NVIDIA or AMD/ATI proprietary drivers if you use them.

-Leif
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Red, Green and Blue, now I've tried two!

2009-04-10 Thread Leif Delgass
On Thu, Apr 9, 2009 at 5:01 AM, Robert Osfield robert.osfi...@gmail.com wrote:
 Things aren't perfect though.  No Texture3D support, so volume rendering
 support fails.  There is also no PBuffer support.  No texture compression
 support either, so standard VPB-generated models just result in white
 models.  The vertex throughput is also very poor; even small models like
 cow.osg max out at frame rates of 200fps, while I normally get many
 thousands of fps on ATI and NVidia.  Big town models also get really slow when
 lots of objects/geometry are in the scene.  I kinda suspect that the drivers
 aren't well optimized for the vertex load.

Robert,

Regarding texture compression: by default Mesa DRI drivers don't
enable texture compression due to patent and conformance issues.
There is no software compression/decompression in the Mesa
distribution because of patents, but it has hooks for a software
implementation in a shared library.  If the hardware (and driver)
supports decompression, you can force the s3tc extensions on using
driconf (there is a force_s3tc_enable option that it can write to an
XML config in ~/.drirc).  Without the libtxc_dxtn library, using the
GL extension for compressing textures or reading back compressed
textures won't work, but texturing from pre-compressed textures should
work.  Because conformance to the extension spec requires those
features, this option is off by default.  The legality of using the
external library depends on the patent laws in your area, but there is a
source distribution and more info available here:

http://homepage.hispeed.ch/rscheidegger/dri_experimental/s3tc_index.html
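
For reference, the kind of entry driconf writes would look roughly like
this in ~/.drirc (a sketch only; the driver name depends on your
hardware, "i915" here is just an example):

<driconf>
    <device screen="0" driver="i915">
        <application name="all">
            <option name="force_s3tc_enable" value="true" />
        </application>
    </device>
</driconf>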

I don't have much experience with the intel DRI drivers, but I'd be
surprised if they didn't have support for decompression where the
hardware can do it, since it is fairly simple to implement in a Mesa
driver if you have the hardware documentation.

Leif Delgass
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Feedback sought on osgViewer swap ready support for clusters

2008-04-14 Thread Leif Delgass
On Mon, Apr 14, 2008 at 11:26 AM, Robert Osfield
[EMAIL PROTECTED] wrote:
 Hi Mike,


 On Mon, Apr 14, 2008 at 4:11 PM, Mike Weiblen [EMAIL PROTECTED]
 wrote:

  fyi I also have need for swap sync across a networked cluster, but at
  the generic OpenGL rather than OSG level.

 What do you mean by generic OpenGL rather than OSG level?  I am trying to
 get my head around the topic, so specifics are very useful right now.

  We've implemented an
  out of band handshake protocol in our network pipeline, but would be
  glad to adopt a GL standard if possible.  So I'm following this topic
  w/ interest.

 So you've implemented a software swap ready barrier then?

 How do you currently wire this up with the OSG?

 Robert.


Hi Robert,

I have experience using both software-based swap ready and NVIDIA
hardware synchronization with OSG.  In both cases this was in
conjunction with VR Juggler handling the context setup and main loop
(with an OSG SceneView per window/node).  VR Juggler includes a
TCP-based software barrier.  For hardware sync, I had hacked support
into the VR Juggler context/window setup code, but there is now core
support for that in VR Juggler.  I would recommend having a software
method available as a fallback, because the NVIDIA hardware
implementation hasn't always worked for me, especially on large
clusters (I work with a 27-node tiled wall now), though I've mostly
worked with Quadro cards with the sync feature integrated rather than the new
G-Sync cards (we've used those in a 4-node system with success, IIRC).
There is finally a spec in the registry for NVIDIA's extension, which
is based on the old SGIX extensions:

http://www.opengl.org/registry/specs/NV/glx_swap_group.txt
http://www.opengl.org/registry/specs/NV/wgl_swap_group.txt
http://www.opengl.org/registry/specs/SGIX/swap_group.txt
http://www.opengl.org/registry/specs/SGIX/swap_barrier.txt

Note that these also interact with the swap interval in
GLX_SGI/WGL_swap_control, so you can sync on a multiple of the
blanking interval.

I think in theory you can use the swap group to get your barrier for
multiple contexts on a single machine, and then bind the swap group to
the global swap barrier, but I've only worked with one context and one
screen per node (single card in each node).  In my case the NVIDIA
extension supported a single swap group for GLX windows on each
machine and a single global barrier (you can query the limit with the
NV extension).  I always have sync to vblank enabled either by
environment variable or through nvidia-settings, and you need to set
up all the cards and the sync master in the Frame Lock settings of
nvidia-settings as well (I usually have a shell script wrapper that
ensures all that is set up by calling nvidia-settings from the command
line).

Also, (as with any extensions) remember to check for the extension
string in the GLX extensions before attempting to use it, since some
implementations may return non-NULL entry points even if the extension
isn't supported.
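
As an illustration only (this isn't OSG code), the check-and-join
sequence with the NV extension looks roughly like the following; error
handling is omitted, and group/barrier ID 1 just reflects the single
group and single barrier case I described above:

#include <string.h>
#include <GL/glx.h>

// Sketch: join swap group 1 on this machine and bind it to global barrier 1.
bool joinSwapGroupAndBarrier(Display* dpy, int screen, GLXDrawable drawable)
{
    const char* ext = glXQueryExtensionsString(dpy, screen);
    if (!ext || !strstr(ext, "GLX_NV_swap_group"))
        return false;  // entry points may be non-NULL even without support

    typedef Bool (*JoinSwapGroupProc)(Display*, GLXDrawable, GLuint);
    typedef Bool (*BindSwapBarrierProc)(Display*, GLuint, GLuint);

    JoinSwapGroupProc joinSwapGroup = (JoinSwapGroupProc)
        glXGetProcAddress((const GLubyte*)"glXJoinSwapGroupNV");
    BindSwapBarrierProc bindSwapBarrier = (BindSwapBarrierProc)
        glXGetProcAddress((const GLubyte*)"glXBindSwapBarrierNV");
    if (!joinSwapGroup || !bindSwapBarrier)
        return false;

    // Swap interval (sync to vblank) is configured separately, e.g. via
    // GLX_SGI_swap_control, the __GL_SYNC_TO_VBLANK variable or nvidia-settings.
    return joinSwapGroup(dpy, drawable, 1) && bindSwapBarrier(dpy, 1, 1);
}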

-- 
Leif Delgass
[EMAIL PROTECTED]
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Feedback sought on osgViewer swap ready support for clusters

2008-04-14 Thread Leif Delgass
On Mon, Apr 14, 2008 at 2:29 PM, Leif Delgass [EMAIL PROTECTED] wrote:
 On Mon, Apr 14, 2008 at 11:26 AM, Robert Osfield

 [EMAIL PROTECTED] wrote:


  Hi Mike,
  
  
   On Mon, Apr 14, 2008 at 4:11 PM, Mike Weiblen [EMAIL PROTECTED]
   wrote:
  
fyi I also have need for swap sync across a networked cluster, but at
the generic OpenGL rather than OSG level.
  
   What do you mean by generic OpenGL rather than OSG level?  I am trying to
   get my head around the topic, so specifics are very useful right now.
  
We've implemented an
out of band handshake protocol in our network pipeline, but would be
glad to adopt a GL standard if possible.  So I'm following this topic
w/ interest.
  
   So you've implemented a software swap ready barrier then?
  
   How do you currently wire this up with the OSG?
  
   Robert.
  

  Hi Robert,

  I have experience using both software-based swap ready and NVIDIA
  hardware synchronization with OSG.  In both cases this was in
  conjunction with VR Juggler handling the context setup and main loop
  (with an OSG SceneView per window/node).  VR Juggler includes a
  TCP-based software barrier.  For hardware sync, I had hacked support
  into the VR Juggler context/window setup code, but there is now core
  support for that in VR Juggler.  I would recommend having a software
  method available as a fallback, because the NVIDIA hardware
  implementation hasn't always worked for me, especially on large
  clusters (I work with a 27-node tiled wall now), though I've mostly
  worked with Quadro cards with the sync feature integrated rather than the new
  G-Sync cards (we've used those in a 4-node system with success, IIRC).
  There is finally a spec in the registry for NVIDIA's extension, which
  is based on the old SGIX extensions:

  http://www.opengl.org/registry/specs/NV/glx_swap_group.txt
  http://www.opengl.org/registry/specs/NV/wgl_swap_group.txt
  http://www.opengl.org/registry/specs/SGIX/swap_group.txt
  http://www.opengl.org/registry/specs/SGIX/swap_barrier.txt

  Note that these also interact with the swap interval in
  GLX_SGI/WGL_swap_control, so you can sync on a multiple of the
  blanking interval.

  I think in theory you can use the swap group to get your barrier for
  multiple contexts on a single machine, and then bind the swap group to
  the global swap barrier, but I've only worked with one context and one
  screen per node (single card in each node).  In my case the NVIDIA
  extension supported a single swap group for GLX windows on each
  machine and a single global barrier (you can query the limit with the
  NV extension).  I always have sync to vblank enabled either by
  environment variable or through nvidia-settings, and you need to set
  up all the cards and the sync master in the Frame Lock settings of
  nvidia-settings as well (I usually have a shell script wrapper that
  ensures all that is set up by calling nvidia-settings from the command
  line).

  Also, (as with any extensions) remember to check for the extension
  string in the GLX extensions before attempting to use it, since some
  implementations may return non-NULL entry points even if the extension
  isn't supported.


I forgot to mention, you are correct that you don't need an explicit
sync, the normal swap buffers call is all you need once you have bound
the swap barrier (and enabled frame lock in the driver settings).

Leif
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Feedback sought on osgViewer swap ready support for clusters

2008-04-14 Thread Leif Delgass
On Mon, Apr 14, 2008 at 3:03 PM, Robert Osfield
[EMAIL PROTECTED] wrote:
 Hi Leif,

 Thanks for the info, very useful.  Do you still have access to the various
 hardware?  I'm thinking of what we can do in terms of testing if/when we get
 code integrated into osgViewer.

I'll try to help out as time permits.  The biggest issue is having a
hardware configuration that is known to work to test on!  I have had
problems getting even simple demo tests to sync properly across our
cluster with the hardware method.  I might need to try subsets of
nodes to find a stable configuration.

 W.r.t. swap groups and barriers, I am wondering about putting the
 group/barrier IDs as parameters of osg::GraphicsContext::Traits, letting them
 be ints with a -1 value signifying inactive.  If they are positive then
 they could be used in graphics context realize code to assign them to the appropriate
 groups/barriers.  We'd possibly also need an overall hint in
 osg::DisplaySettings to say whether we want swap groups activated by
 default, or to disable ones being requested in case of flaky driver
 implementations.

An overall hint would be useful.  If you are lucky, you'll just slip
the barrier if the hardware/driver has problems, but a lock up isn't
out of the realm of possibility, so a global disable is nice. ;)  As
far as the traits go, what you suggest will work for the NVIDIA
implementation (and as I said, in my case the ID had to be 1 for both
group and barrier).  If you want to support the SGI extension, it's a
bit trickier, since you bind a window to the group by supplying the
drawable of another window which is already bound! (yuck)  The thing
is, even when we still had an Onyx running, I don't recall having to
use the extension; it seemed that frame lock was enabled at the driver
configuration level, but I don't have access to SGI hardware anymore
to test.  Maybe the extension would be supported on the Prism?
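
To make that concrete, realize-time code along the lines you describe
might look roughly like this (purely hypothetical: swapGroup and
swapBarrier are the proposed Traits fields, not existing OSG ones, and
the two entry points stand in for glXJoinSwapGroupNV and
glXBindSwapBarrierNV):

#include <GL/glx.h>

// Hypothetical sketch of applying proposed swap group/barrier traits.
// NV entry points, obtained via glXGetProcAddress as in my earlier message:
extern Bool (*joinSwapGroupNV)(Display*, GLXDrawable, GLuint);
extern Bool (*bindSwapBarrierNV)(Display*, GLuint, GLuint);

void applySwapTraits(Display* dpy, GLXDrawable drawable,
                     int swapGroup, int swapBarrier)
{
    if (swapGroup < 0) return;        // -1 signifies inactive
    joinSwapGroupNV(dpy, drawable, (GLuint)swapGroup);
    if (swapBarrier >= 0)             // bind the group to the cluster-wide barrier
        bindSwapBarrierNV(dpy, (GLuint)swapGroup, (GLuint)swapBarrier);
}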

 Would it be possible to list the API setup/entry points to the software sync
 functions?  This will help me get an idea of what would be required.

 Robert.

I'm not sure if this will help you, since I don't use osgViewer.  In
my case, VR Juggler handles the render loop, with each context handled
by a separate thread/SceneView (though I have a single
context/thread per node).  Synchronization happens after calling
SceneView::draw(), so a callback from OSG isn't necessary (from VRJ's
perspective it's "don't call us, we'll call you").  The sync happens
in VRJ's kernel and draw manager classes.  There is a semaphore for
the draw threads, and then the cluster barrier is implemented by a
cluster plugin that handles the sync packets.  There is also a start
barrier plugin to ensure that all nodes wait to begin the first frame
until all are ready, again with TCP.  From the application developer's
perspective, you subclass your application class from an OsgApp class
and hand it to the Juggler kernel, so you don't need to deal with the
render loop and sync code, you just fill in methods that are called
before drawing the frame (pre- and post- cluster sync), during frame
draw, post-frame, etc.
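
Schematically, the application side of that pattern looks something
like the following (illustrative only; the real VR Juggler class and
method names differ, so treat these as placeholders):

// Placeholder framework types, not the actual VR Juggler API.
struct OsgAppBase
{
    virtual ~OsgAppBase() {}
    virtual void preFrame()  {}   // called before the cluster sync for a frame
    virtual void drawScene() {}   // per-context draw, invoked by the draw threads
    virtual void postFrame() {}   // called after all nodes have drawn and synced
};

// The application only fills in the hooks; the kernel owns the render loop,
// the draw-thread semaphore and the cluster barrier plugins.
struct MyApp : OsgAppBase
{
    virtual void preFrame()  { /* update simulation state */ }
    virtual void drawScene() { /* issue the SceneView draw for this context */ }
    virtual void postFrame() { /* per-frame cleanup */ }
};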

-- 
Leif Delgass
[EMAIL PROTECTED]
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Shader question concerning GL_LIGHTING etc.

2008-04-07 Thread Leif Delgass
On Mon, Apr 7, 2008 at 11:01 PM, Mike Weiblen [EMAIL PROTECTED] wrote:
 Hi,


  On Mon, Apr 7, 2008 at 4:13 PM, Yefei He [EMAIL PROTECTED] wrote:
   Hi, Folks,
  
   This may be OT, but I'm hoping other OSG users may have worked on
   the same issue. I would like to use per-pixel shader lights as car
   headlights in place of standard GL spotlights in my driving simulator
   program based on OSG. The geometric models are loaded from external
   files such as .ive or .flt models. My concern is, I don't seem to be
   able to retrieve enough information for the current fragment inside
   the shader. For example, some faces in a model may be set not
   to be lit, i.e. GL_LIGHTING is disabled on those faces. Is there a
   way to retrieve this information from inside the shader?

  Yes, you can pass it via a uniform.  But if you're asking if there's a
  built-in uniform, no, there isn't.  You can see the list of built-ins
  (at least for GLSL 1.10) in the glsl_quickref.pdf.

  And in general, from the GLSL standpoint as a standard, the momentum
  is away from built-ins.  Otherwise it winds up as the union of every
  possible application's desired state being exposed as uniforms, adding
  burden to every shader and consuming uniform storage arrays, which
  are a finite resource.

  -- mew

To add to that, the trend is toward making the API align better with
modern hardware (see plans for OpenGL 3 and beyond), and modern
hardware doesn't have those state bits/registers anymore (with the
exception of the parts of the pipeline that are not yet programmable).
Instead of the driver emulating fixed functionality that you don't
really want to use anyway, the responsibility for this type of state
management is shifted to the application developer who understands
what the shader needs as inputs.  The hardware has general purpose
registers/stream-buffers, so that's what you'll be asked to use in the
leaner API (and driver) world.

You may want to generate shaders optimized for particular state
combinations so you don't waste instructions on code paths you know
will not be taken in the shader for many primitives (the driver is
undoubtedly doing this to some extent for the emulated fixed-function
paths).  In that case you would be swapping between shaders instead of
changing uniform values to make your state changes.  An alternative is
to use an ubershader that has many conditionals to switch behavior
based on uniform values.  Given the number of different combinations,
you might end up with something in-between (e.g. a small number of
shaders with a small number of conditionals) or perhaps a system to
generate shaders from code fragments at runtime.
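
As a small illustration, the uniform-driven (ubershader-style) switch
could look roughly like this in OSG and GLSL (a minimal sketch; the
uniform names and the toy "lighting" computation are arbitrary):

#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>
#include <osg/Uniform>
#include <osg/Vec4>

// Sketch: replace the fixed-function GL_LIGHTING bit with an application-managed
// uniform that the fragment shader branches on.
static const char* fragSource =
    "uniform bool lightingEnabled;\n"
    "uniform vec4 baseColor;\n"
    "void main()\n"
    "{\n"
    "    vec4 color = baseColor;\n"
    "    if (lightingEnabled)\n"
    "        color.rgb *= 0.5;   // stand-in for a real lighting computation\n"
    "    gl_FragColor = color;\n"
    "}\n";

void applyLightingSwitch(osg::StateSet* stateset, bool lit)
{
    osg::Program* program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragSource));
    stateset->setAttributeAndModes(program);
    stateset->addUniform(new osg::Uniform("lightingEnabled", lit));
    stateset->addUniform(new osg::Uniform("baseColor", osg::Vec4(1.0f, 1.0f, 1.0f, 1.0f)));
}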

There are new NV extensions that let you set up uniform buffers
("bindable uniforms") so you can stream your uniform values to the
card instead of using immediate mode calls (glUniform*) to set your
uniforms.  I'm not sure if support for that extension has landed in
OSG core yet, and it requires recent hardware (DX 10 class only?).

Leif Delgass
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] 2D Scrolled Window Implementation

2007-11-12 Thread Leif Delgass
On Nov 12, 2007 3:46 PM, Vladimir Shabanov [EMAIL PROTECTED] wrote:
 2007/11/12, Jeremy L. Moles [EMAIL PROTECTED]:
  On Mon, 2007-11-12 at 19:41 +, Robert Osfield wrote:
   Hi Jeremy,
  
   I wouldn't worry about user-defined clip planes, or special texture
   matrix set up.  I suggest getting to grips with plain OpenGL modelview
   and projection matrices and how they work, the OSG's Camera view and
   projection matrices map directly to OpenGL's so docs on the web and
   OpenGL books on the topic of cameras should help you.  Once you grasp
   this stuff hopefully things will become second nature and the
   solutions will pop out of this better understanding.  I'm no great
   educator though, so I have to defer to other texts for this, but I do
   know that the time you invest in understanding modelview and
   projection matrices will repay you many times over throughout the rest
   of your career.
 
  :)
 
  Well, I wouldn't go so far as to say I don't have ANY understanding of
  those concepts--it's just that translating your recommendation into
  implementation isn't immediately obvious given both my understanding of
  GL in general and my familiarity with OSG.
 
  At any rate, introducing clipping may solve the issue of not rendering
  the undesirable portions of a piece of geometry, but how does it
  affect picking? Is it possible to pick objects that are clipped (this
  seems to be the case from my small tests here, though I'm probably doing
  something wrong)...

 As far as I understand, no picking problems should appear when
 osg::Camera's viewport is set correctly.

 You don't need to remember math for these A,B,C,D. Just use
 osg::Plane( Vec3 normal, Vec3 point ) constructor. Something like
 Plane( Vec3( 1,0,0 ), Vec3( x1,y1,0) )
 Plane( Vec3( 0,1,0 ), Vec3( x1,y1,0) )
 Plane( Vec3( -1,0,0 ), Vec3( x2,y2,0) )
 Plane( Vec3( 0,-1,0 ), Vec3( x2,y2,0) )

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

I used clip planes for a 2D/3D scrolling window (a sort of spreadsheet
with textures, text and some 3D models in cells).  The problem I ran into was
that once you have more than one scrolling window, you hit the limitation
that OSG does not support re-use of OpenGL clip planes without setting up
multiple render stages.  I think there may have been a patch submitted to allow
re-use of clip planes with multiple render bins, but I'm not sure of the status
of that.
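
For what it's worth, the four planes Vladimir lists map onto the OSG
clip plane API roughly like this (a minimal sketch, assuming x1,y1 and
x2,y2 are the window's lower-left and upper-right corners):

#include <osg/ClipNode>
#include <osg/ClipPlane>
#include <osg/Plane>
#include <osg/Vec3>

// Sketch: clip scrolled-window content to the rectangle (x1,y1)-(x2,y2).
// Each ClipPlane claims one of the OpenGL clip plane slots (0-3 here), which
// is exactly the limited resource that bites once you have several windows.
osg::ClipNode* createWindowClip(double x1, double y1, double x2, double y2)
{
    osg::ClipNode* clipNode = new osg::ClipNode;
    clipNode->addClipPlane(new osg::ClipPlane(0, osg::Plane(osg::Vec3( 1.0f, 0.0f, 0.0f), osg::Vec3(x1, y1, 0.0f))));
    clipNode->addClipPlane(new osg::ClipPlane(1, osg::Plane(osg::Vec3( 0.0f, 1.0f, 0.0f), osg::Vec3(x1, y1, 0.0f))));
    clipNode->addClipPlane(new osg::ClipPlane(2, osg::Plane(osg::Vec3(-1.0f, 0.0f, 0.0f), osg::Vec3(x2, y2, 0.0f))));
    clipNode->addClipPlane(new osg::ClipPlane(3, osg::Plane(osg::Vec3( 0.0f,-1.0f, 0.0f), osg::Vec3(x2, y2, 0.0f))));
    // the window's scrolled content gets added as children of clipNode
    return clipNode;
}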

Leif Delgass
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] full screen mode in osgviewer and options in command line

2007-10-03 Thread Leif Delgass
   MapNotify
   MapNotify x=0 y=0 width=1440, height=900
   Expose x=0 y=0 width=1440, height=900
   ConfigureNotify x=0 y=0 width=1440, height=900
  
  
   Robert Osfield wrote:
  
   Hi Cedric,
  
   Which version of the OSG are you working against?  What window manager
   are you using?
  
   On 10/3/07, Cedric Pinson [EMAIL PROTECTED] wrote:
  
  
   Hi,
  
   I see strange behaviour when I run osgviewer: it starts in fullscreen
   mode, and when I try to disable fullscreen to go into windowed mode with
   the 'f' key, it does not manage to make the switch. I will dig, but I
   would like to know if I am alone or not.
  
   and I saw that:
   osgviewer --help-keys
   Usage: osgviewerd [options] filename ...
  
   can someone report the same behaviour?
  
   Cedric
  
   --
   +33 (0) 6 63 20 03 56  Cedric Pinson mailto:[EMAIL PROTECTED] 
   http://www.plopbyte.net
  
  
   ___
   osg-users mailing list
   osg-users@lists.openscenegraph.org
   http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



-- 
Leif Delgass
[EMAIL PROTECTED]
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Call for feedback : glu.h does it contain _GLUfuncptr?

2007-10-01 Thread Leif Delgass
On 10/1/07, Robert Osfield [EMAIL PROTECTED] wrote:
 Hi All,

 In tracking down a build problem that exists with the CMake build
 under OSX, and I believe under certain MinGW (or perhaps Cygwin) setups
 as well, it comes down to different glu.h headers defining the glu tessellator
 callback in different ways.  This issue has in the past resulted in hacks
 in include/osg/GLU to try and choose the right form; alas, this hasn't
 proved foolproof.  So onward we must go to find out what combination
 might work...

 I'd like your feedback: how is the glu tessellator callback defined on
 your system?  On my linux box, /usr/include/GL/glu.h has:



 /* Internal convenience typedefs */
 typedef void (GLAPIENTRYP _GLUfuncptr)();

 
 

 GLAPI void GLAPIENTRY gluNurbsCallback (GLUnurbs* nurb, GLenum which,
 _GLUfuncptr CallBackFunc);


 So what is the equivalent definition on your machine?  Please feel
 free to copy and paste the relevant sections.  The more platforms I can
 get feedback on, the better.

 Robert.

Hi Robert,

I have the same as you in Fedora Core 5.  This header comes from the
Mesa 6.4.2 libGLU devel rpm on my system, and is located in
/usr/include/GL.

Leif Delgass
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Call for feedback : glu.h does it contain _GLUfuncptr?

2007-10-01 Thread Leif Delgass
On 10/1/07, Leif Delgass [EMAIL PROTECTED] wrote:
 On 10/1/07, Robert Osfield [EMAIL PROTECTED] wrote:
  Hi All,
 
  In tracking down a build problem that exists with the CMake build
  under OSX, and I believe under certain MinGW (or perhaps Cygwin) setups
  as well, it comes down to different glu.h headers defining the glu tessellator
  callback in different ways.  This issue has in the past resulted in hacks
  in include/osg/GLU to try and choose the right form; alas, this hasn't
  proved foolproof.  So onward we must go to find out what combination
  might work...

  I'd like your feedback: how is the glu tessellator callback defined on
  your system?  On my linux box, /usr/include/GL/glu.h has:
 
 
 
  /* Internal convenience typedefs */
  typedef void (GLAPIENTRYP _GLUfuncptr)();
 
  
  
 
  GLAPI void GLAPIENTRY gluNurbsCallback (GLUnurbs* nurb, GLenum which,
  _GLUfuncptr CallBackFunc);
 
 
  So what is the equivalent definition on your machine?  Please feel
  free to copy and paste the relevant sections.  The more platforms I can
  get feedback on, the better.
 
  Robert.

 Hi Robert,

 I have the same as you in Fedora Core 5.  This header comes from the
 Mesa 6.4.2 libGLU devel rpm on my system, and is located in
 /usr/include/GL.

 Leif Delgass

I just checked a Red Hat Enterprise 4 box and it has:
/* Internal convenience typedefs */
#ifdef __cplusplus
typedef GLvoid (*_GLUfuncptr)();
#else
typedef GLvoid (*_GLUfuncptr)(GLvoid);
#endif

In RHEL4, this header is part of the xorg-x11-devel package for X.org 6.8.2.
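
To illustrate why the difference matters, the cast you need when
registering a tessellator callback depends on which form the header
provides (a sketch, not the actual include/osg/GLU code):

#include <GL/glu.h>

// Sketch: registering a tessellator vertex callback against the glu.h
// variants quoted above.
static void GLAPIENTRY vertexCallback(void* vertexData)
{
    // emit the vertex using vertexData
}

void registerCallback(GLUtesselator* tess)
{
    // Headers that define the convenience typedef (Mesa and the RHEL4/X.org
    // header) accept:
    gluTessCallback(tess, GLU_TESS_VERTEX, (_GLUfuncptr) vertexCallback);

    // Headers without the typedef need a direct function-pointer cast, e.g.:
    //     gluTessCallback(tess, GLU_TESS_VERTEX, (GLvoid (*)()) vertexCallback);
}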

Leif Delgass
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] MODKEY_CTRL problem

2007-09-27 Thread Leif Delgass
On 9/27/07, Robert Osfield [EMAIL PROTECTED] wrote:
 On 9/27/07, Panagiotis Papadakos [EMAIL PROTECTED] wrote:
  Hi Robert. Linux/KDE.

 OK, I've reproduced the problem in osgkeyboard; it looks like the alt-tab
 is preventing GraphicsWindowX11 from getting any events, and if it
 isn't getting any events then there is no way for it to know that
 anything has changed.  Pressing 'alt' on its own in the osgkeyboard
 window fixes the problem.

 As to a solution? I don't know.  One might need to dynamically query
 the modifier state rather than relying on events.  This would require
 substantial changes though, and is not something I have time to go
 chasing after right now.  Others are welcome to investigate.

 Robert.

Since I've been trying to learn more Xlib, I took a look at this and
it seems that OSG is using KeyPress/Release events on the modifier
keys themselves to set the GUIEventAdapter modifier mask rather than
using the modifier state contained in the X key, button, motion and
border crossing events (the 'state' member of those events).

I used xev to trace the events, and I think what happens is: Alt is
pressed, the EventQueue's modifier state is set, and then the window
loses keyboard focus when the window manager sees Alt-Tab.  If the Alt
key is released while switched away (or the user Alt-Tabs back to the
OSG window), the window never gets a KeyRelease event for Alt because
the event happens while it has lost keyboard focus.  The Alt state in
the EventQueue is then stuck on until it is pressed and released
again.

One solution would be to ignore KeyPress/Release events for modifier
keys and use the modifier state masks in the other keyboard and
pointer events.  This also honors the current modifier mapping
(MappingNotify would also need to be handled) as reported by xmodmap.
However, another option is to select for KeymapNotify events to get the
current modifier state when the keyboard/pointer focus comes back
(KeymapNotify follows EnterNotify and FocusIn events).  The
KeymapNotify event includes state for all keys (a 256 bit vector
indexed by keycode).  EnterNotify also includes the modifier state in
the event, but FocusIn does not.
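
A sketch of the first approach, deriving the osgGA modifier mask from
the event's own state field (only the three common modifiers are shown,
and Mod1 is assumed to map to Alt, as reported by xmodmap):

#include <X11/Xlib.h>
#include <osgGA/GUIEventAdapter>

// Sketch: build the GUIEventAdapter modifier mask from the X event 'state'
// member instead of tracking KeyPress/KeyRelease of the modifier keys.
unsigned int modifiersFromXState(unsigned int state)
{
    unsigned int mask = 0;
    if (state & ShiftMask)   mask |= osgGA::GUIEventAdapter::MODKEY_SHIFT;
    if (state & ControlMask) mask |= osgGA::GUIEventAdapter::MODKEY_CTRL;
    if (state & Mod1Mask)    mask |= osgGA::GUIEventAdapter::MODKEY_ALT;
    return mask;
}

// In the event loop the same helper works for key, button and crossing events:
//     case KeyPress:    mask = modifiersFromXState(event.xkey.state);      break;
//     case ButtonPress: mask = modifiersFromXState(event.xbutton.state);   break;
//     case EnterNotify: mask = modifiersFromXState(event.xcrossing.state); break;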

Leif Delgass
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] BUG?: mouse coordinate changes after window move

2007-09-21 Thread Leif Delgass
On 9/21/07, Robert Osfield [EMAIL PROTECTED] wrote:
 Hi Leif,

 Thanks for looking into this.  I'm open to your suggestion for a
 virtual fullscreen method.  Possibly this could be wrapped up into
 setWindowRectangleImplementation, which automatically detects that
 the dimensions are fullscreen size.  Any changes we make will need rolling
 out to Win32 and Carbon as well.

I just tried the approach of setting the fullscreen state in
setWindowRectangleImplementation,  and this works without changing the
GraphicsWindow base class.  The only issue is: it has the side effect
that changing the window size to the screen size through
setWindowRectangle makes the window fullscreen, even if the window
decorations have not been disabled through setWindowDecoration (this
also leaves the windowDecoration context trait out of sync unless I
set it to false in setWindowRectangle when going fullscreen).

That means, for example, that cycling through the windowed resolutions
in osgviewer will put the window into fullscreen when you
reach the screen resolution (and back to windowed when you reduce the
resolution).  I could check the decoration trait (in addition to
window size) to determine if the window should be fullscreen, but that
would mean you'd need to turn decorations on/off before setting the
window size (ViewerEventHandler uses this order, but it could be a
source of confusion).  I suppose I could also add the screen size
dimensions check and fullscreen state code in the
setWindowDecorationImplementation, so that disabling decorations with
a screen size window goes into fullscreen.  What are your thoughts on
this?

By the way, I found that GNOME/metacity does also honor the Motif
hints regarding decorations, but the fullscreen state is the only way
I've found to get top-level stacking of a window.  Leaving the
fullscreen state only adds window decorations back if they haven't
been disabled with the Motif hint.

 Would you be able to try out tweaking GraphicsWindow/GraphicsWindowX11
 to see if you can get the new window hints working?

 Robert.

 On 9/20/07, Leif Delgass [EMAIL PROTECTED] wrote:
  On 9/20/07, Robert Osfield [EMAIL PROTECTED] wrote:
   On 9/20/07, Anders Backman [EMAIL PROTECTED] wrote:
Problem 2 (windowed):
   
We are using gnome on Ubuntu, and
  osgviewer --window 100 100 500 500 cow.osg
   
works fine, but after 'f' is pressed two times (first into fullscreen,
and then back to windowed), we don't get a window.
So it seems that it is not possible to go back from fullscreen to windowed
mode.
  
   This is a bug in the window manager ignoring the request to add decorations
   back in.  It might be possible to code a workaround for working with
   such window managers, but alas I can't divine what the problem might
   be, as everything works just fine on all my linux boxes.
  
Robert.
 
  Hi Robert,
 
  I have been experimenting with window manager hints regarding a
  different but related issue.  I'm running GNOME under Fedora 5, and
  the fullscreen mode in osgviewer creates a window that stays under the
  top and bottom toolbar panels.   For me, switching to windowed mode
  properly adds the window decorations.  But what I discovered is that
  to get true fullscreen windows in GNOME, I need to use the Extended
  Window Manager Hints (EWMH):
 
  http://standards.freedesktop.org/wm-spec/wm-spec-latest.html
 
  Sending a ClientMessage event (or using XChangeProperty before mapping
  the window) to toggle the _NET_WM_STATE_FULLSCREEN Atom for the
  _NET_WM_STATE property works as expected -- the window is undecorated
  and appears on top of the toolbars in fullscreen state.  I was looking
  into the OSG implementation and it seems that fullscreen is
  implemented using a window resize to screen dimensions + the Motif
  window manager hints to remove decorations.  The EWMH spec is intended
  to replace Motif hints, and it has a slightly different philosophy.
  You specify the usage/type of window rather than directly controlling
  the use of decorations.
 
  I think what would be needed is a virtual fullscreen() method in
  GraphicsWindow that can be overridden in GraphicsWindowX11 using this
  implementation if the window manager supports it, and then the
  toggleFullscreen() method in ViewerEventHandler could call this
  function rather than setting the window rectangle and decoration (in
  the EWMH spec, the window manager is responsible for restoring the
  original window geometry and decoration when leaving fullscreen
  state).
 
  --
  Leif Delgass
  [EMAIL PROTECTED]

-- 
Leif Delgass
[EMAIL PROTECTED]
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] BUG?: mouse coordinate changes after window move

2007-09-20 Thread Leif Delgass
On 9/20/07, Robert Osfield [EMAIL PROTECTED] wrote:
 On 9/20/07, Anders Backman [EMAIL PROTECTED] wrote:
  Problem 2 (windowed):
 
  We are using gnome on Ubuntu, and
osgviewer --window 100 100 500 500 cow.osg
 
  works fine, but after 'f' is pressed two times (first into fullscreen, and
  then back to windowed), we don't get a window.
  So it seems that it is not possible to go back from fullscreen to windowed
  mode.

 This is a bug in the window manager ignoring the request to add decorations
 back in.  It might be possible to code a workaround for working with
 such window managers, but alas I can't divine what the problem might
 be, as everything works just fine on all my linux boxes.

  Robert.

Hi Robert,

I have been experimenting with window manager hints regarding a
different but related issue.  I'm running GNOME under Fedora 5, and
the fullscreen mode in osgviewer creates a window that stays under the
top and bottom toolbar panels.   For me, switching to windowed mode
properly adds the window decorations.  But what I discovered is that
to get true fullscreen windows in GNOME, I need to use the Extended
Window Manager Hints (EWMH):

http://standards.freedesktop.org/wm-spec/wm-spec-latest.html

Sending a ClientMessage event (or using XChangeProperty before mapping
the window) to toggle the _NET_WM_STATE_FULLSCREEN Atom for the
_NET_WM_STATE property works as expected -- the window is undecorated
and appears on top of the toolbars in fullscreen state.  I was looking
into the OSG implementation and it seems that fullscreen is
implemented using a window resize to screen dimensions + the Motif
window manager hints to remove decorations.  The EWMH spec is intended
to replace Motif hints, and it has a slightly different philosophy.
You specify the usage/type of window rather than directly controlling
the use of decorations.
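
A minimal sketch of the ClientMessage form for an already-mapped window
(standard EWMH usage; data.l[0] is 1 for _NET_WM_STATE_ADD and 0 for
_NET_WM_STATE_REMOVE):

#include <X11/Xlib.h>
#include <string.h>

// Sketch: ask an EWMH-compliant window manager to add or remove the
// _NET_WM_STATE_FULLSCREEN state on a mapped window.
void setFullscreen(Display* display, Window window, bool enable)
{
    Atom wmState    = XInternAtom(display, "_NET_WM_STATE", False);
    Atom fullscreen = XInternAtom(display, "_NET_WM_STATE_FULLSCREEN", False);

    XEvent ev;
    memset(&ev, 0, sizeof(ev));
    ev.xclient.type         = ClientMessage;
    ev.xclient.window       = window;
    ev.xclient.message_type = wmState;
    ev.xclient.format       = 32;
    ev.xclient.data.l[0]    = enable ? 1 : 0;
    ev.xclient.data.l[1]    = fullscreen;

    XSendEvent(display, DefaultRootWindow(display), False,
               SubstructureRedirectMask | SubstructureNotifyMask, &ev);
    XFlush(display);
}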

I think what would be needed is a virtual fullscreen() method in
GraphicsWindow that can be overridden in GraphicsWindowX11 using this
implementation if the window manager supports it, and then the
toggleFullscreen() method in ViewerEventHandler could call this
function rather than setting the window rectangle and decoration (in
the EWMH spec, the window manager is responsible for restoring the
original window geometry and decoration when leaving fullscreen
state).

-- 
Leif Delgass
[EMAIL PROTECTED]
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Getting the screen's refresh rate

2007-08-21 Thread Leif Delgass
On 8/21/07, Robert Osfield [EMAIL PROTECTED] wrote:
 On 8/21/07, Serge Lages [EMAIL PROTECTED] wrote:
  Is there any way to get the screen refresh rate ? In
  WindowingSystemInterface there is a setScreenRefreshRate method, but not a
  get version.

 We could certainly add a getScreenRefreshRate() method to
 WindowingSystemInterface, and it should be implementable in some
 manner.  I don't know offhand how to implement it on each platform, though,
 so members of the community will need to chip in with suggestions on
 how to implement it.

  If this is not possible, is there a way to change the screen resolution and
  keep the same refresh rate ? The problem I have is that when I change the
  fullscreen resolution for my app and come back to the initial desktop
  resolution when I close it, the refresh rate has changed (it sets the
  lowest value possible, in my case 60Hz instead of 85).

 I would have thought it would be possible; I didn't implement the
 screen res setting code myself and it's only currently implemented
 under Windows, so I will need to defer to others.

 Robert.
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

Under X, if the RANDR extension is supported, you can use the
Xrandr(3x) library to get and set the refresh rate for a screen (as
well as changing screen sizes/modes).  You ought to be able to query
the current screen config before making modifications, and restore the
original config later.
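
For example, a query-only sketch with the RandR 1.1 screen-config API
(no error or extension checking shown):

#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
#include <stdio.h>

// Sketch: print the current refresh rate of the default screen.
int main()
{
    Display* display = XOpenDisplay(0);
    if (!display) return 1;

    XRRScreenConfiguration* config =
        XRRGetScreenInfo(display, RootWindow(display, DefaultScreen(display)));
    if (config)
    {
        short rate = XRRConfigCurrentRate(config);
        printf("Current refresh rate: %d Hz\n", rate);
        XRRFreeScreenConfigInfo(config);
    }
    XCloseDisplay(display);
    return 0;
}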

The upcoming RandR 1.2 (which will be part of the X.org 7.3 release) will
also support display hotplug and run-time (re-)configuration of
multi-head setups (this also requires driver support).

-- 
Leif Delgass
[EMAIL PROTECTED]
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org