Hi all,

I'm the one looking into VSG VR (at least when I get the time, slow
progress overall). See https://github.com/geefr/vsg-vr-prototype for
progress so far.

Only using OpenVR for the moment, but this looks very relevant, with some
interesting concepts to consider if my stuff ends up with XR support as
well. Hopefully being Vulkan-based will avoid some of the platform-specific
issues.

Not much to add specifically for OpenXR yet, but I'm happy to test or debug
if blockers/quirks pop up (Windows or Linux, HTC Vive, nothing fancy).


On Tue, 15 Jun 2021 at 18:42, Mads Sandvei <sandv...@gmail.com> wrote:

> Hi
>
> I have some experience integrating OpenXR and OSG from my work on
> OpenMW-VR.
> I'll share some of what I've learned.
>
 > OSG already has a concept of stereo (which currently this code doesn't
> interact with)
> OSG's multithreaded rendering works better with its own stereo method than
> with the slave camera method, so I would recommend integrating with it
> instead.
> For example, if a user uses DrawThreadPerContext, the main thread can
> continue to the update phase of the next frame as soon as the last of the
> slave cameras has begun its draw traversal.
> With two cameras you get two separate traversals, and the main thread is
> held up until the first camera is done with its draw, costing
> performance.
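>
> As a rough illustration (a minimal sketch, not tested against your
> branch; the exact stereo mode depends on your setup), OSG's built-in
> stereo can be enabled through osg::DisplaySettings before realizing the
> viewer:
>
>     #include <osg/DisplaySettings>
>
>     // Minimal sketch: enable OSG's built-in stereo with a horizontal
>     // split layout, matching a single doublewide render target.
>     osg::DisplaySettings* ds = osg::DisplaySettings::instance().get();
>     ds->setStereo(true);
>     ds->setStereoMode(osg::DisplaySettings::HORIZONTAL_SPLIT);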
>
> In my work this meant using a single doublewide framebuffer instead of one
> framebuffer per eye. This is not a problem for OpenXR, as you can create a
> doublewide swapchain and use the subimage structure
> to control the regions rendered to each eye when composing layers. I
> haven't looked too closely at whether OSG supports attaching different
> framebuffers per eye, so that might be a moot point.
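>
> For reference, a minimal sketch of the subimage setup for a doublewide
> swapchain (doublewideSwapchain, eyeWidth and eyeHeight are placeholders
> for your own values):
>
>     // Each projection view points at one half of the doublewide image.
>     XrCompositionLayerProjectionView projViews[2];
>     for (uint32_t eye = 0; eye < 2; ++eye) {
>         projViews[eye].type = XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW;
>         projViews[eye].next = nullptr;
>         projViews[eye].subImage.swapchain = doublewideSwapchain;
>         projViews[eye].subImage.imageRect.offset = { (int32_t)(eye * eyeWidth), 0 };
>         projViews[eye].subImage.imageRect.extent = { (int32_t)eyeWidth, (int32_t)eyeHeight };
>         projViews[eye].subImage.imageArrayIndex = 0;
>         // projViews[eye].pose and .fov come from xrLocateViews().
>     }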
>
> It's worth noting that OSG is adding support for the GL_OVR_multiview2
> extension: https://groups.google.com/g/osg-users/c/__WujmMK5KE
> It would be worth integrating this in the future, as it would easily be
> the fastest stereo method, though I don't have any personal experience
> with it.
>
 > Performance is currently terrible. CPU usage and frame times don't seem
> high, so it's blocking excessively somewhere
> Comparing your code to mine, the only notable performance issues under
> your control are forcing single-threaded rendering and the choice of
> stereo method.
> The code that is blocking is the xrWaitFrame() method, which is by design.
> See what I wrote below about nausea. It is okay to delay xrWaitFrame until
> the first time you need the predictedDisplayTime, but not any longer.
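>
> For context, the frame loop the spec describes looks roughly like this
> (a sketch; error handling omitted, and layers/layerCount stand in for
> your composed layer list):
>
>     XrFrameWaitInfo waitInfo{ XR_TYPE_FRAME_WAIT_INFO };
>     XrFrameState frameState{ XR_TYPE_FRAME_STATE };
>     xrWaitFrame(session, &waitInfo, &frameState);  // blocks to pace the app
>
>     XrFrameBeginInfo beginInfo{ XR_TYPE_FRAME_BEGIN_INFO };
>     xrBeginFrame(session, &beginInfo);
>
>     // ... locate views at frameState.predictedDisplayTime and render ...
>
>     XrFrameEndInfo endInfo{ XR_TYPE_FRAME_END_INFO };
>     endInfo.displayTime = frameState.predictedDisplayTime; // prediction back
>     endInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
>     endInfo.layerCount = layerCount;
>     endInfo.layers = layers;
>     xrEndFrame(session, &endInfo);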
>
> Forcing single-threaded rendering is undoubtedly the biggest issue for
> performance.
> I see in your code a comment that the reason is so that no other thread
> can use the GL context.
> I have never touched OpenVR, so it's possible the openvrviewer has a good
> reason for this concern. With OpenXR I don't think there is any good
> reason for it.
>
> https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#XR_KHR_opengl_enable
> The OpenXR spec explicitly demands that the runtime does not use the
> context you give it except for a specific subset of functions you would
> only call from the rendering thread, which will have the context every
> time.
> Inspecting your code, the flow of OpenXR calls is very similar to my own,
> and I have no issues running the threading mode DrawThreadPerContext. But
> I cannot speak for the other threading modes.
>
> > due to SteamVR changing GL context somewhere (a known bug, worked around
> in the swapchain abstraction
> My understanding is that the OpenXR spec doesn't actually forbid this
> behaviour. It only limits when the runtime is allowed to use the context
> you gave it, not whether it binds/unbinds that or other contexts.
> This doesn't sound like behaviour anyone would want, though. Maybe an
> oversight in the OpenXR standard?
>
> The runtime cost of verifying the OpenGL context after each of the
> relevant functions is low, since you're only doing it a handful of times
> per frame,
> so it might be a good idea to just wrap all of the mentioned methods in
> code that checks and restores the OpenGL context.
> Of course, the best outcome would be if all vendors adopted reasonable
> behaviour.
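>
> Something like this RAII guard is what I have in mind (a GLX-only sketch,
> assuming X11; a real version would also need a WGL variant):
>
>     #include <GL/glx.h>
>
>     // Saves the current GLX context on entry and restores it on exit if
>     // a runtime call switched it behind our back.
>     class GlContextGuard {
>     public:
>         GlContextGuard()
>             : display(glXGetCurrentDisplay()),
>               drawable(glXGetCurrentDrawable()),
>               context(glXGetCurrentContext()) {}
>         ~GlContextGuard() {
>             if (glXGetCurrentContext() != context)
>                 glXMakeCurrent(display, drawable, context);
>         }
>     private:
>         Display*    display;
>         GLXDrawable drawable;
>         GLXContext  context;
>     };
>
>     // Usage: scope each suspect OpenXR call.
>     {
>         GlContextGuard guard;
>         xrAcquireSwapchainImage(swapchain, &acquireInfo, &imageIndex);
>     }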
>
 > Advancement is ideally driven by the expected display times of
> individual frames, i.e. the next frame should show the scene at exactly
> the moment when it is expected to be displayed to the user to avoid jitter
> and nausea. This may well be more of an app level concern (it certainly is
> for flightgear, which AFAICT currently uses fixed 120Hz simulation steps),
> but a general VR-specific viewer mainloop is probably needed in any case.
> This is the purpose of the xr[Wait,Begin,End]Frame loop, and why you're
> passing the predictedDisplayTime returned by xrWaitFrame() on to
> xrEndFrame().
>
> https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#frame-synchronization
> In short: you don't have to care; OpenXR is already doing this for you.
>
> Perhaps this is the issue for the openvrviewer: OpenVR doesn't have this
> and so isn't automatically synchronized?
> My interpretation of xrBeginFrame is that it exists precisely so that the
> next frame never begins rendering operations before the runtime is done
> compositing the latest xrEndFrame.
>
> The only nausea element you have to consider is when to locate an XrSpace.
> When locating an XrSpace, what you get is a predicted pose for the time
> you give it (usually the predictedDisplayTime you got from xrWaitFrame()).
> The closer you get to the predicted time, the better the prediction will
> be, so it is encouraged to predict as close to draw as possible.
> By using the update slave callback, I believe you are accomplishing this
> as well as can be.
> This is also the motivation for the xrWaitFrame call: it delays your
> processing so that your poses will be predicted closer to the time they
> will actually be displayed.
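>
> Concretely, locating the eye views looks something like this (a sketch;
> appSpace is whatever reference space you created):
>
>     XrViewLocateInfo locateInfo{ XR_TYPE_VIEW_LOCATE_INFO };
>     locateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
>     locateInfo.displayTime = frameState.predictedDisplayTime; // from xrWaitFrame()
>     locateInfo.space = appSpace;
>
>     XrViewState viewState{ XR_TYPE_VIEW_STATE };
>     XrView views[2] = { { XR_TYPE_VIEW }, { XR_TYPE_VIEW } };
>     uint32_t viewCount = 0;
>     xrLocateViews(session, &locateInfo, &viewState, 2, &viewCount, views);
>     // views[i].pose and views[i].fov now hold the predicted per-eye values.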
>
> For the same reason (predictions change in quality over time), it is
> encouraged to make all predictions at the same time rather than spread
> out over the frame.
> Action spaces (i.e. motion controllers) have their pose data updated only
> when you sync actions, so sync these immediately before locating. I deal
> with this by putting all pose actions in their own action set so they
> don't get lumped together with other inputs.
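>
> In code, that ordering is roughly the following (a sketch; poseActionSet
> and handSpace are placeholders for your own action set and controller
> action space):
>
>     // Sync only the pose action set right before locating controller spaces.
>     XrActiveActionSet active{ poseActionSet, XR_NULL_PATH };
>     XrActionsSyncInfo syncInfo{ XR_TYPE_ACTIONS_SYNC_INFO };
>     syncInfo.countActiveActionSets = 1;
>     syncInfo.activeActionSets = &active;
>     xrSyncActions(session, &syncInfo);
>
>     XrSpaceLocation location{ XR_TYPE_SPACE_LOCATION };
>     xrLocateSpace(handSpace, appSpace, frameState.predictedDisplayTime,
>                   &location);
>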
>  > The OpenXR session is created using OpenGL graphics binding info
> provided via GraphicsWindow::getXrGraphicsBinding() which is only
> implemented for X11
> Just a heads up: on Windows you will find that some OpenXR runtimes, such
> as WMR, do not support OpenGL. Not surprising, it being Microsoft's own
> runtime.
> I worked around this by using the WGL extension WGL_NV_DX_interop2 to
> share DirectX swapchains with OpenGL. I believe this would be the only way
> to support such runtimes in OSG.
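>
> The gist of the interop, as a heavily simplified sketch (d3d11Device and
> d3dTexture stand in for objects obtained via the runtime's D3D11 graphics
> binding):
>
>     // Expose a D3D11 texture to GL via WGL_NV_DX_interop2.
>     HANDLE interopDevice = wglDXOpenDeviceNV(d3d11Device);
>
>     GLuint glTex;
>     glGenTextures(1, &glTex);
>     HANDLE interopTex = wglDXRegisterObjectNV(interopDevice, d3dTexture,
>                                               glTex, GL_TEXTURE_2D,
>                                               WGL_ACCESS_READ_WRITE_NV);
>
>     wglDXLockObjectsNV(interopDevice, 1, &interopTex);   // GL may render now
>     // ... render the eye view into glTex via an FBO ...
>     wglDXUnlockObjectsNV(interopDevice, 1, &interopTex); // back to D3D/OpenXR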
>
> Hope this is of some help!
> Mads
>
> On Friday, June 11, 2021 at 10:37:18 PM UTC+2 James Hogan wrote:
>
>> Hi
>>
>> On Monday, 24 June 2019 at 23:07:53 UTC+1 davidg...@gmail.com wrote:
>>
>>> Greetings!
>>>
>>> I guess that I'm going to gripe on this subject like I did a year ago!
>>> I know that OpenXR is at least in open beta, and I was wondering what
>>> progress anyone has made incorporating it into OSG.
>>>
>>> While I was at GDC I did see Khronos make some progress in this area,
>>> and I even got to see someone demo a VR display using an HTC Vive. I
>>> challenged the group that worked on that and never heard from them again.
>>>
>>> I think one of the holdbacks was that the interactive controls were not
>>> settled yet, but from my perspective, they could have worked on the
>>> visuals.
>>>
>>> I know that if I had the time and resources I would hack this out, but
>>> one of the sad drawbacks of having a job is not having the time. It must
>>> be that most people still see this technology as a flash in the pan, but
>>> I think it's gaining traction.
>>>
>>
>> I had a play with this over the last week or so (in the hopes of
>> eventually getting Flightgear working in VR with OpenXR, since it seems to
>> be the future), and have managed to get something *extremely* minimal and
>> half-broken going, enough to run some of the osg demos on Linux with
>> SteamVR's OpenXR runtime and an HTC Vive (but not flightgear yet). I've
>> pushed a WIP version (see below), in the spirit of releasing early and
>> often, in case anybody here is interested in providing general feedback or
>> helping. I'll be able to get back to it in a few weeks, when I'll try to
>> get more of it working & cleaned up:
>> https://github.com/amalon/OpenSceneGraph (openxr-devel branch)
>>
>> https://github.com/amalon/OpenSceneGraph/commit/71f80495be5cf7c4d286a52b345fa994a09e3bb7
>>
>> This is my first dive into OSG (and OpenXR), so I'm definitely open to
>> suggestions for improvements or the best way to integrate it (or whether
>> it should even be integrated into OSG rather than provided as an external
>> plugin or viewer). Currently I think it should get built into OSG, since
>> OSG already has a concept of stereo (which currently this code doesn't
>> interact with), and this approach allows some rudimentary VR support even
>> without the app explicitly supporting it (though clearly app support is
>> preferable, especially for menus and interaction), but I am not very
>> familiar with OSG.
>>
>> Braindump below for anyone interested in the details.
>>
>> Cheers
>> James
>>
>> It is added as an osgViewer config OpenXRDisplay, which can be applied
>> automatically to the View by osgViewer::Viewer using the environment
>> variables OSG_VR=1 and OSG_VR_UNITS_PER_METER=whatever. Some C++
>> abstractions of OpenXR are in src/osgViewer/OpenXR, which are used by
>> src/osgViewer/config/OpenXRDisplay.cpp to set up the OpenXR instance,
>> session, swapchains, and slave cameras for each OpenXR view (e.g. each eye
>> for most HMDs, but it could be one display for handhelds, or more for
>> other setups), and various callbacks for updating them and draw setup /
>> swapping.
>> OpenXR provides multiple OpenGL textures to write to for each swapchain,
>> and we create a swapchain for each view, and an OpenGL framebuffer object
>> for each image texture in each swapchain (I assume it's faster not to
>> rebind the FBO attachments). Callbacks switch between the framebuffer
>> objects (like in osgopenvrviewer), and OpenXR frames are started
>> automatically before the first render (or on the first slave camera
>> update) and ended in the swap callback. The OpenXR session is created
>> using OpenGL graphics binding info provided via
>> GraphicsWindow::getXrGraphicsBinding(), which is only implemented for X11.
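>>
>> The per-image FBO setup is roughly the following (a sketch of what the
>> abstraction does, not the exact code):
>>
>>     // Enumerate the GL textures backing the swapchain, then build one
>>     // FBO per texture so attachments never need rebinding at draw time.
>>     uint32_t imageCount = 0;
>>     xrEnumerateSwapchainImages(swapchain, 0, &imageCount, nullptr);
>>     std::vector<XrSwapchainImageOpenGLKHR> images(
>>         imageCount, { XR_TYPE_SWAPCHAIN_IMAGE_OPENGL_KHR });
>>     xrEnumerateSwapchainImages(
>>         swapchain, imageCount, &imageCount,
>>         reinterpret_cast<XrSwapchainImageBaseHeader*>(images.data()));
>>
>>     std::vector<GLuint> fbos(imageCount);
>>     glGenFramebuffers(imageCount, fbos.data());
>>     for (uint32_t i = 0; i < imageCount; ++i) {
>>         glBindFramebuffer(GL_FRAMEBUFFER, fbos[i]);
>>         glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
>>                                GL_TEXTURE_2D, images[i].image, 0);
>>     }
>>     glBindFramebuffer(GL_FRAMEBUFFER, 0);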
>>
>> Current issues:
>> * I haven't mirrored the image to the window yet (there's probably a nice
>> OSG way to blit the view textures to the main camera? see the blit sketch
>> after this list). It could perhaps integrate with the DisplaySettings
>> stuff somehow to decide what should be mirrored.
>> * The application name (used for creating an XR instance, which is shown
>> on the HMD when the app is starting) isn't discovered automatically and
>> is still set to "osgplanets". This can probably be discovered in an OSG
>> way from argv[0] with the arguments stuff... haven't quite figured out
>> how yet.
>> * Performance is currently terrible. CPU usage and frame times don't seem
>> high, so it's blocking excessively somewhere. I briefly tried modding the
>> ViewerBase loop to avoid sleeping there, but haven't got to the bottom of
>> it yet. OpenXR does complain about validation errors on EndFrame, but it's
>> unclear why and whether that's related, and it doesn't stop the images
>> being displayed in the HMD.
>> * Synchronisation isn't handled between threads, as I don't yet have a
>> good enough grasp of how OSG uses threads to figure out exactly what's
>> needed. Currently threaded rendering is disabled (like in osgopenvrviewer).
>> * flightgear: currently it appears to fail due to SteamVR changing GL
>> context somewhere (a known bug, worked around in the swapchain
>> abstraction), resulting in OSG framebuffer objects being unable to be
>> created. I haven't had much time yet to figure it out. In any case it'll
>> need a fair bit more custom setup eventually.
>> * There are a couple of places where I expect it won't build without
>> OpenXR. I fully intend to fix that.
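>>
>> The mirror blit mentioned above could be as simple as this (a sketch
>> assuming a plain copy of one eye's FBO to the window's default
>> framebuffer; eyeFbo and the sizes are placeholders):
>>
>>     // Copy the left eye's colour buffer to the window for mirroring.
>>     glBindFramebuffer(GL_READ_FRAMEBUFFER, eyeFbo);
>>     glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
>>     glBlitFramebuffer(0, 0, eyeWidth, eyeHeight,
>>                       0, 0, windowWidth, windowHeight,
>>                       GL_COLOR_BUFFER_BIT, GL_LINEAR);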
>>
>> Other thoughts about integration into OSG:
>> * The custom projection matrix calculation based on
>> fov{left,right,top,bottom} could be moved into osg::Matrix to join the
>> other projection matrix functions there (see the sketch after this list).
>> * Maybe controller inputs could get exposed to OSG applications via
>> osg::Device events, though they're abstracted by OpenXR into app specific
>> actions.
>> * I wonder if OpenXR composition layers (which with extensions can be
>> composited as cubemaps, quads, part cylinders (like a curved TV), and
>> part spheres, as well as the usual eye projections) should be represented
>> as some weird windowing system API with GraphicsWindows etc. (though
>> still tied to the underlying window manager one), to allow for easier
>> rendering of 2D interfaces in VR...
>> * Advancement is ideally driven by the expected display times of
>> individual frames, i.e. the next frame should show the scene at exactly
>> the moment when it is expected to be displayed to the user to avoid jitter
>> and nausea. This may well be more of an app level concern (it certainly is
>> for flightgear, which AFAICT currently uses fixed 120Hz simulation steps),
>> but a general VR-specific viewer mainloop is probably needed in any case.
>> * Choosing the pixel format from the list provided by the OpenXR
>> runtime... I haven't looked into how OSG picks for non-VR yet.
>> * Each frame, the environment blend mode can be chosen from a set of
>> supported ones, and the frame may be rendered differently depending on it,
>> i.e. opaque (VR), alpha blended with camera (AR), or additive blending with
>> camera/background (for some AR displays). Need to figure out where that
>> should be decided, and whether to expose that in some rendering state
>> somewhere.
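>>
>> On the projection matrix point above, a minimal sketch of such a helper
>> (hypothetical name, built on osg::Matrixd::frustum):
>>
>>     #include <osg/Matrixd>
>>     #include <cmath>
>>
>>     // Asymmetric frustum from OpenXR's per-eye field-of-view angles
>>     // (XrFovf, all angles in radians).
>>     osg::Matrixd projectionFromFov(const XrFovf& fov, double zNear, double zFar)
>>     {
>>         const double left   = std::tan(fov.angleLeft)  * zNear;
>>         const double right  = std::tan(fov.angleRight) * zNear;
>>         const double bottom = std::tan(fov.angleDown)  * zNear;
>>         const double top    = std::tan(fov.angleUp)    * zNear;
>>         return osg::Matrixd::frustum(left, right, bottom, top, zNear, zFar);
>>     }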
>>
>> Other misc development todo:
>> * use the depth info extension to provide the runtime with depth info for
>> better reprojection in the event of missed deadlines
>> * use the visibility mask extension to reduce rendering to the part of
>> the screen visible to the eyes
>> * haven't looked into multisampling properly yet
>> * internals of OpenXRDisplay.cpp need splitting out into multiple files
>>


-- 
----
Gareth Francis
www.gfrancisdev.co.uk
