Hi,
I have been working on a minecraft world viewer for a while, and most of my
work has been done in Ogre3D, as this library is what I am most familiar with.
However, it is pretty clear that Ogre just can't go fast enough for my
specific use case - it sets a great deal of redundant state per batch.
I have a similar issue - I want to display animated cube maps, which will be
prerendered in blender, stored as videos and provided to OSG by ffmpeg.
My current plan is to load a single buffer of data for each cubemap, in a 1x6
'stack', and create my six osg::Image objects using a simple linear offset into
that buffer.
This bit me too. I create a number of Qt threads to do things (updating
terrain, running the physics simulation), and OSG was forcing them all to run
on CPU 0, leading to really terrible performance: it sets CPU affinity on the
main thread, and all threads spawned from it inherit the CPU mask.
This may be due to the fact that Fedora now runs gdm on its own screen.
e.g. it used to be that you could more or less assume that setting DISPLAY=:0
and running an X11 application would show the app on the machine's 'local'
display. With Fedora 22+ this is now DISPLAY=:1.
It looks to me, after glancing at the source, that the logic is: if there is
more than one CPU, and the threadingModel is set to SingleThreaded (the
default), then set CPU affinity (on what is very likely the main thread of the
application) to 0.
This really seems to be a poor idea - it might help a specific OSG app
configuration, but it silently penalizes any application that does its own
threading.
Hi Robert,
I want to unset the CPU affinity that OSG hardcodes. Currently I achieve this
under Linux with sched_setaffinity() calls. But this requires platform-specific
defines in my code, and it just seems ugly.
I have no idea why you would consider applications doing their own threading to
be unusual.
Hi Robert,
It's not like I can't solve my own problem (I already worked around it by
overriding all of OSG's CPU affinity settings with platform-specific code), but
as demonstrated by the original poster in this thread, I am not the only one
who has had to spend a lot of time wondering what is going on.
Pretty sure osgShadow doesn't do anything with the receiveShadows mask bit, at
least when using any of the shadow-map algorithms.
Some reference to this:
http://osg-users.openscenegraph.narkive.com/eMJHm8Gm/osgshadow-question
--
Read this topic online here:
http://forum.opens
It's because OSG's default ShadowMap code isn't very good for large scenes: the
shadow camera frustum is sized to fit the entire scene, so an individual
shadow-map texel may cover a large number of rendered fragments, making the
shadows very pixellated, and 'shadow acne', caused by a similar precision
problem, becomes pronounced.
When you set up the technique on your ShadowedScene, you can use any of the
various techniques from osgShadow.
on this page:
http://trac.openscenegraph.org/projects/osg//wiki/Support/ProgrammingGuide/osgShadow
There is example code for setting up shadows and, as an example, for using soft
shadows.
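A minimal version of the setup that page describes might look like the sketch
below (class names are from osgShadow as I remember them - check them against
your OSG version, and note that viewer.run() opens a window):

```cpp
#include <osg/ref_ptr>
#include <osgShadow/ShadowedScene>
#include <osgShadow/SoftShadowMap>
#include <osgViewer/Viewer>

int main() {
    osg::ref_ptr<osgShadow::ShadowedScene> shadowedScene =
        new osgShadow::ShadowedScene;

    // Any technique from osgShadow can be swapped in here
    // (ShadowMap, SoftShadowMap, ...).
    osg::ref_ptr<osgShadow::SoftShadowMap> technique =
        new osgShadow::SoftShadowMap;
    technique->setTextureSize(osg::Vec2s(2048, 2048)); // bigger map, less pixelation
    shadowedScene->setShadowTechnique(technique.get());

    // Add your models as children of the ShadowedScene, not beside it:
    // shadowedScene->addChild(loadedModel.get());

    osgViewer::Viewer viewer;
    viewer.setSceneData(shadowedScene.get());
    return viewer.run();
}
```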
This is, bizarrely, by design.
In single-threaded mode, OpenSceneGraph silently sets CPU affinity to a single
core by default. Personally I think this is incredibly obtrusive on the
programmer, and the reasons for this being default behaviour are terrible, but
it is what it is.
This is what I got as a response:
> Affinity is set by default because it will provide the best
> performance for the majority of OSG applications. This might be a
> "terrible" reason for you, but OSG development is not motivated by
> focusing on just one class of users' needs or preferences; the
> default settings we try to choose are the ones that suit most users.
OK,
Apologies if I caused offense.
Goodbye.
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68731#68731
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/