Re: [osg-users] Reverse engineer the VPB tile generation process

2016-11-30 Thread Trajce Nikolov NICK
Never mind ... I did it with a Quat rotating from the tile's geocentric up
vector to the local Z axis. Please ignore

On Wed, Nov 30, 2016 at 6:33 PM, Trajce Nikolov NICK <
trajce.nikolov.n...@gmail.com> wrote:

> Hi Robert, Community,
>
> I spent some time today learning about the final output of the VPB tools
> and the terrain techniques (I am OK with the basic one for the moment, the
> GeometryTechnique). The tiles are generated in quadtree fashion, and the
> geometry resides in Geodes, which is obvious. This is for geocentric
> databases.
>
> My goal is to bring the tiles back to "source space". I can get the
> node path of every single Geode representing a tile, get the localToWorld
> matrix, and apply its inverse to the tile vertices - to bring the tile back
> to (0,0,0) from its position/orientation on the ellipsoid.
>
> When I do this, the tiles do come back to (0,0,0), but they stay oriented
> as on the ellipsoid, so I am after a hint: where is this geocentric
> orientation applied, and how can I "revert" it?
>
> Thanks for any hint
>
> Cheers!
> Nick
>
>
>
> --
> trajce nikolov nick
>



-- 
trajce nikolov nick
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Detecting when a texture is too big for graphics memory

2016-11-30 Thread werner.modenb...@texion.eu
Hi Allister,
I have a comparable situation in my software: displaying huge textures.
I got help from the list recently, and now it works like a charm.
I split the texture into small tiles and assign them to textured quads. The quads
are arranged to form a huge plane. The trick is arranging them in a PagedLOD
structure as a quadtree: four tiles are always combined into one tile of the same
size, but lower resolution, at the next distance level. The tiles are loaded
dynamically at whatever distance they appear. This way the total texture load is
not so high and is manageable by the hardware.
Send me a private mail if you need more details. 
Werner

On 30. November 2016 15:42:17 MEZ, Alistair Baxter  wrote:
>Our application is using osgVolume to render 3D texture data that is
>provided by users. This means the data can be very large, and can
>exceed the amount of available graphics memory on some machines.
>
>I was looking for a way to detect whether a texture has failed to load
>in this way, so that we can alert the user, or react to the problem in
>some other way. But I'm having trouble finding anything in code that
>will help.
>OpenSceneGraph responds with   "Warning: detected OpenGL error 'out of
>memory' at after RenderBin::draw(..)"   but I'm not seeing anything in
>the scene graph data that can indicate that the texture in question is
>at fault. The TextureObject representing the 3D texture, for example
>declares that it is allocated, and reports the correct size and a
>positive id.
>
>Is there any way to tell whether a texture is too big for graphics
>memory, other than by just knowing how much there is in total  (a
>feature that only seems to work for me on NVidia hardware anyway) and
>checking whether the known size of your texture will fit?
>
>
>
>
>
>


[osg-users] Reverse engineer the VPB tile generation process

2016-11-30 Thread Trajce Nikolov NICK
Hi Robert, Community,

I spent some time today learning about the final output of the VPB tools
and the terrain techniques (I am OK with the basic one for the moment, the
GeometryTechnique). The tiles are generated in quadtree fashion, and the
geometry resides in Geodes, which is obvious. This is for geocentric
databases.

My goal is to bring the tiles back to "source space". I can get the
node path of every single Geode representing a tile, get the localToWorld
matrix, and apply its inverse to the tile vertices - to bring the tile back
to (0,0,0) from its position/orientation on the ellipsoid.

When I do this, the tiles do come back to (0,0,0), but they stay oriented
as on the ellipsoid, so I am after a hint: where is this geocentric
orientation applied, and how can I "revert" it?

Thanks for any hint

Cheers!
Nick



-- 
trajce nikolov nick


Re: [osg-users] Detecting when a texture is too big for graphics memory

2016-11-30 Thread Robert Osfield
Hi Alistair,

FYI, OpenGL has a feature called proxy textures, where you can do a trial
texture allocation and then query whether a real texture with the same
parameters would succeed. This feature isn't used by the OSG, so you'd have
to roll your own OpenGL code to do the query.  It's probably over 15 years
since I looked at this particular annex of OpenGL, so there is a chance that
modern drivers don't support it.  It might be useful, though, if it's still
supported.
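For reference, the proxy query looks roughly like this (untested sketch; it needs a current OpenGL context, glTexImage3D may have to be fetched through the platform's extension mechanism, and proxy support varies by driver):

```cpp
// Sketch only: assumes a valid OpenGL context is current and that
// glTexImage3D is available (GL 1.2+; may need GL/glext.h or a loader).
#include <GL/gl.h>

// Returns true if a GL_RGBA8 3D texture of the given size would likely be
// allocatable. With the proxy target the driver performs the parameter and
// size checks of glTexImage3D but allocates nothing; on failure it records
// the proxy level's width as 0.
bool textureWouldFit(GLsizei w, GLsizei h, GLsizei d)
{
    glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_RGBA8, w, h, d, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    GLint gotWidth = 0;
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0,
                             GL_TEXTURE_WIDTH, &gotWidth);
    return gotWidth != 0;
}
```

Caveat: the proxy check validates against implementation limits, not against currently free memory, so a texture can pass the proxy test and still fail to allocate at draw time.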

Robert.


Re: [osg-users] Detecting when a texture is too big for graphics memory

2016-11-30 Thread Robert Osfield
Hi Alistair,

On 30 November 2016 at 15:37, Alistair Baxter  wrote:
> In the particular test case I'm looking at, it's about 5 gigs of texture and 
> 2 gigs of video RAM. We have a manual mechanism for downsampling, but then we 
> can end up in situations where a processed file that looks fine on a machine 
> with 6 or 8 gigs of video ram won't load on one with far less.
>
> Obviously, this is an absurdly profligate use of video memory, but if you've 
> got the data, you might as well use it.

This is where paging comes in handy.  We have ready-made solutions for
2D imagery and terrain, but alas not one for volume rendering yet.

> What do you mean by "have a graphics operation/callback that forces a texture 
> compile" ? Is that sort of thing covered in the osg examples?

There are a number of ways to go about it: one is to add a draw callback to
the view's camera, or use a draw callback on a drawable in the scene
graph.  Another way would be to use a RealizeOperation, as the
osgvolume example does to check for the maximum supported texture size.

Robert.


Re: [osg-users] Detecting when a texture is too big for graphics memory

2016-11-30 Thread Alistair Baxter
In the particular test case I'm looking at, it's about 5 gigs of texture and 2 
gigs of video RAM. We have a manual mechanism for downsampling, but then we can 
end up in situations where a processed file that looks fine on a machine with 6 
or 8 gigs of video ram won't load on one with far less.

Obviously, this is an absurdly profligate use of video memory, but if you've 
got the data, you might as well use it.

What do you mean by "have a graphics operation/callback that forces a texture 
compile" ? Is that sort of thing covered in the osg examples?

-Original Message-
From: osg-users [mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf 
Of Robert Osfield
Sent: 30 November 2016 15:12
To: OpenSceneGraph Users 
Subject: Re: [osg-users] Detecting when a texture is too big for graphics memory

Hi Alistair,

There isn't a mechanism built into the OSG that automatically checks for and 
handles texture objects failing to allocate due to out-of-memory conditions; 
thankfully this isn't a common issue, so it doesn't trip up most users.  The 
best way to catch this case would probably be to have a graphics 
operation/callback that forces a texture compile for the textures in question 
and then immediately checks the GL errors.

What size texture were you seeing issues with?  What is the GPU memory 
available?

Robert.

On 30 November 2016 at 14:42, Alistair Baxter  wrote:
> Our application is using osgVolume to render 3D texture data that is 
> provided by users. This means the data can be very large, and can 
> exceed the amount of available graphics memory on some machines.
>
>
>
> I was looking for a way to detect whether a texture has failed to load 
> in this way, so that we can alert the user, or react to the problem in 
> some other way. But I’m having trouble finding anything in code that will 
> help.
>
> OpenSceneGraph responds with   “Warning: detected OpenGL error 'out of
> memory' at after RenderBin::draw(..)”   but I’m not seeing anything in the
> scene graph data that can indicate that the texture in question is at fault.
> The TextureObject representing the 3D texture, for example declares 
> that it is allocated, and reports the correct size and a positive id.
>
>
>
> Is there any way to tell whether a texture is too big for graphics 
> memory, other than by just knowing how much there is in total  (a 
> feature that only seems to work for me on NVidia hardware anyway) and 
> checking whether the known size of your texture will fit?
>
>
>
>
>
>


[osg-users] Detecting when a texture is too big for graphics memory

2016-11-30 Thread Alistair Baxter
Our application is using osgVolume to render 3D texture data that is provided 
by users. This means the data can be very large, and can exceed the amount of 
available graphics memory on some machines.

I was looking for a way to detect whether a texture has failed to load in this 
way, so that we can alert the user, or react to the problem in some other way. 
But I'm having trouble finding anything in code that will help.
OpenSceneGraph responds with "Warning: detected OpenGL error 'out of memory' 
at after RenderBin::draw(..)", but I'm not seeing anything in the scene graph 
data that can indicate that the texture in question is at fault. The 
TextureObject representing the 3D texture, for example, declares that it is 
allocated and reports the correct size and a positive id.

Is there any way to tell whether a texture is too big for graphics memory, 
other than by just knowing how much there is in total  (a feature that only 
seems to work for me on NVidia hardware anyway) and checking whether the known 
size of your texture will fit?




Re: [osg-users] How to check if a PagedLOD with the same filename already exists in the DatabasePager

2016-11-30 Thread Simone Rapposelli
Hi Rafa,

thank you for your advice, but when trying to replace PagedLOD with ProxyNode it 
seems that readNode gets called even for nodes that are apparently very far away.
I don't know why this happens - maybe there is some mechanism I am 
misunderstanding or missing - but I haven't found any examples with ProxyNodes 
and I can't go further for time reasons.
At any rate, at the moment the current solution with PagedLODs is more than 
acceptable for me.
Thank you all for your precious support!


Rafa Gaitan wrote:
> Hi Simone,
> 
> If you just want to use the PagedLOD as a kind of delayed system for loading 
> nodes (but not unloading them from memory), I suggest you use the 
> osg::ProxyNode instead: it defers the loading to the DatabasePager, but once 
> loaded the node is not unloaded anymore. If your database is well balanced, 
> the CullVisitor will ensure a good framerate when the node is not in the 
> frustum.
> 
> 
> Just my two cents,
> 
> 
> Rafa.
> 
> 
> 
> 
> On Tue, 29 Nov 2016 at 15:43, Simone Rapposelli wrote:
> 
> > Hi Robert, by increasing TargetMaximumNumberOfPageLOD the problem of
> > having to reload the same PagedLOD disappears, so it works!! Thank you!
> > 
> > robertosfield wrote:
> > > Hi Simone,
> > > 
> > > On 29 November 2016 at 12:37, Simone Rapposelli wrote:
> > > > thank you for your fast reply. My problem is that
> > > > osgDB::ReaderWriter::ReadResult readNode(const std::string&,
> > > > const osgDB::ReaderWriter::Options* options) gets called even if
> > > > a PagedLOD with the same fileName has been previously loaded: for
> > > > example, this happens if I move to any position in the viewer and
> > > > then come back. Thus, inside this function I need to check whether
> > > > any of the PagedLODs currently loaded in the DatabasePager has the
> > > > same fileName as the passed argument: in this case I could avoid
> > > > reloading data already in memory.
> > > 
> > > The PagedLOD/DatabasePager paging scheme is designed to expire and
> > > reload subgraphs; it is *crucial* to load balancing.  If you cached
> > > all loaded subgraphs your memory would rapidly be overwhelmed and
> > > your system would grind to a halt.  The very scheme you are trying
> > > to defeat is one of the best assets of the OSG; you *absolutely* do
> > > not want to be breaking this mechanism.
> > > 
> > > Now, if the defaults the paging scheme uses for load balancing are
> > > too conservative w.r.t. the number of PagedLOD it will aim to keep
> > > in memory at one time, you can adjust it to be higher simply by
> > > setting the TargetMaximumNumberOfPageLOD parameter.  From the
> > > DatabasePager header you'll see:
> > > 
> > > /** Set the target maximum number of PagedLOD to maintain in memory.
> > >   * Note, if more than the target number are required for rendering
> > >   * of a frame then these active PagedLOD are exempt from being
> > >   * expired. But once the number of active drops back below the
> > >   * target the inactive PagedLOD will be trimmed back to the target
> > >   * number.*/
> > > void setTargetMaximumNumberOfPageLOD(unsigned int target) {
> > > _targetMaximumNumberOfPageLOD = target; }
> > > 
> > > You can also set the default value using the env var
> > > OSG_MAX_PAGEDLOD, i.e. under bash:
> > > 
> > > export OSG_MAX_PAGEDLOD=2000
> > > osgviewer mypageddatabase.osgb
> > > 
> > > You can get a listing of the env vars supported by doing:
> > > 
> > > osgviewer --help-env
> > > 
> > > Robert.
> 
> 
>  --
> Post generated by Mail2Forum


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=69553#69553







Re: [osg-users] clamp mouse cursor / position

2016-11-30 Thread Sebastian Schmidt
Of course I ran into the problem that the internal mouse position stops at the 
screen borders, so the mouse delta is zero for faster mouse dragging.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=69551#69551







Re: [osg-users] IntersectionVisitor and USE_EYE_POINT_FOR_LOD_LEVEL_SELECTION

2016-11-30 Thread Robert Osfield
Hi Nick,

The IntersectionVisitor generally picks the highest available level
of detail to make sure it gets the most accurate results.  Once I have
some time available I will have a look at your change and consider the
issue.

Robert.

On 30 November 2016 at 00:09, Trajce Nikolov NICK
 wrote:
> Hi again Robert,
>
> attached are the mods to make it work with
> USE_EYE_POINT_FOR_LOD_LEVEL_SELECTION too when traversing the PagedLODs.
> On my end it works. I would make a pull request, but I am stuck on 3.5.5 and
> it is only two additional lines of code. Please review...
>
> Thanks and cheers!
> Nick
>
> On Tue, Nov 29, 2016 at 11:34 PM, Trajce Nikolov NICK
>  wrote:
>>
>> Hi Robert,
>>
>> USE_EYE_POINT_FOR_LOD_LEVEL_SELECTION seems to have no effect, whether
>> it is selected or not. I traced the code, and as written now
>> IntersectionVisitor::apply(osg::PagedLOD& plod) always works with the
>> highest res.
>>
>> // Perform an intersection test only on children that display
>> // at the maximum resolution.
>>
>> I need this functionality to be able to pick the level of detail based
>> on the distance from the eyepoint. I see there is an override for that,
>> but it is never used:
>>
>> float IntersectionVisitor::getDistanceToEyePoint(const osg::Vec3& pos,
>>                                                  bool /*withLODScale*/) const
>> {
>>     if (_lodSelectionMode == USE_EYE_POINT_FOR_LOD_LEVEL_SELECTION)
>>     {
>>         return (pos - getEyePoint()).length();
>>     }
>>     else
>>     {
>>         return 0.0f;
>>     }
>> }
>>
>> If you are very busy, give me some hints and I can try to implement this
>> functionality myself.
>>
>> Thanks a lot!
>>
>> --
>> trajce nikolov nick
>
>
>
>
> --
> trajce nikolov nick
>
>