Dear Mr. Art Tevs,

     I have been able to sort out the problem I posed yesterday, i.e., making 
the texture image available for analysis / parameter determination. Similar to 
your osgPPU::UnitOutCapture, a class called UnitTextureTap is derived from 
osgPPU::UnitOut. It overrides noticeFinishRendering, where I store a copy of 
the last rendered texture into an image and provide an interface to access 
this image. I have tested it and it seems to work fine for my application. In 
case this class is useful to anyone else, I would be more than happy to 
contribute it.
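Roughly, the pattern looks like this. Note that a stand-in base class is used 
below instead of the real osgPPU::UnitOut, whose exact noticeFinishRendering 
signature I omit here; only the override-and-copy idea is shown:

```cpp
#include <cstdint>
#include <vector>

// Stand-in for osgPPU::UnitOut -- the real base class lives in osgPPU and
// its exact noticeFinishRendering signature is not reproduced here; only
// the override-and-copy pattern is illustrated.
struct UnitOutStandIn {
    virtual ~UnitOutStandIn() {}
    // Hook called after the unit has finished rendering.
    virtual void noticeFinishRendering(const std::vector<std::uint8_t>& texels) {
        (void)texels;  // base does nothing with the rendered data
    }
};

// Tap that snapshots the last rendered texture into a CPU-side image copy.
class UnitTextureTap : public UnitOutStandIn {
public:
    void noticeFinishRendering(const std::vector<std::uint8_t>& texels) override {
        image_ = texels;  // keep a copy for later analysis
    }
    const std::vector<std::uint8_t>& image() const { return image_; }
private:
    std::vector<std::uint8_t> image_;
};
```

The analysis code then reads the snapshot via image() after each frame, 
without touching the rendering pipeline itself.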

    Thanks for the kind support.

Regards


Harash.  




________________________________
From: Art Tevs <arti_t...@yahoo.de>
To: osg-users@lists.openscenegraph.org
Sent: Monday, September 28, 2009 3:04:19 PM
Subject: Re: [osg-users] [osgPPU] osgPPU for image processing

Hello Harash,


Harash Sharma wrote:
> 
> I have been able to incorporate osgPPU into my application. Image filtering 
> and color space conversions are working fine at the resolution that I set at 
> the beginning of the osgPPU pipeline. 
> 

Nice to hear that the library is used not only for rendering, but also for 
some online/offline computation and processing. 


> 
> 1. We need to carry out filtering at a higher resolution followed by 
> sub-sampling. Can you suggest a method to sub-sample the image to a lower 
> resolution?
> 

There is plenty of work in the field of image processing on sub-sampling. The 
straightforward implementation, which is enabled by default in osgPPU, uses 
GL_LINEAR for filtering. This approach is very simple and supported by the 
hardware, so it is very fast, but it does not suppress high-frequency detail 
very well; it may even create aliasing artifacts when subsampling 
high-frequency data. Using a Gaussian kernel, one can produce smoother 
subsampled images.
If you haven't done so yet, take a look at the Princeton University lecture 
slides on image resampling here:
http://www.cs.princeton.edu/courses/archive/fall99/cs426/lectures/sampling/index.htm
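To illustrate on the CPU what a Gaussian pre-filter buys you over plain point 
sampling, here is a small sketch in plain C++ (a 3-tap binomial kernel is used 
as a cheap stand-in for a Gaussian; on the GPU this would become a filter 
shader pass before decimation):

```cpp
#include <cstddef>
#include <vector>

// Downsample a 1-D signal by 2 using a binomial (Gaussian-like) kernel
// [0.25, 0.5, 0.25]. The kernel attenuates high frequencies before
// decimation; plain point sampling (or bilinear GL_LINEAR) leaves more
// of them in and therefore aliases more.
std::vector<double> downsample2(const std::vector<double>& in) {
    std::vector<double> out;
    for (std::size_t i = 0; i < in.size(); i += 2) {
        // Clamp the kernel at the borders by repeating the edge sample.
        double left  = (i == 0) ? in[0] : in[i - 1];
        double right = (i + 1 < in.size()) ? in[i + 1] : in[i];
        out.push_back(0.25 * left + 0.5 * in[i] + 0.25 * right);
    }
    return out;
}
```

A constant (DC) signal passes through unchanged, which is the basic sanity 
check for any resampling kernel whose weights sum to one.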


> 
> 2. We need to stretch the contrast of the image. Earlier we were doing this 
> on the CPU: we build a histogram of the image, identify the lower and higher 
> gray levels, and then transform the gray level of each pixel.
> 

Computing a histogram on the GPU is not efficient, because the scatter 
operation it requires does not parallelize well. However, if you just want to 
find the minimum and maximum values for your contrast stretch, I would 
propose using custom-built mipmaps. First transform your image into a 
contrast representation, computing the contrast for every pixel. Then do the 
same thing as in the HDR example of osgPPU, where you build the mipmap of 
your image down to the 1x1 level. The mipmap consists of two channels, 
holding the minimum and the maximum value. 
At the end you read out the last level and use these two values to transform 
each pixel of the original image accordingly. This is almost the same as in 
the HDR example, so take a look there.
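As a CPU-side sketch of that reduction (on the GPU it runs as successive 
mipmap passes, as in the HDR example; the function names here are made up for 
illustration only):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// CPU analogue of the min/max mipmap reduction: each pass halves the data,
// keeping the per-pair minimum and maximum, until a single (min, max) pair
// remains -- the equivalent of reading back the 1x1 mipmap level.
std::pair<double, double> minMaxReduce(const std::vector<double>& v) {
    std::vector<std::pair<double, double>> level;
    for (double x : v) level.push_back({x, x});
    while (level.size() > 1) {
        std::vector<std::pair<double, double>> next;
        for (std::size_t i = 0; i < level.size(); i += 2) {
            // Odd-sized levels duplicate the last element.
            std::size_t j = std::min(i + 1, level.size() - 1);
            next.push_back({std::min(level[i].first,  level[j].first),
                            std::max(level[i].second, level[j].second)});
        }
        level = next;
    }
    return level[0];
}

// Stretch one pixel value into [0, 1] using the reduced min/max.
double stretch(double x, double lo, double hi) {
    return (hi > lo) ? (x - lo) / (hi - lo) : 0.0;
}
```

The final shader pass would apply stretch() per pixel, with lo and hi bound 
from the 1x1 level of the min/max mipmap.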

I hope I was able to help.

regards,
art

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=17681#17681





_______________________________________________
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

