Hi, Chris Denham wrote:
Hi Mike, I hadn't heard of CUDA, sounds interesting. Though it's a shame it requires Nvidia hardware. What was in my mind when I said "part of a shader" was the idea that maybe there was a way to have a function in a shader that was only executed once per frame to set up some variables used by the rest of the shader. I guess a bit like some kind of lazy initialization method.
I don't know of a way to do this. Because all shader invocations can run in parallel, it would be difficult to do something like this. The way I try to think of it is that shader invocations running in parallel within a single pass do not know what their siblings are doing.
Hmmm, now I've spelt it out, I just had a thought. Could I implement lazy per-frame initialization within the fragment shader using the value of a pixel in the frame buffer that the shader is writing? I.e. can the shader read back a pixel that was written by a previous execution?
Same argument as above: because of the parallelism, you cannot read from and write to the same texture in a single pass. This is the reason ping-pong techniques exist: you read from one texture and write to another, then swap their roles for the next pass. See the osggameoflife example.
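In case it helps, here is a minimal sketch of the ping-pong wiring in OSG. It is not code from osggameoflife; the float texture format, the "processQuad" node assumed to carry the processing shader, and the per-frame pass toggling are all my own assumptions.

// Minimal ping-pong sketch in OSG (illustrative, not from osggameoflife):
// two float textures swap roles each frame, so pass A reads texA and
// writes texB, and pass B reads texB and writes texA.
#include <osg/Camera>
#include <osg/Texture2D>
#include <osg/StateSet>

static osg::Texture2D* createFloatTexture(int width, int height)
{
    osg::Texture2D* tex = new osg::Texture2D;
    tex->setTextureSize(width, height);
    tex->setInternalFormat(GL_RGBA32F_ARB);   // store data, not displayable colour
    tex->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
    tex->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);
    return tex;
}

// Build one processing pass: a pre-render FBO camera that samples 'input'
// on texture unit 0 and writes its fragments into 'output'.
// 'processQuad' is assumed to be a screen-aligned quad carrying the shader.
static osg::Camera* createPass(osg::Node* processQuad,
                               osg::Texture2D* input,
                               osg::Texture2D* output)
{
    osg::Camera* cam = new osg::Camera;
    cam->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    cam->setRenderOrder(osg::Camera::PRE_RENDER);
    cam->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    cam->setClearMask(GL_COLOR_BUFFER_BIT);
    cam->setViewport(0, 0, output->getTextureWidth(), output->getTextureHeight());
    cam->attach(osg::Camera::COLOR_BUFFER, output);
    cam->getOrCreateStateSet()->setTextureAttributeAndModes(0, input);
    cam->addChild(processQuad);
    return cam;
}

// Per frame, enable one pass and disable the other so the two textures
// effectively swap between "source" and "destination".
static void selectPass(osg::Camera* passAtoB, osg::Camera* passBtoA,
                       unsigned int frameNumber)
{
    const bool even = (frameNumber % 2) == 0;
    passAtoB->setNodeMask(even ? ~0u : 0u);
    passBtoA->setNodeMask(even ? 0u : ~0u);
}

osggameoflife arranges the swap somewhat differently, so treat the node-mask toggling above purely as one way of expressing the idea.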
Can you explain what you mean by "geometry feedback path"? Not heard of that.
Cannot comment on this.
I'm still a bit puzzled about how to get a shader to execute once per frame. Do I need to apply it to something that only has one vertex or renders to one pixel?
My first thought would be to use a single pixel FBO target. Also search the internet for "GPU reduction".
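To illustrate the single-pixel idea, a hedged sketch: attach a 1x1 osg::Image to a pre-render FBO camera so OSG reads the rendered pixel back to the CPU after the camera draws. The "reductionQuad" node and the image format are assumptions, not from any particular example.

// Single-pixel readback target (illustrative names; assumes 'reductionQuad'
// carries a shader that writes the minimum into the red channel of its one
// output fragment).
#include <osg/Camera>
#include <osg/Image>

static osg::Camera* createReadbackCamera(osg::Node* reductionQuad,
                                         osg::Image* result)
{
    // 1x1 float image; OSG reads the rendered pixel back into it each frame.
    result->allocateImage(1, 1, 1, GL_RGBA, GL_FLOAT);

    osg::Camera* cam = new osg::Camera;
    cam->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    cam->setRenderOrder(osg::Camera::PRE_RENDER);
    cam->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    cam->setViewport(0, 0, 1, 1);
    cam->attach(osg::Camera::COLOR_BUFFER, result);   // attaching the image triggers readback
    cam->addChild(reductionQuad);
    return cam;
}

A full minimum reduction would normally chain a few downsampling passes, each writing the minimum of a small neighbourhood, until the result fits in that single pixel; searching for "GPU reduction" turns up the details.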
jp
The reason I say urgg about returning data via information embedded in a texture is the same reason I don't like any kind of blind type casting. But you are right, in my case it may not be too bad, as I probably just want to return a single float in the red component of a pixel, so I'll give it a go.
Cheers.
Chris.

----- Original Message -----
From: "Mike Weiblen" <[EMAIL PROTECTED]>
To: "Chris Denham" <[EMAIL PROTECTED]>; "OpenSceneGraph Users" <[email protected]>
Sent: Sunday, November 23, 2008 6:49 PM
Subject: Re: [osg-users] Getting a value calculated by a GLSL shader once per frame.

Hi Chris, comments inline...

On Sun, Nov 23, 2008 at 9:47 AM, Chris Denham <[EMAIL PROTECTED]> wrote:

> I have been trying to work out how/if you can use a shader to compute a
> value (once per frame) and use that value as a constant in the shader for
> the rest of the frame. For example, I want to use a shader to find the
> minimum value in a texture once at the beginning of each frame, then use
> that minimum value (probably as a uniform) in a fragment shader.

Yes, a uniform would be the right path in.

> It seemed to me I have two problems:
> 1. How do I get a shader (or part of a shader) to only execute once per frame?

(Not sure what you mean by part of a shader.) But you could use two passes: one pass renders the scene you're analyzing under your special shader, and the other pass renders using the result from the first pass.

> 2. How do I get the minimum value out of the shader in order to get the
> application to put it in a uniform?

There are three ways I can think of:
1) the usual way: read back final fragment values written to some render target, like a texture or FBO.
2) use the geometry feedback path (I have little experience with this path, so don't know its limitations etc.).
3) use CUDA.

> The only way I could think of to get a value out of a shader was to render
> its result to a texture and pick the data out of the texture in the
> application. Urgg.. there must be a better way.

It's not that distasteful, is it?

> I had the idea that people may be using shaders as a general purpose
> 'rocket powered' implementation for all kinds of general algorithms, but I
> can't

Sure, NV's CUDA does all sorts of general-purpose computing on the GPU, hiding implementation details behind a parallel compute abstraction. At NVISION there was a cool demo using CUDA and GL together to render a scene using both raytracing and rasterization (raytracing only on a shiny car, rasterization for the road and other background features; a nice approach for quality/performance tradeoffs).

> see how to use them in that way. Does OSG provide a way to use GLSL for
> anything other than 'colouring pixels' ;-) ?

Afaik OSG doesn't have any direct support for CUDA, but that doesn't stop you from interpreting a framebuffer result as something other than colored pixels.

cheers
-- mew

> Chris.
> - The answers to stupid questions are often more enlightening than the answers to smart ones.

--
Mike Weiblen -- Boulder Colorado USA -- http://mew.cx/
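Tying the thread above together, i.e. returning a single float in the red component of a pixel and feeding it back in as a uniform, here is a speculative sketch of the glue code. It assumes the 1x1 GL_FLOAT image from the earlier sketch; the callback class and uniform names are my own.

// After the readback camera has drawn, copy the red component of the
// 1x1 image into a uniform that the main scene's shaders can use.
#include <osg/Camera>
#include <osg/Image>
#include <osg/Uniform>
#include <osg/RenderInfo>

struct CopyMinToUniform : public osg::Camera::DrawCallback
{
    CopyMinToUniform(osg::Image* image, osg::Uniform* uniform)
        : _image(image), _uniform(uniform) {}

    virtual void operator()(osg::RenderInfo& /*renderInfo*/) const
    {
        // The image holds one RGBA float pixel; index 0 is the red channel.
        const float minValue = reinterpret_cast<const float*>(_image->data())[0];
        _uniform->set(minValue);
    }

    osg::ref_ptr<osg::Image>   _image;
    osg::ref_ptr<osg::Uniform> _uniform;
};

// Usage sketch:
//   osg::Uniform* minUniform = new osg::Uniform("minValue", 0.0f);
//   mainScene->getOrCreateStateSet()->addUniform(minUniform);
//   readbackCamera->setFinalDrawCallback(new CopyMinToUniform(resultImage, minUniform));

Note that the value consumed this way will typically lag by a frame, and with a multithreaded viewer you may need to take care about when the callback fires relative to the main pass.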
_______________________________________________ osg-users mailing list [email protected] http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

