Hi Mathias,

> From: Mathias Fröhlich
> 
> On Friday, December 30, 2011 11:45:21 Frederic Bouvier wrote:
> > The problem I have to solve currently is how to feed the G-buffer
> > to the effect system, because the textures used to store it are
> > camera-dependent (in a multi-screen context) but the pass (which
> > is a stateset) is built once and for all.
>
> I do not exactly understand. I see that the effects collide in some
> sense with this kind of an approach. Partly, effects try to achieve,
> in the good old fixed-function-derived world, results similar to what
> you get once your code is used; but as far as I can see, the
> intermediate screen-sized textures should not be processed with the
> effects anymore? Or at least, how would you know which fragment to
> process with which effect? Or do all the effects probably need to be
> changed to write their color/reflection/whatever information into the
> appropriate render target?
> 
> So far, that is how I understand the question. I am almost sure I am
> missing your point.
> 
> Ahh, ok, do you want to write the compositing step as a usual effect
> file? Then I understand the problem. Well, if this is the problem,
> I also do not see an easy solution. I would think that this final
> compositing step is so different from the rest of the effect stuff
> that inserting these textures using non-generic custom code for this
> special purpose is fine.
> So, for this kind of problem there is currently no solution - maybe
> there will be one when I think about it for some time.
> Let me know if I am looking at the right problem ...

Exactly. I want to give the effect system access to every stage of the
rendering. The geometry pass outputs to render targets, but the fog,
light and bloom passes need access to the textures of the buffer, and
there is a separate one for each camera associated with a window or
sub-window.

I have found a solution where I can make an association between the
cull visitor and the G-buffer and then modify the pass of the effect
on the fly. I hope none of the multi-threading models will break this
scheme.
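
To make the constraint concrete, a deferred light or fog pass only ever
sees the G-buffer through plain samplers, roughly like this (a minimal
GLSL sketch with made-up sampler names, not the actual effect code).
Which textures end up bound to those units depends on the camera, which
is why the stateset of the pass has to be patched during the cull
traversal:

// G-buffer inputs of a deferred pass (illustrative sampler names).
// Each window/sub-window camera has its own set of textures, so the
// right ones have to be bound to these units for every camera.
uniform sampler2D depth_tex;     // depth written by the geometry pass
uniform sampler2D normal_tex;    // view-space normals
uniform sampler2D color_tex;     // diffuse color
uniform sampler2D spec_emis_tex; // specular / emissive terms

void main()
{
    vec2 coords = gl_TexCoord[0].xy;
    vec3 normal = texture2D(normal_tex, coords).xyz;
    vec3 color  = texture2D(color_tex,  coords).rgb;
    // ... light / fog / bloom computation using the G-buffer ...
    gl_FragColor = vec4(color, 1.0);
}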


> > Currently, the fog is computed using the depth buffer as a
> > post-process pass. Any smarter computation (like atmospheric 
> > scattering) is just a matter of writing a single shader that 
> > would replace the default fog computation.
>
> Exactly. And you are looking into the sky if you find a far clipping
> plane depth value.

Actually, it's a null normal that marks the sky. I could also use the
stencil buffer, but I have already messed with a combined depth/stencil
attachment shared between FBOs without much success.
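
In the fog pass that check is essentially the following (again just a
sketch with illustrative names; the real shader does the full fog
computation):

uniform sampler2D normal_tex;
uniform sampler2D depth_tex;

void main()
{
    vec2 coords = gl_TexCoord[0].xy;
    vec3 normal = texture2D(normal_tex, coords).xyz;
    // The geometry pass cleared this pixel and never wrote a normal,
    // so it is sky: leave it untouched by the fog.
    if (dot(normal, normal) == 0.0)
        discard;
    float depth = texture2D(depth_tex, coords).r;
    // ... fog computed from the eye-space position rebuilt from depth ...
    gl_FragColor = vec4(0.0);  // placeholder for the fogged result
}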

> > > So, maybe just one question about what you have done already, again
> > > without looking into any code:
> > > 
> > > You do not require float textures?
> > > As far as I can see, there is a patent issue on this extension,
> > > and usually it is not strictly required.
> > > A fixed-point representation that makes use of the usual depth
> > > buffer - one that scales differently than the usual perspective
> > > depth - could be used instead, and I think we should use it in
> > > the end. In the end this really gives even better accuracy than
> > > a float representation, since floats waste some bits on the
> > > exponent, whereas a fixed-point representation can use all the
> > > bits for accuracy.
> > 
> > I used Policarpo's tutorial on deferred shading, so I don't store
> > the absolute world position in a float texture. As described in the
> > tutorial, I use the view direction and the depth value to compute
> > the eye-space position of the fragment.
>
> That's fine! That's what I had in mind also. The fragment position
> gives the view direction via the projection matrix, and together with
> the depth value, fed through the inverse projection matrix, it yields
> the right values.
> 
> > Nevertheless, I use a float texture to store the normals. I tried
> > setting up a normal buffer with just luminance and alpha in
> > half-float while the color buffers were RGB8, but it was very slow.
> > Instead I used RGBA16F for all textures (except the depth buffer).
> > As I use additive blending to collect the lights, I thought it
> > would be possible to have some kind of HDR without the risk of
> > saturating the 8-bit color components. I can try to use 2 color
> > channels to store the x and y components of the normal and
> > reconstruct the z part with sqrt(1 - (x^2+y^2)), but that won't
> > solve the saturation issue. Some use the LUV color space to store
> > higher intensities in RGB8 but, AFAIK, it doesn't support additive
> > blending.
> 
> Ok, I have not thought about the normals yet. Let's first make this
> work and then care about a fallback or a nice trick that does not
> need float textures and performs well.
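
For the record, the two reconstructions discussed above boil down to
something like this (only a sketch with made-up uniform names; it
assumes the camera's inverse projection matrix is passed in as a
uniform and that the stored normals are in view space):

uniform sampler2D depth_tex;
uniform sampler2D normal_tex;
uniform mat4 inv_projection;   // inverse projection matrix of the camera

// Rebuild the eye-space position of a fragment from its window
// coordinates and the stored depth, instead of storing positions
// in a float texture.
vec3 eyePosition(vec2 coords)
{
    float depth = texture2D(depth_tex, coords).r;
    vec4 ndc = vec4(vec3(coords, depth) * 2.0 - 1.0, 1.0);
    vec4 eye = inv_projection * ndc;
    return eye.xyz / eye.w;
}

// Two-channel normal encoding: store x and y, rebuild z.
// Only works if the sign of z is known, e.g. normals facing the camera.
vec3 decodeNormal(vec2 coords)
{
    vec2 n = texture2D(normal_tex, coords).xy;
    return vec3(n, sqrt(max(0.0, 1.0 - dot(n, n))));
}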

Regards,
-Fred
