Re: [Interest] Qt3D Framegraphs

2018-09-04 Thread Andy
On Mon, Sep 3, 2018 at 9:25 AM Paul Lemire  wrote:

> Glad to hear that, hopefully things are starting to make more sense now.
>

Getting there - thank you!

On 09/03/2018 02:54 PM, Andy wrote:
>
> Progress! Here's my current framegraph:
>
> [snip]
>
> Question:
>
>1) I am using an RGBAFormat for my texture. I changed the alpha in the
> clear colour from 0x80 to 0xEE and I now see an alpha cleared background in
> the offscreen (see image). I can just use RGB for my purposes right now,
> but I'm curious why the onscreen clearing is not using the alpha channel? I
> can confirm this by changing the clear colour to #FF00 - I just get
> solid black.
>
> Well I believe that this depends on the format of your back buffer
> (usually it is RGB). You can try to query it with
> QSurfaceFormat::defaultFormat() and look at the alphaBufferSize (or
> apitrace also gives you the format when you select a draw call that renders
> to screen).
>

Got it! If I setAlphaBufferSize( 8 ) on my default format it works.
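For anyone else hitting this, a sketch of what that looks like (do this early in main(), before any window or Qt3D surface is created; the exact values are just what I'm using):

```cpp
#include <QSurfaceFormat>

// Request an alpha channel in the default back buffer format so the
// onscreen clear colour's alpha actually takes effect.
QSurfaceFormat format = QSurfaceFormat::defaultFormat();
format.setAlphaBufferSize( 8 );  // 8-bit alpha channel in the back buffer
QSurfaceFormat::setDefaultFormat( format );
```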

>
> Problem:
>
>1) The resulting scene isn't the same in the offscreen capture:
>   - the yellow cube is on top of everything
>   - the red & blue arrows aren't clipped by the plane
>
> I suspect that this is caused by the fact that you have no depth
> attachment on your RenderTarget so that depth testing isn't performed
> properly. You would need to create another RenderTargetOutput that you bind
> to the attachment point Depth with a suitable Texture2D texture with format
> (D32, D24 ...).
>

Bingo. That fixes it.
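In C++ the extra depth attachment looks roughly like this (a sketch, not my exact code; `renderTarget` stands in for the existing Qt3DRender::QRenderTarget and parent pointers are omitted):

```cpp
#include <Qt3DRender/QRenderTargetOutput>
#include <Qt3DRender/QTexture>

// A depth texture matching the size of the Color0 attachment, bound to
// the Depth attachment point so depth testing works offscreen.
auto depthTexture = new Qt3DRender::QTexture2D;
depthTexture->setFormat( Qt3DRender::QAbstractTexture::D24 );
depthTexture->setWidth( 1024 );
depthTexture->setHeight( 768 );

auto depthOutput = new Qt3DRender::QRenderTargetOutput;
depthOutput->setAttachmentPoint( Qt3DRender::QRenderTargetOutput::Depth );
depthOutput->setTexture( depthTexture );

renderTarget->addOutput( depthOutput );
```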


>   - it isn't antialiased
>
> That's likely caused by a) not having a high enough resolution for your
> attachments b) using a Texture2D instead of a Texture2DMultisample (though
> I'm not sure RenderCapture would work with the latter).
>
> Have you tried going for a 2048/2048 texture instead of 512/512, assuming
> you have no memory constraints? Then you can always scale back the QImage
> you capture to 512/512 if need be.
>

Texture2DMultisample does indeed make it better. Once I set "samples" to
the same as my QSurfaceFormat::defaultFormat(), I get decent results. Not
100% the same, but very close. (So it does work w/RenderCapture!)

My "final" frame graph (for those following along):

RenderSurfaceSelector:
  Viewport:
    ClearBuffers:
      buffers: ColorDepthBuffer
      clearColor: "#faebd7"
      NoDraw: {}
    FrustumCulling:
      # OnScreen
      CameraSelector:
        objectName: onScreenCameraSelector
        RenderCapture:
          objectName: onScreenCapture
      # OffScreen
      CameraSelector:
        objectName: offScreenCameraSelector
        RenderTargetSelector:
          target:
            RenderTarget:
              attachments:
              - RenderTargetOutput:
                  attachmentPoint: Color0
                  texture:
                    Texture2DMultisample:
                      objectName: offScreenTexture
                      width: 1024
                      height: 768
                      format: RGBFormat
                      samples: 8
              - RenderTargetOutput:
                  attachmentPoint: Depth
                  texture:
                    Texture2DMultisample:
                      width: 1024
                      height: 768
                      format: D24
                      samples: 8
          ClearBuffers:
            buffers: ColorDepthBuffer
            clearColor: "#faebd7"
            NoDraw: {}
          RenderCapture:
            objectName: offScreenCapture


If anyone is interested in the code to read framegraphs as YAML like this,
please get in touch and I can clean it up & put it on GitLab (sometime next
month). It makes it a lot easier to iterate on building a framegraph. It
also drastically reduces the amount of boilerplate code; you can include &
read the graphs as resources, and you don't have to bring in all of QML.

Now that I have the basics working... I'll need to dig into the multipass
shader stuff to get the effects I want.

Thank you for your patience!
___
Interest mailing list
Interest@qt-project.org
http://lists.qt-project.org/mailman/listinfo/interest


Re: [Interest] Qt3D Framegraphs

2018-09-03 Thread Paul Lemire via Interest
Glad to hear that, hopefully things are starting to make more sense now.


On 09/03/2018 02:54 PM, Andy wrote:
> Progress! Here's my current framegraph:
>
> RenderSurfaceSelector:
>   Viewport:
>     ClearBuffers:
>   buffers: ColorDepthBuffer
>   clearColor: "#EEfaebd7"
>   NoDraw: {}
>     FrustumCulling:
>   # OnScreen
>   CameraSelector:
>     objectName: onScreenCameraSelector
>     RenderCapture:
>   objectName: onScreenCapture
>   # OffScreen
>   CameraSelector:
>     objectName: offScreenCameraSelector
>     RenderTargetSelector:
>   target:
>     RenderTarget:
>   attachments:
>   - RenderTargetOutput:
>   attachmentPoint: Color0
>   texture:
>     Texture2D:
>   width: 512
>   height: 512
>   format: RGBAFormat
>   ClearBuffers:
>     buffers: ColorDepthBuffer
>     clearColor: "#EEfaebd7"
>     NoDraw: {}
>   RenderCapture:
>     objectName: offScreenCapture
>
> Results of the render captures:
>
>    onScreenCapture: https://postimg.cc/image/v26nfj36l/
>    offScreenCapture: https://postimg.cc/image/68x3evrvx/
>
> I fixed the offscreen aspect ratio issue by creating a new offscreen
> camera and forwarding all but these two signals:
>
>    Qt3DRender::QCamera::aspectRatioChanged
>    Qt3DRender::QCamera::projectionMatrixChanged
>   
> Question:
>  
>    1) I am using an RGBAFormat for my texture. I changed the alpha in
> the clear colour from 0x80 to 0xEE and I now see an alpha cleared
> background in the offscreen (see image). I can just use RGB for my
> purposes right now, but I'm curious why the onscreen clearing is not
> using the alpha channel? I can confirm this by changing the clear
> colour to #FF00 - I just get solid black.
Well I believe that this depends on the format of your back buffer
(usually it is RGB). You can try to query it with
QSurfaceFormat::defaultFormat() and look at the alphaBufferSize (or
apitrace also gives you the format when you select a draw call that
renders to screen).
>  
> Problem:
>
>    1) The resulting scene isn't the same in the offscreen capture:
>   - the yellow cube is on top of everything
>   - the red & blue arrows aren't clipped by the plane
I suspect that this is caused by the fact that you have no depth
attachment on your RenderTarget so that depth testing isn't performed
properly. You would need to create another RenderTargetOutput that you
bind to the attachment point Depth with a suitable Texture2D texture
with format (D32, D24 ...).

>   - it isn't antialiased
That's likely caused by a) not having a high enough resolution for your
attachments b) using a Texture2D instead of a Texture2DMultisample
(though I'm not sure RenderCapture would work with the latter).
Have you tried going for a 2048/2048 texture instead of 512/512, assuming
you have no memory constraints? Then you can always scale back the
QImage you capture to 512/512 if need be.

>
> I'm wondering if this is because the shaders aren't being used for the
> offscreen texture? I noticed in apitrace that when switching
> GL_DRAW_FRAMEBUFFER to 0 (onscreen), glUseProgram(1) is called. This
> is not called when switching GL_DRAW_FRAMEBUFFER to 1 (offscreen). Is
> the program supposed to persist or does it need to be called again
> when switching framebuffers?
Programs aren't tied to FrameBuffers; you just call glUseProgram when
you want to switch programs and/or when you didn't track what the
previously used program was.
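In other words (a schematic GL call sequence, not real trace output):

```c
/* Program bindings are global GL state, independent of the bound FBO. */
glUseProgram(1);                              /* program 1 is now current   */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 1);    /* switch to offscreen FBO... */
/* ...program 1 is STILL current; no glUseProgram call is needed here.      */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);    /* back to the default FBO    */
/* Renderers typically skip redundant glUseProgram calls via state caching, */
/* which is why the trace shows it only once.                               */
```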
>
> (apitrace is super-cool. Thanks for the pointer.)
There are also vogl and renderdoc but I tend to always go back to
apitrace :)
>
> Thank you for your time & help!
>
> ---
> Andy Maloney  //  https://asmaloney.com
> twitter ~ @asmaloney 

-- 
Paul Lemire | paul.lem...@kdab.com | Senior Software Engineer
KDAB (France) S.A.S., a KDAB Group company
Tel: France +33 (0)4 90 84 08 53, http://www.kdab.fr
KDAB - The Qt, C++ and OpenGL Experts





Re: [Interest] Qt3D Framegraphs

2018-09-03 Thread Andy
Progress! Here's my current framegraph:

RenderSurfaceSelector:
  Viewport:
    ClearBuffers:
      buffers: ColorDepthBuffer
      clearColor: "#EEfaebd7"
      NoDraw: {}
    FrustumCulling:
      # OnScreen
      CameraSelector:
        objectName: onScreenCameraSelector
        RenderCapture:
          objectName: onScreenCapture
      # OffScreen
      CameraSelector:
        objectName: offScreenCameraSelector
        RenderTargetSelector:
          target:
            RenderTarget:
              attachments:
              - RenderTargetOutput:
                  attachmentPoint: Color0
                  texture:
                    Texture2D:
                      width: 512
                      height: 512
                      format: RGBAFormat
          ClearBuffers:
            buffers: ColorDepthBuffer
            clearColor: "#EEfaebd7"
            NoDraw: {}
          RenderCapture:
            objectName: offScreenCapture

Results of the render captures:

   onScreenCapture: https://postimg.cc/image/v26nfj36l/
   offScreenCapture: https://postimg.cc/image/68x3evrvx/

I fixed the offscreen aspect ratio issue by creating a new offscreen camera
and forwarding all but these two signals:

   Qt3DRender::QCamera::aspectRatioChanged
   Qt3DRender::QCamera::projectionMatrixChanged
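Roughly what the forwarding looks like (a sketch; function and variable names are mine, and I connect each remaining change signal to the matching setter on the offscreen camera):

```cpp
#include <Qt3DRender/QCamera>

// Mirror the main camera onto the offscreen camera, except for aspect
// ratio (and therefore the projection matrix), which the offscreen
// camera manages itself based on its own render-target size.
void forwardCamera( Qt3DRender::QCamera *main, Qt3DRender::QCamera *offscreen )
{
    QObject::connect( main, &Qt3DRender::QCamera::positionChanged,
                      offscreen, &Qt3DRender::QCamera::setPosition );
    QObject::connect( main, &Qt3DRender::QCamera::viewCenterChanged,
                      offscreen, &Qt3DRender::QCamera::setViewCenter );
    QObject::connect( main, &Qt3DRender::QCamera::upVectorChanged,
                      offscreen, &Qt3DRender::QCamera::setUpVector );
    QObject::connect( main, &Qt3DRender::QCamera::fieldOfViewChanged,
                      offscreen, &Qt3DRender::QCamera::setFieldOfView );
    QObject::connect( main, &Qt3DRender::QCamera::nearPlaneChanged,
                      offscreen, &Qt3DRender::QCamera::setNearPlane );
    QObject::connect( main, &Qt3DRender::QCamera::farPlaneChanged,
                      offscreen, &Qt3DRender::QCamera::setFarPlane );
    // Deliberately NOT connected: aspectRatioChanged, projectionMatrixChanged.
}
```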

Question:

   1) I am using an RGBAFormat for my texture. I changed the alpha in the
clear colour from 0x80 to 0xEE and I now see an alpha cleared background in
the offscreen (see image). I can just use RGB for my purposes right now,
but I'm curious why the onscreen clearing is not using the alpha channel? I
can confirm this by changing the clear colour to #FF00 - I just get
solid black.

Problem:

   1) The resulting scene isn't the same in the offscreen capture:
  - the yellow cube is on top of everything
  - the red & blue arrows aren't clipped by the plane
  - it isn't antialiased

I'm wondering if this is because the shaders aren't being used for the
offscreen texture? I noticed in apitrace that when switching
GL_DRAW_FRAMEBUFFER to 0 (onscreen), glUseProgram(1) is called. This is not
called when switching GL_DRAW_FRAMEBUFFER to 1 (offscreen). Is the program
supposed to persist or does it need to be called again when switching
framebuffers?

(apitrace is super-cool. Thanks for the pointer.)

Thank you for your time & help!

---
Andy Maloney  //  https://asmaloney.com
twitter ~ @asmaloney 


Re: [Interest] Qt3D Framegraphs

2018-08-31 Thread Roland Hughes

Can we please quit quoting massive chunks of messages which exceed
the "digest" trigger size so each message comes out as a new digest?
-- 
Roland Hughes, President
Logikal Solutions
(630) 205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net
http://www.johnsmith-book.com
http://www.logikalblog.com
http://www.interestingauthors.com/blog
http://lesedi.us



Re: [Interest] Qt3D Framegraphs

2018-08-31 Thread Andy
On Fri, Aug 31, 2018 at 10:30 AM Paul Lemire  wrote:

> Hi Andy,
> Some ideas below :)
>

Thanks a lot Paul - answers inline.


> On 08/31/2018 02:03 PM, Andy wrote:
>
> The contours/silhouetting proved a bit of a leap right now so I backed off
> to look at the offscreen side of it.
>
> I removed the depth pass and am just trying to get a simple frame graph
> working for on-and-off screen capture.
>
> I have the following frame graph (in YAML, but it should be clear):
>
> RenderSurfaceSelector:
>   Viewport:
> ClearBuffers:
>   buffers: ColorDepthBuffer
>   clearColor: "#80faebd7"
>   NoDraw: {}
> CameraSelector:
>   objectName: cameraSelector
>   FrustumCulling: {}
>
> Is that FrustumCulling node the parent of the RenderPassFilter or is it a
> sibling? If it's not the parent of the RenderPassFilter, it looks like it
> would be part of a branch Viewport -> CameraSelector -> FrustumCulling,
> which would be of no use here
>

Yes, I had it as a sibling of the RenderPassFilter. I didn't know where it
went, because QForwardRenderer has it on the QClearBuffers.


>   RenderPassFilter:
> matchAny:
> - FilterKey:
> name: renderingStyle
> value: forward
>   RenderCapture:
> objectName: onScreenCapture
>
> Is the render capture a child of RenderPassFilter or a sibling here? You
> might be getting lucky (or unlucky, depending on how you see it) because if
> a branch has no RenderPassFilter, by default we select every RenderPass
> from every Material. So visually it might be working but it's probably not
> what you had in mind.
>

I have it as a sibling. I'll choose... "unlucky" because I thought I
understood why it was working :-)


> What I'm seeing would result in:
>
> Viewport -> ClearBuffers -> NoDraw {} -> clear screen
> Viewport -> CameraSelector -> FrustumCulling {} -> draws to screen with
> FrustumCulling (executing all passes of each material)
> Viewport -> CameraSelector -> RenderPassFilter {} -> draws to screen
> (executing only forward passes)
> Viewport -> CameraSelector -> RenderCapture {} -> capture screen
> (executing all passes of each material)
>

Ah. How do I know which types of node result in drawing? I wouldn't have
expected FrustumCulling to draw, for example - from the docs I thought it
was kind of a "command node" like NoDraw.


> I suspect what you want is rather:
> Viewport -> ClearBuffers -> NoDraw {}
> Viewport -> CameraSelector -> FrustumCulling {} -> RenderPassFilter {}
> Viewport -> CameraSelector -> FrustumCulling {} -> RenderPassFilter {} ->
> RenderCapture {}
>

> I even think that this could work:
> Viewport -> ClearBuffers -> NoDraw {}
> Viewport -> CameraSelector -> FrustumCulling {} -> RenderPassFilter {} ->
> RenderCapture {} as RenderCapture shouldn't prevent rendering to the
> screen as well
>
>
Even if I take it all the way back to what should be the simplest (I think)
- no FrustumCulling, no capture:

RenderSurfaceSelector:
  Viewport:
    ClearBuffers:
      buffers: ColorDepthBuffer
      clearColor: "#80faebd7"
      NoDraw: {}
    CameraSelector:
      objectName: cameraSelector
      RenderPassFilter:
        matchAny:
        - FilterKey:
            name: renderingStyle
            value: forward

Qt3DRender::QFrameGraphNode::Custom
  Qt3DRender::QRenderSurfaceSelector::
    Qt3DRender::QViewport::
      Qt3DRender::QClearBuffers::
        Qt3DRender::QNoDraw::
      Qt3DRender::QCameraSelector::cameraSelector
        Qt3DRender::QRenderPassFilter::
          Qt3DRender::QFilterKey::

I'm getting a cleared screen, no model.

I'm using Qt3DExtras::QPhongMaterial on my entities, so the filter key
should match, right?

Based on what you outlined above, Qt3DExtras::QForwardRenderer doesn't make
sense to me. If QFrustumCulling is doing the drawing, then what's the
purpose of the filter keys on QForwardRenderer since they aren't part of
the QRenderSurfaceSelector branch?

Qt3DExtras::QForwardRenderer::
  Qt3DRender::QRenderSurfaceSelector::
    Qt3DRender::QViewport::
      Qt3DRender::QCameraSelector::
        Qt3DRender::QClearBuffers::
          Qt3DRender::QFrustumCulling::
  Qt3DRender::QFilterKey::

  RenderTargetSelector:
> target:
>   RenderTarget:
> attachments:
> - RenderTargetOutput:
> attachmentPoint: Color0
> texture:
>   Texture2D:
> width: 512
> height: 512
> format: RGBAFormat
>
> You might want to set generateMipMaps to false on the texture
>

Right - I thought they were off by default, but I guess it's better to be
explicit.

> ClearBuffers:
>   buffers: ColorDepthBuffer
>   clearColor: "#80faebd7"
>   NoDraw: {}
>
> Looking at it, it does look like it would correctly clear the texture to
> the indicated color.
> Have you 

Re: [Interest] Qt3D Framegraphs

2018-08-31 Thread Paul Lemire via Interest
Hi Andy,

Some ideas below :)

On 08/31/2018 02:03 PM, Andy wrote:
> The contours/silhouetting proved a bit of a leap right now so I backed
> off to look at the offscreen side of it.
>
> I removed the depth pass and am just trying to get a simple frame
> graph working for on-and-off screen capture.
>
> I have the following frame graph (in YAML, but it should be clear):
>
> RenderSurfaceSelector:
>   Viewport:
>     ClearBuffers:
>   buffers: ColorDepthBuffer
>   clearColor: "#80faebd7"
>   NoDraw: {}
>     CameraSelector:
>   objectName: cameraSelector
>   FrustumCulling: {}
Is that FrustumCulling node the parent of the RenderPassFilter or is it
a sibling? If it's not the parent of the RenderPassFilter, it looks like
it would be part of a branch Viewport -> CameraSelector ->
FrustumCulling, which would be of no use here

>   RenderPassFilter:
>     matchAny:
>     - FilterKey:
>     name: renderingStyle
>     value: forward
>   RenderCapture:
>     objectName: onScreenCapture
Is the render capture a child of RenderPassFilter or a sibling here? You
might be getting lucky (or unlucky, depending on how you see it) because
if a branch has no RenderPassFilter, by default we select every
RenderPass from every Material. So visually it might be working but it's
probably not what you had in mind.

What I'm seeing would result in:

Viewport -> ClearBuffers -> NoDraw {} -> clear screen
Viewport -> CameraSelector -> FrustumCulling {} -> draws to screen with
FrustumCulling (executing all passes of each material)
Viewport -> CameraSelector -> RenderPassFilter {} -> draws to screen
(executing only forward passes)
Viewport -> CameraSelector -> RenderCapture {} -> capture screen
(executing all passes of each material)

I suspect what you want is rather:
Viewport -> ClearBuffers -> NoDraw {}
Viewport -> CameraSelector -> FrustumCulling {} -> RenderPassFilter {}
Viewport -> CameraSelector -> FrustumCulling {} -> RenderPassFilter {}
-> RenderCapture {}

I even think that this could work:
Viewport -> ClearBuffers -> NoDraw {}
Viewport -> CameraSelector -> FrustumCulling {} -> RenderPassFilter {}
-> RenderCapture {} as RenderCapture shouldn't prevent rendering to the
screen as well

>   RenderTargetSelector:
>     target:
>   RenderTarget:
>     attachments:
>     - RenderTargetOutput:
>     attachmentPoint: Color0
>     texture:
>   Texture2D:
>     width: 512
>     height: 512
>     format: RGBAFormat
You might want to set generateMipMaps to false on the texture
>     ClearBuffers:
>   buffers: ColorDepthBuffer
>   clearColor: "#80faebd7"
>   NoDraw: {}
Looking at it, it does look like it would correctly clear the texture to
the indicated color.
Have you tried displaying the render target texture by using a PlaneMesh
and a DiffuseMapMaterial?
If you feel adventurous you could try using apitrace to look at the GL
traces and check what's in your texture color attachment
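For reference, the apitrace workflow looks roughly like this (the binary name myqt3dapp is a placeholder for your application):

```shell
# Record a GL trace of the app (writes ./myqt3dapp.trace)
apitrace trace --api gl ./myqt3dapp

# Inspect the trace in the GUI: step through draw calls and look at the
# currently bound framebuffer and its color/depth attachments
qapitrace myqt3dapp.trace
```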
>     RenderPassFilter:
>   matchAny:
>   - FilterKey:
>   name: renderingStyle
>   value: forward
>     RenderCapture:
>   objectName: offScreenCapture
>
> Results of the render captures:
Like the above I think RenderCapture should be a child of
RenderPassFilter here
>
>    onScreenCapture: https://postimg.cc/image/antf2d43h/
>    offScreenCapture: https://postimg.cc/image/e7fcs5z3h/
>
> The onscreen capture is correct - yay, a forward renderer!
>
> 1) Why isn't the offscreen one clearing the background colour using
> ClearBuffers? (Isn't obvious in postimage, but the background is
> transparent.) I tried moving ClearBuffers all over the place, but
> can't get it to work.
>
It looks like your FG is correct regarding the clearing of the
RenderTarget; it would be nice to try to display the texture so that we
can rule out some issue with RenderCapture operating on a RenderTarget.
> 2) How do I fix the aspect ratio of the offscreen image (assuming I
> want the final image to be 512x512)? Do I need to give it its own
> camera and adjust its aspect ratio somehow?
Yes the easiest would be another Camera which sets its own aspect ratio
(you should be able to forward pretty much all the other properties from
your main camera except the aspect ratio)
>
> Thanks for any guidance!
>
> ---
> Andy Maloney  //  https://asmaloney.com
> twitter ~ @asmaloney 
>
>
>
> On Fri, Aug 24, 2018 at 11:24 AM Andy  > wrote:
>
> Paul:
>
> Thank you very much for the detailed responses!
>
> This has given me a lot more to work on/understand.
>
> The ClearBuffers part was very useful for understanding what's
> actually happening. This would be good info to drop into the
> QClearBuffers docs.
>
> I guess I now have to dive into render passes, render 

Re: [Interest] Qt3D Framegraphs

2018-08-31 Thread Andy
The contours/silhouetting proved a bit of a leap right now so I backed off
to look at the offscreen side of it.

I removed the depth pass and am just trying to get a simple frame graph
working for on-and-off screen capture.

I have the following frame graph (in YAML, but it should be clear):

RenderSurfaceSelector:
  Viewport:
    ClearBuffers:
      buffers: ColorDepthBuffer
      clearColor: "#80faebd7"
      NoDraw: {}
    CameraSelector:
      objectName: cameraSelector
      FrustumCulling: {}
      RenderPassFilter:
        matchAny:
        - FilterKey:
            name: renderingStyle
            value: forward
      RenderCapture:
        objectName: onScreenCapture
      RenderTargetSelector:
        target:
          RenderTarget:
            attachments:
            - RenderTargetOutput:
                attachmentPoint: Color0
                texture:
                  Texture2D:
                    width: 512
                    height: 512
                    format: RGBAFormat
        ClearBuffers:
          buffers: ColorDepthBuffer
          clearColor: "#80faebd7"
          NoDraw: {}
        RenderPassFilter:
          matchAny:
          - FilterKey:
              name: renderingStyle
              value: forward
        RenderCapture:
          objectName: offScreenCapture

Results of the render captures:

   onScreenCapture: https://postimg.cc/image/antf2d43h/
   offScreenCapture: https://postimg.cc/image/e7fcs5z3h/

The onscreen capture is correct - yay, a forward renderer!

1) Why isn't the offscreen one clearing the background colour using
ClearBuffers? (Isn't obvious in postimage, but the background is
transparent.) I tried moving ClearBuffers all over the place, but can't get
it to work.

2) How do I fix the aspect ratio of the offscreen image (assuming I want
the final image to be 512x512)? Do I need to give it its own camera and
adjust its aspect ratio somehow?

Thanks for any guidance!

---
Andy Maloney  //  https://asmaloney.com
twitter ~ @asmaloney 



On Fri, Aug 24, 2018 at 11:24 AM Andy  wrote:

> Paul:
>
> Thank you very much for the detailed responses!
>
> This has given me a lot more to work on/understand.
>
> The ClearBuffers part was very useful for understanding what's actually
> happening. This would be good info to drop into the QClearBuffers docs.
>
> I guess I now have to dive into render passes, render states, and
> materials. :-)
>
> I also have a better appreciation for why most examples are QML - writing
> these in C++ is time consuming and error-prone. I've written a little
> (partially working) experiment to specify them in YAML so I don't have to
> pull in all the QML stuff just for defining my framegraph(s). I may
> continue down that road.
>
> Have there been any thoughts/discussions on providing a non-QML way to
> declare these? Could be useful for tooling (Qt Creator plugin for defining
> them visually?) as well.
>
> Thanks again for taking the time to go through this.
>
> ---
> Andy Maloney  //  https://asmaloney.com
> twitter ~ @asmaloney 
>
>
>
> On Tue, Aug 21, 2018 at 9:10 AM Paul Lemire  wrote:
>
>>
>> On 08/21/2018 01:54 PM, Andy wrote:
>>
>> Thank you so much Paul!
>>
>> That gives me something to start working on/pick apart. I see now how
>> onscreen vs. offscreen works and can concentrate on getting the onscreen
>> working the way I want first since they are very similar.
>>
>> 1) "I assume you want to fill the depth buffer with a simple shader
>> right?"
>>
>> I think so? Ultimately I want to experiment with a cel-shaded scene, but
>> for now I'd be happy with adding some black contours on my entities using
>> depth - slightly thicker lines closer to the camera, thinner farther away.
>> Is this the right setup for that?
>>
>>
>> Hmm that's not necessarily what I pictured. Usually a render pass where
>> the depth buffer is filled is used as an optimization technique so that 1)
>> You draw your scene with a very simple shader to fill the depth buffer 2)
>> You draw your scene again using a more complex shader but you then take
>> advantage of the fact that the GPU will only execute the fragment shader
>> for fragments whose depth is equal to what is stored in the depth buffer.
>>
>> If you want to draw contours (which is usually referred to as silhouetting)
>> the technique is different. Meshes are composed of triangles which are
>> specified in a given winding order (order in which the triangles vertices
>> are specified, either clockwise or counterclockwise). That winding order
>> can be used at draw time to distinguish between triangles which are facing
>> the camera and triangles which are backfacing the camera. (Usually another
>> optimization technique is to not draw backfacing triangles a.k.a backface
>> culling).
>>
>> A possible silhouetting technique implementation can be to:
>> 1) draw only the back faces of the mesh (slightly enlarged) and with
>> depth writing into the 

Re: [Interest] Qt3D Framegraphs

2018-08-24 Thread Andy
Paul:

Thank you very much for the detailed responses!

This has given me a lot more to work on/understand.

The ClearBuffers part was very useful for understanding what's actually
happening. This would be good info to drop into the QClearBuffers docs.

I guess I now have to dive into render passes, render states, and
materials. :-)

I also have a better appreciation for why most examples are QML - writing
these in C++ is time consuming and error-prone. I've written a little
(partially working) experiment to specify them in YAML so I don't have to
pull in all the QML stuff just for defining my framegraph(s). I may
continue down that road.

Have there been any thoughts/discussions on providing a non-QML way to
declare these? Could be useful for tooling (Qt Creator plugin for defining
them visually?) as well.

Thanks again for taking the time to go through this.

---
Andy Maloney  //  https://asmaloney.com
twitter ~ @asmaloney 



On Tue, Aug 21, 2018 at 9:10 AM Paul Lemire  wrote:

>
> On 08/21/2018 01:54 PM, Andy wrote:
>
> Thank you so much Paul!
>
> That gives me something to start working on/pick apart. I see now how
> onscreen vs. offscreen works and can concentrate on getting the onscreen
> working the way I want first since they are very similar.
>
> 1) "I assume you want to fill the depth buffer with a simple shader right?"
>
> I think so? Ultimately I want to experiment with a cel-shaded scene, but
> for now I'd be happy with adding some black contours on my entities using
> depth - slightly thicker lines closer to the camera, thinner farther away.
> Is this the right setup for that?
>
>
> Hmm that's not necessarily what I pictured. Usually a render pass where
> the depth buffer is filled is used as an optimization technique so that 1)
> You draw your scene with a very simple shader to fill the depth buffer 2)
> You draw your scene again using a more complex shader but you then take
> advantage of the fact that the GPU will only execute the fragment shader
> for fragments whose depth is equal to what is stored in the depth buffer.
>
> If you want to draw contours (which is usually referred to as silhouetting)
> the technique is different. Meshes are composed of triangles which are
> specified in a given winding order (order in which the triangles vertices
> are specified, either clockwise or counterclockwise). That winding order
> can be used at draw time to distinguish between triangles which are facing
> the camera and triangles which are backfacing the camera. (Usually another
> optimization technique is to not draw backfacing triangles a.k.a backface
> culling).
>
> A possible silhouetting technique implementation can be to:
> 1) draw only the back faces of the mesh (slightly enlarged) and with depth
> writing into the depth buffer disabled.
> 2) draw the front faces of the mesh (with depth writing enabled)
>
> See http://sunandblackcat.com/tipFullView.php?l=eng=15 for a more
> detailed explanation; there are other implementations with geometry shaders
> as well (http://prideout.net/blog/?p=54)
>
> In practice, you would play with render states to control back face /
> front face culling, depth write ... e.g:
> RenderStateSet {
>     renderStates: [
>         // Specify which depth function to use to decide which fragments to keep
>         DepthTest { depthFunction: DepthTest.Equal },
>         // Disable writing into the depth buffer
>         NoDepthWrite {},
>         // Cull front faces (usually you would do back face culling though)
>         CullFace { mode: CullFace.Front }
>     ]
> }
>
> Note that cel shading might yet be another technique (with a different
> implementation than silhouetting). Usually it involves having steps of
> colors that vary based on light position in your fragment shader. It might
> even be simpler to implement than silhouetting actually.
>
> The above link actually implements a combination of both techniques.
>
>
>
> 2) "Have you tried the rendercapture ones?"
>
> Yes I have. That's how I got my render capture working (once those
> examples worked).
>
> One thing that wasn't clear to me before was where to attach the
> RenderCapture node. In the rendercapture example, it's created and then the
> forward renderer is re-parented, which is what I did with mine. Your
> outline makes more sense.
>
>
> I suppose it was made purely by convenience to avoid having to rewrite a
> full FrameGraph, but I do agree that makes understanding a lot harder.
>
>
> ClearBuffers (and NoDraw!) now make sense too. In QForwardRenderer they
> are on the camera selector which seems strange.
>
>
> That's a small optimization. If your FrameGraph results in a single branch
> (which QForwardRenderer probably does), you can combine the ClearBuffers
> and the CameraSelector as that translates to basically clear then draw.
>
> If your framegraph has more than a single branch:
> RenderSurfaceSelector {
> Viewport {
>   CameraSelector {
>   

Re: [Interest] Qt3D Framegraphs

2018-08-21 Thread Paul Lemire via Interest

On 08/21/2018 01:54 PM, Andy wrote:
> Thank you so much Paul!
>
> That gives me something to start working on/pick apart. I see now how
> onscreen vs. offscreen works and can concentrate on getting the
> onscreen working the way I want first since they are very similar.
>
> 1) "I assume you want to fill the depth buffer with a simple shader
> right?"
>
> I think so? Ultimately I want to experiment with a cel-shaded scene,
> but for now I'd be happy with adding some black contours on my
> entities using depth - slightly thicker lines closer to the camera,
> thinner farther away. Is this the right setup for that?

Hmm, that's not necessarily what I pictured. Usually a render pass where
the depth buffer is filled is used as an optimization technique: 1) you
draw your scene with a very simple shader to fill the depth buffer; 2)
you draw your scene again using a more complex shader, but you then take
advantage of the fact that the GPU will only execute the fragment shader
for fragments whose depth is equal to what is stored in the depth buffer.

If you want to draw contours (which is usually referred to as
silhouetting) the technique is different. Meshes are composed of
triangles which are specified in a given winding order (the order in
which the triangle's vertices are specified, either clockwise or
counterclockwise). That winding order can be used at draw time to
distinguish between triangles which are facing the camera and triangles
which are backfacing the camera. (Another common optimization technique
is to not draw backfacing triangles, a.k.a. backface culling.)

A possible silhouetting technique implementation can be to:
1) draw only the back faces of the mesh (slightly enlarged) and with
depth writing into the depth buffer disabled.
2) draw the front faces of the mesh (with depth writing enabled)

See http://sunandblackcat.com/tipFullView.php?l=eng=15 for a
more detailed explanation; there are other implementations with geometry
shaders as well (http://prideout.net/blog/?p=54)

In practice, you would play with render states to control back face /
front face culling, depth write ... e.g:
RenderStateSet {
    renderStates: [
        // Specify which depth function to use to decide which fragments to keep
        DepthTest { depthFunction: DepthTest.Equal },
        // Disable writing into the depth buffer
        NoDepthWrite {},
        // Cull front faces (usually you would do back face culling though)
        CullFace { mode: CullFace.Front }
    ]
}

Note that cel shading might yet be another technique (with a different
implementation than silhouetting). Usually it involves having steps of
colors that vary based on light position in your fragment shader. It
might actually be simpler to implement than silhouetting.

The above link actually implements a combination of both techniques.
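In its simplest form, the cel-shading fragment shader just quantizes
the diffuse term into discrete bands; a rough GLSL sketch (all
uniform/varying names here are assumptions, not from any particular
material):

```glsl
uniform vec3 lightDir;   // normalized light direction, world space
uniform vec3 baseColor;
varying vec3 worldNormal;

void main()
{
    float d = max(dot(normalize(worldNormal), -lightDir), 0.0);
    // 4 discrete bands of shading instead of a smooth ramp
    float band = floor(d * 4.0) / 4.0;
    gl_FragColor = vec4(baseColor * (0.25 + 0.75 * band), 1.0);
}
```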
 
>
> 2) "Have you tried the rendercapture ones?"
>
> Yes I have. That's how I got my render capture working (once those
> examples worked).
>
> One thing that wasn't clear to me before was where to attach the
> RenderCapture node. In the rendercapture example, it's created and
> then the forward renderer is re-parented, which is what I did with
> mine. Your outline makes more sense.

I suppose it was done purely for convenience, to avoid having to rewrite
a full FrameGraph, but I do agree that it makes understanding a lot harder.

>
> ClearBuffers (and NoDraw!) now make sense too. In QForwardRenderer
> they are on the camera selector which seems strange.

That's a small optimization. If your FrameGraph results in a single
branch (which QForwardRenderer's probably does), you can combine the
ClearBuffers and the CameraSelector, as that basically translates to:
clear, then draw.

If your framegraph has more than a single branch:
RenderSurfaceSelector {
    Viewport {
        CameraSelector {
            ClearBuffers { ...
                RenderPassFilter { ... } // Branch 1
                RenderPassFilter { ... } // Branch 2
            }
        }
    }
}

What would happen in that case is:

1) clear buffers then draw branch 1
2) clear buffers then draw branch 2

So in the end you would only see the drawings from Branch 2 because the
back buffer was cleared.

In that case you should instead have it like:

RenderSurfaceSelector {
    Viewport {
        CameraSelector {
            ClearBuffers { ...
                RenderPassFilter { ... } // Branch 1
            }
            RenderPassFilter { ... } // Branch 2
        }
    }
}

or (which is a bit easier to understand but adds one branch to the
FrameGraph)

RenderSurfaceSelector {
    Viewport {
        CameraSelector {
            ClearBuffers { ...
                NoDraw {}
            } // Branch 1
            RenderPassFilter { ... } // Branch 2
            RenderPassFilter { ... } // Branch 3
        }
    }
}


>
> 3) If I want to use any of the "default materials" in extras - Phong,
> PhongAlpha, etc - 

Re: [Interest] Qt3D Framegraphs

2018-08-21 Thread Andy
Thank you so much Paul!

That gives me something to start working on/pick apart. I see now how
onscreen vs. offscreen works and can concentrate on getting the onscreen
working the way I want first since they are very similar.

1) "I assume you want to fill the depth buffer with a simple shader right?"

I think so? Ultimately I want to experiment with a cel-shaded scene, but
for now I'd be happy with adding some black contours on my entities using
depth - slightly thicker lines closer to the camera, thinner farther away.
Is this the right setup for that?

2) "Have you tried the rendercapture ones?"

Yes I have. That's how I got my render capture working (once those examples
worked).

One thing that wasn't clear to me before was where to attach the
RenderCapture node. In the rendercapture example, it's created and then the
forward renderer is re-parented, which is what I did with mine. Your
outline makes more sense.

ClearBuffers (and NoDraw!) now make sense too. In QForwardRenderer they are
on the camera selector which seems strange.

3) If I want to use any of the "default materials" in extras - Phong,
PhongAlpha, etc - then in (3) and (4.3) the filterkeys must be
"renderingStyle"/"forward", correct? Or can I even use them anymore if I'm
going this route?
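(To be concrete, I mean something like this — my untested guess at what
the extras materials expect, based on reading QForwardRenderer:)

```qml
TechniqueFilter {
    // Qt3DExtras materials tag their techniques with this key
    matchAll: [ FilterKey { name: "renderingStyle"; value: "forward" } ]
    // ... rest of the framegraph branch ...
}
```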

4) I will use the offscreen to generate snapshot images and video - I
assume I can turn offscreen rendering on/off dynamically by simply
enabling/disabling the RenderTargetSelector?
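(i.e. something like this, assuming the `enabled` property skips the
whole subtree below it:)

```qml
RenderTargetSelector {
    // Assumption: when disabled, this branch (and the offscreen
    // capture below it) is skipped entirely.
    enabled: offscreenCaptureWanted  // hypothetical property
    target: RenderTarget { /* ... */ }
    // ... clear / depth / color passes ...
}
```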


Thanks again for your help. I finally feel like I'm in danger of
understanding something here!


On Mon, Aug 20, 2018 at 1:20 AM Paul Lemire  wrote:

> Hi Andy,
>
> Please see my reply below
>
> On 08/15/2018 02:59 PM, Andy wrote:
>
> I've been struggling with framegraphs for a very long time now and still
> don't feel like I understand  their structure - what goes where or what
> kind of nodes can be attached to what. I can throw a bunch of things
> together, but when it doesn't work I have no idea how to track down what's
> missing or what's in the wrong place.
>
> Can anyone give an outline of what a framegraph would look like to
> facilitate all of the following for a given scene:
>
> 1. rendering in a window onscreen
> 2. depth pass for shaders to use
>
> I assume you want to fill the depth buffer with a simple shader right?
>
> 3. render capture for taking "snapshots" of what the user is seeing
> onscreen
> 4. offscreen rendering of the current scene at a specified size (not the
> UI window size)
> 5. render capture of the offscreen scene to an image
>
>
> I've not tested, but I would imagine what you want would look like the
> framegraph below:
>
> RenderSurfaceSelector { // Select window to render to
>
> Viewport {
>
> // 1 Clear Color and Depth buffers
> ClearBuffers {
> buffers: ClearBuffers.ColorDepthBuffer
> NoDraw {}
> }
>
>
> // Select Camera to Use to Render Scene
> CameraSelector {
> camera: id_of_scene_camera
>
> // 2 Fill Depth Buffer pass (for screen depth buffer)
> RenderPassFilter {
>     filterKeys: [ FilterKey { name: "pass"; value: "depth_fill_pass" } ]
>     // Requires a Material which defines such a RenderPass
> }
>
> // 3 Draw screen content and use depth compare == to benefit from the
> z-fill pass
> RenderPassFilter {
>    filterKeys: [ FilterKey { name: "pass"; value: "color_pass" } ]
>    // Requires a Material which defines such a RenderPass
>RenderStateSet {
> renderStates: DepthTest { depthFunction: DepthTest.Equal }
> RenderCapture { // Use this to capture screen frame buffer
> id: onScreenCapture
> }
>}
> }
>
> // 4 Create FBO for offscreen rendering
> RenderTargetSelector {
> target: RenderTarget {
>   attachments: [
> RenderTargetOutput {
> attachmentPoint: RenderTargetOutput.Color0
> texture: Texture2D { width: width_of_offscreen_area;
> height: height_of_offscreen_area;  }
> },
>RenderTargetOutput {
> attachmentPoint: RenderTargetOutput.Depth
> texture: Texture2D { width: width_of_offscreen_area;
> height: height_of_offscreen_area;  }
> } ]
>} // RenderTarget
>
> // Note: ideally 4.1, 4.2 and 4.3 and 1, 2, 3 could be factored
> out as a reusable subtree (if using QML)
>
> // 4.1 Clear FBO
> ClearBuffers {
>     buffers: ClearBuffers.ColorDepthBuffer
>     NoDraw {}
> }
>
> // 4.2 Fill Depth Buffer pass (for offscreen depth buffer)
> RenderPassFilter {
>     filterKeys: [ FilterKey { name: "pass"; value: "depth_fill_pass" } ]
>     // Requires a Material which defines such a RenderPass
> }
>
> // 4.3 Draw content into offscreen color buffer and use depth compare
> == to benefit from the z-fill pass
> RenderPassFilter {
>    filterKeys: [ FilterKey { name: "pass"; value: "color_pass" } ]
>    // Requires a Material which defines such a RenderPass
>RenderStateSet {
> 

Re: [Interest] Qt3D Framegraphs

2018-08-19 Thread Paul Lemire via Interest
Hi Andy,

Please see my reply below


On 08/15/2018 02:59 PM, Andy wrote:
> I've been struggling with framegraphs for a very long time now and
> still don't feel like I understand  their structure - what goes where
> or what kind of nodes can be attached to what. I can throw a bunch of
> things together, but when it doesn't work I have no idea how to track
> down what's missing or what's in the wrong place.
>
> Can anyone give an outline of what a framegraph would look like to
> facilitate all of the following for a given scene:
>
> 1. rendering in a window onscreen
> 2. depth pass for shaders to use
I assume you want to fill the depth buffer with a simple shader right?
> 3. render capture for taking "snapshots" of what the user is seeing
> onscreen
> 4. offscreen rendering of the current scene at a specified size (not
> the UI window size)
> 5. render capture of the offscreen scene to an image

I've not tested, but I would imagine what you want would look like the
framegraph below:

RenderSurfaceSelector { // Select window to render to

Viewport {

// 1 Clear Color and Depth buffers
ClearBuffers {
    buffers: ClearBuffers.ColorDepthBuffer
    NoDraw {}
}


// Select Camera to Use to Render Scene
CameraSelector {
    camera: id_of_scene_camera

// 2 Fill Depth Buffer pass (for screen depth buffer)
RenderPassFilter {
    filterKeys: [ FilterKey { name: "pass"; value: "depth_fill_pass" } ]
    // Requires a Material which defines such a RenderPass
}

// 3 Draw screen content and use depth compare == to benefit from the
// z-fill pass
RenderPassFilter {
   filterKeys: [ FilterKey { name: "pass"; value: "color_pass" } ]
   // Requires a Material which defines such a RenderPass
   RenderStateSet {
    renderStates: DepthTest { depthFunction: DepthTest.Equal }
        RenderCapture { // Use this to capture screen frame buffer
    id: onScreenCapture
    }
   }
}

// 4 Create FBO for offscreen rendering
RenderTargetSelector {
    target: RenderTarget {
          attachments: [
            RenderTargetOutput {
                attachmentPoint: RenderTargetOutput.Color0
                texture: Texture2D { width: width_of_offscreen_area;
height: height_of_offscreen_area;  }
            },
   RenderTargetOutput {
                attachmentPoint: RenderTargetOutput.Depth
                texture: Texture2D { width: width_of_offscreen_area;
height: height_of_offscreen_area;  }
            } ]
   } // RenderTarget

        // Note: ideally 4.1, 4.2 and 4.3 and 1, 2, 3 could be factored
out as a reusable subtree (if using QML)

        // 4.1 Clear FBO
        ClearBuffers {
            buffers: ClearBuffers.ColorDepthBuffer
            NoDraw {}
        }

        // 4.2 Fill Depth Buffer pass (for offscreen depth buffer)
        RenderPassFilter {
            filterKeys: [ FilterKey { name: "pass"; value: "depth_fill_pass" } ]
            // Requires a Material which defines such a RenderPass
        }

    // 4.3 Draw content into offscreen color buffer and use depth
    // compare == to benefit from the z-fill pass
    RenderPassFilter {
       filterKeys: [ FilterKey { name: "pass"; value: "color_pass" } ]
       // Requires a Material which defines such a RenderPass
       RenderStateSet {
        renderStates: DepthTest { depthFunction: DepthTest.Equal }
            RenderCapture { // Use this to capture offscreen frame buffer
        id: offScreenCapture
        }
       }
    }
} // RenderTargetSelector

} // CameraSelector

} // Viewport

} // RenderSurfaceSelector
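
To actually grab an image from either branch, you request a capture and
wait for the reply; a rough QML sketch (requestCapture() returns a
RenderCaptureReply whose completed signal fires once the image is
ready; "/tmp/snapshot.png" is just an example path):

```qml
function takeSnapshot() {
    var reply = onScreenCapture.requestCapture()
    reply.completed.connect(function() {
        reply.saveImage("/tmp/snapshot.png")
        reply.destroy() // release the reply once handled
    })
}
```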



>
> Using the forward renderer in Qt3DExtras, I can do (1) and (3), but
> I've been supremely unsuccessful at implementing any of the rest
> despite many many attempts - even working with the examples. (And the
> deferred renderer examples - which might help? - don't work on macOS.)
Have you tried the rendercapture ones? They are in tests/manual.
>
> I am using C++, not QML. I tried replacing my framegraph with a
> QML-specified one but can't get that to work either (see previous post
> to this list "[Qt3D] Mixing Quick3D and C++ nodes").
>
> Can anyone please help? I'm stuck.
>
> Thank you.
>
> ---
> Andy Maloney  //  https://asmaloney.com
> twitter ~ @asmaloney 
>
>
>
> ___
> Interest mailing list
> Interest@qt-project.org
> http://lists.qt-project.org/mailman/listinfo/interest

-- 
Paul Lemire | paul.lem...@kdab.com | Senior Software Engineer
KDAB (France) S.A.S., a KDAB Group company
Tel: France +33 (0)4 90 84 08 53, http://www.kdab.fr
KDAB - The Qt, C++ and OpenGL Experts





[Interest] Qt3D Framegraphs

2018-08-15 Thread Andy
I've been struggling with framegraphs for a very long time now and still
don't feel like I understand  their structure - what goes where or what
kind of nodes can be attached to what. I can throw a bunch of things
together, but when it doesn't work I have no idea how to track down what's
missing or what's in the wrong place.

Can anyone give an outline of what a framegraph would look like to
facilitate all of the following for a given scene:

1. rendering in a window onscreen
2. depth pass for shaders to use
3. render capture for taking "snapshots" of what the user is seeing onscreen
4. offscreen rendering of the current scene at a specified size (not the UI
window size)
5. render capture of the offscreen scene to an image

Using the forward renderer in Qt3DExtras, I can do (1) and (3), but I've
been supremely unsuccessful at implementing any of the rest despite many
many attempts - even working with the examples. (And the deferred renderer
examples - which might help? - don't work on macOS.)

I am using C++, not QML. I tried replacing my framegraph with a
QML-specified one but can't get that to work either (see previous post to
this list "[Qt3D] Mixing Quick3D and C++ nodes").

Can anyone please help? I'm stuck.

Thank you.

---
Andy Maloney  //  https://asmaloney.com
twitter ~ @asmaloney 