Hey Casey.
I'm a bit more optimistic about it. The core of the old pre-3.0 OpenGL
contained concepts that were stable for decades. I feel like this new
functionality has been brewing for a while, hence the recent flurry of
updates, but I'm hopeful the new stuff, with refinements, will go on to
be relatively stable for decades again.

I may be wrong. We'll see.

On Dec 3, 11:56 am, Casey Duncan <[email protected]> wrote:
> Learning all of this seems all well and good, but why do I get the
> feeling it will all be nearly useless knowledge 6 months to a year
> from now? I'm probably being cynical, but that feeling makes me
> reluctant to really dive into this, particularly for hobby work. If I
> was getting paid big bucks to know this stuff, that'd be different 8^)
>
> -Casey
>
>
>
> On Fri, Dec 3, 2010 at 4:20 AM, Florian Bösch <[email protected]> wrote:
> > On Dec 2, 3:16 pm, Jonathan Hartley <[email protected]> wrote:
> >> I think the ideas touched on above (moving from arrays to VBOs, using
> >> VAOs) plus using interleaved arrays, are all taking me in the
> >> direction you suggest, yes?
> > More or less, but there's a bit more to it.
>
> >> I'm just doing one call to glDrawElements for each modelview transform
> >> change in my scene. Presumably after I've done all of the above, this
> >> should be my next target - to put all my object orientations and
> >> positions into a big dynamic VBO (as matrices?) and have the shader
> >> transform vertices right from object space to eye space. At that point
> >> my entire scene could be rendered with a single call to
> >> glDrawElements. Presumably this sort of thing is the logical
> >> conclusion of the direction you are recommending?
>
> > Generally the idea of all this new stuff in OpenGL is to avoid per-
> > frame bus transfer. You typically have something on the order of
> > 0.5 GB/s to 1.5 GB/s of free transfer capacity on the bus when the
> > system is under load (transfers from main memory to the CPU also
> > occupy the bus, so there is dead time before you can put more data
> > on it). Usually you want more than 60 FPS rendering. If you divide
> > your spare capacity by the frame rate you get between 10 MB and
> > 25 MB per frame! That's not a whole lot; if you push it you might
> > get to 50 MB per frame, but that's it. Even very high-end systems
> > won't let you put more than 3 GB/s on the bus when not idle, and
> > 50 MB is very little data when you talk about graphics. Often the
> > geometry you want to render alone occupies a couple hundred
> > megabytes.
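To make that division concrete, here is the arithmetic as a quick Python check (the 1.5 GB/s and 60 FPS figures are just the ones quoted above; the real numbers vary by system):

```python
# Rough per-frame bus budget: spare bandwidth divided by target frame rate.
spare_bandwidth = 1.5e9  # bytes/sec of spare bus capacity (figure from above)
target_fps = 60

budget_per_frame = spare_bandwidth / target_fps
print(budget_per_frame / 1e6)  # 25.0 MB per frame
```

At the low end of the quoted range (0.5 GB/s) the budget shrinks to about 8 MB per frame, which is why preloading matters so much.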
>
> > Graphics cards these days are practically their own (super) computer.
> > They have their own backplane/bus (typically at clock rates and bit-
> > widths far exceeding PCI), they have their own high speed memory
> > (VRAM) and they have their own processors (the GPU cores). So the
> > solution to avoid CPU<->GPU bus transfer per frame is to preload as
> > much of your data as you can into VRAM. To facilitate this there is
> > a variety of buffers for different purposes (although they tend to
> > become general-purpose buffers).
>
> > There are several buffer concepts; they do not necessarily map to
> > specific functions, but rather to ways of using those functions.
>
> > - Texture: This is the oldest buffer of all. It has its own API and
> > you use it to put a chunk of texture data into VRAM. The reason it's
> > old is that classically textures were the bus bottleneck (before the
> > geometry explosion of 1995-2000).
> > - VBO: Vertex Buffer Object. The idea is to preload (possibly
> > generic) buffers with the data you want, and at frame render time
> > bind these buffers and issue the draw calls (which are the same as
> > for unbuffered array rendering). The difference is in what OpenGL
> > does: it uses its own VRAM copy of the data instead of waiting for
> > the data to arrive on the bus.
> > - FBO: Framebuffer Object. This buffer became necessary because it
> > is desirable to capture rasterized output into a texture for various
> > post-processing/computing effects, and not be restricted to screen
> > dimensions. It has its own API and is less a buffer than a way to
> > render into textures.
> > - PBO: Pixel Buffer Object. Very similar to VBOs, but uses a
> > different enumerant (GL_PIXEL_PACK_BUFFER_ARB). On ATI cards you can
> > use a PBO as a texture (or framebuffer attachment); on NVIDIA cards
> > you can use it to copy texture data into that buffer. This is useful
> > for a technique called RTVBO (render to vertex buffer object), a way
> > to get rasterizing-stage calculations into a geometry-containing
> > buffer.
> > - VAO: Vertex Array Object. Similar to the FBO, it does not
> > represent its own buffer but rather is a state definition binding a
> > variety of OpenGL state together so it can be set with one bind
> > call. It's generally not as useful as the other buffers, with one
> > exception: if you issue a lot of draw calls it can save quite a bit
> > of time (presumably because the driver can optimize changing its
> > state better than you can by issuing the individual state changes).
> > - TFO: Transform Feedback Object. This one holds its own data and
> > has its own API. The idea is to capture geometry *before* it is
> > rasterized (just before the fragment shader) into a buffer (and
> > count how many primitives were captured). This is useful for
> > computation that happens at the geometry stages which you either
> > want to perform less often than once per frame, or whose results
> > you reuse in the same frame in a second geometry pass.
> > - Uniform Buffers: This kind of buffer allows you to pass a uniform
> > array of values into a shader without incurring bus transfer; the
> > values for the uniform live in VRAM.
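To illustrate what "preload once, draw cheaply" looks like from Python: the interleaved-layout packing below is plain ctypes and runs anywhere, while the GL upload calls are shown only as comments since they need a live context (e.g. a pyglet window). The vertex data is made up for the example:

```python
import ctypes

# Interleaved layout: x, y, z position followed by u, v texcoord per vertex.
FLOATS_PER_VERTEX = 5
vertices = [
    #  x,    y,    z,   u,   v
    (-1.0, -1.0, 0.0, 0.0, 0.0),
    ( 1.0, -1.0, 0.0, 1.0, 0.0),
    ( 0.0,  1.0, 0.0, 0.5, 1.0),
]

# Flatten into one ctypes float array, ready for a single glBufferData upload.
flat = [component for vertex in vertices for component in vertex]
buffer_data = (ctypes.c_float * len(flat))(*flat)

# Byte stride between consecutive vertices, as passed to glVertexAttribPointer.
stride = FLOATS_PER_VERTEX * ctypes.sizeof(ctypes.c_float)

# With a live context (pyglet.gl or PyOpenGL) the one-time upload is roughly:
#   glBindBuffer(GL_ARRAY_BUFFER, vbo)
#   glBufferData(GL_ARRAY_BUFFER, ctypes.sizeof(buffer_data), buffer_data,
#                GL_STATIC_DRAW)
# After that, per-frame rendering only binds and draws; no bus transfer.

print(stride)                      # 20 bytes per vertex
print(ctypes.sizeof(buffer_data))  # 60 bytes for 3 vertices
```

The same packing pattern applies to the other buffer types; only the binding target and the shader side change.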
>
> > I think you can see the common theme of this jungle of buffers:
> > avoid per-frame CPU work and bus transfer. There was some work from
> > the superbuffers group at Khronos to unify all that into a single
> > buffer/API, but it hasn't gotten through yet (and perhaps never
> > will).
>
> > There are yet more concepts that help you avoid per-frame CPU work:
> > - Instancing: Comes in various flavors; the aim is to render the
> > same geometry with different parameters many times from a single
> > draw call.
> > - Deferred shading/lighting: Avoid having to update buffers per
> > frame with CPU-computed light information; implement per-pixel
> > lighting by rasterizing light bounding volumes into an already
> > rendered scene.
> > - Post-processing effects in general.
> > - Texture/FBO ping-pong: A variation of post-processing: ping-pong
> > between two (or cycle between more than two) textures attached to an
> > FBO to perform some general computation on the GPU (like a blur
> > filter, erosion effects, edge detection, etc.).
> > - Geometry ping-pong: Switch rendering between two (or cycle between
> > more than two) transform feedback buffers to perform some geometry
> > buildup and computation (like instancing and fractals or L-systems).
> > - GPU skinning: Pass all parameters required for skinning (bone
> > matrices, weights, mesh) into the pipeline from buffers, and use the
> > vertex shader to transform the mesh according to each vertex's
> > weights for the relevant matrices.
> > - Multi-pass alpha/coverage post-processing: Gives you nice alpha
> > blending without having to order your primitives on the CPU before
> > rendering (see GPU Pro).
> > - Raycasting into volumetric data from shaders: Various effects
> > like accentuated fog/clouds, ambient occlusion for volumetric data
> > on the GPU, displacement mapping, contour reconstruction, etc.
> > - And many more... (it's mind-boggling how many different things
> > people do.)
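Jonathan's earlier idea (all object transforms in one big dynamic VBO, one draw call) is essentially the instancing flavor of this. A rough sketch of the CPU side in plain Python; the GL calls appear only as comments since they need a live context, and `model_matrix` is just an illustrative helper, not part of any library:

```python
import ctypes
import math

def model_matrix(tx, ty, tz, angle):
    """Column-major 4x4: rotation about Z followed by a translation."""
    c, s = math.cos(angle), math.sin(angle)
    return [
        c,   s,   0.0, 0.0,   # column 0
        -s,  c,   0.0, 0.0,   # column 1
        0.0, 0.0, 1.0, 0.0,   # column 2
        tx,  ty,  tz,  1.0,   # column 3
    ]

# One matrix per instance, all flattened into a single dynamic buffer.
instances = [model_matrix(float(i), 0.0, 0.0, 0.1 * i) for i in range(100)]
flat = [value for matrix in instances for value in matrix]
matrix_buffer = (ctypes.c_float * len(flat))(*flat)

# With a context, matrix_buffer would back four vec4 vertex attributes with
# glVertexAttribDivisor(loc, 1), and the whole batch renders with one call:
#   glDrawElementsInstanced(GL_TRIANGLES, index_count, GL_UNSIGNED_INT,
#                           0, len(instances))

print(len(flat))  # 1600 floats: 100 instances x 16
```

One upload per frame (or less, if objects are static) replaces a hundred separate draw calls with a hundred matrix uniform updates in between.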
>
> > --
> > You received this message because you are subscribed to the Google Groups 
> > "pyglet-users" group.
> > To post to this group, send email to [email protected].
> > To unsubscribe from this group, send email to 
> > [email protected].
> > For more options, visit this group at
> > http://groups.google.com/group/pyglet-users?hl=en.
