Re: NPOT textures, difference between GL and GLES (was: Re: [clutter] [cogl] Texture slices: what is the waste?)

2010-04-27 Thread Robert Bragg
Excerpts from Brian J. Tarricone's message of Wed Apr 21 09:59:34 +0100 2010:
 Hi Robert,
 
 On 04/20/2010 04:11 PM, Robert Bragg wrote:
  
  It might be worth investigating if your GLES platform supports this
  extension:
  http://www.khronos.org/registry/gles/extensions/OES/OES_texture_npot.txt
 
 I'm a bit new to OpenGL (and GLES), so please bear with me if what I'm
 asking is silly/obvious.
 
 I'm working on an embedded platform that has EGL and GLES 2.0.  Our
 driver reports it supports GL_ARB_texture_rectangle.  I know cogl's GL
 backend will make use of texture_rectangle, but, when I was poking into
 cogl's GLES backend (during the 1.0.x time frame), it looked like there
 wasn't support for texture_rectangle.

Oh interesting, I haven't seen a GLES driver with the
ARB_texture_rectangle extension before, but it's plausible that your
hardware isn't capable of supporting all the features of texture_npot
while still having enough for texture_rectangle.

I'm not really sure what the texture_rectangle extension gives you
beyond what core GLES 2.0 requires anyway. Core GLES 2.0 supports NPOT
textures, but without mipmapping and with limited repeat modes (I think
just CLAMP_TO_EDGE). The texture_rectangle extension also doesn't
support mipmapping and has limited repeat modes. The big difference with
texture rectangle, which might explain why they added the extension, is
that you don't use normalized texture coordinates to sample from them.

Normal TEXTURE_2D textures have coordinates in the range [0,1], but
TEXTURE_RECTANGLE textures use coordinates in the range [0,texture_width]
and [0,texture_height].

It can sometimes be convenient to use TEXTURE_RECTANGLE textures in
shaders since it's simpler to use them as lookup tables. Using
TEXTURE_2D textures involves calculating a floating point delta to allow
indexing of specific texels in your texture.
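For example, to index texel i of a lookup table that is w texels wide
(plain C just for illustration; these helpers aren't Cogl API):

  static float
  lookup_coord_2d (int i, int w)
  {
    /* TEXTURE_2D: scale into [0,1] and offset to the texel centre */
    return (i + 0.5f) / (float) w;
  }

  static float
  lookup_coord_rect (int i)
  {
    /* TEXTURE_RECTANGLE: coordinates are just texel positions */
    return i + 0.5f;
  }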

 
 Is it a mistake that our GPU vendor is supporting texture_rectangle on
 GLES?  Should we ask them to support OES_texture_npot instead or in
 addition?  (It looks like texture_npot has fewer restrictions than
 texture_rectangle, anyway, which is nice.)  Or is it normal and accepted
 to sometimes see texture_rectangle on GLES implementations?

I don't know if it's a mistake, but unless you want a texture that
doesn't use normalized texture coordinates I don't think it buys you
anything over the basic support for NPOT textures that GLES 2.0 already
comes with.

The Cogl OpenGL backend doesn't really support texture_rectangle; it just
allows you to use cogl_texture_new_from_foreign with TEXTURE_RECTANGLE
and it understands that such textures don't have normalized coordinates.
I would expect that for GLES 2.0 you could also use
cogl_texture_new_from_foreign to create NPOT TEXTURE_2D textures and
avoid having to use texture_rectangle. The problems would be that you
can't enable mipmapping for such textures or use anything but the
CLAMP_TO_EDGE repeat mode. (Note: if you aren't using Clutter master you
don't have explicit control over the repeat modes, so you would need to
patch Cogl to work around this.)
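
Roughly something like this (only a sketch; I'm assuming the current
cogl_texture_new_from_foreign () signature, pixel_data is a hypothetical
buffer of RGBA data, and error handling plus restoring the texture
binding afterwards are left out):

  GLuint gl_tex;
  CoglHandle tex;

  glGenTextures (1, &gl_tex);
  glBindTexture (GL_TEXTURE_2D, gl_tex);
  /* GLES 2.0 NPOT limitations: no mipmap filters, CLAMP_TO_EDGE only */
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, 190, 100, 0, /* NPOT size */
                GL_RGBA, GL_UNSIGNED_BYTE, pixel_data);

  tex = cogl_texture_new_from_foreign (gl_tex, GL_TEXTURE_2D,
                                       190, 100, /* width, height */
                                       0, 0,     /* no x/y waste */
                                       COGL_PIXEL_FORMAT_RGBA_8888);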

Getting the same level of support for texture_rectangle as we have for
OpenGL should be easy if you want to experiment with it, since most of
the CoglTexture code is shared between GL and GLES. You can search for
the #if HAVE_COGL_GL guards in cogl-texture-2d-sliced.c and remove them,
then tweak _cogl_texture_driver_allows_foreign_gl_target to allow the
COGL_TEXTURE_RECTANGLE_ARB target. After that you'll have to use
cogl_texture_new_from_foreign, which means creating the texture manually
with raw GL calls, and you have to be very careful to restore any GL
state you modify while creating the texture so you don't confuse any
state caching Cogl does internally. The only reason we have
texture_rectangle support for OpenGL is to support texture_from_pixmap
on some limited GPUs.

If possible I would ask the vendor if they can support OES_texture_npot,
since that would make your life *much* easier. It's very possible,
though, that your hardware can't support the extension.

Adding better support to Cogl for the limited NPOT textures that GLES 2
exposes could be done something like this:

* Make the ensure_mipmaps vfunc for all texture backends return a boolean
  status so it may fail.
* Make the set_wrap_mode_parameters vfunc for all texture backends
  return a boolean status so it may fail.
* Patch the texture_2d backend with some #ifdef HAVE_COGL_GLES2 guards:
  * so that it always returns FALSE in the ensure_mipmaps vfunc.
  * so it returns FALSE in set_wrap_mode_parameters for anything but
    CLAMP_TO_EDGE.
* Patch cogl_texture_new_from_bitmap and cogl_texture_new_with_size
  with some #ifdef HAVE_COGL_GLES2 guards so we never try to create
  and return a _cogl_texture_2d directly.
* Add a cogl-texture-rectangle backend. This should basically be a copy
  of the cogl-texture-2d.c backend except with occurrences of TEXTURE_2D
  replaced by TEXTURE_RECTANGLE.
* Adapt cogl-texture-2d-sliced.c to be implemented in terms

Re: [clutter] [cogl] Texture slices: what is the waste?

2010-04-20 Thread Robert Bragg
Hi Alberto,

Excerpts from Alberto Mardegan's message of Fri Apr 16 20:11:03 +0100 2010:
 Hi all,
I'm implementing some optimizations to some cogl texture functions, 
 since they seem to have a considerable impact on my application 
 performance, and I've started with _cogl_texture_upload_to_gl() (GLES 
 backend).
 
 I added some debugging statements in there, and it seems that the 
 texture is never sliced in my case.
 So, I've implemented the optimization suggested by the FIXME comment, 
 that is avoid copying the bitmap to a temporary one. Things seem to work 
 fine, and definitely faster.

Excellent, thanks for taking a look at this.

 
 Before submitting this patch for review, though, I'd like to understand
 whether the code blocks introduced by the if ({x,y}_span->waste > 0)
 condition are also relevant in the single slice case, or if they can be
 omitted. I left them out and I'm not noticing any problems.

In short: yes, a sliced texture with only one slice can have waste...

  First, this is a multi-slice example with waste:

  |<------ Slice 0 ------>|<- Slice 1 ->|<- Slice 2 ->|
  |<------ POT size ----->|<- POT size >|<- POT size >|
  |<---------- user's texture size ---------->|<waste>|
  |-----------------------|-------------|-------------|
  |ooooooooooooooooooooooo|ooooooooooooo|ooooo|xxxxxxx|
  |ooooooooooooooooooooooo|ooooooooooooo|ooooo|xxxxxxx|
  |ooooooooooooooooooooooo|ooooooooooooo|ooooo|xxxxxxx|
  |ooooooooooooooooooooooo|ooooooooooooo|ooooo|xxxxxxx|
  |ooooooooooooooooooooooo|ooooooooooooo|ooooo|xxxxxxx|
  |-----------------------|-------------|-------------|
  o = user data; x = waste data
  A slice is an individual OpenGL texture object.

  But a single slice example could look like this:

  |<------- power of two size ------->|
  |<---- user texture size --->|waste |
  |-----------------------------------|
  |oooooooooooooooooooooooooooo|xxxxxx|
  |oooooooooooooooooooooooooooo|xxxxxx|
  |oooooooooooooooooooooooooooo|xxxxxx|
  |oooooooooooooooooooooooooooo|xxxxxx|
  |oooooooooooooooooooooooooooo|xxxxxx|
  |-----------------------------------|

The waste is basically used to pad the difference between the power of
two texture sizes and the size of the user's texture data.

When the difference would be too large, the user's texture data gets
spread across multiple GL textures (slices); the max_waste threshold
determines when we do this.

So, for example, if you try to load a texture 190 pixels wide, we first
determine that the nearest power of two size to fit it is 256, which
would leave 66 pixels of waste on the right. If that's larger than the
current max_waste threshold then, instead of loading the user's texture
into a 256 wide texture, we'd consider loading it into a 128 pixel
texture + a 64 pixel texture. This leaves 2 pixels of waste on the right
of the 64 pixel slice, which we'd expect to pass the max_waste
threshold.

If the max_waste threshold were greater than 66, though, we would simply
load the user's 190 pixel texture into one 256 pixel wide slice with 66
pixels of waste.
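
To make the arithmetic concrete, here's a simplified sketch of that
decision in plain C (this is not the actual Cogl implementation, and the
max_waste values are just picked for the example):

  #include <stdio.h>

  static int
  next_pot (int size)
  {
    int pot = 1;
    while (pot < size)
      pot *= 2;
    return pot;
  }

  static void
  print_slices (int width, int max_waste)
  {
    int remaining = width;

    while (remaining > 0)
      {
        int pot = next_pot (remaining);

        if (pot - remaining <= max_waste)
          {
            /* final slice: POT sized, padded with waste on the right */
            printf ("slice %d wide (%d waste)\n", pot, pot - remaining);
            remaining = 0;
          }
        else
          {
            /* full slice with no waste: largest POT <= remaining */
            printf ("slice %d wide (0 waste)\n", pot / 2);
            remaining -= pot / 2;
          }
      }
  }

  /* print_slices (190, 64)  -> slice 128 wide (0 waste), slice 64 wide (2 waste)
   * print_slices (190, 127) -> slice 256 wide (66 waste) */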

Note: the above examples only depict waste along the x axis with the
waste on the right, but it's also possible to have waste on the y axis
at the bottom.

Note: platforms fully supporting NPOT textures never need to slice
unless you upload textures larger than the GPU's texture size limits,
and even then the slices never have waste.

It might be worth investigating if your GLES platform supports this
extension:
http://www.khronos.org/registry/gles/extensions/OES/OES_texture_npot.txt

If so, it might be worth patching the GLES backend to check for this
and, when it's available, OR in the COGL_FEATURE_TEXTURE_NPOT flag.
(You could do this in _cogl_features_init in driver/gles/cogl.c.)
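
I.e. something along these lines (just a sketch; the exact variable
names used in _cogl_features_init may differ):

  /* in _cogl_features_init () ... */
  const char *gl_extensions = (const char *) glGetString (GL_EXTENSIONS);

  /* (a real patch should tokenize the extension string rather than
   * relying on a plain substring match) */
  if (gl_extensions &&
      strstr (gl_extensions, "GL_OES_texture_npot") != NULL)
    flags |= COGL_FEATURE_TEXTURE_NPOT;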

I hope that helps,
kind regards,
- Robert

 
 Ciao,
Alberto
 
-- 
Robert Bragg, Intel Open Source Technology Center
-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Using cogl without clutter

2010-04-06 Thread Robert Bragg
Excerpts from Oscar Lazzarino's message of Tue Apr 06 10:59:11 +0100 2010:
 Hi,
 
 would it be possible to use cogl without clutter, on a GL context
 created with gtkglext? If so, a small example or some hints would be
 very useful.

Hi,

The long term goal is for Cogl to become a standalone 3D graphics API
and we are incrementally working towards this goal.

Cogl will also have a window system abstraction (only as far as
framebuffer management is concerned; I don't mean anything relating to
input events etc.) that could make it possible to integrate tightly with
GTK in one way or another.

Integrating with a GL context not owned by Cogl adds additional
complexity but theoretically it would be possible to create a Cogl
winsys that allowed this.

Sorry that doesn't really help you, but you might be interested to know
it may be possible one day.

kind regards,
- Robert

 
 Thanks
 
 O.
-- 
Robert Bragg, Intel Open Source Technology Center
-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Questions about combining Clutter with an OpenGL 3D game

2010-02-08 Thread Robert Bragg
 ();//restore original modelview
cogl_set_projection (); //restore clutter's projection
GameGUI::paint ()

Note: because you are modifying the projection matrix you will have
difficulties if you try to transform input coordinates relative to
actors inside your game actor using the Clutter APIs. You should be able
to ignore this problem though and simply capture input events on the
whole game actor and deal with all game input manually. I.e. don't
expect input to work correctly for ClutterCurvedGst actors and so don't
make them reactive.

 
 From what you describe it sounds like I could do all this with Cogl
 and a custom Clutter actor.  Knowing these details do you foresee any
 problems?
The video stuff sounds trickiest to me, and playing with the projection
matrix will confuse Clutter's input transformations inside such actors,
but I think overall it's doable and a better way to go than raw OpenGL.

If you have any difficulties with using Cogl please say and I'll try to
help. The reference manual can be found here:
http://www.clutter-project.org/docs/cogl/1.0/ but we don't have any
tutorials / guides for newcomers yet.

kind regards,
- Robert

 
 On Thu, Jan 28, 2010 at 2:02 PM, Robert Bragg b...@o-hand.com wrote:
  Excerpts from Adam B's message of Thu Jan 28 05:55:05 + 2010:
  Hello all,
 
  I'm starting work on a puzzle/adventure game which will use OpenGL.  I
  think Clutter might be able to help me with two things:
 
  1) I need to build a pleasing 2D GUI to allow players to save/load
  games and configure options etc.
  2) I need to render video to an OpenGL texture.  (It seems that
  Clutter-gst can do this quite well)
 
  My concern is that, Clutter might not like being a 2nd class citizen
  in a larger 3D OpenGL game.  For instance:
 
  We do support breaking out into raw OpenGL to some extent, though we
  wouldn't really recommend it.
 
  cogl_begin_gl() and cogl_end_gl() can be used to delimit where you are
  manually using OpenGL and they will basically try and normalize the
  OpenGL state and flush any of Cogl's cached OpenGL state. The important
  limitations of this mechanism are:
  1) It's your responsibility to save and restore any OpenGL state that
  you need to change while rendering your puzzle game.
  2) We don't currently provide any guarantees about how we normalize the
  state, only that it's consistent each time you call cogl_begin_gl (). We
  can potentially improve this though:
  - Currently we just set up a simple CoglMaterial and flush the state to
   OpenGL as a way to normalize the state; the problem is that we may
   change how CoglMaterial is implemented, which will change how the state
   is normalized.
  - An alternative would be to guarantee that we restore everything back
   to OpenGL defaults; this would be quite difficult for us to support
   but could potentially be done. The OpenGL state is owned and cached by
   different Cogl components, but we could potentially add a
   _cogl_component_normalize_gl_state () to each component to help
   do this in a relatively maintainable way.
 
  Aside from cogl_{begin,end}_gl we don't provide an easy way to
  integrate with an existing game engine (probably the best way at the
  moment would be to start with a Clutter backend for the game engine),
  but if it's something simple you are writing from scratch then I would
  recommend that you just leave Clutter responsible for creating the GL
  context.
 
  If you create a custom Clutter actor and add it to the stage then you
  can use the paint function as the point where you break out and render
  your game scene.
 
 
  1) When the user is playing I need full control of the OpenGL
  pipeline.  I need to setup the perspective matrix, rotate/translate
  according to mouse movements , setup textures, draw geometry, etc.
  More bluntly, Clutter needs to stay out of my way.
 
  It might not expose all the features you want yet, but have you
  considered using the Cogl API instead of OpenGL? Cogl gives direct
  control over the perspective/modelview matrices, it gives access to
  offscreen framebuffers, vertex buffers, blend modes, loading textures,
  controlling texture combining and a clipping API.
 
  It sadly doesn't yet have lighting support, and some other fairly basic
  things, but it may be feasible to add missing features to Cogl depending
  on how much you need.
 
 
  2) I need to render video onto a rectangle face and be able to place
  that face precisely using 3D coordinates.
 
  ClutterGst can give you a ClutterActor that displays video and you can
  position actors in 3D. The difficulty may be if you want this right in
  your game scene which has a custom projection matrix? I can think of
  some approaches but they all sound quite hairy. Need to think a bit
  about this one.
 
 
  Is this level of control feasible with Clutter?
 
  I think you will come across some difficulties and probably some Clutter
  bugs, but it's potentially doable. If you could use Cogl instead

Re: [clutter] Questions about combining Clutter with an OpenGL 3D game

2010-01-28 Thread Robert Bragg
Excerpts from Adam B's message of Thu Jan 28 05:55:05 + 2010:
 Hello all,
 
 I'm starting work on a puzzle/adventure game which will use OpenGL.  I
 think Clutter might be able to help me with two things:
 
 1) I need to build a pleasing 2D GUI to allow players to save/load
 games and configure options etc.
 2) I need to render video to an OpenGL texture.  (It seems that
 Clutter-gst can do this quite well)
 
 My concern is that, Clutter might not like being a 2nd class citizen
 in a larger 3D OpenGL game.  For instance:

We do support breaking out into raw OpenGL to some extent, though we
wouldn't really recommend it.

cogl_begin_gl() and cogl_end_gl() can be used to delimit where you are
manually using OpenGL and they will basically try and normalize the
OpenGL state and flush any of Cogl's cached OpenGL state. The important
limitations of this mechanism are:
1) It's your responsibility to save and restore any OpenGL state that
you need to change while rendering your puzzle game.
2) We don't currently provide any guarantees about how we normalize the
state, only that it's consistent each time you call cogl_begin_gl (). We
can potentially improve this though:
- Currently we just set up a simple CoglMaterial and flush the state to
  OpenGL as a way to normalize the state; the problem is that we may
  change how CoglMaterial is implemented, which will change how the state
  is normalized.
- An alternative would be to guarantee that we restore everything back
  to OpenGL defaults; this would be quite difficult for us to support
  but could potentially be done. The OpenGL state is owned and cached by
  different Cogl components, but we could potentially add a
  _cogl_component_normalize_gl_state () to each component to help
  do this in a relatively maintainable way.

Aside from cogl_{begin,end}_gl we don't provide an easy way to
integrate with an existing game engine (probably the best way at the
moment would be to start with a Clutter backend for the game engine),
but if it's something simple you are writing from scratch then I would
recommend that you just leave Clutter responsible for creating the GL
context.

If you create a custom Clutter actor and add it to the stage then you
can use the paint function as the point where you break out and render
your game scene.
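
Very roughly, something like this (a sketch assuming you've declared a
MyGameActor ClutterActor subclass; my_game_render () is a hypothetical
function that draws your scene and restores any GL state it changes):

  static void
  my_game_actor_paint (ClutterActor *actor)
  {
    cogl_begin_gl ();

    /* raw GL calls go here; save and restore any state you touch */
    my_game_render ();

    cogl_end_gl ();
  }

  static void
  my_game_actor_class_init (MyGameActorClass *klass)
  {
    CLUTTER_ACTOR_CLASS (klass)->paint = my_game_actor_paint;
  }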

 
 1) When the user is playing I need full control of the OpenGL
 pipeline.  I need to setup the perspective matrix, rotate/translate
 according to mouse movements , setup textures, draw geometry, etc.
 More bluntly, Clutter needs to stay out of my way.

It might not expose all the features you want yet, but have you
considered using the Cogl API instead of OpenGL? Cogl gives direct
control over the perspective/modelview matrices, it gives access to
offscreen framebuffers, vertex buffers, blend modes, loading textures,
controlling texture combining and a clipping API.

It sadly doesn't yet have lighting support, and some other fairly basic
things, but it may be feasible to add missing features to Cogl depending
on how much you need.

 
 2) I need to render video onto a rectangle face and be able to place
 that face precisely using 3D coordinates.

ClutterGst can give you a ClutterActor that displays video and you can
position actors in 3D. The difficulty may be if you want this right in
your game scene which has a custom projection matrix? I can think of
some approaches but they all sound quite hairy. Need to think a bit
about this one.

 
 Is this level of control feasible with Clutter?

I think you will come across some difficulties and probably some Clutter
bugs, but it's potentially doable. If you could use Cogl instead of
OpenGL that would avoid the awkward state management issues, but that may
not be feasible for your game.

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center
-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Buffering updates for tfp

2010-01-11 Thread Robert Bragg
Excerpts from Robert Bragg's message of Mon Jan 11 15:02:36 + 2010:
 Excerpts from Jason Tackaberry's message of Tue Jan 05 15:40:16 + 2010:
  Hi,
  
  I'm currently using ClutterGLXTexturePixmap to redirect video.  Rather
  than having updates from the redirected window be rendered immediately,
  I'd like to instead buffer 4 or 5 frames into separate COGL textures,
  and then replace the underlying COGL texture of a ClutterTexture using
  my own timing mechanism.
  
  From what I can see, in order to accomplish this, it looks like I'm
  going to be reimplementing most of the tfp code inside clutter in my
  application.  I think the basic recipe would look like:
  
   1. Create a ClutterTexture to hold the video contents and add it to
  the stage.
   2. XCompositeRedirectWindow() and XCompositeNameWindowPixmap() to
  fetch the pixmap.
   3. glXCreatePixmap() on the above pixmap to fetch glx_pixmap.
   4. Create the desired number of buffered frames with
  cogl_texture_new_with_size().
   5. Upon a damage event, grab an available COGL texture created in
  #4 and glBindTexture(), then glXBindTexImageEXT() on the
  glx_pixmap.
   6. A separate pipeline responsible for timing the buffered frames
  takes the oldest updated texture from #5 call
  clutter_texture_set_cogl_texture() on the ClutterTexture from
  #1.
  
  Before falling too far down the rabbit hole, I wanted to ask those
  better informed:
  
   1. Is the above approach at all tenable?
   2. Is there any way I can still make use of ClutterGLXTexturePixmap
  but extend it in some way to accomplish the same goal?
 
 
 Hi Jason,
 
 In principle your outline looks sensible to me, but the alarm bells
 really sound at #4 if you want to do the glBindTexture outside of Cogl.
er, I mean #5 oops

 
 We are already in a bit of a mess with Clutter doing its own GL calls
 outside of Cogl (most specifically glBindTexture). We can only support
 applications breaking out into raw GL under very constrained conditions.
 (ref cogl_{begin,end}_gl and cogl_flush()) Manually binding textures
 will very likely not work with future versions of Clutter and Cogl as we
 know we need to do a better job of avoiding texture binding costs and
 could well end up caching this state in Cogl at some point; possibly
 soon as part of Neil's texture atlas work.
 
 I'm currently aiming to implement a CoglTexturePixmap subclass of
 CoglTexture (hopefully fairly soon) so we can directly support
 texture_from_pixmap in Cogl. This should give us APIs something like
 cogl_texture_new_from_pixmap () and cogl_texture_pixmap_update () which
 I *think* might enable you to do what you want without the manual GL
 calls.  At this point developers can either use ClutterTexture + this
 new cogl_texture_pixmap API directly or we'll possibly also add a new
 ClutterTexturePixmap actor which will also support the XSHM fallbacks
 currently handled by ClutterX11TexturePixmap and deprecate
 ClutterX11TexturePixmap and ClutterGLXTexturePixmap which have become
 something of a mess to maintain.
 
 If you wanted to take a look at what's involved in this and perhaps even
 take a stab at starting it, I'd recommend taking a look at Neil's
 more-texture-backends branch (this sort of work should be based off that
 branch until it gets merged to master) as it has changes to how
 CoglTextures are subclassed, using a vtable instead of the messy switch
 statements we currently have and gives an example of adding a new
 texture subclass.
 
 Adding a CoglTexturePixmap subclass should be straightforward, and it
 sounds like you are familiar with tfp, so implementing the
 _new_from_pixmap() and _pixmap_update() APIs should be ok too. The only
 awkward bits I can think of a.t.m are that it would be the first piece
 of Cogl code to interact with GLX and currently we don't expose public
 API for any CoglTexture subclasses. We are slowly migrating the
 GLX/EGL/WGL etc window system code down from the Clutter backends to
 Cogl, so adding interaction with GLX is ok; I think we just need a
 (private for now) mechanism to inform Cogl of the Display that Clutter
 is using.
 
 Please keep us informed if you start looking into this so we don't
 duplicate effort, and feel free to hassle me (#clutter irc nick =
 rib) or Neil Roberts (irc nick = bpeel).
 
 I hope that helps,
 kind regards,
 - Robert
  
-- 
Robert Bragg, Intel Open Source Technology Center
-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Offscreen Clutter Stage

2010-01-05 Thread Robert Bragg
Hi Bob,

You picked a time when everyone was on holiday so sorry you didn't get a
reply sooner.

A while back we did have some primitive offscreen stage support in the
backends, which only the GLX backend tried to implement via PBuffers.
This was removed though because it was a messy solution, and at the
moment we are trying to reduce the complexity of clutter backends so we
can start figuring out how we can migrate bits down into Cogl.

The current plan for supporting offscreen stages in a more portable way
(the same code for GLX/EGL/WGL etc) is to use framebuffer objects
instead of window system specific features such as PBuffers. For sharing
these between processes we would aim to use GEM/TTM handles eventually.

Cogl's offscreen rendering support recently had a first pass overhaul
giving us a CoglFramebuffer abstract base class that is subclassed to
implement CoglOffscreen (and eventually CoglOnscreen) objects. The plan
is to continue extending the capabilities of these objects/abstractions
and expose more ARB/EXT/OES_framebuffer_object features this way.

All Clutter stages will eventually be CoglFramebuffers, and at this
point, hopefully, Clutter won't need to care if it's a CoglOnscreen or
CoglOffscreen framebuffer.

Assuming you are only dealing with X Window stages (you only mentioned
GLX and EGLX), then for an npapi plugin I wonder if you could redirect
the X window of a stage offscreen using the X Composite extension?
You can get the stage window XID via clutter_x11_get_stage_window().
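For example, something like this (an untested sketch; extension checks
and error handling are omitted):

  #include <clutter/clutter.h>
  #include <clutter/x11/clutter-x11.h>
  #include <X11/extensions/Xcomposite.h>

  static Pixmap
  redirect_stage_offscreen (ClutterStage *stage)
  {
    Display *xdpy = clutter_x11_get_default_display ();
    Window xwin = clutter_x11_get_stage_window (stage);

    XCompositeRedirectWindow (xdpy, xwin, CompositeRedirectManual);

    /* the named pixmap now tracks the stage contents, so you could
     * copy from it into the plugin's offscreen pixbuf */
    return XCompositeNameWindowPixmap (xdpy, xwin);
  }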

I hope that helps to some extent, though I guess it's a bit late.

If you are interested in the work to migrate parts of the Clutter
backends down to Cogl, I can look at publishing the crude stabs I've made
at it so far in a branch so you can track the progress and perhaps
provide some feedback on it.

Hopefully we will have a wiki for Clutter soon so it will be easier to
see what ideas are being thrown around for Clutter and Cogl.

kind regards.
- Robert

Excerpts from Bob Murphy's message of Mon Dec 21 05:15:04 + 2009:
 Hi all,
 
 I'm on a team writing a windowless npapi (browser) plug-in. Such
 plug-ins don't let you have a window to draw into, just an offscreen
 pixbuf that the browser blits to the main browser window.
 
 We also need to use Clutter as an underlying technology, but we can't  
 offer an on-screen window for Clutter to use for a stage.
 
 I gather the ClutterOffscreen container proposed for Bug 1573 requires  
 an on-screen stage window - if so, it won't work for us.
 
 So I'm thinking of writing a ClutterOffscreenStage. This would  
 subclass ClutterStage, accept normal ClutterActors as children, but  
 render to an offscreen pixbuffer (which can be platform-specific).
 
 Oh, and whatever I do needs to work on GLX and EGLX in the next two  
 weeks. :-) Fortunately, I have full time to work on that, and lots of  
 good coffee available.
 
 So I'd be very grateful for advice, reality-checks, comments,  
 alternatives, suggestions, etc. Anybody who helps will get a hearty  
 handshake and a free beer, whiskey, coffee, tea, soft drink, or other  
 beverage of their choice at the next FOSS conference we're both at.
 
 Thanks,
 Bob
 
 P.S. Here are some things I'm thinking about this so far, and would  
 appreciate corrections and advice from people who know more than I do:
 
 It seems to me that an offscreen stage is necessary to build a  
 completely offscreen scene graph, without any on-screen window, and  
 render it into an offscreen pixbuf. Is this correct? Or is there  
 another, simpler way to accomplish those goals?
 
 I'd like to do this without doing any backend-specific code. But it  
 doesn't look like that's feasible. Is that right?
 
 Many ClutterStage implementation details are done in the back-ends. So  
 ClutterOffscreenStage would need to do the same thing. Is that right?
 
 Do any of the ClutterActors make assumptions, like about being
 on-screen, that would cause problems if they were embedded in a
 ClutterOffscreenStage?
 
-- 
Robert Bragg, Intel Open Source Technology Center
-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Immediate Mode

2009-11-24 Thread Robert Bragg
 state. One such piece of
derived state is a combined modelview and projection matrix. Your glGet
request may not depend on that matrix but mesa may update it anyway. So
code that effectively does:
glRotate ()
glGet GL_XYZ
glTranslate ()
glGet GL_XYZ
glTranslate ()
glGet GL_XYZ
glDrawElements()
Would update the combined matrix at each glGet call even if unneeded.
(This happened because each time the modelview was changed this derived
state got marked as dirty, and glGetFloatfv calls mesa_update_state()
before doing anything.) We saw this kind of pattern in the past because
the modelview matrix would be continually modified as we traversed the
scene graph painting Clutter actors, and then various cogl calls would
request uncached state directly from OpenGL at each point in the graph.

Anyhow, I certainly welcome any investigation into Cogl performance,
and would be interested to hear about your findings and any ideas for
improving Cogl.

kind regards,
- Robert


Cheers
John

On Mon, Nov 23, 2009 at 10:25 AM, Robert Bragg b...@o-hand.com wrote:

 Excerpts from john delahey's message of Wed Nov 18 04:49:46 + 2009:

 Hello
 
 When using the OpenGL backend, can Clutter render in immediate mode?
 That is, send all OpenGL commands to the GPU instead of backing them
 with COGL. Are there fundamental reasons why this can't be done?

 Hi John,

 We could potentially consider adding API to disable the Cogl journal,
 which batches a lot of draw calls, or we could potentially make the
 journal pluggable. Aside from GL draw calls though there are lots of
 other GL calls which we defer. E.g. glEnable calls or glBindXYZ calls
 tend to get deferred until the last moment so we avoid redundant state
 changes.

 Can you be more specific about the problem you are facing? For example
 there are the cogl_flush() and cogl_begin_gl/cogl_end_gl APIs that
 may be of some help.

 Since I'm assuming you're asking this because you're trying to break out
 into raw GL, I'll note one thing that we can't support and that is
 interleaving of Cogl and GL calls done in such a way that you are trying
 to affect the behaviour of Cogl via manual GL calls. We can only support
 interleaved OpenGL calls that diligently save and restore the GL state
 that they change, before returning to Cogl calls, and even then it's a
 risky business and we'd much rather see proposals to improve the Cogl
 API if possible.

 kind regards,
 - Robert

-- 
Robert Bragg, Intel Open Source Technology Center
-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



[clutter] [ANNOUNCE] Clutter 1.0.6 (core) - stable release

2009-09-22 Thread Robert Bragg
Hi all,

I get to break everything while Emmanuele is on holiday, so here goes:

Clutter 1.0.6 is now available for download at:

http://www.clutter-project.org/sources/clutter/1.0

MD5 Checksums:

477e9093b2869f961e7295dab7b92d6b  clutter-1.0.6.tar.bz2
85dadedcd2c77b6df851ae53a53cf793  clutter-1.0.6.tar.gz

Clutter is an open source software library for creating fast, visually
rich and animated graphical user interfaces. Clutter is licensed under
the terms of the GNU Lesser General Public License version 2.1.

Clutter currently requires:

  * GLib >= 2.16.0
  * Cairo >= 1.6
  * Pango >= 1.20
  * OpenGL >= 1.4, OpenGL|ES 1.1 or OpenGL|ES 2.0
  * GLX, SDL, WGL, Quartz or an EGL Implementation

Notes
-----

o This is the fourth stable release of the 1.0.x cycle.

o This version is parallel installable with Clutter 0.8.

o Installing this version will overwrite the files from the
  installation of a git clone of the current development
  branch.

o Bugs should be reported to: http://bugzilla.o-hand.com

What's new in Clutter 1.0.6
---------------------------

o Various documentation improvements including a new ClutterPath migration
  guide, a Glossary and objects index

o A couple of new unit tests for: initial actor sizing, preferred actor
  size and ClutterGroup depth sorting

o Fix ClutterGroup depth sorting

o Fix double to float type conversions in ClutterScript and update
  test-script.json so it doesn't refer to old Actor types such as
  ClutterLabel.

o Do not attempt to free empty ClutterModel column names

o Fix the BlendString parser so numbers can be part of function names
  allowing use of DOT3_RGB

o Fix the parsing of special signal:: prefixed property names available
  when using the ClutterAnimation vararg API

o Adds a use-markup property getter for ClutterText

o Account for clock roll backs between frames so timelines don't simply
  hang

o Disable mipmap filters before checking framebuffer object completeness
  since some drivers consider texture objects incomplete if a mipmap
  filter is set but the mipmap data hasn't yet been uploaded.

o Various Makefile fixes, including fixes for the %.c: %.glsl codegen rules
  for GLES2, use AM_SILENT_RULES for automake >= 1.11, use a shared set of
  defines for silencing make rules (Makefile.am.silent) and clean up some
  misuse of CLUTTER_MAJORMINOR.

o Fix cogl_clear so the alpha component isn't ignored

o Fix for the GLES 2.0 Cogl backend and the eglx Clutter backend


Full list of changes since 1.0.4:
---------------------------------

Damien Lespiau (1):
[docs] Clutter's model implementation is called ClutterListModel

Emmanuele Bassi (20):
[build] Clean up the eglnative and fruity Makefile.am
[build] Split out the custom silent rules
[build] Update the configure.ac documentation
[container] Use a 1:1 mapping between child and ChildMeta
[docs] Add a Path migration guide
[docs] Add fixxref for Cairo symbols
[docs] Add more collateral documentation
[docs] Small annotation fixes
[docs] texture_polygon() is called polygon()
[gitignore] Add test-preferred-size
[model] Do not attempt to free empty column names
Post-release bump to 1.0.5
[script] Clean up the ad hoc parsing code in Actor
[script] Convert double to float when parsing
[tests] Add a Group actor unit
[tests] Add initial sizing conformance test suite
[tests] Add preferred size conformance test unit
[tests] Update the script test JSON
[timeline] Account for clock roll backs between frames
Use AM_SILENT_RULES if automake >= 1.11 is installed

Neil Roberts (4):
[animation] Move the check for the 'signal::' prefix into a separate 
function
[cogl] Remove CoglContext->journal_vbo{,_len}
Fix the documentation for clutter_texture_set_cogl_material
Take a reference to the material in clutter_texture_set_cogl_material

Øyvind Kolås (2):
[group] Use floating point in sort_z_order
[text] implement get_property for use-markup

Robert Bragg (8):
[backend-egl] fix clutter_backend_egl_get_visual_info to not use Xalloc
[cogl] %.c: %.glsl codegen: use BUILT_SOURCES var + fix stringify.sh
[cogl_clear] Also pass the alpha component of the CoglColor to glClearColor
[cogl-fbo] Disable mipmap filters before calling glCheckFramebufferStatusEXT
[test-cogl-multitexture] Print an error if textures can't be loaded
[tests] Remove test-entry.c since we don't have a ClutterEntry any more
Update the NEWS
[release] 1.0.6

Samuel Degrande (1):
DOT3_RGB[A] cannot be used in a Blend String

Zhou Jiangwei (2):
[cogl] Fix the GLES 2.0 backend
[eglx] Update the EGLX backend

As always; we hope you have fun with Clutter!

Kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center
-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



RE: [clutter] Clutter Fixed Point API

2009-08-14 Thread Robert Bragg
On Fri, 2009-08-14 at 17:04 +0700, Hieu Le Trung wrote:
 Robert,
 
 I'm sure that my CPU does not have an FPU, so using floating point math
 may cause a performance bottleneck.
 Is there any guide on adding Fixed Point support for Clutter? Or is
 there any document that lists the parts of Clutter which are currently
 using floating point math?

I think the right way to approach this is via profiling data. It's quite
possible that even with *software* floating point fallbacks your
application will still run adequately, depending on the nature of your
Clutter scenes. Depending on the overall design of your platform there
may be other aspects that will cause the bottleneck to be something else
entirely, but you won't know until you try it out. Even if it does turn
out to be a problem then you probably don't need to convert everything
to use fixed point; there should only be a few performance critical
paths involving floating point.

There is no guide for this. The most likely place that it will be a
problem is the transformations done in cogl-primitives.c, in the Cogl
journal. By default we transform geometry that gets logged in this
journal on the CPU so that we can batch more together with a single
modelview matrix. Assuming your GPU does hardware vertex transforms you
may want to experiment with disabling this software transform using
COGL_DEBUG=disable-software-transform. The CoglMatrix API could also be
an issue for you too.

kind regards,
- Robert

 
 Regards,
 -Hieu
 
 -Original Message-
 From: Robert Bragg [mailto:b...@o-hand.com] 
 Sent: Thursday, August 13, 2009 12:09 AM
 To: Hieu Le Trung
 Cc: clutter@o-hand.com
 Subject: RE: [clutter] Clutter Fixed Point API
 
 On Wed, 2009-08-12 at 00:28 +0700, Hieu Le Trung wrote:
  Robert,
  
  Thanks for your information. So in case I need to run Clutter on non
  FPU platform, I must spend effort on porting it? Why is it named Fixed
  Point API :)
 
 I'm assuming you are talking about Clutter 1.0 here? The only thing that
 should be named Fixed Point API is the CoglFixed utility API.
 
 The CoglFixed typedef and utility API are the only things left that
 support fixed point maths. They can be used by applications for
 optimizations if they want, but the rest of Clutter and Cogl only
 accepts single precision floats.
 
 All calculations internal to Clutter will be done using floating point
 so if your profiling shows that any of them are a bottleneck you may
 need to patch Clutter to implement some fixed point fast-paths, and
 accept the loss of precision and range implied.
 
 I would strongly recommend that you profile your application before
 assuming what effort is involved. There are many factors that may affect
 your applications performance and the cost of soft float on your
 platform may not be a problem depending on the complexity of the scenes
 you are painting.
 
 kind regards,
 - Robert
 
-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



RE: [clutter] Clutter Fixed Point API

2009-08-12 Thread Robert Bragg
On Wed, 2009-08-12 at 00:28 +0700, Hieu Le Trung wrote:
 Robert,
 
 Thanks for your information. So in case I need to run Clutter on non
 FPU platform, I must spend effort on porting it? Why is it named Fixed
 Point API :)

I'm assuming you are talking about Clutter 1.0 here? The only thing that
should be named Fixed Point API is the CoglFixed utility API.

The CoglFixed typedef and utility API are the only things left that
support fixed point maths. They can be used by applications for
optimizations if they want, but the rest of Clutter and Cogl only
accepts single precision floats.

All calculations internal to Clutter will be done using floating point
so if your profiling shows that any of them are a bottleneck you may
need to patch Clutter to implement some fixed point fast-paths, and
accept the loss of precision and range implied.

I would strongly recommend that you profile your application before
assuming what effort is involved. There are many factors that may affect
your applications performance and the cost of soft float on your
platform may not be a problem depending on the complexity of the scenes
you are painting.

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Clutter Fixed Point API

2009-08-11 Thread Robert Bragg
On Tue, 2009-08-11 at 10:25 +0700, Hieu Le Trung wrote:
 Hi,
 
  
 
 Regarding Clutter's definition of the CLUTTER_NO_FPU macro: “Deprecated:
 0.6: This macro is no longer defined (identical code is used
 regardless the presence of FPU).” In case we have an FPU, is Clutter
 still using fixed point math?
 
 Is there any plan/effort for adding FPU support?
Clutter no longer uses fixed point internally; everything is now done
using single precision floating point.
 
  
 
 Regards,
 
 -Hieu
 
 
-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Color management

2009-07-24 Thread Robert Bragg
On Thu, 2009-07-23 at 17:28 +0200, Vladimir Nadvornik wrote:
 On Thu 23 July 2009, Robert Bragg wrote:
 
  If there is functionality missing in Cogl, I would be more interested in
  improving Cogl than finding ways to work around it via direct GL calls.
 
  Binding 1D and 3D textures is a valid feature request for Cogl, but so
  far I haven't been able to work through what it would take to expose in
  a nice and compatible way. If you could come up with a proposal or
  better still patches :-) for Cogl I think that's the only way we will be
  able to reliably support your use-case.
 
 
 I don't know Cogl internals much, but I can try to create something. At
 first look I think that the best way to add this would be:
 - add 3rd dimension to CoglBitmap
I think we can avoid needing to extend CoglBitmap; instead we would
potentially just give the option to use 2D CoglBitmaps as a source for
defining separate layers of a 3D texture object. Also, since (I think)
glTexImage3D essentially expects your 3D layer data to be arranged as if
you had a large 2D texture divided vertically into subregions for the
layers, we could even just have a utility function for loading all
layers from a single, normal 2D CoglBitmap.
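
I.e. roughly (raw GL shown only to illustrate the data layout; `data' is
a hypothetical buffer):

  /* `data' points at `depth' layers of width * height RGBA texels,
   * stored contiguously with layer 0 first */
  glTexImage3D (GL_TEXTURE_3D, 0, GL_RGBA,
                width, height, depth, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, data);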

 - add new methods to CoglBitmap and CoglTexture that takes 3 dimensions
right, potentially just some new cogl_texture_new_ and cogl_texture_get_
variants for supplying 1d or 3d data.

 - the original methods would assume depth = 1
 - the texture dimension (GL_TEXTURE_3D or GL_TEXTURE_2D) would be determined 
   by the method used for creating the CoglTexture instance
yes, that sounds about right.

e.g. counterparts to other cogl_texture_ functions, that might be
necessary: cogl_texture_1d_new_with_length,
cogl_texture_3d_new_with_volume, cogl_texture_{1d,3d}_new_with_data,
cogl_texture_{1d,3d}_set_region, cogl_texture_{1d,3d}_new_from_file,
cogl_texture_{1d,3d}_new_from_foreign,
cogl_texture_{1d,3d}_get_data

Note: although Cogl doesn't have a very formal object model at this
point, I think it would make sense to consider 1d and 3d textures as
subclasses for CoglTexture. Internally even things specific to 2d
textures would also become part of a 2d subclass. (we could possibly
even add cogl_texture_2d_ API and deprecate some of the cogl_texture_
API as appropriate.)

Note: since we are talking about exposing more texture targets, as well
as 1d and 3d textures we may also want to have 'rect' or 'rectangle'
textures to more formally expose GL_TEXTURE_RECTANGLE_ARB, which can be
useful for applying image filters in shaders due to the un-normalized
texture coordinate space. (Though, alternatively, we could just add some
kind of 2d texture property like cogl_texture_set_normalize_coords (tex,
FALSE), and continue to hide GL_TEXTURE_RECTANGLE_ARB.) Anyway, this is
a bit off topic.

If you take a stab at implementing this, please keep me posted, and I'll
try to provide feedback and assistance if I can.

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



RE: [clutter] Using COGL for 3D drawing

2009-07-20 Thread Robert Bragg
On Sat, 2009-07-18 at 19:09 -0400, Dmitri Toubelis wrote:
 Thanks Robert, this is a very nice explanation. I almost figured it out
 on my own but with your help it is all clear now. Another question
 though: how does it all work with materials? Do I just set a source
 material and use the vertex buffer API to draw?

Yep, you should be able to simply set a source material and call
cogl_vertex_buffer_draw ().
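I.e. in your paint function, something like this (a sketch; `material'
and `vbo' are handles you created earlier):

  cogl_set_source (material);
  cogl_vertex_buffer_draw (vbo, COGL_VERTICES_MODE_TRIANGLES,
                           0,    /* first vertex */
                           100); /* number of vertices */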

The state is flushed to OpenGL in a number of steps, but when flushing
the material state the layers of the material will set up the OpenGL
texture combine mode and determine which texture name is bound to each
unit; the blend equations of the material will be converted to OpenGL
blending state and the material color will usually translate to a
glColor call.

Then each of the vertex buffer attributes that are currently enabled
will internally result in binding a VBO and corresponding call to
glVertexPointer, glColorPointer etc.

regards,
- Robert

P.S. I'm replying from a different address since I realized my original
reply didn't reach the clutter mailing list; anyone else interested
in an overview of the cogl vertex buffer API, please see below...

 
 Regards,
 Dmitri
 
 
 -Original Message-
 From: Robert Bragg [mailto:rob...@linux.intel.com] 
 Sent: July 18, 2009 6:09 PM
 To: Dmitri Toubelis
 Cc: clutter@o-hand.com
 Subject: Re: [clutter] Using COGL for 3D drawing
 
 On Fri, 2009-07-17 at 09:15 -0400, Dmitri Toubelis wrote:
  Hi,
   
  I need to create a few 3D objects in clutter and I would like to use
  OpenGL for this. So, what is the officially recommended approach for this?
 
 The officially recommended approach is to use the Cogl vertex buffer API for
 this. Although there is some limited Cogl API to allow you to break out into
 raw GL in exceptional circumstances, I would discourage this.
 If Cogl doesn't already support what you need, I'd much rather discuss
 improving Cogl.
 
   My thinking was to get absolute coordinates of the actor and then use 
  vertex buffer API to draw. Is it right thing to do? Could anyone share 
  some code samples?
 
 If you use the cogl vertex buffer API then you can just create an actor
 subclass and use cogl_vertex_buffer_draw () in your paint function. You
 shouldn't need to muck about getting the absolute coordinates of any actor
 since the geometry will be transformed by the actor's modelview matrix. This
 API is also integrated with the CoglMaterial API.
 
 Essentially you can use this api something like this:
 
 typedef struct {
   float x, y, z;
   float u, v;
   uint8_t r, g, b, a;
 } MyVertex;
 
 MyVertex my_vertices[100] = {
 ...
 };
 
 vbo = cogl_vertex_buffer_new (100); /* declares the number of vertices
                                        in your VBO */
 
 /* Add X,Y,Z vertex attributes: */
 cogl_vertex_buffer_add (
   vbo, /* handle of vertex buffer object */
   "gl_Vertex", /* name of attribute */
   3, /* number of components (3 for X, Y and Z) */
   COGL_ATTRIBUTE_TYPE_FLOAT, /* attribute data type */
   FALSE, /* should integer types be normalized [0,1]?: no */
   sizeof (MyVertex), /* stride between vertices */
   &my_vertices[0].x); /* pointer to first vertex attribute */
 
 /* Add u,v texture coordinate attributes: */
 cogl_vertex_buffer_add (vbo, "gl_MultiTexCoord0", 2,
                         COGL_ATTRIBUTE_TYPE_FLOAT, FALSE,
                         sizeof (MyVertex), &my_vertices[0].u);
 
 /* Add RGBA color attributes: */
 cogl_vertex_buffer_add (vbo, "gl_Color", 4,
                         COGL_ATTRIBUTE_TYPE_UNSIGNED_BYTE, TRUE,
                         sizeof (MyVertex), &my_vertices[0].r);
 
 NOTE: For the unsigned byte color attribute we want the values 0-255 to be
 normalized to the range [0,1]
 
 NOTE: The names "gl_Vertex", "gl_MultiTexCoord0" and "gl_Color" aren't
 arbitrary, they correspond to builtin GLSL names. Even if you aren't using
 GLSL you must use the builtin GLSL name if one exists; otherwise you are
 free to create custom attributes with any name you like.
 
 You can add multiple "gl_Vertex" attributes - assuming only one is enabled
 at draw time. To differentiate them the names can have a detail, such as
 "gl_Vertex::active" or "gl_Color::enabled".
 
 /* Now upload all data from your client side arrays to GPU buffers...
  *
  * note: at this point if your xyz, texture and color attributes were
  * allocated on the heap you could free them if you like here.
  * note: calling cogl_vertex_buffer_submit is optional, but if you don't
  * explicitly call it then your client side arrays must remain valid
  * until you draw e.g. using cogl_vertex_buffer_draw ().
  */
 cogl_vertex_buffer_submit (vbo);
 
 Ideally you would only do the above construction work once and you wouldn't
 need to modify your vertex buffer per-frame.
 
 In your paint function you can then do:
 
 cogl_vertex_buffer_draw (vbo, COGL_VERTICES_MODE_TRIANGLE_STRIP, 0, 100);
 
 
 If you want to morph your geometry, then you can do that by re-adding the
 attributes that change over time.
 
 Note: if you are morphing geometry then you should avoid uploading more data
 than

Re: [clutter] Linking actors in 3D

2009-07-20 Thread Robert Bragg
I realized my original reply didn't reach the clutter mailing list since
I sent it from the wrong address...

On Fri, 2009-07-17 at 23:12 +0200, Filipe Nepomuceno wrote:
 On Fri, Jul 17, 2009 at 11:24 AM, Neil Roberts n...@linux.intel.com wrote:
  On Fri, 2009-07-17 at 10:39 +0200, Filipe Nepomuceno wrote:
 
  I am trying to connect actors in a graph with a single line in 3D and
  was wondering what would be the best way to go about this.
 
  I was thinking of creating a rectangle of width=2 and height=distance
  between actors, and then use trig to rotate it into place. This is a
  very expensive solution though (if it works) because of the trig
  function calls and it won't show a line if it is rotated 90 degrees
  around the x axis.
 
  Another idea I had was to create a custom actor and use OpenGL calls
  to just draw a line with glVertex.
 
  You can also use the Cogl path API to draw a line. This should be easier
  than using GL directly because otherwise you have to be careful not to
  conflict with Cogl's state caching of GL. So something like:
 
  cogl_path_move_to(actor_x, actor_y);
  cogl_path_line_to(other_actor_x, other_actor_y);
  cogl_path_stroke();
 
  - Neil
 
 
 
 Hi,
 
 That solution works if all the actors are on one plane but how can I
 extend that to actors that are at a different depth? Or am I missing
 something?

Another option is the cogl vertex buffer API. You can find more details
about this API here:
http://www.clutter-project.org/docs/cogl/0.9/cogl-Vertex-Buffers.html
(or please check my recent reply to the mailing list question: [clutter]
Using COGL for 3D drawing)

The cogl_polygon API also allows you to give z coordinates to vertices
so that may be a simpler option.
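For example, something like this rough sketch (ax/ay/az and bx/by/bz are
the two endpoint positions, the 2 pixel "width" is arbitrary, and I'm
assuming the Cogl 1.0 API here):

  CoglTextureVertex verts[4] = { { 0, } };

  verts[0].x = ax;     verts[0].y = ay;     verts[0].z = az;
  verts[1].x = ax + 2; verts[1].y = ay;     verts[1].z = az;
  verts[2].x = bx + 2; verts[2].y = by;     verts[2].z = bz;
  verts[3].x = bx;     verts[3].y = by;     verts[3].z = bz;

  cogl_set_source_color4ub (0xff, 0xff, 0xff, 0xff);
  cogl_polygon (verts, 4, FALSE); /* FALSE: ignore per-vertex colors */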

regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Clutter 0.9 broken for GLES

2009-06-01 Thread Robert Bragg
On Mon, 2009-06-01 at 19:01 +0200, Koen Kooi wrote:
 Hi,
 
 This broke the gles backend:
 
 author Robert Bragg rob...@linux.intel.com   2009-05-12 13:15:18
 commit 36cfb6030784791a4420a1e52a8c18d56b1d0c69 (patch)
 [cogl] Remove the COGL{enum,int,uint} typedefs
 
 Fixing it seems to be trivial:
 
  for i in clutter/cogl/gles/* ; do
  sed -i -e s:CGL_NEAREST:COGL_TEXTURE_FILTER_NEAREST:g \
 -e s:CGL_LINEAR:COGL_TEXTURE_FILTER_LINEAR:g \
 -e s:CGL_VERTEX_SHADER:COGL_SHADER_TYPE_VERTEX:g \
 -e s:CGL_FRAGMENT_SHADER:COGL_SHADER_TYPE_FRAGMENT:g $i
  done
 
 With that I can build revision 36cfb6030784791a4420a1e52a8c18d56b1d0c69 
 against the SGX530 SDK again, but more recent revisions introduced even 
 more GLES breakage
 
 Same goes for clutter-gst and mutter, btw.
 
 It would be great if someone with some actual knowledge about clutter 
 would fix GLES support before 1.0 :)

I think I fixed this in the 1.0 integration branch last week, but I
haven't back-ported it to master so far.

I was also testing GLES 1 and GLES 2 and they were both working for me.

If you could test the 1.0 integration branch and confirm it works for
you that would be good to know.

thanks,
- Robert
 
 regards,
 
 Koen
 
-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Lighting and Lights in Clutter

2009-04-01 Thread Robert Bragg
On Tue, 2009-03-31 at 14:40 +1000, Saul Lethbridge wrote:
 Is there any way to create actors that emit light in clutter? I'm
 wanting to create a spotlight effect that I can animate and move
 around the stage. I noticed some reference to lighting in recent
 builds of cogl: cogl_material_set_emission
 
 Am I on the right track?

Cogl doesn't yet give you a way to add lights to your scene. I had
lighting in mind when I wrote CoglMaterials so they do expose GL
material properties which you can use if you find a way to add lighting.

For now you may be able to break out into raw GL and call glLightfv as
appropriate. I can't be sure of what issues you might hit trying this,
nor can I guarantee that this code won't be broken when we add proper
support into Cogl, but short term it might work for you.
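For example, from a custom actor's paint function you could try
something like this (an untested sketch; as said I can't promise it will
keep working, and you should restore whatever state you enable):

  static void
  my_lit_actor_paint (ClutterActor *actor)
  {
    static const GLfloat position[4] = { 0.0f, 0.0f, 100.0f, 1.0f };
    static const GLfloat diffuse[4]  = { 1.0f, 1.0f, 1.0f, 1.0f };

    glEnable (GL_LIGHTING);
    glEnable (GL_LIGHT0);
    glLightfv (GL_LIGHT0, GL_POSITION, position);
    glLightfv (GL_LIGHT0, GL_DIFFUSE, diffuse);

    /* ... draw the geometry you want lit here ... */

    glDisable (GL_LIGHT0);
    glDisable (GL_LIGHTING);
  }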

Alternatively you can look into adding Cogl support for lighting. If you
go for the latter I'd urge you not to thinly wrap the GL API; instead I
think creating a CoglHandle that represents a light object would be a
preferable approach, with methods for changing the ambient, diffuse and
specular intensities, position, etc.

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Cylinder effect in clutter

2009-03-23 Thread Robert Bragg
On Mon, 2009-03-23 at 12:17 +, Chris Lord wrote:
 This is probably referring to the texture-deformation demo in
 clutter-toys (it isn't a test) - It's called 'odo'.
 
 Note, if the actor in question isn't a texture, you'll need fbo support
 (or you'll have to do it in some other, more cunning way).
 
 --Chris
 
 On Sun, 2009-03-22 at 19:36 -0700, Peng Liu wrote:
  
  Could you please tell us which one?

You can also look at tests/interactive/test-cogl-vertex-buffer.c which
gives an example of creating and deforming a quad mesh; this should
theoretically give better performance than the approach used in odo.

As the name suggests it's built on top of the new cogl vertex buffer
api, which wasn't available when odo was first written.

regards,
- Robert
-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] client-side matrix

2009-02-23 Thread Robert Bragg
On Mon, 2009-02-23 at 08:26 +, Tomas Frydrych wrote:
 Havoc Pennington wrote:
  Anyway, maintaining the matrix client-side does not look super hard
  but it's mostly a series of matter of taste judgment calls so if
  anyone could give some guidance on how to approach this... some early
  thoughts:
 
 I do not particularly like this idea on principle, as you are moving
 processing from GPU to CPU. Also, it feels like you are trying to
 address driver brokenness in Clutter.

Personally I wouldn't be so worried about this - at least not for the
same reason. I think there are good reasons why GLES 2.0 and OpenGL 3.0
scrapped the GL matrix stack APIs entirely. I don't think it's typical
for the matrix stack to involve GPU hardware.

The biggest disadvantage to dealing with the matrices client side looks
to be with tracking the inverse matrix. Calculating the inverse can be a
rather expensive operation, and at least looking at the Mesa code it's
clear they have quite a bit of smarts involving tagging matrices
according to the transformations they represent to allow selecting the
most optimal inverse calculation function. (E.g. think of cases like
glTranslatef (x,y,z), the inverse is dead simple.)

Perhaps in an ideal world; OpenGL would have had API for pushing/popping
a matrix along with its inverse too.

Also it might be worth noting that, although I think OpenGL only needs
the inverse matrix for lighting calculations, and Cogl doesn't expose
lighting:
 - Eventually we will probably want to expose lighting via Cogl so it
may become more of an issue.
 - GLSL vertex programs have a builtin variable for the inverse, which
implies that, whether you are using lighting or not, the driver needs to
calculate the inverse if you use GLSL for vertex shading.

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] client-side matrix

2009-02-23 Thread Robert Bragg
 to be used like
glEnable/glDisable that will sit on top of an internal cache.
2) Also add cogl_enable/disable_client_state calls + cogl_is_enabled
calls.
3) We expose a new cogl_flush_cache func that commits the cache to GL
4) We expose a new cogl_dirty_cache func that lets you invalidate Cogl's
cache.
5) Internally re-work code in terms of these new funcs in place of the
current cogl_enable.

So if we come around to this problem again, I think it's still solvable
and I don't think your client matrix stack has to conflict with it, but
perhaps it would make sense to add top level cogl_flush_gl_caches and
cogl_dirty_gl_caches at the same time.

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] client-side matrix

2009-02-23 Thread Robert Bragg
On Mon, 2009-02-23 at 08:09 -0500, Havoc Pennington wrote:
 Hi,
 
 On Mon, Feb 23, 2009 at 7:32 AM, Robert Bragg b...@o-hand.com wrote:
  I'd personally be fairly happy with the flush type approach; but I'd
  take the opportunity to add something like cogl_flush_gl_state() which I
  think would tie into ideas we've discussed in the past about improving
  the ability to break out of Cogl into raw GL.
 
 Internally to COGL, maybe you want to keep a _cogl_flush_matrices()
 distinct from flushing 'everything' but make the public API just a
 'flush everything'? Something like that makes sense to me.

yes, agreed,

regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] client-side matrix

2009-02-23 Thread Robert Bragg
On Mon, 2009-02-23 at 08:12 -0500, Havoc Pennington wrote:
 Hi,
 
 On Mon, Feb 23, 2009 at 7:32 AM, Robert Bragg b...@o-hand.com wrote:
  The biggest disadvantage to dealing with the matrices client side looks
  to be with tracking the inverse matrix. Calculating the inverse can be a
  rather expensive operation, and at least looking at the Mesa code it's
  clear they have quite a bit of smarts involving tagging matrices
  according to the transformations they represent to allow selecting the
  most optimal inverse calculation function. (E.g. think of cases like
  glTranslatef (x,y,z), the inverse is dead simple.)
 
 To be sure I'm understanding correctly, you are saying that if we
 always send GL a LoadMatrix instead of say Translate, then the mesa
 code has to analyze from scratch to get the inverse and other
 properties, while if we send Translate it knows a lot to start with?

My initial thought was that you should aim to mirror the whole stack by
adding some glPush/PopMatrix calls instead of flattening it into one
server side matrix. This will add traffic, but I'd half expect that if
you are flushing the whole stack in one go, without interleaving any
synchronizing requests, you can get the whole lot to go in one context
switch. -- I'm wondering if we are about to learn about a whole new
level of GL indirect fail where *everything* synchronizes :-) ... though
it occurs to me that this is probably true for a debug build anyway,
given that we follow up most requests with glGetError(); it's probably
worth keeping that in mind.

The idea is that by mirroring the whole stack you let the driver cache
the inverse at different levels of the stack. Using glLoadMatrix and
always re-using the top of the stack means the driver has to chuck
everything away each time.
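
To make the contrast concrete, a minimal sketch (assuming a simple
parent/child paint traversal):

#include <GL/gl.h>

/* Mirroring the stack: the driver sees the same push/pop structure as
 * the client-side stack, so it can keep per-level analysis/inverses. */
static void
paint_child_mirrored (void)
{
  glPushMatrix ();
  glTranslatef (10.0f, 20.0f, 0.0f); /* a typed transform the driver can tag */
  /* ...emit child geometry... */
  glPopMatrix ();
}

/* Flattening client side: one glLoadMatrixf per node; in Mesa this marks
 * the matrix MAT_FLAG_GENERAL so any cached analysis is thrown away. */
static void
paint_child_flattened (const float child_modelview[16])
{
  glLoadMatrixf (child_modelview);
  /* ...emit child geometry... */
}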

I guess your idea of using glTranslate/glRotate also has potential to
improve things. Looking at Mesa, I get the impression that using
glLoadMatrix doesn't actually trigger an analyse-from-scratch function;
rather it enables a USE_THE_GENERALISED_PATH flag. :-/ I might be
missing something though...

See _mesa_LoadMatrixf in mesa/main/matrix.c
  it calls _math_matrix_loadf in mesa/math/m_matrix.c
which does: mat->flags = (MAT_FLAG_GENERAL | MAT_DIRTY);

There is a _math_matrix_analyse function, which could potentially be
called, but that seems to check for mat->flags & MAT_DIRTY_TYPE so it
doesn't seem like it would analyse the matrix in this case.

 
 (Somewhat frustratingly, because my patch just has the Mesa code in
 it, so we have exactly what Mesa computes already on cogl side!)
yeah, if only GL had a way to load the inverse too :-)

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] Confused by eglx backend

2008-12-23 Thread Robert Bragg
On Tue, 2008-12-23 at 00:38 -0800, Gregoire Gentil wrote:
 Hello,
 
 I'm compiling clutter-0.8.4 with the following --with-flavour=eglx
 --with-gles=2.0. I'm also compiling clutter-gtk-0.8.2 with the
 following --with-flavour=x11 --with-gles=2.0
 
 My ultimate goal is to use something like:
 
 gtk_clutter_init(argc, argv);
 GtkWidget *clutter_widget = NULL;
 clutter_widget = (GtkWidget *)gtk_clutter_embed_new();
 gtk_container_add(GTK_CONTAINER(c), clutter_widget);
 stage = (ClutterActor *)gtk_clutter_embed_get_stage(clutter_widget);
 
 I get the following warning/errors:
 Clutter-WARNING **: Unable to create a new stage: the eglx backend does
 not support multiple stages.
 Clutter-CRITICAL **: clutter_actor_realize: assertion 'CLUTTER_IS_ACTOR
 (self)' failed
 ClutterX11-CRITICAL **: clutter_x11_get_stage_visual: assertion
 'CLUTTER_IS_STAGE (stage)' failed

hmm, this seems odd because it looks like Matthew updated the eglx
backend to support multiple stages back in April. (Clutter 0.7.1, git rev
77a7eaeed51) I tried checking out 0.8.2 and 0.8.4 and grepping for "not
support multiple stages", and sure enough it looks like the only backends
that should print similar messages are the eglnative, SDL and fruity
backends, unless I'm missing something?

 
 I'm obviously confused by what clutter can do with eglx backend and how
 I should do it. Can anyone clarify to me at least the following points:
 
 - Can I embed Clutter into GTK with the eglx backend?
I can't say I've tested what you're trying, but I think it should
work, modulo various bugs due to lack of testing. Some experience with
early OMAP 3 PowerVR drivers showed the eglx drivers to be a bit
unstable and sensitive to the size of the X window used. (e.g. we were
ok with full screen windows, but saw issues when window managers resized
our eglx windows.) - Of course this may have improved since we last
tried.

 - If yes, what am I doing wrong?
Double checking that you are running against the right version, and
understanding how you are seeing the error you report when Clutter 0.8.x
doesn't seem to contain that message in its source code, seems like the
place to start.

 - If not, does it mean that clutter-eglx can only work in fullscreen on
 top of everything? How does it work?

eglnative corresponds to full screen EGL, or rather passing a NULL
display/window handle to EGL, which is what the IMG NULL window
system EGL driver expects.

eglx is used to draw over a single X window. It passes a Window XID to
EGL, according to how the IMG eglx window system defines the EGL Native
types. (technically powervr-eglx might be a better name.) This should
not be limited to fullscreen.
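
In terms of raw EGL the difference amounts to something like this (a
rough sketch; config selection and error handling are omitted):

#include <EGL/egl.h>
#include <X11/Xlib.h>

/* eglx-style: the display comes from eglGetDisplay ((EGLNativeDisplayType)
 * xdisplay) and we draw into a specific X window */
static EGLSurface
create_eglx_surface (EGLDisplay edpy, EGLConfig config, Window xwin)
{
  return eglCreateWindowSurface (edpy, config,
                                 (EGLNativeWindowType) xwin, NULL);
}

/* eglnative-style: the display comes from eglGetDisplay
 * (EGL_DEFAULT_DISPLAY) and the "NULL window system" driver expects no
 * real window handle; it just gives you the whole screen */
static EGLSurface
create_eglnative_surface (EGLDisplay edpy, EGLConfig config)
{
  return eglCreateWindowSurface (edpy, config,
                                 (EGLNativeWindowType) 0, NULL);
}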


 I would definitely appreciate some clarifications or some pointers
 explaining the backend story and roadmap.
I'm sorry, but the best I can suggest here, if you want to look into
the history, is to look at the code/git logs; unless you have a more
specific question. As far as a roadmap goes for backends, I'm afraid we
don't have one. If there is something specific you would like to see,
though, or you can clearly point to a bug, please file a report at
http://bugzilla.o-hand.com; we would also be very happy to review patches.

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] XCOMPOSITE extension not found when configuring Clutter

2008-10-27 Thread Robert Bragg
On Mon, 2008-10-27 at 02:51 -0700, Peng Liu wrote:
 I am a newbie to Clutter. I downloaded clutter-0.8.2 and configured
 it. It output the following:
  
  
 checking for X11... found
 checking for XFIXES extension = 3... found
 checking for XDAMAGE extension... found
 checking for XCOMPOSITE extension = 0.4... not found
 configure: error: Required backend X11 Libraries not found.
 
  
 It seems that the XComposite extension is not installed. I checked the
 path /usr/lib:
 root$ ll libXcomposite.so*
 lrwxrwxrwx 1 root root   22 2008-10-27 22:30 libXcomposite.so ->
 libXcomposite.so.1.0.0
 lrwxrwxrwx 1 root root   22 2008-10-27 22:17 libXcomposite.so.1 ->
 libXcomposite.so.1.0.0
 -rwxr-xr-x 1 root root 8840 2006-11-20 23:27 libXcomposite.so.1.0.0
 
 libXcomposite library is installed. I googled it but failed to find
 the solution.  
  
 Any clue what I'm doing wrong?  
  
 My system is Fedora 7 running on VirtualBox.

You are probably missing a dev package. I don't know what Fedora calls
them, but for example Ubuntu has a separate libxcomposite-dev package
which will include /usr/lib/pkgconfig/xcomposite.pc. It's that .pc
file that configure is actually looking for.

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to [EMAIL PROTECTED]



Re: [clutter] About COGL Path

2008-10-22 Thread Robert Bragg
On Tue, 2008-10-21 at 18:41 -0400, Tom Cooksey wrote:
 On 10/21/08, Neil Roberts [EMAIL PROTECTED] wrote:
  Another thing I've been thinking about for the paths is to tessellate
  them into triangles so that we avoid using the stencil buffer
  altogether. In that case we would probably need to calculate the texture
  coordinates again, but it might help with your other idea which was to
  get a handle to a stored path. We could combine this with the Mesh API
  idea from Rob Bragg [2] and store the tessellated path as a mesh. The
  tessellation idea is described in bug #1198.
 
 If you do this, look into caching the results of the tesselation as it
 can be quite expensive - especially for complex paths. Also, if you
 cache the tesselation, be careful if the path contains curves. If the
 path gets scaled up your curves aren't curvy any more and you need to
 re-tesselate.
The mesh API is essentially going to be a fairly raw abstraction over
buffer objects, which we can use to cache the tessellation. (Though
apparently the GLU tessellation path is already faster than using the
stencil buffer tricks, which is cool :-) )
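
For anyone wanting to experiment, capturing the GLU tessellator output
so it can be cached in a buffer looks roughly like this (only a sketch;
the fixed-size array and the omitted begin/end/combine callbacks are
simplifications):

#include <GL/glu.h>

#define MAX_TESS_VERTS 4096

static GLdouble tess_verts[MAX_TESS_VERTS][3];
static int n_tess_verts = 0;

/* called once per emitted vertex: append it to an array that can later
 * be uploaded to a buffer object / mesh-API handle and reused */
static void
tess_vertex_cb (void *data)
{
  GLdouble *v = data;
  if (n_tess_verts < MAX_TESS_VERTS)
    {
      tess_verts[n_tess_verts][0] = v[0];
      tess_verts[n_tess_verts][1] = v[1];
      tess_verts[n_tess_verts][2] = v[2];
      n_tess_verts++;
    }
}

static void
tessellate_path (GLdouble (*path)[3], int n_path_verts)
{
  GLUtesselator *tess = gluNewTess ();
  int i;

  gluTessCallback (tess, GLU_TESS_VERTEX, (void (*) ()) tess_vertex_cb);

  gluTessBeginPolygon (tess, NULL);
  gluTessBeginContour (tess);
  for (i = 0; i < n_path_verts; i++)
    gluTessVertex (tess, path[i], path[i]);
  gluTessEndContour (tess);
  gluTessEndPolygon (tess);

  gluDeleteTess (tess);
}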

Since we will be caching the results, one way of dealing with the
non-curvy-edges issue initially might be to just whack the detail level
of the tessellation right up, store that, and then I think we should be
able to sample lower resolution geometry out of the same buffer quite
cheaply.

Perhaps another way would be to walk around the boundary vertices and
determine a variable LOD based on how curved the boundary is at any
point, then find a way to feed that data into the tessellator.

If anyone has any other tricks/experience with this problem that'd be
cool to hear about though!

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to [EMAIL PROTECTED]



Re: [clutter] CSS for Clutter

2008-10-03 Thread Robert Bragg
 code that deals with cascading and selecting to some extent, and wonder
what you'd think of exposing that code via an interface similar to the
above? I think it should then be possible to sit your existing
Cairo-specific code on top of that in a GTK+ theme engine and start
working on Tidy code that can draw using Cogl.

Anyhow, thanks for the interest Rob, it would be cool to see some
progress on CSS theming for a Clutter based toolkit, I hope you can
help.

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to [EMAIL PROTECTED]



Re: [clutter] Re: CSS for Clutter

2008-10-03 Thread Robert Bragg
On Fri, 2008-10-03 at 13:38 +, Thomas Wood wrote:
 Hi Rob (x2),
 
 What you proposed sounds interesting, but I wonder whether we're in danger of
 over engineering here. Styling an application using css-style syntax rules is a
 fairly simple requirement. Creating a multi level generic css styling system
 seems an order of complexity greater than what we really need. The end
 requirement is that the end user doesn't need to learn yet another styling
 language in order to style their applications. I'm not sure it makes sense to
 share the same small bits of code cross toolkit, for the amount of benefit we
 gain.
 
 Anyway, just a thought...

Talking with Robsta a bit on IRC, it looks like the abstractions are
already there to some extent anyway. Since he's looking at sharing code
originally used for a GTK+ theme engine with Clutter, I think it makes
sense to tidy up the divide between his selector engine and the
Cairo-specific code.

I'm fairly sure we aren't talking about adding complexity (certainly not
an order of magnitude :-) ). Some of the code *can* be shared across
toolkits. I.e. I see this more as a code organisation problem and a few
typedef tweaks; it doesn't really affect the code that needs to be
written one way or another. (except perhaps the canonical name stuff -
but I'm fairly sure that can be a later consideration anyway)

(To be clear about the canonical name stuff though, a.t.m that's more of
a would-be-nice feature, to keep in mind, but I see no reason that it
couldn't be implemented later.)

At the end of the day, we need code to parse CSS data; code to handle
cascading and use selectors to look up the right data for a particular
toolkit widget; and code that does rendering, via Cairo for GTK+ and via
Cogl for Clutter. In a Clutter toolkit we would also be affecting layout
based on padding properties; I'm not sure if the GTK+ code supports that
a.t.m.

As far as I understand it, Rob has achieved the toolkit independence by
representing the hierarchy of widgets in a document tree of nodes,
which is different from my idea of having an opaque NativeWidget type. I
can't see any real advantage of one approach over the other, so sticking
with the document tree makes sense to me.

I think Rob also has a similar concept to my backend interface, which
uses a vtable of functions to access widget data:

robsta: rib: basically, there is no toolkit-specific code in place now,
the consumer has to submit a vtable when querying the stylesheet
robsta: rib: the vtable consists of one-liners like
clutter_actor_get_name() for the implementation of the get_id() hook
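
To give an idea of the shape of that, such a vtable might look something
like this on the Clutter side (the type and function names here are made
up for illustration; they aren't Rob's actual API):

#include <clutter/clutter.h>

typedef struct
{
  const char * (* get_id)    (gpointer widget);
  const char * (* get_class) (gpointer widget);
} StyleQueryVTable;

static const char *
clutter_get_id (gpointer widget)
{
  return clutter_actor_get_name (CLUTTER_ACTOR (widget));
}

static const char *
clutter_get_class (gpointer widget)
{
  return G_OBJECT_TYPE_NAME (widget);
}

static const StyleQueryVTable clutter_style_vtable = {
  clutter_get_id,
  clutter_get_class,
};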

kind regards,
- Robert

-- 
Robert Bragg, Intel Open Source Technology Center

-- 
To unsubscribe send a mail to [EMAIL PROTECTED]



Re: [clutter] Drawing in the background while idle

2008-09-24 Thread Robert Bragg
On Wed, 2008-09-24 at 20:39 +1000, Steve Smith wrote:
 Hi,
 
 I'm working on simple scrolling-graph app in Python with Clutter and
 Cairo.  The scolling works by sliding a Cairo texture across with a
 simple Path Behaviour and once it reaches the end returns it to the
 start and redraws the graph shifted with new data appended to the end.
 
 The problem is that the time taken to redraw the Cairo texture causes
 a visible pause on all but the fastest machines.  The obvious method
 to avoid this is to draw the next texture in the background while the
 first is scrolling and then flip them at the end (double-buffering
 basically).  I /could/ do this in a separate thread but I'd rather
 avoid this and was wondering if there is a better method of utilising
 idle time in the background?  Any tips would be appreciated.
Hi Steve,

Yeah, I'm fairly sure we are going to start seeing more and more of these
kinds of problems as people start writing Clutter apps involving
slow-to-render scenes. Unless you are running on a high-end system, a
single-threaded Clutter just can't avoid blocking the mainloop while
rendering the scene; in turn that will inevitably affect how quickly
Clutter can deal with input events, further impacting the user
experience. We've been discussing this quite a bit recently. I don't
believe you can work around this via idle handlers, or by making your
application multi-threaded; instead you need to find tricks to reduce
your scene render time. (It's appreciated that that's not ideal.)

The nicest approach we've come up with so far seems to be getting
Clutter to internally push the rendering out to a worker thread by
moving the calls to glXSwapBuffers into a new thread. Since writing
multi-threaded GL apps is pretty common amongst game developers etc., I
think there should be very little risk in Clutter internally starting to
make GL calls from different threads. This approach also completely
hides threading issues from developers. (Usually for the win!)

Neil Roberts has even implemented all this already, yay! You can track
its progress here:
http://bugzilla.o-hand.com/show_bug.cgi?id=1118

The main caveat is that it requires a new approach to picking since we
currently have an API that lets you synchronously query the actor at
some stage position. Picking requires an offscreen render, so to allow
it to be pushed into the worker thread, the picking API now takes a
callback that gets called only when the render is complete and we have a
result.
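
So instead of the current synchronous call, usage would end up looking
something like this (purely illustrative; these aren't the exact names
from the bug):

#include <clutter/clutter.h>

static void
on_picked (ClutterActor *actor, gpointer user_data)
{
  /* only called once the offscreen pick render has completed */
  if (actor)
    g_print ("picked: %s\n", clutter_actor_get_name (actor));
}

/* Instead of:
 *   actor = clutter_stage_get_actor_at_pos (stage, x, y);
 * you would request the pick and get the answer later:
 *   clutter_stage_pick_async (stage, x, y, on_picked, NULL);
 * (clutter_stage_pick_async is a hypothetical name for this sketch.)
 */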

Note, this stuff is still at the discussion/experimentation stage, so
there are no guarantees that it will actually make it into Clutter proper
any time soon, or that we won't come up with a different approach. If you
are interested in experimenting with the patches please feel free to give
feedback. Of course you'll also need to tweak the Python bindings for
clutter_stage_get_actor_at_pos yourself for now.

kind regards,
- Robert


-- 
To unsubscribe send a mail to [EMAIL PROTECTED]



Re: [clutter] COGL mapping with OpenGL model

2008-08-26 Thread Robert Bragg
On Mon, 2008-08-25 at 22:29 +0900, bobos26 wrote:
 Hi there.
 
 I'm sending you this e-mail to beg for your help.
 
 I'm currently making a widget (or custom actor) to draw a sphere.
 I use the OpenGL library to draw the sphere and the COGL library to map
 the texture.
 I succeed in drawing the sphere, but I can't map the sphere with a JPG image.
 I can see the sphere, but the mapped image is totally mangled.
 I think there is some problem with the vertex and texture
 positioning.
 I tried to fix it for 3 days, but I couldn't find the cause.
 I use a normal JPG image and my source looks like the code below.
 
 Please tell me what I have to fix~
 
 -Source Image
 world
 
 -Result
 
 sphere
 (where is my beautiful Earth?)
 
 
 
 
 
 - Make Arrays
 
 for( i = 0; i < p/2; ++i ){
 theta1 = i * TWOPI / p - PIDIV2;
 theta2 = (i + 1) * TWOPI / p - PIDIV2;
 
 for(j = 0; j <= p; ++j ){
 theta3 = j * TWOPI / p;
 
 ex = cosf(theta2) * cosf(theta3);
 ey = sinf(theta2);
 ez = cosf(theta2) * sinf(theta3);
 
 array[polygon_index] = ex*width;
 polygon_index++;
 
 array[polygon_index] = ey*width;
 polygon_index++;
 
 array[polygon_index] = ez*width;
 polygon_index++;
 
 vertices[geo_index].x =
 CLUTTER_FLOAT_TO_FIXED((ex*width));
 vertices[geo_index].y =
 CLUTTER_FLOAT_TO_FIXED((ey*width));
 vertices[geo_index].z =
 CLUTTER_FLOAT_TO_FIXED((ez*width));
 vertices[geo_index].tx =CLUTTER_FLOAT_TO_FIXED(
 j/(float)p);
 vertices[geo_index].ty =CLUTTER_FLOAT_TO_FIXED((2*i
 +1)/(float)p);
 geo_index++;
 
 ex = cosf(theta1) * cosf(theta3);
 ey = sinf(theta1);
 ez = cosf(theta1) * sinf(theta3);
 
 array[polygon_index] = ex*width;
 polygon_index++;
 
 array[polygon_index] = ey*width;
 polygon_index++;
 
 array[polygon_index] = ez*width;
 polygon_index++;
 
 vertices[geo_index].x
 =CLUTTER_FLOAT_TO_FIXED((ex*width));
 vertices[geo_index].y
 =CLUTTER_FLOAT_TO_FIXED((ey*width));
 vertices[geo_index].z
 =CLUTTER_FLOAT_TO_FIXED((ez*width));
 vertices[geo_index].tx  =CLUTTER_FLOAT_TO_FIXED(
 j/(float)p);
 vertices[geo_index].ty
 =CLUTTER_FLOAT_TO_FIXED(2*i/(float)p);
 geo_index++;
}
 }
 
 
 - Make Texture
 priv->cogl_tex_id = cogl_texture_new_from_file ("worldmap.jpg", 0,
 FALSE,
 COGL_PIXEL_FORMAT_ANY,
 NULL);
You need to pass -1 in here for max_waste so that cogl will not slice
your texture up. This is a requirement of the cogl_texture_polygon call.
Also this would let you use cogl_texture_get_gl_texture
to pluck out a GL handle for the texture loaded by Cogl. (see more
below)

 
 cogl_texture_set_filters
 (priv->cogl_tex_id, CGL_NEAREST, CGL_NEAREST);
 
 - Draw sphere and Mapping
 //-- Vertex draw
 glEnableClientState(GL_VERTEX_ARRAY);
 glVertexPointer   (3, GL_FLOAT, 0, array);
 glDrawArrays(GL_TRIANGLE_STRIP, 0, 930);

It looks like you are trying to emit geometry here without enabling
texturing. This step seems redundant given your next step, where you
then use Cogl to emit textured geometry. (OpenGL/Clutter does not
separate the phases of emitting geometry and then painting that
geometry. With OpenGL it's more like you emit textured geometry: you
glEnable (GL_TEXTURE_2D), bind your texture, and emit vertices that
include a position and texture coordinates.)
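
For example, once the texture is created unsliced, you could do all of
the drawing with raw GL along these lines (a rough sketch with no error
checking):

#include <cogl/cogl.h>
#include <GL/gl.h>

static void
draw_textured_sphere (CoglHandle tex, const float *positions,
                      const float *texcoords, int n_verts)
{
  GLuint gl_tex;
  GLenum gl_target;

  /* only works for unsliced textures (max_waste == -1 at creation) */
  cogl_texture_get_gl_texture (tex, &gl_tex, &gl_target);

  glEnable (gl_target);
  glBindTexture (gl_target, gl_tex);

  glEnableClientState (GL_VERTEX_ARRAY);
  glEnableClientState (GL_TEXTURE_COORD_ARRAY);
  glVertexPointer (3, GL_FLOAT, 0, positions);
  glTexCoordPointer (2, GL_FLOAT, 0, texcoords);

  glDrawArrays (GL_TRIANGLE_STRIP, 0, n_verts);

  glDisableClientState (GL_TEXTURE_COORD_ARRAY);
  glDisableClientState (GL_VERTEX_ARRAY);
  glDisable (gl_target);
}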

 //- Mapping
 CoglTextureVertex verticess[4];
 CoglTextureVertex tmp[3];
 int l = 0;
 int m = 0;
 int n = 0;
 int o = 0;
 for (l = 0; l < 930; l++)
 {
 if (o != 0){
 verticess[0] = verticess[2];
 verticess[1] = verticess[3];
 o = 0;
 }else{
 verticess[m] = vertices[l];
 }
 m = m +1;
 printf("%d, ", m);
 if(m == 4)
 {
 for (n = 0;n < 3;n++)
 {
 if (n == 0)
 {
 tmp[n] = verticess[0];
 }else if(n ==1)
 {
 tmp[n] = verticess[1];
 }else if(n == 2)
 {
 tmp[n] = verticess[2];
 }
 
 }
 cogl_texture_polygon (priv->cogl_tex_id, 3, tmp, FALSE);
 for (n = 0;n < 3;n++)
 {
 if (n == 0)
 {
 tmp[n] = verticess[2];
 }else if(n ==1)
 {
 tmp[n] = verticess[1];
 }else if(n == 2)
 {
 tmp[n] = verticess[3];
 }
  

Re: [clutter] Custom actors using GL directly

2008-08-20 Thread Robert Bragg
On Wed, 2008-08-20 at 16:18 +0300, Michael Boccara wrote:
 
 Hi Robert,
 
 Robert Bragg wrote:
  I'd be rather worried if your GL driver is causing a hardware flush for
  calling glGet* ? Broadly speaking a GL driver will maintain a large
  state machine e.g. using a plain C struct {} and perhaps maintain some
  dirty flags for various members. If you glEnable something then the dirty
  flag could be set and the value updated (no HW flush here), and if you
  just glGet something that should simply read some particular struct
  member. When the driver comes to do the next frame it uses the dirty
  flags to determine what state changes need to be issued to the HW and
  continues to emit the geometry + kick the render etc.
  

 Yes I agree actually, as I said to Neil.
  Certainly there are pros and cons. I think typically the GL functions
  would have marginally greater overheads in the single threaded use case
  (most of the time for Clutter) since GL implicitly has to do a thread
  local storage lookup for each GL call, and possibly take a lock. That
  used to be quite expensive, though I guess these days with NPTL it might
  not be such an issue. Also I wouldn't be so hopeful that all GL/GLES
  drivers are good yet sadly. Certainly a number of smaller GPU vendors
  creating GLES capable hardware are less likely to have very well
  optimised drivers.

 Yes, and PVR is an example of what you say.
  Currently our cache supports a different interface than the
  glEnable/Disable approach of OpenGL. We currently have cogl_enable()
  (The name is a bit misleading because it's not synonymous with glEnable)
  that takes a complete bitmask of the state we want enabled which will
  determine the appropriate glEnables/Disables etc to call. I.e. because
  Cogl's state management requirements have been quite simple so far it's
  been convenient for us to be able to setup all our enables in one go via
  a single bitmask.
  

 Besides that, when it comes to client state and generic vertex
 attributes, cogl is using a very proprietary and hidden mapping for
 vertex, texcoords and color attributes, which a developer of a custom
 actor with native GL calls can't guess, which may lead to collisions
 when needing a new vertex attribute (like even normals) for a fancy
 shader.
  Currently cogl_enable is purposely not public :-)
  It is not a scalable interface at all, and for example we still haven't 
  determined
  how to manage multiple texture units with this approach. Basically it's
  not possible to flatten all the GL enums that can be enabled/disabled
  etc into one bitmask.

 I see, as a ref to another question I posted yesterday, is that what
 is delaying the promotion of multitexture support in the next clutter
 release ?
Not really; it's possible to work around the problem internally by just
disabling all additional texture units once they are finished with, and
Cogl won't fall over.

The status update (from what's on Bugzilla) of this work is that I have
implemented the GLES 2.0 backend, but so far haven't had a chance to
test/debug it; GLES 1 still needs support but should be trivial; the
main blocking piece is Clutter support. Clutter has no object a.t.m to
represent texture data that is disassociated from actor geometry
(ClutterTextures are defined as textured quads), so I'm currently adding
a new ClutterTextureLayer object. The idea a.t.m is for a
ClutterTexture to sit on top of a single ClutterTextureLayer, and a new
ClutterMultiTexture actor will be created that sits on top of N layers.
(Most of the current ClutterTexture brains should move into
ClutterTextureLayer though, so they will effectively share a lot of
code.)

  Brainstorming this a bit with others, we have the following proposal:
  1) We remove cogl_enable. (Internally it only manages 4 flags a.t.m)
  2) We expose new cogl_enable and cogl_disable functions to be used like
  glEnable/glDisable that will sit on top of an internal cache.
  3) We expose a new cogl_flush_cache func that commits the cache to GL
  4) We expose a new cogl_dirty_cache func that lets you invalidate Cogl's
  cache.
  5) Internally re-work code in terms of these new funcs in place of the
  current cogl_enable.
  
  This should support a complete break out into GL (even into code you
  can't easily modify) since you just need to call cogl_dirty_cache to
  invalidate everything.
  
  Do you think that could help your use case?
  

 I love the idea, especially the cogl_dirty_cache...
okay, well, we'll hopefully put something up on Bugzilla for you to take
a look at.

regards,
- Robert

-- 
To unsubscribe send a mail to [EMAIL PROTECTED]