Re: [clutter] Using cogl without clutter

2010-04-20 Thread Hieu Le Trung
Oscar,

On Wed, 2010-04-07 at 16:56 +0200, Oscar Lazzarino wrote:
 On Wed, Apr 7, 2010 at 4:43 PM, Neil Roberts n...@linux.intel.com wrote:
 
  You can't call the cogl_pango_* functions using a layout created with a
  regular pango context - instead it has to be a context created with the
  special Cogl font map. The cogl_pango functions in cogl-pango.h are
  meant to be public so it is safe to use them. The lack of documentation
  is an oversight not an attempt to hide them.
 
  The usual way to paint custom text in an actor is to call
  clutter_actor_create_pango_layout() and then render it with
  cogl_pango_render_layout(). This will take advantage of the CoglPango
  glyph cache and render the text from textures so it should be relatively
  efficient.
 
  Hope that helps
 
  - Neil
 
 
 I did as you said, and it works :)
 
 Actually, just out of curiosity, I also tried to create the Pango
 context like this
 
 PangoFontMap *lPangoFontMap = cogl_pango_font_map_new();
 PangoContext *lPangoCtx = pango_font_map_create_context(lPangoFontMap);
 
 and it also works.

Yes, it works. Your case only interacts with Pango. Cogl is a wrapper
around GL and GLES calls, so if you make a GL- or GLES-specific call you
need to call clutter_init() first to initialize the display.
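
Neil's suggested path can be sketched roughly as below. This is a minimal
illustration, not code from the thread: the my_actor_paint name and the idea
of a custom actor's paint handler are assumptions, and error handling is
omitted.

```c
#include <clutter/clutter.h>

/* Hypothetical paint handler for a custom actor, sketching the
 * clutter_actor_create_pango_layout() + cogl_pango_render_layout()
 * path Neil describes (cogl_pango_render_layout is declared in
 * cogl-pango.h). */
static void
my_actor_paint (ClutterActor *actor)
{
  /* The layout is created through the actor, so it is backed by the
   * CoglPango font map and glyph cache. */
  PangoLayout *layout =
    clutter_actor_create_pango_layout (actor, "Hello, Cogl");
  CoglColor color;

  cogl_color_set_from_4ub (&color, 0xff, 0xff, 0xff, 0xff);

  /* Draw the layout at (0, 0) in the actor's coordinate space. */
  cogl_pango_render_layout (layout, 0, 0, &color, 0);

  g_object_unref (layout);
}
```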

 
 Thanks to everyone who answered :)
 
 O.

Regards,
-Hieu

-- 
To unsubscribe send a mail to clutter+unsubscr...@o-hand.com



Re: [clutter] x86-64 issue

2010-04-20 Thread Hieu Le Trung
Emilio,

On Fri, 2010-04-16 at 22:42 -0300, Emilio Fernandes wrote:
 Hi all,
 
 I'm new to Clutter, and I'm now trying to create an interface based on
 Clutter and Mx, but I'm running into an issue when trying to compile
 Clutter 1.2.4 on my 64-bit AMD Ubuntu machine.
 
   CCLD   libclutter-glx-1.0.la
 /usr/bin/ld: i386 architecture of input file `.libs/clutter-profile.o'
 is incompatible with i386:x86-64 output
 
 can someone help me?

Try running make clean, removing any leftover .o files, and then running
make again.
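
Concretely, assuming a standard autotools tarball build, the cleanup might
look like this (the exact steps depend on your setup):

```shell
# Remove objects left over from an earlier 32-bit build; the linker
# error above means stale i386 .o files are being pulled into the
# x86-64 link.
make clean
# Catch any objects "make clean" misses (e.g. under .libs/).
find . \( -name '*.o' -o -name '*.lo' \) -delete
make
```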

 
 
 thx!
 -- 
 Emilio Seidel Fernandes
 Tec. Desenvolvimento de Sistemas Distribuídos - UTFPR Curitiba

Regards,
-Hieu




Re: [clutter] [cogl] Texture slices: what is the waste?

2010-04-20 Thread Robert Bragg
Hi Alberto,

Excerpts from Alberto Mardegan's message of Fri Apr 16 20:11:03 +0100 2010:
 Hi all,
I'm implementing some optimizations to some cogl texture functions, 
 since they seem to have a considerable impact on my application 
 performance, and I've started with _cogl_texture_upload_to_gl() (GLES 
 backend).
 
 I added some debugging statements in there, and it seems that the 
 texture is never sliced in my case.
 So, I've implemented the optimization suggested by the FIXME comment, 
 that is avoid copying the bitmap to a temporary one. Things seem to work 
 fine, and definitely faster.

Excellent, thanks for taking a look at this.

 
 Before submitting this patch for review, though, I'd like to understand 
 whether the code blocks introduced by the if ({x,y}_span->waste > 0) 
 condition are also relevant in the single-slice case, or if they can be 
 omitted. I left them out and I'm not noticing any problems.

In short; yes a sliced texture with only one slice can have waste...

  First, this is a multi-slice example with waste:

   |-- Slice 0 --|-- Slice 1 --|-- Slice 2 --|
   |- POT size --|- POT size --|- POT size --|
   |------- User's texture size -------|waste|
   -------------------------------------------
   |ooooooooooooo|ooooooooooooo|ooooooo|xxxxx|
   |ooooooooooooo|ooooooooooooo|ooooooo|xxxxx|
   |ooooooooooooo|ooooooooooooo|ooooooo|xxxxx|
   |ooooooooooooo|ooooooooooooo|ooooooo|xxxxx|
   |ooooooooooooo|ooooooooooooo|ooooooo|xxxxx|
   -------------------------------------------
   o = user data; x = waste data
   A slice is an individual OpenGL texture object.

  But a single-slice example could look like this:

   |------- power of two size -------|
   |---- User's tex size ---|-waste--|
   -----------------------------------
   |oooooooooooooooooooooooo|xxxxxxxx|
   |oooooooooooooooooooooooo|xxxxxxxx|
   |oooooooooooooooooooooooo|xxxxxxxx|
   |oooooooooooooooooooooooo|xxxxxxxx|
   |oooooooooooooooooooooooo|xxxxxxxx|
   -----------------------------------

The waste is basically used to pad the difference between the power of
two texture sizes and the size of the user's texture data.

When the difference would be too large, the user's texture data gets
spread across multiple GL textures (slices). The max_waste threshold
determines when we do this.

So, for example, if you try to load a 190-pixel-wide texture, we first
determine that the nearest power-of-two size that fits is 256, which
would leave 66 pixels of waste on the right. If that's larger than the
current max_waste threshold then, instead of loading the user's texture
into a 256-pixel-wide texture, we'd consider loading it into a 128-pixel
texture + a 64-pixel texture. This leaves 2 pixels of waste on the right
of the 64-pixel slice, which we'd expect to pass the max_waste
threshold.

If the max_waste threshold were greater than 66, though, we would simply
load the user's 190-pixel texture into one 256-pixel-wide slice with 66
pixels of waste.
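
The arithmetic in that example can be sketched as below. This is just an
illustration of the slicing decision, not Cogl's actual code; next_pot,
single_slice_waste and would_slice are hypothetical names.

```c
/* Round up to the next power-of-two size (for sizes >= 1). */
static int
next_pot (int size)
{
  int pot = 1;
  while (pot < size)
    pot *= 2;
  return pot;
}

/* Waste if the texture were stored in a single POT slice. */
static int
single_slice_waste (int size)
{
  return next_pot (size) - size;
}

/* Would the sliced path kick in for this size and threshold? */
static int
would_slice (int size, int max_waste)
{
  return single_slice_waste (size) > max_waste;
}
```

With these definitions, next_pot (190) gives 256 and single_slice_waste (190)
gives 66, while splitting into 128 + 64 = 192 pixels leaves only 2 pixels of
waste, matching the walk-through above.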

Note: the above examples only depict waste along the x axis with the
waste on the right, but it's also possible to have waste on the y axis
at the bottom.

Note: platforms fully supporting NPOT textures never need to slice
unless you upload textures larger than the GPU's texture size limits,
and even then they never have waste.

It might be worth investigating if your GLES platform supports this
extension:
http://www.khronos.org/registry/gles/extensions/OES/OES_texture_npot.txt

If so, it might be worth patching the GLES backend to check for this
and, when it's available, OR in the COGL_FEATURE_TEXTURE_NPOT flag.
(You could do this in _cogl_features_init in driver/gles/cogl.c.)
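
A sketch of what such a check could look like; supports_npot is a
hypothetical helper, not existing backend code:

```c
#include <string.h>

/* Hypothetical helper: decide whether the NPOT texture feature should
 * be advertised, given the driver's GL_EXTENSIONS string.
 * Note: strstr can false-match a prefix of a longer extension name;
 * a production check should compare whole space-separated tokens. */
static int
supports_npot (const char *gl_extensions)
{
  return gl_extensions != NULL
    && strstr (gl_extensions, "GL_OES_texture_npot") != NULL;
}
```

In _cogl_features_init you would then fetch the string with
glGetString (GL_EXTENSIONS), and OR in COGL_FEATURE_TEXTURE_NPOT when the
helper returns true (assuming the backend keeps its feature bits in a local
flags variable).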

I hope that helps,
kind regards,
- Robert

 
 Ciao,
Alberto
 
-- 
Robert Bragg, Intel Open Source Technology Center