Re: Project rename to "GTK"

2019-02-14 Thread Owen Taylor via gtk-devel-list
On Wed, Feb 6, 2019 at 10:23 AM Owen Taylor  wrote:
>
> On Wed, Feb 6, 2019 at 5:04 AM Emmanuele Bassi via gtk-devel-list
>  wrote:
> >
> > Hi all;
> >
> > tl;dr: GTK is GTK, not GTK+. The documentation has been updated, and the 
> > pkg-config file for the future 4.0 major release is now called "gtk4"
> >
> > over the years, we had discussions about removing the "+" from the project 
> > name. The "plus" was added to "GTK" once it was moved out of the GIMP 
> > sources tree and the project gained utilities like GLib and the GTK type 
> > system, in order to distinguish it from the previous, in-tree version. Very 
> > few people are aware of this history, and it's kind of confusing from the 
> > perspective of both newcomers and even expert users; people join the wrong 
> > IRC channel, the URLs on wikis are fairly ugly, etc.
>
> Thanks for moving this along! It's good to see the GTK name finally
> get less confusing and easier to talk about!
>
> But to clarify the history, the "+" predates the point when GTK was
> moved out of the GIMP tree. Every single version of GTK with publicly
> released sources was called GTK+. As I understand it, Peter Mattis
> added the + to mark a change from a very early version that was
> structured more like Xt/Motif, to a version that had a fuller type
> system with inheritance.

To add a little more to this - Elijah Lynn pointed me to an answer he
obtained from Peter on the subject - see
https://unix.stackexchange.com/a/443832/27902 -

    GTK was the first version of the toolkit used in pre-1.0 versions
    of the GIMP. At some point, the architectural limitations were
    revealed and I rewrote and renamed it as GTK+. This too was used
    in pre-1.0 versions of the GIMP. I don't believe any project
    outside of the GIMP used GTK-(no-plus). Why a "+" instead of a
    version number? No reason other than whim.
        ~ Peter Mattis


Re: g_object_ref() now propagates types

2017-12-11 Thread Owen Taylor
On Fri, Dec 8, 2017 at 6:26 AM, Philip Withnall wrote:

>
> child_type = CHILD_TYPE (g_object_ref (parent_type));
>
> That will add a compile-time explicit cast, and a runtime type check.
> (As always, the runtime type check is disabled if GLib is built without
> debugging enabled, or with G_DISABLE_CAST_CHECKS defined.)
>

G_DISABLE_CAST_CHECKS is defined internally in GLib and GTK+ for stable
releases (--enable-debug defaults to "minimum" for stable releases and
"yes" for unstable releases), but that has no effect on whether cast
checks are enabled in your application; if you want to disable cast
checks in production builds of your application, you need to define
G_DISABLE_CAST_CHECKS yourself.
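
To illustrate, here is a minimal sketch of the pattern under discussion.
The GtkWidget/GtkLabel pairing and variable names are illustrative only,
not taken from the thread:

    GtkWidget *widget = gtk_label_new ("hello"); /* statically a GtkWidget */

    /* g_object_ref() now propagates its argument's type, so the cast
     * macro compiles cleanly and, in debug builds, verifies at runtime
     * that the object really is a GtkLabel: */
    GtkLabel *label = GTK_LABEL (g_object_ref (widget));

    /* For a production build of the application, compiling with
     * -DG_DISABLE_CAST_CHECKS reduces GTK_LABEL() to a plain C cast. */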

Owen
___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
https://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: pango_layout_get_extents() question

2017-05-17 Thread Owen Taylor
I'm not sure about your particular case, but in general, ink is not necessarily 
contained within the logical rectangle - there might be flourishes and tails on 
letters that overlap with the adjacent letters - the logical rectangle just 
determines spacing.

Try:
 pango-view --annotate=1 --text f --font 'DejaVu Sans Italic 80'
 pango-view --annotate=1 --text fo --font 'DejaVu Sans Italic 80'

And that should make it clear.
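
For reference, a minimal C sketch of querying both rectangles - this
assumes `layout` is a PangoLayout already set up with the text and font
in question:

    PangoRectangle ink, logical;

    pango_layout_get_extents (layout, &ink, &logical);

    /* Values are in Pango units; divide by PANGO_SCALE for device units. */
    g_print ("ink: x=%d y=%d w=%d h=%d\n",
             ink.x, ink.y, ink.width, ink.height);
    g_print ("logical: x=%d y=%d w=%d h=%d\n",
             logical.x, logical.y, logical.width, logical.height);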

Owen

- Original Message -
> For the ink and logical rectangles returned by
> pango_layout_get_extents(), is the ink rectangle supposed to be
> contained within the logical rectangle?  I'm seeing a case where this
> doesn't seem to be the case:
> 
>   text `W': ink: x=1024 y=2048 w=7168 h=6144, logical: x=0 y=0 w=7168 h=10240
> 
> Here, the ink and logical rectangles have the same width (7168), but
> the ink rectangle has an x offset of 1024 whereas the logical rectangle
> has an offset of 0.  This doesn't make sense to me, but perhaps I'm
> just not understanding the API.
> 
> Thanks and best regards,
> 
>   --david


Re: GSK review and ideas

2016-12-16 Thread Owen Taylor
On Fri, 2016-12-16 at 09:05 +0100, Alexander Larsson wrote:
> On tor, 2016-12-15 at 13:15 -0500, Owen Taylor wrote:
> > [...]
> > Just because we need to be able to repaint the whole toplevel
> > quickly for resizing and other animations, and just because games
> > repaint the whole scene - that should not drive an assumption that
> > there is no value to clipped updating. Clipped updating is not done
> > for speed - it's done to reduce power consumption and to leave
> > resources (GPU, CPU, memory bandwidth) for other usage.
> > 
> > If a copy of Evolution in the background is using only 25% of
> > system resources to update a small progress bar at 60fps, that's
> > not "60fps success!", that's "2W of power - fail!"
> 
> Well, things are not always so clear cut. Fundamentally, OpenGL
> doesn't have great primitives for clipping to a region of unbounded
> complexity.
> 
> First of all, generally you have to supply entire buffers that have
> valid content everywhere. If you're lucky you can use extensions like
> buffer age so that you can track which part of the back buffer is up-
> to-date, but that requires double or triple buffering, which itself
> brings up the memory use and possibly the power use.

I don't think we need to count on luck to have the buffer age extension
- it is widely available on Xorg and Wayland and we have the ability to
fix cases where it is missing. Yes, we want to run on GL on Windows/Mac
too, but we shouldn't tie our efficiency hands behind our back because
there might be some place where we have to redraw full frames.
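
(For context, a sketch of how the extension is queried per frame -
`egl_display` and `egl_surface` are placeholders for whatever handles
the backend holds:)

    EGLint age = 0;

    /* How many frames old are the back buffer's contents? */
    eglQuerySurface (egl_display, egl_surface, EGL_BUFFER_AGE_EXT, &age);

    if (age == 0)
      ; /* contents undefined: repaint the whole frame */
    else
      ; /* contents are 'age' frames old: repaint only the damage
         * accumulated over the last 'age' frames */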

Reusing old buffers shouldn't increase memory usage - you decide if you
are double buffering or triple buffering, and either way, once a frame
has finished being composited or scanned out, you simply reuse it instead
of freeing it.

> Secondly, if you're painting to an old buffer where you want to
> update only the damage region, then you need to guarantee that all
> your drawing is completely clipped to the damage region. If this is a
> complex region, or just say two small rects far from each other, then
> scissoring is not good enough to do clipping for you. The alternative
> then is something like stencil buffers, but that means clipping on
> the pixel level, so you have to do a lot of work anyway, submitting
> geometry that you don't know will be clipped or not.

I think there's a lot of value in just updating the bounding rectangle
of the damage. While pathological cases exist (two progressbars, one
in each corner of the window!), most of the time a minor update to a
window - whether a progress indicator or a blinking cursor - is
actually very confined.

My inclination is to track the full damage regions - the region
implementation is darn efficient when compared to modern hardware - and
only reduce to a bounding rectangle at the end.

This maintains flexibility and allows the possibility of using things like
https://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_swap_buffers_with_damage.txt
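
A sketch of that approach using cairo regions - which is what GDK
already uses for damage tracking; the rectangle variables here are
placeholders:

    cairo_region_t *damage = cairo_region_create ();

    /* Accumulate per-widget damage over the frame... */
    cairo_region_union_rectangle (damage, &progress_bar_rect);
    cairo_region_union_rectangle (damage, &cursor_rect);

    /* ...and reduce to a bounding rectangle only at the very end. */
    cairo_rectangle_int_t extents;
    cairo_region_get_extents (damage, &extents);

    /* 'extents' is what a scissor-based path would use; the full region
     * could instead be passed to eglSwapBuffersWithDamageKHR() where
     * that extension is available. */
    cairo_region_destroy (damage);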

- Owen

P.S. - when thinking about power consumption, not only does clipping
allow actually not processing pixels, if you combine it with culling,
you greatly reduce the number of objects you are walking over and the
amount of setup that is being sent to the rendering API.

> Still, I guess we should try to do this as well as we can, especially
> if we want to keep any decent performance for the software fallback
> case.



Re: GSK review and ideas

2016-12-15 Thread Owen Taylor
On Thu, 2016-12-15 at 16:26 +0100, Alexander Larsson wrote:
> This combined with the fact that OpenGL makes it very hard, flickery,
> and generally poorly supported to do damage-style partial updates of
> the front buffer means we should consider always updating the entire
> toplevel each time we paint. This is what games do, and we need to
> make that fast anyway for e.g. the resize case, so we might as well
> always do it. That means we can further simplify clipping in general,
> because then we always start with a rectangle and only clip by rects,
> which I think means we can do the clipping on the GPU.

Just because we need to be able to repaint the whole toplevel quickly
for resizing and other animations, and just because games repaint the
whole scene - that should not drive an assumption that there is no
value to clipped updating. Clipped updating is not done for speed -
it's done to reduce power consumption and to leave resources (GPU, CPU,
memory bandwidth) for other usage.

If a copy of Evolution in the background is using only 25% of system
resources to update a small progress bar at 60fps, that's not "60fps -
success!", that's "2W of power - fail!"

- Owen





Re: Leading Pango Metrics

2016-01-29 Thread Owen Taylor
On Thu, 2016-01-28 at 13:25 +0100, Alex Vazquez wrote:
> Hi! I have a question about getting the metrics of a font.
> If I understand correctly, the height of a font is
> ascent+descent+leading.
> With pango I can get the ascent and descent, but I can't get the leading.
> I tried calculating the leading with pango_layout_get_spacing(), but it
> returns 0.
> I also tried getting the leading with pango_layout_get_pixel_size(), but
> the result is ascent + descent.
> I'm using "Purisa" as the font.
> Can I calculate the leading of a font using pango?

"leading" isn't a property of a font, but rather of how it's used - and
does corresponding to pango_layout_get_spacing().

The standard for computer fonts is that the point size of the font -
say 12pt, is equal to the ascent+descent, and a lot of computer fonts
look pretty good "set solid" without extra leading, at least for short
line lengths.

However, what is a bit confusing is that you actually do end up with
visible space between lines in most cases. If you put a Å directly
underneath a g, they might touch, but if you stick to unaccented
characters, there is a gap between lines.
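
A sketch of querying those numbers from Pango, assuming a PangoContext
and PangoFontDescription are already at hand:

    PangoFontMetrics *metrics =
      pango_context_get_metrics (context, font_desc, NULL);

    /* ascent + descent is the "set solid" line height described above
     * (in Pango units); any extra leading is whatever you add yourself,
     * e.g. via pango_layout_set_spacing(). */
    int ascent  = pango_font_metrics_get_ascent (metrics);
    int descent = pango_font_metrics_get_descent (metrics);

    g_print ("solid line height: %d\n", ascent + descent);
    pango_font_metrics_unref (metrics);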

- Owen



Re: Dropping 'fringe' pixbuf loaders

2015-09-21 Thread Owen Taylor
Do we trust this code or not? If not, we should either a) sandbox it or b) 
delete it.

Moving less-trusted loaders into a separate repo is a blame-the-user or 
blame-the-os-vendor move, depending on who installs them onto the system.

- Owen

- Original Message -
> On Mon, Sep 21, 2015 at 8:28 AM, Matthias Clasen <matthias.cla...@gmail.com>
> wrote:
>
> > Before doing so, I want to ask if anybody is willing to step up and maintain
> > these loaders. Note that even if we drop these from gdk-pixbuf itself, they
> > can be maintained out-of-tree... one of the advantages of having loaders as
> > modules.
>
> Not stepping up to maintain those, but I really like Emmanuele's idea of
> splitting the other modules into a separate repository on git.gnome.org; I
> think there is value in keeping them all in a central location.
>
> Cosimo
> 


Notes on wip/gdk-gl2

2014-10-12 Thread Owen Taylor
I spent a bit of time looking at the wip/gdk-gl2 branch. In general, the
public API looks very good to me - minimal but sufficient. I think the
branch is basically good to land.

Performance on my system is actually quite poor at the moment, which
seems at least partly due to pathological interactions with the system
(i915, Fedora 21, Haswell). I see 60fps with the default configuration
of gdkgears at any windowed size, but when I maximize, the frame rate
drops to 30fps, and when I go fullscreen, it drops to 10fps. 'perf top'
shows kernel memory allocation as the top functions, so it may be that
the continual allocation of new render buffers is triggering the
problem.

I also see rendering locked at sub-multiples of 60fps - despite the
fact that the rendering is additionally synchronized with the compositor
frame clock. If I export vblank_mode=0 I see the expected non-locked
behavior.

* Docs for gdk_cairo_draw_from_gl() need to describe where the rendering
ends up (0, 0 of the current user coordinates, it seems)

* Docs for gdk_cairo_draw_from_gl should document that it disturbs the
current GL context.

* It looks like there's a need to create a GdkGLContext for a window
*before* the paint callback in which it is first used, since we use the
existence of the internal paint GL context to know whether we are using
GL for this paint; this is not documented.

* Does the paint GL context need to be always derived from the toplevel
or nearest native ancestor of a GdkWindow? It looks to me like things
might not work right if gdk_window_create_gl_context() is called on a
client side subwindow.

* GtkGLArea tries to keep the context on unrealize() unless the screen
changes; but this doesn't seem workable - first because of the need
for the context to be created shared with the internal paint context in
GDK, and second because realize() doesn't actually check if the context
already exists.

* The approach of continually creating the render buffer may not give
good enough performance, but if we do that, it's definitely desirable to
create a minimal sized render buffer instead of one that's the size of
the widget, since the cost of allocating a buffer gets larger the more
pages that have to be allocated.

* What's the intended event handling for GtkGLArea? It seems like you'd
have to put it into a GtkEventBox and handle events on the GtkEventBox -
maybe GtkGLArea should have an event window?

* The name gdk_gl_context_flush_buffer() is confusing to me -
end_frame() might be better.




Re: CSS Transitions

2013-05-08 Thread Owen Taylor
On Wed, 2013-05-08 at 10:58 +0100, Allan Day wrote:
> Hi all,
>
> Last week I had a go at adding CSS transitions to Adwaita. It was
> pretty easy to do, and the familiarity of CSS made it easy to get
> started. However, I encountered an issue which leaves me uncertain how
> to proceed.
>
> The problem is that CSS doesn't provide a way to specify transitions
> according to beginning and end states. Instead, each style class can
> have a transition associated with it, and it is triggered whenever
> that style appears.
>
> I can set an animated transition for pressed buttons, but that
> animation is used whenever the pressed button style appears,
> irrespective of the style of the button beforehand. The pressed button
> transition will be used when a window changes from being unfocused to
> being focused, for example (in which case all the buttons in the
> window look like they are being pressed at the same time), or when it
> changes from being insensitive to being sensitive.
>
> As a result of this issue, I'm not sure that I can make use of CSS
> transitions, which is a shame - the ability to animate between
> different widget states would definitely add to the user experience.

I think you can quickly get into prohibitively heavy complexity here,
which is presumably why CSS doesn't try to have the idea of start and
end states.

If I was handling this on the web, I'd probably do something like,
in setup:

  $(button).transitionEnd(function() {
      $(this).removeClass('pressing');
  });

When pressed:

 $(button).addClass('pressed').addClass('pressing');

In CSS:

 .button.pressed { background: red; }
 .button.pressing { transition: background 1s; }

I think we possibly should do something similar here. We could do
something like:

  gtk_style_context_add_temporary_class (button, GTK_STYLE_CLASS_PRESSING);

With the semantics of "temporary" being "removed when the last transition
finishes". I don't think you'd need many of these style classes to allow
most of what the designers want.

A generalization would be to automatically add extra temporary
pseudo-classes on changing state:

 .button:active-changing { transition: background 1s; }

Note that you can represent a transition in a particular direction as:

 .button:hover:hover-changing

So you don't need to represent that in the pseudo-class, but I'm worried
about the performance implications of having it on, in
particular, :backdrop.

- Owen





Re: On regressions and carelessness

2013-04-27 Thread Owen Taylor
Hi Tristan,

I'm sorry that you've had this experience - as someone who's been around
GTK+ a long time, I'm upset to see commit wars going on.

I could say a lot here, but I'll stop at saying that, with the exception
of emergency cases (blocking a release, build breakage), I expect
everybody to make sure that all other relevant people have signed off on
a consensus *before* a revert. This applies ten times as much to a revert
of a revert.

- Owen

On Sat, 2013-04-27 at 18:21 +0900, Tristan Van Berkom wrote:

> I am sorry to bore you all with this email. I've tried to resolve this
> in bugzilla and IRC and failed; if I am to have any trust in GTK+
> development practices, I must unfortunately share this conflict
> in public.
>
> Around a week ago, while I was tirelessly spending my evenings and
> weekends improving Glade, I noticed a height-for-width regression
> in GtkBin derived widgets.
>
> While this might not be serious or noticeable for many GNOME core
> applications, the regression sticks out like a sore thumb in Glade
> (since Glade uses wrapping labels for all of its property editors, in the
> interest of economizing space), and frankly I expect GTK+ to be much
> much more than just a toolbox for the current GNOME core applications.
>
> The regression was originally introduced in the 3.8 cycle with this commit:
>
> commit f4438a1ffc6aaab92fb6b751cd16e95c2abaa0e3
> Author: Jasper St. Pierre jstpie...@mecheye.net
> Date:   Thu Nov 8 19:13:52 2012 -0500
>
> Which was signed off by Benjamin Otte.
>
> My course of action was to fix the regression, as this is code of my
> own doing, and I spent many hours getting it right the first time. I
> understand that I have license to fix these things, but fixing it would
> not be enough, because if I just fix the regression, who's to say that
> this type of careless regression will not recur in the future?
>
> So, in the interest of notifying those responsible for the regression,
> I first opened this bug:
> https://bugzilla.gnome.org/show_bug.cgi?id=698433
>
> Naturally, I wanted to be sure that those who removed code and
> caused regressions will pay better attention in the future, so I put
> Jasper and Benjamin on CC explicitly in the bug report, in the hope
> that they will learn from this and be more careful in the future.
>
> So, I closed the bug after fixing it with this commit:
>
> commit b164df74506505ac0f4559744ad9b59b5ea57ebf
> Author: Tristan Van Berkom trista...@openismus.com
> Date:   Sat Apr 20 17:52:16 2013 +0900
>
> And all was well in the world again; labels wrapped and requested
> enough height inside their check buttons.
>
> Until yesterday, when I updated my local copy of GTK+ sources again
> and rebuilt Glade and found the nasty behaviour again.
>
> This was a blow to the face: the regression was silently re-introduced
> without re-opening bug 698433, without even acknowledging that
> there is a serious misbehaviour caused by this commit.
>
> After looking through the commit log today I find the offending commit:
>
> commit b8e4adfff9ba62330a71ea7498595227979bb4f0
> Author: Benjamin Otte o...@redhat.com
> Date:   Mon Apr 22 08:23:08 2013 -0400
>
> This looks very irresponsible to me, and is alarming for several
> reasons.
>
>  a.) It seems that the regression is only a matter of Benjamin's taste;
>      he does not like how things are implemented, and instead of
>      changing the implementation, he has simply removed code and
>      caused regressions.
>
>  b.) It seems that Benjamin's superiority complex transcends the
>      need for software that actually works. He would rather have
>      the last word and break GTK+ in doing so, than swallow
>      his own pride and live with some code he doesn't like, at
>      least until such time as he could replace it with code that works,
>      without introducing regressions in the meantime.
>
>      This is called "too cool for school".
>
>  c.) Worse still, he presumes to suddenly turn this into my own
>      problem. It is his prerogative that he remove code that does
>      not suit his taste, and that the regressions he causes should be
>      my own fault. That I should devote more of my own time to
>      change this implementation to his taste, for free as in beer.
>
> All I ask of you, dear fellow GTK+ developers, is that responsibility
> be taken for your own actions. If your code introduces a regression,
> you should be responsible for fixing that regression; it's not right
> to introduce regressions and expect that others clean up the mess
> you leave behind.
>
> Carelessness is something that we all practice at times, but we
> should strive to be less careless. If you read code and you are
> uncertain what it does, assume people meant well; don't assume
> that it's useless and can be safely removed. Removing code that
> you do not understand is almost certain to cause breakage.
>
> By all means, simplify code that you do not understand at first
> sight, by first understanding why it exists and then replacing 

Re: Baseline alignment ideas

2013-02-26 Thread Owen Taylor
On Tue, 2013-02-26 at 15:30 +0100, Alexander Larsson wrote:
 I don't really see any way to solve this generically. But maybe we can
 somehow limit our baseline support so that this works? For instance,
 we could always request the naural size for baseline aligned widgets
 and never grow them? Would that be enough in practical use? I dunno...

This is my suggestion - do something that handles the case of
single-line labels and buttons packed into a horizontal box or grid row
and don't add a single bit of complexity where we don't have a very
strong justification.

- Owen




Re: Answers to some comments about frame synchronization

2013-02-15 Thread Owen Taylor
On Fri, 2013-02-15 at 09:21 +0100, Alexander Larsson wrote:

> > In terms of using timeBeginPeriod() and timeEndPeriod(), unfortunately,
> > the GDK level API has no concept of a running animation, so it's not
> > clear when GDK would set up a period. We could add such a thing -
> > basically gdk_frame_clock_begin_continual_updates() - which would help
> > for style animations and gtk_widget_add_tick_callback(). It wouldn't help
> > if a higher level built on top of GTK+ has an interface like
> > window.requestAnimationFrame() or we're reacting to mouse events.
>
> We could add such an api though. For instance, we could have a
> refcounted api on the paint clock like gdk_paint_clock_use/unuse() such
> that any running animations would cause a clock use and thus higher
> timing resolution on win32.

I tried creating such an API - patch at:

 https://bugzilla.gnome.org/show_bug.cgi?id=693934

It's a reasonable addition, and in some ways an improvement, though it
makes the API a bit less minimal.

- Owen




Re: Answers to some comments about frame synchronization

2013-02-14 Thread Owen Taylor
On Thu, 2013-02-14 at 13:52 -0500, Alexander Larsson wrote:
> Some more feedback:
>
> Cut and paste doc bug:
>  * @GDK_FRAME_CLOCK_PHASE_FLUSH_EVENTS: corresponds to
>    GdkFrameClock::flush-events. Should not be handled by applications.
>  * @GDK_FRAME_CLOCK_PHASE_BEFORE_PAINT: corresponds to
>    GdkFrameClock::flush-events. Should not be handled by applications.
> The last one should be before-paint

Benjamin caught this too.

> Did you try this on win32? The default timer resolution there is ~16msec,
> which is about the 60Hz frame rate, which seems like it can cause some
> framerate instability. It's possible to temporarily raise this resolution
> by calling timeBeginPeriod(), although it's frowned upon to always do this
> as it raises total system cpu use. Maybe we could hook this up to the
> default paint clock, so that whenever we're doing regular animations we
> increase the timer resolution.

I haven't tested on anything but X11. My feeling is that we should just
switch g_get_monotonic_time() to using QueryPerformanceCounter() on
windows, and not worry about all the warnings you find on the internet
that on some old version of windows on some buggy bios that QPC jumps
as you switch between cores.

If it turns out that doesn't work, we can write some function that
combines GetTickCount() and QPC and sanity-checks, interpolates, etc,
but we really shouldn't do so without having demonstrated existence of
such buggy systems among our user base.
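
(For concreteness, a sketch of what "just using QueryPerformanceCounter()"
would look like - this is only an illustration of the approach, not
GLib's actual implementation:)

    #include <glib.h>
    #include <windows.h>

    /* Monotonic microseconds derived from QPC. (A real implementation
     * would cache the frequency and guard against overflow.) */
    static gint64
    monotonic_time_qpc (void)
    {
      LARGE_INTEGER freq, count;

      QueryPerformanceFrequency (&freq);
      QueryPerformanceCounter (&count);

      return count.QuadPart * G_GINT64_CONSTANT (1000000) / freq.QuadPart;
    }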

> I see that GtkTickCallback got a bool return value similar to GSource,
> which is bound to have the same kind of confusion wrt what value does
> what. Maybe we should have the G_SOURCE_REMOVE/CONTINUE equivalents
> already from the start to avoid this confusion.

Hmm, I didn't even know we added G_SOURCE_REMOVE / G_SOURCE_CONTINUE -
after 15 years it doesn't seem confusing to me!

The two possibilities here would be:

 * Document people to use G_SOURCE_REMOVE / G_SOURCE_CONTINUE - this is
   the maximally consistent approach.

 * Add enum GtkTickCallbackReturn
   { G_TICK_CALLBACK_REMOVE, G_TICK_CALLBACK_CONTINUE }. This has the
   advantage of compiler-enforced type safety, but do we really want to
   litter our code base with such two-element enums for every type of
   callback?

If consistency with timeouts/idles wasn't an issue, I'm not sure I'd
have a return value at all - it's always possible to just remove.
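
For illustration, the pattern as it looks with the API that eventually
shipped in GTK+ 3.8 - a sketch using the G_SOURCE_* constants discussed
above:

    static gboolean
    on_tick (GtkWidget     *widget,
             GdkFrameClock *frame_clock,
             gpointer       user_data)
    {
      gint64 now = gdk_frame_clock_get_frame_time (frame_clock);

      /* ...advance the animation state to 'now', then queue a repaint... */
      gtk_widget_queue_draw (widget);

      return G_SOURCE_CONTINUE; /* or G_SOURCE_REMOVE when finished */
    }

    /* In setup code; the returned id is what you later pass to
     * gtk_widget_remove_tick_callback (widget, id): */
    guint id = gtk_widget_add_tick_callback (widget, on_tick, NULL, NULL);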

> I think the motion compression is still mishandling motion events from
> different devices, so that if you get two motion event streams for the
> same window from two devices they will be compressed together.

Ah, yeah, forgot about that. Do you think it needs anything more complex
than the attached patch? I don't think getting continual streams of
events for two devices is going to be common, so I'm not sure it's worth
worrying about compressing interleaved streams.

> Also, there seems to be no compression of touch events, which seems kinda
> wrong, does it not?

I think that should certainly wait until we have real usage of touch
events to figure out. Emmanuele probably makes a good point that full
history is probably more commonly useful for touch than it is for mouse
motion where only painting programs actually care.

- Owen


From e7878c6f194de42de4e054176a9de6d64351bd63 Mon Sep 17 00:00:00 2001
From: Owen W. Taylor otay...@fishsoup.net
Date: Thu, 14 Feb 2013 14:51:33 -0500
Subject: [PATCH] Don't compress motion events for different devices

---
 gdk/gdkevents.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/gdk/gdkevents.c b/gdk/gdkevents.c
index 8e05a8e..4a9d0b9 100644
--- a/gdk/gdkevents.c
+++ b/gdk/gdkevents.c
@@ -268,6 +268,7 @@ _gdk_event_queue_handle_motion_compression (GdkDisplay *display)
   GList *tmp_list;
   GList *pending_motions = NULL;
   GdkWindow *pending_motion_window = NULL;
+  GdkDevice *pending_motion_device = NULL;
 
   /* If the last N events in the event queue are motion notify
    * events for the same window, drop all but the last */
@@ -288,7 +289,12 @@ _gdk_event_queue_handle_motion_compression (GdkDisplay *display)
           pending_motion_window != event->event.motion.window)
         break;
 
+      if (pending_motion_device != NULL &&
+          pending_motion_device != event->event.motion.device)
+        break;
+
       pending_motion_window = event->event.motion.window;
+      pending_motion_device = event->event.motion.device;
       pending_motions = tmp_list;
 
       tmp_list = tmp_list->prev;
-- 
1.8.0.2



Re: Answers to some comments about frame synchronization

2013-02-14 Thread Owen Taylor
On Thu, 2013-02-14 at 15:35 -0500, Alexander Larsson wrote:

> > I haven't tested on anything but X11. My feeling is that we should just
> > switch g_get_monotonic_time() to using QueryPerformanceCounter() on
> > windows, and not worry about all the warnings you find on the internet
> > that on some old version of windows on some buggy bios QPC jumps
> > as you switch between cores.
> >
> > If it turns out that doesn't work, we can write some function that
> > combines GetTickCount() and QPC and sanity-checks, interpolates, etc,
> > but we really shouldn't do so without having demonstrated the existence
> > of such buggy systems among our user base.
>
> I did a bunch of research on this, see the g_get_monotonic_time() win32
> implementation for it. I definitely don't think we can just rely on QPC
> as is. It's not monotonic, it will drift over time and I believe over e.g.
> cpu sleeps. If we use it we should slave it to the lower precision clock
> in some kind of PLL (this is what firefox does). I just couldn't be
> bothered with the complexity last time...

The Firefox bug linked to from the GLib comments is 5 years old, and
in following through to various links, all the problems people were
describing were with Windows XP. I'm not convinced that this is a
current problem. DwmGetCompositionTimingInfo() which is the primary
interface we might want to interact with uses QPC time, so my impression
is that Microsoft's view of QPC is that it gives you sane timestamps
along the lines of CLOCK_MONOTONIC on Linux, and not uninterpreted rdtsc
values.

> And anyway, the QPC time is just for reporting time. A poll() sleep (i.e.
> MsgWaitForMultipleObjectsEx) will still use the timeGetTime precision, so
> it does not help here.

True. One thing we may want to do on Windows is consider having
gdk_paint_clock_get_frame_time() return a nominal time that increases by
1/60th of a second each frame rather than using g_get_monotonic_time().
That will cover up some amount of timing skew, since we know that frames
go *out* at regular intervals.

This has to be done somewhat carefully - if you can only do 40fps, then
you *want* frame times that increase by 1/40th of a second instead of
quantizing to the 60Hz frame rate.

In terms of using timeBeginPeriod() and timeEndPeriod(), unfortunately,
the GDK level API has no concept of a running animation, so it's not
clear when GDK would set up a period. We could add such a thing -
basically gdk_frame_clock_begin_continual_updates() - which would help
for style animations and gtk_widget_add_tick_callback(). It wouldn't help
if a higher level built on top of GTK+ has an interface like
window.requestAnimationFrame() or we're reacting to mouse events.

- Owen




Answers to some comments about frame synchronization

2013-02-13 Thread Owen Taylor
Benjamin pasted these comments (along with some trivial stuff that I
just fixed) to me on IRC; I wanted to respond to them here.

- typedef struct _GdkFrameClock GdkFrameClock; should probably go in
  gdktypes.h so the #include of gdkframeclock.h becomes unnecessary
  (i.e. in gdkwindow.h)

- Should gtk_style_context_set_frame_clock() be public? If so, we want
  to disallow people overriding the frame clock if it's a widget's
  frame clock (people do weird things to style contexts :( ),
  otherwise we should just assume priv->widget->frame_clock or
  NULL. Though I like making style contexts animatable for
  non-widgets, so the API should probably stay.

If style contexts are supposed to be usable for non-widgets, it seems to
me that they should animate as well. That's why I wanted a public
ability to specify the paint clock for a style context. It should be
noted that gtk_style_context_should_animate() currently returns FALSE
if there is no widget for the style context ... I didn't change that,
but didn't want to encode that current limitation in the API - it
doesn't seem motivated by essential considerations.
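
(For concreteness, a sketch of the non-widget usage this enables, with
the setter public - `clock` stands for a frame clock obtained elsewhere,
e.g. from a GdkWindow via gdk_window_get_frame_clock():)

    /* A style context not backed by any widget... */
    GtkStyleContext *context = gtk_style_context_new ();

    /* ...can still drive its animations if told which clock to follow. */
    gtk_style_context_set_frame_clock (context, clock);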

- Is using ids the currently accepted way to handle callback
  connections? I'd have expected gtk_widget_remove_tick_callback
  (widget, tick_callback, user_data) API. But that might be using
  g_signal_handlers_disconnect_by_func() instead of using ids.

Removing by function isn't easily bindable by language bindings. Signals
don't work because tick callbacks aren't just notification, they also
have the semantic to run the frame clock and produce ticks. (And also
because they would require walking the entire widget tree on every
update and emitting signals on every widget.) It seemed to me that
copying the pattern of g_timeout_add(), etc, was the way to go. We could
add g_tick_callback_remove_by_func() for C convenience as well, but
that's probably API bloat.

- Multiple tick callbacks is a good idea? I think it probably is,
  just thought I'd ask.

As an example, if I had done GtkStyleContext animations using tick
callbacks, that shouldn't have blocked the widget or an application from
also using tick callbacks.

- recomputing the style should be done in GDK_FRAME_CLOCK_PHASE_UPDATE
  and we don't want a separate GDK_FRAME_CLOCK_PHASE_STYLE? Because if
  UPDATE hides/shows widgets, that might trigger restyling due to the
  CSS tree changing...

In the branch, style recomputation is done in the LAYOUT phase. It can't
be in the UPDATE phase because, as you say, UPDATE can do things that
will cause CSS tree updates. (Style animation is done in the UPDATE
phase.)

I don't have any strong reason for not having a separate COMPUTE_STYLES
phase other than it didn't end up being necessary in the GtkContainer
implementation.

- Owen




Re: Problems with un-owned objects passed to closures in pygobject (gtk_cell_renderer_text_start_editing)

2013-02-06 Thread Owen Taylor
Hi Simon,

I didn't see this thread earlier. I wanted to chime in to strongly
support the view that:

 * Floating references are C-convenience only

 * Languages can and should sink floating references when they
   first touch them.

 * Any interface or code in libraries that create problems with
   this are buggy.

This has been the view from day 1, and I think you'll create
considerably bigger problems trying to futz around with it and treat
floating references specially in a binding than you have now.
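
(In C terms, this is the familiar ref_sink contract - a minimal
illustration:)

    /* gtk_button_new() returns an object with a floating reference... */
    GtkWidget *button = gtk_button_new ();

    /* ...which the first owner converts into a normal reference. A
     * container does this implicitly in gtk_container_add(); a language
     * binding should do it the moment it wraps the object: */
    g_object_ref_sink (button);

    /* From here on, plain ref/unref semantics apply. */
    g_object_unref (button);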

On Tue, 2013-02-05 at 05:33 -0800, Simon Feltman wrote:
> For completeness, the two major problems are as follows:
>
> https://bugzilla.gnome.org/show_bug.cgi?id=687522
> This is a vfunc implementation which the gtk internals are basically
> expecting a floating ref from. Using the standard scheme just listed,
> we sink and own the created MenuToolButton. The held widget is then
> finalized at the end of the vfunc, returning an invalid object back to
> the caller. If we add an extra ref we get a leak because the method is
> marked as transfer-none. Example:
>
> class ToolMenuAction(Gtk.Action):
>     def do_create_tool_item(self):
>         return Gtk.MenuToolButton()

This is basically broken API at the GTK+ level :-( ... a virtual
function can't return (transfer none) unless it's a getter for an
existing field. It is expected that language bindings will have no way
to create a floating object, so a virtual function cannot expect to be
returned a floating object.

The only thing I can see at the GTK+ level would be to add a
make_tool_item replacement vfunc and use that instead if non-null.
There's a workaround at the application level, which is something like:

 def do_create_tool_item(self):
     button = Gtk.MenuToolButton()
     self.buttons.append(button)
     button.connect('destroy', self.remove_from_buttons)
     return button

we can document this bug in the gtk-doc and suggest the workaround
there. But I'd strongly suggest *not* doing wholesale changes to the
Python memory management based on this bug.

> https://bugzilla.gnome.org/show_bug.cgi?id=661359
> This is a very simple case of a widget as a parameter being marshaled
> as an in arg to a callback. But because the gtk internals have not yet
> sunk the floating ref for the editable parameter, PyGObject will do
> so. By the time the callback is finished, the editable will be
> finalized, leaving gtk with a bad object. It should really just be
> adding a safety ref during the lifetime of the wrapper and not mess
> with the floating flag.
>
> def on_view_label_cell_editing_started(renderer, editable, path):
>     print path
>
> renderer = Gtk.CellRendererText()
> renderer.connect('editing-started',
>                  on_view_label_cell_editing_started)

This one is simple: GTK+ needs to sink the arg before calling the
function. There should be no compatibility problems. The pygi patch on
the bug appears simply wrong and probably is creating the leak you
noticed.

- Owen

(I apologize for any duplication of earlier discussion on the thread)




Re: Frame synchronization open questions

2012-10-04 Thread Owen Taylor
On Thu, 2012-10-04 at 08:09 -0400, Alexander Larsson wrote:
> > Some open questions in my head about the frame synchronization work:

[...]

> > * For pausing the main event delivery, what we currently do is that
> >   we queue events but don't dispatch them. This could conceivably
> >   cause ordering problems for apps that use filters, or for work
> >   in GDK that is done at the translate stage - since we are not
> >   pausing translation, just pausing delivery. Alternatives:
>
> Are you sure that is really a problem? The x11 gdk_event_source_check()
> code will never even look for a message if there is any GdkEvent queued.
> And if there is nothing queued _gdk_x11_display_queue_events() will stop
> as soon as any X event was converted to a queued GdkEvent. And, since this
> queued event will not be read due to the pause, we will never process more
> X events until we resume events.

That gets into the details of exactly how I implemented event pausing,
but logically speaking, if event delivery is paused, and a new event
happens, it needs to be queued up in GDK, Xlib, or in the X server.
If we don't translate events, then either:

 * Event is queued in Xlib, XPending() starts returning TRUE
 * Event is queued in the X server, the file descriptor polls as readable

Either situation is going to make us start spinning currently. But leaving
that aside, we *need* to translate events or we'll never get _NET_WM_FRAME_DONE
and unfreeze event delivery.

My current preference is to just unfreeze all event delivery at the end of
drawing the frame, and not wait for _NET_WM_FRAME_DONE. Then we can make
pausing *really* pausing - not unqueue, not translate, etc. I don't see
any real disadvantage of doing things that way.

[...]

> What I do however worry about is the combination of multiple GdkPaintClocks
> and gdk_display_set_events_paused(). It seems to me that each paint clock
> assumes it has full ownership of the display, calling set_events_paused(FALSE)
> whenever it has finished its paint cycle. However, could there not be other
> paint clocks (say for another toplevel) active at that time?

Yes, this is an issue. However, if you switch to the above approach, then all
you have to do is to properly reference count the event-pausing - once we
reach the GDK_PRIORITY_EVENTS + 1 idle, we want to pause *all* event delivery
until we are done with new frame handling.

> Another thing I worry about is offscreen window animation. If you have a
> window with an embedded offscreen inside, then queueing a redraw on a widget
> inside the offscreen will cause a repaint cycle. When drawing to the
> offscreen this will generate damage events that will cause the embedding
> widget to repaint itself. However the damage events will be emitted during
> the paint phase, and the parent widget will not get these until the next
> frame. This will cause a delay of one frame which may look weird.

Hmm. So your concern is that the connection between the embedded
offscreen and the embedder is purely at the GTK+ level - if GDK_DAMAGE
events are not delivered to GTK+, then the embedder won't know to
repaint.

I think the normal way it would end up working is:

 offscreen paints a frame, damage events are generated
 damage events are delivered, new frame is queued for the embedder

 embedder paints a frame, waits for _NET_WM_FRAME_DONE
 offscreen paints a frame, damage events are generated
 damage events are delivered, new frame is queued for the embedder
 _NET_WM_FRAME_DONE arrives
  
 embedder paints a frame, waits for _NET_WM_FRAME_DONE
 offscreen paints a frame, damage events are generated
 damage events are delivered, new frame is queued for the embedder
 _NET_WM_FRAME_DONE arrives

So the offscreen will generally paint during the wait for
_NET_WM_FRAME_DONE, and immediately get incorporated into the next
output frame. Though once things get loaded down and slow, there is no
guarantee of this.

To do better than this, I think you need to slave the clock for the
offscreen into the clock for the window that is embedding it. If the
size of the embedded widget is free-floating and not constrained by the
allocation of the embedder, then the slave clock would simply do the
complete cycle inside an ::update handler. If you want integration
of size-negotiation, then you'd have to do different phases at different
points in the master clock cycle.

And yes, the delivery of damage as events is problematical for that kind
of slaving - maybe we would need to have a signal on GdkOffscreenWindow
to allow getting as-it-happens notification of damage.

- Owen




Re: Partial frame synchronization TODO

2012-10-04 Thread Owen Taylor
On Thu, 2012-10-04 at 08:18 -0400, Alexander Larsson wrote:
> > ? Implement paint throttling for the Broadway backend.
> >   (I'm not sure what this means exactly - the default
> >   throttling to 60fps may be OK.)
>
> Not sure what the best thing is here. If you have a low
> bandwidth connection then you would like to do a roundtrip
> to the client before sending the next frame to avoid just
> filling up the pipe with frames and then blocking on socket
> write.
>
> However, if you have a high bandwidth but high latency link
> then you could theoretically keep sending new frames and they
> would display fine on the client, although somewhat delayed.
> Doing the roundtrip in this case would just unnecessarily skip
> frames in the animation.

The client asynchronously acks the frames it receives, GTK+
uses those acks to estimate the latency and bandwidth of the connection,
you keep track of how full the link is, and at the point of saturation,
you avoid processing a new frame until there is room in the link for the
new frame.

Trivial! Could not possibly go wrong! ;-)

- Owen




Frame synchronization status

2012-10-03 Thread Owen Taylor
I've just pushed publically wip/frame-synchronization branches for
Mutter and GTK+ that include my long-delayed work to get proper
synchronization going between the toolkit and the compositor. This
is the work that I spoke about and demo'ed at GUADEC.

The patches are also in bugs:

GTK+: https://bugzilla.gnome.org/show_bug.cgi?id=685460
Mutter: https://bugzilla.gnome.org/show_bug.cgi?id=685463

For those who prefer to look at patches that way. The GTK+ patch is a
hybrid between my “modernizing the display loop” mail:
https://mail.gnome.org/archives/gtk-devel-list/2011-December/msg00082.html

and the work that Havoc started in:
https://mail.gnome.org/archives/gtk-devel-list/2010-October/msg4.html

I started from Havoc's work, removed some parts of it that didn't make
sense to me, then added multiple phases, layout, compositor
synchronization, and motion event compression.

I'll send out some follow up mails with more details about how the event
compression works and about remaining questions and TODO items.

- Owen




Frame-based motion event compression

2012-10-03 Thread Owen Taylor
One of the concepts of the paint-clock work is that it also should be
used for event delivery and compression. The idea is that instead of
continually delivering events and possibly never getting to updating,
we batch up events and deliver them right before doing the other work
of the frame. This allows to look ahead and compress consecutive
motion events.

This is the approach that was taken for Clutter. However, doing it
for GTK+ would create serious compatibility problems, because it's very
common in current GTK+ programs to add idle handlers that have a more
urgent priority than GTK_PRIORITY_RESIZE, and expect those handlers
to actually be run before resizing. I found about 70 idles being added
at a priority above RESIZE in the GNOME code checked out on my laptop.
It's hard to say how many of these actually are triggered from event
handlers, but some of them certainly are.

The alternate approach I took was instead to make the paint clock
install two separate idle handlers, so we have the following sources
from high to low in priority:

 G_PRIORITY_DEFAULT      GDK event handling   Events are processed; motion
                                              events are not flushed to GTK+
                                              until some other event comes in.
                                              Consecutive motion events are
                                              deleted.

 G_PRIORITY_DEFAULT + 1  ::flush-events       Pending motion event is flushed;
                                              event delivery is paused

 (added idle handlers can run here)

 GDK_PRIORITY_REDRAW     ::before-paint
                         ::update             Animations are updated
                         ::layout             Size request and size allocate
                         ::paint              Everything is redrawn
                         ::after-paint
                         ::resume-events      Event delivery is resumed

Pausing event delivery is necessary because if you don't pause it,
then when motion event delivery takes a significant amount of time and
more motion events arrive, you will return immediately to the event
source at G_PRIORITY_DEFAULT and never get to drawing.

The current implementation of this in my branch is fairly simplistic
in how it does motion event compression - it only ever compresses motions
if they are completely consecutive in the event queue - any other events,
even if they are completely unrelated to the mouse, will flush out motion
events and prevent compression. More sophisticated approaches are possible
but may not be necessary.

We've already done a lot of work at suppressing mouse lag in the GTK+
core, so there wasn't much noticeable effect for things like dragging
scrollbars, but the patch does make a huge difference for an artificial
test case I wrote that calls g_usleep() in its motion event handler.
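
(The test case itself amounts to little more than this sketch - the
sleep stands in for a slow handler, and the widget is assumed to have
the pointer-motion event mask set:)

    static gboolean
    on_motion (GtkWidget *widget, GdkEventMotion *event, gpointer user_data)
    {
      g_usleep (100 * 1000); /* pretend to do 100ms of work per event */
      return FALSE;          /* let the event propagate */
    }

    g_signal_connect (widget, "motion-notify-event",
                      G_CALLBACK (on_motion), NULL);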

- Owen




Partial frame synchronization TODO

2012-10-03 Thread Owen Taylor
Here's the dump of my current TODO list for finishing up the frame
synchronization work. If things on this list are things you
want to work on, speak up - there's more here than I'll be able to do in
any short amount of time.

Major Stuff Inside GTK+
===

* Implement paint synchronization for the OS X backend. This basically
  means that after we submit a frame we want to use CVDisplayLink to
  find out when it makes sense to start drawing the next one.

* Implement paint synchronization for the Windows backend. It's less
  clear how to do this - I suspect there are some sensible approaches
  using DwmGetCompositionTimingInfo() and QueryPerformanceCounter() to
  figure out an appropriate time to sleep before drawing the next
  frame, but it would take some experimentation.

* Implement paint synchronization for the Wayland backend. This may
  be straightforward if the protocol already has the right messages
  for frames being drawn, or may require protocol extensions.

? Implement paint throttling for the Broadway backend.
  (I'm not sure what this means exactly - the default
  throttling to 60fps may be OK.)

* When there is a frame where no painting ends up being done, we still
  at the moment are sending increments to the frame serial and waiting
  for _NET_WM_FRAME_DONE. It may be worth tracking when we are about to
  damage a toplevel window (by drawing or configuring it) and only at
  that point start a frame. Then we'll avoid asking the compositor to
  tell us when it's done painting a frame that involves no painting.

Minor stuff inside GTK+
===

* Rename GtkTimeline:progress-type to GtkTimeline:timing-function
  and sync definitions to be exactly the same as CSS if
  they aren't.

* Make GtkIconView, GtkTextView, GtkTreeView do the pre-layout
  layout handling in an ::update handler (is this right? should
  it happen after ::update and before redraw? connect-after
  to ::update?)

* Consider whether GtkIconView/GtkTextView/GtkTreeView should
  do the incremental validate step in ::after-paint rather than
  in a low priority idle. Doing it in a low-priority idle means that
  an animation could completely starve the validation.

* Figure out what to do with GtkEntry::recompute-handler

* Right now, we do scan scrolling (that is, dragging past the
  end to scroll) by adding a timeout and periodically advancing
  a jump that's influenced by how far the pointer is off the end.
  We possibly should do this scrolling in an ::update handler
  instead and make it smooth by advancing by a velocity * time rather
  than a fixed jump.

  (GtkComboBox, GtkIconView, GtkMenu, GtkTextView, GtkTreeView)

* Make GtkWindow not ever call gdk_window_process_updates() and
  always work within the paint cycle.

* Handle switching to a different window manager while we are waiting
  for _NET_WM_FRAME_DONE - I think this can cause a hang, though it may
  be that we'll always get an UnmapNotify event.

Outside GTK+


* Fix up Metacity, Mutter, gnome-canvas (evolution, gcompris,
  any other cut-and-pastes), WebKit for adding idle handlers
  either between GTK_PRIORITY_RESIZE and GDK_PRIORITY_REDRAW
  or at GDK_PRIORITY_REDRAW - this never really worked, and will
  work less well now.

* Do a good job on integration of GtkClutter with this system -
  both directions of embedding to figure out what API changes
  are needed.

* Hook up GStreamer to the paint clock - find out if any changes
  are needed.




Frame synchronization open questions

2012-10-03 Thread Owen Taylor
Some open questions in my head about the frame synchronization work:

* Is GdkPaintClock the right name? It might imply that it only has to
  do with painting and not with other things like layout.
  GdkFrameClock would be an alternative. GdkClock is possible but
  likely too generic.

* For pausing the main event delivery, what we currently do is that
  we queue events but don't dispatch them. This could conceivably
  cause ordering problems for apps that use filters, or for work
  in GDK that is done at the translate stage - since we are not 
  pausing translation, just pausing delivery. Alternatives:

  - Remove the file descriptor and don't unqueue events from the OS
queue until event delivery is unpaused. Since we can wait and
sleep currently while event delivery is paused, we have to be
careful that we don't spin in this case.

  - Unpause event delivery earlier - before we freeze waiting
for _NET_WM_FRAME_DRAWN. Then we don't need to worry about spinning
when there are OS events pending, since we'll never sleep with
event delivery paused.

* Do we need something like GtkTimeline but rawer - where you can
  just get updates and a raw elapsed time? Should we make
  GtkTimeline with a negative duration do this with the progress
  being the elapsed time?

* Is it OK for the paint-clock to be an immutable property set at
  GdkWindow construction time? Right now, it's mutable, but not
  notified, and not handled within gtk.

* Right now GdkPaintClockTarget only has a set_clock() method. Would
  it make sense to also have an update() method, and have the
  behavior that adding a paint clock target to a widget or directly
  to a GdkPaintClock implicitly requests the ::update phase until
  the target is removed? This would simplify the code in the places
  where I'm using GdkPaintClockTarget currently a bit, but I don't
  see implementing GdkPaintClockTarget directly as a common thing.




Re: GMenuModel has landed

2011-12-09 Thread Owen Taylor
On Fri, 2011-12-09 at 00:25 -0500, Ryan Lortie wrote:
> hi,
>
> On Thu, 2011-12-08 at 19:24 -0800, John Ralls wrote:
> > I think that you misunderstand how mac os works.
> >
> > Yes, a single menu bar is displayed at the top of the screen. This is
> > correct behavior according to Fitts's Law, because you can bang the
> > pointer to the top of the screen and it can't overshoot.
> >
> > No, applications are not limited to having a single menu bar. It's a
> > one-liner to switch menubars when a different window (or notebook tab,
> > for that matter) gets focus.
>
> This is obviously true from the fact that an application can detect
> which window has focus and the fact that the items in a menu can be
> changed, but it has to be done manually and is exceedingly uncommon in
> native mac os applications.

When designing for the Mac (or for a different global-menu-bar interface
like Unity), you probably don't want to make the set of menu options
bounce around depending on what window is selected.

But the jump from there to the idea that when you *do* have a per-window
menu, it should be the same for every application window seems
unwarranted.

Let's not fall into the fallacy that you can write one piece of code
without any conditionalization and have it be a well-designed UI for:

 Mac
 Windows
 GNOME
 Unity
 KDE

That's not possible, and we should concentrate on letting app developers
create applications that are competitive with native applications, even
if that means doing different things on different environments.

- Owen




Those darn gdk-pixbuf .po file conflicts

2011-07-28 Thread Owen Taylor
Anybody jhbuilding GNOME will have run into problems with .po file
conflicts in gdk-pixbuf, where building it causes local changes that
conflict with updates from translators. I finally got annoyed enough to
track down the problem.

The unique characteristics of gdk-pixbuf that cause these
problems are:

 * It uses the upstream gettext Makefile.in.in not the GLib
   Makefile.in.in or the intltool Makefile.in.in

 * The .pot file isn't checked into Git

The upstream Makefile.in.in is designed so that when the .pot file isn't
there, it's generated, and the .po files are updated a single time.

(The upstream Makefile.in.in also has another incompatibility with
the GNOME internationalization workflow - it runs update-po on 'make
dist')

Possible fixes:

 A) Check in a .pot file. But this leaves the 'update-po on dist'
problem. [This is the state of affairs of Clutter]

 B) intltoolize gdk-pixbuf, even though it doesn't need anything, so we
get a non-annoying Makefile.in.in. [This is the most common
thing in GNOME probably]
   
 C) Don't intltoolize gdk-pixbuf, but check some better
Makefile.in.in into git so autopoint doesn't replace it.
   [This is the state of affairs in GTK+. Just copying the
   Makefile.in.in from GTK+ would presumably work fine.]

B) is probably cleanest; I don't know if it will cause problems for
people [cross]building gdk-pixbuf with mingw or building on OS X.

I haven't suggested going back to glib-gettextize, since that's been
something people have been trying to get away from. 

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: About gsettings aborting on unkown schemas

2011-05-31 Thread Owen Taylor
On Fri, 2011-05-27 at 11:57 -0400, Matthias Clasen wrote:
 On Fri, May 27, 2011 at 7:42 AM, ecyrbe ecy...@gmail.com wrote:
  I just filed this bug : https://bugzilla.gnome.org/show_bug.cgi?id=651225
  Matthias closed it as wontfix, "this is by design"... i'm told that it's not a
  bug, it's a feature!

  So if my desktop is crashing it's a feature and nobody is willing to fix
  it? I really would like to have another answer than this one.
 
 The fix is to install the missing schema.

Sorry to pick on Matthias, by responding to his mail, but I think that
general-purpose thinking is preventing people from realizing that there
is a real problem that needs to be fixed here.

Yes, programs should not be written to run with missing schemas.
Yes, we don't have clean ways of raising exceptions for
programmer or system configuration errors.

But that doesn't change the fact that if from some interactive
environment (GNOME Shell looking glass console, python command line,
etc), I try to use GSettings and typo the name of a schema, it should
not, _should not_, make the entire environment go boom.

So, let's add an alternate API that allows for failure without going boom
and blowing up the world, and let's figure out how to get that hooked
up to languages with exceptions automatically. Yes, this is made harder
by the fact that it's a constructor, not a normal function, but it's
certainly doable; we just have to come up with a convention.

(The idea that comes to mind is that we add a construct-only property
that means "construct in an error state", and a getter to check
for and retrieve the GError.)
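
For concreteness, a sketch of how that convention could look - both the
property and the getter here are hypothetical, nothing like them exists
yet:

  GError *error = NULL;
  GSettings *settings;

  settings = g_object_new (G_TYPE_SETTINGS,
                           "schema", "org.gnome.NoSuchSchema",
                           /* hypothetical construct-only property */
                           "allow-failure", TRUE,
                           NULL);

  /* hypothetical getter for the stored construction error */
  if (g_settings_get_construct_error (settings, &error))
    {
      g_warning ("Bad schema: %s", error->message);
      g_clear_error (&error);
      g_object_unref (settings);
    }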

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Some comments on the StyleContext stuff

2010-12-06 Thread Owen Taylor
On Mon, 2010-12-06 at 12:52 +0100, Benjamin Otte wrote:
 - If all style properties are rgba, everything that uses GdkColor is
 fail. Widgets using GtkStyle will suddenly look wrong when we use
 translucency for background colors. Can we just remove GtkStyle,
 please? And deal with the fallout? It's not like people don't have to
 rewrite their draw functions anyway for GTK2 => GTK3 ...

This probably makes sense, but to point out the obvious, if this is
done, patches for every core GNOME module have to be landing within
minutes after the GtkStyle removal.

I'm going back to harping on this because 48 hours after the
deprecation of GtkStyle, there are still modules in core GNOME that
are failing to build - e.g. gnome-power-manager, because they have
the combination of GTK_DISABLE_DEPRECATED and -Werror -
and other modules that are building but likely miscompiling -
e.g. gnome-control-center - because they have GTK_DISABLE_DEPRECATED
without -Werror.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: right click *in* a menu

2010-12-06 Thread Owen Taylor
On Mon, 2010-12-06 at 13:34 -0500, Paul Davis wrote:
 Some users of my software raised this issue in the last 24hrs:
 
 -
 
   I still call `!' `pling'...
 
  I'm still missing the extremely handy RiscOS feature that right-click
  on a menu allowed to make a selection without closing the menu. Such
  a thing in GTK would have me saved hours of re-opening nested menus
  in Ardour.
 
  Ciao,
 
 I'm wishing for that as well all the time. I wonder who came up with the
 idea that there's only ever one thing you want to do in a menu. Didn't
 know anyone was clever enough to implement something as complex as
 don't close menu if right-clicked or don't close if modifier is
 pressed when menu-item is clicked.
 
 It's one of those obvious user interface doh's.

For check and radio buttons, you can hit space to toggle them without
closing the menu.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Shrinking and growing widgets in GTK+ 3.x

2010-12-02 Thread Owen Taylor
On Thu, 2010-12-02 at 00:56 +, Bastien Nocera wrote:
 Heya,
 
 One of the features of Totem (and one implemented in a number of movie
 players) is to resize the video to match the actual video size (or a
 multiple of it). For example, a 320x480 video would see Totem resize its
 video canvas to 480x320, if the preference is enabled.
 
 The old GTK+ 2.x code did that by:
 - shrinking the toplevel to 1x1 (so all the widgets in the window get
 their minimum size set)
 - wait for the size-request
 - request our real size (480x320) in the size-request signal
 - and unset that size request in an idle (so that the window can be
 shrunk)

Is there some reason that your real size is computable only in the
size request signal?

The simple thing to do would be something like:

  gtk_window_set_geometry_hints (window,
                                 video_canvas,  /* geometry_widget */
                                 NULL, 0);      /* hints / mask */
  gtk_window_resize_to_geometry (window, 480, 320);

Then code in GtkWindow will take care of figuring out what that means
for the size of your toplevel.

[ I'm not 100% sure how this will interact with the GtkPaned usage
  in Totem without more thought and looking at the GtkPaned code. ]

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Doubts about GPeriodic

2010-10-25 Thread Owen Taylor
[ Reply abbreviated to a couple of topics where I had firmer answers ]

On Sat, 2010-10-23 at 17:42 -0400, Havoc Pennington wrote:

 On Sat, Oct 23, 2010 at 3:37 PM, Owen Taylor otay...@redhat.com wrote:
   - We should not start painting the next frame until we are notified
the last frame is complete.
 
 Does frame-complete arrive when we just did the vsync i.e. last
 frame is just now on the screen?
 
 We can dispatch other stuff while we wait for this, right? Does the
 time between sending off the buffer swap, and getting the frame
 complete back, count as time spent doing "other stuff"? I guess that
 would roughly mean "if paint finishes earlier than it had to, get
 ahead on other stuff in the meantime" - the wait-for-frame-complete is
 a way to take advantage of any time in the 50% designated for painting
 that we didn't need.
 
 I mean, presumably while waiting for frame-complete the main loop is
 going to run, the question is just whether that time gap factors into
 any of the time calculations.

The time between when we finish the frame and when we receive frame
complete is some unknowable mix of:

 - CPU time in the kernel validating render buffers
   (or in an indirect rendering X server, I suppose)
 - Time waiting for the GPU to finish
 - Time until VBlank occurs

Only the first is conceivably something we want to balance with other
stuff, and even that is likely running on another core these days and
more so in the future.

So, what we are trying to balance here is:

 A) The time in event-processing, animation update, layout, and paint
up to the point we call glXSwapBuffers() (or XCopyArea() in the X
case)
 B) The time we spend processing other stuff from the point where
we called glXSwapBuffers()

[...]

  So, there's some appeal to actually base it on measured frame times.
  Using just the last frame time is not a reliable measure, since frame
  painting times (using "painting" to include event processing and relayout)
  are very spiky. Something like:
 
 I had awful luck with this. (I did try the averaging over a window of
 a few frames. I didn't try minimum.)
 It's just really lumpy. Say you're transitioning from one screen to
 another, on the first frame maybe you're laying out the new screen and
 uploading some textures for it, and then you have potentially very
 different repaint times for the original screen alone, both screens at
 once during transition, and the final screen. And especially on crappy
 hardware, maybe you only get a few frames in the whole animation to
 begin with. Minimum might be more stable than average. Another issue
 with animation is you don't know the average until you're well into
 the animation

For the purposes of balancing, it doesn't matter if we have an accurate
estimate. If the animation has more complexity and takes longer to paint
than the pre-animation state, that means we're just balancing a bit more
toward animation than in the pre-animation state until we get new
statistics. (I don't think animations typically have *reduced*
complexity, because the animation has a mixture of pre-animation GUI
elements and post-animation GUI elements.)

What we don't want is to be thrown way off - if the first animation
frame takes 100ms to layout because we have a bunch of new text to
measure, we don't want to eat another 100ms doing background processing.
This is why I suggested a minimum over several frames. (Or detecting
first frames by looking for keystrokes and button presses.)
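
Something like this is what I have in mind - a minimal sketch with
hypothetical names, not proposed API:

  #define N_FRAMES 3

  static gint64 paint_times_us[N_FRAMES];  /* filled in after each frame */

  /* Budget for "other stuff": the *minimum* paint time over the last
   * few frames, so one expensive first frame doesn't inflate it. */
  static gint64
  other_stuff_budget (void)
  {
    gint64 budget = paint_times_us[0];
    int i;

    for (i = 1; i < N_FRAMES; i++)
      budget = MIN (budget, paint_times_us[i]);

    return budget;
  }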

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Doubts about GPeriodic

2010-10-23 Thread Owen Taylor
On Fri, 2010-10-22 at 19:30 -0400, Havoc Pennington wrote:
 Hi,
 
 On Fri, Oct 22, 2010 at 4:48 PM, Owen Taylor otay...@redhat.com wrote:
  I think we're largely agreeing on the big picture here - that priorities
  don't work so there has to be arbitration between painting and certain
  types of processing.
 
 Right, good. The rest is really just details - there are various ways
 it could work.
 
 As I wrote this email I realized I'm not 100% clear how you propose
 the 50/50 would work, so maybe it's something to spell out more
 explicitly. There's no way to know how long painting will take, right,
 so it's a rule for the "other stuff" half? Do you just mean an
 alternative way to compute the max time on non-painting tasks (half of
 frame length, instead of 5ms or until frame-complete comes back)?

I hadn't really worked it out to the point of an algorithm, but let me
see if I can take a stab at that.

My starting point is that:

 - We should not start painting the next frame until we are notified
   the last frame is complete.

 - Once we are notified the last frame is complete, if the 
   "other stuff" queue is empty, we should start painting the next
   frame immediately - we shouldn't hang around waiting just in case
   something shows up.

So the question is how long after frame completion we should keep on
processing other stuff before we start painting the frame. The target
here is the 50% rule - that we want to roughly balance the time to paint
the frame with the time that we spend processing everything else before
processing the frame.

The simplest technique we could take is to say that when we have
contention, processing "other stuff" is limited to 0.5 / (refresh rate)
seconds (roughly 8ms for the standard 60Hz refresh). This works out
pretty well until the paint time gets big. Picking a bunch of
arbitrary data points:

 paint time   other time   fps   work fraction
 ==========   ==========   ===   =============
  1ms         15ms         60    94%
  8ms          8ms         60    50%
 10ms         22ms         30    68%
 17ms         15ms         30    47%
 20ms         12ms         30    38%
 24ms          8ms         30    33%
 40ms         10ms         20    20%
 55ms         11ms         15    20%
 90ms         10ms         10    10%

But what this does mean is that there is a cliff across different
systems here that's even worse than it looks from above. Take a very
non-extreme example - if I'm testing my app on my laptop, maybe painting
is taking 20ms, and I'm getting a reasonable 30fps. I give it to someone
with a netbook where CPU and GPU are half the speed and painting takes
40ms. The framerate drops only to 20fps but the time for a background
operation to finish increases by 3.8x. The netbook user has half the CPU
and we're using only half that half to do the background work.

(This type of thing can happen not just because of a slow system, but
because of other different conditions - the user has more data, has a
bigger screen, etc. The less predictable the situation, the more we need
to make sure that things degrade gracefully. A GTK+ application running
on a user's system is a pretty unpredictable situation.)

So, there's some appeal to actually base it on measured frame times.
Using just the last frame time is not a reliable measure, since frame
painting times (using "painting" to include event processing and relayout)
are very spiky. Something like:

 - Average time over last three frames
 - Minimum time over last three frames
 - Average time over last three frames where only motion events were
   delivered

Probably works better. Once you have a historical frame time estimate,
then you limit the total "other stuff" time (before and after frame
completion) to that time.

 paint time   other time   fps    work fraction
 ==========   ==========   ====   =============
  1ms         15ms         60     94%
  8ms          8ms         60     50%
 10ms         22ms         30     68%
 17ms         33ms         20     65%  (was 30fps, 47%)
 20ms         30ms         20     60%  (was 30fps, 38%)
 24ms         26ms         20     52%  (was 30fps, 33%)
 40ms         60ms         10     60%  (was 20fps, 20%)
 55ms         61ms          8.6   52%  (was 15fps, 20%)
 90ms         93ms          5.5   51%  (was 10fps, 10%)

  But pathological or not, I think it's also common. This is where my
  suggestion of a 50% rule comes in. It's a compromise. If we're lucky
  repainting is cheap, we're hitting the full frame rate, and we're also
  using 75% of the cpu to make progress. But when drawing takes more time,
  when there is real competition going on, then we don't do worse than
  halve the frame rate.
 
  (This continues to hold in the extreme - if redrawing is *really* slow -
  if redrawing takes 1s, then certainly we don't want to redraw for 1s, do
  5ms of work, redraw for another 1s, and so forth. Better to slow down
  from 1 fps to 0.5fps than to turn a 1s computation into a 3 minute
  computation.)

Re: Doubts about GPeriodic

2010-10-22 Thread Owen Taylor
If we say that painting should have a higher priority than IO
completions and IPC, or that IO completions and IPC should have a higher
priority than painting, then we are talking about a hard priority
system. And the fundamental rule of hard priority systems is that the
stuff with higher priority has to be well behaved. If PulseAudio uses
real-time priorities to get itself scheduled ahead of everything else,
then it must make sure that it's not eating all the CPU.

Is painting well behaved? Inherently - no. We can easily get in
situations where we can spend all our time painting and no time doing
anything else. Once we add synchronization to an external clock,
painting becomes *better behaved*. If we are able to paint at 80fps or
40fps, then that will be throttled to 60fps or 30fps and there will be
some time remaining. But maybe we can inherently paint at 61fps? If
we make painting highest priority, we have to make provisions for other
stuff to progress.

Are IO completions and IPC well behaved? Well, that's really up to the
application; however, they have to be *somewhat* well behaved in any
case. If I have a GIO async callback that fills a treeview, there is one
pathology where my callback gets called so frequently that we never get
get to repaint. But what may happen instead is that I get so much data
in a *single* callback that I block the main loop for an unacceptably
long period of time. So we always will have the requirement that
callbacks from the main loop must be *individually* short. Making IO
completions and IPC highest priority makes this requirement a bit more
stringent - it means that callbacks from the main loop must be *in
aggregate* short. That callbacks from the mainloop aren't allowed to do
expensive stuff, but instead must queue it up for an idle at lower
priority.

While a two-part system like this sounds like a huge pain for
application writers - it does have the big advantage that everybody gets
a say. If we just cut things off after a fixed time and started
painting, then we could end up in a situation where we were just filling
the treeview and painting, and never processing D-Bus events at all. 

Well, sort of - the main loop algorithms are designed to protect against
this. *All* sources at the current priority are collected and dispatched
once before we check again for higher priority sources. But since
sources have different granularities and policies, the effect would be
somewhat unpredictable. The behavior of the GDK event source is to unqueue
and dispatch one X event per pass of the main loop; D-Bus and GIO
probably do different things.

Right now event compression in Clutter counts on events getting unqueued
from the X socket at the default priority and then stored in an internal
queue for compression and dispatching before painting. Going to a system
where painting was higher priority than normal stuff would actually
require 3 priorities: Event queueing, then painting, then everything
else. [*]
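
Roughly, with illustrative numbers (not real GDK priorities; in a
GMainContext, numerically smaller priorities dispatch first, and
fill_chunk_cb is a hypothetical background task):

  #define PRIORITY_EVENT_QUEUEING  -100  /* unqueue X events into internal queue */
  #define PRIORITY_PAINT              0  /* run the paint cycle */
  #define PRIORITY_OTHER_STUFF      100  /* treeview filling, GIO, D-Bus, ... */

  g_source_set_priority (x_event_source, PRIORITY_EVENT_QUEUEING);
  g_source_set_priority (paint_source,   PRIORITY_PAINT);
  g_idle_add_full (PRIORITY_OTHER_STUFF, fill_chunk_cb, model, NULL);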

But can we say for sure that nothing coming in over D-Bus should be
treated like an event? Generally, anything where the bulk of the work is
compressible is better to handle before painting.

An example: If we have a change notification coming over D-Bus which is 
compressible - it's cheap other than a triggered repaint. Say updating
the text of a label. And combine that with our GtkTreeView filler, then we
might have:

 Fill chunk of tree view
 Change notification
 Repaint tree view and label
 Fill chunk of tree view
 Change notification
 Repaint tree view and label
 Fill chunk of tree view
 Change notification
 Repaint tree view and label

Instead of:

 Queue stuff up for filling 
 Change notification
 Change notification
 Change notification
 Fill chunk of tree view
 Repaint tree view and label
 Fill chunk of tree view
 Repaint tree view
 Fill chunk of tree view
 Repaint tree view
 
On Thu, 2010-10-21 at 16:25 -0400, Havoc Pennington wrote:

[...]

 Re: frame-complete, it of course assumes working drivers... If you
 don't have the async frame completed signal you may be back to the 5ms
 thing, no? I guess with direct rendering you are just hosed in that
 case... with indirect you can use XCB to avoid blocking and then just
 dispatch for 5ms, which is what we do, but with direct rendering you
 might just have to block. Unless you're using fglrx which vsyncs but
 does not block to do so (at least with indirect, not totally sure on
 direct). Sigh. The driver workarounds rapidly proliferate. Maybe
 clutter team already debugged them all and the workarounds are in
 COGL. :-P

I think the basic assumption here is working drivers. If we know what we
want, define what we want precisely, implement what we want in the free
drivers, the proprietary drivers will eventually catch up. Of course,
for GTK+, it doesn't matter, since it's not directly vblank swapping.

[...]

 You were talking about handling incoming IPC at higher priority than
 repaint... it sort of depends on what the IPC is 

Re: Doubts about GPeriodic

2010-10-22 Thread Owen Taylor
I think we're largely agreeing on the big picture here - that priorities
don't work so there has to be arbitration between painting and certain
types of processing.

I think the points where we aren't entirely aligned are: what is a suitable
method of arbitration, and whether the arbitration is something that
happens normally, or you have to opt into it as a background task.

About the method of arbitration: if we look at the idea of reserving a
fixed 5ms or painting during the "waiting for completion" gap, that
works well if the computation and painting are essentially unrelated -
if we are painting a GtkExpander expanding smoothly while we are filling
a treeview. It doesn't work so well if the painting is being triggered
*by* the computation. If we are using 12ms of CPU to relayout and
repaint and only filling the treeview in the intermediate 4ms, then
we've increased the total time to complete by a factor of 4.

Of course, having each chunk of a large computation trigger the same
amount of compressible paint work is pathological - ideally we'd be in a
situation like incrementally laying out a GtkTextView - only the first
chunk triggers a full repaint, subsequent chunks only cause the
scrollbar to redraw and the scrollbar redraw is cheap.

But pathological or not, I think it's also common. This is where my
suggestion of a 50% rule comes in. It's a compromise. If we're lucky
repainting is cheap, we're hitting the full frame rate, and we're also
using 75% of the cpu to make progress. But when drawing takes more time,
when there is real competition going on, then we don't do worse than
halve the frame rate.

(This continues to hold in the extreme - if redrawing is *really* slow -
if redrawing takes 1s, then certainly we don't want to redraw for 1s, do
5ms of work, redraw for another 1s, and so forth. Better to slow down
from 1 fps to 0.5fps than to turn a 1s computation into a 3 minute
computation.)
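
In pseudo-C, the arbitration I'm describing - a sketch only, where
paint_one_frame() and dispatch_one_chunk() stand in for the real
machinery:

  static void
  frame_cycle (void)
  {
    gint64 start = g_get_monotonic_time ();
    gint64 paint_time, deadline;

    paint_one_frame ();  /* events, animations, layout, repaint */

    paint_time = g_get_monotonic_time () - start;
    deadline = g_get_monotonic_time () + paint_time;

    /* give "other stuff" as long as the frame took to paint: worst
     * case we halve the frame rate, best case the queue drains and
     * we go back to painting immediately */
    while (g_get_monotonic_time () < deadline && dispatch_one_chunk ())
      ;
  }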

[...]

  Are IO completions and IPC well behaved? Well that's really up to the
  application however, they have to be *somewhat* well behaved in any
  case.
 
 What's hard I think is to make them well behaved in the aggregate and
 on every single frame.
 
 i.e. it's hard to avoid just randomly having too much to dispatch
 from time to time, then you drop 3 frames, it just looks bad. But as
 long as you're OK *on average* this can be solved by spreading the
 dispatch of everything else across more than one frame, instead of
 insisting on doing it all at once.

I think this is a very good point, especially when we are trying to keep
a video from stuttering or similar cases where the redrawing is
unrelated to the painting. Unfortunately, however, we can't spread
things out if the work occurs at the layout stage - not an uncommon
circumstance.

(There may be a slight overestimation going on about how bad it is to
drop frames - early versions of Clutter and of the Clutter/Tweener
integration just didn't handle the computations correctly, so dropped
frames were causing velocity stutters.)

[...]

  While a two-part system like this sounds like a huge pain for
  application writers - it does have the big advantage that everybody gets
  a say. If we just cut things off after a fixed time and started
  painting, then we could end up in a situation where we were just filling
  the treeview and painting, and never processing D-Bus events at all.
 
 If painting is a higher than default priority you could still add
 sources at an even higher priority, or you could hook into the paint
 clock in the same place events, resize, etc. hook in to force some
 queue to be drained before allowing paint to proceed.

Yes, the X event queue could be force-drained before painting (with
considerable adaptation to the current interfaces for doing things like
GTK+/Clutter integration.)

 Also if you have a handler that just does your whole queue of whatever
 at once, it effectively does run on every frame and compress it all,
 even if it's an idle - since the main loop can't interrupt a dispatch
 in progress, and the gap means that we'll probably run the dispatch
 handler once on all nonpaint sources in a typical frame.

[...]

 dbus works like the GDK source (because it copied it). One message per
 dispatch at default priority. I'm not sure how gdbus works.
 
 I think what dbus does works well, as long as painting is 1) above
 default priority and 2) not ready for dispatch for at least some time
 during each frame length.
 
 The thing is that as long as everything but painting is basically
 sane, then the "up to 5ms or while waiting for vsync" gap is going
 to be enough to dispatch everything. If you get a flood of dbus
 messages or whatever though, then you start spreading those over
 frames (but still making progress) instead of losing frames
 indefinitely until you make it through the queue.

 It's just less bad, to spread dispatching stuff out over a few frames
 if you get a flood, than it is to drop a few frames.

Certainly in the case 

Re: Doubts about GPeriodic

2010-10-22 Thread Owen Taylor
On Fri, 2010-10-22 at 16:20 -0400, Havoc Pennington wrote:

 Imagine two processes that are both following the rules and have 10
 streams open to each other and they are both processing all 10 at a
 superfast rate just tossing messages back and forth. What's the
 latency between occasions where these processes have 0 sources to
 dispatch? That drives your framerate. While 10 streams between two
 apps sounds contrived, I don't think one big complex app with say a
 sound server and some downloads in the background and some other
 random main loop tasks is all that different.

At some point we do have to realize that preemptive multitasking was
invented for a reason. We can play around the edges, but we can't make a
single thread able to smoothly do 5 things at once.

That may sound weird coming from me - considering that the gnome-shell
approach is to put everything in a single process and write it in
Javascript.

But as I see gnome-shell it is limited in scope - it's the compositor,
it handles selecting and switching tasks. But it isn't playing movies,
it isn't loading web pages. It isn't doing your taxes. If it does start
doing any of those things, we'll have to answer the question of how to
get those activities into a different thread, into a different process.

This is why, as web browsers become more and more application containers,
you are seeing a move to isolate the pages from each other - separate
threads, separate garbage collection, even separate processes.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Doubts about GPeriodic

2010-10-21 Thread Owen Taylor
On Thu, 2010-10-21 at 08:17 -0400, Havoc Pennington wrote:
 Hi,
 
 On Thu, Oct 21, 2010 at 5:46 AM, Ryan Lortie de...@desrt.ca wrote:
 
  What about non-input events, though?  Like, if some download is
  happening and packets are coming in and causing dispatches from the
  mainloop that we do not have control over.
 
 I brought this up a bit in the earlier thread.
 
 My takeaway is that for I/O type stuff you usually want what we ended
 up with at litl, which is to limit it to some length of time per
 frame. Unfortunately GMainLoop has no native way to do that. I
 described our solution a bit in the old paint clock thread.

 There's a danger both of some random download starving animation and
 of your download spinner starving the download.

I think to start off we have to realize that a GTK+ application is
significantly different from a compositor like Mutter or the litl shell
in a number of ways:

* GTK+ is quite efficient when just a small amount of stuff is
  changing. Even if the entire toplevel takes a long time to paint,
  a cheesy animation somewhere in the frame isn't going to cause
  all the time to be spent painting.

* GTK+ is not going to be using a blocking glSwapBuffers(); GTK+ will
  be timing frames either based on a straight-up timer, or by getting
  frame-complete signals back from the compositor.

* It's not the compositor - if painting blocks, it's not the end
  of the world.

Once we move beyond that, then I'm skeptical about lumping everything
that's not events/animation/relayout/repaint into the same bucket.
Everything else includes a number of different things:

* UI updates being done in response to asynchronous IO finishing

  In this case, I think usually you just want to do the updates
  immediately; for most UI updates the real expense is the relayout/
  repaint, so there's no advantage to trickling them in... if you
  get such a bunch of updates that you block for a couple hundred
  ms, then you just accept a small stutter.

  If that might be a couple of seconds, then I think it's up to
  the app author to figure out how to fix the situation - if updates
  can be batched and batching reduces the work that needs to be done,
  then an easy-to-use "before relayout" API is handy.

* Computations being done in idle chunks because threads are evil.

  If the computations don't affect the GUI, then in my mind they
  should just happen in whatever time isn't needed to draw whatever
  is going on. We have no way of knowing whether whatever is going
  on is a spinner or is a video playing.

  In other words, progress displays need to be self-limiting to eat
  only a small amount of CPU. After all, it's pretty bad if my
  computation is going on at *half*-speed because of the progress
  spinner!

* Servicing incoming IPC calls

  Assuming incoming calls queue up, I think it's fine to just handle
  them at higher priority than the repaint.

  The pathological case here is that Totem is playing a movie which
  is maxing out the frame rate, and somebody in another process does
  sync calls:

    for (movie in allMovies)
      movie.length = totemProxy.getPlayingTime(movie.id);

  And Totem handles one call, then paints a frame, then handles another
  call and the whole thing takes forever. This is clearly bad, but I
  don't think the solution is for totem to reserve 5ms for ever 
  frame of every movie just because someone might start using the
  D-Bus API it exports. Solutions here are general solutions:

   - Don't put service API's in the GUI thread of GUI applications
   - Use async calls - if the above was done by making a bunch of
 async calls in parallel, it would be completed in one frame.

* Expensive GUI work done incrementally (adding thousands of items
  to a GtkTreeView, say). Threads are not useful because GTK+ isn't
  thread-safe.

  This one is slightly harder because each update can actually trigger
  a relayout/repaint, which might be expensive. So if this is being
  done at idle priority, you may be in the situation of "do one chunk,
  which takes 0.1ms, repaint for 20ms, do another chunk", and so forth.

  This is the case where something like your proposal of reserving
  time per frame starts making sense to me. But rather than just doing
  a blanket reservation of 5ms per frame, it seems better to actually
  let the master clock know what's going on. To have an API where you
  add a function and the master clock balances calling it with relayout.
 
  That a) avoids wasting time waiting for nothing to happen
  b) allows better handling of the case where the relayout takes 100ms
  not 20ms so you don't work for 5ms, relayout for 100ms, repeat.
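
For concreteness, the kind of API I mean - entirely hypothetical,
nothing like this exists today:

  /* Return FALSE when the work is finished. The clock calls this with
   * a time budget it computed by balancing against measured relayout/
   * repaint times, instead of the caller guessing at a fixed 5ms. */
  typedef gboolean (* GtkClockWorkFunc) (gpointer user_data,
                                         gint64   budget_usec);

  void gtk_master_clock_add_work (GtkClockWorkFunc func,
                                  gpointer         user_data,
                                  GDestroyNotify   notify);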

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: New rule

2010-10-21 Thread Owen Taylor
On Fri, 2010-10-22 at 02:26 +0900, Tristan Van Berkom wrote:
  Except that we're talking about applications that are in the core
  desktop (gnome-bluetooth, gnome-power-manager, gnome-color-manager,
  gnome-packagekit, gnome-control-center), or in the default applications
  (totem in my case, which also got bitten by the sizing changes, and that
  I have no idea how to fix [1]).
 
 And having core desktop modules depend on an API that's still unfinished
 and still unstable is a good idea... because ?

There's three things going on here:

 * As Bastien said, having the core of the GNOME desktop ported to 
   GTK+ 3 is incredibly useful validation of the new APIs.

 * If GNOME 3 is going to ship against GTK+ 3, it has to be ported now.
   We can't wait until 3.0 is out in December or January and then
   start porting.

 * The decision to base GNOME 3 on GTK+ 3 was made when it still looked
   like GTK+ 3 was going to be an ABI and minor API changes release.

   The fact that major API changes are being made now isn't necessarily
   a bad thing ... I was never too happy with the idea of a GTK+ that
   just broke ABI for no good reason. 

   *But* it's unexpected, it doesn't really fit in with the planned
   release schedule, and causes problems for trying to actually
   work on the user parts of GNOME 3.

I think there's definitely a need for the people working on GTK+ 3 to be
respectful of GNOME 3, to make sure that making GTK+ 3 better doesn't
make GNOME 3 worse. That doesn't mean not making API changes, but it
does mean:

 - Making sure there is information out about how to fix applications
   that need to be fixed. If updating the porting guide is too hard
   to do immediately, or you are changing something that was never
   in GTK+ 2, then send mail to desktop-devel-list describing the
   change and what it takes to deal with it.

 - Considering how your changes fit in with the release schedule. If
   GNOME 2.91 is going out on Monday, don't land an ABI break on
   Friday.

 - If your change is going to cause major breakage, figuring out
   in advance what's going to break and work with maintainers to make
   sure that there are fixes ready to go in immediately.

(And, yes, people have often been doing these things.)

In order for GNOME 3 to ship on time, to be a good release, it needs to
build every day. And that's going to require coordination between GTK+
and GNOME.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Doubts about GPeriodic

2010-10-20 Thread Owen Taylor
A new GPeriodic class has popped up in GIO that's supposed to be the
basis of a unified master clock implementation between Clutter and GTK+.
I'm skeptical that any abstraction like GPeriodic can provide useful
integration between Clutter and GTK+.

The real problem is that the phases of the repaint cycle matter. We
don't just have a bunch of stuff we need to do every frame, we need to
do things in the order:

 * Process events
 * Update animations
 * Update layout
 * Repaint
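
In pseudo-code, any shared master clock has to do something like this
on every frame (hypothetical names):

  static void
  master_clock_tick (gint64 frame_time)
  {
    process_events (frame_time);     /* both GDK and Clutter events */
    update_animations (frame_time);  /* GPeriodic's "tick" */
    update_layout ();                /* size negotiation, both trees */
    repaint ();                      /* GPeriodic's "repair" */
  }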

If GTK+ and Clutter are working together in the same process, then we
still need to go through those phases in the same order and do
everything for each phase.

It looks like GPeriodic has two phases:
 
 - Tick
 - Repair

Which I guess are meant to be update animations and relayout and
repaint. I can sort of see how I can squeeze the right behavior out of
it given various assumptions. In particular, you need to only ever have
one repair function that does all the work of relayout then repaint
- you can't have separate repair functions for relayout and repaint. Or
for clutter and for GTK+.

But does an abstraction make sense at that point? If we need to
explicitly glue GTK+ into clutter or clutter into GTK+ using hooks
provided in GTK+ and Clutter, then all that GPeriodic is buying us is
a bit of code reuse.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Minimum height for minimum width

2010-10-12 Thread Owen Taylor
On Tue, 2010-10-12 at 15:44 +0900, Tristan Van Berkom wrote:

[...]

 Also... let's try to break this down into two separate issues at hand.
 
 First issue being that the window requires enough height to fit the
 window's minimum width at all times... this is because we don't do any
 dynamic updating of window constraints based on window allocations.

*Even if you don't do any dynamic updating* the minimum-for-minimum
approach isn't inherently right, it's a decision that you are making. A
height-for-width window content doesn't have a single minimum size, it
has a range of minimum sizes:

 +---+  +-+ +--+ +--+
 |   |  | | |  | +--+
 |   |  | | +--+
 |   |  | | 
 |   |  +-+
 |   |
 +---+ 

You are saying that we should always pick the left-most version - the
tallest version, and make that the minimum size and the default size.
(Without dynamic updating, minimum and default size should be the same
for a window without any expandable areas; e.g. a dialog with buttons
and labels, since a bigger default would just add useless excess space.)

The corollary to this is that if I'm designing a window I have to make
the minimum width on the window the right width - the width where it
has a pleasing aspect ratio and looks good.

Maybe this is a reasonable conclusion, but it's not some inherently
logical conclusion.

 Second thing is that Owen disagrees that the minimum wrap width of
 a label should be limited to something greater than the size of the
 largest word.
 
 My initial thoughts on these issues is that:
  a.) dynamic constraints on GtkWindow is something worth experimenting with
   b.) The minimum size of a label is not a hack just because of our
   current minimum-for-minimum policy on window constraints, and
   for the most part should be left as is.
 
 Comments in line...
 
 On Mon, 2010-10-11 at 15:30 -0400, Owen Taylor wrote:
  On Mon, 2010-10-11 at 14:45 -0400, Havoc Pennington wrote:
   Agreed, GtkLabel needs to report min size = true min sane size and
   natural size = either full width, or a good width
   The full width is more correct imo, maybe we should figure out why
   that doesn't work well.
 
 I'm not sure you agree completely here, are you saying that a
 wrapping label's minimum width should be the width of the largest word ?
 or it should be a true sane minimum size ? My opinion is that a single
 word wrap width is definitely not a sane minimum size at all.

 In other words I don't think it's a sane idea to allow the user to shrink
 the window's width so much that its height probably flows right off
 screen (leaving larger paragraphs virtually unreadable in a single
 column when hitting the minimum constraint).

You are jumping from a conclusion about windows to a conclusion about
labels. Yes, if I have a window with a single label in it, and I've
implemented height-for-width window constraints on it in some fashion,
then it doesn't make sense to allow the user to accidentally reflow the
window so it is 5000 pixels tall. We want a reasonable minimum width on
the window.

But that doesn't mean that the minimum reasonable width of a window
should be used for the minimum reasonable width of a label - there are
lots of other places where a label can be used.

If someone is using a layout editor, and they add a label for a name,
and they decide that they want it to wrap onto two lines when necessary,
they are going to be very surprised when setting the label wrappable
suddenly makes everything expand way out horizontally. Yes, they can
find the width-chars property and crank it down, but I don't think this
is expected behavior. Keeping to expected behavior is important for
making people understand what's going on... it's better to have people
reason out:

 "Oh, if I don't set a minimum width on the window, then the user can
  resize it way down and it gets too tall, I better set a minimum
  width"

Than learn:

 "Labels in GTK+ start off with a magic width that seems to be around
  40-50 characters wide. If you want to make a wrapping label narrower
  than that you have to set the width-chars property"

The first way may not be any easier than the second way, but it's far
less magic and more comprehensible.

Now, obviously if we combine:

 - Label doesn't have a magic minimum width when wrapping

With:

 - Window starts off with minimum width

Then that also could produce something a little unexpected - but I think
it would be better to have an artificial minimum size enforced by
default on GtkWindow than on GtkLabel. (For one thing, it deals with
things that are *like* GtkLabel.)

 However, currently we use width-chars property of label to allow
 the user to declare an unreasonably small minimum width for a label
 (very small minimums IMO should only be used for some static text 
 that is known to never be a long paragraph, thus never pushing 
 window height to an unreasonably large value when allocated its 
 minimum width

Minimum height for minimum width

2010-10-11 Thread Owen Taylor
When we are doing height-for-width layout, sometimes we get a situation
where we have a height-for-width object in a context that doesn't
support height-for-width layout.

Examples:

 A) The height-for-width contents of a GtkWindow. X doesn't support
height-for-width layout, the window hints are just the minimum
size and the default size. [*]

 B) A height-for-width widget inside a container that is
width-for-height.

The current behavior of GTK+-2.9x is that the minimum size in such a
context is the minimum-height-for-the-minimum width.

This sounds obviously right, but I think it's not. For example, if a
wrapping GtkLabel did the obvious thing and reported its minimum width
as the minimum width as the width of the longest word, the result of
putting this inside a GtkWindow would be:

 ++
 |This|
 |is  | (using the current default-size-of-GtkWindow is minimum size)
 |some|
 |text|
 ++

Or

 +-+
 |This is some text|
 | | (using default-size-of-GtkWindow is natural size)
 | |
 | |
 +-+

Because that works out so badly, GtkLabel currently doesn't report its
real minimum width, but, as in GTK+-2.0, reports a guess of a "good
width to wrap to" as the minimum width, so what you get is that the
window starts off as:

 +--+
 |This is   | 
 |some text |
 +--+

and can't be resized to a smaller width/height. That doesn't work badly
for this case, but means that a wrapped GtkLabel always has that
artificial minimum width, even in cases where it has a real
height-for-width parent. (Unless overridden by the programmer.)

In my opinion minimum-height-for-minimum-width is just conceptually
wrong - the minimum width should be the real minimum width, and at the
real minimum width a height-for-width widget will be unnaturally tall.
This is not a good minimum height.

What, to my mind, works better in every way is
minimum-height-for-natural-width. The objection I was hearing to this
is that then the window ends up with:

 +-+
 |This is some text|
 +-+

And can't be made any narrower than this, but unlike minimum width, the
natural width has no inherent meaning for a widget that can adapt to any
width, like a wrapping label. We can get identical results to the
current results by making a wrapped GtkLabel report a "good width to
wrap to" as the *natural* width. And we do this without breaking the
minimum width of GtkLabel.
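
In terms of the new size-request interfaces, that looks roughly like
this - a sketch where longest_word_width() and good_wrap_width() are
hypothetical helpers:

  static void
  gtk_label_get_preferred_width (GtkWidget *widget,
                                 gint      *minimum,
                                 gint      *natural)
  {
    *minimum = longest_word_width (widget);  /* the real minimum */
    *natural = good_wrap_width (widget);     /* a good width to wrap to */
  }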

- Owen

[*] It actually works pretty well in X to report a dynamic minimum
height depending on the current width. We start off with the
minimum width of the window as the real minimum width, but the
minimum height of the window as the height for the default/natural
width. If the user starts shrinking the window horizontally, we set
progressively bigger minimum height hints. This does work out
slightly oddly with wireframe resizing, however.


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Minimum height for minimum width

2010-10-11 Thread Owen Taylor
On Mon, 2010-10-11 at 14:45 -0400, Havoc Pennington wrote:
 Agreed, GtkLabel needs to report min size = "true min sane size" and
 natural size = either full width, or "a good width"
 The full width is more correct imo, maybe we should figure out why
 that doesn't work well.

For an ellipsized label, the natural width is clearly the full width.
In this case there's an obvious interpretation of natural size because
there's a minimum size where we display all the information and above
that we're just adding padding.

But for a wrapped label, there are many different possibilities for
displaying all the information. I'm not sure that there's anything
more natural about the case where we put each paragraph on a single
unwrapped line.

Of course, if we take the position that there is never any reason to
allocate an actor more than it's natural size - if the natural size is
all the size that is useful - then using a narrower natural width
could be problematical, especially if things like centering are added by
the parent actor or GTK+, and not the actor itself.

 A related patch attached, if you fix this you'll quickly want it.

Yeah, came up when I was fixing gnome-terminal behavior
(see e.g., https://bugzilla.gnome.org/show_bug.cgi?id=631903)

 Also, you said if doing minimum-height-for-natural-width the window
 doesn't wrap the label and can't be made narrower. I don't understand
 that... I would expect the min size of the window is the min height
 for natural width as you propose, and the min width as returned by
 get_preferred_width(). So the min width ought to be the true min
 width?

If you use those values, then you are telling the window manager that
the window can be sized to a size that's too small for its contents.
Since GTK+ 3 basically can't handle underallocation at all, this isn't
a good idea. 

(The behavior with underallocation is that widgets are
allocated with their minimum size centered at their allocated location.
Since there is no clipping this means you get a mess of overlapping
widgets.)

Setting the hints dynamically based on the current width can work, if
we're willing to say screw wireframe resizing (wireframe resizing
doesn't completely *not* work, you just have to release and retry
a few times to get to certain sizes.)

 Hmm. The "a good width to wrap to" thing seems like pretty much crack
 to me. If people want their window to have some sort of pleasing
 aspect ratio they should just pack the label to limit its width, or
 set default size on the window, or whatever. Or maybe GtkWindow should
 constrain the default size to nice aspect ratio somehow, solving
 globally for the window instead of per-label.

I certainly always saw the "good width to wrap to" thing as a workaround
for GTK+ 1/2 geometry management. But picking a good aspect ratio from
the toplevel will require a binary search. That might be fine if the
binary search is done only once when picking a default size for the
toplevel when first mapping it.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GTK+ policy (was RE:rendering-cleanup-next)

2010-09-14 Thread Owen Taylor
On Mon, 2010-09-13 at 21:48 -0400, Paul Davis wrote:
 On Mon, Sep 13, 2010 at 6:27 PM, Matthias Clasen
 matthias.cla...@gmail.com wrote:
  2010/9/13 Thomas Wood t...@gnome.org:
 
  Clutter's (very detailed) coding style document may be useful here,
  since it has a very similar coding style to GTK+:
 
  http://git.clutter-project.org/clutter/tree/doc/CODING_STYLE
 
 
  Yes, I think we could basically adopt this word-by-word.
 
 i know that coding styles are, as it says in the clutter guide,
 arbitrary, but i would just like to mention one little detail that i
 find problematic when working on shared projects (and much less so on
 non-shared projects). this rule:
 
 --
 Curly braces should not be used for single statement blocks:
 
    if (condition)
      single_statement ();
    else
      another_single_statement (arg1);
 ---
 
 what's wrong with this?

There are valid arguments the other way, and you make them. But
remember, GTK+ is using GNU style bracing and that pretty much takes the
question out of the matter. Doing:

  if (condition)
    {
      single_statement ();
    }
  else
    {
      another_single_statement ();
    }

consistently wouldn't really fly. (Plus, there's the question of all the
existing code...)

- Owen

[ Along with the nested if thing, if you are specifying exactly
  the style, the other thing to note is that if one branch of an if gets
  braces because it is multiline, the other branch should usually get
  them too ]

 


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GDBus socket code ...

2010-08-11 Thread Owen Taylor
On Wed, 2010-08-11 at 12:52 +0100, Michael Meeks wrote:

   In historic times in ORBit2 / linc - we had a custom GSource (which
 IMHO is well worth stealing), for which we could strobe the poll
 conditions easily [ though perhaps in glib that is easier anyway ].
 
   Thus - when we hit an EAGAIN on write, we would immediately switch our
 poll to wait until there is space on the socket buffer (G_IO_OUT), and
 get on with processing any other data / backlog and/or sleep.

Note that gmain wasn't designed for this, and it was a source of
mysterious crashers throughout gnome until someone smart tracked it down
a couple of years ago. GLib is robust against it now, though.

https://bugzilla.gnome.org/show_bug.cgi?id=523463
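
For anyone following along, the pattern under discussion is roughly
this - a sketch where MessageQueue and queue_flush() (which returns
TRUE once the backlog is fully written) are hypothetical:

  static gboolean
  writable_cb (GIOChannel *channel, GIOCondition cond, gpointer data)
  {
    MessageQueue *queue = data;

    /* flush as much as the socket buffer allows; keep the G_IO_OUT
     * watch only while something is still queued */
    return !queue_flush (queue);
  }

  static void
  queue_send (MessageQueue *queue, GIOChannel *channel)
  {
    if (!queue_flush (queue))  /* hit EAGAIN: wait for buffer space */
      g_io_add_watch (channel, G_IO_OUT, writable_cb, queue);
  }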

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Unix signals in GLib

2010-04-30 Thread Owen Taylor
On Fri, 2010-04-30 at 09:18 +0100, Richard Hughes wrote:
 I'm writing for comments. Making my daemons (upower, PackageKit, etc)
 quit nicely after receiving SIGTERM or SIGINT is _really_ hard to do
 correctly. The fact that I can only do a few things (write, etc) in
 the signal handler makes otherwise quite nice GLib code very quickly
 descend into l33t UNIX #ifdef code which is quite unportable and
 really hard to get right.
 
 Would there be any interest in providing these features in Glib? Some
 work appears to have been done already here
 http://www.knitter.ch/src/gunixsignal/ although would obviously need a
 lot of work before being even close to proposing. I appreciate this is
 UNIX only, but I'm sure for other OS's (WIN32) we can just do nothing
 for these functions. Comments?

It would be nice to have this functionality. The caveat is that it's
also tricky - we were originally planning to do GChildWatch as a more
generic mechanism but we gave up on that and found making it work just
for one signal tricky enough.

Some things that I remember as difficult:

 * Signals are a process-global resource. What GLib needs to do
   to reliably catch signals may depend very much on how the signal
   is configured. So, either you have to say that applications can't
   call signal(), sigaction(), etc, on their own at all, or you have
   to give very detailed instructions about exactly what they do.

 * There are race conditions if you get signals just as you are setting
   things up. They were crucial for SIGCHLD, and solvable because of
   the known interaction with waitpid(), but this was one of the big
   reasons we had trouble basing GChildWatch on a more generic
   functionality. This may be less of a problem for TERM/INT/etc.

 * Interaction between signals and threads is complex. GChildWatch
   works very differently depending on whether threads are enabled 
   or not.
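
For context, the classic self-pipe pattern that any such API ends up
wrapping - illustrative code, not a proposed implementation:

  static int signal_pipe[2];  /* created with pipe() at startup */

  static void
  term_handler (int signum)
  {
    char byte = signum;

    /* write() is among the few async-signal-safe calls available */
    (void) write (signal_pipe[1], &byte, 1);
  }

  /* watch signal_pipe[0] with g_io_add_watch (channel, G_IO_IN, ...) */
  static gboolean
  signal_arrived (GIOChannel *channel, GIOCondition cond, gpointer data)
  {
    char byte;

    (void) read (signal_pipe[0], &byte, 1);
    g_main_loop_quit (data);  /* now safely in main-loop context */
    return TRUE;
  }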

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Comments on GApplication

2010-04-20 Thread Owen Taylor
Spent a bit of time looking at GApplication and I'm not sure it's
completely cohering as an overall picture for me. There seems to be a
lot of quite different things in it:

 - Option processing
 - Hiding of g_type_init() and gtk_init()
 - Single instance
 - Actions exported externally.
 - The mainloop

That aren't fundamentally related to each other. It does make for
a cleaner looking Hello World, but I'm wondering if it wouldn't be
better to scope it more narrowly - maybe just to the single
instance / exported presence aspects.

I'm also uncertain about the GApplication vs. GtkApplication split. It's
obviously nice theoretically if we can share infrastructure between
Clutter apps and GTK+ apps. But it means there is a lot of pointless
GApplication API (most of it) that doesn't really make sense to
an application developer, since you'd be unlikely to have a raw
GApplication. 

(It looks like there's an attempt to make GtkApplication somewhat
complete, with gtk_application_run() - is that meant to shield
Hello World from GApplication?)

- Owen

===
  void (* action)(GApplication *app, const char *name, guint timestamp);
===

Maybe use a detailed signal?
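
Something like this, with the action name as the detail - illustrative
only, assuming the signal were created with G_SIGNAL_DETAILED
(on_quit_activated is a hypothetical handler):

  g_signal_connect (app, "action::quit",
                    G_CALLBACK (on_quit_activated), NULL);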

===
typedef enum
{
  G_APPLICATION_FLAG_DISABLE_DEFAULT_IPC = 1 << 0
} GApplicationFlags;

GApplication *  g_application_new (int                 *argc,
                                   char              ***argv,
                                   const char          *appid,
                                   GApplicationFlags    flags);
===

What are the flags about? The one example is undocumented and, as far as I
can find, unused.

Do we really want to bind argc/argv processing into GApplication? That was
always a source of huge complexity in libgnome.

===
GApplication *  g_application_try_new (int                         *argc,
                                       char                      ***argv,
                                       const char                  *appid,
                                       GApplicationFlags            flags,
                                       GApplicationExistsCallback   on_other_process_exists,
                                       gpointer                     user_data);
===

I don't think a callback if another process exists is enough - you also
need to know when another process *doesn't* exist, so you can actually go
ahead and construct your UI. A signal seems better to me than a custom
callback, and avoids the problem you have here that the callback has no
destroy notify.

===
GApplication *  g_application_try_new_full (int                         *argc,
                                            char                      ***argv,
                                            GApplicationFlags            flags,
                                            GType                        class_type,
                                            GApplicationExistsCallback   on_other_process_exists,
                                            gpointer                     user_data,
                                            guint                        n_parameters,
                                            GParameter                  *parameters);

GApplication *  g_application_new_full (int                 *argc,
                                        char              ***argv,
                                        const char          *appid,
                                        GApplicationFlags    flags,
                                        const char          *first_arg,
                                        ...) G_GNUC_NULL_TERMINATED;
===

Tim has very strongly articulated a view in the past that _full means "with
GDestroyNotify" and not "with random extra stuff".

_try_ definitely means "fail gracefully and return an error" rather than
"failing hard".

Don't know why try_new_full() takes a GParameter array, and new_full() takes
varargs. (I don't see g_application_new_full() in the C file.)

 
===
GApplication *  g_application_get_instance (void);
===

If this is meant to be single-instance, then that needs to be clearly
indicated, and calling g_application_new() multiple times should be caught
with a warning/error.

===
void            g_application_add_action (GApplication  *app,

Re: Extended Layout

2010-04-13 Thread Owen Taylor
On Mon, 2010-04-12 at 15:16 -0400, Tristan Van Berkom wrote:

  -- Problem --
 
  There is another problem I can't seem to figure out at this point
 and it would be great if someone with prior experience in this area
 could explain.
 
 The problem in a phrase is this:
   How do you get the collective minimum and natural height for all
   the children in a GtkHBox given a width for the box ?

 In UI terms, a label will unwrap and request less height inside a
 VBox - pretty easy. But a Label with a Button inside an HBox
 placed in a GtkVBox will unwrap when widened; the HBox will not
 request any less height for that.

I think there's a reasonable meaningful implementation of
height-for-width for a horizontal box - see:

http://git.gnome.org/browse/hippo-canvas/tree/common/hippo/hippo-canvas-box.c

for an implementation. I forget the exact details, but I think the basic
idea is just to do the expand/fill calculations based on the minimum and
natural widths of the children. Then based upon the computed child
sizes, compute minimum and natural heights for the children.
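
Roughly, in pseudo-C - my reconstruction with hypothetical helpers,
not the actual HippoCanvas code:

  static void
  hbox_get_height_for_width (Child *children, int n_children, int width,
                             int *minimum, int *natural)
  {
    int i;

    /* expand/fill pass driven by the children's min/nat *widths* */
    distribute_width (children, n_children, width);

    *minimum = *natural = 0;
    for (i = 0; i < n_children; i++)
      {
        int child_min, child_nat;

        child_get_height_for_width (&children[i], children[i].width,
                                    &child_min, &child_nat);
        *minimum = MAX (*minimum, child_min);
        *natural = MAX (*natural, child_nat);
      }
  }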

This will certainly work fine in simple situations - e.g. one child is
fixed width and set not to expand, and another child is a wrapped label.

That's for height-for-width children inside a height-for-width
horizontal box. HippoCanvas just had height-for-width, and that made
things a lot simpler to think about and to get completely right. 
(My impression of the layout containers in Clutter 1.2, in MX, in ST, is
that width-for-height is honored more in name than in actuality.)

Once you get collections of height-for-width and width-for-height
widgets mixed together it gets really, really hard to think about. My
general advice to try and keep things sane is that a container should
say:

 "If I was called in height-for-width mode, I will size-negotiate all my
 children in height-for-width mode as well."

And vice-versa. 

Then if a widget can't do the requested negotiation mode, then it should
just do the simple thing. If I ask for the natural height of a GtkLabel
for a width of -1, it should tell me the height it would have for its
natural width without wrapping. And if I ask for the natural width of a
label for a specified height h, it should ignore h, and just tell me its
natural width without wrapping.

[ 
* Actually, a bit better than this is to honor
gtk_widget_set_size_request() values if provided in preference to the
natural width.

* It's possible a widget should declare whether it implements
height-for-width or width-for-height or both, and GTK+ should take care
of falling back to the simple behavior for unimplemented modes.
]

 I wrote an algorithm for this but it's expensive; the HBox has
 to guess some heights and query all children if they would
 horizontally fit into the HBox given a guessed height, for each
 guessed height until the minimum and natural heights are obtained.

 I also found that in some UIs my algorithm works perfectly,
 but as soon as you have to query a GtkLabel for its desired
 width for a given height - things break; because GtkLabel
 does not implement anything like that.

I don't think we ever want to do this kind of iterative multi-pass
layout. It could easily get prohibitively expensive.

 The really confusing and important detail to note here is that
 Clutter does none of the above, clutter does not return any
 contextual width-for-height information on a ClutterText, nor
 does it calculate the minimum/natural height of a horizontal box
 given an allocated width.
 
 In the case of ClutterText, the width-for-height is basically
 a default minimum and natural width based on the current text.
 
 In the case of the ClutterBoxLayout; it returns the MAX's of the
 default minimum and natural heights of all of its children.
 
 It would be really great if someone could explain to me what
 are the tradeoffs of not implementing width-for-height in GtkLabel,
 and how it is in general that Clutter gets away with not implementing
 these.

The basic way that people get away with it is that height-for-width is
enough to do almost everything you want, and width-for-height is just a
bit of gravy on top to make things look complete.

(This view is somewhat biased toward horizontal writing - but even for
traditional vertical-text languages like Chinese and Japanese, computer
interfaces are written horizontally.)

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: glib's role in the units policy game

2010-03-27 Thread Owen Taylor
On Sat, 2010-03-27 at 02:44 +0100, Benjamin Drung wrote:
 Hi,
 
 I am sending this mail to gtk-devel list to catch as many ideas and
 opinions as possible, if you not already following bug #554172 [1].
 
 Ubuntu has now a units policy [2] and I want to implement it, but I am
 still not sure what the best way is. The solution should not be Ubuntu
 (or GNOME) specific. Please read my blog post [3] and comment there or
 here.

I don't see this as a problem that's solved with an API. No reasonable
set of guidelines are going to read:

 Always use power of 10 units in all circumstances
 Always use power of 2 units in all circumstances

So, an API couldn't consist of a single function and a config switch.
It would need to be an exhaustive list of all possible places that a
size might be used - disk sizes for fixed and rotating media, file
sizes, data transferred on a network, raid stripe sizes, memory sizes,
application memory usage, memory and disk quotas. And so forth and so
on. Then what happens when some app author hits a situation that isn't
covered by the extensive list? Are they going to submit a proposal for
an addition and wait 12 months for it to appear in distributions? Are
they going to pick the closest match even if the results seem
inappropriate? Or are they going to pick an arbitrary enum value that
gives the result they want. Almost certainly the last.

An API is a poor substitute for a compact, well-written set of
application guidelines. Guidelines are usable for everybody, no matter
how they are implementing their program. Guidelines allow the
*application* designer to use human intelligence and judgment to figure
out the best balance of consistency with the environment and suitability
to the use case. (Applications have designers too, sometimes they are
the same as the author, sometimes not.)

And where does this sort of application guideline go for GNOME? (*) 
In the HIG. Getting additions into the HIG isn't always as easy as it
should be, but if a proposal has been made and it's languishing then the
solution is to fix that, not to try and work around it with an API. 

Now, whether it makes sense for GLib to have an API that allows an
application author to explicitly request a power-of-two size or a
power-of-ten size is a different question. It probably does, but that's
just a convenience function.
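
A sketch of what such a convenience function might look like - the
caller states explicitly which units are wanted. This is not GLib API,
just an illustration:

#include <glib.h>

/* Format 'size' in power-of-ten (kB/MB/...) or power-of-two
 * (KiB/MiB/...) units, as the caller requests. */
static gchar *
format_size (guint64 size, gboolean base2)
{
  const gchar *units10[] = { "bytes", "kB", "MB", "GB", "TB" };
  const gchar *units2[]  = { "bytes", "KiB", "MiB", "GiB", "TiB" };
  const gchar **units = base2 ? units2 : units10;
  gdouble base = base2 ? 1024.0 : 1000.0;
  gdouble value = (gdouble) size;
  guint i = 0;

  while (value >= base && i < 4)
    {
      value /= base;
      i++;
    }

  return g_strdup_printf ("%.1f %s", value, units[i]);
}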

- Owen

(*) I know you said that you don't want this to be GNOME specific.
But if you want consistent applications, then you need a group of
application authors that agree that their apps should be consistent. 
You can certainly push KDE and GNOME to adopt the same basic set of
principles in this one particular area.


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Testing for memory leaks in GTK

2010-01-05 Thread Owen Taylor
On Tue, 2010-01-05 at 09:57 -0500, Paul Davis wrote:
 On Tue, Jan 5, 2010 at 8:51 AM, Morten Welinder mort...@gnome.org wrote:
  You probably need to end with something like this
 
 useful, but that doesn't explain the dependency on the number of
 GtkWindows created, does it?

Generally, GdkWindow objects won't be released until DestroyNotify
events for the corresponding X windows are received back from X. 
So if you don't wait for events, the more windows you create
the more memory you will leak at program exit.
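
For example, a leak test can do something like the following before
checking its counters (a sketch, assuming GTK+ 2 API and a 'window'
variable; gdk_display_sync() waits until the X server has processed all
requests, so the DestroyNotify events have arrived by the time the
queue is drained):

/* Destroy the windows, then wait for the server's DestroyNotify
 * events to arrive and be processed before checking for leaks. */
gtk_widget_destroy (window);
gdk_display_sync (gdk_display_get_default ());
while (gtk_events_pending ())
  gtk_main_iteration ();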

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GIO will link with -pthread soon

2009-11-12 Thread Owen Taylor
On Thu, 2009-11-12 at 14:57 +0100, Alexander Larsson wrote:
 On Wed, 2009-11-11 at 23:10 -0500, Ryan Lortie wrote:
 
  The easiest fix here is to link libgio with -pthread.  We discussed this
  at the GTK meeting yesterday and decided to do this unless anyone on the
  list has a compelling reason not to.
 
 It's certainly an easy fix. However, it will inflict e.g. threadsafe
 malloc and all the other libc functions on all single-threaded Gtk+ apps
 which is not a small cost (although the Gtk+/glib locking will not be
 enabled unless you call g_thread_init()). 

Do we have the numbers for that? The original gthread split was done
based on timings I did in 1999 or so, and if I recall, showed an overall
overhead of 5% or so for allocation intensive GTK+ code (like
constructing a large tree of widgets.)

10 years later we have GSlice, a completely different implementation of
threads, and very different processors so it's very hard to extrapolate
from those original measurements.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Minutes of the GTK+ Team Meeting - 2009-11-10

2009-11-10 Thread Owen Taylor
On Tue, 2009-11-10 at 17:19 -0500, Behdad Esfahbod wrote:
 On 11/10/2009 04:45 PM, Emmanuele Bassi wrote:
  4. text-buffer 3.0 request (jessevdk)
  - split TextView: single TextBuffer driving two TextView widgets
  - there are problems with selection and cursor handling
  - move some things from the TextBuffer to the TextView, like the
 new EntryBuffer in gtk+ 2.18 does
  - worth supporting, targeting 3.0 - might cause deprecations
 during the 2.19/2.20 cycle
 
 I'm not sure what this is about exactly.  But something that I thought about 
 working on vte (and specifically, thinking about breaking vte into 
 model/view) 
 is, if GtkTextBuffer and GtkTextView were proper interfaces, VteTerminal and 
 VteTerminalView could implement them.  In fact, VteTerminalView could be 
 implemented only if GtkTextView was too slow for us.  This could give us lots 
 of neat stuff I imagine.

The maintenance on GtkTextView over the last 5+ years has been very
slight ... (150 open bugs at the moment)

I'd rather see someone pick that up rather than starting on ambitious
rewrites.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: another quartz maintainance request (patch review commit)

2009-09-11 Thread Owen Taylor
On Thu, 2009-09-10 at 08:24 -0400, Paul Davis wrote:
 This bug report: https://bugzilla.gnome.org/show_bug.cgi?id=594738
 contains a potentially important fix to make 2+ monitors work with
 GTK/Quartz. Those of us in the GTK/Quartz community would appreciate
 someone with commit rights (1) looking at christian's approach and
 forming a judgement on whether its the right approach to the problem
 (2) committing this or some eventual fix.
 
 Things are looking pretty bad for GTK/Quartz maintenance right now.
 Nobody that has commit access appears to be in a position to test
 (i.e. care) about Quartz fixes; those who care do not have commit
 access. It would be hugely preferable (IMHO) for us to not to have to
 branch to an alternate git repo ...

There's no problem with giving people who have been actively working on
the OS/X backend commit access - you just need to ask:

http://live.gnome.org/NewAccounts

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: [PATCH] If running in GNU gdb, disable grabs

2009-08-22 Thread Owen Taylor
On Sat, 2009-08-22 at 18:38 +0200, Martin Nordholts wrote:
 Hi,
 
 When debugging applications that uses a lot of grabs, such as the GIMP 
 paint core, it is annoying when breakpoints are hit while in a grab. 
 There are ways to remedy this, but for an inexperience developer it 
 appears as if X11 completely freezes.
 
 I have attached two patches to this mail. The first one adds the 
 possibility to disable grabs with the GTK_DEBUG environment variable, 
 the other patch disable grabs if we appear to run in GNU gdb. The 
 approach is inspired by the Qt toolkit which uses the same approach.
 
 I wanted to discuss this on the mailing list before filing any bug 
 report. So, does this make sense to anyone else? I have push access so 
 when these patches have been reviewed and approved, I can push this to 
 git master.

The idea is reasonable - certainly would prevent a common
novice-gtk-programmer mistake... if the programmer doesn't know about
GDK_DEBUG=nograbs.

It is much harder to get in trouble these days with this than it used to
be since we use many fewer X grabs than we used to and are careful to
flush X ungrabs before activating menu item callbacks. But there are
still a few cases where it still matters.

It's probably good to extend the message to explicitly say that things
are going to work funny (menus, comboboxes, scale buttons, etc.) and
some things (drag-and-drop, e.g.) won't work at all.

(Alternative - maybe it should just be reversed, and instead of
disabling grabs, we should just print a helpful message that you can
use GDK_DEBUG=nograbs ? but if you don't know how to switch-to-a-vt
and kill -9 gdb you might not read that message until you were already
locked up and had to reboot...)

However, I don't see how your patch does anything - *GTK* grabs aren't
an issue - they only control delivery of events that are already going
to the process anyway. You actually should be interested in *GDK*
grabs.
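
To illustrate the distinction, the effect of such a debug flag belongs
around the server-side grab, roughly as in this sketch. The real patch
would live inside GDK and use its internal debug flags; this standalone
helper is only illustrative:

#include <string.h>
#include <gdk/gdk.h>

/* Pretend the grab succeeded when grabs are disabled for debugging;
 * it is this server-side grab, not gtk_grab_add(), that can wedge the
 * display while sitting at a breakpoint. */
static GdkGrabStatus
maybe_pointer_grab (GdkWindow *window, guint32 time_)
{
  const gchar *debug = g_getenv ("GDK_DEBUG");

  if (debug != NULL && strstr (debug, "nograbs") != NULL)
    return GDK_GRAB_SUCCESS;

  return gdk_pointer_grab (window, FALSE,
                           GDK_BUTTON_RELEASE_MASK |
                           GDK_POINTER_MOTION_MASK,
                           NULL, NULL, time_);
}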

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Widget states for 3.0 (and 2.18?)

2009-08-17 Thread Owen Taylor
On Mon, 2009-08-17 at 14:21 +0100, Thomas Wood wrote:
 On Mon, 2009-08-17 at 07:46 -0500, Cody Russell wrote:
  On Sun, Aug 16, 2009 at 4:35 PM, Thomas Wood t...@gnome.org wrote:
  I think the current GTK+ states are correct as an enum. They
  are single
  user or application activatable states of which the widget
  cannot be in
  more than one state at once (normal, insensitive, active,
  prelight).
  
  But then we get workarounds for things like toggle/check/radio buttons
  because we can't support active and prelight at the same time.  Not
  all combinations of states make sense for different widgets, but I
  think it's still a more sensible way to store the information.
 
 Why would you have active and prelight at the same time? The two states
 are mutually exclusive. One indicates the user is holding the mouse
 button down on a widget, the other indicates that the mouse is
 hovering over the widget.

What happens when you hover over a pressed-in togglebutton?

(As Matthias says, if you investigate the current GTK+ state types, once
you get beyond the trivial stuff things fall apart. I don't think you'll
find any rational reason that a scrollbar trough is drawn in the active
color, other than that the active color is dark, and scrollbars needed
to be drawn dark.)
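
For what it's worth, the flags-based alternative under discussion would
look something like the sketch below - a widget can then be ACTIVE and
PRELIGHT at once, which a single-valued enum cannot express. The names
are illustrative; GTK+ 3 eventually shipped essentially this shape as
GtkStateFlags:

/* Illustrative names only, not GTK+ 2.x API. */
typedef enum {
  STATE_NORMAL      = 0,
  STATE_ACTIVE      = 1 << 0,  /* pressed in */
  STATE_PRELIGHT    = 1 << 1,  /* pointer hovering */
  STATE_INSENSITIVE = 1 << 2,
  STATE_SELECTED    = 1 << 3
} StateFlags;

/* A hovered, pressed-in toggle button is then simply: */
StateFlags state = STATE_ACTIVE | STATE_PRELIGHT;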

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GTK on Macintosh OSX

2009-07-13 Thread Owen Taylor
On Sun, 2009-07-12 at 19:47 -0700, John Ralls wrote:
 On Jul 12, 2009, at 6:18 PM, Dominic Lachowicz wrote:
 
  Glib on Win32 has routines to solve this problem. It resolves things
  relative to where the Glib DLL is installed. If your applications use
  the XDG data directory functions in Glib, you might get away with this
  too. Maybe you could invent something similar that used the OSX bundle
  as your point of reference.
 
 
 
 The routines only solve the problem if they're used.
 
 Don't need to invent anything. The core foundation functions are easy  
 to use, and Richard Hult already abstracted it into a gobject. But the  
 code still has to be patched. It's not just application code, either,  
 but infrastructure libraries like gconf, gnome-keyring, dbus, etc.
 
 I set up a $PREFIX of /usr/local/gtk, built Gnucash, and ran
 `find /usr/local/gtk -name *.dylib -exec strings \{\} | grep -H 'local/gtk' \;`
 and got more than 100 hits. Many of them are likely to be just a  
 define that isn't used for anything, but every one would have to be  
 examined, and a goodly number of them would require patching.

Well, it's hard to say how many places Gnucash hard codes paths, but the
number of places in the GTK+ stack is nowhere close to 100.

http://git.fishsoup.net/cgit/reinteract/tree/src/reinteract_wrapper_osx/main.m

Sets only 7 environment variables before initializing GTK+ to get
everything found properly within the bundle.
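
For reference, the technique amounts to something like the following
sketch. The set of variables shown is an illustrative subset under the
assumption of a standard relocated prefix layout; see the linked main.m
for the actual list reinteract uses:

#include <stdlib.h>
#include <glib.h>

static void
set_path_var (const char *var, const char *res_path, const char *rel)
{
  char *value = g_build_filename (res_path, rel, NULL);
  setenv (var, value, 1);
  g_free (value);
}

/* res_path is the bundle's Contents/Resources directory; call this
 * before gtk_init() so the whole stack resolves paths inside it. */
static void
setup_bundle_environment (const char *res_path)
{
  setenv ("GTK_EXE_PREFIX",  res_path, 1);
  setenv ("GTK_DATA_PREFIX", res_path, 1);
  set_path_var ("GTK2_RC_FILES",   res_path, "etc/gtk-2.0/gtkrc");
  set_path_var ("PANGO_RC_FILE",   res_path, "etc/pango/pangorc");
  set_path_var ("FONTCONFIG_FILE", res_path, "etc/fonts/fonts.conf");
  set_path_var ("GDK_PIXBUF_MODULE_FILE", res_path,
                "etc/gtk-2.0/gdk-pixbuf.loaders");
  set_path_var ("XDG_DATA_DIRS",   res_path, "share");
}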

I did need:

 http://bugzilla.gnome.org/show_bug.cgi?id=554524

Hmm, Behdad gave me the commit approval on that; didn't see that.

Dom's suggestion of unifying with the Win32 functionality for locating
paths relative to the executable makes a lot of abstract sense though I
haven't looked into the practical details of how it works out.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GTK on Macintosh OSX

2009-07-13 Thread Owen Taylor
On Sun, 2009-07-12 at 07:29 -0400, Paul Davis wrote:

 Regarding the general question of non-X11 backends being 2nd-class
 citizens ... yes, I have seen and suffered from this problem when I
 was doing work on gtk/osx last year and the previous year. It would be
 nice if we could somehow get the core GTK team to commit to not making
 changes that are not tested on non-X11 backends, but this seems
 unlikely and the reasons are not totally unreasonable.

There is no fixed core GTK+ team.

The way we've always determined who gets listed in the GTK+ release
announcements as the team is simply to look at who has done lots of
work and taken ownership of components.

[ It looks like the team list in some of the recent release
announcements has gotten a bit stale; the 2.16 list includes me among
some other people not doing much work at the moment. ]

If someone wants to make sure that the OS/X port is released working out
of the box for 2.18, they have to be building from git, fixing problems
that come up, going through patches in bugzilla, etc.

And then that person will be on the team and the team can make the
commitment you want.

In the past, when I've made changes that require per-backend changes,
I've generally tried to stub out the necessary parts of the other
backends if stubs make any sense. E.g.,

  http://bugzilla.gnome.org/show_bug.cgi?id=587247

Adds a backend function that is called after processing updates;
backends that don't need to do anything there don't need to do anything
so stubbing out was very reasonable. But other changes do require actual
work, and requiring every person submitting a patch to GDK to:

 A) Have a OS/X machine and a windows machine
 B) Know enough about OS/X and windows programming to make changes

Doesn't seem reasonable. (As you say.) Requiring people making changes
to GDK to provide the docs and test cases so that the people maintaining
the backends can easily add the missing functionality is, on the other
hand, quite reasonable.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GtkTextView documentation needs to be reviewed

2009-07-05 Thread Owen Taylor
On Sun, 2009-07-05 at 14:34 +0200, Nicolò Chieffo wrote:
 While I was reading the documentation of GtkTextView [1] I came across
 an error. I moved to the section about
 gtk_text_view_add_child_in_window [2] and read the help:
 . a possible hack would be to update all child positions when the
 scroll adjustments change or the text buffer changes. See bug 64518 on
 bugzilla.gnome.org for status of fixing this issue.
 
 I read the bug [3] which is marked as fix released, but it does not
 seem to have anything to do with the problem which the documentation
 was trying to expose.
 (In fact I still haven't found a solution to this problem).
 
 I would like to discuss how to change the doc here. I can make my
 proposal:
  a possible hack would be to connect to the signal 'changed'
 and/or 'value-changed' of the GtkAdjustment, which you can obtain by
 calling gtk_scrolled_window_get_vadjustment() on the parent window in
 which the text view is. Then you will need to update the child
 position using gtk_text_view_move_child() after having computed the
 correct ypos (the GtkAdjustment can help)

Why don't you think this problem is fixed?

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GObject Introspection support for GSList or other types without a GLib type

2009-06-21 Thread Owen Taylor
On Sun, 2009-06-21 at 19:33 -0500, Daniel Espinosa wrote:
 That's true, but is there any data type check when the g_object_set/get
 functions are called?
 
 If GLib plans to use G_TYPE_POINTER for GSList, GError, gshort, and
 any other data type without a G_TYPE* macro defined, then just say so
 in the documentation: if you (programmer) want to use an undefined
 type, use G_TYPE_POINTER in the property declaration.
 
 If that is documented or the better practice is established, some
 projects like Anjuta don't need to define a G_TYPE_ERROR themselves.

You are misunderstanding my comment.

I'm talking in *particular* about GSList. Well and GList, GHashTable and
other container types.

Without parameterized types (a list of what?) G_TYPE_SLIST is useless.

- Owen

 2009/6/17 Owen Taylor otay...@redhat.com

  On Sun, 2009-06-14 at 02:30 -0500, Daniel Espinosa wrote:
   How to handle data types without a GLib GType defined.
  
   On libgda, it defines a GType for GError and a GSList because these
   don't exist in GLib, and it uses them as parameters when creating
   properties and events.
  
   For now maybe the library (as Anjuta does) must create its own GType
   definition, but with the following rule: the name of the type must be
   defined as GError and GSList, in order to allow g-ir-scanner to
   detect the correct types GError and GSList, as in the example.
  
   In GDA it has GDA_TYPE_ERROR and GDA_TYPE_SLIST with GdaError and
   GdaSList; the scanner then tries to find a definition for GdaError
   and GdaSList but they don't exist. When these types' names are
   changed as above, the correct type is detected.
  
  To point out what may be obvious - there is zero advantage of a
  X_TYPE_SLIST over G_TYPE_POINTER.
  
  This is true in general, and true for gobject-introspection - if
  gobject-introspection finds a property by introspection and deduces a
  type of GSList for it, it still doesn't have the element-type.
  
  - Owen
 
 
 
 
 
 -- 
 Work, the best weapon for your self-improvement;
 grain by grain, the sand is made (R) (registration pending, but for
 friends: FREE)

___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GObject Introspection support for GSList or other types without a GLib type

2009-06-17 Thread Owen Taylor
On Sun, 2009-06-14 at 02:30 -0500, Daniel Espinosa wrote:
 How to handle data types without a GLib GType defined.
 
 On libgda, it defines a GType for GError and a GSList because these
 don't exist in GLib, and it uses them as parameters when creating
 properties and events.
 
 For now maybe the library (as Anjuta does) must create its own GType
 definition, but with the following rule: the name of the type must be
 defined as GError and GSList, in order to allow g-ir-scanner to
 detect the correct types GError and GSList, as in the example.
 
 In GDA it has GDA_TYPE_ERROR and GDA_TYPE_SLIST with GdaError and
 GdaSList; the scanner then tries to find a definition for GdaError
 and GdaSList but they don't exist. When these types' names are
 changed as above, the correct type is detected.

To point out what may be obvious - there is zero advantage of a
X_TYPE_SLIST over G_TYPE_POINTER. 

This is true in general, and true for gobject-introspection - if
gobject-introspection finds a property by introspection and deduces a
type of GSList for it, it still doesn't have the element-type. 
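
As an illustration of the point: with or without a custom boxed type,
the best a library can do is a plain pointer property, with the element
type stated only in the documentation. This is a fragment from a
hypothetical class_init; PROP_ITEMS, object_class and MyItem are
assumed names:

/* The GSList element type can only live in the blurb - neither a
 * pspec nor a custom boxed type can carry it. */
g_object_class_install_property (object_class, PROP_ITEMS,
    g_param_spec_pointer ("items",
                          "Items",
                          "A GSList of MyItem (element type documented "
                          "only, not introspectable)",
                          G_PARAM_READABLE));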

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Native file chooser dialog on Windows

2009-05-14 Thread Owen Taylor
On Thu, 2009-05-14 at 22:46 -0400, David Cantin wrote:
 Hi all,
 
 is there a plan or any activity regarding using the native file
 chooser on the Windows platform, like the print dialog does?
 
 There is already an opened bug about this :
 http://bugzilla.gnome.org/show_bug.cgi?id=319312

I think my comment #4 there says everything that needs to be said.

Not sure why Tor hasn't WONTFIX'ed the bug already.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Upgrade of gettext on git.gnome.org (was Re: Moving GLib and GTK+ to git)

2009-04-06 Thread Owen Taylor
On Thu, 2009-04-02 at 16:56 -0400, Owen Taylor wrote:
 On Thu, 2009-04-02 at 22:45 +0200, Olav Vitters wrote:
  On Thu, Apr 02, 2009 at 12:07:30PM +0200, Alexander Larsson wrote:
   I've got a local branch with the rebased client-side-windows work.
   However, I am unable to push it to git.gnome.org due to the pre-commit
   hooks:
   
   The following translation (.po) file appears to be invalid. (When
   updating branch 'client-side-windows'.)
   po/af.po
   The results of the validation follow. Please correct the errors on the
   line numbers mentioned and try to push again.
   stdin:90: keyword msgctxt unknown
   stdin:90:8: parse error
   .
   
   
   Checking
   http://git.gnome.org/cgit/gitadmin-bin/tree/pre-receive-check-po we
   have:
   
   # gettext-0.14.6 on git.gnome.org isn't new enough to handle
   # features such as msgctx
   # dash_c=-c
dash_c=
  
  So a gettext update should be done. CC'ed gnome-sysadmin.
 
 Upgrading the system gettext to a radically different version isn't
 something that I want to do. My plan here is to create an RPM with just
 the gettext utilities that installs in /usr/lib/gettext17 or something.
 
 (BTW, I temporarily disabled the hooks so Alex could push his branch.)

I've now gone ahead and done this - there is a statically linked version
of gettext-0.17 in /usr/libexec/gettext17 that the pre-receive check
uses now.

I've also reeneabled -c, so it should be doing a full set of checks.

Let me know if any problems show up.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Moving GLib and GTK+ to git

2009-04-03 Thread Owen Taylor
On Fri, 2009-04-03 at 06:23 +, Stef Walter wrote:
 Kristian Høgsberg wrote:
   So unless we find a show-stopper bug in the import
  within the next few days, what's on git.gnome.org now is final.
 
 Not a show stopper, but it'd be cool to migrate the svn-ignore property
 over into .gitignore files. Or is this to be handled some other way?

The svn-ignore property and .gitignore files are different in various
ways; an automated conversion would be challenging. So the current
plan is just that people need to create new .gitignore files.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Upgrade of gettext on git.gnome.org (was Re: Moving GLib and GTK+ to git)

2009-04-02 Thread Owen Taylor
On Thu, 2009-04-02 at 22:45 +0200, Olav Vitters wrote:
 On Thu, Apr 02, 2009 at 12:07:30PM +0200, Alexander Larsson wrote:
  I've got a local branch with the rebased client-side-windows work.
  However, I am unable to push it to git.gnome.org due to the pre-commit
  hooks:
  
  The following translation (.po) file appears to be invalid. (When
  updating branch 'client-side-windows'.)
  po/af.po
  The results of the validation follow. Please correct the errors on the
  line numbers mentioned and try to push again.
  stdin:90: keyword msgctxt unknown
  stdin:90:8: parse error
  .
  
  
  Checking
  http://git.gnome.org/cgit/gitadmin-bin/tree/pre-receive-check-po we
  have:
  
  # gettext-0.14.6 on git.gnome.org isn't new enough to handle
  # features such as msgctx
  # dash_c=-c
   dash_c=
 
 So a gettext update should be done. CC'ed gnome-sysadmin.

Upgrading the system gettext to a radically different version isn't
something that I want to do. My plan here is to create an RPM with just
the gettext utilities that installs in /usr/lib/gettext17 or something.

(BTW, I temporarily disabled the hooks so Alex could push his branch.)

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Gtk+ 3.0 Theming API Hackfest Minutes

2009-03-09 Thread Owen Taylor
On Mon, 2009-03-09 at 15:09 -0400, Behdad Esfahbod wrote:
 Alberto Ruiz wrote:
  2009/3/2 Behdad Esfahbod beh...@behdad.org:
  Alberto Ruiz wrote:
   * All drawing functions to use a cairo context and hide GtkWidget and
  GdkWindow (Strong request from 3rd party toolkits)
  When we discussed this before, I among others suggested that this is wrong 
  as
  it hardcodes cairo as the only supported drawing system in the API.  For
  example, one wouldn't be able to use OpenGL for drawing anymore.
  
  Well, you can always get the native device of the surface. This works
  for Windows and Mac native drawing APIs. You can then create an OpenGL
   context out of that (someone correct me if I'm wrong here).
  
  A bit tricky, we might add facilities for that, but most engines are
  going to use cairo anyway.
 
 It's just about whether the API is extensible or hardcodes cairo.

Hardcode cairo!

I think it's more important to have a constrained, well documented
drawing API between the clients of the theme engine and the theme
engines than to worry about hypothetical what ifs.

It's OK to have some sort of backdoor. But if you don't have:

 - Any client of the API passes in a cairo surface to the theme
   engine; that is all the theme engine needs to know about.

 - Any theme engine can draw properly to an arbitrary cairo 
   surface that is passed in.

Then you've replicated the current broken ecosystem.
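
To make the constraint concrete, a hypothetical sketch of such a
contract - every entry point sees a cairo_t plus geometry, never a
GtkWidget or GdkWindow. The names here are invented, not the API that
was eventually chosen:

#include <cairo.h>

/* Invented vtable: style inputs and geometry in, cairo drawing out. */
typedef struct {
  void (*draw_box)   (cairo_t *cr, double x, double y,
                      double width, double height);
  void (*draw_focus) (cairo_t *cr, double x, double y,
                      double width, double height);
} ThemeEngineClass;

/* Any client can then render into any cairo surface it likes -
 * an X window, an image, a PDF - without the engine knowing: */
static void
render_button_background (ThemeEngineClass *engine,
                          cairo_surface_t  *surface,
                          double width, double height)
{
  cairo_t *cr = cairo_create (surface);
  engine->draw_box (cr, 0, 0, width, height);
  cairo_destroy (cr);
}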

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Using Pango for Batch Processing

2009-02-17 Thread Owen Taylor
On Tue, 2009-02-17 at 19:23 +0200, Joshua Harvey wrote:
 Hi guys,
 
 I'm interested in using Pango to set type in OpenType fonts to be
 saved as PNG images. I've done some googling and I can't find any
 examples on how to use the library or pango-view to do this without
 GTK. The idea is to use either pango-view or the Pango Ruby wrapper to
 convert text into PNGs on the fly from a linux server that's not
 running gtk, for use as titles in web pages.
 
 I've got pango-view installed, but when I run it I get:
 
 % pango-view -q --text "hi there" -o hi.png
 
 pango-view: Cannot open display
 
 Any pointers would be great!

I'd suggest filing a bug - it should be pretty simple to fix in
pango-view... probably just never tested.

(That's not an immediate answer to your question, obviously.)
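
For reference, the batch rendering itself needs no display at all if
done directly with pangocairo; a minimal sketch of the pipeline the
question describes (text in, PNG out):

#include <pango/pangocairo.h>

int
main (void)
{
  cairo_surface_t *surface;
  cairo_t *cr;
  PangoLayout *layout;
  PangoFontDescription *desc;

  g_type_init ();  /* required by the GLib of this era */

  surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, 400, 100);
  cr = cairo_create (surface);

  layout = pango_cairo_create_layout (cr);
  pango_layout_set_text (layout, "hi there", -1);
  desc = pango_font_description_from_string ("Sans 32");
  pango_layout_set_font_description (layout, desc);
  pango_font_description_free (desc);

  cairo_move_to (cr, 10, 10);
  pango_cairo_show_layout (cr, layout);

  cairo_surface_write_to_png (surface, "hi.png");

  g_object_unref (layout);
  cairo_destroy (cr);
  cairo_surface_destroy (surface);
  return 0;
}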

- Owen



___
gtk-i18n-list mailing list
gtk-i18n-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-i18n-list


Re: client-side-windows vs metacity

2009-02-03 Thread Owen Taylor
On Tue, 2009-02-03 at 11:16 +0100, Alexander Larsson wrote:
 On Tue, 2009-02-03 at 10:55 +0100, Alexander Larsson wrote:
  On Sat, 2009-01-31 at 07:43 -0500, Owen Taylor wrote:
  
   If you get an Inferior leave, you may be losing the ability to track the
   pointer at that point ... the pointer may have disappeared deep into a
   descendant of some foreign child. So I don't see how you can just ignore
   it - it's going to need to be translated into one or more GDK leave and
   enter events. (Depending on the current sprite window tracked by GDK and
   the subwindow field.)
   
   Same for Inferior enters, and in fact virtual enters/leaves as well.
  
  Hmm, this is a bit of a problem. How do you tell the difference from a
  virtual leave to an inferior with subwindow NULL to a virtual leave to a
  parent (which also sets subwindow to NULL).
 
 Actually, I think the book i have is not correct. This can't happen,
 right? Because in the second case we'd have an virtual enter with parent
 NULL.

Well, the reverse - you get a Enter/Virtual to an inferior, a
Leave/Virtual to a parent, but yes.

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: client-side-windows vs metacity

2009-01-31 Thread Owen Taylor
On Sat, 2009-01-31 at 08:51 +0100, Alexander Larsson wrote:
 On Fri, 2009-01-30 at 14:48 -0500, Owen Taylor wrote:
  On Fri, 2009-01-30 at 20:38 +0100, Alexander Larsson wrote:
   I'm running a full gnome session with the client-side-windows branch
   here. There are a few minor issues I'm working on, but overall it just
   works(tm). I'll send a new status report soon.
   
   However, there is a specific issue with metacity that I'd like some
   feedback on. Metacity uses gdk to handle the frame window for managed
   children. This window will get the clients window reparented into it,
   however metacity never accesses that window as a gdk window (i.e. it
  doesn't call gdk_window_foreign_new on the xid), so gdk doesn't know
   about it.
   
   This means that as gdk knows there are no children of the frame, and
   thus the event emulation code won't send a leave for inferior event to
   the frame when the mouse moves from the frame to the client window. This
   means metacity won't reset the cursor on the frame, thus affecting the
   default cursor in the client window. (In X the default cursor is
   inherited from the parent.)
  
   Now, how do we solve this? There are two possibilities. Either we add
   special code to metacity so that it creates the child window as a gdk
   object and keeps it updated as to the size when the frame is resized.
   
   Or, we add some hacks to gdk to track this case and make it work. One
   way is to detect native Inferior Leave events on windows with no
   children and use XQueryTree to find the windows. Resizes can be tracked
   with ConfigureEvents. I'm attaching a patch that implements this.
  
  Although I believe there is a problem, it's not clear from the above
  what it is. Is the problem that Metacity isn't getting a GDK leave
  event? If that's the problem, why can't you just convert the native
  event to a GDK event and send that along?
  
  And how are ConfigureEvents related?
 
 Yes, metacity is not getting a gdk leave event on the frame when the
 cursor moves from the frame to the client area. This happens because the
 event emulation code gets a native leave event on the frame to some
 position inside the frame, but the frame GdkWindow->children list is
 empty, so it doesn't generate a leave event for a child. 
 
 We filter out the native events that we get on the toplevel, because
 they don't necessarily match what's gonna be right when taking the client
 side windows into account. 

If you get an Inferior leave, you may be losing the ability to track the
pointer at that point ... the pointer may have disappeared deep into a
descendant of some foreign child. So I don't see how you can just ignore
it - it's going to need to be translated into one or more GDK leave and
enter events. (Depending on the current sprite window tracked by GDK and
the subwindow field.)

Same for Inferior enters, and in fact virtual enters/leaves as well.

And once you are doing that translation, it seems reasonable to me that
if the subwindow field in the X event is a child you don't know about,
to:

 A) Generate events as if that child was an immediate child of the 
window receiving the events. (Not a child of some CSW)

 B) Generate GDK events with a NULL subwindow - as GDK does currently
in the Metacity case.

 The solution is to create a GdkWindow object for the client window so
 that the event emulation machinery is aware of the child and everything
 works. The configure event is required to update the size of the foreign
 child when the frame changes.

I don't like the QueryTree, I don't like tracking ConfigureEvents on a
random subset of foreign windows. If we need to track the existence and
position of children, then selecting for SubstructureNotify upfront
seems more appropriate.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: client-side-windows vs metacity

2009-01-31 Thread Owen Taylor
On Sat, 2009-01-31 at 14:22 +0100, Alexander Larsson wrote:
 On Sat, 2009-01-31 at 07:43 -0500, Owen Taylor wrote:
  On Sat, 2009-01-31 at 08:51 +0100, Alexander Larsson wrote:
   Yes, metacity is not getting a gdk leave event on the frame when the
   cursor moves from the frame to the client area. This happens because the
   event emulation code gets a native leave event on the frame to some
   position inside the frame, but the frame GdkWindow->children list is
   empty, so it doesn't generate a leave event for a child. 
   
   We filter out the native events that we get on the toplevel, because
   they don't necessarily match what's gonna be right when taking the client
   side windows into account. 
  
  If you get an Inferior leave, you may be losing the ability to track the
  pointer at that point ... the pointer may have disappeared deep into a
  descendant of some foreign child. So I don't see how you can just ignore
  it - it's going to need to be translated into one or more GDK leave and
  enter events. (Depending on the current sprite window tracked by GDK and
  the subwindow field.)
 
 Hmm, what do you mean by current sprite window?

Sorry, the (innermost) window that GDK thinks that the cursor is over.
It's terminology from the X server internals, which tend to use sprite
and cursor somewhat interchangeably.

 In general we don't just ignore it, we'll send leave/enter events on any
 window known to gdk in between the native window we got the event on and
 the known window at the position the leave event specifies.

 However, in this case the known window at that position is the toplevel
 itself, so we don't send any events.

The description above is a bit odd, I think you have to remember the
window where the pointer last was and take that into account in
determining where to send events, but perhaps you are just simplifying
in the description.

  Same for Inferior enters, and in fact virtual enters/leaves as well.
 
 Yeah.
 
  And once you are doing that translation, it seems reasonable to me that
  if the subwindow field in the X event is a child you don't know about,
  to:
  
   A) Generate events as if that child was an immediate child of the 
  window receiving the events. (Not a child of some CSW)
  
   B) Generate GDK events with a NULL subwindow - as GDK does currently
  in the Metacity case.
 
 Yes, something like that might work. We should also avoid sending the
 normal enter/leave events to the location in the leave event.
 
 Also, I guess this means we need to track enter/leave events on child
 windows too. Right now we just track it on the toplevel and do the
 enter/leave on native children ourselves. But with that approach we
 can't detect NULL subwindows leaves on non-toplevels. (We'd treat
 non-NULL subwindow leave/enter as more or less motion events on the
 toplevel though).

Yes, I agree that you have to select for enter/leave on all native
windows, or you won't have the needed value of the 'subwindow' field.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: client-side-windows vs metacity

2009-01-30 Thread Owen Taylor
On Fri, 2009-01-30 at 20:38 +0100, Alexander Larsson wrote:
 I'm running a full gnome session with the client-side-windows branch
 here. There are a few minor issues I'm working on, but overall it just
 works(tm). I'll send a new status report soon.
 
 However, there is a specific issue with metacity that I'd like some
 feedback on. Metacity uses gdk to handle the frame window for managed
 children. This window will get the clients window reparented into it,
 however metacity never accesses that window as a gdk window (i.e. it
 doesn't call gdk_window_foreign_new on the xid), so gdk doesn't know
 about it.
 
 This means that as gdk knows there are no children of the frame, and
 thus the event emulation code won't send a leave for inferior event to
 the frame when the mouse moves from the frame to the client window. This
 means metacity won't reset the cursor on the frame, thus affecting the
 default cursor in the client window. (In X the default cursor is
 inherited from the parent.)

 Now, how do we solve this? There are two possibilities. Either we add
 special code to metacity so that it creates the child window as a gdk
 object and keeps it updated as to the size when the frame is resized.
 
 Or, we add some hacks to gdk to track this case and make it work. One
 way is to detect native Inferior Leave events on windows with no
 children and use XQueryTree to find the windows. Resizes can be tracked
 with ConfigureEvents. I'm attaching a patch that implements this.

Although I believe there is a problem, it's not clear from the above
what it is. Is the problem that Metacity isn't getting a GDK leave
event? If that's the problem, why can't you just convert the native
event to a GDK event and send that along?

And how are ConfigureEvents related?

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: g_malloc overhead

2009-01-26 Thread Owen Taylor
On Mon, 2009-01-26 at 18:30 +0100, Martín Vales wrote:
 Colin Walters wrote:
  On Mon, Jan 26, 2009 at 9:12 AM, Behdad Esfahbod beh...@behdad.org wrote:

  Lets just say that
  UTF-16 is at best implementation details of Firefox.
  
 
  Well, JavaScript is notably UTF-16.  Given that the Web, Java and .NET
  (i.e. all the most important platforms) are all UTF-16 it's likely to
  be with us for quite a while, so it's important to understand.

 Yes, I only wanted to say that. For example, I work in C# and I would
 like to create glib libraries and use them in .NET, but the char in
 mono/.NET is utf16 and therefore I have the same overhead there.
 
 There are 2 solutions:
 
 1.- conversion using glib:
 http://library.gnome.org/devel/glib/2.19/glib-Unicode-Manipulation.html#gunichar2
 2.- automatic .NET conversion on the p/invoke side.
 
 The 2 solutions have the same overhead.
 
  But yeah, there's no way POSIX/GNOME etc. could switch even if it made
  sense to do so (which it clearly doesn't).

 Yes, I was only talking about the overhead with utf8 outside of glib,
 only that. Perhaps the only solution is to add more support for utf16
 in glib with more methods.
 

There's zero point in talking about a solution until you have profile
data indicating that there is a problem.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: gparamspecs.c param_double_validate() doesn't support NaN/Inf?

2009-01-12 Thread Owen Taylor
On Sat, 2009-01-10 at 14:30 +0200, Andrew W. Nosenko wrote:
 On Sat, Jan 10, 2009 at 2:06 PM, Brian J. Tarricone bj...@cornell.edu wrote:
  On Sat, 10 Jan 2009 13:42:31 +0200 Andrew W. Nosenko wrote:
 
 First of all, could you provide any real-world example where a min/max
 restriction on GParamSpec could be useful?  The reason is simple:
 when validation fails, the application has no way to know about it
 and, therefore, to do anything useful.  There is just no interface for
 such things, like a validation-fails callback.  As a consequence, any
 validation should be done at the application level, before bringing the
 GObject/GParamSpec/GValue/etc machinery into the game.  Hence, I find it
 hard to imagine any useful example of using restricted GParamSpecs...
 
  Then you really just aren't imagining hard enough.  If you look at the
  gdk/gtk sources, there are quite a few GObject properties that use
  GParamSpecDouble that restricts the min/max value a property can have.
  For example, think of a progress bar that uses a double to indicate
  the percent full: 0.0 is 0%, 1.0 is 100%.  Any value outside that
  range is invalid.
 
 I know about that.  But how is it useful?  What can you do with it?
 Yes, for a progressbar it results just in some different rendering
 that is nothing but a visual effect.  You could avoid restricted
 paramspecs and just silently CLAMP(), for example, without any harm.
 Even if you do not do any validation, the result would be the same:
 just visual effects, possibly ugly, but with no bearing on the core
 functionality.

One of the main reasons for the presence of limits in the paramspec
is to allow proper presentation in user interface builders.
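
For example, a property declaration of roughly this form (this is how
GtkProgressBar's fraction property looks; PROP_FRACTION and
object_class are the usual class_init names) gives a builder like Glade
enough information to offer a spin button sensibly clamped to
[0.0, 1.0]:

g_object_class_install_property (object_class, PROP_FRACTION,
    g_param_spec_double ("fraction",
                         "Fraction",
                         "The fraction of total work that has been completed",
                         0.0, 1.0, 0.0,
                         G_PARAM_READWRITE));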

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Rumination on offscreen-ng

2008-11-19 Thread Owen Taylor
On Wed, 2008-11-19 at 10:18 +0100, Alexander Larsson wrote:
 On Tue, 2008-11-18 at 16:28 -0500, Owen Taylor wrote:
  Some consequences:
  
   - Invalid and damage regions are kept per native-target not per window.
 
 What exactly do you mean by this btw? Do you mean moving the current
 GdkWindowObject->update_area somewhere so it affects the native window?
 We could do that, although it would require some restructuring of how we
 handle updates. However, its not strictly necessary just because we push
 GdkVirtualWindow into GdkWindow. 

I'm not really proposing an implementation technique; there are various
things that could be done:

 - Making all native windows implement GdkPaintableIface, perhaps with
   a helper object that can be chained to for the default
   implementation. (This might imply that you'd need a PaintablePixmap
   object for GdkOffscreenWindow.)

 - Have a GdkPaintInfo object that is referenced in parallel to the
   drawable target from each GdkWindow.
 
But the simplest thing is just to ignore the fields in GdkWindow unless
the window has its own drawable target.

 Its got some advantages, for instance we will automatically never keep
 update areas for invisible areas. But it also causes some added
 complexities. For instance, when moving a subwindow that has some
 invalid area we currently just keep that, which is right because its in
 window relative coordiantes. If we keep that update area on the parent
 window we need to do some complex updates on the parents update_area in
 order to move the invalid area of the subwindow with the subwindow.

Well, the approach of keeping the invalid region on the topmost window
for the drawable target is pretty much forced for the case when the
drawable target implements GdkPaintableIface... that is, if the invalid
region is stored inside the windowing system, we can't create a separate
one for a subwindow.

Another consideration that I had in mind was your idea of having windows
with a parent background ... where you don't clip the parent with the
child and always redraw the parent before redrawing the child. That's
less of a special case if you share invalid regions.

The idea of moving portions of the invalid region along with the child
bits sounds complex, but it's really a pretty simple exercise in region
arithmetic. And it's one that we've already done! See the code in
gdkgeometry-x11.c:_gdk_x11_window_move_region()

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Rumination on offscreen-ng

2008-11-19 Thread Owen Taylor
On Wed, 2008-11-19 at 09:42 +0100, Alexander Larsson wrote:
 On Tue, 2008-11-18 at 16:28 -0500, Owen Taylor wrote:
  Some consequences:
  
   - Invalid and damage regions are kept per native-target not per window.
 
 What about paint regions (and their pixmaps)? Should we try to combine
 these to the native target too?

There are obviously some external constraints here: in particular,
begin_paint()/end_paint()/process_updates() can be called on
windows without a native target, and that needs to be handled, at least
in some minimal fashion.

And when passing redraws to GDK, there continues to be a GdkExposeEvent
per GdkWindow.

But in the normal case where redraws are coming from GDK (or the
windowing system) I would expect a single paint pixmap to be created for
the entire drawable target, and all repaints for that drawable target
handled in one go.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Rumination on offscreen-ng

2008-11-18 Thread Owen Taylor
So, I spent some time today reading through Alex's offscreen-ng branch.
My first question is how this is going to appear to the
application programmer and widget writers. The very existence of
GdkWindow at all is an imposition on the programmer... after all,
widgets define the structure of your application, not windows.

It seems like we should have a single concept of GdkWindow that is as
simple as possible, have the programmer declare what features they need
for it:

 - Should the window have its own private offscreen storage?
 - Does the window have alpha transparency?
 - Is the window automatically drawn to the parent window?

And then GDK should go off and figure out how best to do that behind the
scenes.

(Unrelated idea: deprecate GdkWindowAttributes in favor of
g_object_new())
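
A sketch of what that idea might look like - note these construct
properties are purely hypothetical, nothing like them exists in GDK,
and 'parent' is an assumed variable:

/* Construct-time properties instead of a GdkWindowAttributes
 * struct plus attributes mask: */
GdkWindow *window =
  g_object_new (GDK_TYPE_WINDOW,
                "parent",    parent,
                "width",     200,
                "height",    100,
                "has-alpha", TRUE,
                "offscreen", FALSE,
                NULL);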

But looking more closely, there's another worry I have ... GdkWindow and
GdkVirtualWindow are a very tightly coupled base class and subclass.
Much of the logic for implementing virtual windows actually belongs to
the *parent* window, so GdkWindow needs special handling of 
GdkVirtualWindow children. I don't feel like the long term
maintainability of GDK is being improved.

I want to make a more radical proposal here: What if we consider the
client side window to be the basic concept and native windows to be an
elaboration of that.

So, GdkWindow would have all the code to:

 - Keep a clip region for the window
 - Clip rendering to that clip area
 - Implement window movement with gdk_draw_drawable()
 - Deliver events to the correct window

And so forth. Each GdkWindow would have a target drawable - basically
like the current window-impl - that could be:

 - A native GdkWindowImplFoo 
 (current GDK window)

 - The same native window as the parent window 
 (GdkVirtualWindow in offscreen-ng) 

 - A pixmap 
 (GdkOffscreenWindow in offscreen and offscreen-ng)

(Of course, it's not just a drawable, because you do need to do window
specific stuff to the native window when there is one.) 

Some consequences:

 - Invalid and damage regions are kept per native-target not per window.

 - It would then be very natural to have backends - directfb, maybe also
   Quartz that *just* implemented toplevel windows.

 - Since you keep the clip region for all windows, you could implement
   native children clipped by sibling virtual windows by setting a
   window shape. (Idea from alex)

 - If we wanted to switch a GdkWindow from virtual to native on the fly
   it would be pretty natural ... no change of class.


My instinct is that this would work out pretty cleanly - I won't say
simply.

- Owen

___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


GdkOffscreenHooks

2008-11-18 Thread Owen Taylor
The input redirection in the offscreen branch is currently:

struct _GdkOffscreenChildHooks
{
  void   (*from_parent) (GdkWindow *offscreen_child,
 gdoubleparent_x,
 gdoubleparent_y,
 gdouble   *child_x,
 gdouble   *child_y);
  void   (*to_parent)   (GdkWindow *offscreen_child,
 gdoublechild_x,
 gdoublechild_y,
 gdouble   *parent_x,
 gdouble   *parent_y);
};

Which puts GTK+ in charge of everything, and the only interception
possible is choosing the actual transform. What if you wanted to have a
scene graph that combined redirected windows with other objects, and a
non-window object was obscuring the redirected window? It's not clear
how to handle that naturally.

It seems to me that it might be better to say that input redirection is
restricted to the case where you have a toplevel offscreen window that
is basically completely outside the normal widget tree.

Then you'd have functions like:

 void
 gdk_offscreen_window_motion (GdkOffscreenWindow *window,
  GdkEventMotion *motion,
  double  relative_x,
  double  relative_y);

(also enter/leave/button) that are used to forward events to the
toplevel offscreen window.

You'd have to be able to track the cursor, so you'd have a
signal ::displayed-cursor-changed and a function

 GdkCursor *
 gdk_offscreen_window_get_displayed_cursor(GdkOffscreenWindow);

Notes on the idea:

- The basic idea here is that the container where you are embedding
  the transformed window already has code for event propagation,
  child window picking, and implicit grabs (once you click on 
  an element, events keep going there until release), so we should
  just reuse that.

- You might sometimes want to do input redirection (like if you 
  were putting the window into a scene graph with other objects)
  but *not* do interesting transforms. In that case you want things   
  like popping up windows positioned with respect to other elements
  to work correctly. So, you probably do need to have a simple
  gboolean ::get_root_coordinates() callback/signal.

  If you do interesting transforms, then all bets are off.

- Ungrabbing implicit grabs (allowed by X/GDK) is a bit of a problem
  since implicit grabs have to be maintained cooperatively between
  the TOW and the container it is embedded in. You could add
  a signal for that ungrab, or you could just ignore the problem;
  I don't think the usage is common.

- Explicit pointer grabs also need consideration. Ideally we wouldn't 
  have any explicit pointer grabs on subwindows but we still have
  a few.

   gtk_paned_button_window(): If this one is needed at all it is
 to override some aspect of the implicit grab, so it could be
 done completely within the TOW code without server involvement.

   gtk_cell_renderer_accel_start_editing(): probably should be
 grabbing on a GtkInvisible instead. I think it's only here
 to avoid clicking off to some other program.

   gtkhsv.c:set_cross_grab(): Looks like it is used to override
 the cursor of the implicit grab - again could be done completely
 within the TOW.

  I'm not really worried if we need a few widget fixes to make things
  work with input redirection ... it's not like we are discussing
  doing input redirection within existing unmodified applications.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: string return result conventions

2008-09-15 Thread Owen Taylor
On Mon, 2008-09-15 at 08:59 +, Luke Kenneth Casson Leighton wrote:
 tim, thank you for responding.
 
  therefore it's important for me to find out what glib / gobject memory
  conventions are, for strings.
 
  Strings queried through the property interface, e.g.:
 
  gchar *string = NULL;
  g_object_get (label, "label", &string, NULL);
 
  are always duplicated and need to be freed by the caller,
  because the returned string might have been dynamically
  constructed in the object's property getter (i.e. not
  correspond to memory actually allocated for object member storage).
 
  ok - in this situation, fortunately we have control over that.  the
 property getter is entirely auto-generated.  the code review of the
 new webkit glib/gobject bindings brought to light the webkit
 convention of not imposing any memory freeing of e.g. strings on
 users of the library.  use of refcounting is done on c++ objects, for
 example.
 
 the strings in webkit are unicode (libicu).  _at the moment_ i'm
 alloc-sprintf'ing strings - all of them - into utf-8 return results.

Why is a sprintf involved here? g_utf16_to_utf8() will convert a UTF-16
string into a UTF-8 string that can be freed with g_free().
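
That is, something along these lines (a sketch; the function name and
the shape of the DOM string are assumptions):

#include <glib.h>

/* Convert a UTF-16 DOM string to a newly allocated UTF-8 string
 * that the caller frees with g_free(). */
static gchar *
dom_string_to_utf8 (const gunichar2 *utf16)
{
  GError *error = NULL;
  gchar *utf8 = g_utf16_to_utf8 (utf16, -1, NULL, NULL, &error);

  if (utf8 == NULL)
    {
      g_warning ("invalid UTF-16: %s", error->message);
      g_error_free (error);
    }

  return utf8;
}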

 it was recommended to me that i create a string pool system, to keep a
 record of strings created, and, at convenient times, destroy them all
 (reminds me of apache pools and samba talloc).  exactly when is
 convenient is yet to be determined, which is the bit i'm not too
 keen on :)
 
 looking at the auto-generated code in pywebkitgtk, i'm seeing use of
 PyString_FromString which copies arguments passed to it - there are a
 number of functions which return strings (not property-getters) - so
 there's definitely memory leaks (hurrah).
 
 clearly, the best overall thing would be to actually return the
 unicode strings themselves rather than convert them (needlessly?) to
 utf-8.
 
 if that's not possible to do, what would you recommend, in this situation?

Just return newly allocated UTF-8 strings. It's going to be a little bit
inconvenient, with some risk of leakage, for people using your API from
C, but that's the way it works out.

Even if you were writing from scratch in GLib, allocating all returns
might be the right approach. It's pretty mystical if get_node_name()
returns a const char * but get_text_content() returns a char *.

(We've made some effort to avoid get names in the GTK+ stack for
things that return allocated strings, but that doesn't work if you are
mapping the DOM.)

Trying to play tricks where the string returned magically gets freed
sometime in the future at an undefined time will definitely cause
problems.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Slaving the GTK+ main loop to an external main loop

2008-09-05 Thread Owen Taylor
On Mon, 2008-09-01 at 15:11 -0400, Owen Taylor wrote:
 One distinct problem for ports of GTK+ to other windowing systems has
 been dealing with modal operations in the windowing systems API; 

[...]

 So all we have to do is do everything before the dispatch() step in
 the thread, signal the main thread to do the dispatch() call, wait for
 that to complete, then continue.

I spent some time this week trying to get the idea implemented on OS X;
although I got it working OK (I've attached the patch for reference)
there were some significant bottlenecks that I wanted to write up here
in case anyone else tries this in the future.

The big problems centered around loop ownership and reentrancy -
inherently the main thread has to be the owner of the main loop - the
caller of g_main_context_acquire(). If the helper thread is the owner of
the main context, then you will get deadlocks if the application calls a
function like gtk_dialog_run() because the main thread is waiting for
the helper thread to process events to finish the dialog, but the helper
thread is waiting for the main thread to do the dispatch part of the
main loop.

Once the main thread is the loop owner, then you have the problem that
the main loop iteration in the helper thread is only partially in sync
with the operation of the main thread... so a lot of the assumptions
that the GLib main loop makes don't fully work; attempts to do recursive
main loop iterations in particular tend to trigger scary warnings. Also,
you have problems where GTK+ code adds an event to the event queue in
the expectation that the main thread is not blocking, but it is blocking
in the other thread; a call to g_main_context_wakeup() needs to be
added.

Efficiency is also pretty bad. You also get interactions like:

 - Window is resized, and a GDK event is queued
 - Main thread calls g_main_context_wakeup() to wake up the helper
   thread.
 - Helper thread sees the event has been queued and then sends a Cocoa
   event to wake up the main thread.
 - Main thread wakes up and dispatches the event

There were also some quartz specific issues: for one thing, window
resizing was my main target and there's no easy begin/end
notifications for that; you can use CFRunLoopObserver to detect entry
and leaving the run loop, but that appears as a separate Enter/Leave
notification for every motion event, which makes things even more
inefficient since we have to keep starting and stopping the iteration
in the helper thread.

A second quartz specific problem was simply that I was trying to use the
same helper thread for poll() emulation and for iteration during modal
operations so things got pretty hairy: lots of different states in the
state machine.

During work on this approach, I realized that a more simple and
efficient approach was possible on OS X based on CFRunLoopObserver...
you could simply call the g_main_context_iteration() functions directly
at the appropriate points in the native run loop. An alternate patch
based on that approach came out quite a bit better and can be found at:

 http://bugzilla.gnome.org/show_bug.cgi?id=550942
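
(The core of that alternate approach is roughly the following - a
sketch, not the actual patch; it iterates the default GMainContext
whenever the native run loop is about to go to sleep:)

  #include <CoreFoundation/CoreFoundation.h>
  #include <glib.h>

  static void
  observer_cb (CFRunLoopObserverRef observer, CFRunLoopActivity activity,
               void *info)
  {
    /* run any pending GLib sources without blocking */
    while (g_main_context_iteration (NULL, FALSE))
      ;
  }

  static void
  install_glib_iteration_observer (void)
  {
    CFRunLoopObserverRef obs =
      CFRunLoopObserverCreate (NULL, kCFRunLoopBeforeWaiting,
                               TRUE /* repeats */, 0, observer_cb, NULL);
    CFRunLoopAddObserver (CFRunLoopGetCurrent (), obs, kCFRunLoopCommonModes);
  }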

So the patch attached below, is in my opinion, a dead end as far as OS X
is concerned. I think my conclusion on the general technique is that it
is workable in situations when nothing else is possible, but shouldn't
be a first choice.

- Owen



main-loop-separate-thread.patch
Description: application/mbox
___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Slaving the GTK+ main loop to an external main loop

2008-09-01 Thread Owen Taylor
One distinct problem for ports of GTK+ to other windowing systems has
been dealing with modal operations in the windowing systems API; this
comes up for:

 - Window resizing
 - Drag and drop
 - Modal print dialogs

And so forth. While the modal operation is going on, the GTK+ main loop
is not running, so no idles or timeouts are dispatched, relayout and
repaints don't happen and so forth. There are some workarounds in the
Win32 backend code - in particular, contrary to the way other events are
handled, it handles WM_PAINT events directly out of the window
procedure. But a more general approach would be desirable.

As a strawman, we could imagine for the duration of the modal operation
running the GTK+ main loop in another thread. The problem with this is
obvious: callbacks would be dispatched in that other thread. To see how
we can refine that, let's look at how g_main_context_iteration() works.
(Aside from a few efficiency considerations, g_main_context_iteration()
uses only public API and can be reimplemented.)

In skeleton, g_main_context_iteration() looks like:

 g_main_context_acquire(context);
 g_main_context_prepare (context, ...);
 g_main_context_query (context, ...);
 g_main_context_poll (context, ...);
 g_main_context_dispatch (context); 
 g_main_context_release(context);

So all we have to do is do everything before the dispatch() step in the
thread, signal the main thread to do the dispatch() call, wait for that to
complete, then continue.
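
Sketched out with the public API (note a full iteration also needs
g_main_context_check() between the poll and the dispatch, which the
skeleton above elides; signal_main_thread() and wait_for_dispatch_done()
are hypothetical stand-ins for the platform-specific signaling described
below):

  /* helper thread: */
  GPollFD fds[64];
  gint priority, timeout, nfds;

  g_main_context_prepare (context, &priority);
  nfds = g_main_context_query (context, priority, &timeout,
                               fds, G_N_ELEMENTS (fds));
  g_poll (fds, nfds, timeout);
  if (g_main_context_check (context, priority, fds, nfds))
    {
      signal_main_thread ();       /* wake the main thread ...        */
      wait_for_dispatch_done ();   /* ... and wait for its dispatch() */
    }

  /* main thread, when signaled: */
  g_main_context_dispatch (context);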

Signaling the main thread is platform dependent. On Windows, we can
PostMessage() to a window created by the main thread and do the
dispatch() in the window procedure of that window. On OS X we can add a 
CFRunLoopSource to the main thread's run loop and then call 
CFRunLoopSourceSignal() on it when we are ready to dispatch.

Potential issues:

 * The prepare() and query() functions of custom main loop sources may
   not be thread safe. I think this would just have to be documented
   as a portability issue.

 * You have to know when you are entering and leaving a modal
   operation in order to start and stop the iteration in the helper
   thread. This is possible for most or all of the examples listed above
   but may not always be possible.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: adding elastic tabstops to a widget

2008-08-22 Thread Owen Taylor
On Tue, 2008-08-19 at 23:39 +0200, Nick Gravgaard wrote:
 Hi all,
 
 I'm trying to make a proper GTK text editing widget that supports
 elastic tabstops [1] with a view to being able to use it in the near
 future in a PyGTK project (a programmer's text editor), and perhaps one
 day getting it added to GtkTextView or GtkSourceView.
 
 I have something pretty close to being finished. I've developed it by
 hacking the main Pango and GTK code directly rather than making new GTK
 objects that inherit from the standard objects - I figured I'd get the
 functionality working properly before moving the code into new files. It
 consists of 2 parts:
 
 1. Modified PangoTab structure to contain width (distance between tab
 characters) and contents_width (width of text between tabs) values,
 instead of location (distance from left margin to tab).
[...]

 2. Modified gtktextview.c (or gtktextlayout.c - not sure which is a
 better place) to get and set PangoTabs' width and contents_width values
 as text is inserted/deleted etc. This makes up most of the code.
[...]

I talked some with Nick about this yesterday on IRC and wanted to
summarize here.

My personal opinion on the elastic tab idea is that it's quite neat, and
if tabs worked that way, my life would be better. But it's hard for me
to see how to get from the way that tabs work now to the elastic tabs
model ... I wouldn't want to commit code that used elastic tabs to a
public project because then it would be indented weirdly for everybody
who didn't have elastic tabs. So, I think it's better to focus on how we
can make it possible to implement an elastic-tabs editor with
GtkTextView rather than adding the feature itself to GtkTextView. 

It turns out that the PangoTab changes are mostly an optimization... to
be able to use the same structure to set tabs on a PangoLayout as to be
able to store the cached lengths of the different segments of a line.
So, I don't think they are needed in Pango.

It also seems that you should mostly be able to do what is needed for
GtkTextView using the public API from a subclass or application. There
is one missing piece:

 - You need to be able to tell when parts of the GtkTextView are 
   revalidated (that is, layout has happened for those lines.)

Right now, the only way to do this is to get the not-really-public
GtkTextLayout object for the GtkTextView and connect to its ::changed
signal. This is hackish from C (requires accessing a non-public
structure field), and much more hackish from Python (I gave Nick some
code using ctypes that manages to pull out the GtkTextLayout with some
effort.) And such hacks won't work at all in GTK+ 3.0. So, I think a
::range-validated signal on GtkTextView would be a good addition.
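
(For reference, the current hack from C looks roughly like this - a
sketch; it relies on a structure field and a header that are not public
API, and on_layout_changed is your own callback:)

  GtkTextLayout *layout = GTK_TEXT_VIEW (text_view)->layout;
  g_signal_connect (layout, "changed",
                    G_CALLBACK (on_layout_changed), NULL);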

It also would be useful to have a way of attaching user data to a line
of a GtkTextView. (In this case, what you want to store is a cache of
the segment widths for a line.) It is possible to keep a lookaside data
structure in sync with the GtkTextView, but it's a bit awkward and
inefficient.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: UTF8 strchug/chomp/strip

2008-08-04 Thread Owen Taylor
On Tue, 2008-07-29 at 11:27 -0400, ANDREW PAPROCKI, BLOOMBERG/ 731 LEXIN
wrote:
 I noticed that glib doesn't contain g_utf8_strchug, g_utf8_strchomp, 
 g_utf8_strstrip (macro), which would be UTF8 safe versions of the functions 
 in gstrfuncs.c.
 
 Conveniently, these functions already exists in file-roller, which I found 
 here:
 
 http://tinyurl.com/6xpjv9
 
 Can these be pulled down into gutf8.c / gunicode.h?

Well, not without relicensing from the authors.

Note that the non-_utf8_ versions work fine on UTF-8 strings as long as
you don't need to strip exotic whitespace. (Stripping exotic whitespace
is probably good if you are cleaning up user input, less necessary if
you are, say, parsing a config file.)
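
For example (a sketch):

  gchar *s = g_strdup ("  h\303\251llo \t");
  g_strstrip (s);   /* safe on UTF-8: only ASCII whitespace is removed,
                       and those byte values never occur inside a
                       multibyte UTF-8 sequence */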

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: setting up a gtk dev environment

2008-07-28 Thread Owen Taylor
On Sun, 2008-07-27 at 14:40 -0400, Patrick Hallinan wrote:
 On Sun, 2008-07-27 at 14:24 -0400, Paul Davis wrote:
  On Sun, 2008-07-27 at 14:08 -0400, Patrick Hallinan wrote:
   Hi,
   
   I wish to help with the development of gtk+ but I'm not having any fun
   trying to setup a build environment for gtk+.  I've looked for help at
   www.gtk.org/development.html and developer.gnome.org. I have tried using
   jhbuild from  http://svn.gnome.org/svn/jhbuild/trunk. No dice. 
  
  no dice doesn't really add up to a bug report on jhbuild. hundreds,
  perhaps thousands, of people use that as a way to build and maintain the
  GTK stack from svn. what was your problem with it?
  
 
 I guess you are saying that I should be using jhbuild to get a gtk+
 build environment?
 
 I'm using the subversion trunk for jhbuild which I didn't assume was
 stable.  I get the output below when I try jhbuild bootstrap  

In general, I'd strongly recommend against jhbuild bootstrap. It:

 - May install older versions of components than your system versions,
   causing weird problems
 - Increases the total amount of things you are building, giving
   more possibilities for failure.

It is definitely a bad idea for Fedora 9, which has nice shiny new
versions of everything. So blow away your install directory and
start over without the bootstrap, and you'll be happier.

 http://live.gnome.org/JhbuildDependencies/FedoraCore

Has information about what packages you need to install for Fedora.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: New subscriber: redesign of Unicode character tables (GLib)

2008-07-23 Thread Owen Taylor
On Wed, 2008-07-23 at 22:38 +0100, adrian.dmc wrote:
 Hi, I'm new here...
 
 My objective, for now, is to rework the Unicode support of GLib by
 redesigning its character tables, specially its size.
 I'll appreciate some guidelines and suggestion (Give up!! included).
 
 I'm currently reading/analyzing the Unicode Standard 5.1 and the GLib
 support for Unicode to later propose the redesign.

The Unicode tables in GLib were designed with a lot of attention to:

 - Minimizing size
 - Keeping lookups fast
 - Keeping data static and shared between multiple applications
 - Minimizing relocations

(The last two are closely related.) 

If you have weird application requirements (disk space is much more
expensive than RAM say), doing things like gzip compressing the data
tables would work.

Otherwise, I would basically say give up. I don't think it's a project
where you'll get traction easily... I'm sure it's possible to make
some incremental improvements if you are smart enough or persistent
enough, but it won't be easy.

Feel free to prove me wrong :-)

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: About GTK+ 3.0 and deprecated things

2008-07-16 Thread Owen Taylor
On Wed, 2008-07-16 at 11:20 +0200, Colin Leroy wrote:
 On Wed, 16 Jul 2008 09:51:03 +0100, Bastien Nocera wrote:
 
 Hi,
 
  IMO, if you're still using GtkCTree and GtkCList, which were
  deprecated when GTK+ 2.0 was released 6 years ago, you're asking for
  trouble.
 
 Well, they do work for us. When GTK+ 2.0 was released six years ago, we
 were already too busy with the rest of the rewriting code-that-worked to
 do it. Two years and nine days, exactly, between the first commit to
 the GTK2 branch and the first GTK2 release after 497 commits. And we
 never came to replace the GtkCtrees because a) they work and b) we
 didn't have the time/motivation.

Hmm, strangely most code worked fine with GTK+ 2.0 with a recompile...
(e.g., I remember doing that for gftp, not a trivial app.)

Porting away from GtkCList and GtkCTree was the main thing that took
significant work.

So, I'm not really sure what you were doing for 497 commits...

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: [compiz] color management spec

2008-06-15 Thread Owen Taylor
On Sun, 2008-06-15 at 11:52 +0200, Kai-Uwe Behrmann wrote:
 Am 14.06.08, 20:13 -0400 schrieb Owen Taylor:
  On Sun, 2008-06-15 at 01:38 +0200, Tomas Carnecky wrote:
   Owen Taylor wrote:
If the visual of a window is ARGB, it can't hold XYZ or Lab, or most
obviously CMYK data.
   
difficulties of per-monitor specifics... if I have an image in Adobe RGB
with 8 bits per primary, it might be interesting to preserve that
colorspace and use that as the colorspace on my toplevel, but that's not
done to avoid having colorspace conversion code in the application: it's
done to avoid losing gamut when converting it to something else.
   
   If you have an image in CMYK the best way to avoid losing gamut, just 
   like in your AdobeRGB example, is to let the CM convert it at the very 
   last stage. So it might be useful to be able to upload CMYK pixels into 
   the xserver and tag the data somehow. I don't see any conflict there as 
   long as the application and CM agree on the pixel format.
  
  I think there is a significant difference between having the CM deal
  with the exact interpretation of RGB in a RGB pixel format, and having
  the CM take random data stuffed into a RGB pixel and convert it - does
  the user still see something intelligible if the CM/app communication
  goes wrong? If a CM isn't running? If the user takes a screenshot of the
  window?
  
  Also, of course, CMYK isn't a precisely defined color space like sRGB;
  you'd have to accompany the window with a color profile.
  
Honestly, the only reason this is at all interesting is the limitations
of RGB 8:8:8. The long term solution here is support for something
like scRGB (http://en.wikipedia.org/wiki/ScRGB_color_space.)
   
    First the server would have to support buffers with at least 6 bytes per 
   pixel, which it does not currently. I would love to see support for 
   arbitrary pixel formats in the xserver. I would even go as far as having 
   the xserver create bare buffers and then tag them with the pixel format 
   / icc profile. After all, if a (OpenGL-based) CM is running the xserver 
   doesn't need to know anything about the windows besides the dimensions.
  
  There's certainly significant work there at all layers. I believe that
  there was discussion on the cairo mailing list of someone starting doing
  work within pixman, which is certainly a good place to start.
  
  As above, I'm not sure uninterpreted window buffers is a good idea... or
  really necessary. Better to support a small set of things well than a
  large set of things poorly.
 
  The idea is to have as few as possible colour conversions to gain speed, 
  be economic with memory resources and preserve precision. Another goal, 
  to bring fancy pixel buffers close to the DVI output connector, is to send 
  high bit depth channels over to the monitor device. 
  
  With dual link DVI connections we have had devices on the market for 
  quite some years. Recently they became, in the speech of colour management 
  experts, affordable. So there are reasons enough to drive this approach. 
  
  About scRGB, Windows has a very simple approach in mind with its kind of 
  single colour space solution, which might be powerful for its purpose. 
  The open source colour management community, read Scribus, Krita, 
  Ghostscript, Oyranos ... did not adopt such a thing in mind or in 
  practice. So citing scRGB will fall relatively short in this community. 
  We orient much toward ICC and possibly OpenEXR colour management. What is 
  good for the internet, and we support this heartily, might not be good for 
  the desktop, printing, movies and other arts.

[...]

You use "very simple" here as if it was an insult. 

Inherently, if you want to alpha blend, you want to use intermediate
buffers, and then you need a common colorspace. Reject a common colorspace,
and you've thrown out some of the most important tools in the graphics
toolkit.

Let me emphasize here that I'm not trying to sabotage the approach of
doing final color conversions in the CM. I think it's a fine project.

But what I'm arguing against here is putting in a lot of complexity - in
particular, tagging regions of a toplevel with different colorspaces -
to try and achieve future hypothetical advantages that are better 
achieved by using high precision pixel formats.

- Owen

[
If, in certain specialized areas, the precision of 16-bit floats is
insufficient, then certainly 128-bit pixel formats with 32-bit floats
are feasible with current hardware, if memory and bandwidth intensive. 
]


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: [compiz] color management spec

2008-06-14 Thread Owen Taylor
On Sat, 2008-06-14 at 12:51 -0700, Hal V. Engel wrote:
 On Wednesday 11 June 2008 08:33:04 am Owen Taylor wrote:
 
  [ Intentionally not trimming quoting much due to various bounces
  from lists ]
 
 
 
  On Wed, 2008-06-11 at 09:05 +0200, Kai-Uwe Behrmann wrote:
 
  snip
 
    Tagging the window with the image colour space will render in rather a
    mosaic of windows.
 
   I don't understand this.
 
 I think you are assuming that the image color space is some kind of
 gamma-corrected RGB color space. What if the image color space was a
 CMYK color space or XYZ or Lab? 

If the visual of a window is ARGB, it can't hold XYZ or Lab, or most
obviously CMYK data.

I don't see color management support at the CM level as being a
mechanism to simplify the writing of image handling code, it's a
mechanism to allow the user to see the widest range of accurate colors
possible on their monitor.

For any color space you might want to support on a toplevel you have to
ask what is the advantage. 

 sRGB: decent compromise between range and precision. Widely used
 standard.
 
 raw monitor color space: allows the application full control, but
 has difficulties in multi-monitor setups if the monitors aren't
 identical and calibrated identically.

The reason why you might want to support other color spaces is to get
greater gamut than is possible with sRGB without having to deal with the
difficulties of per-monitor specifics... if I have an image in Adobe RGB
with 8 bits per primary, it might be interesting to preserve that
colorspace and use that as the colorspace on my toplevel, but that's not
done to avoid having colorspace conversion code in the application: it's
done to avoid losing gamut when converting it to something else.

Honestly, the only reason this is at all interesting is the limitations
of RGB 8:8:8. The long term solution here is support for something
like scRGB (http://en.wikipedia.org/wiki/ScRGB_color_space.)

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: [compiz] color management spec

2008-06-14 Thread Owen Taylor
On Sun, 2008-06-15 at 01:38 +0200, Tomas Carnecky wrote:
 Owen Taylor wrote:
  If the visual of a window is ARGB, it can't hold XYZ or Lab, or most
  obviously CMYK data.
 
  difficulties of per-monitor specifics... if I have an image in Adobe RGB
  with 8 bits per primary, it might be interesting to preserve that
  colorspace and use that as the colorspace on my toplevel, but that's not
  done to avoid having colorspace conversion code in the application: it's
  done to avoid losing gamut when converting it to something else.
 
 If you have an image in CMYK the best way to avoid losing gamut, just 
 like in your AdobeRGB example, is to let the CM convert it at the very 
 last stage. So it might be useful to be able to upload CMYK pixels into 
 the xserver and tag the data somehow. I don't see any conflict there as 
 long as the application and CM agree on the pixel format.

I think there is a significant difference between having the CM deal
with the exact interpretation of RGB in a RGB pixel format, and having
the CM take random data stuffed into a RGB pixel and convert it - does
the user still see something intelligible if the CM/app communication
goes wrong? If a CM isn't running? If the user takes a screenshot of the
window?

Also, of course, CMYK isn't a precisely defined color space like sRGB;
you'd have to accompany the window with a color profile.

  Honestly, the only reason this is at all interesting is the limitations
  of RGB 8:8:8. The long term solution here is support for something
  like scRGB (http://en.wikipedia.org/wiki/ScRGB_color_space.)
 
 First the server would have to support buffers with at least 6 bytes per 
 pixel, which it does not currently. I would love to see support for 
 arbitrary pixel formats in the xserver. I would even go as far as having 
 the xserver create bare buffers and then tag them with the pixel format 
 / icc profile. After all, if a (OpenGL-based) CM is running the xserver 
 doesn't need to know anything about the windows besides the dimensions.

There's certainly significant work there at all layers. I believe that
there was discussion on the cairo mailing list of someone starting doing
work within pixman, which is certainly a good place to start.

As above, I'm not sure uninterpreted window buffers is a good idea... or
really necessary. Better to support a small set of things well than a
large set of things poorly.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: [compiz] color management spec

2008-06-11 Thread Owen Taylor
[ Intentionally not trimming quoting much due to various bounces from lists ]

On Wed, 2008-06-11 at 09:05 +0200, Kai-Uwe Behrmann wrote:
 Am 10.06.08, 17:56 -0400 schrieb Owen Taylor:
  On Tue, 2008-06-10 at 16:43 +0200, Tomas Carnecky wrote:
   Added gtk-devel-list@gnome.org to hear their opinion about this matter. 
   For reference, this is what I proposed:
   http://lists.freedesktop.org/archives/xorg/2008-May/035772.html
   
   Danny Baumann wrote:
     Hi,
 
     I strongly dislike supporting subwindow ID/profile tuples. The task
     of window and compositing managers is and always has been to manage
     and draw _toplevel_ windows, not subwindows. I don't really think
     that adding a subwindow management infrastructure to compositing
     managers just for saving some lines of code in the toolkit (and not
     even all of them) is an overly good idea.
 
    It's not just for 'saving some lines of code in the toolkit'.
    Color management would require significantly more code in the
    toolkit and would most likely be slower than if it is done in
    the compositing manager.
 
    I was just talking about communicating using subwindow id/profile
    tuples vs. communicating using toplevel window region/profile
    tuples. The former would save a bit of code in the toolkit, but
    would complicate compositing managers significantly; which is why
    I strongly prefer the latter.
   
   The compositing manager would never actually draw subwindows, just 
   merely use them to identify regions.
   
   When using properties on the top level window, the toolkit would have to 
   explicitly update those whenever the window is resized. But when using 
   subwindows, the toolkit (at least gtk) wouldn't have to do anything 
   special. In gtk, each widget that uses a subwindow resizes it when the 
   top level window is resized. The compositing manager would just 
   subscribe to ConfigureNotify events of the subwindows and be done.
  
  If I was working on a new toolkit from scratch it would most likely have
  no subwindows, or a very, very limited use of subwindows.
  
  In the case where you do have subwindows, future toolkits will commonly
  act as compositing managers for their own subwindows, so a subwindow
  does not necessarily represent an integer-pixel-bordered region of the
  window.
  
  I have trouble seeing the idea of separate profiles for subwindows
  as being a good idea. There are also other problems like:
  
   - There's no easy way to get or be notified of changes to the 
 clip region of a window in X. If a subwindow with a separate
 profile was partially obscured by another subwindow, the compositing
 manager would have trouble tracking that.
  
   - If there was inter-process embedding, the ID's of subwindows with
 separate profiles would have to be propagated up to the toplevel,
 which would be a pain. (You don't seem to have a
 WM_COLORMAP_WINDOWS equivalent, but one would be needed.)
 
 Are colour maps applicable in the range of this project? I'd guess that 
 OpenGL cards with the necessary features for compiz would run almost 
 always in a true visual mode?

WM_COLORMAP_WINDOWS is just an analogy; in the same way that
WM_COLORMAP_WINDOWS identifies subwindows that have different colormap,
you would need a property to identify subwindows with different color
profiles. *If* you wanted to put color profiles on subwindows (something
that I think is a bad idea.) The expense for the compositing manager to
monitor all subwindows of each toplevel for property changes would be
extreme.

  The _NET_COLOR_MANAGER client message also introduces a whole lot of 
  complexity for toolkit authors.
  
  I assume that the problem you are trying to solve here is:
  
   A) Photo app has a embedded image in a wider-than-SRGB-colorspace
  plus some random toolbars
   B) You don't want to convert the image to SRGB and lose gamut that the
  monitor might actually be able to reproduce
 
   C) Deploy a fast colour conversion path on the GPU rather than the CPU
 
   E) Manage the whole desktop at once, like it is displayed at once.
 
  While the suggestion of subwindow tracking is seductive, I don't think
  it really works out. So, you need to go with one of the other
  possibilities - either you export the monitor profile to applications
  and allow applications to mark windows as being in the monitor profile,
  or you put the whole window into the image colorspace. (Using the
  monitor colorspace obviously works better if there are multiple images
  with significantly different gamuts in the same toplevel.)
 
 Tagging the window with the image colour space will render in rather a 
 mosaic of windows.

I don't understand this.

 The other suggestion is covered by the _ICC_PROFILE_xxx atom but misses a 
 practical use case.

What use case?

  Either way, this end up with the question ... how do you get a
  toolkit dealing with some non-SRGB

Re: RGBA Colormaps in GtkWindow

2008-06-01 Thread Owen Taylor
On Sat, 2008-05-31 at 02:52 +0200, Andrea Cimi Cimitan wrote:

[...]

   - Now you have another issue to deal with: if a compositing manager
     stops or starts or the theme changes, then you might have to change
     a GtkWindow on the fly from RGBA to non-RGBA. This means unrealizing
     it and realizing it again. You need to do another round of testing
     similar to the round above to make sure that this won't cause
     problems for applications.
 
 
 No, I *hope* you're wrong (even if you know these topics a billion times
 better than myself :-) ).
 As far as I know RGBA is independent from the compositing manager:
 RGBA is available if your X server is running with the composite
 extension (or the parallel of Quartz, since in OSX
 get_rgba_colormap(screen) returns a working rgba colormap). So you can
 have a RGBA colormap with fluxbox, openbox, or metacity *without* the
 compositing. And you can run windows without any issue, example:
 gnome-system-monitor is using a RGBA colormap, did you see any issue
 enabling/disabling compositing? no.

Well, yes, and no. The way it works is that if you use an RGBA window
that window will always be composited, even if you don't have a
compositing manager running. The compositing is done by the default CM
inside the X server.

This will significantly change the performance profile of the
application in good ways and bad ways:

 - There will be no exposes on the window (good)
 - Hardware acceleration on the window may be disabled
 - The window may be forced out to local system memory
 - More video memory may be used
 
Whether the net result is positive or negative is going to be hard to
say, and will depend greatly on the hardware and software of the system.

 It is *ready*, but the alpha channel should be pure black if you try
 to use it, but you won't of course :).
 I have written a Gtk+ engine that works as follows: 1) it checks if
 the screen has a rgba colormap, 2) if the window has a rgba colormap
 too; finally, 3) if both are true AND 4) you're running a compositing
 window manager (gdk_screen_is_composited()), 5) then it draws using the
 alpha channels.
 In that way you'll *NEVER* have any black pixels, and any crashes
 (keep using since february).
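
(In GTK+ 2.x terms, that check is roughly - a sketch:)

  gboolean
  can_draw_translucent (GtkWidget *widget)
  {
    GdkScreen *screen = gtk_widget_get_screen (widget);
    GdkColormap *rgba = gdk_screen_get_rgba_colormap (screen);

    return rgba != NULL
        && gtk_widget_get_colormap (widget) == rgba
        && gdk_screen_is_composited (screen);
  }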

I think this gets at the important thing: we should not be triggering
application problems at handling RGBA visuals when the theme is taking
no advantage of it. If I start up with GNOME desktop with a simple
theme, I should get an RGB colormap.

Once you say that, you either have to do the unrealize/realize thing, or
you have to log out and log back in again to switch to a
fancy theme.

[...]

    In particular for this, you would want to test out applications that
    embed OpenGL or Xvideo within them.
 
    If there are problems, then again, you would need a GtkWindow
    property (unrealize-ok) to declare that it is safe to unrealize
    and realize the window again.
 
 Following the same topic above, I have tested a lot of applications
 running in RGBA: when you switch from a non-composited window
 manager to a composited one, the windows immediately acquire
 transparency *without any issue/glitch*! Works great, really.
 So you don't have to unrealize, since applications keep working great
 and opaque also with a RGBA colormap under a non-composited window
 manager, and if you switch you don't have to bother with those
 realizing steps (see gnome-system-monitor, it works flawlessly). 
 The only issue you may have is that RGBA must be assigned _before_
 realizing, as you know, but by the time you get it you don't mind
 about unrealizing to get plain RGB. This means that the applications
 must read the xsetting/variable _before_ realizing (gtk+ must do this)

I would be shocked if there was ever a checkbox in GNOME that said:

 [X] Enable alpha-transparent themes (may break some applications)

RGBA visual usage needs to be done automatically based on the needs of
the theme and the applications and it needs to be done in a safe way.

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: RGBA Colormaps in GtkWindow

2008-06-01 Thread Owen Taylor
On Sat, 2008-05-31 at 12:04 +0200, Andrea Cimi Cimitan wrote:

 WxWidgets could be bugged too... By the way, in my opinion, 
 it's so useless for us to continue quoting every single bug:
 we're sure there will be, maybe a lot.

I'm 100% of the opposite opinion. In order for the GTK+ maintainers
to make informed choices, there needs to be a page somewhere
listing:

 - Every app that is known to break
 - *Why* those apps are breaking
 - Links to patches in bugzilla where possible

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: RGBA Colormaps in GtkWindow

2008-05-30 Thread Owen Taylor
On Fri, 2008-05-30 at 18:33 +0200, Giuseppe Fuggiano wrote:
 Hi all,
 
 I am working on a patch to make GtkWindow use an RGBA colormap, in
 order to add transparency to each GTK+ application, making it possible
 for themers and artists to do fancy things, but not only that [0].
 
 Actually, after some tests we discovered that not all applications are
 really ready for this change.
 
 If an application uses a tray icon like XChat or Pidgin, it crashes
 because of an Xorg badmatch.
 
 Months ago, Ryan "desrt" Lortie proposed some new specs for tray icons,
 which were accepted [1], but not implemented yet.
 
 Also, some developers would like to have an xsetting to control the
 colormap, some others (like Owen Taylor) would like an environment variable.
 
 I'd like to understand better the tray icon issue to try to fix it. Is
 there anyone could help me in that sense?

I think this issue is both a lot simpler and a lot harder than you are
making out:

 - The status icon issue is *only* an issue between GtkStatusIcon, 
   the tray icon protocol, and gnome-panel. It:

- it can be worked around in the gnome-panel by creating a wrapper
  window if it detects an icon with a non-matching visual
 - it can be avoided by simply making the window for
   GtkStatusIcon never use an RGBA window, independent of your setting.
- it can be fixed properly by extending the tray icon protocol 
  to declare whether the tray wants RGBA icons or not. Ryan's
  proposal is workable, though I would do it a little differently - 
  put the visual (or colormap) ID in a property.

  - Once you fix the GtkStatusIcon issue, you need to test a wide
variety of apps (especially less native ones like Firefox, 
Adobe reader, etc.) to make sure that using an RGBA visual doesn't
cause other sorts of crashes or misbehavior.

If it does cause crashes, you need to investigate those crashes.
If they can't be worked around, then we'd need to make apps declare
window-by-window that an RGBA visual is OK - a rgba-ok property
on GtkWindow, so to speak.

  - If the tests are successful, at that point, what you would need
to do is add a *style property* for GtkWindow that says that 
the theme wants an RGBA window.

Despite the theme wanting an RGBA window, it might not get it - 
you can only use RGBA windows if a compositing manager is running.

  - You'll need to extend GDK to give notification when a compositing
manager starts or stops .. something GDK doesn't have currently
as far as I know.

  - Now you have another issue to deal with: if a compositing manager
stops or starts or the theme changes, then you might have to change
a GtkWindow on the fly from RGBA to non-RGBA. This means unrealizing
    it and realizing it again. You need to do another round of testing
similar to the round above to make sure that this won't cause
problems for applications.

In particular for this, you would want to test out applications that
    embed OpenGL or Xvideo within them.

If there are problems, then again, you would need a GtkWindow
property (unrealize-ok) to declare that it is safe to unrealize
and realize the window again.
 
That's basically the program that would need to be followed. Fooling
around with environment variables or XSETTINGS is pointless.

- Owen


___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GtkSocket Doesn't Recieve Events

2008-03-10 Thread Owen Taylor

On Mon, 2008-03-10 at 17:44 +0200, natan yellin wrote:
 Hello,
 
 I'm not sure if it's a bug, but GtkSocket only receives keypress and
 release events.
 
 Is there a way to work around this?

Why do you want events on a GtkSocket?

- Owen



___
gtk-list mailing list
gtk-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-list


Re: simple widget to draw on ?

2008-03-05 Thread Owen Taylor

On Wed, 2008-03-05 at 09:10 +0100, Sven Neumann wrote:
 Hi,
 
 On Tue, 2008-03-04 at 20:36 +0100, Sven Neumann wrote:
 
  Even if this is classified as a theme bug, it would still be nice to
  provide a simple way to draw without introducing an extra output window.
  If the patch attached to bug #519317 is accepted, GtkDrawingArea could
  serve this purpose.
 
 Xan Lopez added a comment to this bug-report asking that API should be
 added to get and set the NO_WINDOW mode like in GtkFixed. I don't think
 this is necessary as it would only duplicate GTK_WIDGET_SET_FLAGS(). But
 I would like to get another opinion on this...

GTK_WIDGET_SET_FLAGS is protected API; also, there is no way a GUI
builder would know which widgets you could toggle into 
no-window mode, and no way to express setting the NO_WINDOW flag in 
GtkBuilder. And there is no notification or handling of it when the
widget is already realized.

There's a reason GtkFixed and GtkEventBox have API. (Why they don't have
the *same* API, I don't know...)
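
(For reference, the two differently-shaped setters:)

  gtk_fixed_set_has_window (GTK_FIXED (fixed), TRUE);
  gtk_event_box_set_visible_window (GTK_EVENT_BOX (box), FALSE);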

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: glib utf8 api

2008-03-04 Thread Owen Taylor

On Tue, 2008-03-04 at 16:24 -0800, Gregory Sharp wrote:
 Thanks so much Owen and Bedhad for your response.
 
   1) There seems to be no good way to strncpy a utf8 string 
   into a fixed buffer.  g_strncpy doesn't work, because the 
   last character can get truncated causing an invalid string. 
  
   g_utf8_strncpy doesn't work either, because I don't know 
   how many characters fit in the buffer.
  
  Doesn't strike me as a useful operation. Easy enough to write
  yourself with
  g_utf8_get_char()/next_char()/g_unichar_to_utf8().
 
 May I try to convince you that it is useful?  For good or 
 evil, it is still common to copy strings into fixed length
 buffers.  That is why functions like strncpy exist in 
 the standard C library.  It is not expected that everyone 
 write his own strncpy, even though it is easy, because we 
 all benefit from using the copy in the library.

Behdad certainly has a lot more influence than I do about what goes into
GLib these days ... he actually commits things! So, convincing me isn't
really necessary.

We generally have discouraged people from using fixed-size buffers...
GString, g_strdup_printf(), g_strconcat(), 
g_markup_printf_escaped(), etc. are generally much more robust, more
convenient and safer ways to build strings. 
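
For example (a quick sketch - "name" and "count" stand in for whatever
you are formatting):

  GString *buf = g_string_new (NULL);
  g_string_append_printf (buf, "%s (%d)", name, count);
  gchar *result = g_string_free (buf, FALSE);  /* caller owns; g_free() later */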

Other than that, I can only offer that I've never felt that I needed a
"truncate a Unicode string cleanly at N bytes" operation. (Noting that
even if you preserve character boundaries you might strip accents, break
up clusters, etc.) And I've never seen it in anybody else's GLib code
either. So, it clearly isn't an essential operation. But could it be
occasionally useful? Sure. As could hundreds of other functions.

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: glib utf8 api

2008-03-03 Thread Owen Taylor

On Sun, 2008-03-02 at 14:49 -0800, Gregory Sharp wrote:
 Hi, I'm new to glib, and have questions/comments about
 the utf-8 API.
 
 1) There seems to be no good way to strncpy a utf8 string 
 into a fixed buffer.  g_strncpy doesn't work, because the 
 last character can get truncated causing an invalid string.  
 g_utf8_strncpy doesn't work either, because I don't know 
 how many characters fit in the buffer.

Doesn't strike me as a useful operation. Easy enough to write
yourself with g_utf8_get_char()/next_char()/g_unichar_to_utf8().
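
A minimal sketch of such a copy (assumes src is valid UTF-8 and
buf_len >= 1; only whole characters are copied):

  #include <string.h>
  #include <glib.h>

  static gsize
  utf8_buffer_copy (gchar *dest, const gchar *src, gsize buf_len)
  {
    gsize used = 0;
    const gchar *p = src;

    g_return_val_if_fail (buf_len > 0, 0);

    while (*p)
      {
        const gchar *next = g_utf8_next_char (p);
        gsize char_len = next - p;

        if (used + char_len + 1 > buf_len)   /* +1 for the trailing NUL */
          break;
        memcpy (dest + used, p, char_len);
        used += char_len;
        p = next;
      }
    dest[used] = '\0';
    return used;
  }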

 2) There seems to be no way to create a best guess valid
 string.  g_utf8_validate is nice and all, but if validation 
 fails I still need to create a valid string.  Am I supposed 
 to use g_convert_with_fallback() from UTF-8 to UTF-8?

No, g_convert() needs input in the character set you specify.
The fallback is for characters not in the output character
set.

There are lots of different things you might want to do for
an force to valid function:

 - Try to guess the real encoding
 - Drop invalid sequences
 - Replace invalid sequences with replacement characters or "?"
 - Replace invalid sequences with hex escapes 
   (The GLib logging functions do this)

I guess I could see a point for including some function along these
lines in GLib, though it's not too hard to write your own.
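
A sketch of the "replacement character" variant, assuming str is
NUL-terminated:

  GString *out = g_string_new (NULL);
  const gchar *p = str;
  const gchar *end;

  while (!g_utf8_validate (p, -1, &end))
    {
      g_string_append_len (out, p, end - p);   /* the valid prefix      */
      g_string_append_unichar (out, 0xFFFD);   /* REPLACEMENT CHARACTER */
      p = end + 1;                             /* skip one invalid byte */
    }
  g_string_append (out, p);
  /* g_string_free (out, FALSE) then yields the cleaned copy */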

 3) If validated utf8 strings are fundamentally different from 
 unvalidated strings, shouldn't they use a different C type?

I don't think this type of thing usually makes sense.
strlen() takes a char *. It can be used on validated UTF-8,
or on a random sequence of bytes.

 4) What are the developers' reaction to camel_utf8_getc() 
 on this page: http://www.go-evolution.org/Camel.Misc

Apparently they were useful to the camel authors. However,
from timings I did:
 
g_utf8_get_char() => g_utf8_get_char_validated()
g_utf8_next_char() => g_utf8_find_next_char()

Are both quite noticeable slowdowns, not to mention other
issues (like keeping your handling of invalid characters
consistent, keeping track of input/output indexes, etc)
when iterating through possibly invalid input. 

Generally, validating at the boundaries is a better approach.

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GLib and 64-bit Windows

2008-01-30 Thread Owen Taylor

On Tue, 2008-01-29 at 15:12 +0100, Tim Janik wrote:

 2008-01-29 14:58:31  Tim Janik  [EMAIL PROTECTED]
 
  * glib/gmem.[hc]: changed size argument type from gulong to gsize as
  discussed on gtk-devel-list:

 http://mail.gnome.org/archives/gtk-devel-list/2007-March/msg00062.html
  this should be ABI compatible on all platforms except win64 for which
  no ABI binding port exists yet.
 
  There *are* platforms where gssize is an unsigned integer rather than an
  unsigned long, but my general feeling is that just changing the gmalloc
  prototypes is unlikely to cause problems; GMemVTable, which would be

So, changing the gmem.h prototypes actually broke compilation of
gnome-keyring. (It was passing g_realloc into a function that took a
function pointer; causing a warning, and with -Werror on, a fatal
error.)

My feeling is that this is *probably* OK (in the context of
between-major-version releases), though if more problems start showing
up, changing the prototypes only for Win64 might be necessary.

  more likely to cause warnings already has gsize there.
 
 i suppose you mean gsize (which is always unsigned), because gssize is
 always signed.

I'm not sure what you are asking here. What I was saying is that 
changing vtable members is more likely to break things than changing
function prototypes.

  There are going to be other situations however, where the fix isn't so
  obvious.
 
  - When 64-bit proofing the Unicode string bits of GLib (years ago)
I took the policy that:
 
 - Counts of bytes were sized as gsize
 - Counts of items sized larger than a byte are longs
 
because it seemed very strange to me to use size_t for non-byte
counts.
 
 C++ does this all the time though (also calls its .get_number_of_elements()
 methods .size()), so you get used to it after a bit of STL fiddling.
 
 But that means that something like the return value from
 g_utf8_strlen() is wrong for win64. This can't be changed in a
 source-compatible fashion.
  
 Probably the right thing to do for g_utf8_strlen() is to compute
 things internally as 64-bit, then clamp the result to 32-bits
 on return. Without the clamp:
  
   long size = g_utf8_strlen (str, -1);
   gunichar *chars = g_new (gunichar, size);
   gunichar *c = chars;
   for (char *p = str; *p; p = g_utf8_next_char (p))
     *c++ = g_utf8_get_char (p);
  
 is a potential buffer overflow, though a hard one to trigger.
 (Actually, it's a potential overflow currently for 32-bits. We really
 should make g_new0() not a g_malloc()-wrapping macro so we can protect
 the multiplication.)
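
(A sketch of the multiplication check being suggested there:)

  static gpointer
  checked_alloc_n (gsize n_elements, gsize element_size)
  {
    if (element_size != 0 && n_elements > G_MAXSIZE / element_size)
      g_error ("integer overflow in allocation size");
    return g_malloc (n_elements * element_size);
  }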
 
 if i understand you correctly, you mean to imply that we also fix the
 signatures from *long to *size as well for the following functions
 (comprehensive list of *long API in glib/glib/*.h):
 
 
 gdouble  g_timer_elapsed (GTimer  *timer,
gulong  *microseconds);
[...]

No, I didn't mean that, because

 gdouble  g_timer_elapsed (GTimer  *timer,
                           size_t  *microseconds);

 gulong microseconds;

 g_timer_elapsed (timer, &microseconds);

Will warn in many many situations on many platforms, and MSVC will warn
about:

 gsize g_utf8_strlen (const gchar *p,  
  gssize   max);

 long value = g_utf8_strlen(p, max);

even when compiling for 32 bits. So I don't consider changing out
parameters and returns from long => size_t compatible.

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GTK+ recent manager and RTL issues

2008-01-22 Thread Owen Taylor

On Mon, 2008-01-21 at 18:51 -0500, Behdad Esfahbod wrote:

 D) Another fix, not easy to implement right now:
 
 ELIF
   +-+
   |NEPO |
   +-+
   |txt.OLLEH .1 |
   |hello.txt .2 |
   |  hello world.txt .3 |
   |it was a dream... .4 |
   +-+
 
 
 Case (D) is not easy to implement right now.  It needs one to render
 the number and the filename as separate fields.  I plan to add pango
 attributes to make it easier, like in HTML for example.  This is the
 tracker for that:
 
   http://bugzilla.gnome.org/show_bug.cgi?id=70399
 
 Note that if you knew the direction of the subtext, you could get away
 with sandwiching it between LRE/PDF or RLE/PDF, but there's no neutral
 bidi embedding character in Unicode.  So it needs to be implemented in
 markup.

You could insert a tab, right? Unfortunately, then you'd get the space
from a tab as well...
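
(For reference, the LRE/PDF sandwich mentioned above looks like this
when the direction *is* known - a sketch, with "filename" assumed to be
UTF-8 text known to be left-to-right:)

  /* U+202A LEFT-TO-RIGHT EMBEDDING ... U+202C POP DIRECTIONAL FORMATTING */
  gchar *wrapped = g_strdup_printf ("\342\200\252%s\342\200\254", filename);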

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GtkInvisible and the style-set signal

2008-01-18 Thread Owen Taylor

On Fri, 2008-01-18 at 16:47 +0100, Carlos Garnacho wrote:
 Hi!,
 
 In order to improve the "locate mouse" functionality in
 gnome-settings-daemon, I've tried to attach a GdkWindow to a
 GtkInvisible in order to paint to it; it works nicely, with one
 exception: GtkInvisible doesn't receive any style-set signals, so if I
 use theme colors when painting, changing theme doesn't have any effect.
 
 Tim kindly pointed me to this piece of code:
 
 static void
 gtk_invisible_style_set (GtkWidget *widget,
GtkStyle  *previous_style)
 {
   /* Don't chain up to parent implementation */
 }
 
 Now, I'm wondering, is this for some reason? Or is it just a product of the
 assumption that you'll never need that signal in an invisible widget?

It's been a long time, but I suspect that the problem was that at least 
in some cases, the GtkWindow style-set default handler could result in
setting the background pixmap for widget->window, and that's going to be
an X error when widget->window is INPUT_ONLY.

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Static compose table in gtkimcontextsimple.c

2007-12-06 Thread Owen Taylor
On Thu, 2007-12-06 at 12:28 +, Paul LeoNerd Evans wrote:
 On Tue, 04 Dec 2007 05:38:56 +
 Simos Xenitellis [EMAIL PROTECTED] wrote:
 
  If you would like to help with bug 321896 it would be great. The current 
  state is on how to make the table much smaller, even with the addition of
  more keysyms. There is a script that converts en_US.UTF-8/Compose into a
  series of arrays that should be easy for GTK+ to work on. 
 
 OK, I've had a good read through that bug, and now I'm confused again.
 
 Can someone explain why GTK has to have this large table compiled into
 it..? I thought X itself provided ways to perform input composition into
 Unicode strings. Otherwise, why do I have a file
 
   /usr/share/X11/locale/en_US.UTF-8/Compose
 
 Can we just use that?

Note also that loading /usr/share/X11/locale/en_US.UTF-8/Compose causes
a large amount of per-process memory to be allocated, and quite a bit of
time spent parsing it. While the GTK+ table is large, it is mapped
read-only and so shared between all GTK+ applications. (*) (**)

I don't have any exact or recent numbers here; the Compose table was a
significant fraction of the per-process overhead when I measured it
before writing gtkimcontextsimple.c, and current UTF-8 table is much
bigger than anything I measured. On the other hand, it's possible that
optimization has been done within Xlib in the subsequent 5-6 years.

The original motivations in order of priority:

 1. Reliable compose sequences in non-UTF-8 locales
 2. Efficiency
 3. Cross-platform portability
 
1. is luckily no longer an issue, but the other two still apply.

- Owen

(*) The Xlib problem could obviously be fixed by precompiling and
  mem-mapping the Compose tables, as we do for similar things.

(**) The one thing to be careful about when modifying
gtkimcontextsimple.c is not to save size by introducing relocations.
Arrays that include pointers to other arrays cannot be mapped read-only.
Other than that, go for it!
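
(A sketch of the difference:)

  /* relocations: the pointers need runtime fixup, so these pages
     end up process-private: */
  static const char *const sequences[] = { "ae", "oe" };

  /* no relocations: offsets into one string block can stay
     read-only and shared between processes: */
  static const char seq_data[] = "ae\0oe";
  static const guint16 seq_offsets[] = { 0, 3 };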



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: Static compose table in gtkimcontextsimple.c

2007-12-06 Thread Owen Taylor

On Thu, 2007-12-06 at 17:30 +, Paul LeoNerd Evans wrote:
 On Thu, 06 Dec 2007 12:12:39 -0500
 Owen Taylor [EMAIL PROTECTED] wrote:
 
  Note also that loading /usr/share/X11/locale/en_US.UTF-8/Compose
 
 That's not quite what I meant.
 
 What I meant was, I thought that the X11 server did some of this work
 for us? So can we not ask it to do that?
 
 Or have I misunderstood how it works, and that this is really a
 clientside thing done by Xlib?

The latter.

- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: keyboard functions

2007-12-03 Thread Owen Taylor

On Sat, 2007-12-01 at 14:19 +0100, linux user wrote:
 Every day it is more and more necessary to master the keyboard,
 especially in applications running in globalized contexts which work
 with different languages, so one needs to switch/change the configuration
 of characters (letters) across the keyboard (according to those
 languages). Or more exactly, with the same keycodes (because usually
 the same keyboard) to produce different letters according to the preferred
 layouts of each language.
 
 X Window System foresaw this situation and provided us with some
 mechanisms... among them, we have the configuration files placed (at
 least on my computer) in /usr/share/X11/xkb and /etc/X11. Default
 keyboard configurations (layouts & variants, mainly, right here...)
 -set in /etc/X11/xorg.conf- are defined in /usr/share/X11/xkb/...
 Furthermore, if an option like grp:switch,grp:shift_toggle or
 grp:switch,grp:alt_shift_toggle is provided, it's possible to switch
 among two, three or four keyboards (= character configurations).
 
 It's also possible, in a graphical console, to use the command
 setxkbmap, and in
 Gnome Dektop Environment, we have the gnome-keyboard-properties
 application. But... Why the silence in the GTK+ users list when I ask
 the way to do it in my GTK+ applications ?

Perhaps:

http://freedesktop.org/wiki/Software/LibXklavier

Would help you. These features will not be added to GTK+ itself because:

 - They are only useful for a very small number of applications

 - They are not useful for applications running inside a desktop
   environment like GNOME or KDE.

 - They are X specific. Other systems configure keyboards very
   differently.

 - There's no advantage in adding them to GTK+, since they are X
   specific. Applications can use libXkb and libXklavier 
   if they need the functionality.
 
- Owen



___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: gtk_bindings_activate

2007-12-01 Thread Owen Taylor

On Sat, 2007-12-01 at 19:16 +0300, Evgeniy Ivanov wrote:
 Hi! I think there is a bug in the gtk_bindings_activate from
 gtkbindings.
 I'm not sure, thus I didn't open it.
 
 Here is an example of what happens:
 
 GtkWidget *entry;
 entry = gtk_entry_new (); 
 //... Some routines like settext
   int modifiers = 0;
   modifiers |= GDK_CONTROL_MASK;
 gtk_bindings_activate(G_OBJECT(entry),118,modifiers); //118 is the
 keycode of 'V', and it works (text is pasted). 
 gtk_bindings_activate(G_OBJECT(entry),1741,modifiers);
 /*
 1741 is the code of Cyrillic_em, which is located on the same physical
 key as 'V'. So this shortcut should work too, but
 gtk_bindings_activate returns FALSE. 
 What's wrong? Should it work, or is the idea of gtk_bindings_activate
 something else?
 */

The full handling is only present when you use 
gtk_bindings_activate_event(). By the time that you go from an event to
a keyval, needed information has been lost.
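
(A sketch - in a key-press handler, feed the raw event through so the
group/level information survives; on_key_press is your own handler:)

  static gboolean
  on_key_press (GtkWidget *widget, GdkEventKey *event, gpointer data)
  {
    return gtk_bindings_activate_event (GTK_OBJECT (widget), event);
  }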

- Owen

(gtk_bindings_activate() is basically just there for compatibility - it
existed before the fancy handling that does the Cyrillic vs. Latin on
the same key was implemented.)




___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
http://mail.gnome.org/mailman/listinfo/gtk-devel-list

