any ideas for a fix?

2014-08-19 Thread Gary Kline
Organization: Thought Unlimited.  Public service Unix since 1986.
Of_Interest: With 28 years  of service  to the  Unix  community.

guys,

last time I enclosed this leftJ* code, it output a 700 by 900
label with label1, label2, label3.  in labelWidgets.h I've got
a *label[32], and use a global tt=0 that is incremented in a for loop.
the gcc line builds ./leftJ after you unshar or simply run sh against
the sharball.

it doesn't segv or anything; but it only prints the last line, WITH
complaints.  can anybody help me here?

tia,

gary

Attached: leftJustify.shar

-- 
 Gary Kline  kl...@thought.org  http://www.thought.org  Public Service Unix
 Twenty-eight years of service to the Unix community.


___
gtk-app-devel-list mailing list
gtk-app-devel-list@gnome.org
https://mail.gnome.org/mailman/listinfo/gtk-app-devel-list


Re: Desktop performances (was: Re: GTK+ scene graph)

2014-08-19 Thread Emmanuele Bassi
hi;

On 18 August 2014 19:49, Colomban Wendling lists@herbesfolles.org wrote:
 Hi,

 On 18/08/2014 19:55, Emmanuele Bassi wrote:
 [...]

 when we introduced GL in the pipeline through the compositor 5 years
 ago, stuff that was 5 years old *at the time* already could run
 decently.

 Not completely, no.

yes, completely, in terms of hardware capabilities. when it comes to
driver capabilities, the only way to improve the situation is actually
*fixing* the drivers, not ignoring the issue and hoping that this fad of
multiple dedicated CPU and GPU cores for separate tasks is going away,
because it's not.

the driver landscape has dramatically improved in the last 8 years,
and that happened because projects outside of the random game port
started relying on these features. well, the driver landscape has
improved even more since people started porting games as well. the
clear commonality is that, if people need it, stuff gets fixed.

  I had (and still have) a really decent '07 card
 (non-integrated 6600GT on a desktop) that always was largely sufficient
 for everything I wanted -- desktop, development, multimedia, even most
 3D Linux games would run smoothly.  It worked like a charm for
 everything when I ran non-compositing WM (I used GNOME2/Metacity at that
 time), and it was able to perform accelerated video decoding through
 VDPAU, which was very nice as my CPU of that time couldn't really cope
 with full-HD decoding -- and my current one is just barely able to keep up.

that seems to be an issue in the compositor, or again in the drivers.

 So even though compositing WMs looked like a nice idea originally, I'm
 really not convinced anymore (or at least the X11/mutter impl is wrong,
 I didn't try anything else seriously).

desktop compositing is the only way to provide you with a correct,
reliable, and possibly not resource intensive environment. that's why
everybody else, like Apple or Microsoft, has spent years improving
their graphic stack and display servers in order to implement this
functionality. that's one of the tenets of Wayland as well, since we
finally have a sporting chance of not having a joke acting as our
display server.

in any case, we digressed considerably from my original point, which
was using hardware acceleration to draw applications — which is not
even entirely correct: the plan is to use OpenGL to do compositing of
image surfaces inside the application (except possibly for videos, in
which we should use Wayland subsurfaces mapped to hardware overlays in
case they are available), because that's the only API that we can
actually rely on, until we have a better API shared among vendors.
whether that API is actually implemented in hardware or in software I
don't particularly care. if you *do* care, then feel free to start
contributing to Mesa and to the free software drivers stack, instead
of asking for GTK+ to continue down the path of irrelevance because
you conjured out of thin air a hard requirement for it to run on a single
core machine from 2004 with a Matrox G200 card as fast as it can on a
quad core machine from 2014 with an nVidia discrete GPU.

ciao,
 Emmanuele.

-- 
http://www.bassi.io
[@] ebassi [@gmail.com]
___
gtk-devel-list mailing list
gtk-devel-list@gnome.org
https://mail.gnome.org/mailman/listinfo/gtk-devel-list


Re: GTK+ scene graph

2014-08-19 Thread Emmanuele Bassi
hi;

On 18 August 2014 19:09, Paul Davis p...@linuxaudiosystems.com wrote:

 On Mon, Aug 18, 2014 at 1:55 PM, Emmanuele Bassi eba...@gmail.com wrote:
 [ .. ]

 realistically, by comparison with other platforms and with many home-grown
 GUI toolkits, there's really only one sane design for any toolkit in 2014:

* recursively nested objects with a common draw virtual method that
 renders onto a surface (*) as the only way pixels get drawn
* access to a GL surface when and if necessary
* packing and layout orthogonal to drawing from the application code
 perspective

 anything that moves GTK closer to this is good. anything that moves it
 further away or impedes motion towards it is bad.

 (*) presumably cairo for 2D and GL for 3D but smart people can disagree
 about this.

if we ignore the GL side, then good job: GTK+ 3.12 is already pretty
much that. except, you know:

 * the API being weird
 * the lack of a box model for CSS to draw on
 * the lack of support for 3D transforms and animations
 * the lack of an animation API capable of implementing dynamic layout
management policies

and a bunch of other smaller details.

in any case, the end goal is to make GTK+ 4.0 a better API for writing
dynamic and compelling applications for a variety of form factors.
maybe provide enough building blocks for people to write their own
smaller canvases/toy toolkits on top, in order to satisfy ad hoc
requirements.

ciao,
 Emmanuele.



Re: GTK+ scene graph, API deprecations

2014-08-19 Thread Alexander Larsson
On Mon, 2014-08-18 at 18:09 +0200, Sébastien Wilmet wrote:
 On Mon, 2014-08-18 at 10:01 -0400, Jasper St. Pierre wrote:
  Because every time we try to clean up GtkTreeView, we break some random
  application. It's a widget that has twenty three gazillion use cases, and
  so we have to keep it a mess, because removing the mess means removing one
  use case, and we can't do that.
 
 So the problem is that GtkTreeView was developed to be as policy-free as
 possible. Now the new widgets are written with more policy, which makes
 the API easier to use, with a simpler implementation.

As I see it there are two fundamental problems with GtkTreeView. 

First of all it has a model-view split, but there is no way to save any
state in the view. For example, if the model has a char *icon_name
column, then every time we draw a row we read the icon name from the
column, set it as a property of the (one and only) pixbuf cell renderer,
and then ask the cell renderer to draw itself. Now, since the cell
renderer has no state it has to start from scratch each time, including
reading the icon from *disk*. A few releases ago we added a cache for
the icon theme, so we're not quite as bad as this any more, but it still
happens if you have more different icons than the size of the cache.

This doesn't only affect pixbufs. For instance if you have a tree with a
lot of (mostly static) text you want to be able to cache pangolayouts
for things like rendering of visible rows.

The lack of a calculated state for the view also makes it hard to
integrate with CSS for theming, as there is no view tree to do matching
against.

Secondly, it duplicates the entire size calculation machinery. Since we
had no other way to make lists people started using TreeView for a lot
of things, including more complex layouts. This led to things like
GtkCellAreaBox which essentially re-creates GtkBox but for cell
renderers only. This kind of duplication is wasteful, hard to maintain
and hard for users to understand.



Re: GTK+ scene graph

2014-08-19 Thread Paul Davis
On Tue, Aug 19, 2014 at 6:52 AM, Emmanuele Bassi eba...@gmail.com wrote:

 hi;

 On 18 August 2014 19:09, Paul Davis p...@linuxaudiosystems.com wrote:

  On Mon, Aug 18, 2014 at 1:55 PM, Emmanuele Bassi eba...@gmail.com
 wrote:
  [ .. ]
 
  realistically, by comparison with other platforms and with many
 home-grown
  GUI toolkits, there's really only one sane design for any toolkit in
 2014:
 
 * recursively nested objects with a common draw virtual method that
  renders onto a surface (*) as the only way pixels get drawn
 * access to a GL surface when and if necessary
 * packing and layout orthogonal to drawing from the application code
  perspective
 
  anything that moves GTK closer to this is good. anything that moves it
  further away or impedes motion towards it is bad.
 
  (*) presumably cairo for 2D and GL for 3D but smart people can disagree
  about this.

 if we ignore the GL side, then good job: GTK+ 3.12 is already pretty
 much that. except, you know:

  * the API being weird
  * the lack of a box model for CSS to draw on
  * the lack of support for 3D transforms and animations
  * the lack of an animation API capable of implementing dynamic layout
 management policies


   * cell renderer API as additional cruft over signal_draw.


Re: GTK+ scene graph, API deprecations

2014-08-19 Thread Piñeiro

On 08/19/2014 03:24 PM, Alexander Larsson wrote:
 On Mon, 2014-08-18 at 18:09 +0200, Sébastien Wilmet wrote:
 On Mon, 2014-08-18 at 10:01 -0400, Jasper St. Pierre wrote:
 Because every time we try to clean up GtkTreeView, we break some random
 application. It's a widget that has twenty three gazillion use cases, and
 so we have to keep it a mess, because removing the mess means removing one
 use case, and we can't do that.
 So the problem is that GtkTreeView was developed to be as policy-free as
 possible. Now the new widgets are written with more policy, which makes
 the API easier to use, with a simpler implementation.
 As I see it there are two fundamental problems with GtkTreeView. 

 First of all it has a model-view split, but there is no way to save any
 state in the view. For example, if the model has a char *icon_name
 column, then every time we draw a row we read the icon name from the
 column, set it as a property of the (one and only) pixbuf cell renderer,
 and then ask the cell renderer to draw itself. Now, since the cell
 renderer has no state it has to start from scratch each time, including
 reading the icon from *disk*. A few releases ago we added a cache for
 the icon theme, so we're not quite as bad as this any more, but it still
 happens if you have more different icons than the size of the cache.

 This doesn't only affect pixbufs. For instance if you have a tree with a
 lot of (mostly static) text you want to be able to cache pangolayouts
 for things like rendering of visible rows.

This also affects the accessibility support: at any point, assistive
technologies may want to get info (text, status, etc.) from a given
cell. This leads to custom cell objects (in theory flyweight objects)
that maintain accessibility-related caches. The accessibility
implementation of GtkTreeView is also really complex and
regression-prone.

BR

-- 

Alejandro Piñeiro
