Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-15 Thread John Kåre Alsaker
On Tue, May 14, 2013 at 9:46 PM, Bill Spitzak spit...@gmail.com wrote:



 John Kåre Alsaker wrote:


 I expect a compositor to render deviations of the desired scaling
 factor without scaling windows. The range when this is allowed is
 reported to clients so they can try to render at a size which will
 avoid scaling.

 For example a compositor may want to use a 1-1.2 range with 1.1 as
 the desired scaling factor. A client which is only able to draw at
 integer scaling factors would round that up to 2 and let the
 compositor downscale it. When the range for which the compositor won't
 scale is sent to clients we can avoid this.


 I don't think a range is necessary. The client can just claim that its
 window is scaled at 1.1 even though it drew it at 1. Or at 2.2 even though
 it drew it at 2. Nothing stops the client from doing this so you might as
 well make that the way integer scales are done.

Then the user drags the window that claims to be drawn at 1.1 over to a
monitor with scaling factor 1. It will be downscaled even though it's a
perfect match for the monitor.



 With the range, what happens to a surface with a scale of 1.3? Is it
 scaled by 1.3? Or should it be 1.3/1.2 times larger than the one scaled at
 1.2, which is actually 1.191666? For this reason I think any scale wanted
 by the client should be obeyed literally.

It will be scaled by client_scaling_factor/output_scaling_factor, which is
1.3/1.1.
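
(Illustrative sketch only, not from any patch in this thread: the arithmetic
above written out. buffer_scale is what the client rendered at and
output_scale is the output's factor; the result is how many buffer pixels
cover one output pixel, so anything other than 1.0 means the compositor
resamples.)

#include <stdio.h>

/* Illustrative only: ratio of buffer pixels to output pixels. */
static double
compositing_scale(double buffer_scale, double output_scale)
{
	return buffer_scale / output_scale;
}

int
main(void)
{
	/* The case above: a buffer rendered for 1.3 shown on a 1.1 output. */
	printf("%.4f\n", compositing_scale(1.3, 1.1));
	return 0;
}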



  We may also allow scaling factors below 1.


 I think scaling factors less than 1 are going to be a requirement.
 Otherwise the units have to be for the lowest-resolution device, which
 seems silly if you have a huge hi-res screen and a small lcd low-res
 display on your keyboard.

Perhaps, but I'd expect clients to do a lot of rounding up, making the
scaling not very linear.


Re: [PATCH 0/2] Support for high DPI outputs via scaling

2013-05-15 Thread Bill Spitzak



John Kåre Alsaker wrote:


For example a compositor may want to use a 1-1.2 range with 1.1 as
the desired scaling factor. A client which is only able to draw at
integer scaling factors would round that up to 2 and let the
compositor downscale it. When the range for which the compositor won't
scale is sent to clients we can avoid this.


I don't think a range is necessary. The client can just claim that
its window is scaled at 1.1 even though it drew it at 1. Or at 2.2
even though it drew it at 2. Nothing stops the client from doing
this so you might as well make that the way integer scales are done.

Then the user drags the window that claims to be drawn at 1.1 over to a 
monitor with scaling factor 1. It will be downscaled even though it's a 
perfect match for the monitor.


That makes sense. I'm still not sold on the range, especially the upper 
end of the range. The only likely scheme I see for choosing a size to 
draw at will ignore the upper range.


What happens in your scale=1.1, range=1,1.2 scenario if the client 
draws at a size of 2? It looks like it will draw at a scale of 1.1/2, 
but it may be better for it to draw at a size of 1/2.



Re: [PATCH] protocol: Add buffer_scale to wl_surface and wl_output

2013-05-15 Thread Pekka Paalanen
On Tue, 14 May 2013 13:43:12 -0700
Bill Spitzak spit...@gmail.com wrote:

 You may also want to allow different horizontal and vertical scales, 
 mostly because all plausible implementations can do this with no loss of 
 speed, and the scaler api allows this. You will need to define if this 
 is before or after the buffer transform...

Only if there are monitors with non-square pixels that we care about.
Otherwise no.
- pq


Re: [PATCH] protocol: Add buffer_scale to wl_surface and wl_output

2013-05-15 Thread Alexander Larsson
On tis, 2013-05-14 at 13:43 -0700, Bill Spitzak wrote:
 al...@redhat.com wrote:
 
  +      </description>
  +      <arg name="scale" type="fixed"/>
  +    </request>
     </interface>
 
 Fixed is not a good idea for scaling factors. You cannot accurately 
 represent values like 2/3 or 1/fixed. For an actual problem with 
 scaling, if the accurate scaling is an odd fixed number, the client 
 cannot specify that their scale is exactly 1/2 that, thus losing the 
 ability to get a 2x scale done by the compositor.
 
 I would specify the scale as two integers defining a rational fraction.
 
 This would also allow completely lossless multiplication with the 
 rational numbers used by the scaler proposal.
 
 You may also want to allow different horizontal and vertical scales, 
 mostly because all plausible implementations can do this with no loss of 
 speed, and the scaler api allows this. You will need to define if this 
 is before or after the buffer transform...

In fact, working on this in weston a bit, it seems that in general the scale
is seldom used by itself; rather it's used to calculate the buffer and
screen size which are then used, and we want both of these to be
integers. So, it seems to me that we should specify scaling by giving
the width/height of the surface, which in combination with the buffer
size gives the exact scaling ratios, plus it guarantees that the scaling
maps integers to integers.
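
(Illustrative sketch only, with made-up names: deriving the exact per-axis
ratio from the surface size and the buffer size, as described above. When
the buffer size is an integer multiple of the surface size, integer surface
coordinates map to integer buffer coordinates.)

#include <stdio.h>

/* Illustrative only: exact per-axis scaling ratio as a rational number,
 * buffer pixels over surface units. */
struct ratio {
	int num;   /* buffer pixels */
	int den;   /* surface units */
};

static int
surface_to_buffer(int surface_coord, struct ratio r)
{
	return surface_coord * r.num / r.den;
}

int
main(void)
{
	/* A 2560x1600 buffer attached to a 1280x800 surface: exact 2/1. */
	struct ratio sx = { 2560, 1280 };
	struct ratio sy = { 1600,  800 };

	printf("x=100 -> %d, y=50 -> %d\n",
	       surface_to_buffer(100, sx), surface_to_buffer(50, sy));
	return 0;
}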




Re: [PATCH] protocol: Add buffer_scale to wl_surface and wl_output

2013-05-15 Thread Pekka Paalanen
On Tue, 14 May 2013 12:26:48 +0200
al...@redhat.com wrote:

 From: Alexander Larsson al...@redhat.com
 
 This adds wl_surface_set_buffer_scale() to set the buffer scale of a
 surface.
 
 It is similar to set_buffer_transform in that the buffer is stored in a
 way that has been transformed (in this case scaled). This means that
 if an output is scaled we can directly use the pre-scaled buffer with
 its additional detail, rather than having to scale it.
 
 It also adds a geometry2 event with a scale member to wl_output that
 specifies the scaling of an output.
 
 This is meant to be used for outputs with a very high DPI to tell the
 client that this particular output has subpixel precision. Coordinates
 in other parts of the protocol, like input events, relative window
 positioning and output positioning are still in the compositor space
 rather than the scaled space. However, input has subpixel precision
 so you can still get input at full resolution.

I think I can understand this paragraph, but could we express it more
clearly?

"Output with sub-pixel precision" could probably use some explanation
about what it is here, like whether it is about sub-pixels in the RGB
pixel parts sense. Can this be used for RGB-sub-pixel things somehow?

"Compositor space" and "scaled space" need to be clearly defined; I am
not sure what they refer to here. The well-known coordinate spaces we
already have are surface (local) coordinates, and global coordinates.
Output coordinates likely too, and buffer coordinates will be
introduced with the surface crop & scale extension [1] to further
differentiate from surface coordinates.
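
(Illustrative sketch only, with hypothetical struct names: how the
coordinate spaces listed above relate in the simple case, ignoring buffer
transforms. A global coordinate is offset by the surface position to get
surface-local coordinates, which are multiplied by the buffer scale to get
buffer coordinates.)

#include <stdio.h>

/* Illustrative only: one surface placed in the global space, with an
 * integer buffer scale relating surface units to buffer pixels. */
struct surface {
	int global_x, global_y;  /* surface origin in global coords */
	int buffer_scale;        /* buffer pixels per surface unit  */
};

static void
global_to_buffer(const struct surface *s, int gx, int gy, int *bx, int *by)
{
	int sx = gx - s->global_x;   /* global  -> surface-local */
	int sy = gy - s->global_y;
	*bx = sx * s->buffer_scale;  /* surface -> buffer        */
	*by = sy * s->buffer_scale;
}

int
main(void)
{
	struct surface s = { 100, 200, 2 };
	int bx, by;

	global_to_buffer(&s, 110, 215, &bx, &by);
	printf("buffer coord: %d,%d\n", bx, by);  /* 20,30 */
	return 0;
}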

 This setup means global properties like mouse acceleration/speed,
 pointer size, monitor geometry, etc can be specified in a mostly
 similar resolution even on a multimonitor setup where some monitors
 are low dpi and some are e.g. retina-class outputs.
 ---
  protocol/wayland.xml | 41 +++--
  1 file changed, 39 insertions(+), 2 deletions(-)
 
 diff --git a/protocol/wayland.xml b/protocol/wayland.xml
 index 3bce022..e5744c7 100644
 --- a/protocol/wayland.xml
 +++ b/protocol/wayland.xml
 @@ -876,7 +876,7 @@
  /event
/interface
  
 -  <interface name="wl_surface" version="2">
 +  <interface name="wl_surface" version="3">

You have to bump also the wl_compositor version. wl_surface is not a
global, and only globals can have their interface version negotiated.
The global that can create wl_surface objects is wl_compositor, and
wl_compositor only.

The version of wl_surface in use will be implied by the negotiated
version of wl_compositor.

(Yes, it is a bit strange perhaps, but that is how it is.)

     <description summary="an onscreen surface">
       A surface is a rectangular area that is displayed on the screen.
       It has a location, size and pixel contents.
  @@ -1110,6 +1110,30 @@
       </description>
       <arg name="transform" type="int"/>
     </request>
 +
 +    <!-- Version 3 additions -->
 +
 +    <request name="set_buffer_scale" since="3">
 +      <description summary="sets the buffer scale">
 +        This request sets an optional scaling factor on how the compositor
 +        interprets the contents of the buffer attached to the surface. A
 +        value larger than 1, e.g. 2, means that the buffer is 2 times the
 +        size of the surface.

..in each dimension. Ok.

 +
 +        Buffer scale is double-buffered state, see wl_surface.commit.
 +
 +        A newly created surface has its buffer scale set to 1.
 +
 +        The purpose of this request is to allow clients to supply higher
 +        resolution buffer data for use on high-resolution outputs where the
 +        output itself has a scaling factor set. For instance, a laptop with a
 +        high DPI internal screen and a low DPI external screen would have two
 +        outputs with different scaling, and a wl_surface rendered on the
 +        scaled output would normally be scaled up. To avoid this upscaling
 +        the app can supply a pre-scaled version with more detail by using
 +        set_buffer_scale.

You could also mention, that it is expected that clients will use
an output's scale property value as the set_buffer_scale argument. Or
at least that is the intended use here.

 +      </description>
 +      <arg name="scale" type="fixed"/>

Are you sure you really want fixed as the type?
Integer scaling factors sounded a lot more straightforward. When we are
dealing with pixel buffers, integers make sense.

Also, I do not buy the argument, that integer scaling factors are not
finely grained enough. If an output device (monitor) has such a hidpi,
and a user wants the default scaling, then we will simply have an
integer scaling factor > 1, for example 2. Clients will correspondingly
somehow see, that the output resolution is small, so they will adapt,
and the final window size will not be doubled all the way unless it
actually fits the output. This happens by the client choosing to draw a
smaller window to begin with, not by scaling, when compared to what it
would do if the default scaling factor was 1. Fractional scaling factors
are simply not needed here, in my opinion.
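
(Illustrative sketch only, assuming the integer scale factors argued for
above: a client could pick the value for the proposed set_buffer_scale
request from the scales of the outputs its surface currently overlaps --
wl_surface.enter/leave tell it which outputs those are -- and size its
buffer accordingly.)

#include <stdio.h>

/* Illustrative only: take the highest scale of the overlapped outputs. */
static int
pick_buffer_scale(const int *output_scales, int n_outputs)
{
	int i, scale = 1;

	for (i = 0; i < n_outputs; i++)
		if (output_scales[i] > scale)
			scale = output_scales[i];
	return scale;
}

int
main(void)
{
	int scales[] = { 1, 2 };     /* low DPI external + high DPI panel */
	int surf_w = 640, surf_h = 480;
	int scale = pick_buffer_scale(scales, 2);

	/* The client would render a scale*surf_w x scale*surf_h buffer and
	 * issue set_buffer_scale(scale) before committing it. */
	printf("scale=%d buffer=%dx%d\n", scale, surf_w * scale, surf_h * scale);
	return 0;
}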

[PATCH 1/2] cms-colord: Fix build after the API change 'Honor XDG_CONFIG_DIRS'

2013-05-15 Thread Richard Hughes
---
 src/cms-colord.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/cms-colord.c b/src/cms-colord.c
index 33f23b2..af6b5fa 100644
--- a/src/cms-colord.c
+++ b/src/cms-colord.c
@@ -478,7 +478,7 @@ colord_cms_output_destroy(gpointer data)
 
 WL_EXPORT int
 module_init(struct weston_compositor *ec,
-   int *argc, char *argv[], const char *config_file)
+   int *argc, char *argv[])
 {
gboolean ret;
GError *error = NULL;
-- 
1.8.2.1



[PATCH 2/2] cms-colord: Warn if reading or writing to the FD failed

2013-05-15 Thread Richard Hughes
This also fixes a compile warning when building the tarball.
---
 src/cms-colord.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/src/cms-colord.c b/src/cms-colord.c
index af6b5fa..6056407 100644
--- a/src/cms-colord.c
+++ b/src/cms-colord.c
@@ -127,6 +127,7 @@ static void
 update_device_with_profile_in_idle(struct cms_output *ocms)
 {
gboolean signal_write = FALSE;
+   ssize_t rc;
	struct cms_colord *cms = ocms->cms;
 
	colord_idle_cancel_for_output(cms, ocms->o);
@@ -139,7 +140,9 @@ update_device_with_profile_in_idle(struct cms_output *ocms)
/* signal we've got updates to do */
if (signal_write) {
gchar tmp = '\0';
-		write(cms->writefd, &tmp, 1);
+		rc = write(cms->writefd, &tmp, 1);
+		if (rc == 0)
+			weston_log("colord: failed to write to pending fd");
}
 }
 
@@ -365,6 +368,7 @@ colord_dispatch_all_pending(int fd, uint32_t mask, void *data)
 {
gchar tmp;
GList *l;
+   ssize_t rc;
struct cms_colord *cms = data;
struct cms_output *ocms;
 
@@ -387,7 +391,9 @@ colord_dispatch_all_pending(int fd, uint32_t mask, void *data)
	g_mutex_unlock(&cms->pending_mutex);
 
/* done */
-	read(cms->readfd, &tmp, 1);
+	rc = read(cms->readfd, &tmp, 1);
+	if (rc == 0)
+		weston_log("colord: failed to read from pending fd");
return 1;
 }
 
-- 
1.8.2.1
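
(Illustrative sketch only, not part of this patch: a helper that retries
the one-byte pipe write on EINTR and reports failure to the caller, which
could then log it the way the patch does.)

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative only: retry the single-byte write if interrupted. */
static int
write_one_byte(int fd, char byte)
{
	ssize_t rc;

	do {
		rc = write(fd, &byte, 1);
	} while (rc < 0 && errno == EINTR);

	return rc == 1 ? 0 : -1;
}

int
main(void)
{
	int fds[2];

	if (pipe(fds) < 0)
		return 1;
	if (write_one_byte(fds[1], '\0') < 0)
		fprintf(stderr, "failed to write to pending fd\n");
	return 0;
}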



[PATCHES weston] Three small series

2013-05-15 Thread Quentin Glidic

Hello happy coders,

Here are a few patches for weston, with some explanations.
They all should be updated from the previous reviews.


http://git.sardemff7.net/wayland/weston/log/?id=master..wip/patches&showmsg=1

These patches are all independent changes to weston:
— weston.pc and layers patches are needed for the external notification
support plugin.
— tests, module path and headless-backend patches are needed for
subsequent options and tests-related patches.

— weston-launch patch is fully independent.

Each patch from this series can be applied on its own without
conflicting with others (to ease the review & commit process).


I hope this first series will hit the tree soon, since the patches are
mostly trivial fixes.



http://git.sardemff7.net/wayland/weston/log/?id=wip/patches..wip/options&showmsg=1

This short second series is about CLI options.
Basically, it makes Weston support all formats (with a space or an equals
sign, and unspaced for short options). It also adds string list support.


Future patches will come to add better parsing to the config parser to
support string lists too.
CLI arguments are not freed; a future patch will fix that: it was
already the case for string args, so string list support is just
mimicking the leak for now.



http://git.sardemff7.net/wayland/weston/log/?id=wip/options..wip/tests&showmsg=1

The short third series is aiming to drop the weston-tests-env script, 
removing the automake 1.11 hack. It is also the first step (with the 
headless-backend patch) to support automated tests without a graphical 
backend (which is needed for nightlies or package manager tests).



Cheers,

--

Quentin “Sardem FF7” Glidic


Re: minimized and stick windows

2013-05-15 Thread Alexander Preisinger
Hello,

I thought a bit about it and would like to present my ideas.
I mainly thought about it from the shell/compositor side, where I'd like to
minimize and maximize surfaces from keybindings, like in some window managers.

For example, the client can still request minimize, maximize, fullscreen and
toplevel actions, but now the compositor responds with a state_update
event.
The compositor can also send this state_update when it wants to change
the window on its own (like from a task bar or compositor key
bindings).
The client can then save the state and act accordingly (like hiding some
menus if maximized or fullscreen).

diff --git a/protocol/wayland.xml b/protocol/wayland.xml
index 3bce022..e0f2c4a 100644
--- a/protocol/wayland.xml
+++ b/protocol/wayland.xml
@@ -811,6 +811,14 @@
       <arg name="output" type="object" interface="wl_output"
           allow-null="true"/>
     </request>

+    <request name="set_minimized">
+      <description summary="minimize the surface">
+        Minimize the surface.
+
+        The compositor responds with a state_update event.
+      </description>
+    </request>
+
     <request name="set_title">
       <description summary="set surface title">
        Set a short title for the surface.
@@ -867,6 +875,30 @@
       <arg name="height" type="int"/>
     </event>

+    <enum name="state">
+      <description summary="different states for a surface">
+      </description>
+      <entry name="toplevel" value="1" summary="surface is neither maximized, minimized nor fullscreen"/>
+      <entry name="maximized" value="2" summary="surface is maximized"/>
+      <entry name="minimized" value="3" summary="surface is minimized"/>
+      <entry name="fullscreen" value="4" summary="surface is fullscreen"/>
+    </enum>
+
+    <event name="state_update">
+      <description summary="update surface state">
+        Tells the surface which state it has on the output.
+
+        This event is sent in response to a set_maximized, set_minimized or
+        set_fullscreen request to acknowledge the request. The client can
+        update its own state if it wants to keep track of it.
+
+        The compositor also sends this event if it wants the surface
+        minimized or maximized, for example after clicking on a task list
+        item or via compositor key bindings for fullscreen.
+      </description>
+      <arg name="state" type="uint" summary="new surface state"/>
+    </event>
+
     <event name="popup_done">
       <description summary="popup interaction is done">
        The popup_done event is sent out when a popup grab is broken,


I don't know about multiple window applications and maybe missed some other
use cases, but I hope this isn't too wrong of an idea. At least this should
hopefully not break the protocol too much.
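
(Illustrative sketch only, against the proposal above; the window struct and
handler plumbing are hypothetical. It shows how a client might record the
proposed state_update event and react to it.)

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: enum values mirror the proposed protocol above. */
enum surface_state {
	STATE_TOPLEVEL = 1,
	STATE_MAXIMIZED = 2,
	STATE_MINIMIZED = 3,
	STATE_FULLSCREEN = 4,
};

struct window {
	enum surface_state state;   /* hypothetical client-side bookkeeping */
};

static void
handle_state_update(struct window *win, uint32_t state)
{
	win->state = (enum surface_state)state;

	switch (win->state) {
	case STATE_MAXIMIZED:
	case STATE_FULLSCREEN:
		/* e.g. hide menus/decorations and redraw at the new size */
		break;
	case STATE_MINIMIZED:
		/* e.g. also hide related utility windows */
		break;
	default:
		break;
	}
}

int
main(void)
{
	struct window win = { STATE_TOPLEVEL };

	handle_state_update(&win, STATE_MAXIMIZED);
	printf("state=%d\n", win.state);
	return 0;
}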


Best Regards,


Alexander Preisinger


2013/5/14 Kristian Høgsberg k...@bitplanet.net

 On Tue, May 14, 2013 at 2:30 AM, Pekka Paalanen ppaala...@gmail.com
 wrote:
  On Mon, 13 May 2013 17:26:28 -0500
  Jason Ekstrand ja...@jlekstrand.net wrote:
 
  On Mon, May 13, 2013 at 4:14 PM, Rafael Antognolli 
 antogno...@gmail.comwrote:
 
   Hi Jason,
  
   On Wed, May 8, 2013 at 9:26 PM, Jason Ekstrand ja...@jlekstrand.net
   wrote:
Hi Rafael,
   
   
On Wed, May 8, 2013 at 6:04 PM, Rafael Antognolli 
 antogno...@gmail.com
wrote:
   
Hello,
   
I've been looking the Weston code relative to maximized windows,
 and
it seems that the respective code for minimized windows wouldn't be
hard to implement.
   
The questions are: are there any plans to add it? Is there someone
already working on it? If not, would it be OK if I start submitting
patches to try to add support for this?
   
   
A month or two ago, Scott Morreau was working on it.  However, his
 work
never made into weston for a variety of reasons.  Personally, I'm
 glad to
see someone interested in working on it again because it's
 something that
wayland will need eventually.
   
The place to start on it is probably with the following e-mail and
 the
   long
string of replies:
   
   
  
 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007814.html
   
There was quite a bit of discussion about how to handle it from a
   protocol
level, but Scott never made an actual version 2.  I'd suggest you
 start
   by
reading the chain of e-mails (it goes into April, not just March).
  There
were quite a few suggestions in there that could be incorporated.
Hopefully, you can pick through the e-mail discussion and figure
 out what
the consensus was.  It'd be good to have a pair of fresh eyes look
 at it.
  
   Thanks for pointing that out. I just went through the chain of
   e-mails, but I don't think there was a consensus there.
  
   It also seems that the minimize implementation is a little more
   complex than just hiding surfaces and marking some flags. Which makes
   me not so comfortable doing an implementation without a consensus
   about what should be implemented, and with some orientation.
  
   That said, I'm not sure I'm really going to take this task.
  
 
  I didn't 

Re: minimized and stick windows

2013-05-15 Thread Pekka Paalanen
On Wed, 15 May 2013 14:20:21 +0200
Alexander Preisinger alexander.preisin...@gmail.com wrote:

 Hello,
 
 I thought a bit about it and would like to present my ideas.
 I mainly thought about it from the shell/compositor side, where I'd like to
 minimize and maximize surfaces from keybindings, like in some window managers.
 
 For example, the client can still request minimize, maximize, fullscreen and
 toplevel actions, but now the compositor responds with a state_update
 event.
 The compositor can also send this state_update when it wants to change
 the window on its own (like from a task bar or compositor key
 bindings).
 The client can then save the state and act accordingly (like hiding some
 menus if maximized or fullscreen).
 
 diff --git a/protocol/wayland.xml b/protocol/wayland.xml
 index 3bce022..e0f2c4a 100644
 --- a/protocol/wayland.xml
 +++ b/protocol/wayland.xml
 @@ -811,6 +811,14 @@
        <arg name="output" type="object" interface="wl_output"
            allow-null="true"/>
      </request>
 
 +    <request name="set_minimized">
 +      <description summary="minimize the surface">
 +        Minimize the surface.
 +
 +        The compositor responds with a state_update event.
 +      </description>
 +    </request>
 +
      <request name="set_title">
        <description summary="set surface title">
         Set a short title for the surface.
 @@ -867,6 +875,30 @@
        <arg name="height" type="int"/>
      </event>
 
 +    <enum name="state">
 +      <description summary="different states for a surface">
 +      </description>
 +      <entry name="toplevel" value="1" summary="surface is neither maximized, minimized nor fullscreen"/>
 +      <entry name="maximized" value="2" summary="surface is maximized"/>
 +      <entry name="minimized" value="3" summary="surface is minimized"/>
 +      <entry name="fullscreen" value="4" summary="surface is fullscreen"/>
 +    </enum>
 +
 +    <event name="state_update">
 +      <description summary="update surface state">
 +        Tells the surface which state it has on the output.
 +
 +        This event is sent in response to a set_maximized, set_minimized or
 +        set_fullscreen request to acknowledge the request. The client can
 +        update its own state if it wants to keep track of it.
 +
 +        The compositor also sends this event if it wants the surface
 +        minimized or maximized, for example after clicking on a task list
 +        item or via compositor key bindings for fullscreen.
 +      </description>
 +      <arg name="state" type="uint" summary="new surface state"/>
 +    </event>
 +
      <event name="popup_done">
        <description summary="popup interaction is done">
         The popup_done event is sent out when a popup grab is broken,
 
 
 I don't know about multiple window applications and maybe missed some other
 use cases, but I hope this isn't too wrong of an idea. At least this should
 hopefully not break the protocol too much.

If I understood right, here you have the client asking the compositor
for permission, and then the compositor orders the client to be in a
certain state and will compose it as such, regardless of what the client
actually draws.

This won't work; fixing the races it causes will complicate the
protocol and cause roundtrips.

The client draws its window, hence the client is in charge of how it
looks, and the compositor cannot force that.

Hence, it must be the compositor proposing to the client that it should
e.g. maximize. If the client does that at some point, perhaps first
sending a few new frames since it was animating, the client will tell
the compositor it will now go maximized, and then the very next frame
it draws will be maximized. This avoids flicker.
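
(Illustrative sketch only, not real protocol: the ordering described above
from the client's side. The compositor merely proposes maximize; the surface
is shown maximized only from the first buffer the client commits after it has
resized and acknowledged, so nothing flickers.)

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: client-side phases of the proposed handshake. */
enum phase { RUNNING, MAXIMIZE_REQUESTED, ACKED_AND_REDRAWN };

static enum phase
step(enum phase p, bool compositor_asked_maximize, bool redrew_at_new_size)
{
	switch (p) {
	case RUNNING:
		return compositor_asked_maximize ? MAXIMIZE_REQUESTED : RUNNING;
	case MAXIMIZE_REQUESTED:
		/* the client may still commit a few frames at the old size
		 * (e.g. to finish an animation) before it resizes */
		return redrew_at_new_size ? ACKED_AND_REDRAWN
					  : MAXIMIZE_REQUESTED;
	default:
		/* set_maximized sent, new-size buffer committed: the very
		 * next frame the compositor shows is already maximized */
		return ACKED_AND_REDRAWN;
	}
}

int
main(void)
{
	enum phase p = RUNNING;

	p = step(p, true, false);   /* compositor proposes maximize        */
	p = step(p, false, false);  /* client finishes an animation frame  */
	p = step(p, false, true);   /* client redraws at the new size      */
	printf("phase=%d\n", p);
	return 0;
}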

Minimize is a little special, since the client does not need to react
specially for it to look right. For everything else it will need to.
Actually, if you think about a multi-window application, minimize needs
to work the same way, so that the application can hide all relevant
windows (but maybe not *all* windows).


Deja vu,
pq


Re: [PATCH] protocol: Add buffer_scale to wl_surface and wl_output

2013-05-15 Thread Alexander Larsson
On ons, 2013-05-15 at 11:13 +0300, Pekka Paalanen wrote:
 On Tue, 14 May 2013 12:26:48 +0200
 al...@redhat.com wrote:

Lots of good stuff snipped. I'll try to fix things up based on that.
Some responses below.

  +      </description>
  +      <arg name="scale" type="fixed"/>
 
 Are you sure you really want fixed as the type?
 Integer scaling factors sounded a lot more straightforward. When we are
 dealing with pixel buffers, integers make sense.
 
 Also, I do not buy the argument, that integer scaling factors are not
 finely grained enough. If an output device (monitor) has such a hidpi,
 and a user wants the default scaling, then we will simply have an
 integer scaling factor > 1, for example 2. Clients will correspondingly
 somehow see, that the output resolution is small, so they will adapt,
 and the final window size will not be doubled all the way unless it
 actually fits the output. This happens by the client choosing to draw a
 smaller window to begin with, not by scaling, when compared to what it
 would do if the default scaling factor was 1. Fractional scaling factors
 are simply not needed here, in my opinion.

I agree that fixed is a poor choice here. The alternative is to always
use an int scaling factor, or allow the client to separately specify the
surface size and the buffer size. Both of these guarantee that both
buffer and surface are integers, which I agree with you that they have
to be. Of course, the latter means that the actual scaling factor differs
slightly from window to window for fractional scaling due to rounding.

Having started a bit on the implementation in gtk+ and weston it seems
that allowing fractional scales increases the implementation complexity
quite a bit. For instance, having widgets end on non-whole-integer
positions makes clipping and dirty region tracking harder. Another
example is that damage regions on a buffer need not correspond to an
integer region in global coordinates (or vice versa if we define damage
to be in surface coordinates).
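
(Illustrative sketch only: converting a damage rectangle from surface units
to buffer pixels for a scale expressed as num/den. With an integer scale the
mapping is exact; with a fractional one the rectangle has to be rounded
outwards, which is part of the complexity mentioned above.)

#include <stdio.h>

struct rect { int x, y, w, h; };

/* Illustrative only: scale = num/den buffer pixels per surface unit;
 * the rectangle is rounded outwards so no damaged pixel is missed. */
static struct rect
surface_damage_to_buffer(struct rect s, int num, int den)
{
	struct rect b;

	b.x = s.x * num / den;                              /* floor left  */
	b.y = s.y * num / den;
	b.w = ((s.x + s.w) * num + den - 1) / den - b.x;    /* ceil right  */
	b.h = ((s.y + s.h) * num + den - 1) / den - b.y;
	return b;
}

int
main(void)
{
	struct rect d = { 3, 3, 10, 10 };
	struct rect a = surface_damage_to_buffer(d, 2, 1);  /* scale 2: exact */
	struct rect b = surface_damage_to_buffer(d, 3, 2);  /* scale 1.5: grows */

	printf("%d,%d %dx%d  /  %d,%d %dx%d\n",
	       a.x, a.y, a.w, a.h, b.x, b.y, b.w, b.h);
	return 0;
}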

On the other hand, it seems that a few OSX users seem to want to use
fractional scaling (in particular the 1.5 scaling from 2880x1800 to
1920x1200 seems very popular even if it's not as nice looking as the 2x
one), so there seems to be a demand for it.

I'm more and more liking the way OSX solves this, i.e. only allow and
expose integer scaling factors in the APIs, but then do fractional
downscaling in the compositor (i.e. say the output is 1920x1200 in
global coords with a scaling factor of two, but actually render this by
scaling the user-supplied buffer by 0.75). It keeps the implementation
and APIs very simple, it does the right thing for the nice case of 2x
scaling and it allows the fractional scaling.
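
(Illustrative sketch only: the OS X-style split described above. The client
only ever sees an integer scale, and the compositor derives the possibly
fractional factor it applies when putting the buffer on the real panel.)

#include <stdio.h>

/* Illustrative only: what an output would advertise vs. what the panel is. */
struct output {
	int phys_w, phys_h;       /* real panel pixels        */
	int logical_w, logical_h; /* advertised global coords */
	int scale;                /* advertised integer scale */
};

int
main(void)
{
	/* 2880x1800 panel advertised as 1920x1200 at scale 2: a client
	 * renders a 2x buffer and the compositor scales it by 0.75. */
	struct output o = { 2880, 1800, 1920, 1200, 2 };
	double factor = (double)o.phys_w / (o.logical_w * o.scale);

	printf("compositor scales client buffers by %.2f\n", factor);
	return 0;
}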

 Can we have any use for scales less than one?

I don't think so.

 Also, one issue raised was that if an output has a scaling factor A,
 and a buffer has a scaling factor B, then final scaling factor is
 rational. To me that is a non-issue. It can only occur for a
 misbehaving client, in which case it gets what it deserves, or in a
 multi-output case of one surface spanning several non-identical
 monitors. I think the latter case is not worth caring about.
 Non-identical monitors are not identical, and you get what you happen
 to get when you use a single buffer to composite to both.

Yeah, I don't think this is really a practical problem. It'll look
somewhat fuzzy in some contrived cases.


 The important thing is to make all client-visible coordinate systems
 consistent and logical.

Yeah, I'll try to use these names in the docs and be more clear which
coordinate spaces different requests/events work in.

 And now the questions.
 
 If an output has a scaling factor f, what does the wl_output
 report as the output's current and supported video modes?

I believe it should report the resolution in global coordinates.
Although we should maybe extend the mode with scale information. This is
the right thing to do in terms of backwards compatibility, but it is
also useful for e.g. implementing the fractional scaling. So, a
2880x1800 panel would report that there exists a 1920x1200@2x mode,
which wouldn't be possible if we had to report the size in output
coordinates.

It also seems right from a user perspective. The list of resolutions
would be 2880x1800, 1920x1200, 1440x900, which is to a first degree
what users will experience with these modes. Furthermore, this will
allow us to expose fake modes for lower resolutions that maybe some
LCD panels don't support (or do with a bad looking scaling), which some
games may want.
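
(Illustrative sketch only: the arithmetic behind such a mode list, dividing
the panel's native resolution by a few candidate factors.)

#include <stdio.h>

int
main(void)
{
	/* Illustrative only: a 2880x1800 panel exposed at a few scalings. */
	const double factors[] = { 1.0, 1.5, 2.0 };
	int i;

	for (i = 0; i < 3; i++)
		printf("%dx%d (factor %.1f)\n",
		       (int)(2880 / factors[i]), (int)(1800 / factors[i]),
		       factors[i]);
	return 0;
}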

 What about x,y in the wl_output.geometry event (which I think are just a
 global coordinate space leak that should not be there)?

Yeah, this is in global coords, and seems like a leak.

 The video modes are important because of the
 wl_shell_surface.set_fullscreen with method DRIVER. A fullscreen
 surface with method DRIVER implies, that the client wants the
 compositor to change the video mode to match this surface. 

Re: [PATCH] protocol: Add buffer_scale to wl_surface and wl_output

2013-05-15 Thread Alex Deucher
On Wed, May 15, 2013 at 9:11 AM, Alexander Larsson al...@redhat.com wrote:
 On ons, 2013-05-15 at 11:13 +0300, Pekka Paalanen wrote:
 On Tue, 14 May 2013 12:26:48 +0200
 al...@redhat.com wrote:

 Lots of good stuff snipped. I'll try to fix things up based on that.
 Some responses below.

  +  /description
  +  arg name=scale type=fixed/

 Are you sure you really want fixed as the type?
 Integer scaling factors sounded a lot more straightforward. When we are
 dealing with pixel buffers, integers make sense.

 Also, I do not buy the argument, that integer scaling factors are not
 finely grained enough. If an output device (monitor) has such a hidpi,
 and a user wants the default scaling, then we will simply have an
 integer scaling factor 1, for example 2. Clients will correspondingly
 somehow see, that the output resolution is small, so they will adapt,
 and the final window size will not be doubled all the way unless it
 actually fits the output. This happens by the client choosing to draw a
 smaller window to begin with, not by scaling, when compared to what it
 would do if the default scaling factor was 1. Fractional scaling factors
 are simply not needed here, in my opinion.

 I agree that fixed is a poor choice here. The alternative is to always
 use an int scaling factor, or allow the client to separately specify the
 surface size and the buffer size. Both of these guarantee that both
 buffer and surface are integers, which I agree with you that they have
 to be. Of course, the later means that the actual scaling factor differs
 slightly from window to window for fractional scaling due to rounding.

 Having started a bit on the implementation in gtk+ and weston it seems
 that allowing fractional scales increases the implementation complexity
 quite a bit. For instance, having widgets end on non-whole-integer
 positions makes clipping and dirty region tracking harder. Another
 example is that damage regions on a buffer need not correspond to a
 integer region in global coordinates (or vice versa if we define damage
 to be in surface coordinates).

 On the other hand, It seems that a few OSX users seem to want to use
 fractional scaling (in particular the 1.5 scaling from 2880x1800 to
 1920x1200 seems very popular even if its not as nice looking as the 2x
 one), so there seems to be a demand for it.

 I'm more and more likeing the way OSX solves this, i.e. only allow and
 expose integer scaling factors in the APIs, but then do fractional
 downscaling in the compositor (i.e. say the output is 1920x1200 in
 global coords with a scaling factor of two, but actually render this by
 scaling the user-supplied buffer by 0.75). It keeps the implementation
 and APIs very simple, it does the right thing for the nice case of 2x
 scaling and it allows the fractional scaling.

 Can we have any use for scales less than one?

 I don't think so.

 Also, one issue raised was that if an output has a scaling factor A,
 and a buffer has a scaling factor B, then final scaling factor is
 rational. To me that is a non-issue. It can only occur for a
 misbehaving client, in which case it gets what it deserves, or in a
 multi-output case of one surface spanning several non-identical
 monitors. I think the latter case it not worth caring about.
 Non-identical monitors are not identical, and you get what you happen
 to get when you use a single buffer to composite to both.

 Yeah, i don't think this is really a practical problem. It'll look
 somewhat fuzzy in some construed cases.


 The important thing is to make all client-visible coordinate systems
 consistent and logical.

 Yeah, i'll try to use these names in the docs and be more clear which
 coordinate spaces different requests/events work in.

 And now the questions.

 If an output has a scaling factor f, what does the wl_output
 report as the output's current and supported video modes?

 I believe it should report the resolution in global coordinates.
 Although we should maybe extend the mode with scale information. This is
 the right thing to do in terms of backwards compatibility, but it is
 also useful for e.g. implementing the fractional scaling. So, a
 2880x1800 panel would report that there exists a 1920x1200@2x mode,
 which wouldn't be possible if we had to report the size in output
 coordinates.

 It also seems right from a user perspective. The list of resolutions
 would be 2880x1800, 1920x1200, 1400x900, which is to a first degree
 what users will experience with these modes. Furthermore, this will
 allow us to expose fake modes for lower resolutions that maybe some
 LCD panels don't support (or do with a bad looking scaling), which some
 games may want.

Just a note that a lot of drivers already expose fake scaled modes
using the scalers in the display hardware for fixed mode panels so
you'll have to differentiate whether you want the display hardware or
wayland to do the scaling.

Alex


 What about x,y in the wl_output.geometry event (which I think are 

[PATCH] wl_shell: Add surface state changed event

2013-05-15 Thread Mikko Levonmaa
This allows the shell to inform the surface that it has changed
state; the currently supported states are default, minimized, maximized
and fullscreen. The shell implementation is free to interpret the
meaning of each state, e.g. minimized might not always mean
that the surface is fully hidden.

Signed-off-by: Mikko Levonmaa mikko.levon...@lge.com
---
 protocol/wayland.xml |   15 +++
 1 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/protocol/wayland.xml b/protocol/wayland.xml
index 3bce022..ee7d32d 100644
--- a/protocol/wayland.xml
+++ b/protocol/wayland.xml
@@ -874,6 +874,21 @@
        to the client owning the popup surface.
       </description>
     </event>
+
+    <enum name="state">
+      <entry name="default" value="0"/>
+      <entry name="minimized" value="1"/>
+      <entry name="maximized" value="2"/>
+      <entry name="fullscreen" value="4"/>
+    </enum>
+
+    <event name="state_changed">
+      <description summary="The surface state was changed">
+        The compositor or the user has taken action that has resulted in
+        this surface changing state.
+      </description>
+      <arg name="state" type="uint"/>
+    </event>
   </interface>
 
   <interface name="wl_surface" version="2">
-- 
1.7.4.1



Re: [PATCH 1/2] cms-colord: Fix build after the API change 'Honor XDG_CONFIG_DIRS'

2013-05-15 Thread Kristian Høgsberg
On Wed, May 15, 2013 at 09:17:37AM +0100, Richard Hughes wrote:
 ---
  src/cms-colord.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

Thanks, this and 2/2 applied.

Kristian

 diff --git a/src/cms-colord.c b/src/cms-colord.c
 index 33f23b2..af6b5fa 100644
 --- a/src/cms-colord.c
 +++ b/src/cms-colord.c
 @@ -478,7 +478,7 @@ colord_cms_output_destroy(gpointer data)
  
  WL_EXPORT int
  module_init(struct weston_compositor *ec,
 - int *argc, char *argv[], const char *config_file)
 + int *argc, char *argv[])
  {
   gboolean ret;
   GError *error = NULL;
 -- 
 1.8.2.1
 


Re: [PATCH weston] Fix missing corner resize cursors in Kubuntu (oxy-white theme)

2013-05-15 Thread Kristian Høgsberg
On Mon, May 13, 2013 at 11:51:11PM -0700, Dima Ryazanov wrote:
 Looks like that theme uses different names. Also, add the corresponding
 horizontal and vertical resize cursors, just for consistency.
 ---
  clients/window.c | 24 
  1 file changed, 16 insertions(+), 8 deletions(-)

Thanks, applied.

Kristian

 diff --git a/clients/window.c b/clients/window.c
 index 1562957..06ef453 100644
 --- a/clients/window.c
 +++ b/clients/window.c
 @@ -1094,17 +1094,20 @@ shm_surface_create(struct display *display, struct wl_surface *wl_surface,
  
  static const char *bottom_left_corners[] = {
  	"bottom_left_corner",
 -	"sw-resize"
 +	"sw-resize",
 +	"size_bdiag"
  };
  
  static const char *bottom_right_corners[] = {
  	"bottom_right_corner",
 -	"se-resize"
 +	"se-resize",
 +	"size_fdiag"
  };
  
  static const char *bottom_sides[] = {
  	"bottom_side",
 -	"s-resize"
 +	"s-resize",
 +	"size_ver"
  };
  
  static const char *grabbings[] = {
 @@ -1122,27 +1125,32 @@ static const char *left_ptrs[] = {
  
  static const char *left_sides[] = {
  	"left_side",
 -	"w-resize"
 +	"w-resize",
 +	"size_hor"
  };
  
  static const char *right_sides[] = {
  	"right_side",
 -	"e-resize"
 +	"e-resize",
 +	"size_hor"
  };
  
  static const char *top_left_corners[] = {
  	"top_left_corner",
 -	"nw-resize"
 +	"nw-resize",
 +	"size_fdiag"
  };
  
  static const char *top_right_corners[] = {
  	"top_right_corner",
 -	"ne-resize"
 +	"ne-resize",
 +	"size_bdiag"
  };
  
  static const char *top_sides[] = {
  	"top_side",
 -	"n-resize"
 +	"n-resize",
 +	"size_ver"
  };
  
  static const char *xterms[] = {
 -- 
 1.8.1.2
 


Re: minimized and stick windows

2013-05-15 Thread Kristian Høgsberg
On Mon, May 13, 2013 at 06:14:46PM -0300, Rafael Antognolli wrote:
 Hi Jason,
 
 On Wed, May 8, 2013 at 9:26 PM, Jason Ekstrand ja...@jlekstrand.net wrote:
  Hi Rafael,
 
 
  On Wed, May 8, 2013 at 6:04 PM, Rafael Antognolli antogno...@gmail.com
  wrote:
 
  Hello,
 
  I've been looking the Weston code relative to maximized windows, and
  it seems that the respective code for minimized windows wouldn't be
  hard to implement.
 
  The questions are: are there any plans to add it? Is there someone
  already working on it? If not, would it be OK if I start submitting
  patches to try to add support for this?
 
 
  A month or two ago, Scott Morreau was working on it.  However, his work
  never made into weston for a variety of reasons.  Personally, I'm glad to
  see someone interested in working on it again because it's something that
  wayland will need eventually.
 
  The place to start on it is probably with the following e-mail and the long
  string of replies:
 
  http://lists.freedesktop.org/archives/wayland-devel/2013-March/007814.html
 
  There was quite a bit of discussion about how to handle it from a protocol
  level, but Scott never made an actual version 2.  I'd suggest you start by
  reading the chain of e-mails (it goes into April, not just March).  There
  were quite a few suggestions in there that could be incorporated.
  Hopefully, you can pick through the e-mail discussion and figure out what
  the consensus was.  It'd be good to have a pair of fresh eyes look at it.
 
 Thanks for pointing that out. I just went through the chain of
 e-mails, but I don't think there was a consensus there.
 
 It also seems that the minimize implementation is a little more
 complex than just hiding surfaces and marking some flags. Which makes
 me not so comfortable doing an implementation without a consensus
 about what should be implemented, and with some orientation.
 
 That said, I'm not sure I'm really going to take this task.

I agree that the thread is a little daunting and gets
political/personal towards the end.  But between Pekka, Jason and Bill
I see consensus and I'll try to summarize here:

 - The server needs to be able to initiate state changes, but the
   client is in control.  The server requests a state change but the
   client ultimately has to set the new state and provide a new
   buffer.  In case of maximize and unmaximize, the client has to
   provide a new buffer before the state can change and in case of
   minimize, the client may want to hide other windows as well as it
   minimizes.

 - Discussion about whether states are orthogonal flags that the
   client sets or if the client just sets the current state.  The
   distinction is whether the compositor knows the full set of states
   or only the effective state.  For example, if you maximize a window
   and then minimize it, does the compositor know that it's maximized
   and minimized or only that it's currently minimized?  I think the
   compositor needs to know all the state, so that it's possible to
   implement something like unmaximize while minimized.

   There's a catch: the current model (set_toplevel, set_fullscreen
   and set_maximized) doesn't work this way, these requests always set
   the current state, not a flag.  I think we can fit those into the
   new mechanism: set_toplevel clears all states, set_maximized sets
   maximized and clears fullscreen, and set_fullscreen sets fullscreen.

 - Enum vs set_minimized.  Do we add an enum with states and a
   set(state) request or do we add set_minimized etc?  We only lack
   set_minimized currently, but we also need events to let the
   compositor initiate state changes, so we would have to add
   request_maximized/minimized events as well as
   request_unmaximized/unminimized.  If we add an enum of states
   instead, we can add set and clear requests and request_set and
   request_clear events.

   Using an enum also lets us add sticky and always-on-top as enum
   values.

 - Visibility and frame events during minimized is orthogonal and up
   to the compositor.  The compositor can keep sending frame events at
   the full frame rate or throttle the application down to a few
   frames per second for example.  But the compositor can do that at
   any time, for example if the window is fully obscured by an opaque
   surface, there's really no interaction with being minimized.

 - Stacking is an orthogonal issue.  Currently clients can't assume
   anything about their stacking order relative to other clients, so a
   compositor is free to unminimize surfaces to anywhere in the stack.

 - We've also talked about a request_close event that the compositor
   can use to ask a client to close its window.  This useful for
   closing from a window list or from something like the GNOME Shell
   overview.  I think this is straight forward, though not directly
   related to the state stuff here.

If we turn this into protocol, I think it will look something like this:

  interface 

Re: [PATCH] wl_shell: Add surface state changed event

2013-05-15 Thread Mikko Levonmaa
On Wed, May 15, 2013 at 12:12:43PM -0500, Jason Ekstrand wrote:
 On Wed, May 15, 2013 at 9:39 AM, Mikko Levonmaa mikko.levon...@gmail.com
 wrote:
 
 This allows the shell to inform the surface that it has changed
 state, current supported states are default, minimized, maximized
 and fullscreen. The shell implementation is free to interpret the
 meaning for the state, i.e. the minimized might not always mean
 that the surface is fully hidden for example.
 
 
 We cannot simply have the shell telling clients it changed their state.  The
 clients need to be in control of the state of each surface.  This is because
 minimizing a client (for example) might not be as simple as hiding a specific
 window.  Only the client can actually know how to minimize/maximize it.

Hmm... not sure I fully understand nor agree (perhaps lack of
understanding;). So to me it seems that the compositor should be the
driver, not the passenger, i.e. it knows how to animate the surface when
it gets minimized and when maximized. How would the client know this?
Also wouldn't this imply more knowledge on the toolkits side as well?

 Please read earlier min/max discussions or yesterday's IRC logs for more
 details.

Neato, seems to be a hot topic, good to see someone else looking into
this as well. I read through the email and pq's comments about avoiding flicker
make sense, so having the compositor and the client be in sync about what's
going on is needed. Also naturally the client can be the originator, so
clearly a request is needed. However in some cases the request might not be
honored by the compositor, especially in an embedded environment. And
actually the compositor might also only show windows in a certain
state, i.e. fullscreen, so having the client able to decline a request
might not be good either.

  
 
 Signed-off-by: Mikko Levonmaa mikko.levon...@lge.com
 ---
  protocol/wayland.xml |   15 +++
  1 files changed, 15 insertions(+), 0 deletions(-)
 
 diff --git a/protocol/wayland.xml b/protocol/wayland.xml
 index 3bce022..ee7d32d 100644
 --- a/protocol/wayland.xml
 +++ b/protocol/wayland.xml
 @@ -874,6 +874,21 @@
 to the client owning the popup surface.
/description
  /event
 +
 +enum name=state
 +  entry name=default value=0/
 +  entry name=minimized value=1/
 +  entry name=maximized value=2/
 +  entry name=fullscreen value=4/
 +/enum
 +
 +event name=state_changed
 +  description summary=The surface state was changed
 +The compositor or the user has taken action that has resulted in
 +this surface to change state.
 +  /description
 +  arg name=state type=uint/
 +/event
/interface
 
interface name=wl_surface version=2
 --
 1.7.4.1
 


Re: [PATCH] protocol: Add buffer_scale to wl_surface and wl_output

2013-05-15 Thread Bill Spitzak

Alexander Larsson wrote:


In fact, working on this in weston a bit it seems that in general, scale
is seldom used by itself but rather its used to calculate the buffer and
screen size which are then used, and we want both of these to be
integers. So, it seems to me that we should specify scaling by giving
the width/heigh of the surface, which in combination with the buffer
size gives the exact scaling ratios, plus it guarantees that the scaling
maps integers to integers.


This now sounds exactly like the scaler api.

What is really happening is that the hi-dpi scheme proposed is providing 
a denominator to the x,y, and size of the output rectangle provided to 
the scaler api, so that it can now be fractions. It would work perfectly 
well to move that denominator into the scaler api.


Except that events are reported as though the positions are multiplied 
by the denominator. I think this is a mistake and events should be 
reported in the input space, but this strangeness can be worked around 
pretty easily.



Re: [PATCH] wl_shell: Add surface state changed event

2013-05-15 Thread Jason Ekstrand
On Wed, May 15, 2013 at 1:37 PM, Mikko Levonmaa mikko.levon...@gmail.comwrote:

 On Wed, May 15, 2013 at 12:12:43PM -0500, Jason Ekstrand wrote:
  On Wed, May 15, 2013 at 9:39 AM, Mikko Levonmaa 
 mikko.levon...@gmail.com
  wrote:
 
  This allows the shell to inform the surface that it has changed
  state, current supported states are default, minimized, maximized
  and fullscreen. The shell implementation is free to interpret the
  meaning for the state, i.e. the minimized might not always mean
  that the surface is fully hidden for example.
 
 
  We cannot simply have the shell telling clients it changed their state.
  The
  clients need to be in control of the state of each surface.  This is
 because
  minimizing a client (for example) might not be as simple as hiding a
 specific
  window.  Only the client can actually know how to minimize/maximize it.

 Hmm... not sure I fully understand nor agree (perhaps lack of
 understanding;). So to me it seems that the compositor should be the
 driver, not the passenger, i.e. it know how to animate the surface when
 it gets minimized and when maximied. How would the client know this?
 Also wouldn't this imply more knowledge on the toolkits side as well?


The clients don't need to know any of that.  The client tells the
compositor "minimize this surface" and the compositor animates it.  Sure,
the compositor has to know what's going on, but that doesn't mean it needs
to be the driver.  Also, the compositor doesn't know what all else needs to
happen. For instance, let's say that gimp wants to run in multi-window mode
most of the time but destroy the dialogues and go to single-window mode
when you maximize it.  How would the compositor know what windows need to
go where?  Only the app can know that.

There are a lot of other possible scenarios and the compositor can't know
what to do there.



  Please read earlier min/max discussions or yesterday's IRC logs for more
  details.

 Neato, seems to be a hot topic, good to see someone else looking into
 this as well. I read through the email and pq's commmets about avoiding
 flicker
 make sense, so having the compositor and the client be in sync about whats
 going on is needed. Also naturally the client can be the originator, so
 clearly a request is needed. However in some cases the request might not be
 honored by the compositor, especially in an embedded environment. And
 actually also the compositor might only show window only in certain
 state, i.e. fullscreen so having the client full to decline a request
 might not be good either.


Clients can *always* misbehave.  Suppose a client gives the wrong size
surface when it goes maximized.  Or that it doesn't get rid of its window
shadows around the sides.  Since the client is drawing itself, it can
always misbehave.  Yes, there are cases where the compositor would want to
run everything fullscreen.  However, those wouldn't be desktop
compositors.  A fullscreen-only compositor would probably use a different
interface than wl_shell.  No one has really put a lot of time or effort
into non-desktop compositors as of yet.

For more information, you can also read this e-mail thread.  Beware, there
is a lot of noise in it:

http://lists.freedesktop.org/archives/wayland-devel/2013-March/007814.html

--Jason Ekstrand


Re: minimized and stick windows

2013-05-15 Thread Bill Spitzak

Alexander Preisinger wrote:

+  entry name=toplevel value=1 summary=surface is neither 
maximized, minimized or fullscreen/


Maybe "normal"? "toplevel" sounds like it is in the same layer as popup 
notifiers.



+This event is sent in response to a set_maximized, set_minimized or
+set_fullscreen request to acknowledge the request. The client can
+update its own state if it wants to keep track of it.


No. The client *has* to assume the requests work. Echoing these will 
just confuse clients and they will have to do tricks to distinguish 
these from real requests from the shell. Similar to the ugly things X 
clients have to do to distinguish real configure notifies from echoes.


And set_fullscreen and set_maximized already have a response, which is a 
configure request for the size needed.


+The compositor also sends this event if it wants the surface minimized or
+maximized. For example by clicking on a task list item or compositor key
+bindings for fullscreen.


Yes, this is what this event is for and should be its only use.

I think you are imagining that the shell can do something before it 
sends these events. It cannot, because only the client knows exactly 
what effect these have. Only it knows if other surfaces should be 
hidden, shown, raised, or resized. Only it knows the size of a toplevel 
surface (imagine it was shown maximized first, so the shell has never 
seen it un-maximized).


If a client ignores these events then nothing happens. The client is 
mis-behaving but this is the way it has to be.



Re: minimized and stick windows

2013-05-15 Thread Bill Spitzak

Pekka Paalanen wrote:


Minimize is a little special, since the client does not need to react
specially for it to look right.


The client does have to react if there is a floating panel that also has 
to disappear.


For example the floating shared toolbox with 2 main windows. It should 
only disappear when *both* main windows are minimized.
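
(Illustrative sketch only: the client-side logic for the example above,
hiding a shared toolbox only once every main window it serves is minimized.)

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: the toolbox hides when all its main windows are
 * minimized, and reappears as soon as one of them is not. */
static bool
toolbox_should_hide(const bool *main_minimized, int n_mains)
{
	int i;

	for (i = 0; i < n_mains; i++)
		if (!main_minimized[i])
			return false;
	return true;
}

int
main(void)
{
	bool mains[2] = { true, false };

	printf("%d\n", toolbox_should_hide(mains, 2));  /* 0: keep showing */
	mains[1] = true;
	printf("%d\n", toolbox_should_hide(mains, 2));  /* 1: hide it      */
	return 0;
}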



Re: [PATCH] wl_shell: Add surface state changed event

2013-05-15 Thread Bill Spitzak

Alexander Preisinger wrote:


+enum name=state
+  entry name=default value=0/
+  entry name=minimized value=1/
+  entry name=maximized value=2/
+  entry name=fullscreen value=4/

Are these supposed to be flags? Like that it can send multiple states in 
one request?
I think the client should keep track of the previous state itself and 
the compositor only

sends the state he wants the client to have.


This came up before. It looks like it does have to be flags. The shell 
is interested in knowing what state it would be in if minimized or 
fullscreen is turned off, otherwise it cannot implement a "turn off 
fullscreen" button. I think however it does not have to be any more 
complex than flags; there can be precedence rules:


  Minimized means ignore maximize/fullscreen
  Fullscreen means ignore maximize
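
(Illustrative sketch only, using the flag values from Mikko's patch: the
precedence rules above applied to a combination of state flags.)

#include <stdio.h>

/* Illustrative only: values from the patch, combined as flags. */
#define STATE_MINIMIZED  1u
#define STATE_MAXIMIZED  2u
#define STATE_FULLSCREEN 4u

static const char *
effective_state(unsigned flags)
{
	if (flags & STATE_MINIMIZED)     /* minimized wins over the rest    */
		return "minimized";
	if (flags & STATE_FULLSCREEN)    /* fullscreen wins over maximized  */
		return "fullscreen";
	if (flags & STATE_MAXIMIZED)
		return "maximized";
	return "normal";
}

int
main(void)
{
	/* maximized + minimized: shown minimized, but un-minimizing would
	 * restore the window to its maximized state */
	printf("%s\n", effective_state(STATE_MAXIMIZED | STATE_MINIMIZED));
	return 0;
}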


Re: [PATCH] wl_shell: Add surface state changed event

2013-05-15 Thread Bill Spitzak

Mikko Levonmaa wrote:


+event name=state_changed
+  description summary=The surface state was changed
+The compositor or the user has taken action that has resulted in
+this surface to change state.
+  /description
+  arg name=state type=uint/
+/event
   /interface


"changed" is very misleading. If the client does not do anything, the 
state has not changed. This is a request from the shell and the state 
does not change until the client does set_fullscreen or whatever and 
does a commit.



Re: [PATCH] wl_shell: Add surface state changed event

2013-05-15 Thread Mikko Levonmaa
On Wed, May 15, 2013 at 01:57:10PM -0500, Jason Ekstrand wrote:
 On Wed, May 15, 2013 at 1:37 PM, Mikko Levonmaa mikko.levon...@gmail.com
 wrote:
 
 On Wed, May 15, 2013 at 12:12:43PM -0500, Jason Ekstrand wrote:
  On Wed, May 15, 2013 at 9:39 AM, Mikko Levonmaa 
 mikko.levon...@gmail.com
 
  wrote:
 
  This allows the shell to inform the surface that it has changed
  state, current supported states are default, minimized, maximized
  and fullscreen. The shell implementation is free to interpret the
  meaning for the state, i.e. the minimized might not always mean
  that the surface is fully hidden for example.
 
 
  We cannot simply have the shell telling clients it changed their state. 
  
 The
  clients need to be in control of the state of each surface.  This is
 because
  minimizing a client (for example) might not be as simple as hiding a
 specific
  window.  Only the client can actually know how to minimize/maximize it.
 
 Hmm... not sure I fully understand nor agree (perhaps lack of
 understanding;). So to me it seems that the compositor should be the
 driver, not the passenger, i.e. it know how to animate the surface when
 it gets minimized and when maximied. How would the client know this?
 Also wouldn't this imply more knowledge on the toolkits side as well?
 
 
 The clients don't need to know any of that.  The client tells the compositor
 minimize this surface and the compositor animates it.  Sure, the compositor
 has to know what's going on, but that doesn't mean it needs to be the driver. 
 Also, the compositor doesn't know what all else needs to happen. For instance,
 let's say that gimp wants to run in multi-window mode most of the time but
 destroy the dialogues and go to single-window mode when you maximize it.  How
 would the compositor know what windows need to go where?  Only the app can 
 know
 that.

Right, true. I initially misunderstood what you meant with "Only the client can
actually know...".

 There are a lot of other possible scenarios and the compositor can't know what
 to do there.
  
 
 
  Please read earlier min/max discussions or yesterday's IRC logs for more
  details.
 
 Neato, seems to be a hot topic, good to see someone else looking into
 this as well. I read through the email and pq's comments about avoiding
 flicker make sense, so having the compositor and the client be in sync
 about what's going on is needed. Also, naturally the client can be the
 originator, so clearly a request is needed. However, in some cases the
 request might not be honored by the compositor, especially in an embedded
 environment. And actually the compositor might only show windows in a
 certain state, i.e. fullscreen, so having the client free to decline a
 request might not be good either.
 
 
 Clients can *always* misbehave.  Suppose a client gives the wrong size surface
 when it goes maximized.  Or that it doesn't get rid of its window shadows
 around the sides.  Since the client is drawing itself, it can always 
 misbehave.
   Yes, there are cases where the compositor would want to run everything
 fullscreen.  However, those wouldn't be desktop compositors.  A
 fullscreen-only compositor would probably use a different interface than
 wl_shell.  No one has really put a lot of time or effort into non-desktop
 compositors as of yet.

Why would we want the fullscreen compositor to use a different shell
interface? This would force the toolkits to have different implementations
for various compositors/WMs; granted, some of them already do, but to me it
still seems like a step in the wrong direction. I would much rather see the
wl_shell interface serve them all, with the implementations of that
interface behaving differently. Not saying that wl_shell should be a
god-like interface, but I think that there is enough common ground.

 For more information, you can also read this e-mail thread.  Beware, there
 is a lot of noise in it:
 
 http://lists.freedesktop.org/archives/wayland-devel/2013-March/007814.html
 
 --Jason Ekstrand


Re: minimized and stick windows

2013-05-15 Thread Mikko Levonmaa
 I agree that the thread is a little daunting and gets
 political/personal towards the end.  But between Pekka, Jason and Bill
 I see consensus and I'll try to summarize here:
 
  - The server needs to be able to initiate state changes, but the
client is in control.  The server requests a state change but the
client ultimately has to set the new state and provide a new
buffer.  In case of maximize and unmaximize, the client has to
provide a new buffer before the state can change and in case of
minimize, the client may want to hide other windows as well as it
minimizes.
 
  - Discussion about whether states are orthogonal flags that the
client sets or if the client just sets the current state.  The
distinction is whether the compositor knows the full set of states
or only the effective state.  For example, if you maximize a window
and then minimize it, does the compositor know that it's maximized
and minimized or only that it's currently minimized?  I think the
compositor needs to know all the state, so that it's possible to
implement something like unmaximize while minimized.
 
There's a catch: the current model (set_toplevel, set_fullscreen
and set_maximized) doesn't work this way, these requests always set
the current state, not a flag.  I think we can fit those into the
new mechanism: set_toplevel clears all states, set_maximized sets
maximized and clears fullscreen, and set_fullscreen sets fullscreen.
 
  - Enum vs set_minimized.  Do we add an enum with states and a
set(state) request or do we add set_minimized etc?  We only lack
set_minimized currently, but we also need events to let the
compositor initiate state changes, so we would have to add
request_maximized/minimized events as well as
request_unmaximized/unminimized.  If we add an enum of states
instead, we can add set and clear requests and request_set and
request_clear events.
 
Using an enum also lets us add sticky and always-on-top as enum
values.
 
  - Visibility and frame events during minimized is orthogonal and up
to the compositor.  The compositor can keep sending frame events at
the full frame rate or throttle the application down to a few
frames per second for example.  But the compositor can do that at
any time, for example if the window is fully obscured by an opaque
surface, there's really no interaction with being minimized.
 
  - Stacking is an orthogonal issue.  Currently clients can't assume
anything about their stacking order relative to other clients, so a
compositor is free to unminimize surfaces to anywhere in the stack.
 
  - We've also talked about a request_close event that the compositor
    can use to ask a client to close its window.  This is useful for
    closing from a window list or from something like the GNOME Shell
    overview.  I think this is straightforward, though not directly
related to the state stuff here.
 
 If we turn this into protocol, I think it will look something like this:


    <interface name="wl_shell_surface" version="1">

      ...

      <enum name="state">
        <description summary="surface states">
          This is a bitmask of capabilities this seat has; if a member is
          set, then it is present on the seat.
        </description>
        <entry name="maximized" value="1"/>
        <entry name="minimized" value="2"/>
        <entry name="sticky" value="3"/>
        <entry name="always_on_top" value="4"/>
      </enum>

Aren't we missing fullscreen from the above enum? Also, the rationale
for me adding the default state (in the other thread) was that it would
indicate to the compositor that it is the normal state of the app,
i.e. when going from maximized/fullscreen to the default state the
compositor could remember the last size and propose that to the client.

      <request name="set" since="2">
        <description summary="Set the specified surface state"/>
        <arg name="state" type="uint"/>
      </request>

To me the word 'set' implies that this will happen and in some cases the
compositor might not honor this, so in a way it is a request. Perhaps
'request_state'?

 
      <request name="clear" since="2">
        <description summary="Clear the specified surface state"/>
        <arg name="state" type="uint"/>
      </request>

This is a bit unclear to me. Does the compositor take some action after
this request, or is the state just cleared on the compositor's side? It
seems a bit open-ended... if the client has set the state to, say,
fullscreen and then clears it, will the surface still stay fullscreen?

 ...
 
      <event name="request_set" since="2">
        <description summary="request to set the specified surface state"/>
        <arg name="state" type="uint"/>
      </event>

      <event name="request_clear" since="2">
        <description summary="request to clear the specified surface state"/>
        <arg name="state" type="uint"/>
      </event>

      <event name="request_close" since="2"/>

    </interface>
 
 How does that look?

Excellent ;)
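
For illustration, a client reacting to a compositor-initiated request_set
under the proposal quoted above might look roughly like this. It is only a
sketch: none of the set/clear requests or request_set events exist in
today's wl_shell_surface, so wl_shell_surface_set_state(), struct window
and window_redraw() are all hypothetical; only attach/commit are current
API.

    #include <wayland-client.h>

    struct window {                     /* hypothetical client bookkeeping */
            struct wl_surface *surface;
            struct wl_buffer *buffer;
            uint32_t flags;
    };

    static void window_redraw(struct window *w)
    {
            /* draw into w->buffer for the new state; for minimize the
             * client might also hide related dialog surfaces here */
    }

    static void
    handle_request_set(void *data, struct wl_shell_surface *shell_surface,
                       uint32_t state)
    {
            struct window *window = data;

            /* The compositor only asks; the client stays in control and
             * draws a buffer matching the new state first. */
            window->flags |= state;
            window_redraw(window);

            /* Then it acknowledges with the (proposed, hypothetical) set
             * request and commits, so state and contents change together. */
            wl_shell_surface_set_state(shell_surface, state); /* hypothetical */
            wl_surface_attach(window->surface, window->buffer, 0, 0);
            wl_surface_commit(window->surface);
    }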

 
 

Re: minimized and stick windows

2013-05-15 Thread Jason Ekstrand
On May 15, 2013 9:37 PM, Mikko Levonmaa mikko.levon...@gmail.com wrote:

  I agree that the thread is a little daunting and gets
  political/personal towards the end.  But between Pekka, Jason and Bill
  I see consensus and I'll try to summarize here:
 
   - The server needs to be able to initiate state changes, but the
 client is in control.  The server requests a state change but the
 client ultimately has to set the new state and provide a new
 buffer.  In case of maximize and unmaximize, the client has to
 provide a new buffer before the state can change and in case of
 minimize, the client may want to hide other windows as well as it
 minimizes.
 
   - Discussion about whether states are orthogonal flags that the
 client sets or if the client just sets the current state.  The
 distinction is whether the compositor knows the full set of states
 or only the effective state.  For example, if you maximize a window
 and then minimize it, does the compositor know that it's maximized
 and minimized or only that it's currently minimized?  I think the
 compositor needs to know all the state, so that it's possible to
 implement something like unmaximize while minimized.
 
 There's a catch: the current model (set_toplevel, set_fullscreen
 and set_maximized) doesn't work this way, these requests always set
 the current state, not a flag.  I think we can fit those into the
 new mechanism: set_toplevel clears all states, set_maximized sets
 maximized and clears fullscreen, and set_fullscreen sets fullscreen.
 
   - Enum vs set_minimized.  Do we add an enum with states and a
 set(state) request or do we add set_minimized etc?  We only lack
 set_minimized currently, but we also need events to let the
 compositor initiate state changes, so we would have to add
 request_maximized/minimized events as well as
 request_unmaximized/unminimized.  If we add an enum of states
 instead, we can add set and clear requests and request_set and
 request_clear events.
 
 Using an enum also lets us add sticky and always-on-top as enum
 values.
 
   - Visibility and frame events during minimized is orthogonal and up
 to the compositor.  The compositor can keep sending frame events at
 the full frame rate or throttle the application down to a few
 frames per second for example.  But the compositor can do that at
 any time, for example if the window is fully obscured by an opaque
 surface, there's really no interaction with being minimized.
 
   - Stacking is an orthogonal issue.  Currently clients can't assume
 anything about their stacking order relative to other clients, so a
 compositor is free to unminimize surfaces to anywhere in the stack.
 
   - We've also talked about a request_close event that the compositor
  can use to ask a client to close its window.  This is useful for
  closing from a window list or from something like the GNOME Shell
  overview.  I think this is straightforward, though not directly
 related to the state stuff here.
 
  If we turn this into protocol, I think it will look something like this:


    <interface name="wl_shell_surface" version="1">

      ...

      <enum name="state">
        <description summary="surface states">
          This is a bitmask of capabilities this seat has; if a member is
          set, then it is present on the seat.
        </description>
        <entry name="maximized" value="1"/>
        <entry name="minimized" value="2"/>
        <entry name="sticky" value="3"/>
        <entry name="always_on_top" value="4"/>
      </enum>

 Aren't we missing fullscreen from the above enum? Also, the rationale
 for me adding the default state (in the other thread) was that it would
 indicate to the compositor that it is the normal state of the app,
 i.e. when going from maximized/fullscreen to the default state the
 compositor could remember the last size and propose that to the client.

Fullscreen is a bit special as it requires other arguments (mode and
output). You can't merely set it as a flag. It should probably be considered
a different mode altogether. More specifically, the flags only apply
to toplevel surfaces. (Maximized will require a little work to keep
backwards compatibility.)
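
For reference, the existing fullscreen request in the generated C client
API already carries those extra arguments, which is why it cannot be a
plain flag:

    #include <wayland-client.h>

    /* Fullscreen needs a method, a framerate and an output. */
    static void
    go_fullscreen(struct wl_shell_surface *shell_surface,
                  struct wl_output *output)
    {
            wl_shell_surface_set_fullscreen(shell_surface,
                    WL_SHELL_SURFACE_FULLSCREEN_METHOD_DEFAULT,
                    0 /* framerate: 0 = don't care */,
                    output);
    }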


      <request name="set" since="2">
        <description summary="Set the specified surface state"/>
        <arg name="state" type="uint"/>
      </request>

 To me the word 'set' implies that this will happen and in some cases the
 compositor might not honor this, so in a way it is a request. Perhaps
 'request_state'?

The client is setting surface flags (perhaps "unset" would be better than
"clear" below). Exactly what the compositor does will depend on a
precedence order. Kristian didn't define it above, but it shouldn't be too
hard to do.  That said, the order should be well documented.

Why wouldn't the server respect the flags? Also, when this request is
handled, the flag is set. The only strange 

Re: minimized and stick windows

2013-05-15 Thread Bill Spitzak

Mikko Levonmaa wrote:


i.e. when going from maximized/fullscreen to the default state the
compositor could remember the last size and propose that to the client.


The client has to know the default size:

1. It may have initially been shown maximized. The compositor therefore has
not seen it in its normal state and does not know the size.


2. The state of the client may have changed while it was maximized such
that its normal size has changed.


An actual example of a bug we have run into repeatedly on Windows and in
Qt (though with the underlying X you can fix it) is that we want to save
our window state in a file, and we want to save both the fact that it is
maximized and what the un-maximized size is. This is not possible if
only the compositor knows it.


On Windows we are forced to blink a maximized window when created so 
that Windows sees the normal size and remembers it. On X a big kludge is 
done to get around Qt emulating the Windows bug.


It would be nice if Wayland avoided this problem.

So as I see it, if the user hits the un-maximize hot key (see the sketch
after this list):

1. The compositor sends a state_change event that turns off maximize

2. Client figures out its un-maximized size, and configures the
surface, drawing the new resized image.


3. Client sends the state_change request to tell the compositor that 
this new image is not maximized.


4. Client does a commit so the new size, image, and non-maximized state 
are all updated atomically.
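
A rough client-side sketch of steps 2-4, assuming a state_change request
along those lines existed; wl_shell_surface_state_change() and the window
bookkeeping are hypothetical, only attach/commit are today's API:

    #include <wayland-client.h>

    struct window {                    /* hypothetical client bookkeeping */
            struct wl_surface *surface;
            struct wl_buffer *buffer;
            int normal_width, normal_height;  /* only the client knows these */
    };

    static void
    unmaximize(struct window *window, struct wl_shell_surface *shell_surface)
    {
            /* 2. Use the un-maximized size the client itself remembered and
             *    draw a buffer at that size (drawing code omitted). */

            /* 3. Tell the compositor the new buffer is not maximized
             *    (hypothetical request from the flow described above). */
            wl_shell_surface_state_change(shell_surface, 0 /* maximized off */);

            /* 4. Attach and commit, so size, contents and state update
             *    atomically. */
            wl_surface_attach(window->surface, window->buffer, 0, 0);
            wl_surface_commit(window->surface);
    }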



Re: [RFC] libinputmapper: Input device configuration for graphic-servers

2013-05-15 Thread Peter Hutterer
On Sun, May 12, 2013 at 04:20:59PM +0200, David Herrmann wrote:
 [bcc to gnome-shell-list and kwin, to keep discussion on wayland-devel]
 
 Without a generic graphics-server like xserver, compositors need to
 handle input devices themselves if run as wayland compositors. To
 avoid having several different conflicting implementations, I wrote up
 a small proposal and library API to have a common configuration.
 
 How is it currently handled?
 A compositor uses udev to listen for input devices. For each
 input-device, they use some heuristics (test for KEY_*, ABS_*, ..
 event-bits) to figure out what kind of input is provided by the
 device. Unknown devices and events are ignored, devices that look
 useful are passed to the correct input-driver.
 For keyboard input, libxkbcommon is used. For
 mouse/touchpad/touchscreen input, every compositor has its own simple
 driver (I am not aware of an attempt to put xf86-input-synaptics into
 an independent library). 

fwiw, I tried this once, but the amount of legacy junk in the X driver is
large enough that it would be better to write synaptics from scratch as a
library and hook that up instead.

 For other device types, applications handle
 input themselves (gamepads, joysticks, and so on). And then there are
 devices that have x11 drivers, but I am not aware of external drivers
 for wayland-compositors (like wacom digitizers).
 
 I am not interested in the device drivers themselves (for now, in this
 project). I think it would be nice to write a libsynapticscommon,

ftr, please don't name such an effort synaptics. the name in the X driver is
historical and should be changed, so libtouchpad is a better name.

 libmousecommon, .. which provide a libxkbcommon'ish interface for
 other device types independent of the compositor implementation, but
 that's another independent issue.
 Instead, I am more interested in the device-detection and enumeration.
 If I plug in a gamepad device, I don't want _any_ compositor to handle
 it as a mouse, just because it provides REL_X/Y values. I don't want
 compositors to erroneously handle accelerometers as mice because they
 provide ABS_X/Y values. But on the other hand, I want users to be able
 to tell compositors to handle ABS_X/Y input from their custom hardware
 as mouse input, _iff_ they want to.
 Furthermore, if a device has a buggy kernel driver and reports BTN_X
 instead of BTN_A, I want all compositors to detect that and apply a
 simple fixup button-mapping. Or allow users to remap
 buttons/axis/LEDs/EV_WHATEVER arbitrarily.
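
For context, the event-bit heuristic being criticised here usually amounts
to a few EVIOCGBIT ioctls on the evdev node; a minimal sketch (the device
path is just an example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/input.h>

    #define BIT_IS_SET(arr, bit) ((arr)[(bit) / 8] & (1 << ((bit) % 8)))

    int main(void)
    {
            unsigned char evbits[(EV_MAX + 7) / 8] = { 0 };
            unsigned char relbits[(REL_MAX + 7) / 8] = { 0 };
            int fd = open("/dev/input/event0", O_RDONLY);  /* example node */

            if (fd < 0)
                    return 1;

            ioctl(fd, EVIOCGBIT(0, sizeof(evbits)), evbits);
            ioctl(fd, EVIOCGBIT(EV_REL, sizeof(relbits)), relbits);

            /* The naive test: "it has REL_X/REL_Y, so it must be a mouse"
             * -- exactly the kind of guess the proposal wants to
             * centralise and fix. */
            if (BIT_IS_SET(evbits, EV_REL) &&
                BIT_IS_SET(relbits, REL_X) && BIT_IS_SET(relbits, REL_Y))
                    printf("looks like a mouse (maybe)\n");

            close(fd);
            return 0;
    }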
 
 udev provides some very basic heuristics with device-tags, but talking
 to Kay Sievers, he'd like to avoid having huge detection-tables and
 heuristics in udev (which is understandable).
 
 Dmitry Torokhov is not averse to providing device-type properties in
 the kernel input-subsystem, but on the other hand, it doesn't help us
 much. Generic HID devices might still provide any arbitrary input that
 we would have to write custom drivers/quirks for, if they don't match.
 So if no-one steps up to do all that work, I recommend providing these
 fixups in user-space. This also has the advantage, that users can
 arbitrarily modify these rules if they want crazy setups (which users
 normally want..). And we can ship fixup-rules for new devices, while
 in the meantime writing kernel drivers for them and waiting for the
 next kernel release. And don't forget the kernel drivers with broken
 mappings, which we cannot fix due to backwards-compatibility, but
 still want them to be correctly mapped in new
 compositors/applications.
 
 
 So what is the proposed solution?
 My recommendation is, that compositors still search for devices via
 udev and use device drivers like libxkbcommon. So linux evdev handling
 is still controlled by the compositor. However, I'd like to see
 something like my libinputmapper proposal being used for device
 detection and classification.
 
 libinputmapper provides an inmap_evdev object which reads device
 information from an evdev fd or a sysfs /sys/class/input/input<num>
 path, performs some heuristics to classify it and searches its global
 database for known fixups for broken devices.
 It then provides capabilities to the caller, which allow them to see
 what drivers to load on the device. And it provides a very simple
 mapping table that allows to apply fixup mappings for broken devices.
 These mappings are simple 1-to-1 mappings that are supposed to be
 applied before drivers handle the input. This is to avoid
 device-specific fixup in the drivers and move all this to the
 inputmapper. An example would be a remapping for gamepads that report
 BTN_A instead of BTN_NORTH, but we cannot fix them in the kernel for
 backwards-compatibility reasons. The gamepad-driver can then assume
 that if it receives BTN_NORTH, it is guaranteed to be BTN_NORTH and
 doesn't need to special case xbox360/etc. controllers, because they're
 broken.
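
A sketch of how a compositor might consume such a library. This is purely
an assumption about the proposed API: inmap_evdev is the object named in
the proposal, but every function and constant below is made up for
illustration.

    #include <linux/input.h>

    struct inmap_evdev;   /* opaque object from the proposal */

    /* Hypothetical API -- signatures invented for this sketch. */
    extern struct inmap_evdev *inmap_evdev_open(const char *syspath);
    extern unsigned int inmap_evdev_capabilities(struct inmap_evdev *m);
    extern int inmap_evdev_map_key(struct inmap_evdev *m, int code);
    #define INMAP_CAP_GAMEPAD (1 << 0)   /* hypothetical */

    static void
    handle_event(struct inmap_evdev *map, struct input_event *ev)
    {
            /* Classification tells the compositor which driver to load;
             * the fixup table rewrites broken codes (e.g. BTN_A ->
             * BTN_NORTH) before the driver ever sees them. */
            if (inmap_evdev_capabilities(map) & INMAP_CAP_GAMEPAD) {
                    if (ev->type == EV_KEY)
                            ev->code = inmap_evdev_map_key(map, ev->code);
                    /* hand ev to the gamepad driver here */
            }
    }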

I think evdev is exactly that interface and apparently it