Mapping surfaces created through a nested compositor to UI elements

2014-01-30 Thread Iago Toral
Hi,

in the process of porting webkitgtk+ to wayland and following advice
provided here, I implemented a nested compositor to share surfaces
between the two processes that do the rendering. This works fine with a
single widget/surface, but things get a bit more complicated when
dealing with various widgets (browser tabs, windows).

I am trying to understand whether I can solve my problem, which is
essentially one of matching wayland surfaces created in the nested
compositor with their corresponding widgets in the UI, within the scope
of the wayland protocol, or whether I need to resort to ad-hoc
communication between the two processes outside the protocol. Below I
describe the problem in more detail together with the possible
solutions I looked into and my conclusions. I'd appreciate it a lot if
someone could confirm whether these are correct:

As far as I understand the code in the nested client example from
Weston, when there is need to repaint the UI it goes through all the
surfaces in the compositor and paints them one by one. In our case, when
GTK needs to repaint it will go through the widgets in the UI and ask
them to repaint as needed. This means that we need to know, for a given
widget, which is the surface in the nested compositor that provides the
contents for it.

However, when the nested compositor receives a request to create a
surface it will not know for which widget it is creating it (obviously
information on things like UI widgets is outside the scope of the
wayland protocol), and as far as I can see, there is no way for the
client to provide this info to the compositor either when the surface is
created or after it has been created.

Assuming that this is not something I can do using available APIs, I
looked into adding this API to my nested compositor implementation, so I
can have a surface constructor like this:

wl_compositor_create_surface_for_widget(struct wl_compositor*, int);

where that additional 'int' parameter would be used on the compositor's
side to associate the new surface with a specific UI widget.

Unfortunately, to do this I would really want to reuse Wayland's
existing code and APIs for message communication between client and
compositor, but a good part of this is private to Wayland (the
wl_closure machinery, for example), so it feels like I would end up
duplicating quite some Wayland code in WebKit, which I think is really
not a good idea. Also, the fact that these APIs have been kept internal
to Wayland suggests that this is not something developers are expected
to do.

If the above is not a good solution either, I understand there would be
no solution to my problem within the wayland protocol, and I would need
to add messages between the two processes outside the protocol, after
the surface is created, in order to associate the surface and the
widget on the compositor's side. In that case, I would need to
communicate the widget ID and the ID of the Wayland object representing
the surface (which I understand I'd get by calling wl_proxy_get_id on
the client for the surface).
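For what it's worth, fetching that object ID on the client side is a
one-liner, since a wl_surface is a wl_proxy underneath. A minimal
sketch (the helper name is made up for illustration):

```c
#include <stdint.h>
#include <wayland-client.h>

/* Return the protocol object ID of a wl_surface, suitable for
 * sending alongside a widget ID over an out-of-band channel.
 * A wl_surface can be cast to wl_proxy, its base type. */
static uint32_t
surface_object_id(struct wl_surface *surface)
{
	return wl_proxy_get_id((struct wl_proxy *) surface);
}
```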

Is my analysis of the problem correct or is there some way in which I
can achieve my objective within the wayland protocol?

Iago

___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH] dim-layer: fix dimming for unfocused surfaces

2014-01-30 Thread Emilio Pozuelo Monfort
Hi Ander,

On 29/01/14 16:09, Ander Conselvan de Oliveira wrote:
 On 01/15/2014 10:30 AM, Emilio Pozuelo Monfort wrote:
 bump

 On 07/01/14 17:23, poch...@gmail.com wrote:
 From: Emilio Pozuelo Monfort emilio.pozu...@collabora.co.uk

 Unfocusing a surface should dim it when dim-layer is enabled,
 but this got broken in commit 83ffd9.
 ---
   desktop-shell/shell.c | 13 -
   1 file changed, 12 insertions(+), 1 deletion(-)

 diff --git a/desktop-shell/shell.c b/desktop-shell/shell.c
 index f85a269..cca96be 100644
 --- a/desktop-shell/shell.c
 +++ b/desktop-shell/shell.c
  @@ -4141,6 +4141,7 @@ activate(struct desktop_shell *shell, struct weston_surface *es,
  	 struct weston_seat *seat)
   {
   	struct weston_surface *main_surface;
  +	struct weston_view *main_view;
   	struct focus_state *state;
   	struct workspace *ws;
   	struct weston_surface *old_es;
  @@ -4162,8 +4163,18 @@ activate(struct desktop_shell *shell, struct weston_surface *es,
   	shsurf = get_shell_surface(main_surface);
   	if (shsurf->state.fullscreen)
   		shell_configure_fullscreen(shsurf);
  -	else
  +	else {
  +		ws = get_current_workspace(shell);
  +		main_view = get_default_view(main_surface);
  +		if (main_view) {
  +			wl_list_remove(&main_view->layer_link);
  +			wl_list_insert(&ws->layer.view_list, &main_view->layer_link);
  +			weston_view_damage_below(main_view);
  +			weston_surface_damage(main_view->surface);
  +		}
 
 So you're basically rewriting weston_view_restack() here. Wouldn't a better
 fix be to move the animation logic below the call to shell_surface_update_layer(),
 which is the place where the surface is restacked after the commit you
 mentioned?

You are absolutely right. I'll send a new patch soon together with other fixes
for fullscreen surfaces.

Regards,
Emilio


Re: Mapping surfaces created through a nested compositor to UI elements

2014-01-30 Thread Pekka Paalanen
On Thu, 30 Jan 2014 10:32:03 +0100
Iago Toral ito...@igalia.com wrote:

 [Iago's original message quoted in full; trimmed]

Hi,

the short answer to your problem is: write a custom shell extension.

That means that you would be writing your own private Wayland protocol
extension in the same XML format as all Wayland protocol is already
defined. Let's take xdg_shell as an example:
http://cgit.freedesktop.org/wayland/weston/tree/protocol/xdg-shell.xml

A compositor advertises a global object of type xdg_shell. The
xdg_shell interface has two requests that allow you to add meaning to
wl_surfaces: get_xdg_surface and get_xdg_popup. These requests
associate additional information to the given wl_surface, and create a
new protocol object to allow manipulating the new features added to the
wl_surface object.

If all you need to do is to associate an integer ID to a wl_surface,
you could define a new global interface with a single request, that has
a wl_surface and the ID as arguments. If you need more, you can add
more like xdg_shell does. There are also many other examples of how
wl_surface can be extended with the help of a new global interface.
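For illustration, a minimal version of such an extension could look
like this (interface and request names here are invented for the
example; wayland-scanner would generate the client stubs and server
glue from it, so none of the private wl_closure machinery needs to be
duplicated):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<protocol name="widget_mapper">
  <!-- Hypothetical private extension for the nested compositor:
       lets a client tag an existing wl_surface with the ID of the
       UI widget it provides content for. -->
  <interface name="widget_mapper" version="1">
    <request name="set_widget_id">
      <arg name="surface" type="object" interface="wl_surface"/>
      <arg name="widget_id" type="uint"/>
    </request>
  </interface>
</protocol>
```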

This new global interface would be advertised by the nested compositor
only, and the clients of that compositor would be using it. No-one else
would ever see it. The clients would use the standard wl_compositor to
create standard wl_surface objects, and then add new meaning to them
with your extension.
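On the client side, using an extension of that shape would reduce to
one extra call on the scanner-generated stubs. A sketch, with
hypothetical names that do not exist in stock libwayland:

```c
#include <stdint.h>
#include <wayland-client.h>

/* Hypothetical stubs as wayland-scanner would generate them from a
 * private extension XML; the names are illustrative only. */
struct widget_mapper;
extern void widget_mapper_set_widget_id(struct widget_mapper *mapper,
					struct wl_surface *surface,
					uint32_t widget_id);

/* Create a standard surface, then tag it with the widget it backs. */
static struct wl_surface *
create_surface_for_widget(struct wl_compositor *compositor,
			  struct widget_mapper *mapper,
			  uint32_t widget_id)
{
	struct wl_surface *surface =
		wl_compositor_create_surface(compositor);

	widget_mapper_set_widget_id(mapper, surface, widget_id);
	return surface;
}
```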

Does this help?

Btw. do not write an extension that has a new request to create
wl_surface objects, or any objects that are already creatable via other
interfaces. Doing so would lead to interface versioning problems, as
explained in:
http://wayland.freedesktop.org/docs/html/sect-Protocol-Versioning.html



Re: Mapping surfaces created through a nested compositor to UI elements

2014-01-30 Thread Jason Ekstrand
Yeah, Pekka pretty much covered it.  I've just got one more observation to
add.  Are you doing one process per tab? Or do you have one process
handling multiple tabs?  If you have one process per tab, then you can
easily differentiate by which wl_client the surface is attached to.  You
can know which client is which because (I assume) you should be launching
them privately and manually creating the wl_client objects.
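In that one-process-per-tab scenario, the nested compositor could
create each wl_client explicitly from one end of a socketpair and
remember which tab it belongs to. A rough, untested sketch:

```c
#include <sys/socket.h>
#include <wayland-server.h>

/* Create a wl_client for a tab process: keep one end of the
 * socketpair in the compositor, hand the other end to the child
 * (e.g. across fork/exec). Returns NULL on error. */
static struct wl_client *
create_tab_client(struct wl_display *display, int *child_fd_out)
{
	int fds[2];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0)
		return NULL;

	*child_fd_out = fds[1]; /* pass this fd to the tab process */
	return wl_client_create(display, fds[0]);
}
```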

Thanks,
--Jason Ekstrand
On Jan 30, 2014 5:35 AM, Pekka Paalanen ppaala...@gmail.com wrote:

 [Pekka's reply, including Iago's original message, quoted in full; trimmed]

Re: Mapping surfaces created through a nested compositor to UI elements

2014-01-30 Thread Iago Toral
On Thu, 2014-01-30 at 13:34 +0200, Pekka Paalanen wrote:
 [Pekka's reply quoted in full; trimmed]
 

Re: Mapping surfaces created through a nested compositor to UI elements

2014-01-30 Thread Iago Toral
Hi Jason, we have a single process managing all the tabs, so this is
not an option in this case. Thanks for the suggestion though.
Regards,
Iago

On Thu, 2014-01-30 at 05:39 -0600, Jason Ekstrand wrote:
 [Jason's message, including the earlier quoted thread, trimmed]

[PATCH weston v2] dim-layer: fix dimming for unfocused surfaces

2014-01-30 Thread pochu27
From: Emilio Pozuelo Monfort emilio.pozu...@collabora.co.uk

Unfocusing a surface should dim it when dim-layer is enabled,
but this got broken in commit 83ffd9.
---
 desktop-shell/shell.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/desktop-shell/shell.c b/desktop-shell/shell.c
index 30bd273..3087042 100644
--- a/desktop-shell/shell.c
+++ b/desktop-shell/shell.c
@@ -4250,14 +4250,14 @@ activate(struct desktop_shell *shell, struct weston_surface *es,
 	else
 		restore_all_output_modes(shell->compositor);
 
+	/* Update the surface’s layer. This brings it to the top of the stacking
+	 * order as appropriate. */
+	shell_surface_update_layer(shsurf);
+
 	if (shell->focus_animation_type != ANIMATION_NONE) {
 		ws = get_current_workspace(shell);
 		animate_focus_change(shell, ws, get_default_view(old_es),
 				     get_default_view(es));
 	}
-
-	/* Update the surface’s layer. This brings it to the top of the stacking
-	 * order as appropriate. */
-	shell_surface_update_layer(shsurf);
 }
 
 /* no-op func for checking black surface */
-- 
1.8.5.3



[PATCH weston] desktop-shell: Properly handle lowered fullscreen surfaces

2014-01-30 Thread pochu27
From: Emilio Pozuelo Monfort emilio.pozu...@collabora.co.uk

lower_fullscreen_surface() was removing fullscreen surfaces from
the fullscreen layer and inserting them in the normal workspace
layer. However, those fullscreen surfaces were never put back in
the fullscreen layer, causing bugs such as unrelated surfaces
being drawn between a fullscreen surface and its black view.

Change the lower_fullscreen_surface() logic so that it lowers
fullscreen surfaces to the workspace layer *and* hides the
black views. Make this reversible by re-configuring the lowered
fullscreen surface: when it is re-configured, the black view
will be shown again and the surface will be restacked in the
fullscreen layer.

https://bugs.freedesktop.org/show_bug.cgi?id=73575
https://bugs.freedesktop.org/show_bug.cgi?id=74221
https://bugs.freedesktop.org/show_bug.cgi?id=74222
---
 desktop-shell/exposay.c |  8 +++
 desktop-shell/shell.c   | 56 +
 desktop-shell/shell.h   |  5 +
 3 files changed, 42 insertions(+), 27 deletions(-)

diff --git a/desktop-shell/exposay.c b/desktop-shell/exposay.c
index fe7a3a7..f09224f 100644
--- a/desktop-shell/exposay.c
+++ b/desktop-shell/exposay.c
@@ -141,7 +141,7 @@ exposay_highlight_surface(struct desktop_shell *shell,
 	shell->exposay.row_current = esurface->row;
 	shell->exposay.column_current = esurface->column;
 
-	activate(shell, view->surface, shell->exposay.seat);
+	activate(shell, view->surface, shell->exposay.seat, false);
 	shell->exposay.focus_current = view;
 }
 
@@ -283,8 +283,6 @@ exposay_layout(struct desktop_shell *shell)
 		if (shell->exposay.focus_current == esurface->view)
 			highlight = esurface;
 
-		set_alpha_if_fullscreen(get_shell_surface(view->surface));
-
 		exposay_animate_in(esurface);
 
 		i++;
@@ -502,10 +500,10 @@ exposay_transition_inactive(struct desktop_shell *shell, int switch_focus)
 	 * to the new. */
 	if (switch_focus && shell->exposay.focus_current)
 		activate(shell, shell->exposay.focus_current->surface,
-			 shell->exposay.seat);
+			 shell->exposay.seat, true);
 	else if (shell->exposay.focus_prev)
 		activate(shell, shell->exposay.focus_prev->surface,
-			 shell->exposay.seat);
+			 shell->exposay.seat, true);
 
 	wl_list_for_each(esurface, &shell->exposay.surface_list, link)
 		exposay_animate_out(esurface);
diff --git a/desktop-shell/shell.c b/desktop-shell/shell.c
index 3087042..a8a0537 100644
--- a/desktop-shell/shell.c
+++ b/desktop-shell/shell.c
@@ -173,6 +173,7 @@ struct shell_surface {
 	struct {
 		bool maximized;
 		bool fullscreen;
+		bool lowered; /* fullscreen but lowered, see lower_fullscreen_layer() */
 		bool relative;
 	} state, next_state; /* surface states */
 	bool state_changed;
@@ -223,13 +224,6 @@ struct shell_seat {
 	} popup_grab;
 };
 
-void
-set_alpha_if_fullscreen(struct shell_surface *shsurf)
-{
-	if (shsurf && shsurf->state.fullscreen)
-		shsurf->fullscreen.black_view->alpha = 0.25;
-}
-
 static struct desktop_shell *
 shell_surface_get_shell(struct shell_surface *shsurf);
 
@@ -611,7 +605,7 @@ focus_state_surface_destroy(struct wl_listener *listener, void *data)
 	shell = state->seat->compositor->shell_interface.shell;
 	if (next) {
 		state->keyboard_focus = NULL;
-		activate(shell, next, state->seat);
+		activate(shell, next, state->seat, true);
 	} else {
 		if (shell->focus_animation_type == ANIMATION_DIM_LAYER) {
 			if (state->ws->focus_animation)
@@ -1762,10 +1756,10 @@ busy_cursor_grab_button(struct weston_pointer_grab *base,
 	struct weston_seat *seat = grab->grab.pointer->seat;
 
 	if (shsurf && button == BTN_LEFT && state) {
-		activate(shsurf->shell, shsurf->surface, seat);
+		activate(shsurf->shell, shsurf->surface, seat, true);
 		surface_move(shsurf, seat);
 	} else if (shsurf && button == BTN_RIGHT && state) {
-		activate(shsurf->shell, shsurf->surface, seat);
+		activate(shsurf->shell, shsurf->surface, seat, true);
 		surface_rotate(shsurf, seat);
 	}
 }
@@ -2036,7 +2030,7 @@ shell_surface_calculate_layer_link (struct shell_surface *shsurf)
 	switch (shsurf->type) {
 	case SHELL_SURFACE_POPUP:
 	case SHELL_SURFACE_TOPLEVEL:
-		if (shsurf->state.fullscreen) {
+		if (shsurf->state.fullscreen && !shsurf->state.lowered) {
 			return &shsurf->shell->fullscreen_layer.view_list;
 		} else if (shsurf->parent) {
 			/* Move the surface to its parent layer so
@@ -2533,6 +2527,8 @@

[PATCH weston] Fullscreen surfaces

2014-01-30 Thread pochu27
From: Emilio Pozuelo Monfort emilio.pozu...@collabora.co.uk

Hi,

The following patch fixes a bunch of issues related to fullscreen
surfaces. From my testing, this makes fullscreen clients behave
much better.

We still have the following problem, but it is not a regression. I am
not sure how to fix it without regressing other situations. Have a
fullscreen client and launch a new client, it won't appear in
the front but behind the fullscreen client.

https://bugs.freedesktop.org/show_bug.cgi?id=74219

Emilio Pozuelo Monfort (1):
  desktop-shell: Properly handle lowered fullscreen surfaces

 desktop-shell/exposay.c |  8 +++
 desktop-shell/shell.c   | 56 +
 desktop-shell/shell.h   |  5 +
 3 files changed, 42 insertions(+), 27 deletions(-)

-- 
1.8.5.3



RE: [PATCH 2/2] udev-seat: break early when output is found and log the mapping

2014-01-30 Thread Eoff, Ullysses A
bump

 -Original Message-
 From: Eoff, Ullysses A
 Sent: Friday, January 10, 2014 10:15 AM
 To: wayland-devel@lists.freedesktop.org
 Cc: Eoff, Ullysses A
 Subject: [PATCH 2/2] udev-seat: break early when output is found and log the mapping
 
 When an input device has a WL_OUTPUT udev property specified and
 that output is found, log it... also break from the loop immediately.
 
 Log a warning if the requested output is not found.
 
 Signed-off-by: U. Artie Eoff ullysses.a.e...@intel.com
 ---
  src/udev-seat.c | 38 +-
  1 file changed, 29 insertions(+), 9 deletions(-)
 
 diff --git a/src/udev-seat.c b/src/udev-seat.c
 index f9723f2..f4fdae0 100644
 --- a/src/udev-seat.c
 +++ b/src/udev-seat.c
 @@ -105,13 +105,14 @@ device_added(struct udev_device *udev_device, struct udev_input *input)
 	device->abs.calibration[4],
 	device->abs.calibration[5]) == 6) {
 		device->abs.apply_calibration = 1;
 -		weston_log("Applying calibration: %f %f %f %f %f %f\n",
 -			device->abs.calibration[0],
 -			device->abs.calibration[1],
 -			device->abs.calibration[2],
 -			device->abs.calibration[3],
 -			device->abs.calibration[4],
 -			device->abs.calibration[5]);
 +		weston_log_continue(STAMP_SPACE
 +			"applying calibration: %f %f %f %f %f %f\n",
 +			device->abs.calibration[0],
 +			device->abs.calibration[1],
 +			device->abs.calibration[2],
 +			device->abs.calibration[3],
 +			device->abs.calibration[4],
 +			device->abs.calibration[5]);
 	}
 
 	wl_list_insert(seat->devices_list.prev, device->link);
 @@ -125,8 +126,20 @@ device_added(struct udev_device *udev_device, struct udev_input *input)
 	if (output_name) {
 		device->output_name = strdup(output_name);
 		wl_list_for_each(output, c->output_list, link)
 -			if (strcmp(output->name, device->output_name) == 0)
 +			if (strcmp(output->name, device->output_name) == 0) {
 				device->output = output;
 +				weston_log_continue(
 +					STAMP_SPACE
 +					"mapping to output: %s\n",
 +					device->output->name);
 +				break;
 +			}
 +		if (!device->output || strcmp(device->output->name,
 +			      device->output_name) != 0) {
 +			weston_log_continue(
 +				STAMP_SPACE
 +				"warning: map to output %s failed... output not found\n",
 +				device->output_name);
 +		}
 	}
 
 	if (input->enabled == 1)
 @@ -354,8 +367,15 @@ notify_output_create(struct wl_listener *listener, void *data)
 
 	wl_list_for_each(device, seat->devices_list, link)
 		if (device->output_name &&
 -		    strcmp(output->name, device->output_name) == 0)
 +		    strcmp(output->name, device->output_name) == 0) {
 			device->output = output;
 +			weston_log("%s\n", device->devname);
 +			weston_log_continue(
 +				STAMP_SPACE
 +				"mapping to output: %s\n",
 +				device->output->name);
 +			break;
 +		}
  }
 
  static struct udev_seat *
 --
 1.8.4.2



[RFC v2] Wayland presentation extension (video protocol)

2014-01-30 Thread Pekka Paalanen
Hi,

it's time for a take two on the Wayland presentation extension.


1. Introduction

The v1 proposal is here:
http://lists.freedesktop.org/archives/wayland-devel/2013-October/011496.html

In v2 the basic idea is the same: you can queue frames with a
target presentation time, and you can get accurate presentation
feedback. All the details are new, though. The re-design started
from the wish to handle resizing better, preferably without
clearing the buffer queue.

All the changed details are probably too much to describe here,
so it is maybe better to look at this as a new proposal. It
still does build on Frederic's work, and everyone who commented
on it. Special thanks to Axel Davy for his counter-proposal and
fighting with me on IRC. :-)

Some highlights:

- Accurate presentation feedback is possible also without
  queueing.

- You can queue also EGL-based rendering, and get presentation
  feedback if you want. Also EGL can do this internally, too, as
  long as EGL and the app do not try to use queueing at the same time.

- More detailed presentation feedback to better allow predicting
  future display refreshes.

- If wl_viewport is used, neither video resolution changes nor
  surface (window) size changes alone require clearing the queue.
  Video can continue playing even during resizes.

The protocol interfaces are arranged as

global.method(wl_surface, ...)

just for brevity. We could as well do the factory approach:

o = global.get_presentation(wl_surface)
o.method(...)

Or if we wanted to make it a mandatory part of the Wayland core
protocol, we could just extend wl_surface itself:

wl_surface.method(...)

and put the clock_id event in wl_compositor. That all is still
open and fairly uninteresting, so let's concentrate on the other
details.

The proposal refers to wl_viewport.set_source and
wl_viewport.destination requests, which do not yet exist in the
scaler protocol extension. These are just the wl_viewport.set
arguments split into separate src and dst requests.

Here is the new proposal, some design rationale follows. Please,
do ask why something is designed like it is if it puzzles you. I
have a load of notes I couldn't clean up for this email. This
does not even intend to completely solve all XWayland needs, but
for everything native on Wayland I hope it is sufficient.


2. The protocol specification

<?xml version="1.0" encoding="UTF-8"?>
<protocol name="presentation_timing">

  <copyright>
Copyright © 2013-2014 Collabora, Ltd.

Permission to use, copy, modify, distribute, and sell this
software and its documentation for any purpose is hereby granted
without fee, provided that the above copyright notice appear in
all copies and that both that copyright notice and this permission
notice appear in supporting documentation, and that the name of
the copyright holders not be used in advertising or publicity
pertaining to distribution of the software without specific,
written prior permission.  The copyright holders make no
representations about the suitability of this software for any
purpose.  It is provided "as is" without express or implied
warranty.

THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS
SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,
ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF
THIS SOFTWARE.
  </copyright>

  <interface name="presentation" version="1">
    <description summary="timed presentation related wl_surface requests">
  The main features of this interface are accurate presentation
  timing feedback, and queued wl_surface content updates to ensure
  smooth video playback while maintaining audio/video
  synchronization. Some features use the concept of a presentation
  clock, which is defined in presentation.clock_id event.

  Requests 'feedback' and 'queue' can be regarded as additional
  wl_surface methods. They are part of the double-buffered
  surface state update mechanism, where other requests first set
  up the state and then wl_surface.commit atomically applies the
  state into use. In other words, wl_surface.commit submits a
  content update.

  Interface wl_surface has requests to set surface related state
  and buffer related state, because there is no separate interface
  for buffer state alone. Queueing requires separating the surface
  from buffer state, and buffer state can be queued while surface
  state cannot.

  Buffer state includes the wl_buffer from wl_surface.attach, the
  state assigned by wl_surface requests frame,
  set_buffer_transform and 

Weston : ideas about xdg_shell, and implementation for a taskbar

2014-01-30 Thread Manuel Bachmann
Hi folks,

Where I work, we need to have some logic to manage surfaces from a client
point of view (application or toolkit). For example, we need to be able to :
- minimize (hide/show or more) surfaces, just like most desktop
environments allow ;
- manage layers, by arranging surfaces by z-orders e.g. ;
- ...

Having searched a bit, and because the core Wayland protocol does not
provide this, it seems that the xdg_shell protocol would be the way to go.

I just looked in the current Weston codebase, and found there are already
stubbed implementations for xdg_shell_set_minimize() e.g.. I plan to write
a minimal implementation adding the call handling to shell.c, and a new
taskbar.c plugin eventually receiving calls.
-
As a proof-of-concept, I just wrote a patch for weston 1.3.x that
implements all the logic needed for a graphical taskbar, and manages
minimization/raise events for surfaces.

 Here's the patched version, which basically modifies shell, desktop_shell
and toytoolkit :
https://github.com/Tarnyko/weston-taskbar

 Here are some screenshots :
http://www.tarnyko.net/repo/weston131-taskbar1.png
http://www.tarnyko.net/repo/weston131-taskbar2.png

 And for the lazy ;-) , here is a video :
http://www.youtube.com/watch?v=7Svrb3iGBAs
-
Here's how it works :

- When the compositor creates a shell_surface having the TOPLEVEL type,
it sets a numeral ID for it, and sends a map event to the desktop_shell
client ;

- the desktop_shell client receives the event, and then creates a
button on the taskbar associated with this ID. If the surface has a title
(typically set client-side with wl_shell_surface_set_title()), the button
will display it ; otherwise it will just contain Default:ID.

- when the button is clicked, and the window is shown, it asks the
compositor
to hide it... or asks the contrary in the other case ;-) ;

- if it should be hidden, then the compositor sends the shell_surface to a
new
weston_layer named taskbar_layer. This layer is not displayed at all. If
it
shouldn't, then it's moved back to the current workspace layer.

- lots of weston clients use the toytoolkit library (weston-terminal
e.g.).
When their minimize button is pressed, they now call a
taskbar_move_surface()
function which will do the former, and additionally send a hint to the
desktop_shell
that this has been done (so the corresponding taskbar button stays tuned).
---

As lots of code changed in 1.4, and xdg_shell interface is now implemented,
I will try to port it to git master along with xdg_shell additions.

Comments on this subject are very welcome !

PS : I will be at FOSDEM this w-e (
https://fosdem.org/2014/schedule/event/porting_legacy_x11_to_wayland/) if
anyone wants to discuss the subject with me.

-- 
Regards,



*Manuel BACHMANN Tizen Project VANNES-FR*


Re: libinput requirements for feature parity with X

2014-01-30 Thread Ping Cheng
On Tue, Jan 28, 2014 at 6:18 PM, Peter Hutterer peter.hutte...@who-t.net wrote:

 Here's a list of features I consider the minimum to get something akin to
 feature-parity with the current X.Org-based stack. This is not a wishlist
 for features, it's a list of minimum requirements that covers 90% of the
 user base.

 keyboard:
 I don't think there's much to do, keyboards are fairly simple and the hard
 bits are handled in the client with XKB.

 mouse-like pointer devices:
 * middle mouse button emulation (left+right → middle)
 * configuration interface for mouse button mapping, specifically
 left-handed
 * lower-priority: wheel emulation
 * lower-priority: rotation

 direct-touch touchscreens:
 * optional: configuration interface for rotation. can be achieved with the
   calibration matrix already

 touchpads:
 * clickpad-style software buttons
 * middle mouse button emulation (for physical buttons)
 * two/three-finger tapping + configuration interface
 * edge scrolling
 * support for Lenovo T440 style trackstick buttons
 * disable-while-typing
 * clickfinger handling
 * lower-priority: palm detection
 * lower-priority: accidental click detection

 graphics tablets:
 * extended axis event support
 * tool change notification (could be just button events? not sure)


Will tool id, serial number, and tool type be supported here?

Ping


 * interface to switch between relative and absolute mode
 * device rotation
 * touch-vs-pen event synchronization (disable touch while the pen is in
 use,
   etc.)

 generic:
 * type identifier interface, so that a compositor can tell that there's a
   touchpad present, or a mouse, or...
 * configuration interfaces for the various settings
 * device capability discovery interfaces for axis resolutions, number of
   buttons, etc.

 Anything obvious I missed here?

 Cheers,
Peter


Re: libinput requirements for feature parity with X

2014-01-30 Thread Bill Spitzak

Ping Cheng wrote:


graphics tablets:
* extended axis event support
* tool change notification (could be just button events? not sure)


Will tool id, serial number, and tool type be supported here?


Shouldn't each tool be a different pointing device? It at least needs to 
know which tool is being used when moving it, it can't be deferred until 
the first button is being pushed.


Are tablets capable of handling more than one tool at a time? If this is 
at all plausible I think they all have to be different pointers since 
otherwise there is no way to indicate which x/y position is which tool. 
Otherwise I guess a tool changed event would work.



Re: libinput requirements for feature parity with X

2014-01-30 Thread Peter Hutterer
On Thu, Jan 30, 2014 at 10:30:40AM -0800, Ping Cheng wrote:
 On Tue, Jan 28, 2014 at 6:18 PM, Peter Hutterer 
 peter.hutte...@who-t.net wrote:
 
  Here's a list of features I consider the minimum to get something akin to
  feature-parity with the current X.Org-based stack. This is not a wishlist
  for features, it's a list of minimum requirements that covers 90% of the
  user base.
 
  keyboard:
  I don't think there's much to do, keyboards are fairly simple and the hard
  bits are handled in the client with XKB.
 
  mouse-like pointer devices:
  * middle mouse button emulation (left+right → middle)
  * configuration interface for mouse button mapping, specifically
  left-handed
  * lower-priority: wheel emulation
  * lower-priority: rotation
 
  direct-touch touchscreens:
  * optional: configuration interface for rotation. can be achieved with the
calibration matrix already
 
  touchpads:
  * clickpad-style software buttons
  * middle mouse button emulation (for physical buttons)
  * two/three-finger tapping + configuration interface
  * edge scrolling
  * support for Lenovo T440 style trackstick buttons
  * disable-while-typing
  * clickfinger handling
  * lower-priority: palm detection
  * lower-priority: accidental click detection
 
  graphics tablets:
  * extended axis event support
  * tool change notification (could be just button events? not sure)
 
 
 Will tool id, serial number, and tool type be supported here?

eventually, yes, though I'm not quite sure yet how.

Cheers,
   Peter

  * interface to switch between relative and absolute mode
  * device rotation
  * touch-vs-pen event synchronization (disable touch while the pen is in
  use,
etc.)
 
  generic:
  * type identifier interface, so that a compositor can tell that there's a
touchpad present, or a mouse, or...
  * configuration interfaces for the various settings
  * device capability discovery interfaces for axis resolutions, number of
buttons, etc.
 
  Anything obvious I missed here?
 
  Cheers,
 Peter


Re: libinput requirements for feature parity with X

2014-01-30 Thread Peter Hutterer
On Thu, Jan 30, 2014 at 01:42:20PM -0800, Bill Spitzak wrote:
 Ping Cheng wrote:
 
 graphics tablets:
 * extended axis event support
 * tool change notification (could be just button events? not sure)
 
 
 Will tool id, serial number, and tool type be supported here?
 
 Shouldn't each tool be a different pointing device? 

That largely depends on where tools are expected to be unified. Some pens
(Intuos 4, 5, Pro series) have unique IDs. So if you take them from one
tablet to the next, the ID obviously stays the same so in theory you could
attach a color to that tool and have it span multiple tablets.

I think this is something the client stack should provide, if at all. That
leaves libinput with the tablet as the main device, and the tool as a
subfeature on it.

 It at least
 needs to know which tool is being used when moving it, it can't be
 deferred until the first button is being pushed.
 
 Are tablets capable of handling more than one tool at a time? If

In the Wacom range the answer to that is some old serial ones did, but none
of the tablets that came out in recent years. And I think it's unlikely to
happen again.

 this is at all plausible I think they all have to be different
 pointers since otherwise there is no way to indicate which x/y
 position is which tool. Otherwise I guess a tool changed event
 would work.

You can augment events through other means to indicate the tool in use, you
don't need several devices.

Cheers,
   Peter


Re: [PATCH weston] Fullscreen surfaces

2014-01-30 Thread Bill Spitzak
There really should not be a fullscreen layer, which is what is causing 
this problem. Layers are imho a mistake except for the desktop and the 
mouse cursor.


What I think needs to happen:

Fullscreen, normal windows, and panels can be arranged in any stacking 
order, except the compositor enforces this rule:


The panels are always just below the lowest fullscreen window. If 
there are no fullscreen windows then the panel is above all windows.


There are several ways to enforce this but one that matches current 
window apis is:


1. When a window is raised and there are no fullscreen windows, the 
panels are also raised to remain above it. If there are fullscreen 
windows then the panel is not moved. Note that a window can be raised 
above a fullscreen window, thus solving this bug.


2. Whan a window switches to fullscreen it is also raised (thus it will 
end up above the panel). (an alternative is to lower the panel but that 
is not standard behavior in existing windowing systems).


3. When the last fullscreen window switches to non-fullscreen, the panel 
is raised above all windows.


poch...@gmail.com wrote:


We still have the following problem, but it is not a regression. I am
not sure how to fix it without regressing other situations. Have a
fullscreen client and launch a new client, it won't appear in
the front but behind the fullscreen client.

https://bugs.freedesktop.org/show_bug.cgi?id=74219



Re: [PATCH libinput 1/2] Replace output screen size callback with transform helpers

2014-01-30 Thread Peter Hutterer
On Thu, Jan 30, 2014 at 08:38:02AM +0100, Jonas Ådahl wrote:
 On Thu, Jan 30, 2014 at 01:02:15PM +1000, Peter Hutterer wrote:
  On Wed, Jan 29, 2014 at 09:33:11PM +0100, Jonas Ådahl wrote:
   Instead of automatically transforming absolute coordinates of touch and
   pointer events to screen coordinates, the user now uses the corresponding
   transform helper function. This means the coordinates returned by
   libinput_event_pointer_get_absolute_x(),
   libinput_event_pointer_get_absolute_y(), libinput_touch_get_x() and
   libinput_touch_get_y() has changed from being in output screen coordinate
   space to being in device specific coordinate space.
   
   For example, where one before would call 
   libinput_event_touch_get_x(event),
   one now calls libinput_event_touch_get_x_transformed(event, output_width).
   
   Signed-off-by: Jonas Ådahl jad...@gmail.com
   ---
src/evdev.c|  54 ++--
src/evdev.h|  10 +
src/libinput.c |  44 
src/libinput.h | 128 
   +
test/litest.c  |  11 -
5 files changed, 186 insertions(+), 61 deletions(-)
   
   diff --git a/src/evdev.c b/src/evdev.c
   index 46bd35a..cb83a1f 100644
   --- a/src/evdev.c
   +++ b/src/evdev.c
    @@ -86,6 +86,24 @@ transform_absolute(struct evdev_device *device, int32_t *x, int32_t *y)
 }
}

   +li_fixed_t
   +evdev_device_transform_x(struct evdev_device *device,
   +  li_fixed_t x,
   +  uint32_t width)
   +{
    + return (x - device->abs.min_x) * width /
    + (device->abs.max_x - device->abs.min_x);
   +}
   +
   +li_fixed_t
   +evdev_device_transform_y(struct evdev_device *device,
   +  li_fixed_t y,
   +  uint32_t height)
   +{
    + return (y - device->abs.min_y) * height /
    + (device->abs.max_y - device->abs.min_y);
  
  you're mixing coordinate systems here, x and y are in fixed_t but
  abs.min/max is in normal integers. that breaks if you have a non-zero min.
  You'll need to convert the rest to li_fixed_t too if you want to keep the
  integer division.
 
 Yea, missed the wl_fixed_from_int here (and in _x), and they were all 0
 so didn't notice it either. For multiplication, one of the factors cannot be
 li_fixed_t though. Same goes for division where the denominator needs to
 be a normal int even if the numerator is li_fixed_t.

I agree, but "cannot" should read "does not need to be". The complete formula is:

  scaled = (x - xmin) * (screen_max_x - screen_min_x)/(xmax - xmin) + screen_min_x

fixed_t is essentially (foo * 256), so if we assume x is in fixed_t and we
convert everything to fixed, we have

  = (x - xmin * 256) * (screen_max_x * 256 - screen_min_x * 256)/(xmax * 256 - xmin * 256) + screen_min_x * 256
  = (x - xmin * 256) * (screen_max_x - screen_min_x) * 256/((xmax - xmin) * 256) + screen_min_x * 256
  = (x - xmin * 256) * (screen_max_x - screen_min_x)/(xmax - xmin) + screen_min_x * 256

and because we have an offset of 0, and thus screen_max_x == width, we end
up with

  = (x - xmin * 256) * width/(xmax - xmin)

so yes, you only need to convert xmin to li_fixed_t, but that only
applies because we expect a 0 screen offset.

and that concludes today's math tutorial.
(which I mainly did because I wasn't 100% sure on this either ;)

It'd probably be worth noting this somewhere, or at least writing down the
base formula so that if there are ever patches that change this at least the
base formula is clear. We've messed up missing out on (+ screen_min_x) a few
times in the X stack over the years.

Also, there is one problem with the formula. the screen dimensions are
exclusive [0,width[, the device coordinates are inclusive [min, max]. so the
correct scaling should be (xmax - xmin + 1).
   
  also, should we add a non-zero min for width and height to scale to a screen
  not the top/left-most? The compositor can just add it afterwards, but 
  it would have to convert to fixed_t as well:
  
  x = libinput_event_touch_get_x_transformed(event, screen_width);
  x += li_fixed_from_int(screen_offset);
  
  which is more error prone than something like:
  
  x = libinput_event_touch_get_x_transformed(event, screen_offset_x, screen_width);
 
 
 That transform wouldn't be enough. We'd have to rotate etc as well. See
 http://cgit.freedesktop.org/wayland/weston/tree/src/compositor.c#n3408 .
 Given that, I think its easiest to let libinput do the device specific
 transform (device coords - output coords) and then have the user
 translate, and do other transformations.

fair enough. There is an argument to be made for libinput to do these
things, or provide helper functions to avoid callers writing potentially
buggy code. but not this time :)

  also, is it likely that the caller always has the screen dimensions handy
  when it comes to processing events? or would an config-style approach work
  better:
 
 

Re: libinput mouse mode for tablets

2014-01-30 Thread Peter Hutterer
On Thu, Jan 30, 2014 at 02:14:30PM -0800, Bill Spitzak wrote:
 It is not clear from this discussion what support there will be for
 mouse mode for the tablets.
 
 A problem I have had with the current tablet api is that it is
 designed for mapping the tablet to the bounding box surrounding all
 the outputs. mouse mode simply means that the movement is relative
 and does not change this scaling.
 
 What is wanted in mouse mode is a fixed translation of a 1 square
 on the tablet to a square in output space. Other operating systems
 do this when you switch to mouse mode. I have to run a rather
 annoying Python program every time the screen layout is changed to
 calculate the very non-intuitive rectangle I have to send to the
 mouse driver.
 
 Also I only want mouse mode when I have two outputs. If I have one
 the tablet can be used in direct mode. This may also be true if a
 program could grab the tablet and direct it to it's window that
 turning off mouse mode would be useful. I also have a smaller tablet
 that I would like to be in mouse mode all the time.
 
 I think a much more intelligent version can be done, which
 automatically goes into mouse mode. Basically the user chooses how
 big a 1-inch square on the tablet turns into in output space, and a
 limit to how distorted this output square can be (perhaps from 1.5:1
 to 1:1.5). On every change of the tablet or outputs wayland/libinput
 then figures out a mapping that is not too far from this scale and
 within the distortion dimensions for non-mouse-mode, if that is
 impossible it goes to mouse mode.

this has so many more cases where it won't work correctly from the user's
POV that it's likely easier to just have an easily accessible way of
switching between absolute and relative mode.

other than that, there will be support for relative mouse mode on tablet
hardware. Your general use-case is not unique, though I don't think I've
heard of the the case of mapping a tablet area to the exact screen area
before. Mapping it so that a square is a square, yes, but the requirement
for exact size matches is new to me.

Cheers,
   Peter

 I don't know if this would be libinput or the compositor but the
 ability to do this would be a nice addition to Wayland.


Re: libinput mouse mode for tablets

2014-01-30 Thread Bill Spitzak

Peter Hutterer wrote:


this has so many more cases where it won't work correctly from the user's
POV that it's likely easier to just have an easily accessible way of
switching between absolute and relative mode.


I really feel this automatic switch will work perfectly and would be a 
huge improvement over how Windows and OS/X work.


An automatic switch would also be really nice if clients are able to 
restrict the tablet to a rectangle: it could enter absolute mode 
automatically when the rectangle is small enough, and go back to mouse 
mode when the client releases the tablet.


If you are really worried about it, the user control could be 3-way, 
with a default of automatic.



other than that, there will be support for relative mouse mode on tablet
hardware. Your general use-case is not unique, though I don't think I've
heard of the the case of mapping a tablet area to the exact screen area
before. Mapping it so that a square is a square, yes, but the requirement
for exact size matches is new to me.


It is not intended to be exact but I may have confused things by not 
mentioning a size range. The given size is actually a maximum value for 
the smaller side of the resulting rectangle. The limits on w/h ratio 
fully control the algorithm so no minimum is needed (though I am 
assuming that the tablet is not a lot larger than the outputs).



Re: Weston : ideas about xdg_shell, and implementation for a taskbar

2014-01-30 Thread Jasper St. Pierre
On Thu, Jan 30, 2014 at 6:31 PM, Bill Spitzak spit...@gmail.com wrote:


  - When the compositor creates a shell_surface having the TOPLEVEL type,
 it sets a numeral ID for it, and sends a map event to the desktop_shell
 client ;


 You must allow a toplevel to become a non-toplevel and vice-versa,
 otherwise useful api for rearranging windows is impossible. My
 recommendation is that a surface has a parent that can be changed at any
 time to any other surface or to NULL, the only restriction is that a loop
 cannot be created. In any case please do not make a type called TOPLEVEL.


This type already exists in wl_shell_surface_set_toplevel. It says nothing
about transient parents, only that it's a toplevel as opposed to a
subsurface.

Perhaps a misguided name, which is why xdg-shell removes the terminology.
However, weston's shell.c still contains a type called TOPLEVEL since it
started as an implementation for wl_shell_surface.


 - if it should be hidden, then the compositor sends the shell_surface to a
 new
 weston_layer named taskbar_layer. This layer is not displayed at all.


 NO! The compositor must only send a hide request to the client. The
 client MUST be in final control over when and how its surfaces disappear.
 This is to allow it to atomically remove child surfaces or to reparent them
 to other surfaces that are not being hidden.


Hiding windows should not have any influence over the client, as many
desktop environments, including Weston, want to show live previews for
minimized or hidden windows in alt-tab or Expose-alikes.

Also, it matches user expectations. If the user clicks minimize on a
window, they want it hidden, and they mean it, not get bested with
surprise! You tried to hide me but I resist by mapping my subsurfaces
elsewhere!


 When their minimize button is pressed, they now call a
 taskbar_move_surface()
 function which will do the former, and additionally send a hint to the
 desktop_shell
 that this has been done (so the corresponding taskbar button stays tuned).


 I'm not clear on why the former api can't do this and you felt a new api
 had to be added.




-- 
  Jasper


Re: Weston : ideas about xdg_shell, and implementation for a taskbar

2014-01-30 Thread Manuel Bachmann
Hi Bill, and thanks a lot for sharing your thoughts,

 You must allow a toplevel to become a non-toplevel and vice-versa

That's true ; the current implementation doesn't address this case.

 My recommendation is that a surface has a parent that can be changed at
any time to any other surface or to NULL

So for an application having a main surface (let's say, the first that
has been created) and child (transient ?) surfaces, the schema would be :
NULL <- shell_surface <- shell_surface <- shell_surface
                                       |- shell_surface

In that case, an implementation *could* just display a button for the first
shell_surface on the left, and minimize it and all its children at the same
time. Well, I suppose it's really something up to the actual
implementation, so a sample impl. could just do it the easiest way.

 NO! The compositor must only send a hide request to the client. The
client MUST be in final control over when and how its surfaces disappear.

Have just been reading the other (old) thread on this issue, so I get your
objection :-).
I suppose I'll have to write a sample client application able to process
such a request. GTK+ seems to have an impl, so I will check what it does.

 I'm not clear on why the former api can't do this and you felt a new api
had to be added.

Well it's just a demo, I don't really feel like it should be merged in this
state. In fact, I'd like to avoid adding any API.


Taking your comments into account, here's what I think I should do next:

- write a sample client able to send xdg_shell_set_minimized() requests,
and process the responses to it;
- write a minimal implementation for a taskbar/main_shell_surfaces_list
(?) in the shell directory, and allow it to be built on demand;
- make sure these 2 components communicate and react well.

Thank you ; any further recommendation appreciated !

Regards,
Manuel


2014-01-31 Bill Spitzak spit...@gmail.com:


  - When the compositor creates a shell_surface having the TOPLEVEL type,
 it sets a numeric ID for it, and sends a map event to the desktop_shell
 client;


 You must allow a toplevel to become a non-toplevel and vice-versa,
 otherwise useful api for rearranging windows is impossible. My
 recommendation is that a surface has a parent that can be changed at any
 time to any other surface or to NULL, the only restriction is that a loop
 cannot be created. In any case please do not make a type called TOPLEVEL.


  - if it should be hidden, then the compositor sends the shell_surface to
 a new
 weston_layer named taskbar_layer. This layer is not displayed at all.


 NO! The compositor must only send a hide request to the client. The
 client MUST be in final control over when and how its surfaces disappear.
 This is to allow it to atomically remove child surfaces or to reparent them
 to other surfaces that are not being hidden.


  When their minimize button is pressed, they now call a
 taskbar_move_surface()
 function which will do the former, and additionally send a hint to the
 desktop_shell
 that this has been done (so the corresponding taskbar button stays tuned).


 I'm not clear on why the former api can't do this and you felt a new api
 had to be added.




-- 
Regards,



*Manuel BACHMANN Tizen Project VANNES-FR*


Re: Weston : ideas about xdg_shell, and implementation for a taskbar

2014-01-30 Thread Manuel Bachmann
Well, having read Jasper's comment, he has some valid points, the most
important being in my opinion :

  it matches user expectations. If the user clicks minimize on a window,
they want it hidden

If the logic of what should *really* happen when a window is minimized is
implemented client-side, that means the UI experience will differ among
client applications. Some may hide, some may iconify, some may just do
nothing...

One could object that the logic is to be implemented in the toolkit, so all
applications using the same toolkit will end up doing the same. But that
still means that if we have, for example, an application using GTK+,
another one using EFL, another one using toytoolkit (weston-terminal),
they'll behave differently? We'd really like to avoid that.

Maybe a middle ground can be found if :

1) the desktop-shell-component can force a behaviour which WON'T touch any
wl_surface nor wl_shell_surface state (i.e. no switch to any new MINIMIZED
or whatever state, surface still considered fullscreen or maximized if it
was before)

2) the client application can still get the response back, and react the
way it wants

Which would imply, taking the case of Weston :

- if no desktop_shell_taskbar or whatever manager plugin is present as a
module, then the client app just gets its response and reacts the way it wants;

- if it is present, then this plugin communicates with the compositor to
enforce some behaviour, BUT the client app still reacts the way it wants.
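As a very rough sketch, that middle ground could be expressed in protocol terms as a minimize *event* the client answers, never a state the compositor mutates directly. The interface and request/event names below are purely hypothetical, written in Wayland's protocol-XML style; they are not part of any real xdg_shell revision:

```xml
<!-- Hypothetical sketch only; names are illustrative. -->
<interface name="example_minimizable_surface" version="1">
  <!-- Compositor/shell asks the client to minimize; no wl_surface
       or shell_surface state is touched by this event. -->
  <event name="request_minimize"/>

  <!-- The client answers (or initiates) with its own decision;
       the shell may then update its taskbar button accordingly. -->
  <request name="set_minimized"/>
</interface>
```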

What do you think of that?

Regards,
Tarnyko






-- 
Regards,



*Manuel BACHMANN Tizen Project VANNES-FR*

Re: libinput requirements for feature parity with X

2014-01-30 Thread Ping Cheng
On Thu, Jan 30, 2014 at 1:42 PM, Bill Spitzak spit...@gmail.com wrote:

 Ping Cheng wrote:

  graphics tablets:
 * extended axis event support
 * tool change notification (could be just button events? not sure)


 Will tool id, serial number, and tool type be supported here?


 Shouldn't each tool be a different pointing device? It at least needs to
 know which tool is being used when moving it; that can't be deferred until
 the first button is pushed.


Good point.



 Are tablets capable of handling more than one tool at a time? If this is
 at all plausible I think they all have to be different pointers, since
 otherwise there is no way to indicate which x/y position is which tool.
 Otherwise I guess a tool changed event would work.


Even when there is only one tool on the tablet at a time, artists/users may
have different configurations for different tools. The application needs to
remember which tool, such as air brush 1 vs air brush 2, matches with
which settings so it can switch settings automatically when a specific tool
is detected.

Ping


Re: Weston : ideas about xdg_shell, and implementation for a taskbar

2014-01-30 Thread Bill Spitzak

Jasper St. Pierre wrote:

Hiding windows should not have any influence over the client, as many 
desktop environments, including Weston, want to show live previews for 
minimized or hidden windows in alt-tab or Expose-alikes.


Also, it matches user expectations. If the user clicks minimize on a
window, they want it hidden, and they mean it, not get bested with
"surprise! You tried to hide me but I resist by mapping my subsurfaces
elsewhere!"


The problem is that it makes window management very complicated, or 
limiting. This is why no modern applications use overlapping windows, 
instead doing their own tiled window layout inside one big window. They 
cannot get the window system to overlay windows correctly except for 
trivial temporary popups.


A simple problem is a floating window shared by two main windows. Some 
parts of the compositor want to know that the floating window belongs to 
at least one main window (for instance to not show it in a toolbar). But 
the client does not want it to vanish when that window is closed, yet it 
should vanish when both windows are closed.


This will require the client to tell the compositor that both of the 
main windows are parents, thus giving rise to a Directed Acyclic Graph 
of parents. I believe reliably updating this structure from the client 
is enormously more complex and error-prone than a simple tree. Also I 
suspect there are desirable window manipulations that cannot be 
described by a DAG and thus even more complex api must be provided in 
Wayland.


Kristian has proposed that nothing be sent from the client to the
compositor; I think he is worried about the api bloat this may cause.


I favor an intermediate solution that limits the data to a tree, 
since only a set parent api is needed to manage it. When a client 
requests that a window be raised or hidden then this tree causes 
children to do the same. But this is only done after a request from the 
client, so the client can first rearrange or delete the tree in order to 
get the behavior it wants. The main reason for this is that I think it 
is necessary so wayland clients can be displayed remotely on non-wayland 
systems that support such trees, otherwise remote display may be very 
blinky when users raise windows.


Both of these however require that all decisions about window 
relationships be left to the client. There is no way around this, no 
matter how much you want to pretend otherwise.


A client that fails to hide the window after the request and a timeout 
can get force-hidden, if you think this is a problem. But you cannot use 
misbehaving clients as a reason for designing an api, since there are a 
billion other ways a client can misbehave and you are not stopping them 
all with this one api.



Re: Weston : ideas about xdg_shell, and implementation for a taskbar

2014-01-30 Thread Jasper St. Pierre
On Thu, Jan 30, 2014 at 8:48 PM, Bill Spitzak spit...@gmail.com wrote:

 Jasper St. Pierre wrote:

  Hiding windows should not have any influence over the client, as many
 desktop environments, including Weston, want to show live previews for
 minimized or hidden windows in alt-tab or Expose-alikes.

 Also, it matches user expectations. If the user clicks minimize on a
 window, they want it hidden, and they mean it, not get bested with
 surprise! You tried to hide me but I resist by mapping my subsurfaces
 elsewhere!


 The problem is that it makes window management very complicated, or
 limiting. This is why no modern applications use overlapping windows,
 instead doing their own tiled window layout inside one big window. They
 cannot get the window system to overlay windows correctly except for
 trivial temporary popups.

 A simple problem is a floating window shared by two main windows. Some
 parts of the compositor want to know that the floating window belongs to at
 least one main window (for instance to not show it in a toolbar). But the
 client does not want it to vanish when that window is closed, yet it should
 vanish when both windows are closed.


Can you give a concrete example of such a case? Not because I'm assuming
none exist, but because I want a specific example to evaluate and think
about.


 This will require the client to tell the compositor that both of the main
 windows are parents, thus giving rise to a Directed Acyclic Graph of
 parents. I believe reliably updating this structure from the client is
 enormously more complex and error-prone than a simple tree. Also I suspect
 there are desirable window manipulations that cannot be described by a DAG
 and thus even more complex api must be provided in Wayland.

 Kristian has proposed that nothing be sent from the client to the
 compositor; I think he is worried about the api bloat this may cause.

 I favor an intermediate solution that limits the data to a tree, since
 only a set parent api is needed to manage it. When a client requests that
 a window be raised or hidden then this tree causes children to do the same.
 But this is only done after a request from the client, so the client can
 first rearrange or delete the tree in order to get the behavior it wants.
 The main reason for this is that I think it is necessary so wayland clients
 can be displayed remotely on non-wayland systems that support such trees,
 otherwise remote display may be very blinky when users raise windows.

 Both of these however require that all decisions about window
 relationships be left to the client. There is no way around this, no matter
 how much you want to pretend otherwise.

 A client that fails to hide the window after the request and a timeout can
 get force-hidden, if you think this is a problem. But you cannot use
 misbehaving clients as a reason for designing an api, since there are a
 billion other ways a client can misbehave and you are not stopping them all
 with this one api.


Like what?

-- 
  Jasper


Re: Weston : ideas about xdg_shell, and implementation for a taskbar

2014-01-30 Thread Bill Spitzak

Manuel Bachmann wrote:

Hi Bill, and thanks a lot for sharing your thoughts,



GTK+ seems to have an impl, so I will check what it does.


I think it may be trying to use X window groups, which is an example of
an excessively complex api for solving this. It has not worked, and
Gimp is now giving up on having floating windows, just like everybody else.


I also tried X window groups to fix this for our software and was unable
to get it to work at all, though that was partly due to broken
implementations, not just design. It was ridiculously complex, too, and
still could not do the arbitrary window ordering I wanted.



Re: Weston : ideas about xdg_shell, and implementation for a taskbar

2014-01-30 Thread Bill Spitzak

Jasper St. Pierre wrote:


A simple problem is a floating window shared by two main windows.

Can you give a concrete example of such a case? Not because I'm assuming 
none exist, but because I want a specific example to evaluate and think 
about.


A toolbox over a painting program that has two documents open.


since there are a billion other ways a client can misbehave and you
are not stopping them all with this one api.

Like what?


A client can ignore attempts to close it with the close box.


Re: Weston : ideas about xdg_shell, and implementation for a taskbar

2014-01-30 Thread Jasper St. Pierre
On Thu, Jan 30, 2014 at 9:03 PM, Bill Spitzak spit...@gmail.com wrote:

 Jasper St. Pierre wrote:

  A simple problem is a floating window shared by two main windows.

 Can you give a concrete example of such a case? Not because I'm assuming
 none exist, but because I want a specific example to evaluate and think
 about.


 A toolbox over a painting program that has two documents open.


So, what is the expected behavior here exactly? There's a minimize button
on both of the content window's decorations, and clicking on one should
minimize all three windows?

What should using the minimize keyboard shortcut functionality of your
compositor do? Should it differ from using the button in the UI? What does
it do right now on X11 or other platforms?


 since there are a billion other ways a client can misbehave and you
 are not stopping them all with this one api.

 Like what?


 A client can ignore attempts to close it with the close box.


Are you talking about simply having a minimize button in the server-side
decoration that does nothing? Or are you talking about the compositor
forcibly minimizing a window with e.g. a keyboard shortcut?

The former is an application bug (the button does nothing because it was
wired to do nothing), and while it's certainly frustrating from a user
perspective, the compositor can still forcibly minimize or close the window.

-- 
  Jasper


Re: Weston : ideas about xdg_shell, and implementation for a taskbar

2014-01-30 Thread Bill Spitzak



Jasper St. Pierre wrote:


A toolbox over a painting program that has two documents open.

So, what is the expected behavior here exactly? There's a minimize 
button on both of the content window's decorations, and clicking on one 
should minimize all three windows?


Clicking minimize of one of the documents makes only the document 
disappear. But then clicking on the minimize of the other document makes 
both the document and toolbox disappear.


What should using the minimize keyboard shortcut functionality of your 
compositor do? Should it differ from using the button in the UI? What 
does it do right now on X11 or other platforms?


It should do EXACTLY the same thing as a minimize button. Any different 
behavior is a bug.



A client can ignore attempts to close it with the close box.

Are you talking about simply having a minimize button in the server-side 
decoration that does nothing? Or are you talking about the compositor 
forcibly minimizing a window with e.g. a keyboard shortcut?


The former is an application bug (the button does nothing because it was 
wired to do nothing), and while it's certainly frustrating from a user 
perspective, the compositor can still forcibly minimize or close the window.


I would expect that a compositor shortcut key to close a window would 
first try sending a message to the app saying it wants to close, and the 
app can decide to not close (ideally by asking the user if they are 
sure, and the user says no). If it just killed the app or destroyed the 
window, that could be very user-unfriendly, and I am rather surprised 
anybody would suggest that.


If an app is non-cooperative the compositor can do some stuff. For close 
it can kill the client if it is not responding to pings. Minimize 
probably should also force-hide the surface after a timeout even if the 
client is responding to pings. However this fallback stuff should not be 
part of the Wayland api and can be left up to the compositor writers to 
decide.


___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel