Re: [PATCH weston] launcher: don't exit when user is not root

2017-10-31 Thread Michal Suchanek
On 31 October 2017 at 08:49, Pekka Paalanen <ppaala...@gmail.com> wrote:
> On Mon, 30 Oct 2017 18:56:02 +0100
> Michal Suchanek <hramr...@gmail.com> wrote:
>
>> On 30 October 2017 at 16:02, Pekka Paalanen <ppaala...@gmail.com> wrote:
>> > On Mon, 30 Oct 2017 15:20:42 +0100
>> > Emre Ucan <eu...@de.adit-jv.com> wrote:
>> >
>> >> weston does not need to be root.
>> >> It requires adjusting ownership on the given tty device.
>> >>
>> >> If weston does not have proper rights, it will get
>> >> an error at startup anyway.
>> >>
>> >> Signed-off-by: Emre Ucan <eu...@de.adit-jv.com>
>> >> ---
>> >>  libweston/launcher-direct.c | 3 ---
>> >>  1 file changed, 3 deletions(-)
>> >>
>> >> diff --git a/libweston/launcher-direct.c b/libweston/launcher-direct.c
>> >> index a5d3ee5..b05d214 100644
>> >> --- a/libweston/launcher-direct.c
>> >> +++ b/libweston/launcher-direct.c
>> >> @@ -276,9 +276,6 @@ launcher_direct_connect(struct weston_launcher **out, 
>> >> struct weston_compositor *
>> >>  {
>> >>   struct launcher_direct *launcher;
>> >>
>> >> - if (geteuid() != 0)
>> >> - return -EINVAL;
>> >> -
>> >>   launcher = zalloc(sizeof(*launcher));
>> >>   if (launcher == NULL)
>> >>   return -ENOMEM;
>> >
>> > NAK, for the reasons explained in
>> > https://lists.freedesktop.org/archives/wayland-devel/2017-October/035582.html
>> >
>> > To summarize, it's not only tty permissions but DRM and input devices
>> > as well.
>>
>> DRM and input devices are supposed to be accessible to the console
>> user on desktop systems.
>
> Hi Michal,
>
> thanks for your concern, but I believe the world has moved on. We have
> a much better model with an agent like logind now.

Why is the model better?

In the end the agent relies on permissions as well.

On systems with multiple users it makes sense to automate the task of
setting up the user permissions with an agent.

However, on an embedded system setting the permissions statically in
an installation image may make more sense. Then you have one less
thing to audit for security - namely the agent which you do not use.
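As a purely hypothetical illustration of that static approach (the "display" group name and the device globs are assumptions, not taken from any real image), the permissions could be baked in with udev rules like:

```
# Hypothetical udev rules shipped in the installation image: grant a
# dedicated "display" group access to the devices the compositor
# needs, with no agent involved at runtime.
SUBSYSTEM=="drm", KERNEL=="card[0-9]*", GROUP="display", MODE="0660"
SUBSYSTEM=="input", KERNEL=="event[0-9]*", GROUP="display", MODE="0660"
KERNEL=="tty[2-9]", GROUP="display", MODE="0620"
```

The compositor user is then added to the "display" group at image build time, so nothing needs to change ownership when the session starts.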

>
> That old approach had the inherent security issues which I assume have
> discouraged its use and encouraged looking for better alternatives.
>
>> Ever heard of rootless X?
>
> Yes. I believe it uses logind now.

The documentation says otherwise.

>
>> Any user on the console should be able to randomly decide to run a GUI
>> server without any special privileges.
>
> Presuming yes, then that is what logind or another agent like
> weston-launch allows. They also make it harder for you to shoot
> yourself in the foot by e.g. running two display servers on the same
> devices simultaneously.

Tracking service units serves that purpose as well - the service
manager runs the server only once.

>
>> This can be set up by logind or it can be hardcoded by the
>> administrator to a particular user. Whatever the case just running the
>> GUI server should work without issues when permissions are set up
>> correctly.
>
> It can be done by setting up user permissions. That does not mean it is
> the best available solution.

It can also be done by logind or weston-launch. That does not mean
either is the best solution.

>
>> > If you set all these so that weston can actually run without
>> > root using the direct launcher, then quite likely you have opened some
>> > security holes.
>> >
>> > The direct launcher is specifically meant for running weston as root.
>> > Running as root is only for debugging and development, never for
>> > production.
>>
>> If you can run it as root you can run it as any user with sufficient
>> permissions.
>>
>> The security implications of different setups should be the concern of
>> the system administrator and not launcher-direct.
>
> I will still refuse to take in code that promotes bad practices where I
> see it. Enforcement in code is always more powerful than documentation
> saying one should not do this.

And what exactly is the bad practice here?

Accessing devices that the system administrator has granted you
permission to access, but which are not set up as accessible to you
by PolicyKit?

If you should not have access to some devices, then the system
administrator should revoke your permissions. weston is a display
server, not security auditing software, so it has no business
auditing your security setup.

Thanks

Michal
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [PATCH weston] launcher: don't exit when user is not root

2017-10-30 Thread Michal Suchanek
On 30 October 2017 at 16:02, Pekka Paalanen  wrote:
> On Mon, 30 Oct 2017 15:20:42 +0100
> Emre Ucan  wrote:
>
>> weston does not need to be root.
>> It requires adjusting ownership on the given tty device.
>>
>> If weston does not have proper rights, it will get
>> an error at startup anyway.
>>
>> Signed-off-by: Emre Ucan 
>> ---
>>  libweston/launcher-direct.c | 3 ---
>>  1 file changed, 3 deletions(-)
>>
>> diff --git a/libweston/launcher-direct.c b/libweston/launcher-direct.c
>> index a5d3ee5..b05d214 100644
>> --- a/libweston/launcher-direct.c
>> +++ b/libweston/launcher-direct.c
>> @@ -276,9 +276,6 @@ launcher_direct_connect(struct weston_launcher **out, 
>> struct weston_compositor *
>>  {
>>   struct launcher_direct *launcher;
>>
>> - if (geteuid() != 0)
>> - return -EINVAL;
>> -
>>   launcher = zalloc(sizeof(*launcher));
>>   if (launcher == NULL)
>>   return -ENOMEM;
>
> NAK, for the reasons explained in
> https://lists.freedesktop.org/archives/wayland-devel/2017-October/035582.html
>
> To summarize, it's not only tty permissions but DRM and input devices
> as well.

DRM and input devices are supposed to be accessible to the console user on desktop systems.

Ever heard of rootless X?

Any user on the console should be able to randomly decide to run a GUI
server without any special privileges.

This can be set up by logind, or it can be hardcoded by the
administrator to a particular user. Whatever the case, just running the
GUI server should work without issues when permissions are set up
correctly.

> If you set all these so that weston can actually run without
> root using the direct launcher, then quite likely you have opened some
> security holes.
>
> The direct launcher is specifically meant for running weston as root.
> Running as root is only for debugging and development, never for
> production.

If you can run it as root you can run it as any user with sufficient
permissions.

The security implications of different setups should be the concern of
the system administrator and not launcher-direct.

Thanks

Michal


Re: [RFC wayland-protocols v4] Add Primary Selection Protocol Version 1

2016-03-01 Thread Michal Suchanek
On 24 February 2016 at 10:54, Jonas Ådahl <jad...@gmail.com> wrote:
> On Wed, Feb 24, 2016 at 10:25:19AM +0100, Michal Suchanek wrote:
>> On 24 February 2016 at 05:33, Jonas Ådahl <jad...@gmail.com> wrote:
>> > On Sat, Feb 20, 2016 at 01:31:59AM +0100, Carlos Garnacho wrote:
>> >> From: Lyude <cp...@redhat.com>
>> >>
>> >> This primary selection is similar in spirit to the eponymous
>> >> selection in X11, allowing a quick "select text + middle click"
>> >> shortcut for copying and pasting.
>> >>
>> >> It's otherwise very similar to its Wayland counterpart, and
>> >> explicitly made consistent with it.
>> >>
>> >> Signed-off-by: Lyude <cp...@redhat.com>
>> >> Signed-off-by: Carlos Garnacho <carl...@gnome.org>
>> >> ---
>> >> After having talked with Lyude, I'll be trying to move this ahead.
>> >>
>> >> Changes since v3:
>> >> - Added a rather verbose protocol description, including a
>> >>   high-level overview of the workings.
>> >> - Made event emission 1:1 with wayland core protocol selections,
>> >>   wp_primary_selection_offer.offer events are now expected to be
>> >>   emitted between wp_primary_data_device.data_offer and
>> >>   wp_primary_data_device.selection
>> >> - Improved wording here and there.
>> >> - Added serial argument to wp_primary_data_offer.receive that can be
>> >>   used to match against recent events.
>> >>
>> >>
>> >>  Makefile.am|   1 +
>> >>  unstable/primary-selection/README  |   4 +
>> >>  .../primary-selection-unstable-v1.xml  | 229 
>> >> +
>> >>  3 files changed, 234 insertions(+)
>> >>  create mode 100644 unstable/primary-selection/README
>> >>  create mode 100644 
>> >> unstable/primary-selection/primary-selection-unstable-v1.xml
>> >>
>> >> diff --git a/Makefile.am b/Makefile.am
>> >> index 57d0023..eefa20f 100644
>> >> --- a/Makefile.am
>> >> +++ b/Makefile.am
>> >> @@ -7,6 +7,7 @@ unstable_protocols =  
>> >> \
>> >>   unstable/xdg-shell/xdg-shell-unstable-v5.xml
>> >> \
>> >>   unstable/relative-pointer/relative-pointer-unstable-v1.xml  
>> >> \
>> >>   unstable/pointer-constraints/pointer-constraints-unstable-v1.xml
>> >> \
>> >> + unstable/primary-selection/primary-selection-unstable-v1.xml
>> >> \
>> >>   $(NULL)
>> >>
>> >>  stable_protocols =   
>> >> \
>> >> diff --git a/unstable/primary-selection/README 
>> >> b/unstable/primary-selection/README
>> >> new file mode 100644
>> >> index 000..6d8c0c6
>> >> --- /dev/null
>> >> +++ b/unstable/primary-selection/README
>> >> @@ -0,0 +1,4 @@
>> >> +Primary selection protocol
>> >> +
>> >> +Maintainers:
>> >> +Lyude 
>> >> diff --git a/unstable/primary-selection/primary-selection-unstable-v1.xml 
>> >> b/unstable/primary-selection/primary-selection-unstable-v1.xml
>> >> new file mode 100644
>> >> index 000..a3618d5
>> >> --- /dev/null
>> >> +++ b/unstable/primary-selection/primary-selection-unstable-v1.xml
>> >> @@ -0,0 +1,229 @@
>> >> +
>> >> +
>> >> +  
>> >> +Copyright © 2015 Red Hat
>> >> +
>> >> +Permission is hereby granted, free of charge, to any person 
>> >> obtaining a
>> >> +copy of this software and associated documentation files (the 
>> >> "Software"),
>> >> +to deal in the Software without restriction, including without 
>> >> limitation
>> >> +the rights to use, copy, modify, merge, publish, distribute, 
>> >> sublicense,
>> >> +and/or sell copies of the Software, and to permit persons to whom the
>> >> +Software is furnished to do so, subject to the following conditions:
>> >> +
>> >> +The above copyright notice and this permission notice (including the 
>> >> next

Re: [RFC wayland-protocols v4] Add Primary Selection Protocol Version 1

2016-02-24 Thread Michal Suchanek
On 24 February 2016 at 05:33, Jonas Ådahl  wrote:
> On Sat, Feb 20, 2016 at 01:31:59AM +0100, Carlos Garnacho wrote:
>> From: Lyude 
>>
>> This primary selection is similar in spirit to the eponymous
>> selection in X11, allowing a quick "select text + middle click"
>> shortcut for copying and pasting.
>>
>> It's otherwise very similar to its Wayland counterpart, and
>> explicitly made consistent with it.
>>
>> Signed-off-by: Lyude 
>> Signed-off-by: Carlos Garnacho 
>> ---
>> After having talked with Lyude, I'll be trying to move this ahead.
>>
>> Changes since v3:
>> - Added a rather verbose protocol description, including a
>>   high-level overview of the workings.
>> - Made event emission 1:1 with wayland core protocol selections,
>>   wp_primary_selection_offer.offer events are now expected to be
>>   emitted between wp_primary_data_device.data_offer and
>>   wp_primary_data_device.selection
>> - Improved wording here and there.
>> - Added serial argument to wp_primary_data_offer.receive that can be
>>   used to match against recent events.
>>
>>
>>  Makefile.am|   1 +
>>  unstable/primary-selection/README  |   4 +
>>  .../primary-selection-unstable-v1.xml  | 229 
>> +
>>  3 files changed, 234 insertions(+)
>>  create mode 100644 unstable/primary-selection/README
>>  create mode 100644 
>> unstable/primary-selection/primary-selection-unstable-v1.xml
>>
>> diff --git a/Makefile.am b/Makefile.am
>> index 57d0023..eefa20f 100644
>> --- a/Makefile.am
>> +++ b/Makefile.am
>> @@ -7,6 +7,7 @@ unstable_protocols = 
>>  \
>>   unstable/xdg-shell/xdg-shell-unstable-v5.xml   
>>  \
>>   unstable/relative-pointer/relative-pointer-unstable-v1.xml 
>>  \
>>   unstable/pointer-constraints/pointer-constraints-unstable-v1.xml   
>>  \
>> + unstable/primary-selection/primary-selection-unstable-v1.xml   
>>  \
>>   $(NULL)
>>
>>  stable_protocols =  
>>  \
>> diff --git a/unstable/primary-selection/README 
>> b/unstable/primary-selection/README
>> new file mode 100644
>> index 000..6d8c0c6
>> --- /dev/null
>> +++ b/unstable/primary-selection/README
>> @@ -0,0 +1,4 @@
>> +Primary selection protocol
>> +
>> +Maintainers:
>> +Lyude 
>> diff --git a/unstable/primary-selection/primary-selection-unstable-v1.xml 
>> b/unstable/primary-selection/primary-selection-unstable-v1.xml
>> new file mode 100644
>> index 000..a3618d5
>> --- /dev/null
>> +++ b/unstable/primary-selection/primary-selection-unstable-v1.xml
>> @@ -0,0 +1,229 @@
>> +
>> +
>> +  
>> +Copyright © 2015 Red Hat
>> +
>> +Permission is hereby granted, free of charge, to any person obtaining a
>> +copy of this software and associated documentation files (the 
>> "Software"),
>> +to deal in the Software without restriction, including without 
>> limitation
>> +the rights to use, copy, modify, merge, publish, distribute, sublicense,
>> +and/or sell copies of the Software, and to permit persons to whom the
>> +Software is furnished to do so, subject to the following conditions:
>> +
>> +The above copyright notice and this permission notice (including the 
>> next
>> +paragraph) shall be included in all copies or substantial portions of 
>> the
>> +Software.
>> +
>> +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS 
>> OR
>> +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>> +THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR 
>> OTHER
>> +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
>> +FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
>> +DEALINGS IN THE SOFTWARE.
>> +  
>> +
>> +  
>> +This protocol provides the ability to have a primary selection device to
>> +match that of the X server. This primary selection is a shortcut to the
>> +common clipboard selection, where text just needs to be selected in 
>> order
>> +to allow copying it elsewhere. The de facto way to perform this action
>> +is the middle mouse button, although it is not limited to this one.
>> +
>> +Clients wishing to honor primary selection should create a primary
>> +selection source and set it as the selection through
>> +wp_primary_selection_device.set_selection whenever the text selection
>> +changes. In order to minimize calls in pointer-driven text selection,
>> +it should happen only once after the operation finished. Similarly,
>> +a NULL source should be set when text is unselected.
>> +
>> +wp_primary_selection_offer objects are first announced through the
>> + 

Re: [RFC wayland-protocols v4] Add Primary Selection Protocol Version 1

2016-02-23 Thread Michal Suchanek
On 23 February 2016 at 20:03, Bill Spitzak <spit...@gmail.com> wrote:
>
>
> On Tue, Feb 23, 2016 at 1:32 AM, Michal Suchanek <hramr...@gmail.com> wrote:
>>
>> On 22 February 2016 at 19:23, Carlos Garnacho <carl...@gnome.org> wrote:
>>
>> > Right, that's why I suggest having those reunited in a single logical
>> > focus. Anything else is plagued of corner cases.
>>
>> That's totally not going to work. When you have multiple touch panels
>> you can touch multiple places. Are you proposing that whichever panel
>> you happen to touch first locks the other panel from working, or that
>> whichever panel you touch last steals the touch from the earlier
>> panel?
>>
>> I do not think either is expected behaviour.
>
>
> What? Absolutely this is the expected behavior. All the touch events go to
> the same client as the first touch event. For a more obvious example,
> keystrokes and modifier states need to be sent to the client that you are
> pressing a mouse button on, even if the "keyboard focus" is some other
> client. There is only one focus for every single thing in the seat, the
> thing you are calling the "keyboard focus" is just a helper for what that
> focus is when no mouse buttons are held down.
>
> If you want them to go to different clients, put the touch panels on
> different seats.
>
> I fully agree that having "number of focus" != "number of seats" is going to
> be plagued with corner cases.
>
>> > Citation needed :). Windows can be certainly arranged so that it's not
>> > possible to move the pointer between app A and B without going through
>> > a third application. The problem with doing this on pointer focus is
>>
>> That can happen only with relative axis. With absolute axis you can
>> point anywhere anytime without going through anywhere else. Let's say
>> that for the sake of rodent users it is better to consider entry and
>> motion events insignificant.
>
>
> Who cares that it can't happen for absolute axis. It does happen for
> relative and those exist, even if you personally don't own a mouse.
>
>> > What is unreasonable about serial checking?
>>
>> How is the serial related to the paste? How is the application
>> supposed to pick serial so it can receive the paste? You can pick the
>> event which triggers the paste in the application logic. Will that
>> mean that when compositor fails to check events from a device (or the
>> application uses a device exclusively and possibly directly drives the
>> hardware) binding to some buttons will work and binding to other
>> buttons will fail?
>
>
> It's really easy: the client sends the event it thinks triggered the paste.
> The compositor checks to make sure it is an event that really existed and
> that it counts as some active user interaction (ie it is a mouse or keyboard
> click). If the client sends a fake event or a focus-in event or anything
> else the compositor does not like, it will not get access to the clipboard
> data.
>
> The entire point of this is so that it would be possible to put sensitive
> data into the selection, because client cannot look at it without the user
> doing something obvious, such as clicking. Moving the mouse around should
> not cause clients to be able to look at the selection.
>
>> > Let's take the most extreme case, primary selection can be broadcasted
>> > and clients can be free to read data right away. You've just allowed
>> > compositors to replicate all the flaws of X11 primary selection.
>>
>> And you have allowed all the legacy X11 clients to perform flawlessly.
>
>
> Except that the user has to be careful to not select passwords or banking
> numbers or anything else sensitive.
>
>> So it's fine to suggest reasonable default policy for compositor
>> implementors but it's also fine to allow for different policies.
>>
>> I would not mandate broadcasting the selection changes
>> indiscriminately. However, if people are concerned about applications
>> that listen for the broadcasts in X11 land it should be possible to
>> set up special policy for them so they can receive the broadcasts in
>> Wayland as well. Similarly when an application is supposed to run
>> sandboxed and there is enough concern about information leak through
>> clipboard it should be possible to set up a policy for it to never
>> receive selection offers.
>
>
> It sounds like you are basically saying "paste does not work unless the
> client is specially privileged".

You are saying that also. You say that the client must have keyboard

Re: [RFC wayland-protocols v4] Add Primary Selection Protocol Version 1

2016-02-23 Thread Michal Suchanek
On 22 February 2016 at 19:23, Carlos Garnacho <carl...@gnome.org> wrote:
> Hi Michal,
>
> On Mon, Feb 22, 2016 at 4:53 PM, Michal Suchanek <hramr...@gmail.com> wrote:
>> On 22 February 2016 at 15:57, Carlos Garnacho <carl...@gnome.org> wrote:
>>> Hi Michal,
>>>
>>> On Mon, Feb 22, 2016 at 2:25 PM, Michal Suchanek <hramr...@gmail.com> wrote:
>>>> Hello,
>>>>
>>>> On 20 February 2016 at 01:31, Carlos Garnacho <carl...@gnome.org> wrote:
>>>>
>>>>> +
>>>>> +  
>>>>> +This protocol provides the ability to have a primary selection 
>>>>> device to
>>>>> +match that of the X server. This primary selection is a shortcut to 
>>>>> the
>>>>> +common clipboard selection, where text just needs to be selected in 
>>>>> order
>>>>> +to allow copying it elsewhere. The de facto way to perform this 
>>>>> action
>>>>> +is the middle mouse button, although it is not limited to this one.
>>>>> +
>>>>> +Clients wishing to honor primary selection should create a primary
>>>>> +selection source and set it as the selection through
>>>>> +wp_primary_selection_device.set_selection whenever the text selection
>>>>> +changes. In order to minimize calls in pointer-driven text selection,
>>>>> +it should happen only once after the operation finished. Similarly,
>>>>> +a NULL source should be set when text is unselected.
>>>>> +
>>>>> +wp_primary_selection_offer objects are first announced through the
>>>>> +wp_primary_selection_device.data_offer event. Immediately after this 
>>>>> event,
>>>>> +the primary data offer will emit wp_primary_selection_offer.offer 
>>>>> events
>>>>> +to let know of the mime types being offered.
>>>>> +
>>>>> +When the primary selection changes, the client with the keyboard 
>>>>> focus
>>>>> +will receive wp_primary_selection_device.selection events. Only the 
>>>>> client
>>>>
>>>> Why keyboard focus?
>>>>
>>>> Since paste is done mainly using mouse this has nothing to do with
>>>> keyboard focus.
>>>
>>> Doing this allows us to behave just the same as we do with the
>>> core protocol selection; slightly divergent protocols make sharing
>>> code harder.
>>>
>>> Conceptually, it also makes some sense to me. I argue that a logical
>>> "key" focus is needed in compositors, even on lack of wl_keyboard
>>> capabilities. Things that IMO make sense to tie together in this
>>> focus, per-seat are:
>>> - wl_keyboard focus
>>> - wp_text_input focus
>>> - focus for (possibly several) pads/buttonsets
>>> - clipboard selection
>>> - primary selection
>>>
>>> Of course these are only guidelines, and compositors may attempt to
>>> implement split foci for these. But still, selection should be tied to
>>> some definite focus, the other option is broadcasting, and I'd very
>>> much prefer not to do that.
>>>
>>> I may try to change the wording just to suggest it's loosely attached
>>> to keyboard focus though.
>>
>> If you put an Insert sticker on your pad button and bind pasting to
>> that pad button and the pad focus is not tied to keyboard focus you
>> have potentially a problem there.
>
> Right, that's why I suggest having those reunited in a single logical
> focus. Anything else is plagued of corner cases.

That's totally not going to work. When you have multiple touch panels
you can touch multiple places. Are you proposing that whichever panel
you happen to touch first locks the other panel from working, or that
whichever panel you touch last steals the touch from the earlier
panel?

I do not think either is expected behaviour.

>
>>
>>>
>>>>
>>>>> +with the keyboard focus will receive such events with a non-NULL
>>>>> +wp_primary_selection_offer. Across keyboard focus changes, previously
>>>>> +focused clients will receive wp_primary_selection_device.events with 
>>>>> a
>>>>> +NULL wp_primary_selection_offer.
>>>>> +
>>>>> +In order to request the primary selection data, the client must pass
>>>>> +a recent

Re: [RFC wayland-protocols v4] Add Primary Selection Protocol Version 1

2016-02-22 Thread Michal Suchanek
On 22 February 2016 at 15:57, Carlos Garnacho <carl...@gnome.org> wrote:
> Hi Michal,
>
> On Mon, Feb 22, 2016 at 2:25 PM, Michal Suchanek <hramr...@gmail.com> wrote:
>> Hello,
>>
>> On 20 February 2016 at 01:31, Carlos Garnacho <carl...@gnome.org> wrote:
>>
>>> +
>>> +  
>>> +This protocol provides the ability to have a primary selection device 
>>> to
>>> +match that of the X server. This primary selection is a shortcut to the
>>> +common clipboard selection, where text just needs to be selected in 
>>> order
>>> +to allow copying it elsewhere. The de facto way to perform this action
>>> +is the middle mouse button, although it is not limited to this one.
>>> +
>>> +Clients wishing to honor primary selection should create a primary
>>> +selection source and set it as the selection through
>>> +wp_primary_selection_device.set_selection whenever the text selection
>>> +changes. In order to minimize calls in pointer-driven text selection,
>>> +it should happen only once after the operation finished. Similarly,
>>> +a NULL source should be set when text is unselected.
>>> +
>>> +wp_primary_selection_offer objects are first announced through the
>>> +wp_primary_selection_device.data_offer event. Immediately after this 
>>> event,
>>> +the primary data offer will emit wp_primary_selection_offer.offer 
>>> events
>>> +to let know of the mime types being offered.
>>> +
>>> +When the primary selection changes, the client with the keyboard focus
>>> +will receive wp_primary_selection_device.selection events. Only the 
>>> client
>>
>> Why keyboard focus?
>>
>> Since paste is done mainly using mouse this has nothing to do with
>> keyboard focus.
>
> Doing this allows us to behave just the same as we do with the
> core protocol selection; slightly divergent protocols make sharing
> code harder.
>
> Conceptually, it also makes some sense to me. I argue that a logical
> "key" focus is needed in compositors, even on lack of wl_keyboard
> capabilities. Things that IMO make sense to tie together in this
> focus, per-seat are:
> - wl_keyboard focus
> - wp_text_input focus
> - focus for (possibly several) pads/buttonsets
> - clipboard selection
> - primary selection
>
> Of course these are only guidelines, and compositors may attempt to
> implement split foci for these. But still, selection should be tied to
> some definite focus, the other option is broadcasting, and I'd very
> much prefer not to do that.
>
> I may try to change the wording just to suggest it's loosely attached
> to keyboard focus though.

If you put an Insert sticker on your pad button and bind pasting to
that pad button, and the pad focus is not tied to keyboard focus, you
potentially have a problem there.

>
>>
>>> +with the keyboard focus will receive such events with a non-NULL
>>> +wp_primary_selection_offer. Across keyboard focus changes, previously
>>> +focused clients will receive wp_primary_selection_device.events with a
>>> +NULL wp_primary_selection_offer.
>>> +
>>> +In order to request the primary selection data, the client must pass
>>> +a recent serial pertaining to the press event that is triggering the
>>> +operation, if the compositor deems the serial valid and recent, the
>>
>> Why press event when it has an offer event to base the request on?
>>
>> There is no need to involve other unrelated events.
>
> IIRC The first protocol drafts attempted to limit the circumstances in
> which a client could read the primary selection. This is a change of
> approach.
>
>>
>> IMHO the fact that the application receives ANY input event suffices.
>> eg. a pointer entry event.
>
> Do you mean wl_pointer.enter should be enough to have the application
> read the primary selection? seems open to data leaks to me.
>
> This serial event is meant to check for user interaction rather than
> "any input event", so just focusing a client is not enough to have it
> retrieve the primary selection.

And why is clicking enough and focusing not?

Accidentally clicking an application can happen as easily as
accidentally pointing at it; with a touch interface, or with
click-to-focus, they are pretty much the same thing. If you want to
prevent data leaks, you can unmap windows that should not receive the
paste, or use a compositor with a per-application access policy for
clipboards.

So instead of saying that a butto

Re: [RFC wayland-protocols v4] Add Primary Selection Protocol Version 1

2016-02-22 Thread Michal Suchanek
Hello,

On 20 February 2016 at 01:31, Carlos Garnacho  wrote:

> +
> +  
> +This protocol provides the ability to have a primary selection device to
> +match that of the X server. This primary selection is a shortcut to the
> +common clipboard selection, where text just needs to be selected in order
> +to allow copying it elsewhere. The de facto way to perform this action
> +is the middle mouse button, although it is not limited to this one.
> +
> +Clients wishing to honor primary selection should create a primary
> +selection source and set it as the selection through
> +wp_primary_selection_device.set_selection whenever the text selection
> +changes. In order to minimize calls in pointer-driven text selection,
> +it should happen only once after the operation finished. Similarly,
> +a NULL source should be set when text is unselected.
> +
> +wp_primary_selection_offer objects are first announced through the
> +wp_primary_selection_device.data_offer event. Immediately after this 
> event,
> +the primary data offer will emit wp_primary_selection_offer.offer events
> +to let know of the mime types being offered.
> +
> +When the primary selection changes, the client with the keyboard focus
> +will receive wp_primary_selection_device.selection events. Only the 
> client

Why keyboard focus?

Since paste is done mainly using mouse this has nothing to do with
keyboard focus.

> +with the keyboard focus will receive such events with a non-NULL
> +wp_primary_selection_offer. Across keyboard focus changes, previously
> +focused clients will receive wp_primary_selection_device.events with a
> +NULL wp_primary_selection_offer.
> +
> +In order to request the primary selection data, the client must pass
> +a recent serial pertaining to the press event that is triggering the
> +operation, if the compositor deems the serial valid and recent, the

Why press event when it has an offer event to base the request on?

There is no need to involve other unrelated events.

IMHO the fact that the application receives ANY input event suffices.
eg. a pointer entry event.

Otherwise you are going to have a very fragile protocol that often
fails because the application did not happen to receive whatever event
is requested by the protocol.

It's even worse with the keyboard focus. If the event that triggers
the paste also triggers getting keyboard focus, you are going to have a
protocol open to all kinds of ugly race conditions. If it does not
trigger getting the keyboard focus, the paste just fails.

There are point-to-type and click-to-type keyboard focus models, both
of which should be supported by the primary selection protocol.

Thanks

Michal


Re: [PATCH wayland 2/2] protocol: Add DnD actions

2016-01-20 Thread Michal Suchanek
Hello,

On 14 January 2016 at 13:51, Jonas Ådahl <jad...@gmail.com> wrote:
> On Thu, Jan 14, 2016 at 11:50:29AM +0100, Michal Suchanek wrote:
>> On 14 January 2016 at 01:54, Carlos Garnacho <carl...@gnome.org> wrote:
>> > Hi Michal,
>> >

>> > have you
>> > considered compatibility with Xdnd?
>>
>> As compatibility with X is always best effort so long as something can
>> be transferred it's probably ok.
>
> No, "best effort" is not Ok. Compatibility with XDND means compatibility
> with the rest of the whole world for probably quite some time. Part of
> the reason why the current proposal looks like it does is for making it
> possible to interoperate with XDND via Xwayland.
>
> You seem to mostly care about use cases where the applications use
> special modifier or button combinations to change the mime-type that is
> transfered. This something the current additions doesn't support, since
> it is the compositor that takes the input grab during the grab and we
> don't introduce any new modifier events anywhere. This, however, doesn't
> mean it is impossible to introduce modifier events in some way in the
> future.
>
>>
>> > This all was already discussed in
>> > ealier threads with no clear gains on either other option. And being
>> > at v9 of this patch, I'm not personally keen on modifying such
>> > fundamental aspect.
>> >
>>
>>
>> Without that I doubt many people will be keen on using it.
>
> On the contrary, these improvements already make it possible for the
> majority of DnD clients to start working properly. It's a step in the
> right direction, but to start supporting every thinkable use case from
> the beginning, before we actually have real world examples of how it
> should actually work is risky business, and that is not the intention of
> the discussed patches.
>
>

OK, so let me give you a very practical example.

On Windows the DnD convention is that Ctrl triggers copy action, Shift
triggers move and Ctrl+Shift triggers ask.

Now if I wanted to implement a Windows feel-alike client, so that users
can easily use the interface they are accustomed to, I would have to add
the Ctrl+Shift binding, which is *not* possible.

On a Mac the modifiers are different. IIRC it pretty much uses the
Windows key everywhere a PC uses the Control key for keyboard
shortcuts. So for Mac feel-alike application I need to remap the
modifiers.

The hypothetical example with Alt and a 10th mouse button is not
completely made up - many applications use configurable key bindings
to boost productivity for frequently used functions. Since neither
client can modify the DnD action with custom modifiers, the options
are severely limited here. Sure, one can use the primary selection
instead if it ends up allowing custom paste keybindings, but that is
supposed to be an experimental, optional feature and not a core part
of the protocol.

If you make the protocol so limited that the whole thing is
best-effort then, sure, the XDND translation can be 100%. I am not
sure that should be the goal here, though.

Thanks

Michal


Re: [PATCH wayland 2/2] protocol: Add DnD actions

2016-01-14 Thread Michal Suchanek
On 14 January 2016 at 01:54, Carlos Garnacho <carl...@gnome.org> wrote:
> Hi Michal,
>
> On Mon, Jan 11, 2016 at 11:34 AM, Michal Suchanek <hramr...@gmail.com> wrote:
>> On 24 December 2015 at 01:58, Carlos Garnacho <carl...@gnome.org> wrote:
>>
>>> @@ -757,6 +883,40 @@
>>>
>>>
>>>  
>>> +
>>> +
>>> +
>>> +
>>> +  
>>> +This is a bitmask of the available/preferred actions in a
>>> +drag-and-drop operation.
>>> +
>>> +In the compositor, the selected action comes out as a result of
>>> +matching the actions offered by the source and destination sides.
>>> +"action" events with a "none" action will be sent to both source
>>> +and destination if there is no match. All further checks will
>>> +effectively happen on (source actions ∩ destination actions).
>>> +
>>> +In addition, compositors may also pick different actions in
>>> +reaction to key modifiers being pressed, one common ground that
>>> +has been present in major toolkits (and the behavior recommended
>>> +for compositors) is:
>>> +
>>> +- If no modifiers are pressed, the first match (in bit order)
>>> +  will be used.
>>> +- Pressing Shift selects "move", if enabled in the mask.
>>> +- Pressing Control selects "copy", if enabled in the mask.
>>> +
>>> +Behavior beyond that is considered implementation-dependent.
>>> +Compositors may for example bind other modifiers (like Alt/Meta)
>>> +or drags initiated with other buttons than BTN_LEFT to specific
>>> +actions (e.g. "ask").
>>> +  
>>> +  
>>> +  
>>> +  
>>> +  
>>> +
>>>
>>>
>>
>> Hello,
>>
>> how do you go about implementing those implementation-specific actions?
>
> One detail I think you've missed: it's compositors who decide the
> action. Clients on both sides will only receive wl_data_source events
> on the source side and wl_data_device/wl_data_offer events on the
> destination side.

That's exactly the problem. The specification suggests that clients
can implement other application-specific behaviour, but there is no
way to do that.

>
>>
>> Let's say I have a client which is the DnD source in an operation and
>> wants to paste text without formatting when ALT is pressed. It can
>> technically change the offered mime-types or perform a different
>> object conversion when sending the data if it learns the key is down.
>> Will it learn about the key state?
>
> It can't technically do that, wl_data_source.offer is cummulative,
> there's no wl_data_source.reset nor somesuch. There's also no way then
> to have the drag destination know when/whether the mimetypes list is
> definitive. And besides, it's the destination which chooses the most
> suitable mimetype through wl_data_offer.accept. This sounds like the
> opposite than the current protocol in git offers.
>
> Also, why should the drag source choose whether plain/formatted text
> is transferred?

Because pressing that modifier does that for local pastes, and it wants
to do the same for remote pastes as well?

> it'd be the drag destination which can actually tell
> if it can manage either. Your usecase actually doesn't stand if you
> s/without/with/, what should weston-terminal do if the source enforces
> formatted text?

Reject it. However, if it were possible to retract an offered type,
the source client could offer the text as any of HTML/RTF/plain, or
as plain only.

>
> Third, this patch is about DnD actions, mimetypes are just tangential,
> besides you having to .accept one for the transfer to succeed.
> Mimetypes are exclusively about the destination choosing the most
> suitable/lossless/etc format.

They can also be used by the source to designate what kind of
information it is willing to transfer.

>
>>
>> Another example would be a client that is DnD destination and wants to
>> paste only text style when the user holds the 10th mouse button or 5th
>> touchscreen softbutton when the object is dropped. Will it get to know
>> that the event happened? It will probably want to reject move action
>> in this case since the object is not fully transferred. There is no
>> mime type for text style there is no way to transfer it other than
>> transferring whole formatted text clip and then trashing the text (and
>>

Re: [PATCH wayland 2/2] protocol: Add DnD actions

2016-01-11 Thread Michal Suchanek
On 24 December 2015 at 01:58, Carlos Garnacho  wrote:

> @@ -757,6 +883,40 @@
>
>
>  
> +
> +
> +
> +
> +  
> +This is a bitmask of the available/preferred actions in a
> +drag-and-drop operation.
> +
> +In the compositor, the selected action comes out as a result of
> +matching the actions offered by the source and destination sides.
> +"action" events with a "none" action will be sent to both source
> +and destination if there is no match. All further checks will
> +effectively happen on (source actions ∩ destination actions).
> +
> +In addition, compositors may also pick different actions in
> +reaction to key modifiers being pressed, one common ground that
> +has been present in major toolkits (and the behavior recommended
> +for compositors) is:
> +
> +- If no modifiers are pressed, the first match (in bit order)
> +  will be used.
> +- Pressing Shift selects "move", if enabled in the mask.
> +- Pressing Control selects "copy", if enabled in the mask.
> +
> +Behavior beyond that is considered implementation-dependent.
> +Compositors may for example bind other modifiers (like Alt/Meta)
> +or drags initiated with other buttons than BTN_LEFT to specific
> +actions (e.g. "ask").
> +  
> +  
> +  
> +  
> +  
> +
>
>

Hello,

how do you go about implementing those implementation-specific actions?

Let's say I have a client which is the DnD source in an operation and
wants to paste text without formatting when ALT is pressed. It can
technically change the offered mime-types or perform a different
object conversion when sending the data if it learns the key is down.
Will it learn about the key state?

Another example would be a client that is the DnD destination and wants
to paste only the text style when the user holds the 10th mouse button
or a 5th touchscreen soft button when the object is dropped. Will it get
to know that the event happened? It will probably want to reject the
move action in this case, since the object is not fully transferred.
There is no mime type for text style, so there is no way to transfer it
other than transferring the whole formatted text clip and then
discarding the text (and it's going to be hairy if the text is not
formatted uniformly).

Thanks

Michal


Re: [RFC wayland-protocols V3] Add Primary Selection Protocol Version 1

2016-01-07 Thread Michal Suchanek
On 7 January 2016 at 04:42, Jonas Ådahl  wrote:
> On Wed, Jan 06, 2016 at 09:50:36PM -0500, Lyude wrote:
>> Signed-off-by: Lyude 
>> ---
>>
>> Notes:
>>   Changes since V2
>> * Bunch of grammatical/wording fixes from whot
>> * Addition of wp_primary_selection_offer::end_offers, for marking the 
>> end of a
>>   list of mime type offers
>> * selection_offers are no longer sent before an input event, and are 
>> sent at the
>>   first opportunity a client has to do a primary selection paste. This 
>> decision
>>   comes from a discussion with Jasper, where a couple of clients (such 
>> as emacs)
>>   were brought up that have their own bindings for primary selection 
>> pasting.
>>   Eventually I will probably work on adding some sort of paste_hint 
>> event to
>>   this so that the compositor can decide what keybinding triggers a 
>> primary
>>   selection paste, I agree with Jasper that it would be best to solve 
>> the issue
>>   of rebinding primary selection pastes after we have the basic protocol 
>> for
>>   primary selection worked out.
>
> Does this mean that the offer always comes on keyboard focus? Or pointer
> focus? Or touch focus? Or does it come a user interaction of some kind?
> And after that it may retrieve the primary selection at any point? Could
> it not be done as request that is a response to an input event carrying
> a serial, where the serial can be used to match the request to the
> triggering user interaction. Or would that break some expectations of
> the primary selection use case (i.e. retrieve not from a user
> interaction)?

The primary selection expectation in X is that an application can
retrieve it at any time.

It has been pointed out that focus is not what it used to be in X, and
hence is not useful for determining paste ability.

Also, the application should be responsible for determining what action
(if any) triggers a paste; tying the paste request to an input event
gives no useful information anyway.

However, the compositor can and should apply a policy to pastes and
not send the paste offer to all applications as soon as a selection is
set.

One way to do that while preserving the feel of the X primary selection
is to send an offer once an application receives user input (an event)
after a selection was set. That way applications can decide which user
action triggers a paste, using application-specific bindings. Since the
offer is invalidated once a new selection is set, an application cannot
get arbitrary pastes from the paste buffer without user action. If
desired, different non-default paste policies (event filters) can be
applied to different applications, to accommodate both paste managers,
which should receive the offer as soon as the selection is set, and
sandboxed applications, which should not have access to the paste
buffer at all.

Thanks

Michal


Re: [RFC v2] Add Primary Selection Protocol Version 1

2016-01-05 Thread Michal Suchanek
Hello,

On 5 January 2016 at 07:04, Bill Spitzak  wrote:
> On 01/04/2016 08:33 PM, Peter Hutterer wrote:
>
> Also it seems like you will send this on *every* middle click. Some
> clients
> require clicking the middle button a lot without pasting (it is very
> common
> to use it as a pan control in 2D/3D).
>>>
>>>
 in the normal use-case you will get a lot more focus changes than middle
 clicks.
>>>
>>>
>>> Not if middle-click is used for panning. That's why I mentioned it.
>>>
>>> You are correct though that it is probably irrelevant. Though I am a bit
>>> concerned that this event includes an object with a new id, and all the
>>> necessary work on both the compositor and client to track it.
>>
>>
>> one of the big benefits that input related stuff has is that you don't
>> need
>> to care too much about resources. yes, this protocol involves creating and
>> destroying objects on every middle click. Compared to the textures you
>> just
>> had to load to display the web browser, this is peanuts. Crushed into very
>> small pieces :)
>
>
> Maybe it should only send the newid if the selection has changed. The client
> can keep the old object around until it is destroyed, even if it gets many
> clicks.
>
> ie. when the selection changes, any old offers get the destroyed event. But
> clients do not get the new offer until they get another middle click. If the
> user clicks many times only one offer proxy is created.
>
> The overhead may not be so minimal in creating/destroying local objects in
> some languages such as Python, so it does seem nice to avoid this. A bigger
> concern is that if more events are decided to be "pasteable" by the
> compostior, it will have to send the offer even more times (imagine if it is
> decided that any keystroke can paste). This rule means it only has to send
> it before the first such event.

Actually, this sounds reasonable.

When there is a selection set and an application receives any input
event, it will also receive a paste offer. It can then act on that
offer at any time until a new selection is set. It does not need a new
offer until a new selection is set.

When concerned about pasting into the wrong application the user can
unmap/minimize/whatever the application window so it cannot receive
input events.

There is only a slight problem with paste buffer managers and download
managers, which try to watch for paste buffer changes in the background
without receiving any input. This can probably be addressed by setting
different paste policies (filters for the events that grant access to
the paste buffer) for different applications.

Thanks

Michal


Re: [RFC v2] Add Primary Selection Protocol Version 1

2015-12-23 Thread Michal Suchanek
On 18 December 2015 at 20:34, Bill Spitzak  wrote:
>
>
> On Fri, Dec 18, 2015 at 9:03 AM, Lyude  wrote:
>>
>> Signed-off-by: Lyude 
>> ---
>> Changes
>> * Add new interfaces to replace reuse of wl_data_(source|offer)
>> * Get rid of the selection changed event since we now have our own version
>>   of wl_data_(source|offer), and clients can just assume destroyed events
>>   indicate that their data in the primary clipboard has been replaced.
>> * Get rid of summary on arguments, I noticed most of the official wayland
>>   protocol doesn't actually use these, and they were mostly redundant
>>   anyway.
>> * s/selection_set/set_selection/
>> * Add destructor requests for all interfaces
>
>
> I do not like the fact that a program has to create both a
> wl_primary_selection_offer and a wl_data_offer on any selection (yes it can
> defer the second one until the user starts a drag, but it still would be
> nice to reuse the same one for both). Still not a huge problem, like in many
> other cases the two proxies can be stuffed into the same higher-level api
> object.
>
> Some assurance that the abilities of both will be kept identical would help,
> however. I don't want some data transfers to be impossible depending on
> whether the user does dnd or middle-click paste.
>
>>
>> +
>> +  
>> +Singleton global object that manages the
>> zwp_primary_selection_device_v1
>> +objects for each wl_seat.
>> +  
>> +  > interface="zwp_primary_selection_device_v1"/>
>> +  
>> +
>
>
> Also, looking at what happened to wl_pointer, it appears the design of
> Wayland is to not have any "singletons" except the globally-advertised
> objects. I'm not sure if this has any visible effects on your api because
> there is no actual state stored on the object, and you must already support
> creating more than one, but it might make sense to get rid of the words
> "Singleton global".
>
>> +  
>> +
>> +  
>> +Set the current contents of the primary selection buffer. This
>> clears
>> +anything which was previously held in the primary selection
>> buffer.
>> +
>> +This request can only be used while the window is focused.
>> +  
>> +  > interface="zwp_primary_selection_source_v1"/>
>> +
>
>
> Please don't require the "window is focused". The rules used by the
> compositor to accept/reject these offers should be defined by the compositor
> (a possible rule is that the window either got a mouse click or a keystroke
> from the seat). Clicking in a window should never be required to cause
> focus/activate/raise by the low-level api, that is instead part of the
> desktop and client definition.

So once a window has been clicked, can it set the primary selection
arbitrarily from then on?

Or is there a timeout? How long?

You cannot possibly tie the setting of the selection to a particular
click. The user does something, as a result the application gets an
event, the application processes the event, and as a result it decides
it would be a good idea to set the selection. How does the fact that
the user clicked the window sometime in the past help when the request
to set the selection actually arrives?

>
>> +
>> +  
>> +Sent when the client has permission to read from the primary
>> selection
>> +buffer.
>> +
>> +This event is sent whenever the client receives a middle click,
>> and will
>> +be received by the client before the actual middle click event.
>> While
>> +the compositor is free to bind this event to another input event
>> (such
>> +as a keyboard shortcut), the client should always treat pastes
>> from the
>> +primary selection as middle clicks. This is to ensure the
>> behavior is
>> +identical to that of primary selection pasting in X.
>> +
>> +It is up to the client to decide whether or not it is appropriate
>> to
>> +read from the primary buffer and paste it's contents.
>> +  
>> +  > interface="zwp_primary_selection_offer_v1"/>
>> +
>
>
> I think it would be better to send this to the client when it gets any focus
> from the seat, and also if the selection is changed while it has focus. Then
> the client is able to free to use any method to do the paste (though of
> course using the middle mouse is encouraged). Also means there is not
> redundant events if the user clicks the middle mouse many times (a lot of
> programs use the middle-mouse drag as a pan operation).

Same here. If setting the selection does not require focus, why should
pulling it?

If middle-clicking a window is encouraged to perform a paste, and the
middle click is not meant to imply anything about the window's focus,
then it is advisable not to depend on the focus state for pasting.

The thing with focus here is probably meant to add some security so
that some arbitrary 

Re: [RFC v2] Add Primary Selection Protocol Version 1

2015-12-18 Thread Michal Suchanek
On 18 December 2015 at 18:03, Lyude  wrote:
> Signed-off-by: Lyude 

> +
> +  
> +Sent when the client has permission to read from the primary 
> selection
> +buffer.
> +
> +This event is sent whenever the client receives a middle click, and 
> will
> +be received by the client before the actual middle click event. While
> +the compositor is free to bind this event to another input event 
> (such
> +as a keyboard shortcut), the client should always treat pastes from 
> the
> +primary selection as middle clicks. This is to ensure the behavior is
> +identical to that of primary selection pasting in X.
> +
> +It is up to the client to decide whether or not it is appropriate to
> +read from the primary buffer and paste it's contents.
> +  
> +   interface="zwp_primary_selection_offer_v1"/>
> +
> +

Why this?

Is this an artifact of copying from the DnD spec?

A drop is initiated from the outside, by dropping an object, but a
paste is something the application initiates by itself. The primary
selection sits there all the time, ready to be pasted, unlike dropped
objects, which appear and disappear momentarily.

It is customary on current desktops that paste happens on middle click,
but that is just something configured separately in every toolkit so it
works the same across the whole desktop. The application itself should
decide if, and on what event, it tries to pull the primary selection
content. Note also that there are cut buffer managers that pull and
store the primary buffer content every time it is set, so if X
compatibility is desired, setting the paste buffer should generate an
event.

Thanks

Michal


Re: RFC: idle protocol

2015-12-09 Thread Michal Suchanek
On 9 December 2015 at 08:12, Martin Graesslin  wrote:
> On Tuesday, December 8, 2015 2:59:38 PM CET Bryce Harrington wrote:
>> On Tue, Dec 08, 2015 at 02:12:01PM +0100, Martin Graesslin wrote:
>> > Hi Wayland-developers,
>> >
>> > at KDE we developed a protocol for our framework kidletime [1]. The idea
>> > is to notify Wayland clients when a wl_seat has been idle for a specified
>> > time. We use this for example for power management, screen locking etc.
>> > But a common use case is also setting a user as away in a chat
>> > application.
>> >
>> > We think that this protocol can be in general useful for all Wayland based
>> > systems. Our current protocol is attached to this mail. Of course for
>> > integration into wayland protocols the namespace needs adjustments (I'm
>> > open for suggestions).
>> >
>> > The reference implementation of the protocol can be found in the KWayland
>> > repository (client at [2], server at [3]).
>> >
>> > Best Regards
>> > Martin Gräßlin
>> >
>> > [1] http://inqlude.org/libraries/kidletime.html
>> > [2] git://anongit.kde.org/kwayland (path src/client/idle.h and src/client/
>> > idle.cpp)
>> > [3] git://anongit.kde.org/kwayland (path src/server/idle_interface.h and
>> > src/ server/idle_interface.cpp)
>>
>> Hi Martin, thanks for proposing this protocol.  You may have seen the
>> screensaver inhibition protocol proposed recently[1];
>
> no I hadn't seen this and I'm slightly surprised about a screensaver
> inhibition as that is IMHO a solved problem on DBus level - both on the
> org.freedesktop.ScreenSaver as well on
> org.freedesktop.PowerManagement.Inhibit.

This is IMHO not a solution but another problem to work around.

When an application requests screensaver inhibition over DBus you have
its DBus address but not its window ID, so you have no idea on which
screen the window is displayed, or whether it is displayed at all.
IIRC Wayland has a ping protocol to determine whether an application
that created a window is responsive. So the compositor can revoke the
inhibition when the application stops responding or ceases to be
visible, and it can point to the window that inhibits the screensaver
when the user asks.

IIRC there is no global window ID space shared between the compositor
and all clients, so it is not possible to send a window ID over DBus.
Even if it were, it would no longer be desktop-agnostic. Without that,
you end up in the situation where your screensaver does not save the
screen, but you have no idea why.

Thanks

Michal


Re: [RFC wayland-protocols v2] Add screensaver inhibition protocol

2015-11-25 Thread Michal Suchanek
On 25 November 2015 at 08:24, Bryce Harrington <br...@osg.samsung.com> wrote:
> On Wed, Nov 25, 2015 at 08:09:16AM +0100, Michal Suchanek wrote:
>> Hello
>>
>> On 25 November 2015 at 07:49, Bryce Harrington <br...@osg.samsung.com> wrote:
>> > This interface allows disabling of screensaver/screenblanking on a
>> > per-surface basis.  As long as the surface remains visible and
>> > non-occluded it blocks the screensaver, etc. from activating on the
>> > output(s) that the surface is visible on.
>> >
>> > To uninhibit, simply destroy the inhibitor object.
>> >
>> > Signed-off-by: Bryce Harrington <br...@osg.samsung.com>
>> > ---
>> >  Makefile.am|  1 +
>> >  unstable/screensaver-inhibit/README|  4 ++
>> >  .../screensaver-inhibit-unstable-v1.xml| 80 
>> > ++
>> >  3 files changed, 85 insertions(+)
>> >  create mode 100644 unstable/screensaver-inhibit/README
>> >  create mode 100644 
>> > unstable/screensaver-inhibit/screensaver-inhibit-unstable-v1.xml
>> >
>> > diff --git a/Makefile.am b/Makefile.am
>> > index f1bac16..7af18c5 100644
>> > --- a/Makefile.am
>> > +++ b/Makefile.am
>> > @@ -5,6 +5,7 @@ nobase_dist_pkgdata_DATA = 
>> >  \
>> > unstable/text-input/text-input-unstable-v1.xml 
>> >  \
>> > unstable/input-method/input-method-unstable-v1.xml 
>> >  \
>> > unstable/xdg-shell/xdg-shell-unstable-v5.xml   
>> >  \
>> > +   unstable/screensaver-inhibit/screensaver-inhibit-unstable-v1.xml   
>> >  \
>> > $(NULL)
>> >
>> >  pkgconfigdir = $(libdir)/pkgconfig
>> > diff --git a/unstable/screensaver-inhibit/README 
>> > b/unstable/screensaver-inhibit/README
>> > new file mode 100644
>> > index 000..396e871
>> > --- /dev/null
>> > +++ b/unstable/screensaver-inhibit/README
>> > @@ -0,0 +1,4 @@
>> > +Screensaver inhibition protocol
>> > +
>> > +Maintainers:
>> > +Bryce Harrington <br...@osg.samsung.com>
>> > diff --git 
>> > a/unstable/screensaver-inhibit/screensaver-inhibit-unstable-v1.xml 
>> > b/unstable/screensaver-inhibit/screensaver-inhibit-unstable-v1.xml
>> > new file mode 100644
>> > index 000..4252baf
>> > --- /dev/null
>> > +++ b/unstable/screensaver-inhibit/screensaver-inhibit-unstable-v1.xml
>> > @@ -0,0 +1,80 @@
>> > +
>> > +
>> > +
>> > +  
>> > +Copyright © 2015 Samsung Electronics Co., Ltd
>> > +
>> > +Permission is hereby granted, free of charge, to any person obtaining 
>> > a
>> > +copy of this software and associated documentation files (the 
>> > "Software"),
>> > +to deal in the Software without restriction, including without 
>> > limitation
>> > +the rights to use, copy, modify, merge, publish, distribute, 
>> > sublicense,
>> > +and/or sell copies of the Software, and to permit persons to whom the
>> > +Software is furnished to do so, subject to the following conditions:
>> > +
>> > +The above copyright notice and this permission notice (including the 
>> > next
>> > +paragraph) shall be included in all copies or substantial portions of 
>> > the
>> > +Software.
>> > +
>> > +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 
>> > EXPRESS OR
>> > +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 
>> > MERCHANTABILITY,
>> > +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT 
>> > SHALL
>> > +THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR 
>> > OTHER
>> > +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 
>> > ARISING
>> > +FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
>> > +DEALINGS IN THE SOFTWARE.
>> > +  
>> > +
>> > +  
>> > +
>> > +  This interface is implemented by servers whose screensaver and/or 
>> > screen
>> > +  blanking are able to be disabled by a given
>> > +
>> > +  An object to establish an inhibition on 

Re: [RFC wayland-protocols v2] Add screensaver inhibition protocol

2015-11-24 Thread Michal Suchanek
Hello

On 25 November 2015 at 07:49, Bryce Harrington  wrote:
> This interface allows disabling of screensaver/screenblanking on a
> per-surface basis.  As long as the surface remains visible and
> non-occluded it blocks the screensaver, etc. from activating on the
> output(s) that the surface is visible on.
>
> To uninhibit, simply destroy the inhibitor object.
>
> Signed-off-by: Bryce Harrington 
> ---
>  Makefile.am|  1 +
>  unstable/screensaver-inhibit/README|  4 ++
>  .../screensaver-inhibit-unstable-v1.xml| 80 
> ++
>  3 files changed, 85 insertions(+)
>  create mode 100644 unstable/screensaver-inhibit/README
>  create mode 100644 
> unstable/screensaver-inhibit/screensaver-inhibit-unstable-v1.xml
>
> diff --git a/Makefile.am b/Makefile.am
> index f1bac16..7af18c5 100644
> --- a/Makefile.am
> +++ b/Makefile.am
> @@ -5,6 +5,7 @@ nobase_dist_pkgdata_DATA =
>   \
> unstable/text-input/text-input-unstable-v1.xml
>   \
> unstable/input-method/input-method-unstable-v1.xml
>   \
> unstable/xdg-shell/xdg-shell-unstable-v5.xml  
>   \
> +   unstable/screensaver-inhibit/screensaver-inhibit-unstable-v1.xml  
>   \
> $(NULL)
>
>  pkgconfigdir = $(libdir)/pkgconfig
> diff --git a/unstable/screensaver-inhibit/README 
> b/unstable/screensaver-inhibit/README
> new file mode 100644
> index 000..396e871
> --- /dev/null
> +++ b/unstable/screensaver-inhibit/README
> @@ -0,0 +1,4 @@
> +Screensaver inhibition protocol
> +
> +Maintainers:
> +Bryce Harrington 
> diff --git a/unstable/screensaver-inhibit/screensaver-inhibit-unstable-v1.xml 
> b/unstable/screensaver-inhibit/screensaver-inhibit-unstable-v1.xml
> new file mode 100644
> index 000..4252baf
> --- /dev/null
> +++ b/unstable/screensaver-inhibit/screensaver-inhibit-unstable-v1.xml
> @@ -0,0 +1,80 @@
> +
> +
> +
> +  
> +Copyright © 2015 Samsung Electronics Co., Ltd
> +
> +Permission is hereby granted, free of charge, to any person obtaining a
> +copy of this software and associated documentation files (the 
> "Software"),
> +to deal in the Software without restriction, including without limitation
> +the rights to use, copy, modify, merge, publish, distribute, sublicense,
> +and/or sell copies of the Software, and to permit persons to whom the
> +Software is furnished to do so, subject to the following conditions:
> +
> +The above copyright notice and this permission notice (including the next
> +paragraph) shall be included in all copies or substantial portions of the
> +Software.
> +
> +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS 
> OR
> +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> +THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR 
> OTHER
> +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> +FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> +DEALINGS IN THE SOFTWARE.
> +  
> +
> +  
> +
> +  This interface is implemented by servers whose screensaver and/or 
> screen
> +  blanking are able to be disabled by a given
> +
> +  An object to establish an inhibition on the screensaver and screen
> +  blanking of the output that the specified surface is shown on.
> +
> +  Warning! The protocol described in this file is experimental and
> +  backward incompatible changes may be made. Backward compatible changes
> +  may be added together with the corresponding interface version bump.
> +  Backward incompatible changes are done by bumping the version number in
> +  the protocol and interface names and resetting the interface version.
> +  Once the protocol is to be declared stable, the 'z' prefix and the
> +  version number in the protocol and interface names are removed and the
> +  interface version number is reset.
> +
> +
> +
> +  
> +   Create a new inhibition object associated with the given surface.
> +  
> +  <arg name="id" type="new_id"
> +       interface="zwp_screensaver_inhibition_inhibit_v1"/>
> +  <arg name="surface" type="object" interface="wl_surface"
> +       summary="the surface that inhibits the screensaver"/>
> +
> +  
> +
> +  
> +
> +  An inhibitor prevents the output that the surface is visible on from
> +  being blanked, dimmed, locked, set to power save, or otherwise obscuring
> +  the screen visuals due to lack of user interaction.  Any active
> +  screensaver processes are also temporarily blocked from displaying.

The compositor may at its discretion prevent other outputs from
blanking as well. This should be configurable by user settings and it

Re: [RFC] Screensaver/blanking inhibition

2015-11-20 Thread Michal Suchanek
On 20 November 2015 at 12:48, Pekka Paalanen <ppaala...@gmail.com> wrote:
> On Fri, 20 Nov 2015 12:18:29 +0100
> Michal Suchanek <hramr...@gmail.com> wrote:
>
>> On 20 November 2015 at 11:39, Pekka Paalanen <ppaala...@gmail.com> wrote:
>> > On Thu, 19 Nov 2015 22:46:06 +0100
>> > Michal Suchanek <hramr...@gmail.com> wrote:
>> >
>> >> On 19 November 2015 at 20:12, Daniel Stone <dan...@fooishbar.org> wrote:
>> >> > Hi,
>> >> >
>> >> > On 19 November 2015 at 19:05, Bill Spitzak <spit...@gmail.com> wrote:
>> >> >> I feel like there is no need to tie it to a surface. In Wayland the 
>> >> >> client
>> >> >> is always notified of any changes to it's state, so it can update the
>> >> >> screensaver object to match. (destruction of the screensaver object 
>> >> >> would of
>> >> >> course remove the inhibit).
>> >> >>
>> >> >> The surface may be necessary to indicate if only one output is to have 
>> >> >> the
>> >> >> screensaver inhibited, but I think wayland clients are aware of which 
>> >> >> output
>> >> >> their surfaces are on so instead the output could be indicated 
>> >> >> directly.
>> >> >
>> >> > By default it should be tied to a surface.
>> >>
>> >> That sounds quite reasonable. The compositor can point to a particular
>> >> surface that inhibits screen blanking, the surface can be removed
>> >> one-sidedly by the compositor, the compositor may choose to cease the
>> >> blank-inhibit function when the surface is obscured/unmapped/..
>> >>
>> >> >
>> >> >>>>>>>> In X11, various getter functions are provided for the 
>> >> >>>>>>>> application to
>> >> >>>>>>>> poll inhibition status.  For Wayland, instead of getters, the 
>> >> >>>>>>>> current
>> >> >>>>>>>> state is just be pushed to the client when the inhibitor global 
>> >> >>>>>>>> is
>> >> >>>>>>>> bound, and then updates pushed if/when the status changes.
>> >> >>>>>>>>
>> >> >>>>>>>
>> >> >>>>>>> This makes sense, and follows "standard" wayland practice
>> >> >>
>> >> >>
>> >> >> I don't see any reason for a client to know about any other clients
>> >> >> inhibiting the screensaver, and this might be a security leak too.
>> >> >
>> >> > Yes, seems a bit pointless.
>> >> >
>> >> >>>>>>>> A corresponding uninhibit API will be added as well.  For 
>> >> >>>>>>>> example, a
>> >> >>>>>>>> movie player may wish to inhibit while a video is playing but
>> >> >>>>>>>> uninhibit
>> >> >>>>>>>> when it is paused.
>> >> >
>> >> > Just make the inhibit request return a new object, which upon destroy,
>> >> > removes the inhibition. That way you don't even have duplicate
>> >> > codepaths for client exiting uncleanly vs. client removed inhibition.
>> >>
>> >> Except then it is not bound to a surface anymore.
>> >
>> > Yes, it is tied to the surface. The create method on the global
>> > interface can have the surface as an argument. This is a common pattern
>> > in Wayland extensions.
>> >
>> >> Also the protocol should probably cover the cases when a client locks up.
>> >
>> > We don't want yet another ping/pong protocol in this one. We can leave
>> > it as a shell detail to determine whether a client is locked up and how
>> > to respond to that. I do not think it is really necessary to spec that
>> > in the protocol.
>>
>> So long as the object is tied to a surface there is no need for a ping-pong.
>
> Could you elaborate?

This is pretty much a usability point of view. When I have a (paused)
media player on a virtual X desktop that is not shown, it still
inhibits the screensaver (through some message call that does not
actually point to a particular X11 client).

When this is tied to a surface the compositor can deci

Re: [RFC] Screensaver/blanking inhibition

2015-11-20 Thread Michal Suchanek
On 20 November 2015 at 14:43, Pekka Paalanen <ppaala...@gmail.com> wrote:
> On Fri, 20 Nov 2015 14:10:47 +0100
> Michal Suchanek <hramr...@gmail.com> wrote:
>
>> On 20 November 2015 at 12:48, Pekka Paalanen <ppaala...@gmail.com> wrote:
>> > On Fri, 20 Nov 2015 12:18:29 +0100
>> > Michal Suchanek <hramr...@gmail.com> wrote:
>> >
>> >> On 20 November 2015 at 11:39, Pekka Paalanen <ppaala...@gmail.com> wrote:
>> >> > On Thu, 19 Nov 2015 22:46:06 +0100
>> >> > Michal Suchanek <hramr...@gmail.com> wrote:
>> >> >
>> >> >> On 19 November 2015 at 20:12, Daniel Stone <dan...@fooishbar.org> 
>> >> >> wrote:
>> >> >> > Hi,
>> >> >> >
>> >> >> > On 19 November 2015 at 19:05, Bill Spitzak <spit...@gmail.com> wrote:
>> >> >> >> I feel like there is no need to tie it to a surface. In Wayland the 
>> >> >> >> client
>> >> >> >> is always notified of any changes to it's state, so it can update 
>> >> >> >> the
>> >> >> >> screensaver object to match. (destruction of the screensaver object 
>> >> >> >> would of
>> >> >> >> course remove the inhibit).
>> >> >> >>
>> >> >> >> The surface may be necessary to indicate if only one output is to 
>> >> >> >> have the
>> >> >> >> screensaver inhibited, but I think wayland clients are aware of 
>> >> >> >> which output
>> >> >> >> their surfaces are on so instead the output could be indicated 
>> >> >> >> directly.
>> >> >> >
>> >> >> > By default it should be tied to a surface.
>> >> >>
>> >> >> That sounds quite reasonable. The compositor can point to a particular
>> >> >> surface that inhibits screen blanking, the surface can be removed
>> >> >> one-sidedly by the compositor, the compositor may choose to cease the
>> >> >> blank-inhibit function when the surface is obscured/unmapped/..
>> >> >>
>> >> >> >
>> >> >> >>>>>>>> In X11, various getter functions are provided for the 
>> >> >> >>>>>>>> application to
>> >> >> >>>>>>>> poll inhibition status.  For Wayland, instead of getters, the 
>> >> >> >>>>>>>> current
>> >> >> >>>>>>>> state is just be pushed to the client when the inhibitor 
>> >> >> >>>>>>>> global is
>> >> >> >>>>>>>> bound, and then updates pushed if/when the status changes.
>> >> >> >>>>>>>>
>> >> >> >>>>>>>
>> >> >> >>>>>>> This makes sense, and follows "standard" wayland practice
>> >> >> >>
>> >> >> >>
>> >> >> >> I don't see any reason for a client to know about any other clients
>> >> >> >> inhibiting the screensaver, and this might be a security leak too.
>> >> >> >
>> >> >> > Yes, seems a bit pointless.
>> >> >> >
>> >> >> >>>>>>>> A corresponding uninhibit API will be added as well.  For 
>> >> >> >>>>>>>> example, a
>> >> >> >>>>>>>> movie player may wish to inhibit while a video is playing but
>> >> >> >>>>>>>> uninhibit
>> >> >> >>>>>>>> when it is paused.
>> >> >> >
>> >> >> > Just make the inhibit request return a new object, which upon 
>> >> >> > destroy,
>> >> >> > removes the inhibition. That way you don't even have duplicate
>> >> >> > codepaths for client exiting uncleanly vs. client removed inhibition.
>> >> >>
>> >> >> Except then it is not bound to a surface anymore.
>> >> >
>> >> > Yes, it is tied to the surface. The create method on the global
>> >> > interface can have the surface as an argument. This is a common pattern
>> &

Re: [RFC] Screensaver/blanking inhibition

2015-11-20 Thread Michal Suchanek
On 20 November 2015 at 11:39, Pekka Paalanen <ppaala...@gmail.com> wrote:
> On Thu, 19 Nov 2015 22:46:06 +0100
> Michal Suchanek <hramr...@gmail.com> wrote:
>
>> On 19 November 2015 at 20:12, Daniel Stone <dan...@fooishbar.org> wrote:
>> > Hi,
>> >
>> > On 19 November 2015 at 19:05, Bill Spitzak <spit...@gmail.com> wrote:
>> >> I feel like there is no need to tie it to a surface. In Wayland the client
>> >> is always notified of any changes to it's state, so it can update the
>> >> screensaver object to match. (destruction of the screensaver object would 
>> >> of
>> >> course remove the inhibit).
>> >>
>> >> The surface may be necessary to indicate if only one output is to have the
>> >> screensaver inhibited, but I think wayland clients are aware of which 
>> >> output
>> >> their surfaces are on so instead the output could be indicated directly.
>> >
>> > By default it should be tied to a surface.
>>
>> That sounds quite reasonable. The compositor can point to a particular
>> surface that inhibits screen blanking, the surface can be removed
>> one-sidedly by the compositor, the compositor may choose to cease the
>> blank-inhibit function when the surface is obscured/unmapped/..
>>
>> >
>> >>>>>>>> In X11, various getter functions are provided for the application to
>> >>>>>>>> poll inhibition status.  For Wayland, instead of getters, the 
>> >>>>>>>> current
>> >>>>>>>> state is just be pushed to the client when the inhibitor global is
>> >>>>>>>> bound, and then updates pushed if/when the status changes.
>> >>>>>>>>
>> >>>>>>>
>> >>>>>>> This makes sense, and follows "standard" wayland practice
>> >>
>> >>
>> >> I don't see any reason for a client to know about any other clients
>> >> inhibiting the screensaver, and this might be a security leak too.
>> >
>> > Yes, seems a bit pointless.
>> >
>> >>>>>>>> A corresponding uninhibit API will be added as well.  For example, a
>> >>>>>>>> movie player may wish to inhibit while a video is playing but
>> >>>>>>>> uninhibit
>> >>>>>>>> when it is paused.
>> >
>> > Just make the inhibit request return a new object, which upon destroy,
>> > removes the inhibition. That way you don't even have duplicate
>> > codepaths for client exiting uncleanly vs. client removed inhibition.
>>
>> Except then it is not bound to a surface anymore.
>
> Yes, it is tied to the surface. The create method on the global
> interface can have the surface as an argument. This is a common pattern
> in Wayland extensions.
>
>> Also the protocol should probably cover the cases when a client locks up.
>
> We don't want yet another ping/pong protocol in this one. We can leave
> it as a shell detail to determine whether a client is locked up and how
> to respond to that. I do not think it is really necessary to spec that
> in the protocol.

So long as the object is tied to a surface there is no need for a ping-pong.

>
>> Xscreensaver has an inhibit protocol which just allows the application
>> reset the idle timer as if the user pressed a key. That's not ideal
>> but when combined with the possibility to register (and unregister) a
>> surface as blank-inhibiting when visible this would probably cover
>> most without much effort on the application side.
>
> What use cases are there for the idle timer reset ("poke")?

It's for the case when you want something that is not tied to a
surface: you want the client to still have the ability to inhibit the
screensaver, yet you do not want it to inhibit the screensaver
indefinitely (and then forget to re-enable it).
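The object-per-inhibition pattern discussed above, where inhibit returns a new object and destroying it (explicitly, or implicitly when the client disconnects) removes the inhibition, can be sketched as a plain refcount on the output. This is an illustrative model only; none of these names come from an actual protocol or from weston's code:

```c
#include <stdlib.h>

/* Illustrative model: each inhibit request creates an object, and
 * destroying the object releases the hold. The compositor destroys
 * the object automatically when the client disconnects, so the clean
 * and unclean exit paths share one code path. */
struct output_state {
	int inhibit_count; /* number of live inhibitor objects */
};

struct inhibitor {
	struct output_state *output;
};

static struct inhibitor *
inhibitor_create(struct output_state *o)
{
	struct inhibitor *inh = malloc(sizeof(*inh));
	if (!inh)
		return NULL;
	inh->output = o;
	o->inhibit_count++;
	return inh;
}

static void
inhibitor_destroy(struct inhibitor *inh)
{
	inh->output->inhibit_count--;
	free(inh);
}

static int
output_may_blank(const struct output_state *o)
{
	return o->inhibit_count == 0;
}
```

When the inhibitor is additionally tied to a wl_surface, the compositor can invoke the same destroy path itself when the surface is unmapped or obscured, which is why no extra ping/pong is needed.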

Thanks

Michal
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: [RFC] Screensaver/blanking inhibition

2015-11-19 Thread Michal Suchanek
On 19 November 2015 at 20:12, Daniel Stone  wrote:
> Hi,
>
> On 19 November 2015 at 19:05, Bill Spitzak  wrote:
>> I feel like there is no need to tie it to a surface. In Wayland the client
>> is always notified of any changes to it's state, so it can update the
>> screensaver object to match. (destruction of the screensaver object would of
>> course remove the inhibit).
>>
>> The surface may be necessary to indicate if only one output is to have the
>> screensaver inhibited, but I think wayland clients are aware of which output
>> their surfaces are on so instead the output could be indicated directly.
>
> By default it should be tied to a surface.

That sounds quite reasonable. The compositor can point to a particular
surface that inhibits screen blanking, the surface can be removed
unilaterally by the compositor, and the compositor may choose to cease
the blank-inhibit function when the surface is obscured/unmapped/..

>
 In X11, various getter functions are provided for the application to
 poll inhibition status.  For Wayland, instead of getters, the current
 state is just pushed to the client when the inhibitor global is
 bound, and then updates pushed if/when the status changes.

>>>
>>> This makes sense, and follows "standard" wayland practice
>>
>>
>> I don't see any reason for a client to know about any other clients
>> inhibiting the screensaver, and this might be a security leak too.
>
> Yes, seems a bit pointless.
>
 A corresponding uninhibit API will be added as well.  For example, a
 movie player may wish to inhibit while a video is playing but
 uninhibit
 when it is paused.
>
> Just make the inhibit request return a new object, which upon destroy,
> removes the inhibition. That way you don't even have duplicate
> codepaths for client exiting uncleanly vs. client removed inhibition.

Except then it is not bound to a surface anymore.

Also the protocol should probably cover the cases when a client locks up.

Xscreensaver has an inhibit protocol which just allows the application
to reset the idle timer as if the user had pressed a key. That's not
ideal, but when combined with the possibility to register (and
unregister) a surface as blank-inhibiting while visible, this would
probably cover most cases without much effort on the application side.

 Makes sense ("potentially" could inhibit other things depending on scope
 and how it grows)
>>
>> Absolutely it should by default inhibit any kind of notifier or any other
>> changes to the display not triggered by the user (it also should NOT inhibit
>> changes that the user triggers, such as hitting a shortcut key that creates
>> a popup).
>>
>> Among these changes that must be inhibited are "things that have not been
>> invented yet but may appear in a future desktop". Therefore a per-thing api
>> to inhibit them is not going to work and this must inhibit them all.
>>
>> The api can be enhanced in the future if you want finer control over what is
>> inhibited.
>
> No. People want to receive notifications whilst they watch movies.

At least some people sometimes, yes.

Thanks

Michal


Re: Xwayland and weston’s scale=2 (hi-dpi display)

2015-11-09 Thread Michal Suchanek
DPI is read from the X server, not Xresources.

You can probably adjust the DPI by the scale factor in the X
emulation, or not scale emulated X windows as suggested.
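For reference, the relation between the compositor's integer scale and the Xft.dpi value used in this thread (192 for a scale of 2) is just the nominal 96 DPI baseline multiplied by the scale. A small sketch of that arithmetic (the 96 DPI baseline is the usual X11 convention, not something mandated by weston):

```c
/* Nominal X11 baseline DPI. */
#define BASE_DPI 96

/* Xft.dpi value matching an output scaled by an integer factor,
 * e.g. scale 2 -> 192. */
static int
xft_dpi_for_scale(int scale)
{
	return BASE_DPI * scale;
}

/* Nearest integer scale factor for a physical DPI,
 * e.g. a 220 DPI panel -> scale 2. */
static int
scale_for_dpi(int physical_dpi)
{
	int scale = (physical_dpi + BASE_DPI / 2) / BASE_DPI;
	return scale < 1 ? 1 : scale;
}
```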

Thanks

Michal

On 9 November 2015 at 19:42, Bill Spitzak  wrote:
> Have the X emulator assume the client set the scale to the one determined
> from the dpi in the .Xresources?
>
>
>
> On Sun, Nov 8, 2015 at 6:10 PM, Jonas Ådahl  wrote:
>>
>> On Sat, Nov 07, 2015 at 09:48:59PM +0100, Michael Stapelberg wrote:
>> > Hey,
>> >
>> > I just got around to trying Wayland on my ThinkPad X1 Carbon 2015.
>> >
>> > The machine has a 2560x1440px display with 220 DPI, hence I’m using it
>> > as a
>> > “retina display”, i.e. with a scale factor of 2. On Xorg, I achieve this
>> > by
>> > setting “Xft.dpi: 192” in my ~/.Xresources. All the applications then
>> > come
>> > up with crystal-clear text, in comparison to a regular 96 dpi screen at
>> > least… :)
>> >
>> > When running weston without a ~/.config/weston.ini, everything is
>> > rendered
>> > with the native resolution of 2560x1440px, meaning the text is
>> > unreadably
>> > small (see the left weston-terminal window in
>> > http://t.zekjur.net/xwayland-scale-1-x.png).
>> >
>> > Therefore, I’ve set the following in my ~/.config/weston.ini:
>> >
>> >   [output]
>> >   name=eDP-1
>> >   scale=2
>> >
>> > This is recommended on
>> > https://wiki.archlinux.org/index.php/Wayland#High_DPI_displays, but I’m
>> > not
>> > sure if it’s actually the best method or the desired end state of hi-dpi
>> > support in wayland/weston. My uncertainty stems from the fact that while
>> > text in weston-terminal is rendered clearly, all assets (icons, mouse
>> > cursors) are low-resolution, even though higher-resolution versions are
>> > available.
>> >
>> > For the actual issue I’m trying to describe, my test procedure is to use
>> > “xrdb ~/.Xresources && urxvt”, then place the urxvt window such that it
>> > occupies about half of the screen.
>> >
>> > With scale=2 (see http://t.zekjur.net/xwayland-scale-2-x.png), I get a
>> > window with about 640px width in xwininfo and an extremely big font. I
>> > suppose this is because the Xwayland window (is that how it works?) is
>> > scaled to 2x.
>> >
>> > With scale=1 (see http://t.zekjur.net/xwayland-scale-1-x.png), I get a
>> > window with about 1280px width in xwininfo and the font I expect.
>> >
>> > So, it seems to me that I have to use scale=2 to get wayland apps to
>> > render
>> > correctly on a hi-dpi screen, and scale=1 to get xwayland apps to render
>> > correctly on a hi-dpi screen, and I obviously can’t do both at the same
>> > time.
>>
>> Pretty much. weston pretty much assumes that X11 clients are simply just
>> not HiDPI capable, and will just scale up as if it was a Wayland client,
>> and this produces this result.
>>
>> As a side note, in mutter we currently don't scale up Xwayland client
>> surfaces because of this reason; in GNOME, clients tend to be rendered
>> with respect to the DPI (i.e. at double the scale if the DPI is high
>> enough), and since the X server has no clue about the scale in use, it
>> will always set the buffer scale to 1. Pointer cursors has the same
>> issue; the X side of things might be aware of it (but without the actual
>> X server having any idea of what's going on).
>>
>> An issue with this is that non-DPI-aware X clients will look very tiny;
>> but since that wasn't really a regression from before, we are
>> currently living with it. Another issue is that even though the display
>> server is aware of each monitor's individual DPI, X11 windows will not
>> adapt their size to the monitor.
>>
>> A possible way to deal with this in the future is to add per window
>> properties specifying the scale the client actually drew its content
>> with, an X11 equivalent of wl_surface.set_buffer_scale more or less.
>> This might be problematic though because of race conditions and what
>> not, and I'm not aware of it being tried out anywhere.
>>
>> >
>> > Shall I file a bug about this, or am I misunderstanding something?
>>
>> Feel free to report one. I took a quick look, but couldn't see any
>> existing bug in the fdo bug tracker which covers this.
>>
>>
>> Jonas
>>
>> >
>> > Thanks,
>> > Best regards,
>> > Michael
>>

Re: [RFC] weston: implement inert objects for keyboard/pointer/touch

2015-10-19 Thread Michal Suchanek
Hello,

On 19 October 2015 at 15:47, David FORT  wrote:
> This is the second version. I have restored the ref counting of input devices,
> I think with the name weston_seat_init_pointer is not accurate, perhaps
> weston_seat_add_pointer_device would be better. I'm really wondering if it's
> the weston core that should do that refcounting, or if the input backend 
> should
> do it itself. I can't see any case where we would have 2 input backends (which
> would be a justification for weston doing it).
> Note that with this patch, we don't save the last position of the pointer. I'm
> wondering why we wanna do this, does that mean that we want the same kind of
> behaviour for other input devices (saving locks state for keyboard device for
> example) ?
>


On widely used systems that have a pointer, the position of the
relative pointer is preserved across device reconnect. So users will
expect that disconnecting a mouse and connecting another one will not
change the pointer position.

Same with keyboard locks. While buttons on a device should be released
on disconnect, internal state that is not mechanically represented by
the device's physical position should stay. That includes the pointer
position for devices that drive the pointer with relative axes, and
the state of keyboard locks that remain locked after the key itself is
released.

For devices that drive the pointer with absolute axes but can go out
of proximity, and hence can be in a state that provides no pointer
position reading, the initial pointer position is also needed. That
covers almost all devices, actually.

Since changing the pointer position or keyboard lock state at random
on device reconnect would cause problems for people with USB bus
issues, I would think that saving the state is not just something
current systems randomly happen to do. Note that hardware sometimes
cannot be fixed to work reliably. This is especially true for wireless
buses like Bluetooth.
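The behaviour argued for here falls out naturally when the seat, rather than the device, owns the pointer state, and devices merely add and remove references. A minimal sketch of that ownership model (hypothetical names; this is not weston's actual seat API):

```c
struct seat {
	int pointer_device_count; /* attached pointer devices */
	double x, y;              /* pointer position, owned by the seat */
};

static void
seat_pointer_device_added(struct seat *s)
{
	s->pointer_device_count++; /* position deliberately untouched */
}

static void
seat_pointer_device_removed(struct seat *s)
{
	/* Keep x/y even at count 0, so a reconnect (e.g. a flaky USB
	 * or Bluetooth mouse) does not teleport the cursor. */
	s->pointer_device_count--;
}

static void
seat_notify_motion(struct seat *s, double dx, double dy)
{
	if (s->pointer_device_count > 0) {
		s->x += dx;
		s->y += dy;
	}
}
```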

Thanks

Michal


Re: Wayland Relative Pointer API Progress

2015-04-20 Thread Michal Suchanek
On 20 April 2015 at 13:44, x414e54 x414...@linux.com wrote:
 This is kinda completely derailed from the whole include mice in the
 game controller protocol talk.

 On Mon, Apr 20, 2015 at 6:44 PM, Michal Suchanek hramr...@gmail.com wrote:
 On 20 April 2015 at 10:48, Pekka Paalanen ppaala...@gmail.com wrote:
 On Mon, 20 Apr 2015 10:13:34 +0200
 Michal Suchanek hramr...@gmail.com wrote:

 On 20 April 2015 at 09:36, Pekka Paalanen ppaala...@gmail.com wrote:
  On Sun, 19 Apr 2015 09:46:39 +0200
  Michal Suchanek hramr...@gmail.com wrote:
 
  So the device is always absolute and interpretation varies.
 
  I disagree.
 
  Let's take a mouse, optical or ball, doesn't matter. What you get out
  is a position delta over time. This is also know as velocity. Sampling
  rate affects the scale of the values, and you cannot reasonably define
  a closed range for the possible values. There is no home position. All

 There is a home position. That is when you do not move the mouse. The
 reading is then 0.

 That is not a unique position, hence it cannot be a home position. That
 is only a unique velocity. By definition, if your measurement is a
 velocity, it does not directly give you an absolute position.

 When we talk about absolute, we really mean absolute position.

 And what does absolute position of a sensor somewhere outside of the
 PC give you?

 A trackball and touchpad has as absolute position as joystick.

 Trackball measures velocity, touchpad finger position(s), joystick
 stick position.

 None of these is almost ever used for absolute input mapping
 particular reading of a sensor to a particular screen coordinate.


  A mouse could be an absolute device only if you were never able to lift
  it off the table and move it without it generating motion events. This
  is something you cannot do with an absolute device like a joystick.

 You are too much fixed on the construction of the sensor. Mouse is a
 velocity sensor similar to some nunchuck or whatever device with
 reasonable precision accelerometer. That you can and do lift it off
 the table is only relevant to how you use such sensor in practice.

 Accelerometers measure acceleration. Acceleration, like velocity, is
 not a position. It does not give you an absolute position directly.

 And what is practical impact of accelerometers not giving an absolute
 position compared to joystick?

 You can warp a relative motion cursor but cannot warp an absolute
 position cursor.

Indeed. But that's a property of how the sensor data is used by the
compositor to move the cursor, and not of the sensor.

If a joystick were ever used to position the cursor it would most
likely be done in relative mode, although you repeat that a joystick
is 'absolute'. There is no practical mapping of raw stick eccentricity
to absolute screen coordinates.
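Driving a cursor from a joystick "in relative mode" means treating the stick deflection as a velocity and integrating it over time, rather than mapping deflection directly to a screen coordinate. A sketch, with made-up units and gain:

```c
/* Integrate a normalized stick deflection (-1.0 .. 1.0) as a cursor
 * velocity: the position changes by deflection * gain * elapsed time.
 * The gain (pixels per second at full deflection) is arbitrary. */
static double
integrate_axis(double pos, double deflection, double gain_px_per_s,
	       double dt_s)
{
	return pos + deflection * gain_px_per_s * dt_s;
}
```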


 Warping a relative motion cursor is still a UX pain because you may be
 at the edge of your physical reach but warping an absolute position
 cursor is actually an offset and may make the interface unusable.

Warping a cursor that is operated using an input device in absolute
mode is completely possible. However, unless the cursor is also
confined, it will likely warp back on the next input event. When the
cursor is confined, you effectively get a sensor reading whose mapping
to absolute screen coordinates would put the cursor outside the
(active) screen area. You can also implement the confinement by
changing the mapping, not necessarily only the offset.



 Joystick can stay in an extreme position, mouse cannot. But if you
 take a nunchuck attached to a string and rotate it above your head the
 reading stays in an extreme position all the same.

 There is no sense in saying the sensor reading itself as absolute or
 relative. Either gives you some number in unknown units which you
 calibrate to get usable results. You have no idea where the stick is
 from the numbers you get. And there is absolutely no point caring. It
 may have some sense for a particular application and no sense for
 other.

 One of my original points was that a user should be able to hot-swap a
 mouse and a gamepad thumbstick without a game caring and that games do
 not care about mice/joystick/touchpad they just want raw axis values
 that they can use, evdev makes this abstraction.

 But you certainly need to know if the axis is relative or absolute to
 convert it to what the application needs.

And my point is that there is no such thing as relative and absolute
axes. There are sensors that give numbers as readings. Sometimes you
know that a bunch of sensors are actually axes which are physically
connected and orthogonal on a device, which is nice.

It might be worthwhile to provide adapting filters that try to mimic
the dynamic input properties of one type of device using another type
of device. However, that means n*n filters for n kinds of devices.
Certainly more than two.

Thanks

Michal

Re: Wayland Relative Pointer API Progress

2015-04-20 Thread Michal Suchanek
On 20 April 2015 at 14:49, x414e54 x414...@linux.com wrote:
 There is no sense in saying the sensor reading itself as absolute or
 relative. Either gives you some number in unknown units which you
 calibrate to get usable results. You have no idea where the stick is
 from the numbers you get. And there is absolutely no point caring. It
 may have some sense for a particular application and no sense for
 other.

 One of my original points was that a user should be able to hot-swap a
 mouse and a gamepad thumbstick without a game caring and that games do
 not care about mice/joystick/touchpad they just want raw axis values
 that they can use, evdev makes this abstraction.

 But you certainly need to know if the axis is relative or absolute to
 convert it to what the application needs.


 If you had an application wanting to move an object around and I gave
 you the value 500 and then 10 seconds later 400.
 Has the object moved 900 units or -100 units? You need to know this,
 this is the difference between absolute and relative.

Actually, that's determined by the application. It can use any sensor
in a relative or absolute mapping as it sees fit. And since we are
talking about replacing a controller like a joystick, which is
typically used in relative mode, with a mouse, which is typically also
used in relative mode, or a tablet or touch layer, which can also be
used in relative mode (e.g. when scrolling), there is no real problem
to solve here. If you were really diligent you could adapt the touch
layer by adding an offset so the center of the touch area reads as 0.
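That adaptation, offsetting an absolute axis so the center of its range reads as 0, is one line of arithmetic. A sketch, assuming the device advertises its axis range:

```c
/* Re-center an absolute axis reading so the middle of the device's
 * range maps to 0.0, producing a joystick-like signed deflection in
 * -1.0 .. 1.0. min/max are the device's advertised axis limits. */
static double
centered_reading(double raw, double min, double max)
{
	double center = (min + max) / 2.0;
	double half_range = (max - min) / 2.0;
	return (raw - center) / half_range;
}
```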


 But there is nothing stopping me giving you the position 500 and then
 measuring the next value relatively -100 and then calculating the last
 position plus the relative distance and giving this value to you.
 Hence the hot-swapping.

There is no relative distance anywhere, ever. Unless you make one up.

The application is not interested in something you make up but in
sensor readings.

If it is interested in data you make up it can use the pointer position.

Thanks

Michal


Re: Wayland Relative Pointer API Progress

2015-04-20 Thread Michal Suchanek
On 20 April 2015 at 10:48, Pekka Paalanen ppaala...@gmail.com wrote:
 On Mon, 20 Apr 2015 10:13:34 +0200
 Michal Suchanek hramr...@gmail.com wrote:

 On 20 April 2015 at 09:36, Pekka Paalanen ppaala...@gmail.com wrote:
  On Sun, 19 Apr 2015 09:46:39 +0200
  Michal Suchanek hramr...@gmail.com wrote:
 
  So the device is always absolute and interpretation varies.
 
  I disagree.
 
  Let's take a mouse, optical or ball, doesn't matter. What you get out
  is a position delta over time. This is also know as velocity. Sampling
  rate affects the scale of the values, and you cannot reasonably define
  a closed range for the possible values. There is no home position. All

 There is a home position. That is when you do not move the mouse. The
 reading is then 0.

 That is not a unique position, hence it cannot be a home position. That
 is only a unique velocity. By definition, if your measurement is a
 velocity, it does not directly give you an absolute position.

 When we talk about absolute, we really mean absolute position.

And what does the absolute position of a sensor somewhere outside of
the PC give you?

A trackball and a touchpad have as absolute a position as a joystick.

A trackball measures velocity, a touchpad finger position(s), a
joystick stick position.

None of these is almost ever used for absolute input, mapping a
particular reading of a sensor to a particular screen coordinate.


  A mouse could be an absolute device only if you were never able to lift
  it off the table and move it without it generating motion events. This
  is something you cannot do with an absolute device like a joystick.

 You are too much fixed on the construction of the sensor. Mouse is a
 velocity sensor similar to some nunchuck or whatever device with
 reasonable precision accelerometer. That you can and do lift it off
 the table is only relevant to how you use such sensor in practice.

 Accelerometers measure acceleration. Acceleration, like velocity, is
 not a position. It does not give you an absolute position directly.

And what is the practical impact of accelerometers not giving an
absolute position, compared to a joystick?

A joystick can stay in an extreme position; a mouse cannot. But if you
take a nunchuck attached to a string and rotate it above your head,
the reading stays in an extreme position all the same.

There is no sense in labeling the sensor reading itself as absolute or
relative. Either way you get some number in unknown units which you
calibrate to get usable results. You have no idea where the stick is
from the numbers you get. And there is absolutely no point caring. It
may make sense for a particular application and no sense for another.

Thanks

Michal


Re: Wayland Relative Pointer API Progress

2015-04-20 Thread Michal Suchanek
On 20 April 2015 at 09:36, Pekka Paalanen ppaala...@gmail.com wrote:
  On 18 April 2015 at 16:58, x414e54 x414...@linux.com wrote:

  USB HID specifications define a pointer and a mouse as two completely
  different inputs. A mouse can be a used as a pointer because it is
  pushing the cursor around but the pointer points at a specific
  location.

 Okay. Using different definitions for terms from different places and
 interpreting the terms used by other people with your own different
 definitions is obviously going to cause disagreement.

 I explained what a wl_pointer in Wayland terms is in another email.
 Sounds like it is specifically not a HID pointer device.


 On Sun, 19 Apr 2015 09:46:39 +0200
 Michal Suchanek hramr...@gmail.com wrote:

 So the device is always absolute and interpretation varies.

 I disagree.

 Let's take a mouse, optical or ball, doesn't matter. What you get out
 is a position delta over time. This is also know as velocity. Sampling
 rate affects the scale of the values, and you cannot reasonably define
 a closed range for the possible values. There is no home position. All

There is a home position. That is when you do not move the mouse. The
reading is then 0.

And there is a range. The construction of the mouse sensor defines the
maximum speed measurable in hardware. Although this speed is not given
in mouse specifications, on many low-end mice the threshold is
actually reachable.

 this reads to me as relative. The home position is the important
 thing, and that the home position is observable by the human user.

Indeed, and both a joystick and a mouse have a home position.


 Take a joystick. The stick has a home position, the center. You can
 only tilt the stick up to it's hardware limits. Those limits are well
 established and obvious to the human using the stick without knowing
 anything else than just looking at the device. The measurements you get
 tell you the position of the stick. Sampling rate does not affect the
 readings, and they are not related to time. Therefore the readings are
 not velocity but position. This is what I would call absolute.

Sampling rate does not actually affect measured speed either, as long
as what you measure is speed. It affects the distance measured within
the sampling period, so you have to take the sampling period into
account when determining speed. And since the sampling period is
typically fixed for a mouse, what you get is a sensor reading which is
directly comparable with any other reading from the same sensor: the
distance the mouse moved in the sampling interval, or equivalently the
mouse movement speed in some unspecified units.


 Yes, the trackpoint has been raised here before, and it seems much
 closer to a joystick than a traditional mouse. That's ok, you probably
 could use it as a joystick, since it does have a home position that is
 obvious to a human user. Like you said, for trackpoints the absolute
 measurement is only interpreted as a velocity through some
 non-decreasing function.

The practical difference between a mouse and a joystick is that you
can move the stick to an extreme position and hold it there, which is
not possible with a mouse. That's why a trackpoint is a joystick,
unless the hardware cooks the stick data in a very weird way.


 A mouse could be an absolute device only if you were never able to lift
 it off the table and move it without it generating motion events. This
 is something you cannot do with an absolute device like a joystick.

You are too fixated on the construction of the sensor. A mouse is a
velocity sensor, similar to a nunchuck or any other device with a
reasonably precise accelerometer. That you can and do lift it off the
table is only relevant to how you use such a sensor in practice.

Or, by your definition of relative, is a trackball bolted to the table
an absolute input device because you cannot lift it?


 You are trying to make a distinction that is only relevant to use of
 the device readings for generating pointer motion events but otherwise
 does not exist.

 Converting one input device to emulate another (trackpoint - mouse,
 touchpad - mouse, keyboard - mouse, mouse - keyboard, mouse -
 joystick) is one thing. I don't think that is on topic here.

 A mouse is inherently a relative input device. What we're discussing
 here is exposing the relative measurements to apps, rather than the
 absolute position that the compositor manufactures by integrating over
 the relative measurements.

But that's confusing things. A mouse is as absolute as a joystick.
Compositor input handling is all about converting absolute sensor data
into relative pointer movement, because for most sensors the sensor
range cannot practically be mapped 1-to-1 to absolute screen
coordinates.

What the programs that eschew this conversion want is access to the
raw, unconverted sensor readings as far as possible, so that they can
convert them to other input such as scene rotation. And they want to
convert the sensor reading to relative or absolute

Re: Wayland Relative Pointer API Progress

2015-04-19 Thread Michal Suchanek
On 19 April 2015 at 06:15, x414e54 x414...@linux.com wrote:
 On Sun, Apr 19, 2015 at 12:45 AM, Michal Suchanek hramr...@gmail.com wrote:
 On 18 April 2015 at 16:58, x414e54 x414...@linux.com wrote:



 A joystick does not necessarily have 2 axis and in most cases yes they
 are reporting an absolute position of the axis in the driver but it
 does not necessarily mean that the the hardware is absolute. For
 example if a joystick is using the same slotted optical system as the
 ball mice then this would be measuring relative motion and using it to
 calculate the absolute axis value, under some circumstances the
 position could become out of sync until re-calibrated by moving to the
 joystick to the maximum and minimum values for all axes or having it
 automatically re-center.


 USB HID specifications define a pointer and a mouse as two completely
 different inputs. A mouse can be a used as a pointer because it is
 pushing the cursor around but the pointer points at a specific
 location.

 And there is no practical way to point with a mouse to a specific
 location. Nor is there for most joysticks because they are not precise
 enough but you technically could map the stick excentricity to screen
 coordinates. Similarly a small touchpad has no practical mapping of
 touch coordinates to screen coordinates but for a big graphics tablet
 or touchscreen surface this can be done.

 The device is never relative, only interpretation of the obtained data is.

 Thanks

 Michal

 Rather than waste time on this I will just direct you over the the
 universal teacher Google.

 relative device is probably the best search term.

yes, and all the articles I found

like https://en.wikipedia.org/wiki/Input_device
https://msdn.microsoft.com/en-us/library/windows/desktop/ee418779%28v=vs.85%29.aspx

boil down to the fact that the device reports some value which can be
interpreted as a relative pointer-position increment, or reported as
the absolute value read from the sensor.

So the device is always absolute and the interpretation varies.

You are trying to make a distinction that is only relevant to the use
of the device readings for generating pointer motion events, but
otherwise does not exist.

Thanks

Michal


Re: Wayland Relative Pointer API Progress

2015-04-18 Thread Michal Suchanek
On 17 April 2015 at 12:52, Hans de Goede hdego...@redhat.com wrote:
 Hi,


 On 17-04-15 11:47, Michal Suchanek wrote:

 On 17 April 2015 at 09:11, Pekka Paalanen ppaala...@gmail.com wrote:

 On Fri, 17 Apr 2015 13:43:11 +0900
 x414e54 x414...@linux.com wrote:

 Thank you for the comments.
 I do have a few counterpoints but I will leave after that.


 Not sure an IR/laser/wii mote pointer should even be considered a
 relative pointer since they operate in absolute coordinates. Given
 this, there is no set position hint to consider. Transmitting
 acceleramoter data via a relative pointer doesn't sound reasonable.


 I think this is the issue right here. Pointers are not relative, mice
 are not pointers.


 What definition of a pointer are you using?

 The definition Wayland uses for a wl_pointer is a device that:
 - requires a cursor image on screen to be usable
 - the physical input is relative, not absolute

 This definition is inspired by mice, and mice have been called pointer
 devices, so we picked the well-known name pointer for mice-like
 devices.

 Specifically, a pointer is *not* a device where you directly point a
 location on screen, like a touchscreen for example. For touchscreens,
 there is a separate protocol wl_touch.

 For drawing tablets, there will be yet another procotol.

 Joysticks or gamepads fit into none of the above. For the rest of the
 conversation, you should probably look up the long gamepad protocol
 discussions from the wayland-devel mailing list archives.


 And how is a joystick different from a trackpoint, exactly?

 It uses different hardware interface and later different software
 interface but for no good reason. It's just 2 axis relative input
 device with buttons. Sure, the big joystick, gamepad directional cap
 and trackpoint are at a different place of the stick size scale and
 might have different hardware sensors which should be reflected with
 different acceleration settings but ultimately it's the same kind of
 device.


 Actually joystick analog inputs are absolute not relative. They give a value
 for exactly how much the stick has moved from the center.

 Except for dpads which are really buttons not relative axis, so joysticks
 really are pretty much not like trackpoints in anyway.


Hi,

then actually mice are absolute, not relative. They have two axes that
measure absolute ball rotation speed in two directions, just as a
joystick has two axes that measure absolute stick eccentricity.

Thanks

Michal


Re: Wayland Relative Pointer API Progress

2015-04-18 Thread Michal Suchanek
On 18 April 2015 at 16:58, x414e54 x414...@linux.com wrote:
 Hi,

 then actually mice are absolute not relative. They have two axis that
 measure absolute ball rotation speed in two directions just like
 joystick has two axis that measure absolute stick excentricity.

 Thanks

 Michal

 This is not really constructive to the api but:

 Mice are not absolute because they are just measuring movement of a
 surface relative to itself, when you are not moving the mouse there is
 no axis value.

There is: the value is 0. Which is the same as with a properly
calibrated joystick - when you release it, it returns to the position
where the reading of both axes is 0.

And that is the interface expected most of the time when using a
grabbed mouse with a hidden cursor.

There is even a parallel in the mouse world to removing the springs
from a joystick - there were Russian trackballs with a really big
metal ball which would keep spinning due to its momentum until stopped.

 You could take the ball out and turn it over and put it
 back in the absolute position of the ball has changed but the mouse
 axis has not. For some ball mice the rollers measure the movement of a

That's because it measures the speed of the ball, not its position.
When you roll the ball outside of the mouse, it cannot measure it.

 wheel with small holes inside it, when it moves it breaks the
 connection the chip registers this and uses it to calculate the delta
 for that axis. Optical mice are just taking small images of the
 surface and using that information to calculate a distance moved,
 again relative motion.

However, the reported value is the absolute speed of the mouse as
measured against the surface. There is no more relativity here than in
measuring the eccentricity of a stick relative to its central position.


 A joystick does not necessarily have 2 axis and in most cases yes they
 are reporting an absolute position of the axis in the driver but it
 does not necessarily mean that the the hardware is absolute. For
 example if a joystick is using the same slotted optical system as the
 ball mice then this would be measuring relative motion and using it to
 calculate the absolute axis value, under some circumstances the
 position could become out of sync until re-calibrated by moving to the
 joystick to the maximum and minimum values for all axes or having it
 automatically re-center.


 USB HID specifications define a pointer and a mouse as two completely
 different inputs. A mouse can be a used as a pointer because it is
 pushing the cursor around but the pointer points at a specific
 location.

And there is no practical way to point with a mouse at a specific
location. Nor is there for most joysticks, because they are not
precise enough, though you technically could map the stick
eccentricity to screen coordinates. Similarly, a small touchpad has no
practical mapping of touch coordinates to screen coordinates, but for
a big graphics tablet or touchscreen surface this can be done.

The device is never relative; only the interpretation of the obtained data is.

Thanks

Michal


Re: Compositor grabs (was: Re: [PATCH] protocol: Add DnD actions)

2015-04-18 Thread Michal Suchanek
On 17 April 2015 at 23:40, Bill Spitzak spit...@gmail.com wrote:
 On 04/17/2015 05:16 AM, Carlos Garnacho wrote:

 Let's expand on that example, maybe far-streched, but certainly possible:
 - I'm manipulating a client window with 2 fingers on the touchscreen
 (say zooming an image)
 - Any other interaction on the client makes it pop up an xdg_popup
 (say a third touch, a key combo, or the pointer)
 - Q: what happens with the two first touches?


 The touches held down should continue to go to the surface they were going
 to before, while the events related to the event that triggered the grab
 will go to the grab client.

 When the two touches are fully released then the next press of them will go
 to the grab client.

Yes, I think from the user's point of view this is the only option that makes sense.

Making the gesture result dependent on the exact touch and release
event order will lead to very inconsistent behaviour depending on
minor timing changes in how the touches are performed.


 Wayland could guarantee that the release and drag events go to the same
 client that got the press event. Grabs just dictate where new press events
 go.

 I don't think this situation will happen much, due to server-induced grabs
 which are fully synchronous, so you cannot press two buttons in two
 different widgets and get two grabs. Instead one of them will get the grab
 and that one will see the other button.

Actually I think this should be possible.

Consider a hypothetical photo organizing application which has photo
preview in one widget and thumbnail organizer in another widget.

As a user of such an app, you should be able to zoom and rotate the
preview and drag the thumbnails around, and to different windows (eg a
file manager), at the same time. If starting a drag in the thumbnail
management widget (accidentally or intentionally) cancels the zoom
gesture in the preview, you are going to have a hell of a time trying
to explain to a user WTF is going on.

Thanks

Michal


Re: Wayland Relative Pointer API Progress

2015-04-17 Thread Michal Suchanek
On 17 April 2015 at 09:11, Pekka Paalanen ppaala...@gmail.com wrote:
 On Fri, 17 Apr 2015 13:43:11 +0900
 x414e54 x414...@linux.com wrote:

 Thank you for the comments.
 I do have a few counterpoints but I will leave after that.

 
  Not sure an IR/laser/wii mote pointer should even be considered a
  relative pointer since they operate in absolute coordinates. Given
  this, there is no set position hint to consider. Transmitting
  acceleramoter data via a relative pointer doesn't sound reasonable.
 

 I think this is the issue right here. Pointers are not relative, mice
 are not pointers.

 What definition of a pointer are you using?

 The definition Wayland uses for a wl_pointer is a device that:
 - requires a cursor image on screen to be usable
 - the physical input is relative, not absolute

 This definition is inspired by mice, and mice have been called pointer
 devices, so we picked the well-known name pointer for mice-like
 devices.

 Specifically, a pointer is *not* a device where you directly point a
 location on screen, like a touchscreen for example. For touchscreens,
 there is a separate protocol wl_touch.

 For drawing tablets, there will be yet another procotol.

 Joysticks or gamepads fit into none of the above. For the rest of the
 conversation, you should probably look up the long gamepad protocol
 discussions from the wayland-devel mailing list archives.

And how is a joystick different from a trackpoint, exactly?

It uses a different hardware interface and, later, a different
software interface, but for no good reason. It's just a 2-axis
relative input device with buttons. Sure, the big joystick, the
gamepad directional cap and the trackpoint are at different places on
the stick size scale and might have different hardware sensors, which
should be reflected in different acceleration settings, but ultimately
it's the same kind of device.


 A fundamental difference between a wiimote and a pointer, as far as I
 understand, is that wiimote might be off-screen while a pointer never
 can. You also would not unfocus a wiimote from an app window just
 because it went off-screen or off-window, right? Button events should
 still be delivered to the app? A Pointer will unfocus, because without
 grabs, the focus is expected to shift to whatever is under the pointer.

And why should a wiimote not unfocus unless grabbed?

I am not sure how a wiimote actually works, but from your comments it
seems it's some absolute pointing device with buttons. I should be
able to use an absolute pointing device with buttons as pointer input
if I so choose. In fact, I am using my Wacom tablet that way right now
in X11, and it happens to be an absolute pointing device with buttons.
Due to the aspect-ratio mismatch my pointer can technically go
off-screen, and I will not change to a windowing system that does not
allow that. Similarly, I should be able to map the Wacom tablet for
exclusive use with a particular application window, or with the
application window currently in focus. I do not see any reason why the
wiimote should be special and different and only allow mapping to a
particular application.

Thanks

Michal


Re: Wayland Relative Pointer API Progress

2015-04-17 Thread Michal Suchanek
On 17 April 2015 at 12:52, Hans de Goede hdego...@redhat.com wrote:
 Hi,


 On 17-04-15 11:47, Michal Suchanek wrote:

 On 17 April 2015 at 09:11, Pekka Paalanen ppaala...@gmail.com wrote:

 On Fri, 17 Apr 2015 13:43:11 +0900
 x414e54 x414...@linux.com wrote:

 Thank you for the comments.
 I do have a few counterpoints but I will leave after that.


 Not sure an IR/laser/wii mote pointer should even be considered a
 relative pointer since they operate in absolute coordinates. Given
 this, there is no set position hint to consider. Transmitting
 acceleramoter data via a relative pointer doesn't sound reasonable.


 I think this is the issue right here. Pointers are not relative, mice
 are not pointers.


 What definition of a pointer are you using?

 The definition Wayland uses for a wl_pointer is a device that:
 - requires a cursor image on screen to be usable
 - the physical input is relative, not absolute

 This definition is inspired by mice, and mice have been called pointer
 devices, so we picked the well-known name pointer for mice-like
 devices.

 Specifically, a pointer is *not* a device where you directly point a
 location on screen, like a touchscreen for example. For touchscreens,
 there is a separate protocol wl_touch.

 For drawing tablets, there will be yet another procotol.

 Joysticks or gamepads fit into none of the above. For the rest of the
 conversation, you should probably look up the long gamepad protocol
 discussions from the wayland-devel mailing list archives.


 And how is a joystick different from a trackpoint, exactly?

 It uses different hardware interface and later different software
 interface but for no good reason. It's just 2 axis relative input
 device with buttons. Sure, the big joystick, gamepad directional cap
 and trackpoint are at a different place of the stick size scale and
 might have different hardware sensors which should be reflected with
 different acceleration settings but ultimately it's the same kind of
 device.


 Actually joystick analog inputs are absolute not relative. They give a value
 for exactly how much the stick has moved from the center.

 Except for dpads which are really buttons not relative axis, so joysticks
 really are pretty much not like trackpoints in anyway.


Do you mean that the absolute trackpoint eccentricity is somehow
translated to a relative motion delta in hardware, so that it looks
like a mouse although it is in fact a joystick?

Thanks

Michal


Re: Compositor grabs (was: Re: [PATCH] protocol: Add DnD actions)

2015-04-17 Thread Michal Suchanek
On 17 April 2015 at 14:16, Carlos Garnacho carl...@gnome.org wrote:
 Hey Jonas,

 This is drifting a bit off the topic of the original thread, better to
 spin this off. I'll reply to the DnD bits in another email.

 On Fri, Apr 17, 2015 at 9:50 AM, Jonas Ådahl jad...@gmail.com wrote:
 On Thu, Apr 16, 2015 at 12:55:31PM +0200, Carlos Garnacho wrote:
 Hey Jonas,

 On Thu, Apr 16, 2015 at 10:15 AM, Jonas Ådahl jad...@gmail.com wrote:

 More generally, I have the opinion that compositors grabs should
 behave all consistently, as in:

 - Ensuring clients reset all input state (we eg. don't cancel ongoing
 touches when xdg_popup/dnd/... grabs kick in)

 What does client reset all input state mean? What state can a client
 reset?

 Let's expand on that example, maybe far-streched, but certainly possible:
 - I'm manipulating a client window with 2 fingers on the touchscreen
 (say zooming an image)
 - Any other interaction on the client makes it pop up an xdg_popup
 (say a third touch, a key combo, or the pointer)
 - Q: what happens with the two first touches?

 Ideally, these touches should be cancelled on the first surface
 (wl_touch.cancelled seems to be per client, not per-surface though),
 and stay non-reactive within the grab, so they wouldn't trigger
 anything unintended (they're implicitly grabbed on another surface
 after all)

 Currently, on the weston code, focusing a bit on the all-touch case,
 it actually happens the worst that could happen, the xdg_popup touch
 grab redirects already started touch sequences to the grabbing surface
 right away, and the original surface will be deaf to them of a sudden,
 leading to inconsistent state on both the original and the grabbing
 surface wrt those touch sequences. The DnD touch grab doesn't fare
 much better, it will ignore other touches than the one starting the
 drag, so the pre-grab touches would effectively go nowhere, and AFAICS
 similar issues arise with pointer grabs vs touch.

 With keyboards, it happens likewise, if compositors are to possibly
 consume events there, focus should move out of the previous surface.
 IMO, any grabbing model that does not redirect all input, nor ensures
 a coherent state in clients is calling for trouble...

 In the X11 world, this would roughly be a Virtual Core
 Pointer+Keyboard grab (not that touch and active grabs are trouble
 free in X11, but...), GTK+ for example does grab both devices on every
 of those grabbing places wayland/xdg protocols are trying to cater for
 (I've even pondered about adding a gdk_device_grab_pair() for years).

 I think some consistent model should be devised here, and embedded
 into the protocol (docs).


The serious problem with X11 grabs is that they are completely
independent of the event that triggered them and can only be released
by the application that started them.

So it happens that an application that gets stuck due to a code error,
or that is stopped in a debugger while a grab is active, never
releases the grab, which

1) prevents other applications from receiving input
2) prevents further grabs

It might be worth considering whether there is a generic enough drag
semantic that the grab could be handled in the compositor, outside of
application code, even with click-lock and whatnot.

During a grab you might want to

1) process seemingly unrelated events in the grabbing application (eg.
Esc for canceling the action in the application)
2) process 'global' window management keybindings (eg. close any
random application, including the one that is grabbing input)
3) perform window management actions (eg. raise the intended drag
target with mouse movement, if you have bindings for raising windows
with mouse movement)
 - note that you might want to be able to raise windows with DnD
active but not with a slider active

What happens with unrelated input that is not consumed by the
compositor is questionable. Canceling all but the input that started
the grab might seem like a good idea, but what if I am manipulating a
window with two fingers and then get the idea to slide a slider with a
third finger?

IMHO the two-finger gesture should continue at the very least after I
release the slider, but preferably even while I manipulate the slider.

Thanks

Michal


Re: Wayland Relative Pointer API Progress

2015-04-17 Thread Michal Suchanek
On 17 April 2015 at 14:37, Hans de Goede hdego...@redhat.com wrote:
 Hi,


 On 17-04-15 13:17, Michal Suchanek wrote:

 On 17 April 2015 at 12:52, Hans de Goede hdego...@redhat.com wrote:

 Hi,


 On 17-04-15 11:47, Michal Suchanek wrote:


 On 17 April 2015 at 09:11, Pekka Paalanen ppaala...@gmail.com wrote:


 On Fri, 17 Apr 2015 13:43:11 +0900
 x414e54 x414...@linux.com wrote:

 Thank you for the comments.
 I do have a few counterpoints but I will leave after that.


 Not sure an IR/laser/wii mote pointer should even be considered a
 relative pointer since they operate in absolute coordinates. Given
 this, there is no set position hint to consider. Transmitting
 acceleramoter data via a relative pointer doesn't sound reasonable.


 I think this is the issue right here. Pointers are not relative, mice
 are not pointers.



 What definition of a pointer are you using?

 The definition Wayland uses for a wl_pointer is a device that:
 - requires a cursor image on screen to be usable
 - the physical input is relative, not absolute

 This definition is inspired by mice, and mice have been called pointer
 devices, so we picked the well-known name pointer for mice-like
 devices.

 Specifically, a pointer is *not* a device where you directly point a
 location on screen, like a touchscreen for example. For touchscreens,
 there is a separate protocol wl_touch.

 For drawing tablets, there will be yet another procotol.

 Joysticks or gamepads fit into none of the above. For the rest of the
 conversation, you should probably look up the long gamepad protocol
 discussions from the wayland-devel mailing list archives.



 And how is a joystick different from a trackpoint, exactly?

 It uses different hardware interface and later different software
 interface but for no good reason. It's just 2 axis relative input
 device with buttons. Sure, the big joystick, gamepad directional cap
 and trackpoint are at a different place of the stick size scale and
 might have different hardware sensors which should be reflected with
 different acceleration settings but ultimately it's the same kind of
 device.



 Actually joystick analog inputs are absolute not relative. They give a
 value
 for exactly how much the stick has moved from the center.

 Except for dpads which are really buttons not relative axis, so joysticks
 really are pretty much not like trackpoints in anyway.


 Do you mean that the absolute trackpoint excentricity is somehow
 translated to relative motion delta in hardware so that it does look
 like a mouse although it is in fact a joystick?


 Yes.

 Also have you ever used a trackpoint it is really nothing like a joystick,
 with a joystick you move the stick and then it stays in position (there
 are springs to center the stick when you let go, but you can remove those
 and everything will still work just fine).

 Where as a trackpoint is more of a presure sensor which senses how much you
 push against it in a certain direction, it does not actually move.

That's an implementation detail. The input concept is the same. And
yes, it might be hard to see the similarity between a full-size
joystick and a trackpoint. But when you throw in all those GPIO mini
joysticks, the gamepad directional joystick-like inputs, half-size
joysticks and arcade sticks, you can see that there is a concept of
stick input that scales to different sizes with different limitations.

Thanks

Michal


Re: Compositor grabs (was: Re: [PATCH] protocol: Add DnD actions)

2015-04-17 Thread Michal Suchanek
On 17 April 2015 at 16:15, Carlos Garnacho carl...@gnome.org wrote:
 Hey Michal,

 On Fri, Apr 17, 2015 at 2:47 PM, Michal Suchanek hramr...@gmail.com wrote:
 snip

 In the X11 world, this would roughly be a Virtual Core
 Pointer+Keyboard grab (not that touch and active grabs are trouble
 free in X11, but...), GTK+ for example does grab both devices on every
 of those grabbing places wayland/xdg protocols are trying to cater for
 (I've even pondered about adding a gdk_device_grab_pair() for years).

 I think some consistent model should be devised here, and embedded
 into the protocol (docs).


 The serious problem with X11 grabs is that they are completely
 independent of the event that triggered them and can only be released
 by the application that started them.

 So it happens that an application that gets stuck due to code error or
 is running in a debugger at the time a grab is active never releases
 the grab which

 1) prevents other applications from receiving input
 2) prevents further grabs

 It might be worth considering if there is generic enough drag semantic
 that the grab could be handled in compositor outside of application
 code, even with click-lock and whatnot.

 Nothing prevents wayland compositors today from undoing grabs when
 clients get destroyed, or surfaces don't respond to pings. Affecting
 this proposal, the pointer should re-enter the surface underneath and
 keyboard focus restablished after the grab is broken.

Nothing prevents the X server from doing that either. It hopefully
breaks grabs initiated by clients when they are destroyed, too.
However, there is no mechanism for breaking a client's grab other than
destroying the client, *and* destroying the client cannot be done from
within the X session because the session is grabbed.

Thanks

Michal


Re: Compositor grabs (was: Re: [PATCH] protocol: Add DnD actions)

2015-04-17 Thread Michal Suchanek
On 17 April 2015 at 14:16, Carlos Garnacho carl...@gnome.org wrote:
 Hey Jonas,

 This is drifting a bit off the topic of the original thread, better to
 spin this off. I'll reply to the DnD bits in another email.

 On Fri, Apr 17, 2015 at 9:50 AM, Jonas Ådahl jad...@gmail.com wrote:
 On Thu, Apr 16, 2015 at 12:55:31PM +0200, Carlos Garnacho wrote:
 Hey Jonas,

 On Thu, Apr 16, 2015 at 10:15 AM, Jonas Ådahl jad...@gmail.com wrote:

 snip

 
  I'd have to agree on that it doesn't seem like the best thing to let the
  compositor choose the preferred action. Having it apply compositor
  specific policy given what the keyboard state or similar will probably
  never work out very well, given that for example what modifier state
  means what type of action is very application dependent.
 
  On the other hand, I'm not sure we can currently rely on either side
  having keyboard focus during the drag. In weston the source will have the
  focus because starting the drag was done with a click which gave the
  surface keyboard focus implicitly, but what'd happen if the compositor
  has keyboard-focus-follows-mouse? We could probably say that drag implies
  an implicit grab on another device on the same seat to enforce no
  changing of keyboard focus, but not sure that is better.

 In gtk+/gnome we currently have the following keybindings active during DnD:

 - Cursor keys move the drag point, modifiers affect speed
 - Esc key cancels drag
 - Modifiers alone pick an action from the offered list

 So ok, the latter is dubious to punt to compositors, but there's
 basically no other choice with the first two.

 More generally, I have the opinion that compositors grabs should
 behave all consistently, as in:

 - Ensuring clients reset all input state (we eg. don't cancel ongoing
 touches when xdg_popup/dnd/... grabs kick in)

 What does "clients reset all input state" mean? What state can a client
 reset?

 Let's expand on that example, maybe far-fetched, but certainly possible:
 - I'm manipulating a client window with 2 fingers on the touchscreen
 (say zooming an image)
 - Any other interaction on the client makes it pop up an xdg_popup
 (say a third touch, a key combo, or the pointer)
 - Q: what happens with the two first touches?

 Ideally, these touches should be cancelled on the first surface
 (wl_touch.cancelled seems to be per client, not per-surface though),
 and stay non-reactive within the grab, so they wouldn't trigger
 anything unintended (they're implicitly grabbed on another surface
 after all)


The other option is to keep the implicit grab for those touches. You
could just keep those two fingers grabbed on the old surface until
released, but if the old surface wants some other input there is no way
to tell. The user can, however, potentially get rid of the popup/dnd
and continue the operation that was started in the previous grab.
That may not make much sense with a modal popup but might actually
work with dnd.

Thanks

Michal


Re: libinput: the road to 1.0

2015-02-23 Thread Michal Suchanek
Hello,

I heard there is an attempt to collect a database of mouse speeds (DPI)
so that all mice behave the same.

What are the semantics of speed for trackballs? Technically the DPI of
scanning the ball movement can be determined and is often part of the
specification, but the perceived speed will likely depend on ball size
and mounting as well.
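A rough illustration of the point above: even with the sensor resolution (DPI/CPI) known, the counts generated per rotation of the ball scale with ball circumference, so two trackballs with identical sensor DPI can feel quite different. The sizes and DPI value below are made-up examples.

```python
# Illustrative arithmetic only: counts produced by one full rotation of
# the ball, for a given sensor resolution (counts per inch of surface
# travel under the sensor).  Numbers are invented, not from any database.
import math

def counts_per_ball_turn(cpi, ball_diameter_in):
    # Surface travel under the sensor for one full rotation of the ball.
    return cpi * math.pi * ball_diameter_in

small = counts_per_ball_turn(400, 1.34)   # ~34 mm ball
large = counts_per_ball_turn(400, 2.24)   # ~57 mm ball
```

The larger ball yields far more counts per turn at the same DPI, which is why a DPI database alone may not normalize trackball feel.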

Thanks

Michal

On 23 February 2015 at 04:34, Peter Hutterer peter.hutte...@who-t.net wrote:
 Just as a heads-up, here's a short summary on what the plan is for libinput.
 There are three milestones that are somewhat independent of each other:
 * API/ABI stability promise
 * libinput 1.0
 * tablet, buttonset and touchpad gesture support

 I made vague promises (on private channels) that we'll have a stable API by
 the end of February. At the moment it looks like 0.11 is that API, there may
 be another change in the next week or so but right now it looks like we're
 good. I'll probably remove libinput_device_has_button() with one last soname
 bump to have a clean start so you may want to update that soon.
 So summary: the stable API/ABI is nigh, and 0.12 will likely have the last
 API changes (if any).

 the timeline for libinput 1.0 is currently unclear but it's just a number
 anyway once we have a stable API. I expect it to happen either around 0.13
 or 0.14 though, once parts of the gesture code have been merged and
 polished.

 tablet support: won't be in 0.12 and probably not in 1.0 either. one reason
 we're frantically trying to get it almost finished is so we see whether
 adding tablets would require changes to the rest of the API (as opposed to
 just additions). atm it looks like what is left are merely additions, so
 we're good. either way, still needs more polishing.

 buttonset support: like tablet support, but it's even less mature and we
 don't quite know what the API will be yet. but again, it doesn't look like
 we'll need to change the main API. definitely not 0.12 or 1.0.

 gesture support: we'll merge much of the gesture handling without the public
 API for 0.12 or 0.13. gestures may make 1.0, as an addition on top of the
 current API. we'll see.

 Cheers,
Peter


Re: X11 weston lockup

2015-02-09 Thread Michal Suchanek
Hello

On 9 February 2015 at 19:02, Daniel Stone dan...@fooishbar.org wrote:
 Hi,

 On 9 February 2015 at 13:23, Michal Suchanek hramr...@gmail.com wrote:
 I don't see any indication of dri3 being used or not in the weston
 output

 It's entirely silent in Weston because it's just an EGL implementation detail.

 so I checked the Xorg log and it says that dri2 is used:

 Apparently the Xorg log doesn't tell you either. Could you please try
 building 
 http://cgit.freedesktop.org/xorg/driver/xf86-video-intel/tree/tools/dri3info.c
 (standalone tool) to see if you're using DRI3, and/or running weston
 with LIBGL_DRI3_DISABLE=1 to see if that helps?

I get

Unable to connect to DRI3 on display ':0'

from dri3info.

Also I retried running weston and while the mouse cursor updates are
still very slow I cannot reproduce the lockup. Instead weston shows
the 'screen locker' after a while which did not happen before.

Thanks

Michal


Re: X11 weston lockup

2015-02-09 Thread Michal Suchanek
On 9 February 2015 at 20:37, Daniel Stone dan...@fooishbar.org wrote:
 Hi,

 On 9 February 2015 at 19:16, Michal Suchanek hramr...@gmail.com wrote:
 On 9 February 2015 at 19:02, Daniel Stone dan...@fooishbar.org wrote:
 On 9 February 2015 at 13:23, Michal Suchanek hramr...@gmail.com wrote:
 I don't see any indication of dri3 being used or not in the weston
 output

 It's entirely silent in Weston because it's just an EGL implementation 
 detail.

 so I checked the Xorg log and it says that dri2 is used:

 Apparently the Xorg log doesn't tell you either. Could you please try
 building 
 http://cgit.freedesktop.org/xorg/driver/xf86-video-intel/tree/tools/dri3info.c
 (standalone tool) to see if you're using DRI3, and/or running weston
 with LIBGL_DRI3_DISABLE=1 to see if that helps?

 I get

 Unable to connect to DRI3 on display ':0'

 from dri3info.

 Also I retried running weston and while the mouse cursor updates are
 still very slow I cannot reproduce the lockup. Instead weston shows
 the 'screen locker' after a while which did not happen before.

 Are you running Weston from git? If so, could you please try reverting
 to commit 3e4d4bdd (i.e. right before 'compositor-x11: Move the x11
 window close to an idle handler') and see if that changes anything;
 also, if you're running Wayland itself from git, reverting to 7575e2ea
 (i.e. right before 'event-loop: Dispatch idle callbacks twice').

No, I am running weston 1.6.0 as packaged in Debian (as the log shows).

OK, it's Debian. How obsolete is that?

Thanks

Michal


Re: X11 weston lockup

2015-02-09 Thread Michal Suchanek
On 9 February 2015 at 12:35, Michal Suchanek hramr...@gmail.com wrote:
 Hello,

 I am running X11 on an AMD Redwood card with XMonad window manager and 
 xcompmgr.
Actually, I forgot I replaced the graphics card:


Date: 2015-02-09 CET
[12:15:49.609] weston 1.6.0
   http://wayland.freedesktop.org/
   Bug reports to:
https://bugs.freedesktop.org/enter_bug.cgi?product=Wayland&component=weston&version=1.6.0
   Build: 1.5.93-5-g2858cc2 configure.ac: bump version to
1.6.0 (2014-09-19 13:40:14 +0300)
[12:15:49.609] OS: Linux, 3.18.0-trunk-amd64, #1 SMP Debian
3.18.3-1~exp1 (2015-01-18), x86_64
[12:15:49.609] Starting with no config file.
[12:15:49.609] Loading module '/usr/lib/x86_64-linux-gnu/weston/x11-backend.so'
[12:15:49.611] initializing x11 backend
[12:15:49.612] Loading module '/usr/lib/x86_64-linux-gnu/weston/gl-renderer.so'
[12:15:49.629] warning: EGL_EXT_buffer_age not supported. Performance
could be affected.
[12:15:49.629] warning: EGL_EXT_swap_buffers_with_damage not
supported. Performance could be affected.
[12:15:49.629] Using gl renderer
[12:15:49.629] launching '/usr/lib/weston/weston-keyboard'
xkbcommon: ERROR: Symbol Alt_L added to modifier map for multiple
modifiers; Using Mod4, ignoring Mod1
xkbcommon: ERROR: Symbol Alt_R added to modifier map for multiple
modifiers; Using Mod4, ignoring Mod1
xkbcommon: ERROR: Symbol Alt_L added to modifier map for multiple
modifiers; Using Mod4, ignoring Mod1
xkbcommon: ERROR: Symbol Alt_R added to modifier map for multiple
modifiers; Using Mod4, ignoring Mod1
xkbcommon: ERROR: Key META added to modifier map for multiple
modifiers; Using Mod4, ignoring Mod1
[12:15:49.693] EGL version: 1.4 (Gallium)
[12:15:49.694] EGL vendor: Mesa Project
[12:15:49.694] EGL client APIs: OpenGL OpenGL_ES OpenGL_ES2 OpenVG
[12:15:49.694] EGL extensions: EGL_WL_bind_wayland_display EGL_KHR_image_base
   EGL_KHR_image_pixmap EGL_KHR_image EGL_KHR_reusable_sync
   EGL_KHR_fence_sync EGL_KHR_surfaceless_context
   EGL_NOK_swap_region EGL_NV_post_sub_buffer
[12:15:49.694] GL version: OpenGL ES 3.0 Mesa 10.3.2
[12:15:49.694] GLSL version: OpenGL ES GLSL ES 3.0
[12:15:49.694] GL vendor: nouveau
[12:15:49.694] GL renderer: Gallium 0.4 on NVC1
[12:15:49.694] GL extensions: GL_EXT_blend_minmax GL_EXT_multi_draw_arrays
   GL_EXT_texture_filter_anisotropic
   GL_EXT_texture_compression_dxt1 GL_EXT_texture_format_BGRA
   GL_OES_compressed_ETC1_RGB8_texture GL_OES_depth24
   GL_OES_element_index_uint GL_OES_fbo_render_mipmap
   GL_OES_mapbuffer GL_OES_rgb8_rgba8 GL_OES_standard_derivatives
   GL_OES_stencil8 GL_OES_texture_3D GL_OES_texture_npot
   GL_OES_EGL_image GL_OES_depth_texture
   GL_OES_packed_depth_stencil GL_EXT_texture_type_2_10_10_10_REV
   GL_OES_get_program_binary GL_APPLE_texture_max_level
   GL_EXT_discard_framebuffer GL_EXT_read_format_bgra
   GL_NV_fbo_color_attachments GL_OES_EGL_image_external
   GL_OES_vertex_array_object GL_ANGLE_texture_compression_dxt3
   GL_ANGLE_texture_compression_dxt5 GL_EXT_texture_rg
   GL_EXT_unpack_subimage GL_NV_draw_buffers GL_NV_read_buffer
   GL_EXT_map_buffer_range GL_OES_depth_texture_cube_map
   GL_OES_surfaceless_context GL_EXT_color_buffer_float
   GL_EXT_separate_shader_objects GL_EXT_shader_integer_mix
[12:15:49.694] GL ES 2 renderer features:
   read-back format: BGRA
   wl_shm sub-image to texture: yes
   EGL Wayland extension: yes
[12:15:49.694] Chosen EGL config details:
   RGBA bits: 8 8 8 0
   swap interval range: 0 - 0
[12:15:49.694] x11 output 1200x1600, window id 50331653
[12:15:49.694] Compositor capabilities:
   arbitrary surface rotation: yes
   screen capture uses y-flip: yes
[12:15:49.694] Loading module
'/usr/lib/x86_64-linux-gnu/weston/desktop-shell.so'
[12:15:49.774] launching '/usr/lib/weston/weston-desktop-shell'


 I tried to run weston under X and noticed that cursor updates are very 
 sluggish.

 After a while any updates in the weston window stopped completely (eg.
 cursor does not move, clock shows old time).

 Is there any point debugging this issue or is it just expected that
 the x11 plugin sometimes breaks?

 Thanks

 Michal


X11 weston lockup

2015-02-09 Thread Michal Suchanek
Hello,

I am running X11 on an AMD Redwood card with XMonad window manager and xcompmgr.

I tried to run weston under X and noticed that cursor updates are very sluggish.

After a while any updates in the weston window stopped completely (eg.
cursor does not move, clock shows old time).

Is there any point debugging this issue or is it just expected that
the x11 plugin sometimes breaks?

Thanks

Michal


Re: X11 weston lockup

2015-02-09 Thread Michal Suchanek
On 9 February 2015 at 12:44, Daniel Stone dan...@fooishbar.org wrote:
 Hi,

 On 9 February 2015 at 11:37, Michal Suchanek hramr...@gmail.com wrote:
 On 9 February 2015 at 12:35, Michal Suchanek hramr...@gmail.com wrote:
 I tried to run weston under X and noticed that cursor updates are very 
 sluggish.

 After a while any updates in the weston window stopped completely (eg.
 cursor does not move, clock shows old time).

 Is there any point debugging this issue or is it just expected that
 the x11 plugin sometimes breaks?

 No, it's definitely not expected. A few of us still use it quite
 often, and not only does it seem to work in general, but it definitely
 should work as well.

 If updates freeze, I expect this is due to EGL blocking. DRI3 seems to
 be the primary cause of this: could you please try to use DRI2 and see
 how that goes?

I don't see any indication of dri3 being used or not in the weston
output so I checked the Xorg log and it says that dri2 is used:
[21.375] (II) [drm] nouveau interface version: 1.2.1
[21.375] (II) Loading sub module dri2
[21.375] (II) LoadModule: dri2
[21.375] (II) Module dri2 already built-in
[21.375] (--) NOUVEAU(0): Chipset: NVIDIA NVC1
[21.375] (II) NOUVEAU(0): Creating default Display subsection in
Screen section
Default Screen Section for depth/fbbpp 24/32
[21.375] (==) NOUVEAU(0): Depth 24, (--) framebuffer bpp 32
...
[21.578] (II) NOUVEAU(0): Channel setup complete.
[21.579] (II) NOUVEAU(0): [COPY] async initialised.
[21.582] (II) NOUVEAU(0): [DRI2] Setup complete
[21.582] (II) NOUVEAU(0): [DRI2]   DRI driver: nouveau
[21.582] (II) NOUVEAU(0): [DRI2]   VDPAU driver: nouveau
[21.810] (II) AIGLX: enabled GLX_MESA_copy_sub_buffer
[21.810] (II) AIGLX: enabled GLX_ARB_create_context
[21.810] (II) AIGLX: enabled GLX_ARB_create_context_profile
[21.810] (II) AIGLX: enabled GLX_EXT_create_context_es2_profile
[21.810] (II) AIGLX: enabled GLX_INTEL_swap_event
[21.810] (II) AIGLX: enabled GLX_SGI_swap_control and GLX_MESA_swap_control
[21.810] (II) AIGLX: enabled GLX_EXT_framebuffer_sRGB
[21.810] (II) AIGLX: enabled GLX_ARB_fbconfig_float
[21.810] (II) AIGLX: GLX_EXT_texture_from_pixmap backed by buffer objects
[21.810] (II) AIGLX: Loaded and initialized nouveau
[21.810] (II) GLX: Initialized DRI2 GL provider for screen 0
[21.811] (II) NOUVEAU(0): NVEnterVT is called.
[21.930] (II) NOUVEAU(0): Setting screen physical size to 211 x 158

Thanks

Michal


Re: Wayland on Beagle board

2011-10-01 Thread Michal Suchanek
2011/10/1 Üstün Ergenoglu ustun.ergeno...@gmail.com:
 Recently, there have been talks about SGX drivers adding support for
 Wayland. I was googling around but couldn't find any specific build
 instructions for omap3. I currently have a Beagle board running Ubuntu
 10.10. I'll be glad to hear if anybody got it working and how.

You can try upgrading to a newer Ubuntu. Ubuntu has some Wayland
packages so if those work for you it would be a good place to start.

HTH

Michal


Re: Window stacking

2011-09-16 Thread Michal Suchanek
On 16 September 2011 11:18, Giovanni Campagna scampa.giova...@gmail.com wrote:


 Sorry, I also assume any task manager will just be part of the
 compositor process. The problem is that the user of the task manager
 probably wants an icon in there that says GIMP even though there are
 perhaps 2 image windows that are raised by it. The best way I can see to
 do this is to have dummy windows that you can also send all the requests
 and notifys to. GIMP would create this, and requests to raise this dummy
 window would instead raise all the real ones.

 GNOME Shell already groups by pid, and allow for actions that affect the
 whole group, without worrying about the group leader window.

Grouping by pid is not portable. The client should be able to define
window groups of windows that logically fit together, be they all (or
some, e.g. when running through a proxy) windows of a single client
going through a single connection, or windows of multiple clients going
through multiple connections that participate in a single apparent
application by way of plugins or whatever.

If window grouping support is wanted, it should be implemented such
that a client can say which group a window belongs to. A window group
is then something that a client should be able to create, with a
handle it can pass to other clients so that they can join the group,
which goes back to managing resources and security.

 In any case, the problem with duplicating the actions client-side is
 that you don't have the complete picture and don't know the effects. For
 example, let's say you click on GIMP and as a result the client asks
 to raise both image windows.
 What if one of them is an another workspace? In current mutter, this
 results in the demands-attention state (which in the shell is translated
 to a GIMP is ready notification).

You also may or may not want to raise the other parts of GIMP. In OS X
there are two operations available, raise window and raise
application, and both have their uses on a hopelessly cluttered desktop.

 Current shell solves this by activating the last window when selecting
 an app, but that's a just one possibility. In GNOME Panel, IIRC, there
 was an option that brought all windows to the current workspace when
 activating a group.


In general the WM is in the position to manage windows because it
knows what windows are actually present on the desktop, what the user
asked to do with which windows, and what window management preferences
are set; applications are in the position to hint how their windows
fit together (or not) and what kind of windows they are.

For a seamless experience there needs to be some cooperation between
the two sides; neither has the full picture.

Thanks

Michal


Re: seprate window management component

2011-09-14 Thread Michal Suchanek
Hello,

On 14 September 2011 16:55, cat zixo...@gmail.com wrote:
 would it be to much trouble to make window management a proxy program?

The wayland server has to know how the windows stack but the clients
are not trusted to tell it how the windows should stack so either the
server has to figure that out by itself or a separate privileged
component (eg. a proxy or a plugin or attached process of some sort)
would need to decide that.

I thought that it would make sense to determine what policy needs to
be decided by this manager and add a protocol for it in Wayland, even
if the default implementation just grants every request an application
ever makes.

When I asked about that it was not outright rejected, but nobody else
thought putting some structure into Wayland would be of any use.

I guess this will be considered only after Wayland evolves as many
warts as X has, and putting some sanity into the protocol will be
impossible at that point for reasons of backward compatibility and
whatnot.

Thanks

Michal


Re: Xft and XCursor

2011-06-17 Thread Michal Suchanek
On 17 June 2011 10:29, Lukas Sommer sommer...@gmail.com wrote:
 Hello.

 About Wayland I have the following questions:

 When we run, in some future, a Wayland-only session, how will
 Wayland-enabled applications behave in the following things:

 - Xft: How to retrieve the font DPI setting and how to determine which font
 at which size will be used?

 - XCursor: Will the X11 cursor themes work on Wayland or will there be a
 different standard? How can we configure it?

 Currently, I'm taking a look at KDE configuration and how to set the font
 DPI and how to change the cursor size. Will this still work in Wayland or do
 we need to reimplement it?

It will definitely have to be reimplemented, at the very least to read
the settings from Wayland and not X.

Cursors are likely trivial.

However, DPI is a problem. I am not sure Wayland even handles DPI in
any way at this point.

One problem is that DPI (and RGB order, or even the available color
components) may vary in different parts of the display (e.g. in a
multiscreen configuration). X and current Xft usage don't account for
this.

Thanks

Michal


Re: Wayland Window Management Proposal

2011-05-17 Thread Michal Suchanek
On 16 May 2011 23:13, Bill Spitzak spit...@gmail.com wrote:
 Michal Suchanek wrote:

 The thing is that in Wayland the server is not aware of any remote vs
 local windows. Remote applications are in no way part of the protocol
 and will supposedly sneak in later by means of some remoting proxy.

 My understanding is the exact opposite: the compositor is *VERY* aware of
 remote windows, as it is it's job to do the remoting. A client connects to a

Then point me to any place where it does any remoting.

 remoting wayland compositor, which sends the window contents and update

What is this remoting compositor? A proxy that allows remote clients
to connect to Wayland on another system?

 information to the real wayland compositor on the remote machine. The real
 one knows how to communicate with the remoting compositor.

The proxy is just a plain client; how does the local Wayland server tell it's a proxy?


 As a technical detail: since vrefresh is the point when the screen
 should be updated, it typically happens 50-60 times per second, and
 seemingly smooth movement requires about 25 fps at the very least, so
 the timeout for the compositor to start drawing some replacement
 should be at most some 2-3 vrefresh intervals. This is something that
 can be communicated to the client so that it is well aware when it lags.

 My tests show that update can be much slower than this and still appear
 smooth. The important thing is that the contents update in exact lock-step
 with the border and never flash, but rates as slow as 5fps look quite
 smooth. This can be seen in some X media players that do both double
 buffering and client-side decoration.

It is still in the same ballpark, fractions of a second. And it would
probably depend on the user.


 The client, however, must communicate to the wayland window manager
 the resizability of its window so that the windowmanager can tell
 apart clients that lag and clients that plain refuse to resize because
 they rely on the window being fixed size (yuck).

 The Wayland client will send an indication that it responded to the resize
 request, so the compositor will know this happened, even if the client
 decided not to change the window size. It is also the client's
 responsibility to initiate the resize, so it can just skip this if it knows
 it is not resizable.

 If the replacement is the last window content stretched to the new
 size and slightly blurred then the distortion might not be noticeable
 even for clients that take slightly longer but not too long. For even
 less cooperative clients the rubberband or full window with some
 generic stoned image would be required. There is room for user
 preferences here for sure.

 Comparing compiz and old X, this looks worse to me. It looks best to just
 have all the new window area contain whatever pixels were there before (ie
 the intersection of the old and new window, surrounded by pixels from other
 windows, old window borders, etc). The reason is that the pixels only change
 once, from old contents to new contents. Putting anything else there makes
 them change twice, from old to temporary to new contents.

And what's the problem if the difference between the replacement
content and the actual content is small and very unlikely to be
noticed?

Anyway, there is not only look but also feel of the environment. You
are obsessed with the window looking pixel-correct. However, for the
feel to be smooth the window must quickly react to user action. If
mouse resizes are implemented this is one of the few places where many
user actions happen in quick succession. The drag action results in
lots of small resizes, and all of these have to happen fast. Otherwise
the UI appears laggy or, worse, the user gets lost because the screen
content does not correspond to the actual state: the window sizes.


 On the other hand, some apps always lag behind and probably should be
 allowed to do so if they are very important to the user. The question
 is how. Possibly this could be *configured* via a special effect-plugin
 that manages single or all windows different to the default setting.
 This is like *theme'ing* those problematic issues ;) At least it allows
 the server to follow a strict default mode without forbidding the user
 to decide differently...

 I think the wayland compositor could track how long the clients take to
 respond to events. They would only disable if they suddenly took several
 times longer than before. If the recorded lag exceeds a threshold the
 compositor could resort to rubber-band resize.


No way. This must be a hard limit on the compositor side so that the
UI works reasonably at all times. It should be configurable by the
user but not the client applications.
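The hard compositor-side limit argued for here could be sketched per output frame as below. The function and names are hypothetical, not a real compositor API: the compositor polls for the client's resized buffer and, once a user-configured deadline passes, paints a replacement regardless of what the client does.

```python
# Hypothetical per-frame decision under a hard, user-configured deadline.
# `poll_client_buffer` returns the client's resized buffer or None.

def frame_for_resize(poll_client_buffer, now_ms, start_ms, deadline_ms=50):
    # The deadline comes from user configuration, never from the client.
    buf = poll_client_buffer()
    if buf is not None:
        return buf                 # client kept up: use its buffer
    if now_ms - start_ms < deadline_ms:
        return None                # keep showing the old content for now
    return "rubber-band"           # hard limit hit: draw the replacement
```

Because the deadline is enforced on the compositor side, a slow, swapped-out, or debugged client degrades only its own window's appearance, never the responsiveness of the resize interaction.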

Thanks

Michal


Re: Wayland Window Management Proposal

2011-05-16 Thread Michal Suchanek
On 16 May 2011 16:17, Solerman Kaplon soler...@gmail.com wrote:
 Em 13-05-2011 15:38, Michal Suchanek escreveu:

 If the client takes, say, half a second to update which is completely
 reasonable for a full re-layout and repaint of a window that normally
 gets only partial updates then the resize will be *very* jerky, and if
 the client is uploading a bitmap over network to update the window you
 can't really avoid that.

 That's why the Windows server disables repaints on window resize by
 default when running over a network. It just sends the final resize to
 the window and you get partial screen updates all the way to it. Users
 don't seem to be really annoyed by it.

The thing is that in Wayland the server is not aware of any remote vs
local windows. Remote applications are in no way part of the protocol
and will supposedly sneak in later by means of some remoting proxy.

Also there are local sources of lag like applications that are low
priority, busy, swapped out, poorly written, etc.

This is something that a decent desktop must deal with.


 You can make the compositor such that the bookkeeping required for
 resizing a window in the compositor does not take long but you have no
 guarantee that every client will do the same, and it's not even
 possible for all clients to achieve.

 If you take the client in a debugger example (or otherwise stopped
 client) the window would resize only after the client is started
 again, etc, etc.

 I think current resize in X is good enough. If you are using a debugger, you
 ain't any kind of normal user who can't understand that if you pause all
 threads in the debugger you going to hang screen drawing for that app at the
 same time.

Well, the whole thread is about the fact that many people here think it is not.

On 14 May 2011 01:48, Bill Spitzak spit...@gmail.com wrote:
 Michal Suchanek wrote:

 It may be rubber-band or it may be some other effect but either way
 you need something to draw on the screen until the client performs the
 update which will draw a not fully updated window in case the client
 does not update fast enough and by some is unacceptable in wayland.

 A rubber band resize is part of the window management design and is not a
 partial update, any more than the mouse cursor atop a window means it is not
 fully updated. The image is fully expected to appear when the user drags the
 mouse.

 A rubber band that appears after a timeout when it detects the client is
 locked up is what you say, as the user will see an image that they would not
 see if the client was responsive. However there is nothing wrong with wrong
 images when the compositor detects that the client is not responding. What
 is necessary however is that a client that reacts within a timeout will
 never display a partially updated image.

I guess this is something that can accommodate both clients that
repaint in time to make resizes smooth and imperfect clients that
require a workaround in the compositor for resizes to appear smooth.

As a technical detail: since vrefresh is the point when the screen
should be updated, it typically happens 50-60 times per second, and
seemingly smooth movement requires about 25 fps at the very least, so
the timeout for the compositor to start drawing some replacement
should be at most some 2-3 vrefresh intervals. This is something that
can be communicated to the client so that it is well aware when it lags.
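The numbers in the paragraph above, worked out explicitly (assuming a 60 Hz refresh as the concrete case):

```python
# Back-of-the-envelope timing from the paragraph above, at 60 Hz.
refresh_hz = 60
frame_ms = 1000 / refresh_hz          # one vrefresh interval: ~16.7 ms
deadline_ms = 3 * frame_ms            # 3 intervals: 50 ms fallback budget
smooth_period_ms = 1000 / 25          # 25 fps floor: 40 ms per frame
```

So a client that cannot produce a resized buffer within roughly 50 ms is already below the perceived-smoothness floor, and the compositor has to paint something else.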

The client, however, must communicate to the Wayland window manager
the resizability of its window so that the window manager can tell
apart clients that lag from clients that plainly refuse to resize
because they rely on the window being fixed size (yuck).

If the replacement is the last window content stretched to the new
size and slightly blurred then the distortion might not be noticeable
even for clients that take slightly longer but not too long. For even
less cooperative clients the rubberband or full window with some
generic stoned image would be required. There is room for user
preferences here for sure.

On 14 May 2011 12:09, maledetto malede...@online.de wrote:
 The only *generally acceptable* way to manage lags in communication I
 see is that the server *fades-out* the window in question to signal that
 the client is unresponsive and waits for it to respond in a time before
 the kill-dialog appears. This is a good standard that doesn't need
 hacks or special effects and doesn't paint nonsense on screen.

I don't think a client needs to be responsive at all times. It only
needs to be responsive at times when a response is required; at other
times it can do nothing at all and that's fine.

E.g. a window resize, to be completed properly, requires the client to
submit a buffer of the new size so that the compositor has some
content that it can paint in the newly resized window. However, when the
compositor decides to hide a window (eg. to switch virtual desktops)
the client should be informed but no action is necessarily required on
the client's

Re: Wayland Window Management Proposal

2011-05-13 Thread Michal Suchanek
On 13 May 2011 11:26, Daniel Stone dan...@fooishbar.org wrote:
 On Thu, May 12, 2011 at 06:22:01PM +0200, Michal Suchanek wrote:
 You can't expect that every single client is high-priority and lag-free.

 Run better clients, then? Or stop trying to micro-optimise for the case
 of pressing the close button on an unresponsive client?


This is not about pressing the close button. Closing need not have an
immediate response and people can accept that; there are workarounds,
and you close windows only so often.

However, window resizes need to be responsive, otherwise you introduce
lag, possibly to the point that the person moving the mouse has no
clue what is going on from the moment a window resize is initiated.

Lag is something that can easily kill an otherwise workable interface,
and fractions of a second might seem reasonable on the drawing board,
but they are still lag.

Lag-free resize is not reasonably doable if you have to wait
for the client to respond before every size change can take place.

X can handle remote clients and low priority clients participating in
the desktop environment.

If Wayland can't then it is not an evolution of X, it is a step backward.

And this is not skipping a micro-optimization; this is closing the
desktop to whole classes of clients.

Thanks

Michal
___
wayland-devel mailing list
wayland-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/wayland-devel


Re: Wayland Window Management Proposal

2011-05-13 Thread Michal Suchanek
On 13 May 2011 19:45, Corbin Simpson mostawesomed...@gmail.com wrote:
 I was trying to stay out of this, but...

 On Fri, May 13, 2011 at 9:03 AM, Michal Suchanek hramr...@centrum.cz wrote:
 This is *not* *about* *optimization*.  If you rely on *every* *single*
 *client* to be responsive for your WM to work then the moment *any*
 *single* *client* becomes unresponsive your WM *breaks*.

 If you think a non-broken WM is an optimization I guess we live in
 somewhat different worlds.

 Strawman; it is always possible to multiplex I/O in a way that
 prevents any single client from blocking things being done in other
 clients or internal server work.


No, you can't, if you tie the visible reaction to input to an
operation potentially unbounded in time: a client update.

The user cannot tell that the window has been virtually resized and
the WM is waiting for a client update if the on-screen window is still
the same size.

If the client takes, say, half a second to update, which is completely
reasonable for a full re-layout and repaint of a window that normally
gets only partial updates, then the resize will be *very* jerky; and if
the client is uploading a bitmap over the network to update the window
you can't really avoid that.

You can make the compositor such that the bookkeeping required for
resizing a window in the compositor does not take long, but you have
no guarantee that every client will do the same, and it's not even
achievable for all clients.

Take the client-in-a-debugger example (or an otherwise stopped
client): the window would resize only after the client is resumed,
etc., etc.

Oh, and BTW we would not really need this debate if there were a
provision for replacing the compositor or window manager, but some
time earlier it was suggested that it should be built into the Wayland
server and be so awesome that nobody will ever need to replace it with
a different one.

Thanks

Michal


Re: Wayland Window Management Proposal

2011-05-13 Thread Michal Suchanek
2011/5/13 Rui Tiago Cação Matos tiagoma...@gmail.com:
 On 13 May 2011 18:59, Mike Paquette paquette...@gmail.com wrote:

 Completely agree. The compositor/WM has no business in working around
 application bugs. If application programmers are lazy and can't get
 their windows acting timely on input then, the ecosystem (users,
 distributors) will just naturally select those apps out and the well
 behaved ones will just be more popular.

 Hiding badly designed applications' problems is just rewarding bad
 work and, in this case, it's even worse. If the compositor acts on
 input before the application draws the final frame it will create
 graphical flashes (background color, autofill, junk, whatever) for
 *every* application which actually penalizes the good ones because the
 graphical glitch will be there, even if for a single frame, since this
 is inherently how server side asynchronous actions behave.

Again, is flashing really the only transition between two frames you know of?

With all the effects compositors are capable of today this is the only
thing you can think of?

Thanks

Michal


Re: Wayland Window Management Proposal

2011-05-13 Thread Michal Suchanek
On 13 May 2011 22:14, Elijah Insua tmp...@gmail.com wrote:

 On May 13, 2011, at 4:02 PM, Casey Dahlin wrote:

 On Fri, May 13, 2011 at 03:13:01PM +0200, Michal Suchanek wrote:
 On 13 May 2011 11:26, Daniel Stone dan...@fooishbar.org wrote:
 On Thu, May 12, 2011 at 06:22:01PM +0200, Michal Suchanek wrote:
 You can't expect that every single client is high-priority and lag-free.

 Run better clients, then? Or stop trying to micro-optimise for the case
 of pressing the close button on an unresponsive client?


 This is not about pressing the close button. It need not have an
 immediate response and people can accept that, there are workarounds
 and you close windows only so often.

 However, window resizes need to be responsive otherwise you introduce
 lag, possibly to the point that the person moving the mouse has no
 clue what is going on the moment a window resize is initiated.


 You can always use the rubber band style of resize, in which case the 
 window
 only needs to be told about the resize, and respond to it, when the user 
 picks
 a size and drops the corner.

 In fact you can pretty easily do both, where the rubber band appears when the
 window hasn't managed to keep up, so the user still has a visual cue to what
 they are doing.

 --CJD

 Agreed, although I've always hated the rubber band technique as it makes 
 windows feel fragile.  In the slow/unresponsive application case, they 
 probably are fragile.


It may be a rubber band or some other effect, but either way you need
something to draw on the screen until the client performs the update.
That will show a not-fully-updated window when the client does not
update fast enough, which by some accounts is unacceptable in Wayland.

Also note that this requires agreement between Wayland and the
application on whether the window is resizable to a particular size.
Otherwise you might end up with a rubber band displayed forever while
both Wayland and the client think everything is OK.

Thanks

Michal


Re: Wayland Window Management Proposal

2011-05-12 Thread Michal Suchanek
On 11 May 2011 20:25, Bill Spitzak spit...@gmail.com wrote:
 Michal Suchanek wrote:

 Moves and resizes implemented in the client can't work well.

 Any resize solution that does not allow an atomic on-screen update of a
 window to its new size, with the resized decorations and contents, is
 unacceptable. The whole point of Wayland is that the user NEVER sees a
 partially-updated window.

 It is therefore impossible to finish a resize without waiting for the client
 to update the window contents. Since you have to wait for that, there is no
 reason the client can't also draw the decorations. I'm sorry if this makes
 writing clients harder. Deal with it.

Always waiting for the client is something that cannot be upheld.

There are situations when

 - the client is busy or stuck
 - the client is swapped out or a low-priority process
 - the client is remote, and therefore resizing it will take some
time whatever you do

If Wayland can't deal with any of the above it's junk.

The window management functions should work without lag as long
as the window manager and Wayland server have enough resources and
high enough priority.

You can't expect that every single client is high-priority and lag-free.

Thanks

Michal


Re: client side decorations

2011-05-12 Thread Michal Suchanek
On 12 May 2011 19:47, Bill Spitzak spit...@gmail.com wrote:
 microcai wrote:

 They care how big a window is not in pixels, but in inches.

 People may have different monitors with different DPI. Windows should
 stay the same size regardless of the DPI.

 Forcing DPI==96 on every monitor is a stupid idea, and we should avoid it
 on the protocol side.

 The reason this had to be done was due to the incredibly stupid idea that
 only *fonts* are measured in points, and every other graphic is measured in
 pixels. This strange idea was on both X and Windows, likely due to the
 initial programs being terminal emulators where there was no graphics other
 than text. What this really means is that there are two different coordinate
 systems for all the graphics, and programmers just assumed these two systems
 always lined up exactly like they did on their screen.

 After a lot of awful looking output on screens with different DPI, both

That's not really true. Of course there are things that look awful at
a different DPI (or because you happen to have slightly different fonts
than the author) because they were done by braindead people. This
includes, but is not limited to:

- many web sites
- (some?) Adobe and HP software
- OS X (which actually prevents changing the DPI in the first place,
leaving you with ridiculous font sizes)

Note that the very first UIs were virtually bitmap-free: all the WM
buttons and borders were vector graphics or generated on the spot from
a few user-specified parameters (Windows 3.1, old stuff like mwm,
olwm, fvwm, ...) and could be scaled to any size you specified.

Then bitmaps came into heavy use and made their way into things like
window borders, because the level of complexity people desired for
their eye candy could not be achieved with simple vector graphics
anymore, and complex vector graphics required more power than people
were willing to sacrifice for window borders.

Still, GTK bitmap themes have provisions for scaling the bitmaps to
suit any text size in window captions and buttons. The button border
will be relatively thinner for larger text (or thicker for smaller
text) but will still render as intended, and people are free to choose
a different theme.

Similarly, in Windows you can set different border or font sizes
if you accept that tons of braindead software breaks (e.g. Adobe CS or
HP scanner dialogs). Windows' bitmapped window buttons also look
terrible on non-default-sized borders, but vector buttons are still
available.

The web is a problem because, the specs being what they are, they are
not really friendly to people conscious of the look of their sites,
and many effects can't be achieved in a portable way without setting
element sizes exactly in pixels. You can call that a defect in the
specs, or an error on the side of the people who stretch them to make
their sites look flashy at the expense of usability, but it's totally
off-topic here.

 Windows and then X resorted to just forcing the DPI to 96, thus making the
 systems obey the programmer's assumptions. Bad DPI settings are still a bug
 on X, producing ridiculous large and tiny font sizes unexpectedly, and this
 is NEVER wanted.

 The correct solution would have been to specify all coordinates in the same
 units, likely 1 unit in the CTM. For practical reasons on current-day
 screens this wants to be a pixel by default, but there is no reason a
 program can't read the DPI and set the CTM to draw actual-size graphics.

Well, they can't on systems where the DPI is always forced to 96
regardless of the screen's actual physical properties.

Also, this is reportedly[1] done so that people don't get ridiculously
small text on TVs, but it comes at the expense of ridiculously small
text on most netbooks and high-end notebooks, so this is not
really a good tradeoff IMHO.

Thanks

Michal

[1] https://bugs.freedesktop.org/show_bug.cgi?id=23705


Re: client side decorations

2011-05-11 Thread Michal Suchanek
On 10 May 2011 05:46, Russell Shaw rjs...@netspace.net.au wrote:
 On 10/05/11 07:29, Daniel wrote:

 El dg 08 de 05 de 2011 a les 09:47 -0700, en/na Bill Spitzak va
 escriure:

 Though it is possible, I don't like the idea of clients sending hints
 about what areas are the close box or window border, since it implies
 there are such concepts as title bar and close box. The compositor
 can just have clicks anywhere raise and move the non-responsive
 window, and lots of clicks (indicating user frustration) pop up a box
 offering to kill the program. On Linux, since it is standard,
 compositors can also have Alt+click always raise/move windows, and alt
 +right click pop up a menu of compositor-side window actions.


 This would actually be a good way to handle it. Use a special mode or
 tool, a la xkill, to deal with stuck applications. It can take the form
 of a special key/mouse combination, gestures, or as I said before, an
 external tool like xkill. Note that it need not be limited to killing,
 but could do any other thing, like minimizing, sending to another
 virtual desktop, etc.

 Keeping track of dead clients could be done like this:

 A client program opens a socket connection to the window server,
 and the window server determines the PID of the client via a
 means that the client has no control over (some kind of kernel
 call that can determine the client using that socket).

 The client also sends the window server the title bar area
 that contains the maximize/minimize/close buttons.

 All clients must handle an is_alive probe event from the window
 server at any time, replying with something unspecified to confirm
 it is not dead.

 Whenever the mouse is clicked in the title bar, the window server
 can expect the client to send it an is_alive notification within
 say 1 second. If it doesn't, the window server can send the client an
 is_alive probe event. If there's no response after a certain time,
 the window server can kill the client. Alternatively, it could pop up
 a gui task manager window for the user to manually kill stuff.

Clearly it's up to the user to decide whether an application is stuck or not.

The is_alive request may look like a nice idea at first glance, but
it is not very reliable.

How long a timeout is allowed before the application is marked
'unresponsive'? This is clearly application- and system-specific. Any
timeout-based protocol is inherently unreliable.

The application may have a separate thread to fulfill this is_alive
requirement while the rest of it is still stuck.

The application may be running but in an undesirable state, which is
not something the compositor can decide.

A utility like xkill resolves all of the above: you don't like the
application, so you get rid of it.

The compositor can resize or hide the application window at any time
without any cooperation from the application.

The application may publish hints as to how it wants the window
content treated when it does not match the size of the displayed
window, and the compositor may use these to present the window in a
reasonable way until the application resizes the content. This
requires that the compositor notify the application when the window
is resized.
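A minimal sketch of such a hint, assuming a made-up `ContentHint` enum (nothing of the sort exists in the protocol discussed here): the client publishes how its stale buffer should be presented, and the compositor computes where to draw it until a properly sized buffer arrives.

```python
from enum import Enum

class ContentHint(Enum):
    STRETCH = 1  # scale the old buffer to the new window size
    CENTER = 2   # keep the old buffer size, center it in the window
    TILE = 3     # repeat the old buffer from the window origin

def present_stale_buffer(buf_size, win_size, hint):
    """Return (draw_size, offset) for the old buffer in the resized window."""
    bw, bh = buf_size
    ww, wh = win_size
    if hint is ContentHint.STRETCH:
        return (ww, wh), (0, 0)
    if hint is ContentHint.CENTER:
        return (bw, bh), ((ww - bw) // 2, (wh - bh) // 2)
    return (bw, bh), (0, 0)  # TILE: drawn repeatedly from the origin
```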

Thanks

Michal


Re: wayland screen locker and security in general

2011-04-12 Thread Michal Suchanek
On 12 April 2011 21:04, Iskren Chernev iskren.cher...@gmail.com wrote:
 I don't think this lengthy discussion led to any concrete answers, but I do
 think that the questions are important and need such answers.
 I'll try to summarize the problems that need attention:
 1. screen locking
 1.1 who is going to implement it: compositor/compositor plugin/app

The problem is that screen locking is in fact several generic things in one.

While it is entirely possible to have a screen-lock plugin that
does just that, soon you will need a fullscreening plugin, a VNC
plugin, ...

A screensaver needs
 - event spying (to know when the user is active/idle) - also useful
to IM clients, input methods (to get the actual events), a magnifying
glass application (to know the cursor position), and keyboard launchers
(which launch an application whenever a special button is pressed)
 - event grabbing - to prevent other applications from receiving events
while the screensaver, and especially the unlock dialog, is running -
also useful to games and remote desktop clients (to lock input in a
window), possibly a drag-and-drop implementation (to allow dragging
objects over foreign windows), and input methods (to get all events while
the method is active)
 - screen grabbing - the ability to paint over the whole screen,
preventing anything else from doing so - also useful to various other
applications

Some things that a screensaver does not require but other special clients might:
 - event injection - VNC server, input methods (both text input
methods and pointer gesture filters)
 - screen spying - used traditionally by some screensaver effects and
useful for a magnifying glass application and other effect applications

Some things that a normal application is expected to do but can
still be limited in some cases:
 - create an arbitrary number of windows of arbitrary size
 - refresh the window content as often as the application wishes, e.g. a
million times per second
 - receive events that happen over the application window - some
windows may not require input and should be marked as such so that they
don't get focus either
 - set and receive paste buffers
 - register a paste buffer format converter

These are not that many operations; a protocol that includes
the concept of security and can grant applications arbitrary subsets
of these access permissions should accommodate pretty much any
current application.
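One way to picture granting arbitrary subsets of these permissions is a plain capability bitmask. A hypothetical sketch (all flag names are invented here, not part of any Wayland protocol):

```python
# Each capability from the lists above becomes one bit.
EVENT_SPY    = 1 << 0  # observe input timing (idle detection, IM clients)
EVENT_GRAB   = 1 << 1  # exclusive input (unlock dialog, games)
SCREEN_GRAB  = 1 << 2  # paint over the whole screen
EVENT_INJECT = 1 << 3  # synthesize input (VNC server, input methods)
SCREEN_SPY   = 1 << 4  # read screen contents (magnifier, VNC server)

# Example grants for two of the special clients discussed above.
SCREENSAVER = EVENT_SPY | EVENT_GRAB | SCREEN_GRAB
VNC_SERVER  = SCREEN_SPY | EVENT_INJECT

def allowed(granted: int, requested: int) -> bool:
    """A request succeeds only if every requested bit has been granted."""
    return requested & ~granted == 0
```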

But the replies so far, aside from yours, can be summarized as "Shush,
we don't need that, we can just add a hack to the compositor for
this."


 1.2 inhibit locking for movie players, slide-shows etc
 1.2.1 what protocol to use to make sure screen saver wont stay inhibited
 forever because of broken app
 1.2.2 what communication mechanism to use: compositor/dbus/other
 2. Full screen apps -- need a way to specify that no other windows should be
 displayed on top of this one. What if multiple apps want to claim the
 screen?
 3 Security issues
 3.1 Protect from bad clients allocating too much pixmap space (and maybe
 other resources)
 3.2 Make sure that the password requesting prompts are genuine, i.e. no app
 is going to look like another one (screen saver) and request the user pass
 in the same way so the user is tricked to enter it inside. I don't think any
 other OS tries to do this, but I might be wrong.

Other OSes do try, e.g. NT-based Windows.

 4. In case we choose to use apps to implement special features as screen
 locking (opposed to compositor[-plugins]) then I don't understand how can an
 app authorize itself for the compositor. For example the screen saver app
 should tell the compositor that it is that exact app, so the compositor will
 grant special privileges (or proxy or arbiter, whatever). So is this process
 going to involve asymmetric cryptography, and if yes where is the private
 key going to be stored. This may be a very stupid question, but I haven't
 seen anything of that sort so I'm wondering how its going to be made.

Since on POSIX the unit of security is the UID, no cryptography is
required; a magic cookie like ~/.Xauthority suffices to prove that the
application is running under that user on that machine. This should
make it possible to run Wayland applications using sudo and not have
those applications take over the user's session.
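A sketch of such a cookie check, assuming the compositor generates a per-session secret and stores it in a file readable only by the user, as ~/.Xauthority does; the function names are illustrative:

```python
import hmac
import os

def new_cookie() -> bytes:
    # Per-session random secret; would be written to a file with mode 0600
    # so only processes running as this user can read it.
    return os.urandom(16)

def authenticate(stored: bytes, presented: bytes) -> bool:
    # Constant-time comparison avoids leaking the cookie byte by byte.
    return hmac.compare_digest(stored, presented)
```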

The default policy should include some limit on the amount of
resources an application can allocate, to prevent Wayland from going
down or being littered with hundreds of windows. It should be possible
to raise the limit when needed, and since it is not supposed to be hit
very often it is probably acceptable for this to be implemented
interactively - e.g. a compositor or a policy monitor opens a dialog
box saying "Application 'awdsaf' is trying to open more than 20
windows, do you want to allow it to open another 20?". Of course,
there is a possibility that a separate connection will be made for
each window, and that should be dealt with as well. Since there is no
security on POSIX the dialog or associated

Re: wayland screen locker and security in general

2011-04-07 Thread Michal Suchanek
On 7 April 2011 02:00, cat zixo...@gmail.com wrote:


 On Wed, Apr 6, 2011 at 11:57 AM, Michal Suchanek hramr...@centrum.cz
 wrote:

 On 6 April 2011 18:34, Jerome Glisse j.gli...@gmail.com wrote:
  2011/4/6 Michal Suchanek hramr...@centrum.cz:
  2011/4/5 Kristian Høgsberg k...@bitplanet.net:
  On Tue, Apr 5, 2011 at 5:59 AM, Michal Suchanek hramr...@centrum.cz
  wrote:
  Hello,
 
  what is the plan for screensave/screenlocker support in wayland?
 
  The support in X is a fail in several ways.
 
  It sure is.  The plan for Wayland is that the lock screen is just part
  of the compositor.  There are no problems with detecting idle or other
  applications having grabs this way, and the compositor completely
  controls what goes on the screen so you don't have other applications
  raising their window over the screensaver.  There doesn't even have to
  be a screensaver window, the compositor can just paint a black screen.
 
  It is of course possible to define a plugin or an out-of-process (fork
  a special Wayland client and give it a special surface to render to,
  similar to your second option below) mechanism for rendering fun
  screensavers.
 
 
  IMHO having anything that requires somewhat uncommon functionality as
  part of the compositor is lame. The same if a new plugin is required
  for any new function somebody comes up with.
 
  Eventually there should be multiple compositors to choose from and
  they should *not* each include everything and the kitchen sink.
 
  Thanks
 
  Michal
 
  I don't think it's lame, a good compositor with good theming
  capabilities and you will only need one. Moreover you can add very

 Seriously, it's not just look what makes the difference between a good
 window manager and a bad one.

 Maybe with wayland there is need only for one but I doubt that, there
 is no one-size-fits-all application of any kind I know of.

 Yet if you insist on the compositor doing anything that a normal
 application is not allowed to do instead of a modular system that
 allows authorizing applications to do something you effectively make
 developing a new compositor very hard, and switching between different
 compositors as well.

  useful feature in the compositor regarding the locked screen; for
  instance you can force the compositor to always composite a small
  picture (some kind of icon) on top of a fullscreen app so no app can
  maliciously present itself as being a screensaver and spy on the user
  password. Only the compositor would paint without the icon. This is
  just one example.

 Sure. Or you could grant your movie player the privilege to paint
 without an icon as your screensaver does when you are bothered by it.
 But that requires putting the movie player into the compositor, eh?

 
  I think wayland design is to avoid redoing X and trying to add new
  protocol for screensaver or whatever new application one might come up
  with, is to be avoided. Of course that means that piece like the
  compositor will have to include bunch of code that were previously
  standalone.

 Oh, so also VNC client and server must be in the compositor (to spy on
 and inject events), and so must be x2x (or w2w), any application that
 needs to manipulate clipboard in unusual ways (such as providing
 format conversion), and the hotkeys application that launches your
 media player when you press | on the keyboard. And rdesktop that
 requires raw keycodes.

 And a bunch I missed, I am sure.

 It's not like there should be a separate screensaver protocol. It should
 be a protocol that any application can use to do its thing.

 Thanks

 Michal

 my understanding is that wayland is supposed to be a simple protocol, not
 the end-all-be-all hub of the desktop; attributes could be nice but I think

Still, it should be designed so that it fits somewhere in the
end-all-be-all hub of the desktop; otherwise it will be a piece that
fits nowhere in it, just like the X server.

 the overall result is that things like screen locking should be left up to
 the implementation. wayland needs to be a base for putting stuff on the

They should not. Wayland should come with a screen locker and an
obvious and documented way of plugging in a different one.

Otherwise you will get a hell of mutually incompatible solutions.

The X server has some 2-3 screensaver protocols, none of which is
usable, and there are about 3 separate protocols for talking to the
screensaver outside of the X protocol, because the X protocol is not
made to convey the information the screensaver needs.

 screen and giving input. Screensavers could be set up to work on different
 viewports, or a second protocol could be used to operate the many features of
 the desktop.
 The server could keep a whitelist of programs that inhibit the screen while
 running, so that a malicious program doesn't leave the screen unlocked for
 intruders or scrape your password. There are some situations where these
 programs/features don't make sense.

No, wayland itself should

Re: wayland screen locker and security in general

2011-04-07 Thread Michal Suchanek
2011/4/7 Corbin Simpson mostawesomed...@gmail.com:
 2011/4/7 Michal Suchanek hramr...@centrum.cz:
 If you have some input on awesomeness of dbus which I miss I am all
 ears but so far nobody could point out any advantage to me when this
 topic came to the table.

 Sure, dbus likely includes a protocol for passing around the messages
 but I am sure there are already dozens of protocols for serializing
 data into datagrams and/or pipes (which is what all communication
 boils down to in the end), and if the one dbus uses is in some way
 awesome and standing out from the crowd then the authors and
 proponents of dbus fail miserably at explaining that.

 It's a de facto standard. People use it, people rely on it, people
 expect it to be in place. This is really the other way around: *You*
 should explain why dbus is inadequate and *you* should be suggesting
 alternatives.

I don't use it and I am perfectly fine.


 Speaking of which, what are these dozens of protocols, anyway? Can
 you name some of them? Can you suggest why they would be better than
 dbus for this task?


Since Wayland is not using dbus right now and is not going to use it
for its core protocol, introducing it is superfluous.

I was merely asking if dbus has some merits of its own. The fact that
GNOME uses it does not convince me.
This is somewhat off-topic here, so I suggest that if your further
input relates only to how awesome dbus is, you send it off-list.

Thanks

Michal


Re: HPC (High Performance Compute) Architecture

2011-03-18 Thread Michal Suchanek
On 18 March 2011 04:14, Trevour Crow cro...@yahoo.com wrote:
 From: Josh Leverette coder...@gmail.com
 Subject: Re: HPC (High Performance Compute) Architecture
 To: jonsm...@gmail.com jonsm...@gmail.com
 Cc: wayland-devel@lists.freedesktop.org 
 wayland-devel@lists.freedesktop.org
 Date: Thursday, March 17, 2011, 9:13 PM
 http://www.onlive.com ? But yeah, I was
 wanting this to be user transparent for all applications,
 since there is no way we could modify proprietary
 applications that use a lot of processor real estate and
 this would be a one time deal, no need to do it on an app by
 app basis. But, I understand.
 I'm actually working with Gallium to make this possible - after discovering
 that indirect-mode GLX was limited to OpenGL 1.4 (or at least the Mesa/X.Org
 stack is, which is all I care about), I decided to see if I couldn't use
 the pipe/state tracker architecture to transparently relay rendering
 commands from one machine to another; I haven't quite started work on how
 netpipe will connect to the remote state tracker, but I've started laying
 down the pipe driver side and it seems possible. This means that, combined
 with the remote Wayland GSoC project, what you're talking about should be
 possible for any program that ultimately renders using Gallium.

It's generally possible. Gallium removes one obstacle: it provides a
middleware layer that is the same regardless of the graphics hardware
used for rendering. You will not be able to move an application from
more powerful graphics hardware to hardware with substantially fewer
features, but that's it; Intel, ATI, nVidia, VMware should all work
if you limit the features the application can use to those of your
least capable card.

The other issue is texture upload. 3D (or OpenGL) applications often
use huge amounts of textures and require that these be uploaded to the
card quickly. You can present a unified netpipe model where both the
card memory and the system memory appear as one huge netpipe device
memory, and cache as many textures as you can on the remote end, but
this won't help more intensive applications that switch between parts
of a complex scene (and swap the textures and models accordingly), or
things like MPlayer's GL video output.

Relying on a powerful CPU/GPU to compress the rendered graphics into a
video stream might be a more general solution. It has the advantage of
decoupling rendering and display. When there is a fast scene change
you may get temporary compression artifacts in the video stream, but
the rendering is not affected, and applications with poor event loops
that only check for input between rendering frames (most,
unfortunately) remain responsive. It also blends naturally with the
Wayland protocol, which only supports window pixmap updates.

Thanks

Michal


subpixel rendering in wayland?

2011-03-01 Thread Michal Suchanek
Hello,

I was just wondering about subpixel rendering and how that is going to
be done in Wayland.

In X an application is supposed to ask XRandR about the current screen
layout, determine on which screen(s) the window is placed, and use the
subpixel order and screen rotation information to figure out how to
lay out subpixels in antialiased objects.

To the best of my knowledge no X toolkit does this. Last time I
looked, there was only a global setting of horizontal RGB vs BGR, so I
turned off subpixel rendering completely.

Wayland currently does not support multiple screens, but rotation
should already be possible.

Is there any provision for applications to get the information about
subpixel layout of their windows and is there any toolkit for Wayland
that honors it? Are there any tests in place that would make sure
toolkits render subpixels properly in various situations?

Or is subpixel rendering considered superfluous and unsupported by Wayland?

Thanks

Michal