Re: Chrome Remote Desktop and Wayland

2020-04-16 Thread Ray Strode
Hey,

On Wed, Apr 8, 2020 at 12:04 AM Erik Jensen  wrote:
> Chrome Remote Desktop currently works on Linux by spinning up its own
> Xvfb server and running a graphical session in that. However, as more
> and more parts of the stack assume that a user will have at most one
> graphical session, this is leading to more breakage more often. E.g.,
> several distros have switched DBUS to using a single session bus per
> user, which only supports one graphical session, and recent versions
> of GDM will fail to log a user in locally at all if a Chrome Remote
> Desktop session is running due to
> https://gitlab.gnome.org/GNOME/gdm/-/issues/580. Given that Chrome
> Remote Desktop starts at boot, the latter means that even just setting
> it up and rebooting is enough to break local logins for the user,
> which is obviously less than ideal.
Right, and as mentioned in the bug, GNOME doesn't really support
logging in more than once with the same user from a data reliability
point of view anyway.  I mean the settings database doesn't work well
if more than one thing is writing to it at the same time (which is one
of the reasons writes for all apps are funneled through a daemon).

> We have the following constraints for our use case:
>  * Chrome Remote Desktop must be usable after boot without the user
> needing to log in locally, first.
makes sense.

>  * It must be possible to "curtain" the session, meaning that, when a
> user is connected, their session is not displayed on the local
> monitor. (Imagine a user working from home and connecting to their
> workstation in a shared office space.)
Yea this is something we've wanted in GNOME for a long time.  See for
instance this bug from 2005:

https://bugzilla.gnome.org/show_bug.cgi?id=311780

Back then we didn't even have compositing window
managers so it wasn't something we could practically implement.
Today's a different story!  This is something mutter/gnome-shell can
reasonably implement with some effort.

>  * It's okay to require X11 today, but there should be a reasonable
> path forward as more distributions switch to Wayland.
I think it's more likely, for mutter, that this would land on the
native display server (Wayland) side first.

> Possible idea brainstorming:
> I'm hoping for feedback for the feasibility of these, given I don't
> have a lot of experience with Wayland or the modern graphical session
> architecture. All of these have gaps which make them likely not usable
> *right now*, so the question is probably which approach would be the
> most likely to be accepted by the relevant projects, and potentially
> which would be the quickest to design and get working.
>
> There's likely other possibilities that I haven't thought of.
[snip]
> ~Add curtaining support to session compositors~
This seems like the most plausible way forward.

So the original idea behind having a user bus instead of a session bus
was that e.g. gnome-shell would handle all sessions, not just one
session.  So rather than a gnome-shell per session, there would be
just the one for the user, started by systemd and running in its own
cgroup.  (Likewise, dconfd would handle setting writes for all sessions,
and then there'd be no worries about the settings database getting
corrupted.)

The idea is that, if gnome-session is started in the mode that tells
it to use systemd, rather than starting gnome-shell and the rest of
the required components of the session itself, it instead defers to
the user's systemd instance to reach a specific target. That target
has various session services as dependencies including gnome-shell.

Crucially, if one gnome-session instance starts gnome-shell via a
systemd user service (say via chrome remote desktop), and another
gnome-session instance gets started in a new session (say via the
local gdm login screen), and also tries to set a systemd --user
target that brings in gnome-shell, systemd won't start gnome-shell
twice.  gnome-shell is instead a sort of factory process that starts
when the first session logs in and lasts until the last session logs
out.
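The arrangement described above can be sketched with systemd user units.
This is purely illustrative: the unit and target names below are
hypothetical, not GNOME's actual ones.

```ini
# Hypothetical sketch only; GNOME's real unit names differ.

# ~/.config/systemd/user/my-shell.service
[Unit]
Description=Compositor shared by all of the user's sessions
# Started when the first session pulls in the target below,
# stopped automatically once no session needs it anymore.
StopWhenUnneeded=yes

[Service]
ExecStart=/usr/bin/gnome-shell

# ~/.config/systemd/user/my-session.target
# Each gnome-session instance asks systemd --user to reach this target;
# systemd starts my-shell.service at most once, however many sessions ask.
[Unit]
Description=Graphical session target
Wants=my-shell.service
```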

But if gnome-shell is a factory process that may be around before a
session is started, how can it know when that new session comes
around so it can "adopt" or service it?

Sessions are registered with logind (usually via pam_systemd, but
there's an underlying D-Bus API too).

gnome-shell can easily ask logind to enumerate all the sessions that
belong to a user (see sd_uid_get_sessions), and can also easily
detect when a new session comes or goes (see sd_login_monitor).
It can also detect when a session is active or inactive via logind.
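As a rough illustration of that adoption loop (not gnome-shell's actual
code), a process can enumerate and monitor its user's sessions with
libsystemd's sd-login API; link with -lsystemd:

```c
/* Sketch: track the owning user's logind sessions.  Error handling is
 * abbreviated; requires a systemd-logind system to do anything useful. */
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <systemd/sd-login.h>

int main(void) {
    /* Enumerate the sessions that already belong to this user. */
    char **sessions = NULL;
    int n = sd_uid_get_sessions(getuid(), /* require_active = */ 0, &sessions);
    for (int i = 0; i < n; i++) {
        printf("existing session %s (active: %d)\n",
               sessions[i], sd_session_is_active(sessions[i]));
        free(sessions[i]);
    }
    free(sessions);

    /* Watch for sessions coming and going. */
    sd_login_monitor *mon = NULL;
    if (sd_login_monitor_new("session", &mon) < 0)
        return 1;

    struct pollfd pfd = {
        .fd = sd_login_monitor_get_fd(mon),
        .events = (short) sd_login_monitor_get_events(mon),
    };
    while (poll(&pfd, 1, -1) > 0) {
        sd_login_monitor_flush(mon);
        /* Re-enumerate here and "adopt" any new session. */
        printf("session list changed\n");
    }

    sd_login_monitor_unref(mon);
    return 0;
}
```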

We don't really do a lot of this today, but one vision for the future is
something like this:

- User logs into their workstation at work and hacks away for a while
before needing to leave.

- Before going, the user enables remote desktop, which leads to the
new session getting registered with logind.

- This, in turn, tells systemd --user to reach a particular target that
pulls in gnome-shell, …

Re: Protocol backwards compatibility requirements?

2020-04-16 Thread Peter Hutterer
On Thu, Apr 16, 2020 at 05:47:56PM +1000, Christopher James Halse Rogers wrote:
> 
> 
> On Wed, Apr 15, 2020 at 14:27, Simon Ser  wrote:
> > Hi,
> > 
> > On Monday, April 13, 2020 1:59 AM, Peter Hutterer
> >  wrote:
> > >  Hi all,
> > > 
> > >  This is a request for comments on the exact requirements for protocol
> > >  backwards compatibility for clients binding to new versions of an
> > > interface.
> > >  The reason for this is the high-resolution wheel scrolling patches:
> > >  https://gitlab.freedesktop.org/wayland/wayland/-/merge_requests/72
> > > 
> > >  Specifically, the question is: do we **change** protocol elements or
> > >  behaviour as the interface versions increase? A few random examples:
> > 
> > What we can't do is:
> > 
> > - Change existing messages' signature
> > - Completely remove a message
> 
> It should be relatively easy to modify wayland-scanner to support both of
> these things, *if* we decide that it's a reasonable thing to do. (You'd do
> something like add support for a removed_in="5" attribute on messages and
> the like)
> 
> > 
> > >  - event wl_foo.bar introduced in version N sends a wl_fixed in
> > >surface coordinates. version N+1 changes this to a normalized
> > >[-1, +1] range.
> > 
> > Argument types can't be changed. This would be a breaking change for the
> > generated code, we can't do that.
> 
> But this isn't changing the argument type; it's changing the interpretation
> of the argument.
> In both cases the type is wl_fixed; in the first you interpret this wl_fixed
> as being in surface coordinates, in the second you interpret it differently.
> 
> This doesn't require any changes to code generation; I don't think this is
> (in principle) any more disruptive than changing “wl_foo.baz is sent exactly
> once” to “wl_foo.baz is sent zero or more times”, which you're happy with.
> 
> > 
> > >  - request wl_foo.bar introduced in version N takes an int. version N+1
> > >    changes wl_foo.bar to take a wl_fixed and an enum.
> > 
> > Ditto.
> > 
> > >  - request wl_foo.bar introduced in version N guaranteed to generate a single
> > >    event wl_foo.baz. if the client binds to version N+1 that event may be
> > >    sent zero, one or multiple times.
> > 
> > This is fine.
> > 
> > >  I think these examples cover a wide-enough range of the possible changes.
> > >
> > >  My assumption was that we only ever add new requests/events but never change
> > >  existing behaviour. So wl_foo.bar introduced in version N will always have
> > >  the same behaviour for any interface N+m.
> > 
> > We can change existing requests' behaviour. This has already been done a
> > number of times, see e.g. wl_data_offer.accept or xdg_output.description.
> > 
> > Clients should always have a max-version, i.e. they should never
> > blindly bind to the compositor's version.
> > 
> > What is also fine is marking a message as "deprecated from version N".
> > Such a message wouldn't be sent anymore starting from this version.
> > 
> > >  I've seen some pushback for above linked patchset because it gets
> > >  complicated and suggestions to just change the current interface.
> > >  The obvious advantage is being able to clean up any mess in the protocol.
> > >
> > >  The disadvantages are the breakage of backwards compatibility with older
> > >  versions. You're effectively forcing every compositor/client to change the
> > >  code based on the version number, even where it's not actually needed. Or,
> > >  IOW, a client may want a new feature in N+2 but now needs to implement all
> > >  changes from N+1 since they may change the behaviour significantly.
> > 
> 
> This is the meat of the question - all of the changes described are
> technically fairly simple to implement.

Yes, I agree, this is more a "political" choice or, as you say, a question
of what we limit ourselves to.
 
> To some extent this is a question of self-limitations. As has been
> mentioned, protocols have *already* been deliberately broken in this way,
> and people are happy enough with that. As long as we're mindful of the cost
> such changes impose, I think that having the technical capability to make
> such changes is of benefit - for example, rather than marking a message as
> “deprecated from version N” I think it would be preferable to just not have
> the message in the listener struct. (Note that I'm not volunteering to
> *implement* that capability, and there are probably more valuable things to
> work on, but if it magically appeared without any effort it'd be nice to
> have that capability).

I'd even argue that the hard-breaking changes are safer since they
definitely throw up warnings and/or break compilation, whereas the subtle
behaviour changes will quietly fly under the radar, e.g. the value range
change (which, not coincidentally, is what we're talking about here).

But yeah, it still comes down to "what are we happy with" which, ideally,
is some sort of consensus.
 

Re: Protocol backwards compatibility requirements?

2020-04-16 Thread Christopher James Halse Rogers

On Wed, Apr 15, 2020 at 14:27, Simon Ser  wrote:

> Hi,
>
> On Monday, April 13, 2020 1:59 AM, Peter Hutterer  wrote:
>
> >  Hi all,
> >
> >  This is a request for comments on the exact requirements for protocol
> >  backwards compatibility for clients binding to new versions of an
> >  interface.
> >
> >  The reason for this is the high-resolution wheel scrolling patches:
> >  https://gitlab.freedesktop.org/wayland/wayland/-/merge_requests/72
> >
> >  Specifically, the question is: do we **change** protocol elements or
> >  behaviour as the interface versions increase? A few random examples:
>
> What we can't do is:
>
> - Change existing messages' signature
> - Completely remove a message


It should be relatively easy to modify wayland-scanner to support both
of these things, *if* we decide that it's a reasonable thing to do.
(You'd do something like add support for a removed_in="5" attribute on
messages and the like)





> >  - event wl_foo.bar introduced in version N sends a wl_fixed in
> >    surface coordinates. version N+1 changes this to a normalized
> >    [-1, +1] range.
>
> Argument types can't be changed. This would be a breaking change for the
> generated code, we can't do that.


But this isn't changing the argument type; it's changing the 
interpretation of the argument.
In both cases the type is wl_fixed; in the first you interpret this 
wl_fixed as being in surface coordinates, in the second you interpret 
it differently.


This doesn't require any changes to code generation; I don't think this 
is (in principle) any more disruptive than changing “wl_foo.baz is 
sent exactly once” to “wl_foo.baz is sent zero or more times”, 
which you're happy with.




> >  - request wl_foo.bar introduced in version N takes an int. version N+1
> >    changes wl_foo.bar to take a wl_fixed and an enum.
>
> Ditto.
>
> >  - request wl_foo.bar introduced in version N guaranteed to generate a single
> >    event wl_foo.baz. if the client binds to version N+1 that event may be
> >    sent zero, one or multiple times.
>
> This is fine.
>
> >  I think these examples cover a wide-enough range of the possible changes.
> >
> >  My assumption was that we only ever add new requests/events but never change
> >  existing behaviour. So wl_foo.bar introduced in version N will always have
> >  the same behaviour for any interface N+m.
>
> We can change existing requests' behaviour. This has already been done a
> number of times, see e.g. wl_data_offer.accept or xdg_output.description.
>
> Clients should always have a max-version, i.e. they should never blindly
> bind to the compositor's version.
>
> What is also fine is marking a message as "deprecated from version N".
> Such a message wouldn't be sent anymore starting from this version.


> >  I've seen some pushback for above linked patchset because it gets
> >  complicated and suggestions to just change the current interface.
> >  The obvious advantage is being able to clean up any mess in the protocol.
> >
> >  The disadvantages are the breakage of backwards compatibility with older
> >  versions. You're effectively forcing every compositor/client to change the
> >  code based on the version number, even where it's not actually needed. Or,
> >  IOW, a client may want a new feature in N+2 but now needs to implement all
> >  changes from N+1 since they may change the behaviour significantly.




This is the meat of the question - all of the changes described are 
technically fairly simple to implement.


To some extent this is a question of self-limitations. As has been 
mentioned, protocols have *already* been deliberately broken in this 
way, and people are happy enough with that. As long as we're mindful of 
the cost such changes impose, I think that having the technical 
capability to make such changes is of benefit - for example, rather 
than marking a message as “deprecated from version N” I think it 
would be preferable to just not have the message in the listener 
struct. (Note that I'm not volunteering to *implement* that capability, 
and there are probably more valuable things to work on, but if it 
magically appeared without any effort it'd be nice to have that 
capability).


The status quo is that we're happy (perhaps accidentally) with 
requiring a client to implement all changes from N+1 in order to get 
something from N+2. I think whether or not that's ok is a case-by-case 
decision. How difficult is it for clients to implement N+1? How much 
simpler does the break make protocol version N+1? If it's trivial for 
clients to handle and makes the protocol significantly simpler, I think 
it's obvious that we *should* make the break; likewise, if it's likely 
to be difficult for clients to handle and doesn't make N+1 much 
simpler, it's obvious that we *shouldn't*.


For the specific case at hand, it doesn't seem like it would be 
particularly difficult for clients to handle axis events changing 
meaning in version 8, and it looks like the protocol would be 
substantially simpler without the interaction between axis_v120, axis, 
and axis_discrete.


