On Fri, 01.09.23 13:13, Christian Pernegger (perneg...@gmail.com) wrote:

> Of course, if you want to take the position that it's a bit weird for
> GNOME to use /dev/rfkill to detect the presence of BT devices, I can't
> argue against that. :)

Doesn't NM/bluez manage these things from privileged code anyway? Is
this really done from inside the GNOME UI with direct device access?

> (From a use case perspective, it would be nice if paired BT devices
> could somehow be tagged. I.e. so that each seat can pair devices and
> manage them, but not see or manage ones paired by other seats and/or
> users.)

Yeah, it would be great if bluez gained native multi-seat support,
i.e. if it tracked seat assignments for paired devices. But that's
something to request from bluez upstream, not systemd.

> > You cannot attach devices to multiple seats.
>
> Roger that. Is there a way to exempt devices from the multiseat
> mechanism, though? Mark them not seat-specific? Or is that
> hard-coded?

Change the udev rules to not set the "seat" udev tag on the relevant
device. That's what decides whether seat management is done for the
device.

> > You should be able to assign the device to a different seat though.
>
> loginctl attach won't let me, at least not using the path seat-status
> spits out. But I'm sure the version of systemd in Ubuntu 22.04 is
> ancient, and/or they may have done something to it. If you like, I can
> try whether adding a udev rule manually works, but personally I'm not
> too bothered about this particular issue.

So the problem is that the rfkill device does not carry the
ID_FOR_SEAT property right now; we only add that for pci/usb/…
devices, i.e. the usual buses. rfkill, being a virtual device, doesn't
carry that property.

That property carries the string identifier we should use for
identifying the device for seat management purposes. It's usually
derived from the path ID of the device.
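
For a device that does carry it, this looks roughly like the
following (illustrative output, exact values will differ per machine):

    $ udevadm info /sys/class/drm/card0 | grep -e ID_PATH= -e ID_FOR_SEAT=
    E: ID_PATH=pci-0000:00:02.0
    E: ID_FOR_SEAT=drm-pci-0000_00_02_0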

To make rfkill manageable via "loginctl attach" would mean adding such
a property there. Happy to take a patch for that; please submit it via
GitHub.
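
Very roughly, such a patch might extend the seat rules along these
lines (untested sketch; the match and the derived identifier are
assumptions, and deriving a stable ID for a virtual device is the
part that needs thought):

    # Sketch only: borrow the parent device's path ID for rfkill,
    # assuming the parent sits on a real bus (pci/usb/...).
    SUBSYSTEM=="rfkill", ENV{ID_PATH}=="", IMPORT{builtin}="path_id"
    SUBSYSTEM=="rfkill", ENV{ID_PATH_TAG}!="", \
        ENV{ID_FOR_SEAT}="rfkill-$env{ID_PATH_TAG}"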

> > that's how things work and how people assume them to work: graphics
> > render services are used to bring stuff to the screen.
>
> I don't know about this. Yes, seat1 could hog the GPU that seat0's
> outputs are attached to, or vice versa, but seat1 could just as well
> hog all the RAM or saturate the CPU. My point being: seats already
> share the host's CPU power, RAM, etc., so why not the
> rendering/compute power as well? IMVHO it's really just inputs and
> outputs that should be seat-specific. Restricting the shared
> resources available to a given seat, allocating them fairly, etc.,
> is a different problem (and arguably one I'd tackle per user, not
> per seat).

CPU and RAM are resource-managed by default, i.e. each logged-in user
gets a similar share under pressure, as controlled via the cgroup
logic.
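
You can inspect (and override) this on the per-user slice unit, e.g.
(uid 1000 and the values shown are just examples):

    $ systemctl show user-1000.slice -p CPUWeight -p MemoryMax
    CPUWeight=100
    MemoryMax=infinity

    $ sudo systemctl set-property user-1000.slice CPUWeight=50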

This is different for GPU resources; there's no such resource
management for them.

Lennart

--
Lennart Poettering, Berlin
