Re: [systemd-devel] starting processes for other users

2015-08-03 Thread Spencer Baugh
Colin Guthrie gm...@colin.guthr.ie writes:

 Michał Zegan wrote on 31/07/15 12:37:
 The thing is, if the user does it, then after he leaves, the process
 is running under the user's session.
 If I log in to my own account, su to the other user and start the
 process and then logout, this process, even though running as the
 other user, is in my own session.
 Actually it is sometimes confusing to see utmp entries saying
 different things than loginctl ;)
 

 Using tools like su rarely does what you expect. It doesn't start a
 new PAM session and doesn't start a systemd --user instance, etc. etc.

Is there a tool like su that does do that? That is, a way to switch from
root to another user without authenticating, that does start a PAM
session and register with logind and all of that. That's something that
would be useful, if it's possible...
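
(To illustrate the shape of what I'm after -- hypothetical usage, since
I don't know of a tool that does all of this today:

  machinectl shell otheruser@

i.e. open a shell as otheruser in a fresh PAM session that gets
registered with logind, rather than nested inside my own session.)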


Re: [systemd-devel] Confusing error message

2015-07-14 Thread Spencer Baugh
Perhaps if there is an issue with polkit (or permissions in general), we should
always print something like "Unable to perform action without privileges; try
again with sudo." in addition to the polkit message.
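
To sketch the idea (hypothetical systemctl code, untested, just to show
where such a hint could hang off the existing sd-bus error):

  /* Hypothetical: if authorization failed because nothing owns the
   * polkit bus name, print a human-oriented hint as well. */
  if (sd_bus_error_has_name(&error, SD_BUS_ERROR_SERVICE_UNKNOWN))
          log_info("Unable to perform action without privileges; "
                   "try again with sudo.");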

On July 14, 2015 12:59:53 AM PDT, David Herrmann dh.herrm...@gmail.com wrote:
Hi

On Tue, Jun 23, 2015 at 4:28 AM, Johannes Ernst
johannes.er...@gmail.com wrote:
 $ systemctl restart systemd-networkd
 Failed to restart systemd-networkd.service: The name
org.freedesktop.PolicyKit1 was not provided by any .service files

 $ sudo systemctl restart systemd-networkd
 Works.

 Presumably this error message could be improved, in particular
because that name is indeed not provided by any .service files :-)

So if you're not root, systemctl needs to ask polkit to perform
authorization. It does this by sending a D-Bus message to polkit. If
that well-known bus name is not owned by anyone, the error message in
question gets returned. So with inside knowledge, it does make sense ;)
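
(Assuming busctl is available, one way to see that state from the
outside is to ask who owns the name:

  busctl status org.freedesktop.PolicyKit1

which fails in much the same way when nothing owns it.)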

Regarding changing this: For debug purposes, it is highly valuable to
know the cause of failure. This message clearly tells a developer what
went wrong. Not sure we want to change this. Or more importantly, I'm
not entirely sure it is easy to change this, as this error is
generated deep down in the polkit code.
We could just throw that message away and always return EPERM. Not
sure it's worth it, though.

Thanks
David

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [systemd-devel] Setting up network interfaces for containers with --private-network

2015-04-22 Thread Spencer Baugh
Lennart Poettering lenn...@poettering.net writes:
 On Tue, 21.04.15 15:22, Spencer Baugh (sba...@catern.com) wrote:

  Also, trivial static IP configuration is seldom sufficient, you at
  least need to also provide DNS configuration, and if you don't use
  DHCP or something similar then you need to configure that inside the
  container anyway. But if you do that you might as well configure the
  static IP addresses in it too, so what is gained by doing this from a
  networkd outside of the container?
 
  Or am I misunderstanding the role of networkd? It seems like if I am
  writing a service that represents the network interface and namespace
  for this container, I am doing something that networkd should
  ultimately do.
 
  Sure, absolutely. But our idea so far was that networkd should run
  inside the container to configure the container's network, and on the
  host to configure the host's network, but not to cross this boundary
  and have the host networkd configure the container's network.
 
 Hmm, yes, but I think the problem is the configuration done at
 interface-creation-time. It seems to me that that configuration
 currently does not fit naturally in either the host networkd or the
 container networkd.

 Well, again, I doubt that configuration exclusively at
 interface-creation-time will be useful for more than the most trivial
 cases, already because as mentioned it would not cover DNS server
 configuration and the like.

Sure, I'm in agreement, but there are things that can only be
configured at interface-creation-time. As a trivial example, the type
of the interface. And it looks like for ipvlan you can only choose L2 or
L3 mode at creation-time.
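
For example, as a networkd .netdev -- if I'm reading the docs right,
the mode can only be picked here, when the device is created:

  [NetDev]
  Name=ipvl0
  Kind=ipvlan

  [IPVLAN]
  Mode=L2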

Additionally, it seems to me that the MAC address, at least, is
something that you might want to configure only at creation-time and not
change later.

The first two examples are necessary; the third example, the MAC
address, is just nice-to-have. But since there are things that actually
really must be configured at creation-time, it seems that eventually it
will be necessary to figure out a best practice or mechanism for such
configuration. And that method might as well also allow the nice-to-have
of configuring the MAC address at creation time.

 If you really want fixed IP addresses, I think this could work:

 We add configurability for the DHCP server address range in networkd,
 including taking ranges that contain a single IP address. You could
 then assign fixed addresses to your containers simply by dropping a
 .network snippet for them, that only contains a single dhcp range IP
 address for it. That should work, no?

This would be a nice feature and would work for many use cases.
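
Hypothetically (these keys don't exist today, and the names are made
up), I imagine a per-container host-side snippet like:

  # ve-mycontainer.network
  [Match]
  Name=ve-mycontainer

  [Network]
  DHCPServer=yes

  [DHCPServer]
  PoolOffset=10
  PoolSize=1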

My problem with this solution is that I'm using IPv6. With IPv6, I can
set up stateless autoconfiguration to assign IPs to interfaces.  In
that case the IP is directly determined by the MAC address, so it is not
possible (AFAIK?) to specify a range of IPs of length 1 to force that
single IP to be assigned to a specific interface.

I would somewhat prefer to be using this feature of IPv6, rather than
using DHCPv6; and anyway, networkd doesn't support DHCPv6 right now,
right? So this doesn't necessarily work for me.

The mapping from MAC address to IPv6 address is just a matter of
encoding the MAC address in the last 64 bits of the IPv6 address.  So if
I know the MAC address of the container, and maybe listen a little on
the network interface to see what routing prefix is being advertised, I
know the IPv6 address that the container will get. Maybe something
(networkd, networkctl, machined, machinectl?) could reveal this
information? Just an idea.
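
As a worked example, using the address from my test logs: take the MAC
00:01:02:aa:bb:cd, insert ff:fe in the middle, flip the universal/local
bit of the first octet, and prepend the advertised /64 prefix:

  MAC:      00:01:02:aa:bb:cd
  EUI-64:   02:01:02:ff:fe:aa:bb:cd
  prefix:   2001:470:8:9d::/64
  address:  2001:470:8:9d:201:2ff:feaa:bbcd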

The stateless autoconfiguration of IPv6 seems pretty similar to
systemd's own aims; it would be nice if they worked well together.


Re: [systemd-devel] Setting up network interfaces for containers with --private-network

2015-04-22 Thread Spencer Baugh
Lennart Poettering lenn...@poettering.net writes:
 On Wed, 22.04.15 13:41, Spencer Baugh (sba...@catern.com) wrote:
  Lennart Poettering lenn...@poettering.net writes:
   Well, again, I doubt that configuration exclusively at
   interface-creation-time will be useful for more than the most trivial
   cases, already because as mentioned it would not cover DNS server
   configuration and the like.
  
  Sure, I'm in agreement, but there are things that can only be
  configured at interface-creation-time. As a trivial example, the type
  of the interface. And it looks like for ipvlan you can only choose L2 or
  L3 mode at creation-time.
  
  Additionally, it seems to me that the MAC address, at least, is
  something that you might want to configure only at creation-time and not
  change later.

 networkd considers the MAC address something that can be changed for
 the network you connect to, and that can hence change dynamically
 depending on the network you pick.

 Hmm, there's indeed a gap here though: as .link files are applied by
 udev, not by networkd, and udev is not available in containers, .link
 files are currently not applied at all in containers. It might be an
 option to do this within networkd instead of udev when it is run in a
 container.

 Of course, this will then only cover the props that can be changed
 with .link files, not the ones that have to be specified at link
 creation time.

 For those, things are nasty. I mean, for things like ipvlan which
 relate to an interface outside of the container, this means we need to
 create them outside of the container, and thus they need to be
 configured initially outside of the container.

 So far we handled all this inside .netdev files, that are processed by
 networkd. We have limited support for creating these devices also with
 nspawn (i.e. the --network-veth, --network-macvlan and
 --network-ipvlan switches), but I'd be really careful with turning
 them into more than just basic switches; I'd really leave more complex
 cases with networkd.

 Now, I am not really sure how we could hook this up... After all,
 as you say, it will not be sufficient to create the netdevs once before
 nspawn uses them; they need to be created each time before
 nspawn uses them.

 As soon as networkd gains a bus interface maybe an option could be to
 hook up nspawn's --network-interface= with it: if the specified
 interface doesn't exist, nspawn could synchronously ask networkd to
 create it. With that in place you could then configure .netdev files
 outside of the container, and neatly pass them on into the container,
 without races. Would that fix your issue?

Yes, that sounds like it would work. This would destroy and recreate the
interface on reboot, which is fine for my use case.

There might at some point be a desire by someone else to have the
interface not be destroyed on reboot. At that point it would just
require teaching networkd something about network namespaces, which
shouldn't be hard. I don't want that myself, of course.

As a stopgap measure until the feature you described is ready, I'll use
the .service with PrivateNetwork= and JoinsNamespaceOf= suggestion you
made earlier.
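
Roughly like this, as a sketch (names are illustrative, and moving the
veth peer back out to the host's namespace is elided):

  # mycontainer-net.service
  [Service]
  Type=oneshot
  RemainAfterExit=yes
  PrivateNetwork=yes
  ExecStart=/usr/bin/ip link add host0 type veth peer name ext0
  ExecStart=/usr/bin/ip link set host0 address 00:01:02:aa:bb:cd

  # mycontainer.service
  [Unit]
  JoinsNamespaceOf=mycontainer-net.service

  [Service]
  PrivateNetwork=yes
  ExecStart=/usr/bin/systemd-nspawn -D /var/lib/machines/mycontainer -b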

  If you really want fixed IP addresses, I think this could work:
 
  We add configurability for the DHCP server address range in networkd,
  including taking ranges that contain a single IP address. You could
  then assign fixed addresses to your containers simply by dropping a
  .network snippet for them, that only contains a single dhcp range IP
  address for it. That should work, no?
 
 This would be a nice feature and would work for many use cases.
 
 My problem with this solution is that I'm using IPv6. With IPv6, I can
 set up stateless autoconfiguration to assign IPs to interfaces.  In
 that case the IP is directly determined by the MAC address, so it is not
 possible (AFAIK?) to specify a range of IPs of length 1 to force that
 single IP to be assigned to a specific interface.
 
 I would somewhat prefer to be using this feature of IPv6, rather than
 using DHCPv6; and anyway, networkd doesn't support DHCPv6 right now,
 right? So this doesn't necessarily work for me.

 True. It's certainly our plan to support it eventually.

That's in reference to just DHCPv6, right? What about stateless
autoconfiguration, out of curiosity?


Re: [systemd-devel] Setting up network interfaces for containers with --private-network

2015-04-21 Thread Spencer Baugh
Lennart Poettering lenn...@poettering.net writes:
 On Tue, 21.04.15 10:58, Spencer Baugh (sba...@catern.com) wrote:

  The MAC address is currently generated as a hash value from the
  container name; it hence should be completely stable already, as long
  as you keep using the same name for the container?
 
 Well, generally I want to know what MAC/IP address a machine/container
 will receive in advance of actually starting it. I could start it once
 and immediately stop it to see and record what MAC address is generated
 for a given name, or copy the code to generate the MAC address out of
 nspawn.c, but neither of those seem like good options.

 Sidenote: if this is about having stable names to refer to containers,
 note that nss-mymachines adds those automatically. If enabled, then
 all local container names will be resolvable, automatically. It's
 hence often unnecessary to have fixed IP addresses for this at all.

It is about stable names, but I believe those names need to be usable
from off the host.

  I am interested in using networkd to do these things, but at the moment
  it doesn't seem to have the required level of power.
 
  what do you mean precisely with this?
 
 I mean that instead of writing another service (probably a shell script)
 to set up the interface on the host using the PrivateNetwork= and
 JoinsNamespaceOf= trick, I'd have networkd set up the interface on
 the host inside a network namespace and use the same kind of trick.

 Well, I mean how useful would this actually be? This would only work
 for static configuration; everything more complex requires a daemon
 watching the interface continuously, and that's really hard to do for a
 set of network interfaces in a different network namespace.

All that I want to do is configuration that can be done at the time of
first creating the interface - like setting the MAC address. That is all
the script that I am using at the moment does; everything else is done
by networkd.
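
Concretely, the script amounts to little more than this (interface
names are illustrative):

  ip link add ve-ctr type veth peer name ve-host
  ip link set ve-ctr address 00:01:02:aa:bb:cd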

 Also, trivial static IP configuration is seldom sufficient, you at
 least need to also provide DNS configuration, and if you don't use
 DHCP or something similar then you need to configure that inside the
 container anyway. But if you do that you might as well configure the
 static IP addresses in it too, so what is gained by doing this from a
 networkd outside of the container?

 Or am I misunderstanding the role of networkd? It seems like if I am
 writing a service that represents the network interface and namespace
 for this container, I am doing something that networkd should
 ultimately do.

 Sure, absolutely. But our idea so far was that networkd should run
 inside the container to configure the container's network, and on the
 host to configure the host's network, but not to cross this boundary
 and have the host networkd configure the container's network.

Hmm, yes, but I think the problem is the configuration done at
interface-creation-time. It seems to me that that configuration
currently does not fit naturally in either the host networkd or the
container networkd.

 "Set up the interface" here just means create the interface with a
 specific MAC address, of course.

 Well, of course, we could beef up systemd-nspawn and allow it to take
 configurable IP or MAC addresses on the command line, and then it
 would become a second networkd, and we already have one of those...

Yes, but what else can configure the interfaces at creation time?


Re: [systemd-devel] Setting up network interfaces for containers with --private-network

2015-04-21 Thread Spencer Baugh
Lennart Poettering lenn...@poettering.net writes:

 On Mon, 20.04.15 22:50, Spencer Baugh (sba...@catern.com) wrote:
 Yes, in that case, it is of course very simple, but it is not at all
 configurable. I have one thing and one thing only that I want to
 configure: The IP address that a given container receives. This seems
 like a reasonable thing to want to configure; ultimately there have to
 be fixed IP addresses somewhere, and I have a use for containers that
 would benefit from having fixed IP addresses.
 
 The way I currently fix the IP address that the container receives is by
 fixing the MAC address of the veth; since I am using IPv6 and radvd, the
 IP address is deterministically generated from the MAC address. So it
 would be helpful if there were a way to fix the MAC address in
 nspawn. Would you accept a patch to add an option to nspawn to specify a
 MAC address for the veth? Or is there a better way to go about this?

 The MAC address is currently generated as a hash value from the
 container name; it hence should be completely stable already, as long
 as you keep using the same name for the container?

Well, generally I want to know what MAC/IP address a machine/container
will receive in advance of actually starting it. I could start it once
and immediately stop it to see and record what MAC address is generated
for a given name, or copy the code to generate the MAC address out of
nspawn.c, but neither of those seem like good options.

 maybe the ipvlan stuff could work for you?

It's possible, but then I'd be back to the situation where I need to
write a script to keep bringing up the ipvlan devices before starting
the container. Unless ipvlan devices don't disappear when the namespace
disappears?

  Another option could be to write a service that sets up the
  interface, uses PrivateNetwork=, and then uses JoinsNamespaceOf= on the
  container service towards that service, and turn off nspawn's own
  private networking switch. That way PID1 would already set up the
  joint namespace for your container, and ensure it is set up properly
  by your setup service. And as long as either the setup service or the
  container is running the network namespace will stay referenced.
 
 Hmm, that is an interesting approach... it would be nice to be able to
 have networkd set up the interface here, though.

 Well, it can, but only if you run it inside of the container. I am
 pretty sure the networkd of the host should not configure the
 interfaces inside of it...

 I am interested in using networkd to do these things, but at the moment
 it doesn't seem to have the required level of power.

 what do you mean precisely with this?

I mean that instead of writing another service (probably a shell script)
to set up the interface on the host using the PrivateNetwork= and
JoinsNamespaceOf= trick, I'd have networkd set up the interface on
the host inside a network namespace and use the same kind of trick.

Or am I misunderstanding the role of networkd? It seems like if I am
writing a service that represents the network interface and namespace
for this container, I am doing something that networkd should
ultimately do.

"Set up the interface" here just means create the interface with a
specific MAC address, of course.


Re: [systemd-devel] Setting up network interfaces for containers with --private-network

2015-04-20 Thread Spencer Baugh
Lennart Poettering lenn...@poettering.net writes:
 On Mon, 20.04.15 15:25, Spencer Baugh (sba...@catern.com) wrote:
 So far I'd recommend running networkd on the host and in the
 container. If you run it on the host, then it will automatically
 configure the host side of each of nspawn's veth links with a new IP
 range, and be a DHCP server on it, as well as do IP
 masquerading. Connectivity will hence just work, if you use networkd
 in most cases.

This is in the case where I use --network-bridge, right? Because
otherwise there is no veth to be automatically configured.
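
As far as I can tell, the piece that makes this just work is a .network
file shipped with systemd, something like 80-container-ve.network --
quoting from memory here, so the details may be off:

  [Match]
  Name=ve-*
  Driver=veth

  [Network]
  Address=0.0.0.0/28
  DHCPServer=yes
  IPMasquerade=yes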

Yes, in that case, it is of course very simple, but it is not at all
configurable. I have one thing and one thing only that I want to
configure: The IP address that a given container receives. This seems
like a reasonable thing to want to configure; ultimately there have to
be fixed IP addresses somewhere, and I have a use for containers that
would benefit from having fixed IP addresses.

The way I currently fix the IP address that the container receives is by
fixing the MAC address of the veth; since I am using IPv6 and radvd, the
IP address is deterministically generated from the MAC address. So it
would be helpful if there were a way to fix the MAC address in
nspawn. Would you accept a patch to add an option to nspawn to specify a
MAC address for the veth? Or is there a better way to go about this?

 Of course, if you want to manually set up things, without networkd or
 an equivalent service, then a lot of things will be more manual: one
 way could be to add some script to ExecStartPre= of the service to set
 things up for you each time you start the container.

 Another option could be to write a service that sets up the
 interface, uses PrivateNetwork=, and then uses JoinsNamespaceOf= on the
 container service towards that service, and turn off nspawn's own
 private networking switch. That way PID1 would already set up the
 joint namespace for your container, and ensure it is set up properly
 by your setup service. And as long as either the setup service or the
 container is running the network namespace will stay referenced.

Hmm, that is an interesting approach... it would be nice to be able to
have networkd set up the interface here, though.

I am interested in using networkd to do these things, but at the moment
it doesn't seem to have the required level of power.


Re: [systemd-devel] Socket activation of container with private network

2015-04-20 Thread Spencer Baugh
Lennart Poettering lenn...@poettering.net writes:
 On Mon, 20.04.15 13:01, Spencer Baugh (sba...@catern.com) wrote:
 Lennart Poettering lenn...@poettering.net writes:
  Hmm, so you say the initial connection does not work but triggers the
  container, but the subsequent one will?
 
 Not quite; the initial connection seems to actually make it to sshd, as
 sshd has logs of getting it, but the connection is interrupted at some
 point by something before anything useful can be done.
 Subsequent connections indeed work fine.

 Interrupted? What precisely does sshd in the container log about the
 connection?

I've just noticed that there are in fact two cases: The case where I
first ssh from the host to the container, and the case where I first ssh
from another unrelated machine with IPv6 connectivity to the
container. Neither works, but they do appear to have different
behavior. In both cases, all subsequent ssh connections work fine no
matter where they originate from. Here are logs for both cases, both ssh
and sshd side.

Case of sshing from the host to the container:
Both sides are hung at the end of these logs.

# Log of ssh - on the host
  root@ipv6-test:~# ssh -vv 2001:470:8:9d:201:2ff:feaa:bbcd -p 23
  OpenSSH_6.7p1 Debian-3, OpenSSL 1.0.1k 8 Jan 2015
  debug1: Reading configuration data /etc/ssh/ssh_config
  debug1: /etc/ssh/ssh_config line 19: Applying options for *
  debug2: ssh_connect: needpriv 0
  debug1: Connecting to 2001:470:8:9d:201:2ff:feaa:bbcd 
[2001:470:8:9d:201:2ff:feaa:bbcd] port 23.
  debug1: Connection established.
  debug1: permanently_set_uid: 0/0
  debug1: key_load_public: No such file or directory
  debug1: identity file /root/.ssh/id_rsa type -1
  debug1: key_load_public: No such file or directory
  debug1: identity file /root/.ssh/id_rsa-cert type -1
  debug1: key_load_public: No such file or directory
  debug1: identity file /root/.ssh/id_dsa type -1
  debug1: key_load_public: No such file or directory
  debug1: identity file /root/.ssh/id_dsa-cert type -1
  debug1: key_load_public: No such file or directory
  debug1: identity file /root/.ssh/id_ecdsa type -1
  debug1: key_load_public: No such file or directory
  debug1: identity file /root/.ssh/id_ecdsa-cert type -1
  debug1: key_load_public: No such file or directory
  debug1: identity file /root/.ssh/id_ed25519 type -1
  debug1: key_load_public: No such file or directory
  debug1: identity file /root/.ssh/id_ed25519-cert type -1
  debug1: Enabling compatibility mode for protocol 2.0
  debug1: Local version string SSH-2.0-OpenSSH_6.7p1 Debian-3
  
# logs of sshd inside the container, when sshing from host
  root@ipv6-container:/# journalctl -u sshd*
  -- Logs begin at Mon 2015-04-20 18:08:32 UTC, end at Mon 2015-04-20 18:08:33 
UTC. --
  Apr 20 18:08:32 ipv6-container systemd[1]: Starting SSH Per-Connection Server 
for 0 ([2001:470:8:9d:201:2ff:feaa:bbcd]:38383)...
  Apr 20 18:08:32 ipv6-container systemd[1]: Started SSH Per-Connection Server 
for 0 ([2001:470:8:9d:201:2ff:feaa:bbcd]:38383).
  Apr 20 18:08:32 ipv6-container sshd[57]: debug1: inetd sockets after dupping: 
3, 4
  Apr 20 18:08:32 ipv6-container sshd[57]: Connection from 
2001:470:8:9d:201:2ff:feaa:bbcd port 38383 on 2001:470:8:9d:201:2ff:feaa:bbcd 
port 23
  Apr 20 18:08:32 ipv6-container sshd[57]: debug1: Client protocol version 2.0; 
client software version OpenSSH_6.7p1 Debian-3
  Apr 20 18:08:32 ipv6-container sshd[57]: debug1: match: OpenSSH_6.7p1 
Debian-3 pat OpenSSH* compat 0x0400
  Apr 20 18:08:32 ipv6-container sshd[57]: debug1: Enabling compatibility mode 
for protocol 2.0
  Apr 20 18:08:32 ipv6-container sshd[57]: debug1: Local version string 
SSH-2.0-OpenSSH_6.7p1 Debian-5
  Apr 20 18:08:32 ipv6-container sshd[57]: debug2: fd 3 setting O_NONBLOCK
  Apr 20 18:08:32 ipv6-container sshd[57]: debug3: fd 4 is O_NONBLOCK
  Apr 20 18:08:32 ipv6-container sshd[57]: debug2: Network child is on pid 64
  Apr 20 18:08:32 ipv6-container sshd[57]: debug3: preauth child monitor started
  Apr 20 18:08:32 ipv6-container sshd[57]: debug3: privsep user:group 104:65534 
[preauth]
  Apr 20 18:08:32 ipv6-container sshd[57]: debug1: permanently_set_uid: 
104/65534 [preauth]
  Apr 20 18:08:32 ipv6-container sshd[57]: debug1: list_hostkey_types: 
ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]
  Apr 20 18:08:32 ipv6-container sshd[57]: debug1: SSH2_MSG_KEXINIT sent 
[preauth]

Case of sshing from an unrelated machine to the container:
The ssh side terminates with the error at the end, but the sshd side
appears to just hang.

# logs of ssh - on unrelated machine
  root@lxc0:~# ssh -vv 2001:470:8:9d:201:2ff:feaa:bbcd -p 23
  OpenSSH_6.7p1 Debian-5, OpenSSL 1.0.1k 8 Jan 2015
  debug1: Reading configuration data /etc/ssh/ssh_config
  debug1: /etc/ssh/ssh_config line 19: Applying options for *
  debug2: ssh_connect: needpriv 0
  debug1: Connecting to 2001:470:8:9d:201:2ff:feaa:bbcd 
[2001:470:8:9d:201:2ff:feaa:bbcd] port 23.
  debug1: Connection

[systemd-devel] Setting up network interfaces for containers with --private-network

2015-04-20 Thread Spencer Baugh
Hi,

Currently, I can manually set up (or set up with a script) a veth, then
move it in to a systemd-nspawn container with
--network-interface. However, if the container tries to restart (or
exits and needs to be restarted), the network namespace of the container
is destroyed and therefore so is the veth that that namespace
contains. Thus, if the interface isn't recreated by some external force
in the time between stopping and restarting, the next invocation of
systemd-nspawn --network-interface=someif will fail.

To state the problem again more concretely, if I create a veth, assign
one end of the veth to a container started with systemd-nspawn
--network-interface=veth0, then run reboot inside the container, the
container will shut down and fail to come back up, as veth0 is
destroyed.
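
That is, with illustrative names:

  ip link add veth0 type veth peer name veth0-host
  systemd-nspawn -D /var/lib/machines/test --boot --network-interface=veth0
  # run "reboot" inside the container: it shuts down, veth0 is destroyed
  # along with its namespace, and the next start fails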

Perhaps I could hack up some shell script to wrap systemd-nspawn and
make sure that whenever I run it, an appropriate network interface is
created before actually running systemd-nspawn
--network-interface=someif, but I don't think that's really the best
approach.

I could use --network-bridge on a bridge and get the veth made
automatically, as long as all I wanted to do was add the other end of
the veth to a bridge, and it would be remade whenever the container
restarted. But I think this might be the wrong place for this magic to
live; it's not very configurable and seems rather strange to have
systemd-nspawn doing ad-hoc networking tasks like this.

I'm curious about how this should be done; it seems very important for
serious use of containers.  Perhaps networkd could be used to do
something here? What is the best practice?

Thanks,
Spencer Baugh


Re: [systemd-devel] [PATCH] systemctl: print unit package in status

2014-12-18 Thread Spencer Baugh
Quoting Jóhann B. Guðmundsson (2014-12-18 04:08:32)
 
 On 12/18/2014 04:00 AM, Spencer Baugh wrote:
  When printing the status of a unit, open a connection to the session bus
  and query PackageKit for the package that the unit file belongs
  to. Print it if PackageKit knows.
 
 There are a gazillion package managers in the wild

PackageKit is a generic interface to the local package manager, used
by all the major distros and desktop environments. It's installed by
default on any normal desktop/laptop. So this is different from
hardcoding a call out to yum or apt.

 and this will
 significantly delay the output of the status command, which makes this
 something you should be carrying downstream.

It adds 800ms to the output on my system. Still, adding a command line
flag to enable/disable this behavior would be good. If other
slower-than-usual operations are added, we might want to
enable/disable them with the same flag.  Suggestions on a flag name
that's appropriate for that behavior?


Re: [systemd-devel] [PATCH] systemctl: print unit package in status

2014-12-18 Thread Spencer Baugh
Quoting Kay Sievers (2014-12-18 15:04:22)
 On Thu, Dec 18, 2014 at 8:19 PM, Zbigniew Jędrzejewski-Szmek
 zbys...@in.waw.pl wrote:
  On Thu, Dec 18, 2014 at 07:09:34PM +, Jóhann B. Guðmundsson wrote:
 
  On 12/18/2014 06:44 PM, Jóhann B. Guðmundsson wrote:
  
  On 12/18/2014 06:36 PM, Zbigniew Jędrzejewski-Szmek wrote:
  You missed the part where I said you should make it opt-in.
  
  Should we not first determine the practicality of implementing
  this and if the system service manager should actually be looking
  up this info to begin with?
  
  We could not add the ability to define the upstream homepage in
   the status output, but we can now clutter the status output with the
   name of a package?
 
  This could be implemented without the overhead and conflict as an
  extension to the output listed with systemctl list-unit-files if
   opt-in.
  That's a valid point. list-unit-files seems to be a better home
  for this.
 
 The systemd command line tools are not supposed to call into
 higher-level daemons to query data. This sounds like the wrong way
 around. It sounds like someone should teach packagekit about systemd
 units.
 
 Also, systemd does not want to get involved in any concept of
 packages. It is what distributions are made of, but this is not
 systemd's task to manage or describe.

systemctl already directly invokes man to read man pages, despite
that being just one way among many to maintain documentation.


[systemd-devel] [PATCH] systemctl: print unit package in status

2014-12-17 Thread Spencer Baugh
When printing the status of a unit, open a connection to the session bus
and query PackageKit for the package that the unit file belongs
to. Print it if PackageKit knows.
---
 src/systemctl/systemctl.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/src/systemctl/systemctl.c b/src/systemctl/systemctl.c
index 4c4648f..ea2772b 100644
--- a/src/systemctl/systemctl.c
+++ b/src/systemctl/systemctl.c
@@ -3381,6 +3381,8 @@ typedef struct UnitStatusInfo {
         const char *source_path;
         const char *control_group;
 
+        const char *package_name;
+
         char **dropin_paths;
 
         const char *load_error;
@@ -3507,6 +3509,9 @@ static void print_status_info(
         printf("   Loaded: %s%s%s\n",
                on, strna(i->load_state), off);
 
+        if (i->package_name)
+                printf("  Package: %s\n", i->package_name);
+
         if (!strv_isempty(i->dropin_paths)) {
                 _cleanup_free_ char *dir = NULL;
                 bool last = false;
@@ -4384,6 +4389,11 @@ static int show_one(
 
         _cleanup_bus_message_unref_ sd_bus_message *reply = NULL;
         _cleanup_bus_error_free_ sd_bus_error error = SD_BUS_ERROR_NULL;
+
+        _cleanup_bus_close_unref_ sd_bus *user_bus = NULL;
+        _cleanup_bus_message_unref_ sd_bus_message *packagekit_reply = NULL;
+
+        const char *file_path;
         UnitStatusInfo info = {};
         ExecStatusInfo *p;
         int r;
@@ -4453,6 +4463,28 @@
         if (r < 0)
                 return bus_log_parse_error(r);
 
+        file_path = info.source_path ? info.source_path : info.fragment_path;
+        if (file_path) {
+                /* we frequently can't get the user bus, nor call PackageKit, so don't complain on error */
+                sd_bus_default_user(&user_bus);
+                if (user_bus) {
+                        sd_bus_call_method(
+                                        user_bus,
+                                        "org.freedesktop.PackageKit",
+                                        "/org/freedesktop/PackageKit",
+                                        "org.freedesktop.PackageKit.Query",
+                                        "SearchFile",
+                                        NULL,
+                                        &packagekit_reply,
+                                        "ss", file_path, 0);
+                        if (packagekit_reply) {
+                                r = sd_bus_message_read(packagekit_reply, "bs", NULL, &info.package_name);
+                                if (r < 0)
+                                        return bus_log_parse_error(r);
+                        }
+                }
+        }
+
         r = 0;
 
         if (!show_properties) {
-- 
2.1.3
