Re: [systemd-devel] Per session systemd?

2014-03-03 Thread David Herrmann
Hi

On Mon, Mar 3, 2014 at 8:30 AM, Yuxuan Shui yshu...@gmail.com wrote:
 Hi,

 This mail might be a little bit late for the topic, but I would like
 to share my thoughts anyway.

 Before systemd 206 was released, there were a few users (I don't know
 how many of them there are, but there's a page about it on the Arch
 Linux wiki, so I guess it's a well-known use case) who used systemd to
 manage their sessions. For example, they would exec systemd in their
 ~/.xinitrc, and have systemd start all their X applications.

systemd --user is started by PAM for any proper user login. So you
could fix that by just using a proper login manager (if you don't want
the big ones, there's small stuff like 'slim', too). Even if you don't
want these, I still recommend doing a PAM authentication in your
startup script. This might even be via pam_rootok so you don't have to
enter any password.
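
(For illustration, a minimal PAM stack for such a startup script could
look like the following sketch; the service name is hypothetical, and
pam_systemd.so is the module that registers the session with logind and
pulls in systemd --user:)

# /etc/pam.d/xinit-session  -- hypothetical service name
auth     sufficient  pam_rootok.so
auth     required    pam_unix.so
account  required    pam_unix.so
session  required    pam_unix.so
session  optional    pam_systemd.so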

 I know this kind of use case has never been explicitly supported by
 systemd, but it was a really nice _accidental_ feature. However, after
 the cgroup changes made in the systemd 206 release, it became
 impossible to use systemd in such a way.

 I saw some user complaints, but the systemd developers seemed unmoved.
 Maybe because the original purpose of systemd --user is to start
 per-user systemd instances. There are hacks to make systemd usable
 under an X session. But that's very complicated, and contains many
 pitfalls (users have to set a lot of environment variables, and this
 makes logind unhappy since the systemd user instance is not in the
 same session as X). Besides, there are reasonable use cases which can't
 be covered by a per-user systemd instance, like periodically starting
 a graphical application.

Why is that not possible with per-user instances?

 So, I wrote a very dirty hack for my systemd, and have been using it
 until today. I added a 'User=' property to a session, and have systemd
 chown the cgroup to the given user, so I can start systemd in my
 .xinitrc as I used to. I admit this is probably a very bad hack, and
 I'm not sure if it will still work after the upcoming cgroup
 rework.

 That's why I'm writing this mail. I want to point out the reasons
 behind using systemd as a session manager, so you will probably
 understand why I want to do this and help me, since I can't get this
 done by myself with my limited systemd knowledge.

 Any help will be appreciated. It would be even better if you could
 convince me that I'm stupid and this feature is totally useless.

What's the problem with per-user systemd besides startup
synchronization? (which is fixed by pam..)

Our concept basically boils down to treating sessions as a banal
ref-count on a user. Instead of putting all the big stuff into a
session, we now move it to the user. You can still have multiple
sessions, but they share the bulk of the data now. On the one hand,
this is required for stuff like firefox that can only run once per
user. On the other hand, it seems rather natural to share contexts
between multiple logins of the same user. The same 'user' cannot sit in
front of two machines at the same time, so why allow two independent
logins? Anyhow, there's a lot going on right now and it'll take some
time until this is all implemented. But so far, there haven't been any
use cases that cannot be solved with per-user systemd.

Thanks
David
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Per session systemd?

2014-03-03 Thread Yuxuan Shui
Hi,

On Mon, Mar 3, 2014 at 4:11 PM, David Herrmann dh.herrm...@gmail.com
wrote:
 Hi

 On Mon, Mar 3, 2014 at 8:30 AM, Yuxuan Shui yshu...@gmail.com wrote:
 Hi,

 This mail might be a little bit late for the topic, but I would like
 to share my thoughts anyway.

 Before systemd 206 was released, there were a few users (I don't know
 how many of them there are, but there's a page about it on the Arch
 Linux wiki, so I guess it's a well-known use case) who used systemd to
 manage their sessions. For example, they would exec systemd in their
 ~/.xinitrc, and have systemd start all their X applications.

 systemd --user is started by PAM for any proper user login. So you
 could fix that by just using a proper login manager (if you don't want
 the big ones, there's small stuff like 'slim', too). Even if you don't
 want these, I still recommend doing a PAM authentication in your
 startup script. This might even be via pam_rootok so you don't have to
 enter any password.
Yeah, I know that. The problem is that this instance is started once per
user, and this systemd instance doesn't belong to the same session as
the logged-in user, causing the problems I described below.


 I know this kind of use case has never been explicitly supported by
 systemd, but it was a really nice _accidental_ feature. However, after
 the cgroup changes made in the systemd 206 release, it became
 impossible to use systemd in such a way.

 I saw some user complaints, but the systemd developers seemed unmoved.
 Maybe because the original purpose of systemd --user is to start
 per-user systemd instances. There are hacks to make systemd usable
 under an X session. But that's very complicated, and contains many
 pitfalls (users have to set a lot of environment variables, and this
 makes logind unhappy since the systemd user instance is not in the
 same session as X). Besides, there are reasonable use cases which can't
 be covered by a per-user systemd instance, like periodically starting
 a graphical application.

 Why is that not possible with per-user instances?
Yes, it's possible, but it has many pitfalls, as I described.

Here I mean that if you *don't use those hacks*, you can't do things
like periodically starting a graphical application.

 So, I wrote a very dirty hack for my systemd, and have been using it
 until today. I added a 'User=' property to a session, and have systemd
 chown the cgroup to the given user, so I can start systemd in my
 .xinitrc as I used to. I admit this is probably a very bad hack, and
 I'm not sure if it will still work after the upcoming cgroup
 rework.

 That's why I'm writing this mail. I want to point out the reasons
 behind using systemd as a session manager, so you will probably
 understand why I want to do this and help me, since I can't get this
 done by myself with my limited systemd knowledge.

 Any help will be appreciated. It would be even better if you could
 convince me that I'm stupid and this feature is totally useless.

 What's the problem with per-user systemd besides startup
 synchronization? (which is fixed by pam..)

 Our concept basically boils down to treating sessions as a banal
 ref-count on a user. Instead of putting all the big stuff into a
 session, we now move it to the user. You can still have multiple
 sessions, but they share the bulk of the data now. On the one hand,
 this is required for stuff like firefox that can only run once per
 user. On the other hand, it seems rather natural to share contexts
 between multiple logins of the same user. The same 'user' cannot sit in
 front of two machines at the same time, so why allow two independent
 logins? Anyhow, there's a lot going on right now and it'll take some
 time until this is all implemented. But so far, there haven't been any
 use cases that cannot be solved with per-user systemd.
Sure, the same 'user' can't sit in front of two machines, but that
doesn't stop me from opening two sessions on two machines. With a
per-user systemd instance, it's very hard, if not impossible, to manage
multiple sessions of a single user. Does this mean systemd is banning
multiple sessions for a single user?

A bigger problem is that polkit depends on sessions to authenticate
actions. And since the per-user systemd instance won't normally belong
to an active session, it's impossible for applications started by the
per-user systemd instance to, say, mount a USB stick.


 Thanks
 David

Regards,
Yuxuan Shui
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] [RFC] [PATCH] Follow symlinks in /lib

2014-03-03 Thread Michael Stapelberg
Hi,

See http://bugs.debian.org/719695 for context.

This patch is not complete yet; at least masking/unmasking does not work
yet. Maybe I missed other verbs, too :).

Any feedback appreciated.

-- 
Best regards,
Michael
diff --git i/src/shared/install.c w/src/shared/install.c
index f57b94d..8f9596b 100644
--- i/src/shared/install.c
+++ w/src/shared/install.c
@@ -1023,6 +1023,8 @@ static int unit_file_load(
 (int) strv_length(info->required_by);
 }
 
+#define FOLLOW_MAX 8
+
 static int unit_file_search(
                 InstallContext *c,
                 InstallInfo *info,
@@ -1053,11 +1055,44 @@ static int unit_file_search(
         if (!path)
                 return -ENOMEM;
 
+        int cnt = 0;
+        for (;;) {
+                if (cnt++ >= FOLLOW_MAX)
+                        return -ELOOP;
+
+                r = unit_file_load(c, info, path, allow_symlink);
+
+                /* symlinks are always allowed for units in {/usr,}/lib/systemd so that
+                 * one can alias units without using Alias= (the downside of Alias= is
+                 * that the alias only exists when the unit is enabled). */
+                if (r >= 0)
+                        break;
+
+                if (r != -ELOOP)
+                        break;
+
+                if (allow_symlink)
+                        break;
+
+                if (!path_startswith(path, "/lib/systemd") &&
+                    !path_startswith(path, "/usr/lib/systemd"))
+                        break;
+
+                char *target;
+                r = readlink_and_make_absolute(path, &target);
+                if (r < 0)
+                        return r;
+                free(path);
+                path = target;
+        }
+
         r = unit_file_load(c, info, path, allow_symlink);
 
-        if (r >= 0)
+        if (r >= 0) {
                 info->path = path;
-        else {
+                free(info->name);
+                info->name = strdup(path_get_file_name(path));
+        } else {
                 if (r == -ENOENT && unit_name_is_instance(info->name)) {
                         /* Unit file doesn't exist, however instance enablement was requested.
                          * We will check if it is possible to load template unit file. */
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Recommended way to automount USB drives and trigger actions?

2014-03-03 Thread Alejandro Exojo
Hi.

I'm asked to implement the following upgrade procedure on a custom
embedded system: plug in a USB drive and upgrade our application (which
runs as a systemd service) from the contents of the drive. It's a bit
ugly, but it's a temporary workaround.

I've thought of doing it with:

1. Automounting USB drives when they are plugged in.
2. A oneshot service that calls pkcon or the underlying package
manager, and that is WantedBy=media-usbdrive.mount (the package
already has the machinery to restart the service).

I'm having doubts about the first part. I've already done the
automounting in some cases with udev rules, but I'm completely unsure
whether it is preferred to do it with .mount and .automount unit files.
I suppose I can do anything with udev rules that trigger commands too.

If you have any advice, I will be very glad to read it.
Thank you.

PS: Sorry if this is not the right place for this, but there is no
systemd-users list. :-)
And since I've bothered you anyway, let me add that switching from
sysvinit to systemd has been a huge boon to our usability and
development. We are using the journal quite extensively to debug,
since we have no other UI than printing stuff to stderr. I think you
deserve to know, after all the destructive criticism that you tend to
receive.

-- 
Alejandro Exojo Piqueras

ModpoW, S.L.
Technova LaSalle | Sant Joan de la Salle 42 | 08022 Barcelona | www.modpow.es
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Static IP in container

2014-03-03 Thread arnaud gaboury

 On host side :
 *** /etc/systemd/network/70-dahlia.netdev ***
 [Match]
 Host=host0
 Virtualization=container

 [NetDev]
 Name=br0
 Kind=bridge

 [Match]
 Virtualization=container

 *** /etc/systemd/network/80-dahlia.network ***
 [Network]
 DHCP=no
 DNS=192.168.1.254


 [Address]
 Address=192.168.1.94/24

 [Route]
 Gateway=192.168.1.254

 ---
 Start the container
 # sudo systemd-nspawn --machine=dahlia --network-bridge=br0 -bD /dahlia

 *** On host : ***

 gabx@hortensia ➤➤ systemd/network % ip addr
 2: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
 state UP group default qlen 1000
 link/ether 14:da:e9:b5:7a:88 brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.87/24 brd 192.168.1.255 scope global enp7s0
valid_lft forever preferred_lft forever
 3: br0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
 state DOWN group default
 link/ether 7a:21:78:cc:bc:a9 brd ff:ff:ff:ff:ff:ff
 8: vb-dahlia: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master br0
 state DOWN group default qlen 1000
 link/ether 7a:21:78:cc:bc:a9 brd ff:ff:ff:ff:ff:ff


 *** on Container: ***

 On the container, I have of course systemd-networkd enabled, and no
 files in /etc/systemd/network

 gab@dahlia ➤➤ ~ % ip addr show host0
 2: host0: <NO-CARRIER,BROADCAST,ALLMULTI,AUTOMEDIA,NOTRAILERS,UP> mtu
 1500 qdisc pfifo_fast state DOWN group default qlen 1000
 link/ether 3a:4f:1f:c5:b5:d1 brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.94/24 brd 192.168.1.255 scope global host0
valid_lft forever preferred_lft forever

 gab@dahlia ➤➤ ~ % ip route
 default via 192.168.1.254 dev host0
 192.168.1.0/24 dev host0  proto kernel  scope link  src 192.168.1.94

 gab@dahlia ➤➤ ~ % ping -c 3 8.8.8.8
 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
 From 192.168.1.94 icmp_seq=1 Destination Host Unreachable
 From 192.168.1.94 icmp_seq=2 Destination Host Unreachable
 From 192.168.1.94 icmp_seq=3 Destination Host Unreachable


The network is thus unreachable from the container.
As we can see above, host0 is listed as DOWN. I have no idea why.

# ip link set dev host0 up
has no effect; host0 is still down.

Now some debugging outputs:

gab@dahlia ➤➤ ~ % ip route show
default via 192.168.1.254 dev host0
192.168.1.0/24 dev host0  proto kernel  scope link  src 192.168.1.94

gab@dahlia ➤➤ ~ % cat /etc/resolv.conf
# Generated by resolvconf
domain lan
nameserver 192.168.1.254

gab@dahlia ➤➤ ~ # SYSTEMD_LOG_LEVEL=debug /lib/systemd/systemd-networkd
timestamp of '/etc/systemd/network' changed
timestamp of '/run/systemd/network' changed
host0: link (with ifindex 2) added
lo: link (with ifindex 1) added
Sent message type=method_call sender=n/a
destination=org.freedesktop.DBus object=/org/freedesktop/DBus
interface=org.freedesktop.DBus member=Hello cookie=1 reply_cookie=0
error=n/a
Got message type=method_return sender=org.freedesktop.DBus
destination=:1.7 object=n/a interface=n/a member=n/a cookie=1
reply_cookie=1 error=n/a
Got message type=signal sender=org.freedesktop.DBus destination=:1.7
object=/org/freedesktop/DBus interface=org.freedesktop.DBus
member=NameAcquired cookie=2 reply_cookie=0 error=n/a


gab@dahlia ➤➤ ~ % ping -c3 192.168.1.254
PING 192.168.1.254 (192.168.1.254) 56(84) bytes of data.
64 bytes from 192.168.1.254: icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from 192.168.1.254: icmp_seq=2 ttl=64 time=0.034 ms
64 bytes from 192.168.1.254: icmp_seq=3 ttl=64 time=0.042 ms

--- 192.168.1.254 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.034/0.037/0.042/0.006 ms

I can ping the router.
Why is host0 shown as down, and why can't it be brought up with the ip
command? Do I need some conf files in /etc/systemd/network/ in the
container? I tried to add some but it didn't change anything.
I guess there is something wrong in my setup, but I have no idea what.

Thank you for help. I have been working on this setup for many days
now with no success.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Per session systemd?

2014-03-03 Thread Yuxuan Shui
Hi,

After reading some more mails and thinking about it a bit more, I seem
to have a better understanding.

I know that a per-user systemd is used to start services which should
only be started once per user. But I also want systemd to be able to
start applications for every session (e.g. a window manager), which is
hard to do with the current systemd --user implementation.

I think there are two solutions here.

1) A per-session systemd instance. That's possibly the simplest
solution. The changes needed are adding a 'User=' property to the
session unit, and changing the ownership of the session cgroup to the
given user. Then the user could start systemd after he starts X (e.g.
put systemd into .xinitrc). Also, systemd would probably have to read
its configuration files from a different location than systemd --user
does (e.g. $XDG_CONFIG_HOME/systemd/session).

One advantage of this solution is that systemd will automatically have
all the environment variables set up during the X startup sequence.

2) Let the per-user systemd start services in a session. I think this
is what David meant. I don't know what changes are needed in systemd to
do this. Since the session cgroup is owned by root, maybe the ownership
should be changed to the user? Or a new systemd API to start a service
in a given session?

I don't know. Also, it seems hard to maintain the different sets of
environment variables needed to start applications in different
sessions.

Correct me if I'm wrong.


Regards,
Yuxuan Shui.


On Mon, Mar 3, 2014 at 4:46 PM, Yuxuan Shui yshu...@gmail.com wrote:

 Hi,

 On Mon, Mar 3, 2014 at 4:11 PM, David Herrmann dh.herrm...@gmail.com
 wrote:
  Hi
 
  On Mon, Mar 3, 2014 at 8:30 AM, Yuxuan Shui yshu...@gmail.com wrote:
  Hi,
 
  This mail might be a little bit late for the topic, but I would like
  to share my thoughts anyway.

  Before systemd 206 was released, there were a few users (I don't know
  how many of them there are, but there's a page about it on the Arch
  Linux wiki, so I guess it's a well-known use case) who used systemd to
  manage their sessions. For example, they would exec systemd in their
  ~/.xinitrc, and have systemd start all their X applications.
 
  systemd --user is started by PAM for any proper user login. So you
  could fix that by just using a proper login manager (if you don't want
  the big ones, there's small stuff like 'slim', too). Even if you don't
  want these, I still recommend doing a PAM authentication in your
  startup script. This might even be via pam_rootok so you don't have to
  enter any password.
 Yeah, I know that. The problem is that this instance is started once per
 user, and this systemd instance doesn't belong to the same session as
 the logged-in user, causing the problems I described below.

 
  I know this kind of use case has never been explicitly supported by
  systemd, but it was a really nice _accidental_ feature. However, after
  the cgroup changes made in the systemd 206 release, it became
  impossible to use systemd in such a way.

  I saw some user complaints, but the systemd developers seemed unmoved.
  Maybe because the original purpose of systemd --user is to start
  per-user systemd instances. There are hacks to make systemd usable
  under an X session. But that's very complicated, and contains many
  pitfalls (users have to set a lot of environment variables, and this
  makes logind unhappy since the systemd user instance is not in the
  same session as X). Besides, there are reasonable use cases which can't
  be covered by a per-user systemd instance, like periodically starting
  a graphical application.
 
  Why is that not possible with per-user instances?
 Yes, it's possible, but it has many pitfalls, as I described.

 Here I mean that if you *don't use those hacks*, you can't do things
 like periodically starting a graphical application.
  
  So, I wrote a very dirty hack for my systemd, and have been using it
  until today. I added a 'User=' property to a session, and have systemd
  chown the cgroup to the given user, so I can start systemd in my
  .xinitrc as I used to. I admit this is probably a very bad hack, and
  I'm not sure if it will still work after the upcoming cgroup
  rework.

  That's why I'm writing this mail. I want to point out the reasons
  behind using systemd as a session manager, so you will probably
  understand why I want to do this and help me, since I can't get this
  done by myself with my limited systemd knowledge.

  Any help will be appreciated. It would be even better if you could
  convince me that I'm stupid and this feature is totally useless.
 
  What's the problem with per-user systemd besides startup
  synchronization? (which is fixed by pam..)
 
  Our concept basically boils down to treating sessions as a banal
  ref-count on a user. Instead of putting all the big stuff into a
  session, we now move it to the user. You can still have multiple
  sessions, but they share the bulk of the data now. On the one hand,
  this is required for stuff like 

Re: [systemd-devel] Per session systemd?

2014-03-03 Thread David Herrmann
Hi

On Mon, Mar 3, 2014 at 12:16 PM, Yuxuan Shui yshu...@gmail.com wrote:
 Hi,

 After reading some more mails and thinking about it a bit more, I seem
 to have a better understanding.

 I know that a per-user systemd is used to start services which should
 only be started once per user. But I also want systemd to be able to
 start applications for every session (e.g. a window manager), which is
 hard to do with the current systemd --user implementation.

The idea is to run your window manager only once per user. If you want
multiple logins with the same window manager, then your window manager
should support that, not systemd. Your WM daemon should simply be able
to serve multiple sessions from one instance.

Thanks
David

 I think there are two solutions here.

 1) A per-session systemd instance. That's possibly the simplest
 solution. The changes needed are adding a 'User=' property to the
 session unit, and changing the ownership of the session cgroup to the
 given user. Then the user could start systemd after he starts X (e.g.
 put systemd into .xinitrc). Also, systemd would probably have to read
 its configuration files from a different location than systemd --user
 does (e.g. $XDG_CONFIG_HOME/systemd/session).

 One advantage of this solution is that systemd will automatically have
 all the environment variables set up during the X startup sequence.

 2) Let the per-user systemd start services in a session. I think this
 is what David meant. I don't know what changes are needed in systemd to
 do this. Since the session cgroup is owned by root, maybe the ownership
 should be changed to the user? Or a new systemd API to start a service
 in a given session?

 I don't know. Also, it seems hard to maintain the different sets of
 environment variables needed to start applications in different
 sessions.

 Correct me if I'm wrong.


 Regards,
 Yuxuan Shui.


 On Mon, Mar 3, 2014 at 4:46 PM, Yuxuan Shui yshu...@gmail.com wrote:

 Hi,

 On Mon, Mar 3, 2014 at 4:11 PM, David Herrmann dh.herrm...@gmail.com
 wrote:
  Hi
 
  On Mon, Mar 3, 2014 at 8:30 AM, Yuxuan Shui yshu...@gmail.com wrote:
  Hi,
 
  This mail might be a little bit late for the topic, but I would like
  to share my thoughts anyway.

  Before systemd 206 was released, there were a few users (I don't know
  how many of them there are, but there's a page about it on the Arch
  Linux wiki, so I guess it's a well-known use case) who used systemd to
  manage their sessions. For example, they would exec systemd in their
  ~/.xinitrc, and have systemd start all their X applications.

  systemd --user is started by PAM for any proper user login. So you
  could fix that by just using a proper login manager (if you don't want
  the big ones, there's small stuff like 'slim', too). Even if you don't
  want these, I still recommend doing a PAM authentication in your
  startup script. This might even be via pam_rootok so you don't have to
  enter any password.
 Yeah, I know that. The problem is that this instance is started once per
 user, and this systemd instance doesn't belong to the same session as
 the logged-in user, causing the problems I described below.

 
  I know this kind of use case has never been explicitly supported by
  systemd, but it was a really nice _accidental_ feature. However, after
  the cgroup changes made in the systemd 206 release, it became
  impossible to use systemd in such a way.

  I saw some user complaints, but the systemd developers seemed unmoved.
  Maybe because the original purpose of systemd --user is to start
  per-user systemd instances. There are hacks to make systemd usable
  under an X session. But that's very complicated, and contains many
  pitfalls (users have to set a lot of environment variables, and this
  makes logind unhappy since the systemd user instance is not in the
  same session as X). Besides, there are reasonable use cases which can't
  be covered by a per-user systemd instance, like periodically starting
  a graphical application.
 
  Why is that not possible with per-user instances?
 Yes, it's possible, but it has many pitfalls, as I described.

 Here I mean that if you don't use those hacks, you can't do things like
 periodically starting a graphical application.
 
  So, I wrote a very dirty hack for my systemd, and have been using it
  until today. I added a 'User=' property to a session, and have systemd
  chown the cgroup to the given user, so I can start systemd in my
  .xinitrc as I used to. I admit this is probably a very bad hack, and
  I'm not sure if it will still work after the upcoming cgroup
  rework.

  That's why I'm writing this mail. I want to point out the reasons
  behind using systemd as a session manager, so you will probably
  understand why I want to do this and help me, since I can't get this
  done by myself with my limited systemd knowledge.

  Any help will be appreciated. It would be even better if you could
  convince me that I'm stupid and this feature is totally useless.
 
  What's the problem with per-user 

Re: [systemd-devel] systemd version dbus call changed

2014-03-03 Thread Àlex Fiestas
On Friday 28 February 2014 02:28:20 Lennart Poettering wrote:
 On Fri, 28.02.14 02:21, Timothée Ravier (sios...@gmail.com) wrote:
  On 26/02/2014 02:38, Lennart Poettering wrote:
   On Wed, 26.02.14 02:01, Jason A. Donenfeld (ja...@zx2c4.com) wrote:
   Upstream KDE patch is here:
    https://projects.kde.org/projects/kde/kde-workspace/repository/revisions/7584a63924620bac3bd87277c11cdb8cdb5018b1/diff/powerdevil/daemon/backends/upower/powerdevilupowerbackend.cpp?format=diff
   Wow. Just wow. I am feeling tempted to just randomly change the version
   string exposed on the bus now, until they give that up. What else can I
   do than actually document that the string isn't stable?
   
   It is totally non-sensical to check for software versions the way KDE
   does it. We supply them with a call to check whether a certain operation
   is available (CanSuspend(), CanHibernate(), ...). They should just call
   that. It will tell them precisely whether the operation is not
   implemented in the code, or whether it is available on the hardware, and
   so on. But no, they decided to involve version checks...
  
  It looks like they check both systemd version and CanSuspend(),
  CanHibernate() results:
  https://projects.kde.org/projects/kde/kde-workspace/repository/revisions/master/entry/powerdevil/daemon/backends/upower/powerdevilupowerbackend.cpp#L229
  
  Could it be for compatibility with older systemd releases?
 
 They should just invoke the methods. If they get
 org.freedesktop.DBus.Error.UnknownMethod,
 org.freedesktop.DBus.Error.UnknownObject or
 org.freedesktop.DBus.Error.UnknownInterface back they should assume that
 logind is too old and doesn't support the call. (Actually, to make this
 really robust, they should just treat any error like that).
 
 Check for features by trying to make use of them. Don't check for
 version numbers.
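
(A sketch of that pattern, using the sd-bus API for illustration;
logind's CanSuspend() returns a string such as "yes", "no", "na" or
"challenge", and any error, including UnknownMethod from an old logind,
is treated as "not supported":)

#include <string.h>
#include <systemd/sd-bus.h>

static int logind_can_suspend(sd_bus *bus) {
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus_message *reply = NULL;
        const char *answer;
        int r, supported = 0;

        r = sd_bus_call_method(bus,
                               "org.freedesktop.login1",
                               "/org/freedesktop/login1",
                               "org.freedesktop.login1.Manager",
                               "CanSuspend",
                               &error, &reply, "");
        if (r < 0) {
                /* old logind, no logind, ...: assume unsupported */
                sd_bus_error_free(&error);
                return 0;
        }

        if (sd_bus_message_read(reply, "s", &answer) >= 0)
                supported = strcmp(answer, "yes") == 0 ||
                            strcmp(answer, "challenge") == 0;

        sd_bus_message_unref(reply);
        return supported;
}
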
The problem was that long ago logind supported suspend (the methods were
on the bus) but they did not work, at least not on openSUSE, so that
workaround was added.

This code will certainly die in the near future.

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] Fix systemd-stdio-bridge symlink

2014-03-03 Thread Lennart Poettering
On Sun, 02.03.14 23:37, Mike Gilbert (flop...@gentoo.org) wrote:

 The symlink is created in bindir (/usr/bin), and points to a binary
 which lives in rootlibexecdir (/lib/systemd or /usr/lib/systemd). A
 relative symlink does not work here.
 ---
  Makefile.am | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/Makefile.am b/Makefile.am
 index 38445fb..e7134a2 100644
 --- a/Makefile.am
 +++ b/Makefile.am
 @@ -1978,7 +1978,7 @@ systemd_bus_proxyd_LDADD = \
  
  bus-proxyd-install-hook:
  	$(AM_V_at)$(MKDIR_P) $(DESTDIR)$(bindir)
 -	$(AM_V_LN)$(LN_S) -f ../lib/systemd/systemd-bus-proxyd $(DESTDIR)$(bindir)/systemd-stdio-bridge
 +	$(AM_V_LN)$(LN_S) -f $(rootlibexecdir)/systemd-bus-proxyd $(DESTDIR)$(bindir)/systemd-stdio-bridge
 
  bus-proxyd-uninstall-hook:
  	rm -f $(DESTDIR)$(bindir)/systemd-stdio-bridge

This really sounds like we want to use ln's --relative option here, so
that the symlink is relative regardless of what the setup is.
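
For example, with coreutils 8.16 or newer (assuming the usual
installation paths):

$ ln -s --relative /usr/lib/systemd/systemd-bus-proxyd /usr/bin/systemd-stdio-bridge
$ readlink /usr/bin/systemd-stdio-bridge
../lib/systemd/systemd-bus-proxyd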

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Process Manager?

2014-03-03 Thread Lennart Poettering
On Sun, 02.03.14 19:48, David Farning (dfarn...@gmail.com) wrote:

 Over the last couple of weeks I have been looking over and testing the
 systemd. Thanks for all the hard work and interesting ideas.
 
 One issue that has come to mind is the quality and structure of the
 documentation. The quality and clarity of the documentation can be as
 important as the quality and clarity of the code.  As a relatively
 young project, its scope and implementation have shifted as new ideas
 have been tested.
 
 This has resulted in two areas of miscommunication:
 1. What is systemd?
 2. What is systemd... now?
 
 One suggestion would be to refer to systemd as a 'process manager.'
 Calling it a system manager seems a bit heavy-handed... and results in
 unnecessary pushback. Calling it a service manager understates the
 scope of the project.

Well, the name of the project is systemd, which means that the fact
that it manages the system is already baked into its very name. It's
what it is...

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] core: add default extra dependency option

2014-03-03 Thread Lennart Poettering
On Mon, 03.03.14 11:52, WaLyong Cho (walyong@samsung.com) wrote:

  But if you do this on an embedded system you can do
  DefaultDependencies=no for all services where you want this and place
  them manually?
  
 Almost, yes. Actually I can request that of the package manager in our
 system. But I don't want to put DefaultDependencies=no into all of the
 services. Then every service would have to consider which mount, socket,
 path and other units are needed to launch it. I don't want this. I just
 want them to launch after basic.target, plus some special services which
 should be processed before the others to optimize boot speed extremely.
 (Those pre-processed services would be listed in the config with
 DefaultExtraDependencies=)
  
  Also, are you sure that you really want to solve this with manual deps?
  I mean, the kernel already has a CPU scheduler and an IO
  scheduler. Maybe it would be better to simply dump all the scheduling
  work on the kernel as far as that is possible, start everything in
  parallel, but then also tell the kernel what matters more, and what
  matters less.
  
  We already expose CPUShares= and BlockIOWeight= for services. Maybe we
  should duplicate these as StartupCPUShares= and StartupBlockIOWeight=
  which could set different values to apply only while the boot process is
  not complete yet. Or something like that. 
  
  Lennart
  
 Parallelism is good, and because of it systemd is flexible enough to
 suit our product. But I (our product) want some services to occupy most
 of the system resources at the head of the boot sequence. (Don't confuse
 that with after basic.target.) In some more detail: we play an animation
 during boot, which we call the boot animation (similar to a splash
 animation). During that time, we launch the essential services and the
 idle screen with this functionality. At this time, we don't want any
 other services to be using system resources.
 
 StartupCPUShares= and StartupBlockIOWeight= may be a good idea. But it
 should be considered whether it is really OK to lower or raise CPUShares
 and BlockIOWeight during the whole boot time.

Yes, precisely, that is what I want StartupCPUShares= to be: an
alternative to CPUShares= that is applied only while the system is
booting up.

A service with this configuration:

CPUShares=1024
StartupCPUShares=10

Would be scheduled at a very low priority during startup, but as soon as
startup is complete would be bumped to normal levels.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] binding tmpfiles.d to unit startup

2014-03-03 Thread Holger Schurig
Make it more user friendly (i.e. usable without an open man page). Instead of

 u root 0
 g mail /usr/bin/procmail
 g tty /usr/bin/write
 d /var/lib/foobar 664 root root
 c /etc/sudoers /usr/share/sudo/sudoers.default

user root 0
setgroup mail /usr/bin/procmail

... and so on.


Hmm, that gave me one thought: if systemd starts as PID 1 and no
/etc/passwd etc. exists, I can very well understand that, when
compiled with --enable-provisioning, it should create those things. But
the 'c' line could be happily handled by a shell script. So my
proposal is to only add things to systemd-provision that absolutely
must be done by PID 1, because without them /bin/dash or most user-space
won't run. But then systemd-provision should just execute provisioning
shell scripts in /lib/systemd/provision.d (or similar); a sketch of that
step follows below. No need to re-create cp, for example. Also, it gives
overall a bigger flexibility.
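
(A rough sketch of what that execution step could look like;
systemd-provision and the provision.d directory are hypothetical:)

# run all provisioning snippets, e.g. once on first boot
for f in /lib/systemd/provision.d/*.sh; do
        [ -x "$f" ] && "$f"
done
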
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Recommended way to automount USB drives and trigger actions?

2014-03-03 Thread Lennart Poettering
On Mon, 03.03.14 10:29, Alejandro Exojo (aex...@modpow.es) wrote:

 Hi.
 
 I'm asked to implement the following upgrade procedure on a custom
 embedded system: plug in a USB drive and upgrade our application (which
 runs as a systemd service) from the contents of the drive. It's a bit
 ugly, but it's a temporary workaround.
 
 I've thought of doing it with:
 
 1. Automounting USB drives when they are plugged in.
 2. A oneshot service that calls pkcon or the underlying package
 manager, and that is WantedBy=media-usbdrive.mount (the package
 already has the machinery to restart the service).
 
 I'm having doubts about the first part. I've already done the
 automounting in some cases with udev rules, but I'm completely unsure
 whether it is preferred to do it with .mount and .automount unit files.
 I suppose I can do anything with udev rules that trigger commands too.

Hmm, you could just add the USB drive to fstab, and mark it as
auto,nofail. Then, each time you insert the device it should be mounted.

Then with WantedBy=media-usbdrive.mount you could pull in your service,
as you already found out.
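
(For illustration, something like the following; the device label,
mount point and package path are made up, and the unit name
media-usbdrive.mount follows from the /media/usbdrive mount point via
systemd's unit name escaping:)

# /etc/fstab
LABEL=USBDRIVE  /media/usbdrive  vfat  auto,nofail  0  0

# upgrade-from-usb.service
[Unit]
Description=Upgrade application from USB drive
Requires=media-usbdrive.mount
After=media-usbdrive.mount

[Service]
Type=oneshot
ExecStart=/usr/bin/pkcon install-local /media/usbdrive/app-update.pkg

[Install]
WantedBy=media-usbdrive.mount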

 
 If you have any advice, I will be very glad to read it.
 Thank you.
 
 PS: Sorry if this is not the right place for this, but there is no
 systemd-users list. :-)
 And since I've bothered you anyway, let me add that switching from
 sysvinit to systemd has been a huge boon to our usability and
 development. We are using the journal quite extensively to debug,
 since we have no other UI than printing stuff to stderr. I think you
 deserve to know, after all the destructive criticism that you tend to
 receive.

Thanks!

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] binding tmpfiles.d to unit startup

2014-03-03 Thread Lennart Poettering
On Mon, 03.03.14 15:48, Holger Schurig (holgerschu...@gmail.com) wrote:

 Hmm, that gave me one thought: if systemd starts as PID 1 and no
 /etc/passwd etc. exists, I can very well understand that, when
 compiled with --enable-provisioning, it should create those things. But
 the 'c' line could be happily handled by a shell script. So my
 proposal is to only add things to systemd-provision that absolutely
 must be done by PID 1, because without them /bin/dash or most user-space
 won't run. But then systemd-provision should just execute provisioning
 shell scripts in /lib/systemd/provision.d (or similar). No need to
 re-create cp, for example. Also, it gives overall a bigger flexibility.

The explicit goal here is to have something declarative for recreating
/etc and /var. Something one can easily turn into a file list for rpm or
dpkg. Which means shell scripts are not suitable for this, because they
are imperative...

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Per session systemd?

2014-03-03 Thread Lennart Poettering
On Mon, 03.03.14 15:30, Yuxuan Shui (yshu...@gmail.com) wrote:

Heya,

 That's why I'm writing this mail. I want to point out the reasons
 behind using systemd as a session manager, so you will probably
 understand why I want to do this and help me, since I can't get this
 done by myself with my limited systemd knowledge.

You should be able to place your WM in a systemd user service. Then,
when you log in, do something like this:

  systemctl --user import-environment DISPLAY XAUTHORITY
  systemctl --user start my-wm.service

The first command will upload the $DISPLAY and $XAUTHORITY variables
into the systemd user instance. The second command will then spawn the
WM.
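
(For illustration, a minimal user unit for this could look like the
following; the unit name and WM binary are placeholders:)

# ~/.config/systemd/user/my-wm.service
[Unit]
Description=My window manager

[Service]
ExecStart=/usr/bin/my-wm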

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Per session systemd?

2014-03-03 Thread Lennart Poettering
On Mon, 03.03.14 19:16, Yuxuan Shui (yshu...@gmail.com) wrote:

 Hi,
 
 After reading some more mails and thinking about it a bit more, I seem
 to have a better understanding.
 
 I know that a per-user systemd is used to start services which should
 only be started once per user. But I also want systemd to be able to
 start applications for every session (e.g. a window manager), which is
 hard to do with the current systemd --user implementation.
 
 I think there are two solutions here.
 
 1) A per-session systemd instance. That's possibly the simplest
 solution. The changes needed are adding a 'User=' property to the
 session unit, and changing the ownership of the session cgroup to the
 given user. Then the user could start systemd after he starts X (e.g.
 put systemd into .xinitrc). Also, systemd would probably have to read
 its configuration files from a different location than systemd --user
 does (e.g. $XDG_CONFIG_HOME/systemd/session).

We want to move from a per-session to a per-user scheme to simplify
things, not make it more complicated...

 One advantage of this solution is that systemd will automatically have
 all the environment variables set up during the X startup sequence.
 
 2) Let the per-user systemd start services in a session. I think this
 is what David meant. I don't know what changes are needed in systemd to
 do this. Since the session cgroup is owned by root, maybe the ownership
 should be changed to the user? Or a new systemd API to start a service
 in a given session?

In the long run the idea is that a desktop compositor/WM will be a
singleton that manages the devices of all local sessions you might have
created, and merges them into a single huge virtual workplace. If you
log into two seats they will thus magically merge into one big
workplace. Each time you log into a seat you get more real estate added
to your existing compositor.

In the short term the compositors can't do this just yet, so the idea is
that the first time you log in we start the compositor/WM on that
screen. And if you log in a second time on another seat you will get a
simple dialog that gives you two options: "Disconnect other session"
and "Disconnect this session".

And a third option is to just work around our ideas by making your WM a
templated service:

my-wm@.service:

[Service]
Environment=DISPLAY=%I
ExecStart=/usr/bin/my-wm

And then when you log in, run "systemctl --user start
my-wm@$DISPLAY.service" and you get it loaded.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] Fix systemd-stdio-bridge symlink

2014-03-03 Thread Michael Biebl
2014-03-03 15:32 GMT+01:00 Lennart Poettering lenn...@poettering.net:
 On Sun, 02.03.14 23:37, Mike Gilbert (flop...@gentoo.org) wrote:

 The symlink is created in bindir (/usr/bin), and points to a binary
 which lives in rootlibexecdir (/lib/systemd or /usr/lib/systemd). A
 relative symlink does not work here.
 ---
  Makefile.am | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/Makefile.am b/Makefile.am
 index 38445fb..e7134a2 100644
 --- a/Makefile.am
 +++ b/Makefile.am
 @@ -1978,7 +1978,7 @@ systemd_bus_proxyd_LDADD = \

   bus-proxyd-install-hook:
   	$(AM_V_at)$(MKDIR_P) $(DESTDIR)$(bindir)
  -	$(AM_V_LN)$(LN_S) -f ../lib/systemd/systemd-bus-proxyd $(DESTDIR)$(bindir)/systemd-stdio-bridge
  +	$(AM_V_LN)$(LN_S) -f $(rootlibexecdir)/systemd-bus-proxyd $(DESTDIR)$(bindir)/systemd-stdio-bridge

   bus-proxyd-uninstall-hook:
   	rm -f $(DESTDIR)$(bindir)/systemd-stdio-bridge

 This really sounds like we want to use ln's --relative option here, so
 that the symlink is relative regardless what the setup is.

The patch looked ok to me as is, but I can certainly add a --relative
if you prefer.

Should
dbus1-generator-install-hook:
	$(AM_V_at)$(MKDIR_P) $(DESTDIR)$(usergeneratordir)
	$(AM_V_LN)$(LN_S) -f $(systemgeneratordir)/systemd-dbus1-generator $(DESTDIR)$(usergeneratordir)/systemd-dbus1-generator

be updated then as well?


-- 
Why is it that all of the instruments seeking intelligent life in the
universe are pointed away from Earth?
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] Fix systemd-stdio-bridge symlink

2014-03-03 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Mar 03, 2014 at 04:16:28PM +0100, Michael Biebl wrote:
 2014-03-03 16:12 GMT+01:00 Michael Biebl mbi...@gmail.com:
  This really sounds like we want to use ln's --relative option here, so
  that the symlink is relative regardless what the setup is.
 
  The patch looked ok to me as is, but I can certainly add a --relative
  if you prefer.
 
 Btw, what's the reason why you prefer relative symlinks in this case?
Works when looking at a container from outside? Is nicer during package
creation?

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] Fix systemd-stdio-bridge symlink

2014-03-03 Thread Lennart Poettering
On Mon, 03.03.14 16:12, Michael Biebl (mbi...@gmail.com) wrote:

 The patch looked ok to me as is, but I can certainly add a --relative
 if you prefer.
 
 Should
 dbus1-generator-install-hook:
 	$(AM_V_at)$(MKDIR_P) $(DESTDIR)$(usergeneratordir)
 	$(AM_V_LN)$(LN_S) -f $(systemgeneratordir)/systemd-dbus1-generator $(DESTDIR)$(usergeneratordir)/systemd-dbus1-generator
 
 be updated then as well?

I now changed the makefile to only generate relative symlinks. This
makes use of "ln --relative -s" everywhere, which has been supported in
coreutils for two years or so, hence I figure this should be OK. If this
breaks for people, we can consider making use of this only after an
autoconf check.

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] binding tmpfiles.d to unit startup

2014-03-03 Thread Lennart Poettering
On Sun, 02.03.14 23:55, Lennart Poettering (lenn...@poettering.net) wrote:

 I am still open to this btw. If somebody wants to hack on that, I
 figure this should simply be added to ExecContext, as a strv of
 directory names. In exec_spawn() we'd then just create all those dirs,
 right after resolving the UID/GID. When running in system mode we'd then
 create the dirs in /run, when running in user mode in $XDG_RUNTIME_DIR.
 
 Should be a ~20 line patch or so...

Just to mention this: I have implemented this now in git.
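
(For reference, usage presumably looks something like the following;
RuntimeDirectory= is the directive that landed, while the service and
directory names here are made up:)

[Service]
# created as /run/foobar (system mode) or $XDG_RUNTIME_DIR/foobar (user
# mode), owned by the unit's user, and removed when the service stops
RuntimeDirectory=foobar
ExecStart=/usr/bin/foobard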

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [RFC] bus: add sd_bus_emit_object_{added, removed}()

2014-03-03 Thread David Herrmann
ping?

On Tue, Feb 18, 2014 at 12:02 AM, David Herrmann dh.herrm...@gmail.com wrote:
 The ObjectManager dbus interface provides an InterfacesAdded signal to
 notify others about new interfaces that are added to an object. The same
 signal is also used to advertise new objects (by adding the first
 interface to a given object path) and delete them.

 However, our internal helpers sd_bus_emit_interfaces_{added,removed}()
 cannot properly deal with built-in interfaces like DBus.Properties, as
 there's no vtable for them. Therefore, to avoid callers having to track
 these internal interfaces, we provide two separate helpers which
 explicitly add these interfaces to the signal.

 sd_bus_emit_object_added() traverses the list of all vtables and fallback
 vtables (making sure a fallback never overwrites a real vtable!) and also
 adds built-in interfaces.

 sd_bus_emit_object_removed(): WIP
 ---
 Hi

 This is untested and I just wanted to get some feedback whether that's
 the way to go. Given the previous discussion we decided on two new
 entry-points to add and remove objects. For convenience, I now tried to
 omit any char **interfaces argument to object_added() and just try to
 figure them out on my own.

 However, on object_removed() I cannot do that as
 node_vtable_get_userdata() is very likely to return 0 for all these
 objects. So there is no way to figure out which interfaces actually
 existed on that thing. We would require users to call it *before*
 destroying/unlinking the actual object. I don't know whether that's ok
 to assume?

 If not, we can just add a char **interfaces argument, but then it would
 differ from object_added().. Not sure what sounds better..

 Cheers
 David

  src/libsystemd/sd-bus/bus-objects.c | 210 ++++++++++++++++++++++++++++++++
  src/systemd/sd-bus.h                |   2 +
  2 files changed, 212 insertions(+)

 diff --git a/src/libsystemd/sd-bus/bus-objects.c b/src/libsystemd/sd-bus/bus-objects.c
 index b116a5d..0c099a3 100644
 --- a/src/libsystemd/sd-bus/bus-objects.c
 +++ b/src/libsystemd/sd-bus/bus-objects.c
 @@ -2469,6 +2469,216 @@ _public_ int sd_bus_emit_interfaces_removed(sd_bus *bus, const char *path, const
          return sd_bus_emit_interfaces_removed_strv(bus, path, interfaces);
  }

 +static int object_added_append_all_prefix(
 +                sd_bus *bus,
 +                sd_bus_message *m,
 +                Set *s,
 +                const char *prefix,
 +                const char *path,
 +                bool require_fallback) {
 +
 +        _cleanup_bus_error_free_ sd_bus_error error = SD_BUS_ERROR_NULL;
 +        const char *previous_interface = NULL;
 +        struct node_vtable *c;
 +        struct node *n;
 +        void *u = NULL;
 +        int r;
 +
 +        assert(bus);
 +        assert(m);
 +        assert(s);
 +        assert(prefix);
 +        assert(path);
 +
 +        n = hashmap_get(bus->nodes, prefix);
 +        if (!n)
 +                return 0;
 +
 +        LIST_FOREACH(vtables, c, n->vtables) {
 +                if (require_fallback && !c->is_fallback)
 +                        continue;
 +
 +                r = node_vtable_get_userdata(bus, path, c, &u, &error);
 +                if (r < 0)
 +                        return r;
 +                if (bus->nodes_modified)
 +                        return 0;
 +                if (r == 0)
 +                        continue;
 +
 +                if (!streq_ptr(c->interface, previous_interface)) {
 +                        /* interface already handled by a previous run? */
 +                        if (set_get(s, c->interface))
 +                                continue;
 +
 +                        /* prevent fallbacks from overwriting specific objs */
 +                        r = set_put(s, c->interface);
 +                        if (r < 0)
 +                                return r;
 +
 +                        if (previous_interface) {
 +                                r = sd_bus_message_close_container(m);
 +                                if (r < 0)
 +                                        return r;
 +
 +                                r = sd_bus_message_close_container(m);
 +                                if (r < 0)
 +                                        return r;
 +                        }
 +
 +                        r = sd_bus_message_open_container(m, 'e', "sa{sv}");
 +                        if (r < 0)
 +                                return r;
 +
 +                        r = sd_bus_message_append_basic(m, 's', c->interface);
 +                        if (r < 0)
 +                                return r;
 +
 +                        r = sd_bus_message_open_container(m, 'a', "{sv}");
 +                        if (r < 0)
 +                                return r;
 +
 +                        previous_interface = c->interface;
 +                }
 +
 +                r = vtable_append_all_properties(bus, m, path, c, u, &error);
 +                if (r < 0)
 +                        return r;

[systemd-devel] [PATCH] man: networkd - fix typo

2014-03-03 Thread Umut Tezduyar Lindskog
---
 man/systemd-networkd.service.xml |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/man/systemd-networkd.service.xml b/man/systemd-networkd.service.xml
index 6ee8494..cb6afaa 100644
--- a/man/systemd-networkd.service.xml
+++ b/man/systemd-networkd.service.xml
@@ -74,7 +74,7 @@
                         that it is safe to transition between the initrd and the real root,
                         and back.</para>
 
-                        <para>Nameservers configured in networkd, or receievd over DHCP
+                        <para>Nameservers configured in networkd, or received over DHCP
                         are exposed in <filename>/run/systemd/network/resolv.conf</filename>.
                         This file should not be used directly, but only through a symlink
                         from <filename>/etc/resolv.conf</filename>.</para>
-- 
1.7.10.4

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] metadata: use the subjective cred of current

2014-03-03 Thread Daniel Mack
On 03/02/2014 10:11 PM, Djalal Harouni wrote:
 In kdbus_meta_append_*() we want to get the subjective context, so
 instead of using __task_cred(), which references the objective cred,
 use current_cred() to access the subjective cred.
 
 Signed-off-by: Djalal Harouni tix...@opendz.org
 ---
 Compile tested and make check

Looks correct. Applied, thanks!


Daniel

  metadata.c | 14 ++------------
  1 file changed, 2 insertions(+), 12 deletions(-)
 
  diff --git a/metadata.c b/metadata.c
  index df05b43..75fc819 100644
  --- a/metadata.c
  +++ b/metadata.c
  @@ -292,21 +292,18 @@ static int kdbus_meta_append_cmdline(struct kdbus_meta *meta)
   
   static int kdbus_meta_append_caps(struct kdbus_meta *meta)
   {
  -	const struct cred *cred;
   	struct caps {
   		u32 cap[_KERNEL_CAPABILITY_U32S];
   	} cap[4];
   	unsigned int i;
  +	const struct cred *cred = current_cred();
   
  -	rcu_read_lock();
  -	cred = __task_cred(current);
   	for (i = 0; i < _KERNEL_CAPABILITY_U32S; i++) {
   		cap[0].cap[i] = cred->cap_inheritable.cap[i];
   		cap[1].cap[i] = cred->cap_permitted.cap[i];
   		cap[2].cap[i] = cred->cap_effective.cap[i];
   		cap[3].cap[i] = cred->cap_bset.cap[i];
   	}
  -	rcu_read_unlock();
   
   	/* clear unused bits */
   	for (i = 0; i < 4; i++)
  @@ -341,15 +338,8 @@ static int kdbus_meta_append_cgroup(struct kdbus_meta *meta)
   static int kdbus_meta_append_audit(struct kdbus_meta *meta)
   {
   	struct kdbus_audit audit;
  -	const struct cred *cred;
  -	uid_t uid;
   
  -	rcu_read_lock();
  -	cred = __task_cred(current);
  -	uid = from_kuid(cred->user_ns, audit_get_loginuid(current));
  -	rcu_read_unlock();
  -
  -	audit.loginuid = uid;
  +	audit.loginuid = from_kuid(current_user_ns(), audit_get_loginuid(current));
   	audit.sessionid = audit_get_sessionid(current);
   
   	return kdbus_meta_append_data(meta, KDBUS_ITEM_AUDIT,
 

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] man: networkd - fix typo

2014-03-03 Thread Tom Gundersen
Applied. Thanks!

-t

On Mon, Mar 3, 2014 at 9:13 PM, Umut Tezduyar Lindskog
umut.tezdu...@axis.com wrote:
 ---
  man/systemd-networkd.service.xml |2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

 diff --git a/man/systemd-networkd.service.xml b/man/systemd-networkd.service.xml
 index 6ee8494..cb6afaa 100644
 --- a/man/systemd-networkd.service.xml
 +++ b/man/systemd-networkd.service.xml
 @@ -74,7 +74,7 @@
                          that it is safe to transition between the initrd and the real root,
                          and back.</para>
 
 -                        <para>Nameservers configured in networkd, or receievd over DHCP
 +                        <para>Nameservers configured in networkd, or received over DHCP
                          are exposed in <filename>/run/systemd/network/resolv.conf</filename>.
                          This file should not be used directly, but only through a symlink
                          from <filename>/etc/resolv.conf</filename>.</para>
 --
 1.7.10.4

 ___
 systemd-devel mailing list
 systemd-devel@lists.freedesktop.org
 http://lists.freedesktop.org/mailman/listinfo/systemd-devel
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] Fix systemd-stdio-bridge symlink

2014-03-03 Thread Mike Gilbert
On Mon, Mar 3, 2014 at 11:57 AM, Lennart Poettering
lenn...@poettering.net wrote:
 On Mon, 03.03.14 16:12, Michael Biebl (mbi...@gmail.com) wrote:

 The patch looked ok to me as is, but I can certainly add a --relative
 if you prefer.

 Should
  dbus1-generator-install-hook:
  	$(AM_V_at)$(MKDIR_P) $(DESTDIR)$(usergeneratordir)
  	$(AM_V_LN)$(LN_S) -f $(systemgeneratordir)/systemd-dbus1-generator $(DESTDIR)$(usergeneratordir)/systemd-dbus1-generator

 be updated then as well?

  I now changed the makefile to only generate relative symlinks. This
  makes use of "ln --relative -s" everywhere, which has been supported in
  coreutils for two years or so, hence I figure this should be OK. If this
  breaks for people, we can consider making use of this only after an
  autoconf check.


Would someone mind adding a Backport note for at least my original
commit? I'm not so sure about tagging Lennart's more extensive change.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] kdbus vs. pipe based ipc performance

2014-03-03 Thread Stefan Westerfeld
   Hi!

First of all: I'd really like to see kdbus being used as a
general-purpose IPC layer, so that developers working on client/server
software will no longer need to create their own homemade IPC using
primitives like sockets or similar.

Now kdbus is advertised as a high-performance IPC solution, and compared
to the traditional dbus approach, this may well be true. But are the
numbers that

$ test-bus-kernel-benchmark chart

produces impressive? Or to put it another way: will developers working
on client/server software happily accept kdbus, because it performs as
well as a homemade IPC solution would? Or does kdbus add overhead to a
degree that some applications can't accept?

To answer this, I wrote a program called "ibench" which passes messages
between a client and a server, but instead of using kdbus to do it, it
uses traditional pipes. To simulate main loop integration, it uses
poll() in cases where a normal client or server application would go
into the main loop and wait to be woken up by file descriptor activity;
a minimal sketch of the pattern follows below.

Now here are the results I obtained using

- AMD Phenom(tm) 9850 Quad-Core Processor
- running Fedora 20 64-bit with systemd+kdbus from git
- system booted with kdbus and single kernel arguments


*** single cpu performance ***

   SIZE    COPY   MEMFD  KDBUS-MAX   IBENCH  SPEEDUP

      1   32580   16390      32580   192007     5.89
      2   40870   16960      40870   191730     4.69
      4   40750   16870      40750   190938     4.69
      8   40930   16950      40930   191234     4.67
     16   40290   17150      40290   192041     4.77
     32   40220   18050      40220   191963     4.77
     64   40280   16930      40280   192183     4.77
    128   40530   17440      40530   191649     4.73
    256   40610   17610      40610   190405     4.69
    512   40770   16690      40770   188671     4.63
   1024   40670   17840      40670   185819     4.57
   2048   40510   17780      40510   181050     4.47
   4096   39610   17330      39610   154303     3.90
   8192   38000   16540      38000   121710     3.20
  16384   35900   15050      35900    80921     2.25
  32768   31300   13020      31300    54062     1.73
  65536   24300    9940      24300    27574     1.13
 131072   16730    6820      16730    14886     0.89
 262144    4420    4080       4420     6888     1.56
 524288    1660    2040       2040     2781     1.36
1048576     800     950        950     1231     1.30
2097152     310     490        490      475     0.97
4194304     150     240        240      227     0.95

*** dual cpu performance ***

   SIZE    COPY   MEMFD  KDBUS-MAX   IBENCH  SPEEDUP

      1   31680   14000      31680   104664     3.30
      2   34960   14290      34960   104926     3.00
      4   34930   14050      34930   104659     3.00
      8   24610   13300      24610   104058     4.23
     16   33840   14740      33840   103800     3.07
     32   33880   14400      33880   103917     3.07
     64   34180   14220      34180   103349     3.02
    128   34540   14260      34540   102622     2.97
    256   37820   14240      37820   102076     2.70
    512   37570   14270      37570    99105     2.64
   1024   37570   14780      37570    96010     2.56
   2048   21640   13330      21640    89602     4.14
   4096   23430   13120      23430    73682     3.14
   8192   34350   12300      34350    59827     1.74
  16384   25180   10560      25180    43808     1.74
  32768   20210    9700      20210    21112     1.04
  65536   15440    7820      15440    10771     0.70
 131072   11630    5670      11630     5775     0.50
 262144    4080    3730       4080     3012     0.74
 524288    1830    2040       2040     1421     0.70
1048576     810     950        950      631     0.66
2097152     310     490        490      269     0.55
4194304     150     240        240      133     0.55


I ran the tests twice - once with client and server pinned to the same cpu
(via cpu affinity) and once with client and server on different cpus.

The SIZE, COPY and MEMFD columns are produced by test-bus-kernel-benchmark
chart; the KDBUS-MAX column is the maximum of the COPY and MEMFD columns, so
it is the effective number of roundtrips that kdbus can do at that SIZE. The
IBENCH column is the effective number of roundtrips that ibench can do at
that SIZE. The SPEEDUP factor is IBENCH divided by KDBUS-MAX, i.e. how much
faster ibench is than kdbus.

For many relevant cases, ibench outperforms kdbus by a lot. For small to
medium message sizes, ibench always wins, sometimes by a wide margin. For
instance, when passing a 4 KiB array from client to server and back, ibench
is 3.90 times faster if client and server live on the same cpu, and 3.14
times faster if they live on different cpus.

I'm bringing this up now because it would be sad if kdbus became part of the
kernel and universally available, but application developers would still build
their own protocols for performance reasons. And some things that may need to
be changed to make kdbus run as fast as 

Re: [systemd-devel] kdbus vs. pipe based ipc performance

2014-03-03 Thread Kay Sievers
On Mon, Mar 3, 2014 at 10:35 PM, Stefan Westerfeld ste...@space.twc.de wrote:
 [Stefan's message quoted in full; snipped here, as it appears unchanged
 above.]

Re: [systemd-devel] systemd version dbus call changed

2014-03-03 Thread Timothée Ravier
On 03/03/2014 14:28, Àlex Fiestas wrote:
 On Friday 28 February 2014 02:28:20 Lennart Poettering wrote:
 They should just invoke the methods. If they get
 org.freedesktop.DBus.Error.UnknownMethod,
 org.freedesktop.DBus.Error.UnknownObject or
 org.freedesktop.DBus.Error.UnknownInterface back they should assume that
 logind is too old and doesn't support the call. (Actually, to make this
 really robust, they should just treat any error like that).

 Check for features by trying to make use of them. Don't check for
 version numbers.
 The problem was that long ago logind supported suspend (the methods were on
 the bus) but they did not work, at least not on openSUSE, so that workaround
 was added.

 This code will certainly die in the near future.

I'm working on fixing this for KDE
(https://git.reviewboard.kde.org/r/116527/). But now that I've looked at it
a little bit closer, I think I understand why they did version checks:

As far as I understand, between v183 (d889a2069a87e4617b32dd) and v198
(314b4b0a68d9ab35de98192), the PrepareForSleep signal is sent before
suspending, but no signal is sent on resume, so one must use upower to get
this information; from v198 onward, PrepareForSleep is also sent on resume,
so upower is no longer needed.

Am I correct here? Did I miss something? Is there any solution other than
checking the version number to know whether PrepareForSleep will be sent on
resume? Should we just give up trying to support versions of systemd older
than 198 (on systemd-based systems)?
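
For reference, a minimal listener - written against the sd-bus API as it
exists today (sd-bus was not yet public when this thread was posted), and
assuming a v198+ logind that emits PrepareForSleep on both edges - would
look roughly like this:

/* sleepmon.c - sketch: react to logind's PrepareForSleep on both edges.
 * The boolean argument is true before suspend and false after resume.
 * Build: gcc sleepmon.c $(pkg-config --cflags --libs libsystemd) */
#include <stdint.h>
#include <stdio.h>
#include <systemd/sd-bus.h>

static int on_prepare_for_sleep(sd_bus_message *m, void *userdata,
                                sd_bus_error *ret_error) {
        int going_to_sleep;

        if (sd_bus_message_read(m, "b", &going_to_sleep) < 0)
                return 0;

        printf(going_to_sleep ? "preparing for sleep\n" : "resumed\n");
        return 0;
}

int main(void) {
        sd_bus *bus = NULL;

        if (sd_bus_open_system(&bus) < 0)
                return 1;

        /* On a logind too old to emit the signal, this match simply never
         * fires - in line with the advice above to probe features by using
         * them rather than by comparing version numbers. */
        if (sd_bus_add_match(bus, NULL,
                             "type='signal',"
                             "sender='org.freedesktop.login1',"
                             "interface='org.freedesktop.login1.Manager',"
                             "member='PrepareForSleep'",
                             on_prepare_for_sleep, NULL) < 0)
                return 1;

        for (;;) {
                int r = sd_bus_process(bus, NULL);
                if (r < 0)
                        break;
                if (r == 0)
                        sd_bus_wait(bus, UINT64_MAX);
        }

        sd_bus_unref(bus);
        return 0;
}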

-- 
Timothée Ravier
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] draft systemd wiki intro section.

2014-03-03 Thread David Farning
I took a stab at a draft intro section for the fd.o wiki.

---

Systemd manages system startup and UNIX services. In recent years,
computers have become more dynamic. Servers are turned on and off to
balance performance and resource usage. Desktop hardware is attached
and detached as necessary. Appliances must be available instantly.

Systemd launches and supervises services on demand. This brings three
advantages:
1. Hot plugging. Close coordination between udev, the device manager,
and systemd enables the system to rapidly adapt to changing hardware
configurations.
2. Rapid boot times. Rather than waiting for services to start one at
a time, services start in parallel.
3. Greater control over services. Close coordination between the kernel
and systemd gives administrators fine-grained control over services.

Systemd accomplishes this by providing a framework for communication
between services. Traditional systems encouraged individual services to
set up and manage interprocess communication on their own, which required
a set of complex start-up scripts. Systemd provides:
1. Sockets. Consumers don't need to know whether providers are present or
running before starting, as long as systemd sets up and monitors a socket
between them (see the sketch after this list).
2. D-Bus. Once services are started, systemd provides a policy for them
to communicate via D-Bus.
3. Cgroups. Systemd uses cgroups to isolate processes for greater control.
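
A minimal sketch of the socket case, assuming a hypothetical echo.socket
unit that declares the listening socket: the service just collects the
already-bound socket from systemd with sd_listen_fds() from sd-daemon.

/* echo-service.c - sketch of a socket-activated service. systemd owns
 * and listens on the socket (declared in the hypothetical echo.socket
 * unit); the service inherits it as fd SD_LISTEN_FDS_START, so clients
 * can connect before the service has ever been started.
 * Build: gcc echo-service.c $(pkg-config --cflags --libs libsystemd) */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
#include <systemd/sd-daemon.h>

int main(void) {
        int n = sd_listen_fds(0); /* how many fds systemd passed us */
        if (n != 1) {
                fprintf(stderr, "expected exactly one socket from systemd\n");
                return 1;
        }

        for (;;) {
                int conn = accept(SD_LISTEN_FDS_START, NULL, NULL);
                if (conn < 0)
                        continue;
                const char msg[] = "hello from a socket-activated service\n";
                write(conn, msg, strlen(msg));
                close(conn);
        }
}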

Systemd is backward compatible with System V init scripts and with other
process management utilities such as cron and at.

--

This blurb starts with the problems systemd solves, then summarizes the
technologies used to solve them. Project developers by nature tend to go
into too much detail too early. This blurb removes references to developer
blogs in order to centralize information at fd.o.

From a communication point of view, the first couple of paragraphs of the
wiki are crucial, because many people who write or talk about the project
begin by quoting them.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Talks comparing systemd and launchd.

2014-03-03 Thread David Farning
Is anyone aware of any talks comparing systemd and launchd? Several of
the groundbreaking ideas in systemd seem to come directly from launchd.
It would be interesting to hear why some ideas were adopted and others
were left behind.

I am not trying to second-guess... just to understand the differences :)
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] kdbus vs. pipe based ipc performance

2014-03-03 Thread Kay Sievers
On Mon, Mar 3, 2014 at 11:06 PM, Kay Sievers k...@vrfy.org wrote:
 On Mon, Mar 3, 2014 at 10:35 PM, Stefan Westerfeld ste...@space.twc.de wrote:
 [Stefan's message quoted in full; snipped here, as it appears unchanged
 above.]

Re: [systemd-devel] kdbus vs. pipe based ipc performance

2014-03-03 Thread Kay Sievers
On Tue, Mar 4, 2014 at 5:00 AM, Kay Sievers k...@vrfy.org wrote:
 On Mon, Mar 3, 2014 at 11:06 PM, Kay Sievers k...@vrfy.org wrote:
 On Mon, Mar 3, 2014 at 10:35 PM, Stefan Westerfeld ste...@space.twc.de wrote:
 [Stefan's message and benchmark tables snipped here down to the concluding
 paragraphs; they appear unchanged above.]

 I ran the tests twice - once using the same cpu for client and server (via 
 cpu
 affinity) and once using a different cpu for client and server.

 The SIZE, COPY and MEMFD column are produced by test-bus-kernel-benchmark
 chart, the KDBUS-MAX column is the maximum of the COPY and MEMFD column. So
 this is the effective number of roundtrips that kdbus is able to do at that
 SIZE. The IBENCH column is the effective number of roundtrips that ibench 
 can
 do at that SIZE.

 For many relevant cases, ibench outperforms kdbus (a lot). The SPEEDUP 
 factor
 indicates how much faster ibench is than kdbus. For small to medium array
 sizes, ibench always wins (sometimes a lot). For instance passing a 4Kb 
 array
 from client to server and returning back, ibench is 3.90 times faster if 
 client
 and server live on the same cpu, and 3.14 times faster if