Re: [systemd-devel] [PATCH] firmware: wake all waiters
On 06/28/2017 06:06 PM, Luis R. Rodriguez wrote:
> On Wed, Jun 28, 2017 at 12:06 AM, Lennart Poettering wrote:
>> On Wed, 28.06.17 00:24, Luis R. Rodriguez (mcg...@kernel.org) wrote:
>>> I think it was first packaged into systemd, and then it was split out
>>> to help those who want it external.
>>
>> Certainly not. I'd sure know about that. ;-)
>
> Sorry, I may have confused 'intended to be at first'. Tom and Daniel
> can elaborate.

Tom helped out building the daemon as a standalone project. There is no real reason that it needs to be part of the systemd code base.

Thanks,
Daniel

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
Re: [systemd-devel] Spurious failures starting ConnMan with systemd
Hi Colin,

On 11/15/2016 03:46 PM, Colin Guthrie wrote:
> Hope you're keeping well?

Doing fine! How is life in the north?

> So, by default, dbus should be socket activated. That means that when
> dbus.service eventually starts shouldn't really matter, provided it is
> eventually started. This is because it's actually dbus.socket that's the
> important unit. It should be started by sockets.target which is pulled
> in as part of the default dependencies that all units get automatically
> (provided they've not disabled this)

Okay, that makes sense.

> So, check for dbus.socket, and check that connman.service doesn't
> disable default deps.

I think we do, see below.

> If socket activation is used, then there shouldn't be any need to
> mention dbus.socket/service in connman.service at all.

"""
[Unit]
Description=Connection service
DefaultDependencies=false
Conflicts=shutdown.target
RequiresMountsFor=/var/lib/connman
After=dbus.service network-pre.target systemd-sysusers.service
Before=network.target multi-user.target shutdown.target
Wants=network.target

[Service]
Type=dbus
BusName=net.connman
Restart=on-failure
ExecStart=/usr/sbin/connmand -n
StandardOutput=null
CapabilityBoundingSet=CAP_KILL CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SYS_TIME CAP_SYS_MODULE
ProtectHome=true
ProtectSystem=full

[Install]
WantedBy=multi-user.target
"""

Okay, so the DefaultDependencies=false is causing this problem then. In our commit history I found the following explanation for it:

"""
commit 09aa0243aac40ec4e5bd0fbe41e702be4952a382
Author: Patrik Flykt
Date:   Thu Sep 17 10:42:46 2015 +0300

    connman.service: Fix dependencies for early boot

    Unset default dependencies in order to properly run at early boot and
    require the save directory to be mounted before starting. See the
    systemd.unit man page, Debian's wiki page
    https://wiki.debian.org/Teams/pkg-systemd/rcSMigration and the
    upstream systemd-networkd.service file for details.

diff --git a/src/connman.service.in b/src/connman.service.in
index 8f7f3429f7dc..0a8f15c9f90b 100644
--- a/src/connman.service.in
+++ b/src/connman.service.in
@@ -1,7 +1,10 @@
 [Unit]
 Description=Connection service
+DefaultDependencies=false
+Conflicts=shutdown.target
+RequiresMountsFor=@localstatedir@/lib/connman
 After=dbus.service network-pre.target
-Before=network.target remote-fs-pre.target
+Before=network.target shutdown.target remote-fs-pre.target
 Wants=network.target remote-fs-pre.target
 
 [Service]
"""

Hmm, now I am confused...

Thanks,
Daniel
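As an aside, the hard dependency on D-Bus that this thread converges on could be expressed without patching the shipped unit, via a local drop-in. The path and file name below are illustrative, not an actual ConnMan change:

```ini
# /etc/systemd/system/connman.service.d/dbus.conf (hypothetical drop-in)
[Unit]
# After= only orders the two units; Requires= additionally pulls in
# dbus.service and marks connman.service as failed if D-Bus cannot start.
Requires=dbus.service
After=dbus.service
```

Since connman.service sets DefaultDependencies=false, it does not receive the implicit After=basic.target ordering (and, through sockets.target, the indirect relationship to dbus.socket) that ordinary units get, so an explicit dependency like this has to be spelled out.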
Re: [systemd-devel] Spurious failures starting ConnMan with systemd
[Cc: systemd mailing list. What happened until now:

"""
I am working on a system based on the Yocto Project which uses ConnMan v1.33 for network management. The system uses systemd and generally works well so far, but once in a blue moon ConnMan fails to start up. Here is the relevant log from systemd:

[   15.886270] systemd[1]: connman.service: Main process exited, code=exited, status=1/FAILURE
[   15.912838] systemd[1]: Failed to start Connection service.

There were no logs from connmand in this case, so in the connman.service file I've added the -d option to ExecStart= and set StandardOutput to syslog. I have restarted the system many times until the failure occurred again. There is only a single line from connmand in the log:

[   12.174528] connmand[278]: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[   12.209448] systemd[1]: connman.service: Main process exited, code=exited, status=1/FAILURE
[   12.242872] systemd[1]: Failed to start Connection service.

So the system D-Bus is not up and running at the time ConnMan is started. In case of failure, I've noticed there is also no mention of D-Bus starting up in the log, but in all other logs there is.

Now, I think the After=dbus.service line in src/connman.service.in is not enough, because After= specifies ordering, but not a requirement dependency. That is what Requires= is for. I've added the line Requires=dbus.service to the service file, reverted my other changes, and tried again. I've been unable to reproduce the problem ever since, so for me the problem seems to be fixed.

What do you think? Should src/connman.service.in be changed to include the Requires= line by default, or is this something to be fixed somewhere else? Should network-pre.target and systemd-sysusers.service be added to Requires= as well?
"""
]

On 11/15/2016 10:45 AM, Robert Tiemann wrote:
> On 11/14/2016 01:30 PM, Patrik Flykt wrote:
>> On Fri, 2016-11-11 at 12:04 +0100, Daniel Wagner wrote:
>>> I am no systemd expert. I just did a quick read in the documentation
>>> and it seems it is recommended to have After= and Requires= together:
>>
>> Yes, this seems to be required here. Care to send a patch which adds
>> Requires=? Please include your excellent error report in the commit
>> message.
>
> I can create a patch that adds Requires=dbus.service to the service
> file if you like, but please read my comments below.
>
>>> Hmm, then again man systemd.service says that "...Service units with
>>> this option configured implicitly gain dependencies on the
>>> dbus.socket unit. This type is the default if BusName= is
>>> specified..." ConnMan defines both Type=dbus and BusName=net.connman,
>>> so it should already have these dependencies. Unless the
>>> DefaultDependencies=false overrides this behavior. Do we have any
>>> systemd experts around?
>
> I am by no means a systemd expert, so instead I've tried a few things.
> The output of
>
>   $ systemctl show connman.service
>
> with the service file from v1.33 installed does not show any
> dependencies on dbus.socket, only an After= dependency on dbus.service.
> The only Requires= dependency is on -.mount (and then there is also
> the RequiresMountsFor= dependency).
>
>>> I always thought this should be enough because dbus.service is
>>> actually required by other services in the system, therefore ConnMan
>>> should be started *after* it, right?
>
> Well. As I've come to realize, the troubles started after an upgrade
> from ConnMan v1.30 to v1.33 a few weeks ago. So, I've checked the
> changes to connman.service.in. With the service file shipped with
> v1.30, the output of the above command shows Requires=basic.target,
> which depends on sockets.target, which depends on dbus.socket. I think
> that's why I didn't have any trouble with v1.30. In 09aa024 (the first
> change to the service file after the v1.30 release),
> DefaultDependencies was set to false. According to
> https://www.freedesktop.org/software/systemd/man/systemd.special.html,
> an After= dependency on basic.target is added automatically if
> DefaultDependencies is set to yes, so this is missing now. The After=
> dependency on dbus.service is still there, but it is not Required.
>
> Out of curiosity, I've removed the dbus.service dependency completely
> to check if Type=dbus is enough, as documented. It turns out it is not.
> Neither the output of "systemctl show connman.service" nor that of
> "systemctl list-dependencies connman.service" shows _any_ dependencies
> on dbus.service/dbus.socket in this case.
>
> Why the automatically added After= dependency on basic.target has
> turned into a Requires= dependency with the v1.30 file, while the
> explicit After=dbus.service dependency in the file from 09aa024
> remains an After= dependency, is beyond me. My guess is that
> DefaultDependencies does a little bit more than is documented, or
> maybe it's just a bug in my version of systemd (v225). Maybe it's some
> strange interaction between unit files that I am unable to see. A
> comment from
Re: [systemd-devel] nftables
Hi Daniel,

On 02/22/2016 11:39 AM, Daniel Mack wrote:
> On 02/22/2016 11:04 AM, Daniel Wagner wrote:
>> On 02/22/2016 09:54 AM, Tomasz Bursztyka wrote:
>>> I haven't been following the recent (well... the last ~2 years ^^')
>>> work on nftables but I believe there are still people using the
>>> iptables format. They use the iptables-to-nftables compatibility
>>> tool, and it mimics iptables through nftables. Verify it, but if
>>> it's the generic way, then it would be as usual in ConnMan.
>
> It aims for that, yes, but in my tests there were quite some uncovered
> corner cases when it comes to command line compatibility. But I wonder
> why this is related to ConnMan - does it call out to the binary?

We are using libxtables directly, which is a huge pain as it was never designed to be used the way we use it. We didn't want to call iptables via shell all the time.

>>> If not, then you will have to come up with a strategy that fits all.
>>> And that's when it might become tricky. As you noticed, ConnMan will
>>> have to avoid conflicts.
>
> I haven't followed the discussions. What kind of conflicts is this about?

For iptables, we are maintaining our own input and output chains and insert them into the main chains. And then we hope no one kills the chains. It gets tricky to sync up correctly after a crash or restart. In short, we are tiptoeing around, trying to avoid messing with other iptables rules.

>>> I believe it will be a bit easier than with iptables though.
>>> - you pull the current context:
>>>   -> if there is nothing, it will be easy: ConnMan will just push
>>>      whatever it wants.
>>>   -> if something is there, integrating will have to play nice.
>>>      Basically: finding the input entry point to jump to a custom
>>>      ConnMan table (you'll probably need that for each IP version),
>>>      get your rules applied, or return if nothing.
>
> IMO, ConnMan, or any other tool for that matter, should never mess with
> rules it hasn't created itself.

Yes, completely agree.

> nftables makes this easy, as it allows for namespaces in the rule
> sets, through custom tables.

Ah, that sounds like what I was looking for. This is the answer to my initial question :)

> If any other tool installs other tables with conflicting rules, I
> don't think ConnMan should care really.

That is the plan.

> With iptables, this is just a bit trickier due to the custom jump
> label that you have to install in the INPUT and OUTPUT chains.

Yep.

> Right now, there are two parts of systemd that touch packet filter
> configurations, nspawn and networkd, and the code already takes care
> of not touching any rule that it has no business with.
>
>> Lennart posted the heads-up on systemd moving from iptables to
>> nftables [1]. I don't know how far those plans have gotten but I
>> think it would be a good idea to coordinate this with systemd-networkd.
>>
>> @Tom do you happen to know what the status is on this?
>
> It was me who worked on this, and I postponed the branch a while ago
> until we know how the kernel APIs look like that we want to use for a
> per-unit packet filtering mechanism. Back then, it looked like this
> could only be achieved with nftables, which is why I reworked all the
> code in systemd.

Ah, I thought Tom was working on this. Sorry about that.

> Now, things are not that clear anymore, so the decision was postponed.
> I hope to catch up with this soon, but eventually, I think we should
> move to nftables as well. Note, however, that iptables and nftables
> may coexist on a system.

My hope is that if ConnMan just uses nftables, the kernel needs to sort out the problems. I know I live in a naive world. :)

>>> About coding around, it's a bit messy. There is one library,
>>> libnftnl. It's not built on top of libnl. I am not entirely sure,
>>> but I think you can hook your own netlink access functions into it.
>>> By default it uses libmnl... You'll have to verify that. Ask Marcel
>>> what he would prefer as well. Afaik, there is still the plan to move
>>> ConnMan to ell, so keeping its custom netlink access would make
>>> sense then, I guess.
>>
>> I would really like to avoid coding netlink directly. libnftnl
>> doesn't look too bad.
>
> You can have a look at the outdated branch of mine here:
>
> https://github.com/zonque/systemd/commits/nftnl

Thanks for the pointer.

cheers,
daniel
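The "namespaces in the rule sets, through custom tables" point can be illustrated with a small nft ruleset sketch. The table and chain names are hypothetical, not anything ConnMan actually ships:

```
# A ConnMan-owned table. Other tools keep their rules in their own
# tables; each table registers its own base chains, so nothing here
# requires touching (or being touched by) foreign rules.
table inet connman {
    chain input {
        type filter hook input priority 0; policy accept;
        # per-session/service rules would be added and removed here
    }
    chain output {
        type filter hook output priority 0; policy accept;
    }
}
```

Cleaning up is equally contained: `nft delete table inet connman` removes only ConnMan's rules, which is exactly the isolation that the custom jump chains in INPUT/OUTPUT never quite gave us with iptables.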
Re: [systemd-devel] nftables
Hi Tomasz and Tom,

On 02/22/2016 09:54 AM, Tomasz Bursztyka wrote:
>> Hi Tomasz,
>>
>> I just chatted with Patrik and told him about my idea to teach ConnMan
>> to use nftables instead of iptables. There are a few things to figure
>> out first though.
>>
>> If I got that right, there is no policy from the kernel on how to
>> structure the rule sets. nat/filter tables and input/forward/output
>> chains are userland policies. The question is how would ConnMan fit
>> into this? As you know, ConnMan tries to avoid conflicts with iptables
>> by tiptoeing around. Do we have to do the same with nftables?
>
> You are right, when you start a bare-metal kernel with nftables: there
> is nothing. No tables and no chains. It's a good thing because it
> enables flexibility, but it can also be a burden when it comes to
> integrating with custom tables & chains.

Okay, so nothing new :)

> I haven't been following the recent (well... the last ~2 years ^^')
> work on nftables but I believe there are still people using the
> iptables format. They use the iptables-to-nftables compatibility
> tool, and it mimics iptables through nftables. Verify it, but if it's
> the generic way, then it would be as usual in ConnMan.
>
> If not, then you will have to come up with a strategy that fits all.
> And that's when it might become tricky. As you noticed, ConnMan will
> have to avoid conflicts. I believe it will be a bit easier than with
> iptables though.
> - you pull the current context:
>   -> if there is nothing, it will be easy: ConnMan will just push
>      whatever it wants.
>   -> if something is there, integrating will have to play nice.
>      Basically: finding the input entry point to jump to a custom
>      ConnMan table (you'll probably need that for each IP version),
>      get your rules applied, or return if nothing.
>
> Problems start to arise when somebody/something else messes up the
> context again, like flushing out everything, reinstalling stuff...
> ConnMan will have to follow. It was the same with iptables. It's
> hopefully much easier now: there are proper netlink notification
> messages for it.

Well, let's set this problem aside for now. Lennart posted the heads-up on systemd moving from iptables to nftables [1]. I don't know how far those plans have gotten, but I think it would be a good idea to coordinate this with systemd-networkd.

@Tom do you happen to know what the status is on this?

> About coding around, it's a bit messy. There is one library, libnftnl.
> It's not built on top of libnl. I am not entirely sure, but I think
> you can hook your own netlink access functions into it. By default it
> uses libmnl... You'll have to verify that. Ask Marcel what he would
> prefer as well. Afaik, there is still the plan to move ConnMan to ell,
> so keeping its custom netlink access would make sense then, I guess.

I would really like to avoid coding netlink directly. libnftnl doesn't look too bad.

cheers,
daniel

[1] https://lists.freedesktop.org/archives/systemd-devel/2015-May/032531.html
Re: [systemd-devel] net stats per app
On 08/24/2011 04:19 PM, Gustavo Sverzut Barbieri wrote:
>>> An open question is how I make the whole thing persistent, so the
>>> counters do not begin at 0 each time an application starts. My gut
>>> feeling is that systemd should take care of this, but I don't know
>>> if that is the right direction.
>>
>> Hmm, you could simply precreate the cgroups and mark the tasks file
>> with +t (sticky bit). systemd won't remove the cgroup then after use.
>> Or, we could add a new switch ControlGroupPersistent=yes or so which
>> would set +t automatically, but systemd would still create the groups
>> for you (so that you don't have to pre-create anything), but not
>> delete them anymore. Would that make sense? (I have added this option
>> now to the todo list, since it will make sense for stuff like cpuacct
>> where we are in the same boat.)
>
> Likely he will need to keep accounting between reboots as well; in
> this case the solution can't be in systemd or the kernel. It will
> need a tool to walk these groups and accumulate them into persistent
> media, be it periodically, upon reboot or by some other method.

Yes, the aim is to maintain statistics even over system reboots. I have not yet had time to figure out how this could be done.

The use case is as follows: We have a browser instance which is only used to connect to our company portal. The user is able to start another instance of the browser which is used for ordinary web browsing. These two instances should be treated differently, so we have to maintain two statistics, one for the portal browser and one for the normal browser. If a certain traffic limit has been reached on the ordinary web browser, ConnMan should stop this session. The portal web browser is still allowed to access the internet through the current path.

We have some more of those use cases, but most of them can be mapped to that one: if a limit for an application is reached, either shut down the session or reroute the traffic to a different device.

If ConnMan knew which application is put into which cgroup, then ConnMan could maintain the statistics (also persistently). ConnMan knows which network device/route is currently in use for the application. So everything is there. I think systemd would be the wrong place to solve this.

cheers,
daniel
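The persistence problem discussed above (kernel counters restart at 0 whenever the application's cgroup is recreated or the system reboots) boils down to delta accumulation with reset detection, independent of where the raw counters come from. A minimal sketch of that bookkeeping; all names and the JSON-file store are illustrative, not ConnMan code:

```python
import json
from pathlib import Path

class PersistentCounter:
    """Accumulate a monotonically increasing counter that may reset
    to 0 (e.g. cgroup recreated on app restart, or a reboot)."""

    def __init__(self, store: Path):
        self.store = store
        state = json.loads(store.read_text()) if store.exists() else {}
        self.total = state.get("total", 0)  # bytes accumulated so far
        self.last = state.get("last", 0)    # last raw counter value seen

    def update(self, raw: int) -> int:
        # If the raw counter went backwards, the source was reset;
        # everything counted since the reset starts from 0.
        delta = raw - self.last if raw >= self.last else raw
        self.total += delta
        self.last = raw
        # Persist so a daemon restart or reboot picks up where we left off.
        self.store.write_text(json.dumps({"total": self.total,
                                          "last": self.last}))
        return self.total
```

A periodic poller would feed each application's raw byte counter into its own `PersistentCounter` and compare `total` against the configured traffic limit before deciding to tear down the session or reroute.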
[systemd-devel] net stats per app
Hi,

I would like to do network statistics per application. That is, all traffic generated by an instance of an application through its sockets should be collected and stored persistently.

There are a few sub-problems to solve. Let's start with Android. They have added a driver [1] to Linux which collects all traffic through a socket and accounts it per UID. The data is then exported through the /proc interface. Note: the framing is not counted. Consequently, every Android application installed uses a different user. Of course, it is not possible to distinguish between two instances of one application. That seems to be an acceptable solution for them.

So my thinking is, instead of using the UID driver trick, I could use cgroups for collecting the network traffic. At least from the sub-module description it seems to be the right spot to add a new statistics interface. Then systemd would do the life-cycle management and move applications into the right cgroup.

An open question is how I make the whole thing persistent, so the counters do not begin at 0 each time an application starts. My gut feeling is that systemd should take care of this, but I don't know if that is the right direction.

Is my idea completely broken or do you see any hope for this approach? I have CC'ed the ConnMan list because I hope to get this working on the shiny new Session API.

Thanks,
Daniel

[1] http://android.git.kernel.org/?p=kernel/common.git;a=blob_plain;f=drivers/misc/uid_stat.c;hb=5aa381271da879daa63420a687ca8e1c4b00deb6
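For reference, the Android driver above exports one directory per UID with per-direction byte counts. A small reader could look like the following; the exact /proc layout (per-UID `tcp_snd`/`tcp_rcv` files under `/proc/uid_stat`) is my reading of the uid_stat driver and should be treated as an assumption:

```python
from pathlib import Path

def read_uid_stats(proc_root: str = "/proc/uid_stat") -> dict:
    """Return {uid: {"tcp_snd": bytes, "tcp_rcv": bytes}} from the
    assumed uid_stat /proc layout. Framing bytes are not included in
    these counters, matching the driver's accounting."""
    stats = {}
    root = Path(proc_root)
    if not root.is_dir():
        return stats
    for uid_dir in root.iterdir():
        if not uid_dir.name.isdigit():
            continue
        entry = {}
        for name in ("tcp_snd", "tcp_rcv"):
            f = uid_dir / name
            if f.exists():
                # Each file holds a single decimal byte count.
                entry[name] = int(f.read_text().strip())
        stats[int(uid_dir.name)] = entry
    return stats
```

The limitation discussed above is visible right in the return type: the key is a UID, so two instances of the same application (same user) are indistinguishable, which is what the cgroup approach would fix.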