Re: [systemd-devel] Can coredumpctl work independent of journald?

2016-05-11 Thread P.R.Dinesh
Additionally, could we improve the journal rotation scheme to keep
higher-severity messages (like coredump entries) for a longer duration and
remove lower-priority messages first to reclaim space?
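For reference, the size caps discussed in this thread are controlled by the following settings; the values here mirror the 40 MB figures from the quoted message and are a sketch, not this system's actual files:

```ini
# /etc/systemd/journald.conf -- cap persistent journal usage
[Journal]
Storage=persistent
SystemMaxUse=40M

# /etc/systemd/coredump.conf -- store cores externally (compressed), cap usage
[Coredump]
Storage=external
Compress=yes
MaxUse=40M
```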

On Thu, May 12, 2016 at 8:15 AM, P.R.Dinesh  wrote:

> Thank you Lennart,
> I would like to explain my system scenario.
>
> We are using systemd version 219. (Updating to 229 is in progress).
>
> We are configured for persistent storage for both the journal and coredumps
> (coredumps are stored externally).
>
> The logs and coredumps are stored in another partition, under
> "/var/diagnostics/logs" and "/var/diagnostics/coredump"; we symlink
> /var/lib/systemd/coredump and the log directory to those folders.
>
> Coredump and journald are each configured to use up to 10% of the disk
> space (total disk space is ~400 MB), which allocates 40 MB to journal logs
> and 40 MB to coredumps.  For some reason (under investigation) some of our
> daemons are generating so many logs that journald reaches the 40 MB limit
> within 6 hours, so journald starts wrapping around.  Meanwhile some
> daemons have also crashed and dumped core.
>
> Now when I run coredumpctl list, none of those coredumps are shown.
>
> I also tried launching coredumpctl with the coredump, both by PID and by
> coredump file name.  Since we don't have the journal entry, coredumpctl
> will not launch them.  Can we at least have coredumpctl launch gdb using
> the core dump file name?
>
> [prd@localhost ~]$ coredumpctl gdb
> core.lshw.1000.9bb41758bba94306b39e751048e0cee9.23993.146287152300.xz
> No match found.
> [prd@localhost ~]$ coredumpctl gdb 23993
> No match found.
>
>
> In summary, logs are very frequent and core dumps are rare on our system,
> which leads to the loss of coredump information.
>
> I am thinking of two solutions here:
> 1) Enhance coredumpctl to launch gdb using the coredump file name.
> 2) Store the journal entries for coredumps separately from other journal
> logs so that they can be kept for a long duration (is this feasible?)
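As a stopgap for solution (1), one can bypass coredumpctl entirely: decompress the stored core and point gdb at it directly, skipping the journal lookup. The file name below is the hypothetical one from the transcript above, and a stand-in compressed "core" is fabricated so the xz steps are runnable as-is; the gdb invocation and binary path are assumptions.

```shell
# Manual stand-in for "coredumpctl gdb": decompress the on-disk core and
# hand it straight to gdb, bypassing coredumpctl's journal lookup.
core_xz="core.lshw.1000.9bb41758bba94306b39e751048e0cee9.23993.146287152300.xz"

# Fabricate a stand-in compressed core so the commands below are runnable.
printf 'fake core contents' > "${core_xz%.xz}"
xz --force "${core_xz%.xz}"       # produces $core_xz, removes the original

# Decompress to a scratch file, keeping the .xz archive in place:
xz --decompress --keep --stdout "$core_xz" > /tmp/core.lshw

# With a real core you would now run (binary path is an assumption):
#   gdb /usr/bin/lshw /tmp/core.lshw
```

This only needs read access to the coredump directory and does not depend on the journal entry still existing.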
>
> Thank you
> Regards,
> Dinesh
>
> On Wed, May 11, 2016 at 10:25 PM, Lennart Poettering <
> lenn...@poettering.net> wrote:
>
>> On Wed, 11.05.16 20:31, P.R.Dinesh (pr.din...@gmail.com) wrote:
>>
>> > I have set journald to be persistent and limited its size to 40MB.
>> > A process dumped core and the coredump file is found in
>> > /var/log/systemd/coredump
>> >
>> > When I run coredumpctl this coredump is not shown.
>> >
>> > Later I found that the core dump log entry is missing from the journal
>> > (the journal wrapped around since it reached the size limit).
>> >
>> > I think coredumpctl depends on the journal to display coredumps.  Can't
>> > it search for the coredump files present in the coredump folder and list
>> > those files?
>>
>> We use the metadata and the filtering that the journal provides us with;
>> the coredump on disk is really just secondary, external data that can be
>> lifecycled more quickly than the logging data. We extract the backtrace
>> from the coredump at the moment the coredump happens, and all of that,
>> along with numerous metadata fields, is stored in the journal. In fact,
>> storing the coredump is optional, because in many setups the short
>> backtrace in the logs is good enough and the coredump is less important.
>>
>> So, generally the concept here really is that logs are cheap, and thus
>> you keep more of them around; coredumps are large, and thus you lifecycle
>> them more quickly. If I understand correctly, what you want is the
>> opposite: a quicker lifecycle for the logs, but keeping the coredumps
>> around for longer. I must say, I am not entirely sure when such a setup
>> would be a good idea... i.e. wanting persistent coredumps but volatile
>> logging sounds like a strange combination to me... Can you make a good
>> case for this?
>>
>> But yeah, we really don't cover what you are asking for right now, and
>> I am not sure we should...
>>
>> > Also, can I launch coredumpctl gdb by providing a compressed core
>> > file?
>>
>> If you configure systemd-coredump to store the coredumps compressed
>> (which is in fact the default), then "coredumpctl gdb" will implicitly
>> decompress them so that gdb can do its work.
>>
>> Lennart
>>
>> --
>> Lennart Poettering, Red Hat
>>
>
>
>
> --
> With Kind Regards,
> Dinesh P Ramakrishnan
>



-- 
With Kind Regards,
Dinesh P Ramakrishnan
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel




Re: [systemd-devel] Using systemd --user to manage graphical sessions?

2016-05-11 Thread Luke Shumaker
On Wed, 11 May 2016 12:13:45 -0400,
Martin Pitt wrote:
> Or is someone actually using systemd --user for graphical sessions
> already and found a trick that I missed?

I am!  For several months now (and before that, I was still using
systemd --user for graphical stuff, but not consistently/coherently).

My configuration: https://lukeshu.com/git/dotfiles/tree/.config
Crappy write-up: https://lukeshu.com/blog/x11-systemd.html

One thing to note is that I don't use a DE, and have minimal
bus-activated services.

The big difference between what I do and what you wrote is that I
don't tie the DISPLAY name to the XDG_SESSION_ID (actually, the
session ID isn't even set in the graphical session).

The short version of how I have it work:

My ~/.xinitrc (AKA: script that starts the initial X11 clients)
contains:

_DISPLAY="$(systemd-escape -- "$DISPLAY")"
mkfifo "${XDG_RUNTIME_DIR}/x11-wm@${_DISPLAY}"
cat < "${XDG_RUNTIME_DIR}/x11-wm@${_DISPLAY}" &
systemctl --user start "X11@${_DISPLAY}.target" &
wait
systemctl --user stop "X11@${_DISPLAY}.target"

Which basically says: start X11@:0.target, wait for something to open
"${XDG_RUNTIME_DIR}/x11-wm@${_DISPLAY}" for writing and then close it,
then stop X11@:0.target.  Then I have my window manager configured to
open/close the file when I want to quit X11/log out (really, I have it
open at start, then just exit on quit; implicitly closing it).

Then, each of the whatever@${DISPLAY}.service files contains:

[Unit]
After=X11@%i.target
Requisite=X11@%i.target
[Service]
Environment=DISPLAY=%I

Now, when I launch a program from the window manager, I have it launch
with `systemd-run --user --scope -- sh -c "COMMAND HERE"`, so that
systemd can tell the difference between it and the window manager.  I
imagine that this would be problematic with less configurable window
managers.
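Concretely, a per-display unit built from the template above might look like this; the xterm ExecStart is a hypothetical filler, since the original message omits the [Service] command line:

```ini
# ~/.config/systemd/user/xterm@.service -- instantiated as e.g. xterm@:0.service
[Unit]
Description=xterm on display %I
After=X11@%i.target
Requisite=X11@%i.target

[Service]
# %I is the unescaped instance name, i.e. the raw DISPLAY value
Environment=DISPLAY=%I
ExecStart=/usr/bin/xterm
```

Requisite= (rather than Requires=) means the unit fails to start unless the X11 target is already active, matching the "hang everything off X11@DISPLAY.target" design.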

As I type this, I have two graphical logins to the same user.  One on
a real screen, and the other with a fake screen via `vncserver` (of
course, managed as a systemd user unit :-) ).

The only problem I have with this setup is that dunst (my desktop
notification daemon) isn't happy running multiple instances on
different displays.  I think it's because it isn't happy sharing the
dbus, but I haven't spent even 1 minute debugging it yet.

-- 
Happy hacking,
~ Luke Shumaker


Re: [systemd-devel] Using systemd --user to manage graphical sessions?

2016-05-11 Thread Martin Pitt
Hello Mantas,

thanks for your reply!

Mantas Mikulėnas [2016-05-11 19:54 +0300]:
> AFAIK, the general idea of --user is that there's at most one graphical
> session (per user) at a time, so things like $DISPLAY naturally become per
> user.

Right, I understand that. But that doesn't mean we *always* have a
$DISPLAY. So simply having user units start with systemd --user does not
work: if the first login is on a VT or through ssh, these would all fail,
and if you then actually log in on the DM, nothing would start them.

That could be helped a bit with a ConditionHasEnvironment=DISPLAY or some
such, and an xinit.d script that restarts them. But while all of this can
be achieved through assorted good/bad hacks, my main issue is still that
you can't bind those units to the system unit with the correct
lifecycle -- session-*.scope.

> > I can start this with "systemctl --user start xeyes@${XDG_SESSION_ID}",
> > but this will hang off user@1000.service instead of session-*.scope
> > and thus it will not be stopped once the X session gets logged out.
>
> Most X11 clients will exit as soon as the X server goes away, no?

They do, but you get failed units and scary error messages, and this
happens *after* the X session is already gone -- thus no way for them
to gracefully shut down.

> `systemctl --user import-environment DISPLAY` seems to work well enough.

Right, that part works fine, that's not the problem.

> In stock GNOME, I already have a bunch of bus-activated apps running off
> the user bus (dbus.service), such as gedit, nautilus, or gnome-terminal;
> the latter finally got its own gnome-terminal.service in 3.20.2.

gnome-terminal is what I looked at too, and what I referred to in the
bug report: https://bugzilla.gnome.org/show_bug.cgi?id=744736 -- this
is pretty broken right now :-( It is essentially one program trying (and
failing) to work around the fundamental issue that a lot of things don't
make sense to start with the first non-X login and keep around until
after the X session ends.

Thanks,

Martin

-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)


Re: [systemd-devel] resolved: use special nameservers for some domains

2016-05-11 Thread Lennart Poettering
On Wed, 11.05.16 12:13, Felix Schwarz (felix.schw...@oss.schwarz.eu) wrote:

> I'd like to know if resolved can redirect DNS queries for certain domains to
> different name servers?

Well, kinda. If you have multiple interfaces, and each interface has
one or more DNS servers attached, and each interface also has one or
more domains attached, then the DNS lookups within those domains will
be passed to the matching DNS servers. Thus, you have some pretty
powerful routing in place.

However, this all implies that there's an interface for each of these
DNS server "routes"... if you only have a single interface, and want
to route on that single interface to different DNS servers then, nope,
we don't support that currently. But I think it would be OK to add
this.

(You can hack around this for now, if you like, by adding a "dummy"
network interface via a .netdev file with Kind=dummy, and attaching the
DNS server/domain data to it.)
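The dummy-interface hack sketched above would look roughly like this; the interface name, server address, and domain are illustrative:

```ini
# /etc/systemd/network/dns-dummy.netdev -- a virtual link to hang DNS config on
[NetDev]
Name=dns-dummy
Kind=dummy

# /etc/systemd/network/dns-dummy.network -- route lookups for example.com here
[Match]
Name=dns-dummy

[Network]
DNS=192.0.2.53
Domains=example.com
```

resolved then treats dns-dummy like any other interface with per-link DNS servers and search domains attached, so queries within example.com are directed to 192.0.2.53.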

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] network unit Match by router advertisement?

2016-05-11 Thread Lennart Poettering
On Wed, 11.05.16 11:32, Brian Kroth (bpkr...@gmail.com) wrote:

> Hi again all,
> 
> TL;DR: would it be possible (or make sense) to have systemd Match rules for
> network units that could match on some artifact of the network the link is
> attached to like vlan tag, router advertisement, wireless access point or
> gateway mac, etc.?

Well, .network files contain the definition of how to set up a network
interface, i.e. how to place it into the UP state so that packets can be
received, and how to configure IP routing so that further communication
works. Hence networkd uses relatively static properties of the device,
ones already available while the device is offline, to find the right
.network file containing the dynamic configuration to apply in order to
put it online. The router advertisement info and things like the gateway
MAC are pieces of information that are only available once the network is
already up, i.e. once the network configuration has already been applied.
Hence using them as a match for the configuration can't work: by the time
we could use that information, we would already have had to apply the
configuration. And if we don't apply it, we never acquire the
information...

The VLAN tag is a different case though: it's assigned when the VLAN
network device is created, and configured in the .netdev file for that
device. Thus it's already set the moment the network device pops up, and
it could nicely be used for matching. So yupp, adding a MatchVLANId= or
so might make sense. Please file an RFE issue on GitHub if you'd like to
see this implemented.
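For context, the VLAN id referred to above is declared today in the VLAN's own .netdev file, so it is fixed before the device ever appears; the names here are illustrative:

```ini
# /etc/systemd/network/vlan123.netdev -- the id is fixed at creation time
[NetDev]
Name=vlan123
Kind=vlan

[VLAN]
Id=123
```

A hypothetical MatchVLANId= in a .network [Match] section could then key off this same value, since it is known before the link comes up.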

Matching by AP could work too. IIRC, today's WLAN drivers will actually
create virtual links for the network you connect to, and the ESSID for
each would be set before networkd takes notice of it, hence this is
probably something we could do. Note, however, that networkd does not
interface with the WLAN stack at all at this point; a WLAN device is
treated like any other Ethernet device atm.

> However, the missing bit then would be network address assignment for the
> various instances to the right interfaces.  Ideally, I'd just stamp out
> network unit files and have the apache instance units depend upon that, but
> the trouble is that traditionally NIC naming hasn't always been consistent
> in the past.
> 
> I've read through [1], but it doesn't really provide what I'm looking for.
> Physical layout of the nic-port-types is semi interesting and perhaps
> consistent, but network operator error may result in a misassigned vlan
> port, or simply the wrong cable to the wrong port (which can be true for
> physical or virtual realms unfortunately), etc.
> 
> What I did in the past to work around that was to use ndisc6 or something
> similar to verify that the expected interface had the expected network
> properties - in this case a router advertisement.

Hmm, schemes like this sound a bit dangerous, no? I mean, if you base
your decision about whether to apply the relatively open "internal LAN"
config or the restricted "internet" config to an interface on the
traffic you see on the port, then you make yourself vulnerable to
people sending you rogue IP packets...

I see your usecase though, but I don't really have any good suggestion
what to do in this case I must say...

Maybe adding something like a RequireDHCPServer= setting, which allows
configuration of a DHCP server address and, when set, results in logged
warnings if DHCP leases are offered from servers other than the
configured one, might be an option? I.e. so that you at least get a
loggable event when some .network file is applied to the wrong iface?

But dunno, maybe Tom has an idea about this? Tom?

> [2] Sidenote: In the past I've used an old trick of setting the
> preferred_lft to 0 for IPv6 addresses that I wanted to be available to
> services, but not selected for outbound connections from the host.  This
> was basically to help influence the usual source address selection criteria
> which tries to avoid "deprecated" addresses.  I didn't see a way to specify
> that in the systemd.network man page.  Is there one that I'm missing, or is
> that another case for an Exec... statement?

This has been added very recently to systemd, see #3102, #2166,
b5834a0b38c1aa7d6975d76971cd75c07455d129. It will be available with
the next release.
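With that addition, the preferred_lft trick from the sidenote can be expressed declaratively in a .network file; the address here is illustrative:

```ini
# .network fragment: assign an address services can bind to, but which
# source-address selection treats as deprecated (never picked for outbound)
[Address]
Address=2001:db8::80/64
PreferredLifetime=0
```

This replaces the manual `ip addr add ... preferred_lft 0` step with static networkd configuration.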

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] default service restart action?

2016-05-11 Thread Lennart Poettering
On Wed, 11.05.16 11:27, Brian Kroth (bpkr...@gmail.com) wrote:

> Hi all, I'm in the midst of steeping myself in systemd docs as I prepare to
> face lift a slew of services for Debian Jessie updates.
> 
> As I read through things I'm starting to think through a number of new ways
> I could potentially reorganize some of our services, which is cool. With my
> ideas though I think I'm finding a few gaps in either my understanding or
> systemd capabilities, so I wanted to send a few questions to the list.
> Hopefully this is the right place.
> 
> The first should hopefully be a bit of a softball:
> 
> With .service units one can specify OnFailure and other sorts of restart
> behaviors, including thresholds and backoffs for when to stop retrying and
> what to do then. Essentially a lightweight service problem escalation
> procedure.
> 
> However, in reading systemd-system.conf, I don't see any way to specify
> something like DefaultOnFailure behavior for what to do on failure, perhaps
> after some simple restart attempts, for all services.  Seems like it can
> only be done on a per unit basis, no?

That is correct, yes.

> Ideally, I'd like to be able to do something very simply like, declare
> if any service fails to restart itself or does so too often and enters a
> hard failure state, then systemd should (attempt to) fire off an
> escalation procedure unit like send a passive check status to Nagios or
> send an email, accepting that such procedures may depend upon network
> connectivity which may or may not be available (so maybe there's some
> circular dependency issues to work through in such a scenario, but I
> presume systemd already has facilities for handling that case, maybe via
> OnFailureJobMode= settings).
> 
> Thoughts?

That sounds like it goes towards service monitoring?

I figure our theory there was that monitoring systems should keep an
eye on the generated journal stream, where events about these issues
appear. These log entries are recognizable by their message ID and carry
both human-readable text as well as structured metadata that lets you
know what's going on. Our plan was originally to then add a concept of
"activation-by-log-event" to systemd, so that you could activate some
service each time a log event of a certain kind happens. However, we
never got around to actually hacking that up; it's still on the TODO
list.

I think OnFailure= and the like are pretty useful for some things, but
for the monitoring case such a journal-based logic would be nicer,
because it covers events triggered at a quick pace and during early boot
more gracefully, as the processing can happen serially and
asynchronously... Also, it would allow much nicer filtering for any kind
of event on the system, and we wouldn't have to hook up every kind of
failure of every service with an OnFailure= style dependency.

So yeah, I think we should have better support for what you are trying
to do, but I think we should best do that by delivering the
activate-by-log-message feature after all...
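Until such an activate-by-log-event feature exists, the per-unit escalation described above can be wired up with OnFailure=; the unit names and the notification script are hypothetical:

```ini
# Drop-in for a monitored service: escalate once restart attempts run out
[Unit]
OnFailure=failure-notify@%n.service

[Service]
Restart=on-failure
StartLimitBurst=5

# failure-notify@.service -- %i carries the name of the failed unit
[Unit]
Description=Failure escalation for %i

[Service]
Type=oneshot
ExecStart=/usr/local/bin/notify-failure %i
```

The drop-in still has to be added per unit (or via a shared template), which is exactly the gap the question points out: there is no DefaultOnFailure=.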

Lennart

-- 
Lennart Poettering, Red Hat




Re: [systemd-devel] Using systemd --user to manage graphical sessions?

2016-05-11 Thread Mantas Mikulėnas
On Wed, May 11, 2016 at 7:13 PM, Martin Pitt  wrote:

> Hello all,
>
> I've been experimenting with systemd --user as a possible replacement
> for bringing up graphical user sessions. We currently bring up most of
> that using upstart jobs (simple auto-restart on crashes, rate
> limiting, per-job logging, fine-grained startup condition control).
> There is one upstart process per session, so this is reasonably
> straightforward.
>
> But I still can't wrap my head around the mental model of how this is
> supposed to work with the user D-Bus and systemd instance. This works
> fine for some services which are not specific to a session, such as
> gvfs or pulseaudio, but it's not at all appropriate for user-facing
> applications or desktop components, such as gnome-terminal[1],
> gnome-session, the window manager, indicators, etc. They are
> necessarily per-session, and sometimes even need to be keyed off the
>

AFAIK, the general idea of --user is that there's at most one graphical
session (per user) at a time, so things like $DISPLAY naturally become per
user.

(It's not actually that bad, if you consider how many people have been
hardcoding :0 or grepping `ps -ef` to map a UID to display/xauthority/bus
until now...)

> I can start this with "systemctl --user start xeyes@${XDG_SESSION_ID}",
> but this will hang off user@1000.service instead of session-*.scope
> and thus it will not be stopped once the X session gets logged out.
>

Most X11 clients will exit as soon as the X server goes away, no?


> Is this all in vain, and we need to make the *entire* world of free
> software shift over to the new model of "user-wide services" before we
> can use systemd --user for graphical stuff? (Colin's patch list is
> already impressively long and it is by far not complete)
>

I think most programs will work as is just fine, as long as $DISPLAY is
passed to the --user instance and the dbus-daemon.


> Or is someone actually using systemd --user for graphical sessions
> already and found a trick that I missed?
>

`systemctl --user import-environment DISPLAY` seems to work well enough.
(There's also `dbus-update-activation-environment` which can push
environment into both dbus-daemon and systemd at once.)

In stock GNOME, I already have a bunch of bus-activated apps running off
the user bus (dbus.service), such as gedit, nautilus, or gnome-terminal;
the latter finally got its own gnome-terminal.service in 3.20.2.

(I don't include XAUTHORITY in the above example because Xorg has long
supported UID-based access control via `xhost +si:localuser:XXX`, e.g. gdm
sets that up by default.)

-- 
Mantas Mikulėnas 


[systemd-devel] network unit Match by router advertisement?

2016-05-11 Thread Brian Kroth
Hi again all,

TL;DR: would it be possible (or make sense) to have systemd Match rules for
network units that could match on some artifact of the network the link is
attached to like vlan tag, router advertisement, wireless access point or
gateway mac, etc.?

So, the original motivation for this question comes from a web hosting
platform we developed that uses something like lightweight pre-containers
to run multiple apache instances per VM.  We run multiple instances per
VM, each as its own user, to avoid the overhead of full-on VMs for each
apache (which are generally mostly idle) without the performance overhead
of something like suexec.  To run them each as their own user, we bind
each to its own IPv6 address [2].  A separate reverse proxy setup
provides IPv4 connectivity, caching, security filters, etc.

Anyway, in the past all of this dependency and setup madness was managed
with some Perl scripts and a database that would set up the appropriate
conf files on disk, addresses on the appropriate network interfaces
(there are between two and four on each node), and environment variables
before calling the standard sysv init script multiple times to start each
instance.

As I think about how I could move towards a systemd-integrated system,
I'm hoping to reduce this process to just stamping out (possibly
instanced) apache service unit files, php-fpm unit files, maybe some
slice unit files for arranging them into appropriate cgroup hierarchies,
maybe some lightweight container features like fs namespaces, probably
grouped by some target(s) for handling batch operations, etc., and just
make systemd manage the process dependencies and
starting/stopping/monitoring/etc.

However, the missing bit then would be network address assignment for the
various instances to the right interfaces.  Ideally, I'd just stamp out
network unit files and have the apache instance units depend upon them,
but the trouble is that NIC naming hasn't always been consistent in the
past.

I've read through [1], but it doesn't really provide what I'm looking
for.  The physical layout of the nic-port-types is semi-interesting and
perhaps consistent, but network operator error may result in a
misassigned vlan port, or simply the wrong cable to the wrong port (which
can be true for physical or virtual realms unfortunately), etc.

What I did in the past to work around that was to use ndisc6 or something
similar to verify that the expected interface had the expected network
properties - in this case a router advertisement.

Something similar in a Match section in systemd network units could, I
think, be useful.  It could also be extended to other ideas like which
wireless access point you're attached to at the moment, what the MAC
address of the gateway that DHCP assigned to you is, or what tagged vlan
attributes you see on the wire, etc. That could be used to fire off other
configuration events, especially in the case of mobile clients, when
systemd discovers via network artifacts that the machine has moved to a
new location and the user may want to perform some extra config actions,
a backup job, etc.

The only other way I can think of to emulate this might be to write a
series of udev rules that executed the appropriate discovery and matching
commands and then assigned interface alias names and then match on that in
the network units.  For instance, through RAs or VLAN tags I might
determine that the interface is on VLAN 123, so I create an interface alias
of vlan123, and then use network unit rules to match on that name when the
link is up and an appropriate service registers a need for the address.

I haven't dug through udev enough to try that yet, but it seems too
procedural to me for such a general sort of desire.  I like the
semi-declarative style of configuration that systemd generally enables.

I guess the other option would be to just make them standalone Exec...
statement units like I did before, but again that seems too procedural to
me.

Make sense?  Thoughts?

Thanks,
Brian

[1] https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/

[2] Sidenote: In the past I've used an old trick of setting the
preferred_lft to 0 for IPv6 addresses that I wanted to be available to
services, but not selected for outbound connections from the host.  This
was basically to help influence the usual source address selection criteria
which tries to avoid "deprecated" addresses.  I didn't see a way to specify
that in the systemd.network man page.  Is there one that I'm missing, or is
that another case for an Exec... statement?
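
For reference, the trick from [2] in iproute2 terms (the address and
interface name are just examples):

```
# Add an IPv6 address with a preferred lifetime of 0: services can still
# bind to it, but the RFC 6724 source-address selection rules treat it as
# "deprecated" and avoid it for outbound connections.
ip -6 addr add 2001:db8::1/64 dev eth0 preferred_lft 0 valid_lft forever
```

If there is no equivalent in systemd.network, this too would need an
Exec...-style hook.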
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] default service restart action?

2016-05-11 Thread Brian Kroth
Hi all, I'm in the midst of steeping myself in systemd docs as I prepare to
face lift a slew of services for Debian Jessie updates.

As I read through things I'm starting to think through a number of new ways
I could potentially reorganize some of our services, which is cool. With my
ideas though I think I'm finding a few gaps in either my understanding or
systemd capabilities, so I wanted to send a few questions to the list.
Hopefully this is the right place.

The first should hopefully be a bit of a softball:

With .service units one can specify OnFailure and other sorts of restart
behaviors, including thresholds and backoffs for when to stop retrying and
what to do then. Essentially a lightweight service problem escalation
procedure.

However, in reading systemd-system.conf, I don't see any way to specify
something like a DefaultOnFailure= behavior for all services, perhaps
applied after some simple restart attempts.  It seems it can only be done
on a per-unit basis, no?

Ideally, I'd like to be able to declare something very simple: if any
service fails to restart itself, or does so too often and enters a hard
failure state, then systemd should (attempt to) fire off an escalation
procedure unit, e.g. sending a passive check status to Nagios or sending
an email.  I accept that such procedures may depend upon network
connectivity which may or may not be available, so maybe there are some
circular dependency issues to work through in that scenario, but I presume
systemd already has facilities for handling that case, maybe via
OnFailureJobMode= settings.
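
Per unit, the escalation I have in mind looks roughly like this (the
notify script and unit names are made up):

```
# myservice.service (fragment)
[Unit]
OnFailure=failure-notify@%n.service

[Service]
Restart=on-failure
StartLimitInterval=300
StartLimitBurst=5

# failure-notify@.service
[Unit]
Description=Escalate failure of %i

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/notify-nagios %i
```

A DefaultOnFailure= in system.conf would let that OnFailure= stanza be
implied for every service instead of repeated per unit.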

Thoughts?

Thanks,
Brian


[systemd-devel] Using systemd --user to manage graphical sessions?

2016-05-11 Thread Martin Pitt
Hello all,

I've been experimenting with systemd --user as a possible replacement
for bringing up graphical user sessions. We currently bring up most of
that using upstart jobs (simple auto-restart on crashes, rate
limiting, per-job logging, fine-grained startup condition control).
There is one upstart process per session, so this is reasonably
straightforward.

But I still can't wrap my head around the mental model of how this is
supposed to work with the user D-Bus and systemd instance. This works
fine for some services which are not specific to a session, such as
gvfs or pulseaudio, but it's not at all appropriate for user-facing
applications or desktop components, such as gnome-terminal[1],
gnome-session, the window manager, indicators, etc. They are
necessarily per-session, and sometimes even need to be keyed off the
session type (gnome, LXDE, XFCE, etc.). I. e. they should be started
on the first graphical session (not on VT logins) and should be
stopped when stopping the graphical session.

There are some existing discussions [2][3] and while the latter has a
lot of patches and some good direction, it does not really touch the
subject of the mail at all -- how do I get something from a user unit
into the session-*.scope session of logind?

I. e. if I have some ~/.config/systemd/user/xeyes@.service with

   [Unit]
   Description=Xeyes on session %I
   [Service]
   ExecStart=/usr/bin/xeyes

I can start this with "systemctl --user start xeyes@${XDG_SESSION_ID}",
but this will hang off user@1000.service instead of session-*.scope
and thus it will not be stopped once the X session gets logged out.

One idea was to add PartOf=session-*.scope, but that's a system-level
unit and thus the session process does not know about that. There also
is no Scope= option for a service to make that run in a different
scope. We can certainly hook something into xinit.d/ to *start* a
"master stub" service which then launches the components, but there is
no way to tell it to stop with the logind session.
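
The xinit.d stub idea, concretely (the path and unit name are
hypothetical):

```
# /etc/X11/Xsession.d/99-user-session-stub (hypothetical, sourced shell)
# Start a per-session stub unit under systemd --user; session components
# could hang off it via PartOf=/Requires=, but nothing stops the stub
# when the logind session ends.
systemctl --user start session-stub@"$XDG_SESSION_ID".service
```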

Is this all in vain, and we need to make the *entire* world of free
software shift over to the new model of "user-wide services" before we
can use systemd --user for graphical stuff? (Colin's patch list is
already impressively long and it is by far not complete)

Or is someone actually using systemd --user for graphical sessions
already and found a trick that I missed?

Thanks,

Martin


[1] https://bugzilla.gnome.org/show_bug.cgi?id=744736
[2] 
https://mail.gnome.org/archives/desktop-devel-list/2014-January/msg00079.html
[3] https://lists.freedesktop.org/archives/systemd-devel/2013-August/012517.html

-- 
Martin Pitt| http://www.piware.de
Ubuntu Developer (www.ubuntu.com)  | Debian Developer  (www.debian.org)


Re: [systemd-devel] Verify the gpg signature of the given tag

2016-05-11 Thread poma
On 11.05.2016 10:36, Greg KH wrote:
> On Wed, May 11, 2016 at 09:57:05AM +0200, poma wrote:
>>
>> $ git tag --verify v229
>> object 95adafc428b5b4be0ddd4d43a7b96658390388bc
>> type commit
>> tag v229
>> tagger Lennart Poettering  1455208658 +0100
>>
>> systemd 229
>> gpg: Signature made Thu 11 Feb 2016 05:37:38 PM CET using RSA key ID 9C3485B0
>> gpg: Good signature from "Lennart Poettering "
>> gpg: aka "Lennart Poettering "
>> gpg: aka "Lennart Poettering (Red Hat) "
>> gpg: aka "Lennart Poettering (Sourceforge.net) 
>> "
>> gpg: WARNING: This key is not certified with a trusted signature!
>> gpg:  There is no indication that the signature belongs to the owner.
>> Primary key fingerprint: 63CD A1E5 D3FC 22B9 98D2  0DD6 327F 2695 1A01 5CC4
>>  Subkey fingerprint: 16B1 C4EE C0BC 021A C777  F681 B63B 2187 9C34 85B0
>>
>>
>> How to do this without "gpg: WARNING:" part?
> 
> That's on your end, not the repo's end.  I suggest reading up on gpg
> trust models if you wish for this to be able to be resolved on your
> system.
> 
> good luck!
> 
> greg k-h
> 


Hello Greg! :)

Marshall Field - "Give the lady what she wants"
https://en.wikipedia.org/wiki/The_customer_is_always_right




Re: [systemd-devel] Verify the gpg signature of the given tag

2016-05-11 Thread poma
On 11.05.2016 13:04, Mantas Mikulėnas wrote:
> On Wed, May 11, 2016 at 10:57 AM, poma  wrote:
> 
>>
>> $ git tag --verify v229
>> object 95adafc428b5b4be0ddd4d43a7b96658390388bc
>> type commit
>> tag v229
>> tagger Lennart Poettering  1455208658 +0100
>>
>> systemd 229
>> gpg: Signature made Thu 11 Feb 2016 05:37:38 PM CET using RSA key ID
>> 9C3485B0
>> gpg: Good signature from "Lennart Poettering "
>> gpg: aka "Lennart Poettering "
>> gpg: aka "Lennart Poettering (Red Hat) <
>> lpoet...@redhat.com>"
>> gpg: aka "Lennart Poettering (Sourceforge.net) <
>> poetter...@users.sourceforge.net>"
>> gpg: WARNING: This key is not certified with a trusted signature!
>> gpg:  There is no indication that the signature belongs to the
>> owner.
>> Primary key fingerprint: 63CD A1E5 D3FC 22B9 98D2  0DD6 327F 2695 1A01 5CC4
>>  Subkey fingerprint: 16B1 C4EE C0BC 021A C777  F681 B63B 2187 9C34 85B0
>>
>>
>> How to do this without "gpg: WARNING:" part?
>>
> 
> In the pgp trust model – assuming you've already verified the key and are
> sure that it really belongs to Lennart – you need to sign (certify) it
> either with a public or local signature:
> 
> $ gpg --lsign-key "63CD A1E5 D3FC 22B9 98D2  0DD6 327F 2695 1A01 5CC4"
> 
> In the tofu or tofu+pgp trust model, mark it as good in tofu.db:
> 
> $ gpg --tofu-policy good "63CD A1E5 D3FC 22B9 98D2  0DD6 327F 2695 1A01
> 5CC4"
> 
> (You can try out the new models using "gpg --update-trustdb --trust-model
> tofu+pgp".)
> 


https://www.gnupg.org/news.html
GnuPG 2.1.10 released (2015-12-04)
"A new version of the modern branch of GnuPG has been released. The main 
features of this release are support for TOFU ..."

Fortunately or not,
Fedora still runs on diesel, i.e. 1.4.20 - "the classic portable version"
https://koji.fedoraproject.org/koji/packageinfo?packageID=453

so no Tofu in Fedora's kitchen, Mortadella only ;)

However, reading up on
https://en.wikipedia.org/wiki/Trust_on_first_use
what stands out is something listed as both a "strength" and a "weakness":
"... must initially validate every interaction ..."

This sounds rather naive,
or shall we say NA²IVE - "Not At All Intelligent Verification Engagement"


However, thanks for a great reference.



[systemd-devel] Can coredumpctl work independent of journald?

2016-05-11 Thread P.R.Dinesh
I have set journald to be persistent and limited its size to 40MB.
A process dumped core, and the coredump file is present in
/var/log/systemd/coredump.

When I run coredumpctl this coredump is not shown.

Later I found that the coredump log entry is missing from the journal (the
journal wrapped around since it reached the size limit).

I think coredumpctl depends on the journal to display coredumps.  Can't it
search for the coredump files present in the coredump folder and list them
instead?

Also, can I launch coredumpctl gdb by providing a compressed core file?
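
In the meantime, a manual workaround might be something like this: since
systemd-coredump stores the cores xz-compressed on disk, one can decompress
a core and hand it to gdb directly, bypassing the journal lookup (this
helper and its name are hypothetical):

```shell
# Hypothetical helper: decompress a systemd-coredump .xz file and hand the
# result to gdb directly, without going through coredumpctl/the journal.
core_gdb() {
    binary=$1
    corexz=$2
    core=${corexz%.xz}                 # strip the .xz suffix
    xz --decompress --keep "$corexz"   # keep the compressed original around
    gdb "$binary" "$core"
}
```

Invoked as e.g. `core_gdb /path/to/binary
/var/lib/systemd/coredump/core.foo.xz`, though coredumpctl doing this
natively would obviously be nicer.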

Thank you
-- 
With Kind Regards,
Dinesh P Ramakrishnan


Re: [systemd-devel] Verify the gpg signature of the given tag

2016-05-11 Thread Mantas Mikulėnas
On Wed, May 11, 2016 at 10:57 AM, poma  wrote:

>
> $ git tag --verify v229
> object 95adafc428b5b4be0ddd4d43a7b96658390388bc
> type commit
> tag v229
> tagger Lennart Poettering  1455208658 +0100
>
> systemd 229
> gpg: Signature made Thu 11 Feb 2016 05:37:38 PM CET using RSA key ID
> 9C3485B0
> gpg: Good signature from "Lennart Poettering "
> gpg: aka "Lennart Poettering "
> gpg: aka "Lennart Poettering (Red Hat) <
> lpoet...@redhat.com>"
> gpg: aka "Lennart Poettering (Sourceforge.net) <
> poetter...@users.sourceforge.net>"
> gpg: WARNING: This key is not certified with a trusted signature!
> gpg:  There is no indication that the signature belongs to the
> owner.
> Primary key fingerprint: 63CD A1E5 D3FC 22B9 98D2  0DD6 327F 2695 1A01 5CC4
>  Subkey fingerprint: 16B1 C4EE C0BC 021A C777  F681 B63B 2187 9C34 85B0
>
>
> How to do this without "gpg: WARNING:" part?
>

In the pgp trust model – assuming you've already verified the key and are
sure that it really belongs to Lennart – you need to sign (certify) it
either with a public or local signature:

$ gpg --lsign-key "63CD A1E5 D3FC 22B9 98D2  0DD6 327F 2695 1A01 5CC4"

In the tofu or tofu+pgp trust model, mark it as good in tofu.db:

$ gpg --tofu-policy good "63CD A1E5 D3FC 22B9 98D2  0DD6 327F 2695 1A01
5CC4"

(You can try out the new models using "gpg --update-trustdb --trust-model
tofu+pgp".)

-- 
Mantas Mikulėnas 


[systemd-devel] resolved: use special nameservers for some domains

2016-05-11 Thread Felix Schwarz

I'd like to know whether resolved can redirect DNS queries for certain
domains to different name servers.

Rationale: I want to query a few DNS blacklists, but my provider's name
servers have been blocked because they send too many queries to the BL.
However, my intended usage qualifies for the "free" tier.

Assuming this is not possible at the moment, should I file a GitHub issue?
Or is such a feature considered "feature creep", so it won't be added to
resolved "ever"?

fs

PS: I know that I can use dnsmasq and other software.  However, resolved
is quite well suited to my limited use cases, so if possible I'd like to
keep resolved without installing extra stuff.
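
What I'm after might look like this in systemd.network terms; newer
releases than the one discussed here reportedly gained a "routing domain"
syntax along these lines (interface and addresses below are examples):

```
# Route queries for one domain to a dedicated server, per link.
[Match]
Name=eth0

[Network]
DNS=192.0.2.53
Domains=~dnsbl.example.org
```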


Re: [systemd-devel] Verify the gpg signature of the given tag

2016-05-11 Thread Greg KH
On Wed, May 11, 2016 at 09:57:05AM +0200, poma wrote:
> 
> $ git tag --verify v229
> object 95adafc428b5b4be0ddd4d43a7b96658390388bc
> type commit
> tag v229
> tagger Lennart Poettering  1455208658 +0100
> 
> systemd 229
> gpg: Signature made Thu 11 Feb 2016 05:37:38 PM CET using RSA key ID 9C3485B0
> gpg: Good signature from "Lennart Poettering "
> gpg: aka "Lennart Poettering "
> gpg: aka "Lennart Poettering (Red Hat) "
> gpg: aka "Lennart Poettering (Sourceforge.net) 
> "
> gpg: WARNING: This key is not certified with a trusted signature!
> gpg:  There is no indication that the signature belongs to the owner.
> Primary key fingerprint: 63CD A1E5 D3FC 22B9 98D2  0DD6 327F 2695 1A01 5CC4
>  Subkey fingerprint: 16B1 C4EE C0BC 021A C777  F681 B63B 2187 9C34 85B0
> 
> 
> How to do this without "gpg: WARNING:" part?

That's on your end, not the repo's end.  I suggest reading up on gpg
trust models if you wish for this to be able to be resolved on your
system.

good luck!

greg k-h


Re: [systemd-devel] Transaction contains conflicting jobs 'restart' and 'stop'

2016-05-11 Thread Michal Sekletar
On Thu, Mar 10, 2016 at 10:11 PM, Orion Poplawski  wrote:

> Can't the stop of iptables be dropped because the service is already stopped
> (or more likely not even present)?

Isn't this the case already? I simplified your scenario, i.e. A conflicts
with B, and C is part of both A and B. If I first start B and C and then
issue a stop for B, a follow-up restart of A doesn't produce an error. I
observed the problem only after trying to restart A while B and C were
running.
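
For clarity, the simplified reproducer units (names made up):

```
# a.service
[Unit]
Conflicts=b.service

# b.service: a plain service, no special settings

# c.service: part of both A and B
[Unit]
PartOf=a.service b.service
```

With B and C running, restarting A produces the conflicting 'restart' and
'stop' jobs; starting B and C, stopping B, then restarting A does not.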

Michal


[systemd-devel] Verify the gpg signature of the given tag

2016-05-11 Thread poma

$ git tag --verify v229
object 95adafc428b5b4be0ddd4d43a7b96658390388bc
type commit
tag v229
tagger Lennart Poettering  1455208658 +0100

systemd 229
gpg: Signature made Thu 11 Feb 2016 05:37:38 PM CET using RSA key ID 9C3485B0
gpg: Good signature from "Lennart Poettering "
gpg: aka "Lennart Poettering "
gpg: aka "Lennart Poettering (Red Hat) "
gpg: aka "Lennart Poettering (Sourceforge.net) 
"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs to the owner.
Primary key fingerprint: 63CD A1E5 D3FC 22B9 98D2  0DD6 327F 2695 1A01 5CC4
 Subkey fingerprint: 16B1 C4EE C0BC 021A C777  F681 B63B 2187 9C34 85B0


How to do this without "gpg: WARNING:" part?
