Re: [systemd-devel] "StandardOutput=console" don't work as expected

2015-12-30 Thread Filipe Brandenburger
Hi,

On Wed, Dec 30, 2015 at 2:52 AM, Michael Chapman  wrote:
> On Wed, 30 Dec 2015, Reindl Harald wrote:
>>> > i am asking for StandardOutput=console to get piped to the terminal
>>> > where systemctl was called - the rest is done by crond as in all the
>>> > years before
>>>
>>>  That isn't possible at the moment, and I doubt it will ever be
>>>  supported. The service is executed by systemd, not systemctl, and there
>>>  is no communication channel to return a stream of output from the
>>>  command back to systemctl.
>>
>> since "systemctl start" on the shell waits until the "oneshot" service is
>> finished, it can't be impossible that pid 1 gives back the task's output
>
> Well, it's "impossible" in the sense that systemd doesn't have any code to
> do that, no matter how much you might wish it did.

There's an item on the TODO list for `systemctl start -v ...`, which
would show the logs of the service in the systemctl output:
https://github.com/systemd/systemd/blob/c57d67f718077aadee4e2d0940fb87f513b98671/TODO#L132

This might get closer to what you're suggesting here (even without
StandardOutput=console), though it's still about logs and not really
raw output.

As the TODO item suggests, it's tricky to implement correctly, so I'm
not sure how long it will be until it's available...
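
In the meantime, the closest workaround I can think of (a rough sketch,
untested; "myjob.service" is just a placeholder name) is to fetch the
unit's recent journal entries right after systemctl start returns:

  # systemctl start myjob.service && journalctl -u myjob.service -n 50 --no-pager

That's still the journal rather than the raw output stream, of course.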

Cheers,
Filipe


Re: [systemd-devel] systemd 219 fails to create and/or use loop devices (or any other device)

2015-11-19 Thread Filipe Brandenburger
Hi,

On Thu, Nov 19, 2015 at 7:42 AM, von Thadden, Joachim, SEVEN
PRINCIPLES  wrote:
> using systemd 219-25 on Fedora 22 on a freshly created container I can not
> make any device. Usage of --capability=CAP_MKNOD makes no difference.
>
> Steps to reproduce:
> [root@nbl ~]# machinectl pull-raw --verify=no http://ftp.halifax.rwth-aachen.de/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.raw.xz
> [root@nbl ~]# systemd-nspawn --capability=CAP_MKNOD -M Fedora-Cloud-Base-20141203-21.x86_64
> [root@Fedora-Cloud-Base-20141203-21 ~]# strace -f mknod /dev/loop0 b 7 0
> mknod("/dev/loop0", S_IFBLK|0666, makedev(7, 0)) = -1 EPERM (Operation not permitted)

This is likely being caused by the "devices" cgroup controller, which
restricts which character and block devices can be used inside a
cgroup. nspawn now sets a device policy by default.

Calling systemd-nspawn with --property='DeviceAllow=/dev/loop0 rwm'
should allow you to mknod the device and later use it with losetup as well.
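
For instance, something like this (a sketch using the image from your
steps above; I haven't tested this exact invocation, and the image path
for losetup is a placeholder):

  # systemd-nspawn --capability=CAP_MKNOD \
        --property='DeviceAllow=/dev/loop0 rwm' \
        -M Fedora-Cloud-Base-20141203-21.x86_64
  [root@Fedora-Cloud-Base-20141203-21 ~]# mknod /dev/loop0 b 7 0
  [root@Fedora-Cloud-Base-20141203-21 ~]# losetup /dev/loop0 /path/to/some/image.raw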

HTH!
Filipe


Re: [systemd-devel] at replacement

2018-07-12 Thread Filipe Brandenburger
Hi,

On Thu, Jul 12, 2018 at 12:04 PM Matt Zagrabelny  wrote:
> I know systemd can replace cron. Do folks use it to replace "at", too?
>
> I know it *can* - with two files per "at" entry and then enabling and 
> starting the timer.
>
> Is there an easier way to replace "at" with systemd than creating two files 
> and enabling and starting the timer?

Take a look at systemd-run and, in particular, options such as
--on-active=, --on-calendar= and --timer-property=, which let you set
up a timer on demand for a single one-off command.

https://www.freedesktop.org/software/systemd/man/systemd-run.html#--on-active=
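
For instance, something like this (a sketch; the script path is just a
placeholder) should run the command once, two hours from now:

  # systemd-run --on-active=2h --unit=my-at-job /usr/local/bin/myjob.sh

That creates a transient my-at-job.timer and matching service on the
fly, with no unit files to write and nothing to enable.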

I hope this helps!

Cheers,
Filipe




Re: [systemd-devel] at replacement

2018-07-12 Thread Filipe Brandenburger
On Thu, Jul 12, 2018 at 2:06 PM Matt Zagrabelny  wrote:
> On Thu, Jul 12, 2018 at 3:07 PM, Filipe Brandenburger  
> wrote:
>> Take a look at systemd-run and, in particular, options such as
>> --on-active=, --on-calendar= and --timer-property=, which allow you to
>> set a .timer option on demand for a single one-off command.
>
> What sort of timer properties are folks setting or adjusting via the 
> --timer-property? That is, when would I use such an option?

Technically, any options accepted in systemd.timer are possible:
https://www.freedesktop.org/software/systemd/man/systemd.timer.html#Options

For instance, RandomizedDelaySec= might be an interesting one to use.

AccuracySec= is also worth looking into if you care about how
precisely the timer fires at the requested time.
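
For example (a sketch; the script path is a placeholder, and I believe
--timer-property= can be passed more than once):

  # systemd-run --on-calendar='*-*-* 02:00:00' \
        --timer-property=RandomizedDelaySec=15min \
        --timer-property=AccuracySec=1s \
        /usr/local/bin/nightly-job.sh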

Cheers,
Filipe




Re: [systemd-devel] When is a unit "loaded"?

2018-07-11 Thread Filipe Brandenburger
Hey Daniel!

On Wed, Jul 11, 2018 at 5:16 PM Daniel Wang  wrote:
> I have a unit, say foo.service, on my system that's in 
> /usr/lib/systemd/system, but disabled by preset.

By the way, presets don't really matter here. The unit is
disabled, period.

> On system boot, it doesn't show as "loaded" per `systemctl --all | grep foo`.

Because there's no reference to it in any unit systemd sees, systemd
doesn't need to load it.

At bootup, systemd will simply start with default.target and then
recursively load the units it needs.

See this link for more details:
https://www.freedesktop.org/software/systemd/man/bootup.html#System%20Manager%20Bootup

`systemctl --all` will only show the units in memory, so foo.service
won't be listed since it's not loaded.

> So if I override it with a file with the same name but under 
> /etc/systemd/system, `systemctl cat foo.service` will show the one under /etc 
> without the need for a `systemctl daemon-reload`.

Yes, because it's not loaded.

> If I create another service unit, bar.service, which has a After= dependency 
> on foo.service, and start bar, foo.service will show as loaded. And then if I 
> try to override it, `systemctl cat foo.service` will print a warning saying a 
> daemon-reload is needed.

Yes. If systemd sees a reference to that unit (even just an After=),
it will need to load it: it records the dependencies between units in
its internal in-memory structures, and it needs the complete unit
definition to do that.

> Nothing seems incorrect, but I have a few questions:
> - Which units are loaded on-boot and which are not?

Only default.target and, recursively, any units referred to by the loaded units.

> - Is the After= dependency alone enough to have systemd load a unit? Are 
> there any other dependency directives that will result in the same effect?

Yes, I believe any reference to a unit will trigger it to be loaded.
As I mentioned, systemd wants to keep complete state in memory, and
loading the unit is how it gets that.
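
You can watch this happen from the shell (a sketch, untested, assuming
bar.service has After=foo.service as you described):

  # systemctl list-units --all foo.service   # foo.service not listed: not loaded
  # systemctl start bar.service
  # systemctl list-units --all foo.service   # now foo.service shows up as loaded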

One exception (I haven't checked, but I'd expect it) is references in
the [Install] section (such as Also=), since those are only used by
`systemctl enable` and aren't really loaded into memory as far as I
can tell (though I might be wrong here, and these might trigger the
unit to load as well.)

I hope this helps!

Cheers,
Filipe




Re: [systemd-devel] Apparmor in containers

2018-04-12 Thread Filipe Brandenburger
Hi,

Actually, it seems AppArmor has support for containers and can apply a
specific profile inside a container only.

Docker does support it:
https://docs.docker.com/engine/security/apparmor/

I agree it shouldn't be too hard to hook this into nspawn... I don't really
use AppArmor or know it well, though, so I'm not best placed to test it...

Cheers,
Filipe


On Thu, Apr 12, 2018 at 2:48 AM, Lennart Poettering 
wrote:

> On Di, 10.04.18 18:16, Matthias Pfau (matth...@tutanota.de) wrote:
>
> > Hi there,
> > we use apparmor on our production systems and want to test the setup in
> > our test environment based on systemd-nspawn.
> >
> > Therefore, I installed apparmor on the host (debian stretch) and
> > updated GRUB_CMDLINE_LINUX in /etc/default/grub to enable apparmor. I can
> > use apparmor on the host system. However, within my containers, apparmor
> > can not be started.
> >
> > `journalctl -kf` does not print anything when invoking `systemctl start
> > apparmor` on the container and `systemctl status apparmor` just returns
> > "ConditionSecurity=apparmor was not met".
> >
> > Is it possible to run apparmor in a container?
>
> Uh, I have no experience with AA but to my knowledge none of the
> kernel MACs (AA, SMACK, SELinux) are virtualized for container
> environments, i.e. there can only be one system policy, and containers
> tend to be managed under a single context only as a whole.
>
> But I'd be happy to be proved wrong, as I never touched AA, so I don't
> really know.
>
> If AA should indeed be virtualizable for containers then making nspawn
> support it is likely very easy, but I have my doubts it is...
>
> Please contact the AA community, and ask them whether AA containers
> can load their own policies. If yes, then please file an RFE issue
> against systemd, asking us to add support for this, with links to the
> APIs. best chance to get this implemented quickly would be to file a
> patch too, we'd be happy to review that.
>
> Lennart
>
> --
> Lennart Poettering, Red Hat




Re: [systemd-devel] how to login into a container booting with a minimal 'debian distro unstable' via nspawn

2018-03-25 Thread Filipe Brandenburger
Hi Florian,

On Sun, Mar 25, 2018 at 9:36 AM, Florian Held  wrote:
> how is it possible to log in to a container booting a minimal unstable
> Debian distro via nspawn? After running:
>
> # debootstrap --arch=amd64 unstable ~/debian-tree/
> # systemd-nspawn -bD ~/debian-tree/
>
> it prompts for a username followed by a password. The combination
>
> "root"
> ""
>
> without quotes doesn't work. How can I log in?

You can enter the container and just run a root shell on it with this command:

  # systemd-nspawn -D ~/debian-tree/ /bin/sh

(That's the equivalent of single-user mode or a rescue shell on a machine.)

From that shell, you can change the root password:

  # passwd root
  

At that point, boot the container again (with "-b") and you should be
able to log in.

I hope that helps!

Cheers,
Filipe




[systemd-devel] Filtering logs of a single execution of a (transient) service

2018-03-23 Thread Filipe Brandenburger
Hi!

So I'm testing a program repeatedly and using `systemd-run` to start a
service with it, passing it a specific unit name.

When the test finishes and I bring down the service, I want to be able to
collect the journald logs for that execution of the test alone.

Right now what I'm doing is naming the service differently every time,
including a random number, so I can collect the logs for that service alone
at the end. Such as:

  # myservice_name=myservice-${RANDOM}.service
  # systemd-run --unit="${myservice_name}" --remain-after-exit \
        mybin --myarg1 --myarg2 ...

And then collecting the logs using:

  # journalctl -u "${myservice_name}"

One disadvantage of this approach is that the units pile up as I keep
running tests...

  # systemctl status myservice-*.service

Another is that it's hard to find, from an unrelated session, which one
is the latest (this works only while the service is active):

  # systemctl list-units --state running myservice-*.service

I would like to run these tests all under a single unit name,
myservice.service. I'm fine with not having more than one of them at the
same time (in fact, that's a feature.)

But I wonder how I can get the logs for a single execution...

The best I could come up with was using a cursor to get the logs for the
last execution:

  # journalctl -u myservice MESSAGE_ID=39f53479d3a045ac8e11786248231fbf --show-cursor
  -- Logs begin at Thu 2018-03-22 22:57:32 UTC, end at Fri 2018-03-23 19:17:01 UTC. --
  Mar 23 16:40:00 mymachine systemd[1]: Started mybin --myarg1 --myarg2
  Mar 23 16:45:00 mymachine systemd[1]: Started mybin --myarg1 --myarg2b
  Mar 23 16:50:00 mymachine systemd[1]: Started mybin --myarg1 --myarg2 --myarg3
  -- cursor: s=abcde12345...;i=123f45;b=12345abcd...;m=f123f123;t=123456...;x=...

And then use the cursor to query journald and get the logs from the last
execution:

  # journalctl -u myservice --after-cursor 's=abcde12345...;i=123f45;...'

That works to query the last execution of the service, but not a random
one...
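
The best alternative I've come up with (a sketch, untested) is to
capture the cursor just before starting each run, then query with
--after-cursor once the run is done:

  # cursor=$(journalctl -u myservice.service -n1 -q --show-cursor | sed -n 's/^-- cursor: //p')
  # systemd-run --unit=myservice.service --remain-after-exit mybin --myarg1 ...
  ... (test runs to completion) ...
  # journalctl -u myservice.service --after-cursor="$cursor"

But that requires saving the cursor out-of-band for every run, which is
what I was hoping to avoid.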

I guess what I'm looking for is a way to get systemd to inject a journal
field into every message logged by my unit. Something like an environment
variable, perhaps? Or some other field I can pass to systemd-run using -p.
Or something that systemd itself generates that's unique to each
execution of the service and that I can query somehow (perhaps with
`systemctl show` while the service is up). Is there any such thing?

Any other suggestions of how I should accomplish something like this?

Thanks!
Filipe




[systemd-devel] Cleanest way to halt a VM after a service has stopped

2018-02-26 Thread Filipe Brandenburger
Hi,

I found it's possible to halt a VM after a service has stopped by
using something like this:

ExecStopPost=/sbin/halt -p

Is this the cleanest approach? Or would anyone have a better
recommendation (perhaps using systemd-halt.service or similar)?
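
For context, the unit I'm using looks roughly like this (a sketch; the
task script name is just a placeholder):

  [Unit]
  Description=Run my one-off task, then power off the VM

  [Service]
  Type=oneshot
  ExecStart=/usr/local/bin/my-task.sh
  ExecStopPost=/sbin/halt -p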

Thanks!
Filipe




Re: [systemd-devel] At wits end... need to execute a script prior to anything getting killed/changed on reboot/shutdown

2019-01-16 Thread Filipe Brandenburger
If you want to run it early in the shutdown process, then keep
DefaultDependencies=yes (the default), in which case its stop action
will run before the base system units start to get stopped.

If you need some other resources to be up, for instance network, then add
After=network.target, etc.

Remember that when shutting down, the dependencies are stopped in the
opposite order as they're started up, so if you need your script to run
*before* something else is stopped, then you need an After= dependency.

You shouldn't add any ordering against reboot.target or shutdown.target.
Just enable your service (so that it's up during normal system usage);
when the system goes down it will be stopped, and depending on its
After= dependencies it will block those other services from being
stopped until your script is done.

In recent systemd versions (I think starting from v238?) you can omit
the ExecStart=/bin/true line; a unit without that line became valid in
one of those versions... Though keeping it around is fine and will work
with older versions too.
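
Putting it all together, your unit might look like this (a sketch based
on your original, untested; keep the After=network.target only if your
script actually needs the network up):

  [Unit]
  Description=my-service save status
  After=network.target

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/bin/true
  ExecStop=/usr/local/bin/my-service.sh save
  StandardOutput=journal

  [Install]
  WantedBy=multi-user.target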

I hope this helps!

Cheers,
Filipe


On Wed, Jan 16, 2019 at 10:47 AM Christopher Cox 
wrote:

> I need to be able to execute a script before anything gets shut down.
> That is, when somebody does a "reboot", "shutdown" or "poweroff", I need
> this script to run first, and for it to finish before everything gets
> whacked.
>
> I know the following isn't "right"... I've tried so many different things.
> Google hasn't helped, only giving me many "right" solutions that didn't
> work. In this current edition, basically I get a partial capture of the
> processes that are running (that is, some were killed directly or
> indirectly)... I need them all. My script needs to see the state of
> operation before reboot/shutdown/poweroff do anything else. My "save"
> saves some information about running processes (some not necessarily
> under systemd control).
>
> [Unit]
> Description=my-service save status
> DefaultDependencies=no
> Before=reboot.target shutdown.target
> Conflicts=reboot.target shutdown.target
>
> [Service]
> Type=oneshot
> RemainAfterExit=yes
> ExecStart=/bin/true
> ExecStop=/usr/local/bin/my-service.sh save
> StandardOutput=journal
>
> [Install]
> WantedBy=multi-user.target


Re: [systemd-devel] Bugfix release(s)

2019-01-15 Thread Filipe Brandenburger
So I think all the bits already exist somewhere, and perhaps a small
change in naming would go a long way to making these releases smoother.

If when we cut v240 from the master branch, we had called it v240-rc1
instead, perhaps it was clear that it could take some more testing
before it was made official.

Furthermore, fixes for the breakage were backported into the
v240-stable tree in the systemd-stable repository, so perhaps calling
the top of that tree v240 (or v240.0) at some point would have been
helpful.

Having been pushing to systemd-stable this week (fixing one of the
CVEs in previous versions), I have to say that there's some friction
in contributing to that tree: I needed a separate fork (GitHub doesn't
want to let me open PRs from my main fork), sometimes it doesn't build
with the latest toolchain and libs (I'm working on fixing that too),
etc. Perhaps having some more of the distro maintainers actively
helping on those branches would be best, and I think bringing those
branches into the main repo would help in those regards.

Why don't we try something slightly different for the v241 timeline?

At the time of the release, we actually create a new *branch* and call
it release-v241. We also tag v241-rc1 at the start of that tree and
announce the pre-release. (Note that this branch replaces the need for
v241-stable in systemd-stable, so it's not a branch that wouldn't have
existed otherwise; it just lives in a different place now.)

As distros start to do heavier and broader testing of that
pre-release, we start fixing bugs on trunk, backporting them to
release-v241, and after a week or so we release v241-rc2. Rinse and
repeat.

After things have been stable for a couple of weeks, we can finally
just bump the version number, tag v241.0 and announce the final
release. Hopefully everything will go very smoothly. But, if it
doesn't, we can still iterate on that and release v241.1. We can also
release v241.2 to address the CVEs that come up a month later (just
kidding, of course there won't be any!)
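
In git terms, the workflow I have in mind would be something like this
(a sketch; branch and tag names as proposed above, and the cherry-picked
commit is a placeholder):

  # at release time, from master:
  git branch release-v241
  git tag v241-rc1 release-v241

  # backporting a fix from master:
  git checkout release-v241
  git cherry-pick <commit-from-master>
  git tag v241-rc2

  # final release, once things are quiet:
  git tag v241.0 release-v241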

From a developer's point of view, this really doesn't look too painful
compared with the current process.

And distros will have a useful "almost ready" point where they have
time to do one-time testing and start pushing to some users to collect
feedback before the final "official" or "stable" release.

What do you all think?

Cheers,
Filipe




[systemd-devel] sd-boot on Fedora 30?

2019-08-23 Thread Filipe Brandenburger
Hi,

I've been trying to get sd-boot to work on Fedora 30, made some progress
but not fully there yet...

First I found that the GPT partition type of /boot was incorrect, so
bootctl was trying to use /boot/efi instead. OK, with that fixed, I
now get a list of kernels.

But whenever I boot, I only get the "Reboot Into Firmware Interface" menu
entry and nothing else...

I imagine this might be related to the Grub entries:

$ sudo bootctl list
/boot/loader/entries/4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64.conf:7: Unknown line "id", ignoring.
/boot/loader/entries/4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64.conf:8: Unknown line "grub_users", ignoring.
/boot/loader/entries/4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64.conf:9: Unknown line "grub_arg", ignoring.
/boot/loader/entries/4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64.conf:10: Unknown line "grub_class", ignoring.
        title: Fedora (5.2.8-200.fc30.x86_64) 30 (Workstation Edition) (default)
           id: 4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64
       source: /boot/loader/entries/4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64.conf
      version: 5.2.8-200.fc30.x86_64
        linux: /vmlinuz-5.2.8-200.fc30.x86_64
       initrd: /initramfs-5.2.8-200.fc30.x86_64.img
      options: $kernelopts

I tried to at least fix the $kernelopts one, using grubby --args="..."
to add a dummy argument just to deduplicate it from the grubenv
contents, but I still couldn't boot from there...
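
For reference, my understanding is that sd-boot wants a plain Boot
Loader Specification entry with the options spelled out literally,
something like this (a sketch; the kernel arguments are placeholders
for whatever the machine actually needs):

  title   Fedora (5.2.8-200.fc30.x86_64) 30 (Workstation Edition)
  version 5.2.8-200.fc30.x86_64
  linux   /vmlinuz-5.2.8-200.fc30.x86_64
  initrd  /initramfs-5.2.8-200.fc30.x86_64.img
  options root=/dev/mapper/fedora-root ro rhgb quiet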

Even if I fix that, it looks like newly installed kernels would trigger
/usr/lib/kernel/install.d/20-grub.install and probably mess up that
setup (do I have to mask it or remove it completely?)
Fedora's BLS document unfortunately doesn't mention sd-boot at all :-(
https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault

Anyway, if anyone has hints about what I could try next, I'd be quite
interested to hear them. (Perhaps adding some docs to the Fedora wiki
would be pretty helpful too!) I thought I'd ask here first... If I
don't hear back, I might try the Fedora lists instead.

Cheers!
Filipe

Re: [systemd-devel] Delegate= on slice before v237

2019-02-13 Thread Filipe Brandenburger via systemd-devel
Hey Lennart,

Thanks for the clarification.

On Tue, Feb 12, 2019 at 2:17 AM Lennart Poettering 
wrote:

> On Mo, 11.02.19 16:39, Filipe Brandenburger (filbran...@google.com) wrote:
> > Before systemd v237 (when Delegate= was no longer allowed on slice
> > units)... Did setting Delegate=yes on a slice have *any* effect at all?
> >
> > Or did it just do nothing (and a slice with Delegate=no or no setting
> > behave just the same)?
> >
> > Reason I ask is: I want to scrap this code
> > <https://github.com/opencontainers/runc/blob/v1.0.0-rc6/libcontainer/cgroups/systemd/apply_systemd.go#L195>
> > in libcontainer that tries to detect whether Delegate= is accepted in a
> > slice unit. (I'll just default it to false, never try it.)
> >
> > I'd like to be able to say that Delegate=yes never really did anything at
> > all on slice units... So I'm trying to confirm that is really the case
> > before stating it.
>
> So, it wasn't supposed to do anything, and what it does differs on
> cgroupsv2 and cgroupsv1.


libcontainer is pretty much cgroupv1 only, so that's what I'm concerned
about.


> The fact it wasn't refused outright was an
> accident, and because it was one I am not entirely sure what the
> precise effect of allowing it was. However, I am pretty sure it at
> least had two effects:
>
> 1. it would turn on all controllers for the cgroup
>

I don't *think* this is why libcontainer was trying to enable it,
since a few lines down it explicitly enables all the controllers by
setting MemoryAccounting=, CPUAccounting= and BlockIOAccounting=
during transient unit creation:


> 2. it would stop systemd to ever migrating foreign processes below
>that slice, which is primarily relevant only when changing cgroup
>related props on the slice dynamically I guess.
>

I'm not sure I follow... Do you mean that if libcontainer wrote to
memory.limit_in_bytes (or one of the other attributes of the memory
controller, or of another controller managed by systemd, such as cpu),
then systemd would not end up overwriting it when it performs some
other operation on the cgroup?

I'm not completely sure I understand what "migrate foreign processes"
means, given that slices don't really hold any PIDs directly... Do you
mean into scope and service units below that slice?

In any case, for now I'll probably leave that alone... Though as I
revamp libcontainer support for the unified hierarchy, I'll try to skip
that check in that case, which would make this a legacy-only setting,
so it's not that important to fully get rid of it for a while...

Cheers!
Filipe
