Re: [systemd-devel] Mounting a new device to a mount point with an old (auto-generated) but inactive mount unit triggers an immediate unmount

2021-07-08 Thread Christian Rohmann

Hey Silvio,

On 07/07/2021 20:04, Silvio Knizek wrote:

after touching /etc/fstab you're supposed to run `systemctl daemon-
reload` to re-trigger the generators. This is in fact a feature to
announce changes in configuration files to systemd. See
man:systemd.generator for more information.
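For reference, that workflow is roughly the following (device and mount point
are illustrative):

# edit /etc/fstab, e.g. add a line such as:
#   /dev/sdb1  /srv/data  ext4  defaults  0  2
$ sudo systemctl daemon-reload   # re-run the generators so systemd sees the change
$ sudo mount /srv/data           # mounts via an up-to-date generated unit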


Thanks for the quick reply and the kind hint to the (right) documentation.


I am then just wondering why the issue referred to
(https://github.com/systemd/systemd/issues/1741) is still open.
Are there still plans to make systemd properly recognize that
the inactive unit (pointing to a mount point that is used in a new and
active unit) is actually superseded, and that unmounting it makes no
sense, as that hits the new, working, active mount?



In any case, I'd then suggest somehow giving a warning to the user, as
happens with changes to systemd units:
  "Warning: myfancyservice.service changed on disk. Run 'systemctl
daemon-reload' to reload units."


Otherwise the "reaction" of an unmount to a just successfully happend 
mount is still quite surprising to a user.
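As an aside, systemd does expose whether a generated unit has gone stale.
A sketch, where srv-data.mount stands in for whatever unit the fstab entry
generates:

$ systemctl show srv-data.mount -p NeedDaemonReload
NeedDaemonReload=yes   # the unit no longer matches its source /etc/fstab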




Regards


Christian


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Mounting a new device to a mount point with an old (auto-generated) but inactive mount unit triggers an immediate unmount

2021-07-08 Thread Mantas Mikulėnas
On Thu, Jul 8, 2021 at 10:12 AM Christian Rohmann <christian.rohm...@frittentheke.de> wrote:

> Hey Silvio,
> On 07/07/2021 20:04, Silvio Knizek wrote:
>
> after touching /etc/fstab you're supposed to run `systemctl daemon-
> reload` to re-trigger the generators. This is in fact a feature to
> announce changes in configuration files to systemd. See
> man:systemd.generator for more information.
>
> Thanks for the quick reply and the kind hint to the (right) documentation.
>
>
> I am then just wondering why the issue referred to (
> https://github.com/systemd/systemd/issues/1741) is still open.
> Are there still plans to make systemd properly recognize that the
> inactive unit (pointing to a mount point that is used in a new and active
> unit) is actually superseded, and that unmounting it makes no sense, as
> that hits the new, working, active mount?
>

I *think* this was supposed to improve with v249:

https://github.com/systemd/systemd/pull/19322
https://github.com/systemd/systemd/issues/19983

> In any case, I'd then suggest somehow giving a warning to the user, as
> happens with changes to systemd units:
>   "Warning: myfancyservice.service changed on disk. Run 'systemctl
> daemon-reload' to reload units."
>

systemd can't make non-systemd tools (such as `mount`) display warnings.
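A systemd-native way to sidestep the stale-unit problem is to mount through
systemd's own tooling, which creates a transient unit rather than relying on
a generated one; a sketch, with an illustrative device and path:

$ systemd-mount /dev/sdb1 /mnt/data   # transient .mount unit, no fstab generator involved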

-- 
Mantas Mikulėnas
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Antw: [EXT] Re: Mounting a new device to a mount point with an old (auto-generated) but inactive mount unit triggers an immediate unmount

2021-07-08 Thread Ulrich Windl
>>> Christian Rohmann wrote on 08.07.2021 at 09:12 in message
<23423e6a-4c69-1683-8758-40f98d444...@frittentheke.de>:
...
> Otherwise, the "reaction" of an unmount right after a successful
> mount remains quite surprising to a user.

When trying to fix a broken system, my first frustrating experience with
systemd was that after mounting /boot on root (via a manual mount command),
systemd immediately unmounted it. Repeatedly.
I mean, if a colleague had done that, what would I have done to him? ;-)

So instead of fixing the problem, I had to find a way to "fix" systemd...
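For anyone hitting the same loop, one common escape is to refresh systemd's
view before retrying; a sketch, with an illustrative device name:

$ mount /dev/sda1 /boot     # systemd unmounts it again moments later
$ systemctl daemon-reload   # refresh the stale generated boot.mount unit
$ mount /dev/sda1 /boot     # this time the mount should stick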

Regards,
Ulrich


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Mounting a new device to a mount point with an old (auto-generated) but inactive mount unit triggers an immediate unmount

2021-07-08 Thread Christian Rohmann

Hey Mantas,

On 08/07/2021 10:39, Mantas Mikulėnas wrote:


I am then just wondering why the issue referred to
(https://github.com/systemd/systemd/issues/1741) is still open.
Are there still plans to make systemd properly recognize
that the inactive unit (pointing to a mount point that is used in
a new and active unit) is actually superseded, and that unmounting
it makes no sense, as that hits the new, working, active mount?


I *think* this was supposed to improve with v249:

https://github.com/systemd/systemd/pull/19322
https://github.com/systemd/systemd/issues/19983


Nice, thanks a bunch for the pointers!




In any case, I'd then suggest somehow giving a warning to the
user, as happens with changes to systemd units:
  "Warning: myfancyservice.service changed on disk. Run 'systemctl
daemon-reload' to reload units."


systemd can't make non-systemd tools (such as `mount`) display warnings.


Yeah, that's certainly true, and it's much better to not need any
warnings anyway.




Regards


Christian

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Restricting swap usage for a process managed via systemd

2021-07-08 Thread Debraj Manna
Thanks Mantas for replying.

I have made the following changes:

Added systemd.unified_cgroup_hierarchy=1 to /etc/default/grub, ran sudo
update-grub, and rebooted the node.

GRUB_CMDLINE_LINUX="audit=1 rootdelay=180 nousb net.ifnames=0 biosdevname=0
fsck.mode=force fsck.repair=yes ipv6.disable=1
systemd.unified_cgroup_hierarchy=1"

Even after making these changes, MemorySwapMax is not taking effect.

support@vrni-platform:~$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.15.0-143-generic root=/dev/mapper/vg-root ro audit=1 rootdelay=180 nousb net.ifnames=0 biosdevname=0 fsck.mode=force fsck.repair=yes ipv6.disable=1 systemd.unified_cgroup_hierarchy=1 audit=1

support@vrni-platform:~$ findmnt
TARGET                       SOURCE                       FSTYPE     OPTIONS
/                            /dev/mapper/vg-root          ext4       rw,relatime,errors=panic,data=ordered
├─/sys                       sysfs                        sysfs      rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security     securityfs                   securityfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup           cgroup                       cgroup2    rw,nosuid,nodev,noexec,relatime,nsdelegate
│ ├─/sys/fs/pstore           pstore                       pstore     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/config       configfs                     configfs   rw,relatime
│ ├─/sys/fs/fuse/connections fusectl                      fusectl    rw,relatime
│ └─/sys/kernel/debug        debugfs                      debugfs    rw,relatime
├─/proc                      proc                         proc       rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc systemd-1                    autofs     rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=1935
├─/dev                       udev                         devtmpfs   rw,nosuid,relatime,size=8182012k,nr_inodes=2045503,mode=755
│ ├─/dev/pts                 devpts                       devpts     rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm                 tmpfs                        tmpfs      rw,nosuid,nodev,noexec
│ ├─/dev/hugepages           hugetlbfs                    hugetlbfs  rw,relatime,pagesize=2M
│ └─/dev/mqueue              mqueue                       mqueue     rw,relatime
├─/run                       tmpfs                        tmpfs      rw,nosuid,noexec,relatime,size=1642560k,mode=755
│ ├─/run/lock                tmpfs                        tmpfs      rw,nosuid,nodev,noexec,relatime,size=5120k
│ ├─/run/rpc_pipefs          sunrpc                       rpc_pipefs rw,relatime
│ ├─/run/shm                 none                         tmpfs      rw,nosuid,nodev,noexec,relatime
│ ├─/run/user/116            tmpfs                        tmpfs      rw,nosuid,nodev,relatime,size=1642556k,mode=700,uid=116,gid=122
│ ├─/run/user/998            tmpfs                        tmpfs      rw,nosuid,nodev,relatime,size=1642556k,mode=700,uid=998,gid=998
│ ├─/run/user/118            tmpfs                        tmpfs      rw,nosuid,nodev,relatime,size=1642556k,mode=700,uid=118,gid=124
│ ├─/run/user/1001           tmpfs                        tmpfs      rw,nosuid,nodev,relatime,size=1642556k,mode=700,uid=1001,gid=1001
│ └─/run/user/121            tmpfs                        tmpfs      rw,nosuid,nodev,relatime,size=1642556k,mode=700,uid=121,gid=127
├─/boot                      /dev/sda1                    ext4       rw,relatime,data=ordered
├─/tmp                       /dev/mapper/vg-tmp           ext4       rw,nosuid,nodev,relatime,data=ordered
├─/home                      /dev/mapper/vg-home          ext4       rw,nodev,relatime,data=ordered
└─/var                       /dev/mapper/vg-var           ext4       rw,relatime,errors=panic,data=ordered
  ├─/var/log                 /dev/mapper/vg-var+log       ext4       rw,relatime,data=ordered
  │ └─/var/log/audit         /dev/mapper/vg-var+log+audit ext4       rw,relatime,data=ordered
  └─/var/tmp                 /dev/mapper/vg-tmp           ext4       rw,nosuid,nodev,relatime,data=ordered
support@vrni-platform:~$

Any other suggestions?
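A few checks that may help narrow this down (a sketch; myservice.service is
an illustrative unit name):

$ stat -fc %T /sys/fs/cgroup   # "cgroup2fs" confirms the unified hierarchy is active
$ systemctl show myservice.service -p MemorySwapMax
$ cat /sys/fs/cgroup/system.slice/myservice.service/memory.swap.max
# if memory.swap.max does not exist at all, the kernel is not accounting swap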


On Mon, Jul 5, 2021 at 1:46 AM Mantas Mikulėnas  wrote:

> Looks like your Ubuntu version is using the "hybrid" cgroup mode by
> default. Cgroup v2 is indeed *enabled* in your kernel, but not necessarily
> *in use* – in the hybrid mode, systemd still mounts all resource
> controllers (cpu, memory, etc.) in v1 mode and only sets up its own process
> tracking in the v2 tree. See `findmnt`.
>
> You could boot with the systemd.unified_cgroup_hierarchy=1 kernel option
> to switch everything to cgroups v2, but if you're using container software
> (docker, podman) make sure those are cgroups v2-compatible.
>
> On Sun, Jul 4, 2021 at 10:36 AM Debraj Manna wrote:
>
>> Hi
>>
>> I am trying to restrict the swap usage of a process using MemorySwapMax
>> as mentioned in the documentation, with Ubuntu 18.04.
>>
>> Environment
>> 
>>
>> ubuntu@vrni-platform:/usr/lib/systemd/system$ uname -a
>> 

[systemd-devel] Expired Message in Log

2021-07-08 Thread Andreas Krueger
Hi Folks,

For my customer I have to verify the logger in their system, which is journald
(241). For that I have written some tests that check whether expired
messages are removed from the log. As you can see, the configuration is
set up so that messages older than 12 months shall be removed.

[Journal]
Storage=persistent
Compress=yes
Seal=yes
SplitMode=none
#SyncIntervalSec=5m
#RateLimitIntervalSec=30s
#RateLimitBurst=1
SystemMaxUse=500M
#SystemKeepFree=
SystemMaxFileSize=50M
SystemMaxFiles=13
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#RuntimeMaxFiles=100
MaxRetentionSec=12month
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg
#LineMax=48K
#ReadKMsg=yes

The test performs 5 steps:

  1.  With the help of the date command, the system clock is reset by 367 days,
which is definitely more than 12 months, even in a leap year.
  2.  Send a message to the logger.
  3.  Reset the system clock to the current time, which was stored before step 1.
  4.  Send another message to the logger.
  5.  Synchronize the logger, to be sure all data are in the file system.
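A rough shell sketch of these steps (message texts are illustrative; the
clock is assumed to move backwards, as the 2020 timestamp in the log below
suggests):

now=$(date +%s)                       # remember the current time for step 3
sudo date -s "@$((now - 367*86400))"  # step 1: set the clock back 367 days
logger "Expired Message"              # step 2: send a message to the logger
sudo date -s "@$now"                  # step 3: restore the clock
logger "Test Message No.0"            # step 4: send another message
sudo journalctl --sync                # step 5: flush journald to the file system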

What I expect to see is only the message that was sent after resetting the 
system clock (green text). In fact, I can see the expired message (red text) as 
well (today is 2021-07-08). Is this correct?

Besides this, I have observed that modifying the time with the date command
will trigger a rotation. Is this observation correct?

Greetings from Berlin,
Andreas

developer@debianVM ../loggerlib/build/integration_test/test (git)-[ak/journal] % journalctl -n 400 --no-pager -o short-iso
-- Logs begin at Mon 2020-07-06 15:23:50 CEST, end at Thu 2021-07-08 16:23:48 CEST. --
2021-07-08T15:23:50+0200 debianVM systemd-journald[286]: System journal (/var/log/journal/c1223bdbe6484166aa9af858c392a7e0) is 40.0M, max 500.0M, 460.0M free.
2020-07-06T15:23:50+0200 debianVM LoggerLib_sw_integration_test[4941]: Expired Message
2021-07-08T15:23:50+0200 debianVM systemd[1]: Starting Rotate log files...
2021-07-08T15:23:50+0200 debianVM anacron[4956]: Anacron 2.3 started on 2021-07-08
2021-07-08T15:23:50+0200 debianVM systemd[1]: Starting Daily apt download activities...
2021-07-08T15:23:50+0200 debianVM anacron[4956]: Normal exit (0 jobs run)
2021-07-08T15:23:50+0200 debianVM LoggerLib_sw_integration_test[4941]: Test Message No.0
2021-07-08T15:23:50+0200 debianVM systemd[1]: Starting Daily man-db regeneration...
2021-07-08T15:23:50+0200 debianVM systemd[1]: Started Run anacron jobs.
2021-07-08T15:23:50+0200 debianVM systemd[1]: anacron.service: Succeeded.
2021-07-08T15:23:50+0200 debianVM sudo[4940]: pam_unix(sudo:session): session closed for user root
2021-07-08T15:23:50+0200 debianVM systemd[1]: logrotate.service: Succeeded.
2021-07-08T15:23:50+0200 debianVM systemd[1]: Started Rotate log files.
2021-07-08T15:23:50+0200 debianVM systemd[1]: man-db.service: Succeeded.
2021-07-08T15:23:50+0200 debianVM systemd[1]: Started Daily man-db regeneration.


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Expired Message in Log

2021-07-08 Thread Andrei Borzenkov
On 08.07.2021 18:33, Andreas Krueger wrote:
> Hi Folks,
> 
> For my customer I have to verify the logger in their system, which is journald
> (241). For that I have written some tests that check whether expired
> messages are removed from the log. As you can see, the configuration
> is set up so that messages older than 12 months shall be removed.
> 
> [Journal]
> Storage=persistent
> Compress=yes
> Seal=yes
> SplitMode=none
> #SyncIntervalSec=5m
> #RateLimitIntervalSec=30s
> #RateLimitBurst=1
> SystemMaxUse=500M
> #SystemKeepFree=
> SystemMaxFileSize=50M
> SystemMaxFiles=13
> #RuntimeMaxUse=
> #RuntimeKeepFree=
> #RuntimeMaxFileSize=
> #RuntimeMaxFiles=100
> MaxRetentionSec=12month
> #MaxFileSec=1month
> #ForwardToSyslog=yes
> #ForwardToKMsg=no
> #ForwardToConsole=no
> #ForwardToWall=yes
> #TTYPath=/dev/console
> #MaxLevelStore=debug
> #MaxLevelSyslog=debug
> #MaxLevelKMsg=notice
> #MaxLevelConsole=info
> #MaxLevelWall=emerg
> #LineMax=48K
> #ReadKMsg=yes
> 
> The test performs 5 steps:
> 
>   1.  With the help of the date command, the system clock is reset by 367 days,
> which is definitely more than 12 months, even in a leap year.

Forward or backward?

>   2.  Send a message to the logger.
>   3.  Reset the system clock to the current time, which was stored before step 1.
>   4.  Send another message to the logger.
>   5.  Synchronize the logger, to be sure all data are in the file system.
> 
> What I expect to see is only the message that was sent after resetting the 
> system clock (green text). In fact, I can see the expired message (red text) 
> as well (today is 2021-07-08). Is this correct?
> 

This is a plain text mailing list. We do not see any red, green or yellow.

Settings in journald.conf apply to archived journal files only. They do
not apply to the active journal file. Was the journal rotated in between?

journald does not remove individual "expired" messages; it removes whole
expired journal files. For your message to be removed, it must be in an
archived file that was created in the past and that is older than the
expiration date. Is that the case?
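In other words, the test would first have to force the active file into the
archive before the retention setting can bite. A sketch of the knobs involved:

$ sudo journalctl --rotate                 # archive the currently active journal file
$ sudo journalctl --vacuum-time=12months   # prune archived files older than 12 months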

> Besides this, I have observed that modifying the time with the date command
> will trigger a rotation. Is this observation correct?

That I cannot answer, but it sounds logical.

> 
> Greetings from Berlin,
> Andreas
> 
> developer@debianVM ../loggerlib/build/integration_test/test (git)-[ak/journal] % journalctl -n 400 --no-pager -o short-iso
> -- Logs begin at Mon 2020-07-06 15:23:50 CEST, end at Thu 2021-07-08 16:23:48 CEST. --
> 2021-07-08T15:23:50+0200 debianVM systemd-journald[286]: System journal (/var/log/journal/c1223bdbe6484166aa9af858c392a7e0) is 40.0M, max 500.0M, 460.0M free.
> 2020-07-06T15:23:50+0200 debianVM LoggerLib_sw_integration_test[4941]: Expired Message
> 2021-07-08T15:23:50+0200 debianVM systemd[1]: Starting Rotate log files...
> 2021-07-08T15:23:50+0200 debianVM anacron[4956]: Anacron 2.3 started on 2021-07-08
> 2021-07-08T15:23:50+0200 debianVM systemd[1]: Starting Daily apt download activities...
> 2021-07-08T15:23:50+0200 debianVM anacron[4956]: Normal exit (0 jobs run)
> 2021-07-08T15:23:50+0200 debianVM LoggerLib_sw_integration_test[4941]: Test Message No.0
> 2021-07-08T15:23:50+0200 debianVM systemd[1]: Starting Daily man-db regeneration...
> 2021-07-08T15:23:50+0200 debianVM systemd[1]: Started Run anacron jobs.
> 2021-07-08T15:23:50+0200 debianVM systemd[1]: anacron.service: Succeeded.
> 2021-07-08T15:23:50+0200 debianVM sudo[4940]: pam_unix(sudo:session): session closed for user root
> 2021-07-08T15:23:50+0200 debianVM systemd[1]: logrotate.service: Succeeded.
> 2021-07-08T15:23:50+0200 debianVM systemd[1]: Started Rotate log files.
> 2021-07-08T15:23:50+0200 debianVM systemd[1]: man-db.service: Succeeded.
> 2021-07-08T15:23:50+0200 debianVM systemd[1]: Started Daily man-db regeneration.
> 
> 
> 
> 
> ___
> systemd-devel mailing list
> systemd-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/systemd-devel
> 

___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Restricting swap usage for a process managed via systemd

2021-07-08 Thread Michal Koutný
Hello Debraj.

On Thu, Jul 08, 2021 at 05:10:44PM +0530, Debraj Manna wrote:
> >> Linux vrni-platform 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 
> >> UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
> [...]
> GRUB_CMDLINE_LINUX="audit=1 rootdelay=180 nousb net.ifnames=0 biosdevname=0
> fsck.mode=force fsck.repair=yes ipv6.disable=1
> systemd.unified_cgroup_hierarchy=1"
> 
> Even after making these changes MemorySwapMax not taking into effect.

You also need to add swapaccount=1; swap accounting is enabled by
default only since kernel v5.8.
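That is, something like the following, extending the GRUB_CMDLINE_LINUX
shown above:

GRUB_CMDLINE_LINUX="... systemd.unified_cgroup_hierarchy=1 swapaccount=1"
$ sudo update-grub && sudo reboot
# afterwards, memory.swap.max should appear in the unit's cgroup directory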

HTH,
Michal


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Concurrent login / daemon-reload produces abandoned sessions

2021-07-08 Thread Michal Koutný
Hello Nicolas.

On Wed, Jul 07, 2021 at 02:12:51PM +0200, Nicolas Bock wrote:
> Using systemd-248.3-1ubuntu1 on Ubuntu Impish the following
> script produces multiple abandoned sessions:
> 
>   $ for i in {1..100}; do sleep 0.2; ssh localhost sudo systemctl daemon-reload & ssh localhost sleep 1 & done
>   $ sleep 2
>   $ jobs -p | xargs --verbose --no-run-if-empty kill -KILL
>   $ systemctl | grep abandoned
>   $ systemctl | grep abandoned
>   session-174.scope   loaded active abandoned   Session 174 of user ubuntu
>   session-175.scope   loaded active abandoned   Session 175 of user ubuntu
>   session-176.scope   loaded active abandoned   Session 176 of user ubuntu
>   session-25.scope    loaded active abandoned   Session 25 of user ubuntu
> 
> I would like to debug this behavior further and understand
> why this is happening but don't know where to look next.

It might be a bit challenging :)

> Is there any information in particular I should look at?

I assume you use a hybrid or unified cgroup setup and that the abandoned
scopes are empty (no processes in their cgroups), correct?

My hypothesis is the following:

// race between scope abandonment and emptiness notification -> the abandon comes first
manager_reload
  manager_clear_jobs_and_units
    unit_release_cgroup
      inotify_rm_watch(u->manager->cgroup_inotify_fd, u->cgroup_control_inotify_wd)
[...]
// the last process terminates somewhere here, but we're not watching for emptiness yet
scope_coldplug()
  // the scope should be checked for emptiness here

I _think_ this could be fixed with the following patch:
--- a/src/core/scope.c
+++ b/src/core/scope.c
@@ -243,8 +243,8 @@ static int scope_coldplug(Unit *u) {
                                 if (r < 0 && r != -EEXIST)
                                         return r;
                         }
-                } else
-                        (void) unit_enqueue_rewatch_pids(u);
+                }
+                (void) unit_enqueue_rewatch_pids(u);
         }
 
         bus_scope_track_controller(s);

Can you file a GitHub issue to track this (and possibly try whether this
patch works for you)?

Thanks,
Michal



___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel