Bug#925373: libpam-modules: Executing script from pam_motd or pam_exec produce a huge memory consumption
Setting the following line in the systemd-user PAM file stops the issue:

echo '@include null' >> /etc/pam.d/systemd-user

Does anyone know why including a nonexistent file can change the behaviour? Thanks,
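For anyone wanting to try this workaround without touching the live PAM configuration, here is a minimal sketch that applies the reporter's `@include null` line to a throwaway copy of the service file instead (the temp-directory path and the fallback stub contents are mine, not from the report; editing /etc/pam.d/systemd-user itself requires root and care):

```shell
# Work on a throwaway copy of the PAM service file, not the live one.
tmpdir=$(mktemp -d)
if [ -r /etc/pam.d/systemd-user ]; then
    cp /etc/pam.d/systemd-user "$tmpdir/systemd-user"
else
    # Fallback stub so the sketch also runs on systems without the file.
    printf 'session optional pam_systemd.so\n' > "$tmpdir/systemd-user"
fi
# The reporter's workaround: include a nonexistent PAM config fragment.
echo '@include null' >> "$tmpdir/systemd-user"
tail -n 1 "$tmpdir/systemd-user"
```

Reviewing the diff against the original file before copying anything back is the safe way to adopt the change.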
Bug#925373: libpam-modules: Executing script from pam_motd or pam_exec produce a huge memory consumption
The /tmp dir is mounted on / with ext4, as set up by the installation process. The output of mount is:

# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=489384k,nr_inodes=122346,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=100108k,mode=755)
/dev/sda2 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=1687)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=100104k,mode=700)

I didn't do any configuration on login except setting up the ssh keys; the test was done starting from a fresh install. Thanks for attaching the file.
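The key fact in the mount table above (/tmp living on the ext4 root, not on a tmpfs) can also be confirmed directly instead of being read out of the full listing. A quick sketch, assuming GNU coreutils `df`:

```shell
# Print only the filesystem type backing /tmp (ext4 in this report;
# tmpfs on many other setups).
fstype=$(df --output=fstype /tmp | tail -n 1)
echo "/tmp filesystem type: $fstype"
```

This answers Steve's first question in one line without scanning the whole mount output.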
Bug#925373: libpam-modules: Executing script from pam_motd or pam_exec produce a huge memory consumption
Control: tags -1 moreinfo

On Sat, Mar 23, 2019 at 06:19:53PM -0400, Ricardo Fraile wrote:
> I attach the reports from the free, slabinfo, meminfo and ps in each step
> in the following file, the directory names are between brackets in each
> step.

You did not attach these, but linked to a tarball stored in a Google drive instead. Attaching this tarball here so that the information is available to users of the Debian BTS without dependency on an external service.

> If I comment the pam_motd lines on "/etc/pam.d/ssh" and I add a pam_exec
> directly, I get the same bad memory consumption result:
>
> #session optional pam_motd.so motd=/run/motd.dynamic
> #session optional pam_motd.so noupdate
> session  optional pam_exec.so stdout /bin/sh /etc/update-motd.d/10-uname
>
> Why is the memory consumption large when the tempfiles are created inside
> pam, while executing the same from the login user doesn't produce the same
> result?

I don't know, and yours is certainly the first report I've seen of this despite the code in the pam modules not having changed for years.

What type of filesystem is mounted at /tmp? Is it a tmpfs?

Do you configure something in your login environment that causes mktemp to use a different path for tempfiles than pam itself, running in a system context, would? (i.e. $TMPDIR)

-- 
Steve Langasek                   Give me a lever long enough and a Free OS
Debian Developer                 to set it on, and I can move the world.
Ubuntu Developer                 https://www.debian.org/
slanga...@ubuntu.com             vor...@debian.org

report.tar.gz
Description: application/gzip

signature.asc
Description: PGP signature
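The $TMPDIR question above can be checked without comparing two live logins: GNU mktemp's -u (dry-run) flag prints where a tempfile would land under a given environment without creating it. A small sketch (the /var/tmp value is purely illustrative, not something from the report):

```shell
# Where mktemp would place a file with no TMPDIR set; pam's system
# context normally has none, so this defaults to /tmp.
default_path=$(env -u TMPDIR mktemp -u)
# Versus where it would go if the login environment exported TMPDIR
# (/var/tmp here is just an example value).
custom_path=$(TMPDIR=/var/tmp mktemp -u)
echo "without TMPDIR: $default_path"
echo "with TMPDIR:    $custom_path"
```

If both paths point into /tmp on the affected machine, a $TMPDIR difference between the pam and login contexts can be ruled out.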
Bug#925373: libpam-modules: Executing script from pam_motd or pam_exec produce a huge memory consumption
Package: libpam-modules
Version: 1.1.8-3.6
Severity: important

Dear Maintainer,

On a Debian 9.8 system installed with only ssh-server and the standard system utilities, running under VMware with 2 CPUs and 1 GB of RAM:

# ssh-keygen
# cd .ssh
# cat * >> authorized_keys
# vi /etc/ssh/sshd_config
#PermitRootLogin prohibit-password
PermitRootLogin yes
# systemctl disable cron
# apt-get install smem
# reboot

I attach the reports from free, slabinfo, meminfo and ps at each step in the following file; the directory names are given in brackets for each step.

https://drive.google.com/file/d/1rsp6x2zB34JyOaO7rYjCWzZwgtJfM9SP/view?usp=sharing

Starting the system and dropping caches, the free report is the following (00-started_with_drop):

# sync; echo 3 > /proc/sys/vm/drop_caches
# date && free -m
Sat Mar 23 16:50:01 EDT 2019
              total        used        free      shared  buff/cache   available
Mem:            977          81         861           4          33         806
Swap:          1021           0        1021

Doing the ssh loop from another terminal:

# while :; do ssh root@localhost "exit"; done

One minute later, the memory reaches and maintains the following level (01-with_loop_normal):

# date && free -m
Sat Mar 23 16:51:12 EDT 2019
              total        used        free      shared  buff/cache   available
Mem:            977         287         643           9          46         592
Swap:          1021           0        1021

But if I set "/etc/update-motd.d/10-uname" to the following content:

#!/bin/sh
uname -snrvm
A=`mktemp`
B=`mktemp`
C=`mktemp`
rm -f $A
rm -f $B
rm -f $C

the memory use grows rapidly and the system starts swapping (02-with_loop_modified):

# date && free -m
Sat Mar 23 16:52:00 EDT 2019
              total        used        free      shared  buff/cache   available
Mem:            977         344         582          10          50         533
Swap:          1021           0        1021

(03-with_loop_modified_last)

# date && free -m
Sat Mar 23 16:52:34 EDT 2019
              total        used        free      shared  buff/cache   available
Mem:            977         878          63           7          35           7
Swap:          1021          14        1007

Stopping the loop and cleaning caches produces the following (04-finished_loop):

# sync; echo 3 > /proc/sys/vm/drop_caches
# date && free -m
Sat Mar 23 16:53:34 EDT 2019
              total        used        free      shared  buff/cache   available
Mem:            977         610         340           6          26         280
Swap:          1021          14        1007

There is around 300 MB of difference in the available memory. Doing the same task that pam_motd executes doesn't produce that large an increase (05-with_runparts_loop):

# reboot
# sync; echo 3 > /proc/sys/vm/drop_caches
# date && free -m
Sat Mar 23 17:05:55 EDT 2019
              total        used        free      shared  buff/cache   available
Mem:            977          82         864           5          30         807
Swap:          1021           0        1021

Start the loop in another terminal:

# while :; do run-parts --lsbsysinit /etc/update-motd.d ; done

After a while, stop the loop and check (06-with_runpart_loop_last):

# date && free -m
Sat Mar 23 17:08:15 EDT 2019
              total        used        free      shared  buff/cache   available
Mem:            977          83         852           5          41         800
Swap:          1021           0        1021

And finally, cleaning caches (07-finished_runpart_loop):

# sync; echo 3 > /proc/sys/vm/drop_caches
# date && free -m
Sat Mar 23 17:08:33 EDT 2019
              total        used        free      shared  buff/cache   available
Mem:            977          82         863           5          31         806
Swap:          1021           0        1021

If I comment the pam_motd lines in "/etc/pam.d/ssh" and add a pam_exec line directly, I get the same bad memory consumption result:

#session optional pam_motd.so motd=/run/motd.dynamic
#session optional pam_motd.so noupdate
session  optional pam_exec.so stdout /bin/sh /etc/update-motd.d/10-uname

Why is the memory consumption large when the tempfiles are created inside pam, while executing the same thing as the login user doesn't produce the same result?

-- System Information:
Debian Release: 9.8
  APT prefers stable-updates
  APT policy: (500, 'stable-updates'), (500, 'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 4.9.0-8-amd64 (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages libpam-modules depends on:
ii  debconf [debconf-2.0]  1.5.61
ii  libaudit1              1:2.6.7-2
ii  libc6                  2.24-11+deb9u4
ii  libdb5.3               5.3.28-12+deb9u1
ii  libpam-modules-bin     1.1.8-3.6
ii  libpam0g               1.1.8-3.6
ii  libselinux1
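The before/after `free -m` comparisons in the report can also be scripted in one place. A minimal sketch (the `avail` helper name and the iteration count of 100 are mine, not from the report) that runs the same mktemp/rm churn as the modified 10-uname script and reports MemAvailable from /proc/meminfo around it:

```shell
#!/bin/sh
# Report MemAvailable (in MB) before and after a burst of the tempfile
# churn used in the modified 10-uname script.
avail() { awk '/^MemAvailable:/ {print int($2 / 1024)}' /proc/meminfo; }

before=$(avail)
i=0
while [ "$i" -lt 100 ]; do
    # Same pattern as the report's script: create three tempfiles, remove them.
    A=$(mktemp); B=$(mktemp); C=$(mktemp)
    rm -f "$A" "$B" "$C"
    i=$((i + 1))
done
after=$(avail)
echo "MemAvailable before: ${before} MB, after: ${after} MB"
```

Run directly this measures the run-parts-style baseline (the case that did not leak in the report); the interesting comparison is running the same churn through pam_exec via the ssh loop.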