Bug#764647: `mdadm_env.sh`, referenced in systemd service file, not provided

2014-11-30 Thread westlake

Isn't the EnvironmentFile supposed to point to /etc/default/mdadm?
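
For a quick local test of that idea, a drop-in override could point the unit at that file (a hedged sketch only; /etc/default/mdadm uses variable names such as DAEMON_OPTIONS, so the unit's $MDADM_MONITOR_ARGS would not pick anything up from it without adjustment):

# /etc/systemd/system/mdmonitor.service.d/override.conf  (hypothetical drop-in)
[Service]
EnvironmentFile=-/etc/default/mdadm

# apply it with:
#   systemctl daemon-reload && systemctl restart mdmonitor.service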



Bug#764647: `mdadm_env.sh`, referenced in systemd service file, not provided

2014-10-09 Thread Paul Menzel
Package: mdadm
Version: 3.3.2-2
Severity: normal

Dear Debian folks,


looking through `journalctl -b -a` I noticed the line below.

Failed at step EXEC spawning /usr/lib/systemd/scripts/mdadm_env.sh: No such file or directory

But I am unable to find that script in the package. Could you please add
it or first check for its existence before trying to execute it?
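
For illustration, a minimal sketch of what such a helper could look like on Debian, assuming its only job is to translate the shell-style defaults file into the /run/sysconfig/mdadm EnvironmentFile referenced by the unit (hypothetical content; the upstream script targets other distributions' /etc/sysconfig/mdadm and may do more):

#!/bin/sh
# hypothetical /usr/lib/systemd/scripts/mdadm_env.sh sketch
# source the Debian defaults file, if present
[ -r /etc/default/mdadm ] && . /etc/default/mdadm
# write a systemd EnvironmentFile for mdmonitor.service
mkdir -p /run/sysconfig
echo "MDADM_MONITOR_ARGS=--scan ${DAEMON_OPTIONS:-}" > /run/sysconfig/mdadm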


Thanks,

Paul


PS: The following was printed to the terminal while running `reportbug
mdadm`.

running sudo /usr/share/bug/mdadm/script ...
[sudo] password for joe: 
File descriptor 3 (/tmp/reportbug-20141009-10473-Dmj9k1) leaked on pvs invocation. Parent PID 11801: /bin/bash

-- Package-specific info:
--- mdadm.conf
DEVICE partitions
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST <system>
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=fb7f3dc5:d183cab6:12123120:1a2207b9
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=52ff2cf2:40981859:e58d8dd6:5faec42c

--- /etc/default/mdadm
INITRDSTART='all'
AUTOCHECK=true
START_DAEMON=true
DAEMON_OPTIONS=--syslog
VERBOSE=false

--- /proc/mdstat:
Personalities : [raid1] 
md0 : active raid1 sda1[0]
  497856 blocks [2/1] [U_]
  
md1 : active raid1 sda2[0]
  1953013952 blocks [2/1] [U_]
  
unused devices: <none>

--- /proc/partitions:
major minor  #blocks  name

   8        0 1953514584 sda
   8        1     497983 sda1
   8        2 1953014017 sda2
   9        1 1953013952 md1
   9        0     497856 md0
 253        0 1953012924 dm-0
 253        1    5242880 dm-1
 253        2    4194304 dm-2
 253        3   20971520 dm-3
 253        4   10485760 dm-4
 253        5   10485760 dm-5
 253        6  766803968 dm-6
 253        7   41943040 dm-7
 253        8  272629760 dm-8

--- LVM physical volumes:
  PV                    VG       Fmt  Attr PSize PFree
  /dev/mapper/md1_crypt speicher lvm2 a--  1.82t 782.25g
--- mount output
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=212020,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=747836k,mode=755)
/dev/mapper/speicher-root on / type ext3 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=21,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/mapper/speicher-tmp on /tmp type reiserfs (rw,relatime)
/dev/mapper/speicher-home on /home type ext3 (rw,relatime,data=ordered)
/dev/md0 on /boot type ext3 (rw,relatime,data=ordered)
/dev/mapper/speicher-var on /var type ext3 (rw,relatime,data=ordered)
/dev/mapper/speicher-usr on /usr type ext3 (rw,relatime,data=ordered)
/dev/mapper/speicher-bilder on /srv/bilder type xfs (rw,nosuid,nodev,noexec,relatime,attr2,inode64,noquota)
/dev/mapper/speicher-filme on /srv/filme type xfs (rw,nosuid,nodev,relatime,attr2,inode64,noquota)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=373920k,mode=700,uid=1000,gid=1000)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)

--- initrd.img-3.16-2-686-pae:
84638 blocks
c84c9c26d348e2b5bc763d17a5260f7c  ./etc/mdadm/mdadm.conf
d3be82c0f275d6c25b04d388baf9e836  ./etc/modprobe.d/mdadm.conf
4fcbf593caa8eede868bcb76fa14dbd0  ./lib/modules/3.16-2-686-pae/kernel/drivers/md/dm-mirror.ko
8fb176306d4e36f691a755e476d7fd35  ./lib/modules/3.16-2-686-pae/kernel/drivers/md/dm-log.ko

Bug#764647: `mdadm_env.sh`, referenced in systemd service file, not provided

2014-10-09 Thread Michael Tokarev
10.10.2014 01:25, Paul Menzel wrote:
 Package: mdadm
 Version: 3.3.2-2
 Severity: normal
 
 Dear Debian folks,
 
 
 looking through `journalctl -b -a` I noticed the line below.
 
   Failed at step EXEC spawning /usr/lib/systemd/scripts/mdadm_env.sh: No such file or directory
 
 But I am unable to find that script in the package. Could you please add
 it or first check for its existence before trying to execute it?

As far as I understand, this is exactly how it is supposed to be used. From
systemd/mdmonitor.service:

[Service]
Environment=  MDADM_MONITOR_ARGS=--scan
EnvironmentFile=-/run/sysconfig/mdadm
ExecStartPre=-/usr/lib/systemd/scripts/mdadm_env.sh
ExecStart=/sbin/mdadm --monitor $MDADM_MONITOR_ARGS

Why does systemd ignore the leading minus sign?
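
For what it's worth, the '-' semantics can be checked in isolation with a throwaway unit (hypothetical unit name and paths; systemd documents that '-' on EnvironmentFile= tolerates a missing file and '-' on ExecStartPre= tolerates a failing command, although the failed spawn may still show up in the journal):

# /etc/systemd/system/minus-test.service  (hypothetical test unit)
[Unit]
Description=check that a missing '-' prefixed ExecStartPre is non-fatal

[Service]
Type=oneshot
EnvironmentFile=-/run/does-not-exist
ExecStartPre=-/usr/local/bin/does-not-exist.sh
ExecStart=/bin/true

# then:
#   systemctl daemon-reload
#   systemctl start minus-test.service
#   systemctl status minus-test.service   # unit should still report success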


 PS: The following was printed to the terminal while running `reportbug
 mdadm`.

That should be revisited; it does lots of stuff, mostly assuming the problem
is in array assembly.  Please note, however, that your arrays are degraded,
which should probably be fixed (a sketch follows the quoted /proc/mdstat below).

 --- /proc/mdstat:
 Personalities : [raid1] 
 md0 : active raid1 sda1[0]
   497856 blocks [2/1] [U_]
   
 md1 : active raid1 sda2[0]
   1953013952 blocks [2/1] [U_]
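
A hedged sketch of how the missing mirror halves could be re-added, assuming the second disk is /dev/sdb with a matching partition layout (hypothetical device names; verify with lsblk or fdisk -l before touching anything):

# re-add the second halves of both RAID1 arrays
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
# watch the resync progress
cat /proc/mdstat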

Thanks,

/mjt

