On 26.01.2017 at 19:02, Luke Pyzowski wrote:
I have a large RAID6 device with 24 local drives on CentOS 7.3. Randomly (around
50% of the time) systemd will unmount my RAID device, thinking it is degraded,
after the mdadm-last-resort@.timer expires. However, the device is working
normally by all accounts, and I can mount it manually immediately after boot
completes. In the logs below, /share is the RAID device. I can increase the
timer in /usr/lib/systemd/system/mdadm-last-resort@.timer from 30 to 60
seconds, but the problem can still occur at random.
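
A cleaner way to raise that timeout than editing the packaged unit is a drop-in
override. A minimal sketch, assuming the shipped timer uses OnActiveSec and that
60 seconds is enough headroom (the exact value is site-specific):

# /etc/systemd/system/mdadm-last-resort@.timer.d/override.conf
[Timer]
# Reset the packaged value, then set a longer delay before the
# "last resort" degraded activation is attempted.
OnActiveSec=
OnActiveSec=60

After creating the file, run "systemctl daemon-reload" so the override takes
effect; unlike edits under /usr/lib/systemd/system, a drop-in survives package
updates.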

systemd[1]: Created slice system-mdadm\x2dlast\x2dresort.slice.
systemd[1]: Starting system-mdadm\x2dlast\x2dresort.slice.
systemd[1]: Starting Activate md array even though degraded...
systemd[1]: Stopped target Local File Systems.
systemd[1]: Stopping Local File Systems.
systemd[1]: Unmounting /share...
systemd[1]: Stopped (with error) /dev/md0.
systemd[1]: Started Activate md array even though degraded.
systemd[1]: Unmounted /share.
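
The "Stopped (with error) /dev/md0" line suggests the stop is being propagated
from a unit relationship (e.g. a Conflicts= on the last-resort units) rather
than from a real device failure. A few commands that can help show where the
conflict and the unmount come from; the md0 instance name and the share.mount
unit name are assumptions taken from the log above:

# Show which units conflict with the md device, and what the last-resort
# template units actually declare
systemctl show -p Conflicts,ConflictedBy dev-md0.device
systemctl cat mdadm-last-resort@.timer mdadm-last-resort@.service

# Replay what happened to the mount and the last-resort units during this boot
journalctl -b -u share.mount -u mdadm-last-resort@md0.timer -u mdadm-last-resort@md0.service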

That also happens randomly in my Fedora 24 testing VM with a RAID10, and you can be sure that in a virtual machine the drives don't disappear or take a long time to appear.
