Public bug reported:
Description:
After creating a RAID volume, deleting it, and creating a second RAID
volume (on the same disks as the first volume but with fewer of them),
the status LEDs on the disks left in the container are set to ‘failure’.
Steps to reproduce:
1. Turn on ledmon:
# ledmon --all
2. Create RAID container:
# mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=3 /dev/nvme5n1
/dev/nvme4n1 /dev/nvme2n1 --run --force
3. Create first RAID volume:
# mdadm --create /dev/md/Volume --level=5 --chunk 64 --raid-devices=3
/dev/nvme5n1 /dev/nvme4n1 /dev/nvme2n1 --run --force
4. Stop first RAID volume:
# mdadm --stop /dev/md/Volume
5. Delete first RAID volume:
# mdadm --kill-subarray=0 /dev/md127
6. Create a second RAID volume in the same container (with fewer disks
than the first volume, using the same disks as in the first volume):
# mdadm --create /dev/md/Volume --level=1 --raid-devices=2 /dev/nvme5n1
/dev/nvme4n1 --run
7. Verify the status LED on the container member disks that are not part
of the second RAID volume (the membership cross-check below may help).
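(For reference, disk membership can be cross-checked before reading the LEDs.
The commands below assume the container was auto-assembled as /dev/md127, as
in step 5, and that /dev/nvme2n1 is the disk left out of the second volume:
# cat /proc/mdstat
# mdadm --detail /dev/md127
# mdadm --examine /dev/nvme2n1
The left-out disk should still be reported as a container member, not as a
failed device.)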
Expected results:
Disks from the container that are not in the second volume should have the
‘normal’ status LED.
Actual results:
Disks from the container that are not in the second volume have the ‘failure’
status LED.
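As a possible workaround (not verified here), the LED can be forced back
manually with ledctl from the same package, for example:
# ledctl normal=/dev/nvme2n1
Note that a running ledmon instance may override states set by ledctl, so
this mainly confirms that the drive's LED itself still responds correctly.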
** Affects: ledmon (Ubuntu)
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1831733
Title:
ledmon incorrectly sets the status LED