Hello all!

We're testing FAI 4.0~beta2 exp36 and we've found something annoying... I
can't tell whether it's really a bug or not.

The error message is:

[...]
Executing: mkfs.ext3  /dev/sda8
Executing: parted -s /dev/sda set 9 lvm on
Executing: pvcreate  /dev/sda9
Command had non-zero exit code
Error in task partition. Traceback: task_error task_partition task task_install 
task task_action task main

This happens when:
1) a server is installed with a class A (containing LVM partitions);
2) the same server is reinstalled with the same class A, but we have
"reset" the hardware RAID in between.

From what I understand, "resetting" the RAID just reinitializes the RAID
flags on the disks and destroys the MBR.
When the same class is then reinstalled, the partitions are created exactly
where they were before, and when the lvm flag is set to 'on', LVM finds the
old LVM configuration and restores the LVs that existed previously. The
following pvcreate then refuses to initialize the partition, because it
already carries LVM metadata, and exits with a non-zero code.

Note that if the MBR isn't erased, setup-storage finds the existing
PV/VG/LVs and cleans them up before creating the new partitions.

Steps to reproduce:
1) install class A
2) at the end of the install: Alt + F2
3) dd if=/dev/zero of=/dev/<your_disk> ibs=512 count=1
4) install class A again.
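The dd in step 3 only zeroes the first 512-byte sector (the MBR); any LVM metadata sitting further into the old PV partition is untouched, which is why pvcreate later trips over it. A safe sketch with a scratch file (no real disk involved; the 1 MiB offset and the "LVM2 001" marker are just stand-ins for wherever real LVM metadata lives):

```shell
img=$(mktemp)                     # scratch file standing in for the disk
truncate -s 10M "$img"
# plant a fake metadata marker well past the first sector
printf 'LVM2 001' | dd of="$img" bs=1 seek=1048576 conv=notrunc 2>/dev/null
# the exact wipe from step 3: clears only the first 512 bytes
dd if=/dev/zero of="$img" ibs=512 count=1 conv=notrunc 2>/dev/null
# the marker survives the "MBR wipe"
dd if="$img" bs=1 skip=1048576 count=8 2>/dev/null; echo   # -> LVM2 001
rm -f "$img"
```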


As a workaround, we either relaunch the install (the MBR then exists, so
the stale LVM partitions are removed) or remove the existing PV/VG/LVs by
hand before recreating the RAID (or erasing the MBR).
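For what it's worth, a sketch of what the manual cleanup could look like. The VG name TEST and /dev/sda9 match our disk_config below, but check with pvs/vgs first; the DRY_RUN wrapper is just mine, so the commands can be previewed before running them for real:

```shell
# DESTRUCTIVE when DRY_RUN is not set -- double-check VG and PV names first.
cleanup_stale_lvm() {
    vg=$1; pv=$2
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }
    run vgchange -an "$vg"   # deactivate the stale LVs
    run vgremove -f "$vg"    # remove the VG together with its LVs
    run pvremove -ff "$pv"   # wipe the stale PV label
    run wipefs -a "$pv"      # clear any remaining on-disk signatures
}
DRY_RUN=1
cleanup_stale_lvm TEST /dev/sda9   # preview only; unset DRY_RUN to execute
```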

Do you think this is a bug or not? And can it be corrected?

Regards
Mathieu

disk_config disk1 bootable:1
primary /boot           100     ext3    rw
primary swap            1G      swap    sw
primary /               1G      ext3    rw,errors=remount-ro
logical /usr            1500    ext3    rw
logical /var            1G      ext3    rw
logical /home           512     ext3    rw
logical /tmp            1G      ext3    rw
logical -               4G-     -       -

disk_config lvm
vg TEST         disk1.9
TEST-PART1      /part1  1G      ext4    rw
TEST-PART2      /part2  1G      ext4    rw
TEST-PART3      /part3  1G      ext4    rw
