Our configuration is a distributed, replicated volume with 7 pairs of bricks 
across 2 servers. We are in the process of adding storage for another brick 
pair. I installed the new disks in one of the servers late last week and used 
the LSI storcli utility to create a RAID 6 virtual drive from them. Both 
servers run Red Hat 6.6 and Gluster 3.7.1.
Yesterday, I ran 'parted /dev/sdj' to create a partition on the new volume. 
Unfortunately, /dev/sdj was not the new volume (which is /dev/sdh). I realized 
the error right away, but the system was operating OK and it was late at 
night, so I decided to wait until today to try to fix it. This morning, I ran 
parted's rescue ('rescue 0 36.0TB') on /dev/sdj. It runs, but does not find a 
partition to restore. I am using LVM; the damaged partition is 
/dev/mapper/vg_data5-lv_data5, with an xfs filesystem on it.
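For completeness, the rescue attempt was the interactive parted command, 
roughly:

    parted /dev/sdj
    (parted) rescue 0 36.0TB

My (possibly wrong) understanding is that parted's rescue scans for filesystem 
signatures, so it may not recognize an LVM PV label at all, which would 
explain why it finds nothing to restore.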
The system continued to operate, but I expected problems on reboot. I 
rebooted, and indeed the system can no longer find the volume at 
/dev/mapper/vg_data5-lv_data5.
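From what I've read, the data should still be on the disk and only the 
partition table / PV label was overwritten, so I'm wondering whether restoring 
the PV from the LVM metadata backup would work. Something along these lines 
(untested; the UUID would come from /etc/lvm/backup/vg_data5 on this server, 
and I'm assuming the PV was on the whole disk rather than a partition):

    # Re-create the PV label with its original UUID from the metadata backup
    # (the new partition table parted wrote may need to be cleared first)
    pvcreate --uuid "<PV-UUID-from-backup>" --restorefile /etc/lvm/backup/vg_data5 /dev/sdj
    # Restore the VG metadata and reactivate the logical volume
    vgcfgrestore vg_data5
    vgchange -ay vg_data5
    # Dry-run filesystem check before mounting
    xfs_repair -n /dev/mapper/vg_data5-lv_data5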
Is it possible to recover this volume in place (perhaps along those lines), or 
do I need to drop it from the gluster volume, recreate the LVM partition, and 
then copy the files from its partner brick on the other server? If I need to 
copy the files, what is the best procedure for doing it? What I imagine for 
that path is sketched below.
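If rebuilding is the answer, I assume the sequence would be roughly the 
following (the volume name and brick paths are placeholders for my actual 
ones):

    # Recreate the LVM stack and filesystem on the repaired device
    pvcreate /dev/sdj
    vgcreate vg_data5 /dev/sdj
    lvcreate -l 100%FREE -n lv_data5 vg_data5
    mkfs.xfs -i size=512 /dev/mapper/vg_data5-lv_data5
    mount /dev/mapper/vg_data5-lv_data5 /bricks/data5

    # Swap the empty brick in and let self-heal copy from its replica partner
    gluster volume replace-brick VOLNAME server1:/bricks/data5/brick \
        server1:/bricks/data5/brick_new commit force
    gluster volume heal VOLNAME full

Is that right, or is there a better-supported procedure on 3.7?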

TIA,
Eva Freer
Oak Ridge National Laboratory
[email protected]