I'll admit I'm the "clueless user" here, so I could use some pointers...

A user had a RHEL 5 64-bit server box where some update (he didn't say which) left the system unbootable. The system had the boot OS on one drive and a RAID partition across 4 other drives.

I was able to boot from a CD, run "linux rescue", and "scp" the user's data off to another box with some USB hard disks. Took forever, but worked.
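(For what it's worth, the copy was basically along these lines from the rescue shell -- the volume names, mount point, and destination below are placeholders from memory, not the real ones:)

# lvm vgchange -ay                            (activate the logical volumes)
# mount /dev/SomeVG/SomeLV /mnt/data          (mount the data volume)
# scp -r /mnt/data otherbox:/backup/          (copy it all off over the network)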


So I then disconnected the RAID drives from the box and reloaded the OS (and patched it).

I then plugged the RAID drives back in and rebooted.

I can see that the 2 GB RAID volume is plugged in:

# lvscan
  ACTIVE            '/dev/VolGroup00/LogVol00' [72.47 GB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [1.94 GB] inherit


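From some googling, I gather these are safe, read-only ways to check whether the RAID drives themselves are being seen at all (please correct me if any of them could touch the data):

# cat /proc/mdstat        (lists any software RAID arrays the kernel has assembled)
# pvscan                  (scans all disks for LVM physical volumes)
# vgscan                  (scans for LVM volume groups)
# fdisk -l                (just prints the partition table of each drive)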
But I honestly don't know the next steps (all my RAID experience has been on OS X...)

How do I automount this RAID again without destroying the data on it?
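My best guess from the docs is something along these lines, but I'd like someone to confirm before I run any of it (the volume group, logical volume, mount point, and filesystem type below are placeholders -- I don't know the real names yet):

# vgscan                                      (find the volume group on the RAID drives)
# vgchange -ay RaidVG                         (activate its logical volumes)
# mount -o ro /dev/RaidVG/RaidLV /mnt/raid    (mount read-only first to check the data)

and then, if that all looks sane, an /etc/fstab entry so it comes back at boot:

/dev/RaidVG/RaidLV   /mnt/raid   ext3   defaults   1 2

Is that right, or is there more to it than that?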

Just running "lvscan" coughs up an SELinux alert:

SELinux is preventing /usr/sbin/lvm (lvm_t) "write" to .cache (lvm_etc_t)

and the recommended "restorecon -v .cache" doesn't do anything:

restorecon -v .cache
lstat(.cache) failed: No such file or directory
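I'm guessing the ".cache" it's complaining about is LVM's cache file under /etc/lvm (maybe /etc/lvm/.cache or /etc/lvm/cache/.cache -- I'm not sure which on RHEL 5), so presumably restorecon wants a full path, or I could just relabel the whole directory:

# restorecon -Rv /etc/lvm        (recursively reset the SELinux labels under /etc/lvm)

Does that sound right?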


So I'm worried I'm going to do something stupid and damage the RAID. (Even though there's a backup, I *really* don't want to restore the data again if I don't have to -- it took days to get it off...)

Suggestions?  Next Steps?  Thanks!

- Steve
--
Steve Maser ([EMAIL PROTECTED])    | Thinking is man's only basic virtue,
Desktop Support Manager          | from which all the others proceed.
Dept. of Mechanical Engineering  |                          -- Ayn Rand

