Hi Aurélien,
Do you have a specification for the JUnit test results you produce, or
an example of one of your test result sets? We would be more than
willing to pick up and go with something that can be used with a wider
set of tools, with the obvious caveat that it provides everything needed
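For reference, there is no single formal specification for the JUnit XML report format; most tools consume the shape emitted by Ant's junitreport task. A minimal, hypothetical example of that shape (the suite and test names here are invented):

```xml
<testsuite name="sanity" tests="2" failures="1" errors="0" time="3.2">
  <testcase classname="sanity" name="test_1" time="1.1"/>
  <testcase classname="sanity" name="test_2" time="2.1">
    <failure message="unexpected return code">captured log output</failure>
  </testcase>
</testsuite>
```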
Hello list,
We recently evacuated several OSTs on a single OSS, replaced the RAID
controllers, re-initialized the RAIDs for new OSTs, and made new Lustre
filesystems for them, using the same OST indices as we had before.
The filesystem and all its clients have been up and running the whole
time. We
Hello,
Did you back up the old magic files (last_rcvd, LAST_ID, CONFIG/*) from the
original OSTs and put them back before trying to mount them?
You probably didn't do that. So when you remount the OSTs with an existing
index, the MGS will refuse to add them without being told to writeconf, hence
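The backup step described above can be sketched as a small shell helper. The file locations used here (last_rcvd at the filesystem root, LAST_ID under O/0/, and the CONFIGS directory) match a Lustre 1.8 ldiskfs OST layout; the mount point and backup directory are hypothetical.

```shell
#!/bin/sh
# Save an OST's "magic" files before reformatting, so they can be
# restored onto the new filesystem and the same OST index reused.
# Assumes a Lustre 1.8 ldiskfs layout; paths are hypothetical.

backup_ost_magic() {
    src="$1"   # OST mounted read-only as ldiskfs, e.g. /mnt/ost0
    dst="$2"   # directory to hold the saved files
    mkdir -p "$dst"
    cp -a "$src/last_rcvd" "$dst/"
    if [ -f "$src/O/0/LAST_ID" ]; then
        mkdir -p "$dst/O/0"
        cp -a "$src/O/0/LAST_ID" "$dst/O/0/"
    fi
    if [ -d "$src/CONFIGS" ]; then
        cp -a "$src/CONFIGS" "$dst/"
    fi
}

# Typical use (hypothetical device):
#   mount -t ldiskfs -o ro /dev/sdb /mnt/ost0
#   backup_ost_magic /mnt/ost0 /root/ost0-magic
#   umount /mnt/ost0
```

If the files were not saved, the fallback the message alludes to is forcing a configuration rewrite with `tunefs.lustre --writeconf` on the targets.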
Daniel,
It looks like your OST backend storage device may be having an issue. I
would check the health and stability of the backend storage device or RAID
you are using for an OST device. That alone wouldn't likely cause a reboot
of your OSS system, though; there may be more problems, hardware and/or OS
Hi Jeff,
Thanks for your reply
Storage information:
DL380 G5 == OSS + 16 GB RAM
OS == SFS G3.2-2 + CentOS 5.3 + Lustre 1.8.3
MSA60 box == OST
RAID 6
Regards,
Daniel A
On Tue, Dec 21, 2010 at 11:45 AM, Jeff Johnson
jeff.john...@aeoncomputing.com wrote:
Daniel,
Check the health and stability of your RAID-6 volume. Make sure the RAID is
healthy and online. Use whatever monitoring utility came with your RAID card,
or check /proc/mdstat if it's a Linux mdraid. Check /var/log/messages for
error messages from your RAID or other hardware.
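The mdraid check above can be scripted. This is a sketch assuming the usual /proc/mdstat status-bracket format, where each member disk shows as "U" (up) or "_" (failed), so "[UU_U]" marks a degraded array; taking the file as an argument lets it run against a saved copy too.

```shell
#!/bin/sh
# Scan an mdstat-format file for degraded arrays: any status bracket
# containing "_" means a member disk is down (e.g. "[UU_U]").

check_mdstat() {
    if grep -E '\[[U_]*_[U_]*\]' "$1" >/dev/null; then
        echo DEGRADED
    else
        echo OK
    fi
}

# Typical use on a live system:
#   check_mdstat /proc/mdstat
```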
--Jeff