There were no answers to this from anyone, so it appears this has not been solved?

So I proceeded to follow the directions here:
https://wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821
using the stock Red Hat 2.6.32-504.16.2 kernel and what appear to be Lustre 2.7.52 sources. Some modifications were needed, but I built the kernel and then successfully built the Lustre rpms, including the zfs and spl modules for version 0.6.4.
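
For reference, the Lustre rpm build itself was essentially the standard sequence, something along these lines (the kernel-devel and zfs/spl source paths here are illustrative placeholders, not my exact ones):

  # in the Lustre source tree, with the patched kernel and its devel package installed
  sh autogen.sh
  ./configure --with-linux=/usr/src/kernels/2.6.32-504.16.2.el6_lustre.x86_64 \
              --with-spl=/usr/src/spl-0.6.4 \
              --with-zfs=/usr/src/zfs-0.6.4
  make rpms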

BUT, when I boot that kernel, after following the directions to create the initramfs, there are problems. The /boot/grub/grub.conf is clearly read, since I can choose between the stock kernel and the Lustre kernel, but once it leaves the grub prompt... nothing. The console just sits there without echoing any of the bootup sequence, with only an underscore flashing occasionally. I am left with no choice but to reboot back to the stock kernel.
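
Is the right next step simply to make the boot more verbose so I can see where it stops, by editing the kernel line in grub.conf along these lines (rhgb and quiet dropped)? The kernel/initramfs names and the root= device below are just placeholders, and I am assuming the old EL6-style rdshell/rdinitdebug dracut options:

  title Lustre kernel (verbose)
          root (hd0,0)
          kernel /vmlinuz-2.6.32-504.16.2.el6_lustre.x86_64 ro root=/dev/mapper/vg_root-lv_root rdshell rdinitdebug
          initrd /initramfs-2.6.32-504.16.2.el6_lustre.x86_64.img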

All of the volumes except /boot are LVM, and /var/log/dracut.log indicates that lvm is included in the initramfs image, but the LVM volumes just don't seem to be accessed; only /boot, which is ext4, is. I am at a loss about what to do next, and would really appreciate advice from someone on what to try here.
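
Would something like the following be the right way to double-check the image and force the lvm module into it on EL6? (The initramfs/kernel names are placeholders.)

  # confirm lvm actually made it into the image
  lsinitrd /boot/initramfs-2.6.32-504.16.2.el6_lustre.x86_64.img | grep -i lvm

  # rebuild the image, explicitly adding the lvm dracut module
  dracut --force --add lvm /boot/initramfs-2.6.32-504.16.2.el6_lustre.x86_64.img \
         2.6.32-504.16.2.el6_lustre.x86_64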

Many thanks,
bob


On 5/13/2015 10:16 AM, Bob Ball wrote:
OK, so I am seeing EXACTLY the issue reported at the end of LU-6452, eight minutes after it was closed by Andreas Dilger.
https://jira.hpdd.intel.com/browse/LU-6452

There is no response.  Is there a solution?

This is Lustre 2.7.0 with (now) zfs 0.6.4.1-1, which was current when the server was built. I see a number of recent emails about updates to the Lustre sources against this zfs version, but is there a solution for the standard set of 2.7.0 Lustre rpms? Or any solution that will get me unstuck?
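
For what it is worth, this is roughly how I am comparing the installed pieces, in addition to the system info below (the package-name patterns are just my guess at what the zfsonlinux repo provides):

  rpm -qa | egrep -i 'lustre|^zfs|^spl|libzfs|libzpool' | sort
  modinfo zfs | grep -i '^version'
  modinfo spl | grep -i '^version'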

[root@umdist01 ~]# cat /etc/redhat-release
Scientific Linux release 6.6 (Carbon)
[root@umdist01 ~]# uname -r
2.6.32-504.8.1.el6_lustre.x86_64

I note that my first OSS server, initially built in March against zfs 0.6.3-1, seems to be happy and definitely does not exhibit this issue. That server has now also been updated to 0.6.4.1, and I am loath to reboot it now (it has been up for 40 days) for fear of losing access to all of its stored files. Perhaps I should downgrade it to the earlier zfs and cut off automatic zfs updates from that repo?
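
If downgrading is indeed the answer, I assume something along these lines would roll it back and keep yum from pulling 0.6.4 in again (the package globs and the repo file name are assumptions about the zfsonlinux EL6 repo):

  # roll the zfs stack back to the 0.6.3 series
  yum downgrade 'zfs*' 'spl*' 'libzfs*' 'libzpool*' 'libnvpair*' 'libuutil*'

  # then pin it, e.g. in /etc/yum.repos.d/zfs.repo (or yum.conf):
  #   exclude=zfs* spl* libzfs* libzpool* libnvpair* libuutil*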

Thanks,
bob

_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

