On Wed, 13 May 2015 10:16:07 -0400 Bob Ball <b...@umich.edu> wrote:
> OK, so, I am seeing EXACTLY the issue reported at the end of LU-6452,
> 8 minutes after it was closed by Andreas Dilger.
> https://jira.hpdd.intel.com/browse/LU-6452
>
> There is no response. Is there a solution?
As I read it, that's what you get when your osd_zfs module does not match
your spl/zfs. We build a matched set of kernel+spl+zfs+lustre and then
leave it alone until we build another set. (We are currently running this
combo in production for Lustre on ZFS:
2.6.32-504.16.2 + 0.6.4.1-1 + 2.5.3-5chaos.) Also note that IPoIB is
very broken on the -504.8.1 kernel.

/Peter K

> This is Lustre 2.7.0 with (now) zfs 0.6.4.1-1, which was current when
> the server was built. I see a number of recent emails about updates
> to Lustre sources against this zfs version, but is there a solution
> for the standard set of 2.7.0 Lustre rpms? Or any solution that will
> get me un-stuck?
>
> [root@umdist01 ~]# cat /etc/redhat-release
> Scientific Linux release 6.6 (Carbon)
> [root@umdist01 ~]# uname -r
> 2.6.32-504.8.1.el6_lustre.x86_64
>
> I note that my first OSS server, initially built in March against zfs
> 0.6.3-1, seems to be happy and definitely does not exhibit this
> issue. That server has now also been updated to 0.6.4.1, and I am loath
> to reboot it now (up for 40 days) for fear of losing access to all of
> its stored files. Perhaps I should downgrade to the earlier zfs and
> cut off automatic zfs updates from that repo?
>
> Thanks,
> bob
>
> _______________________________________________
> lustre-discuss mailing list
> lustre-discuss@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
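PS: On cutting off automatic zfs updates from the repo, which Bob asked
about above — one common way to freeze a working kernel+spl+zfs+lustre
set is to pin the installed zfs/spl packages so that a routine
"yum update" cannot replace them underneath the matching osd_zfs module.
This is only a sketch (package glob patterns and repo file name may need
adjusting for your setup), not a tested recipe:

```shell
# Pin the currently installed zfs/spl packages using the versionlock
# plugin, so "yum update" skips them until you explicitly unlock:
yum install -y yum-plugin-versionlock
yum versionlock add 'zfs*' 'spl*'

# Alternatively, add an exclude line to the ZFS repo file (file name
# assumed; check what your repo package actually installed under
# /etc/yum.repos.d/) so that repo never offers zfs/spl updates:
#
#   exclude=zfs* spl*
#
# Either way, review the lock before a planned rebuild of the whole
# kernel+spl+zfs+lustre set:
yum versionlock list
```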