On Tue, Oct 08, 2013 at 11:40:30AM -0400, Anjana Kar wrote:
> The git checkout was on Sep. 20. Was the patch before or after?

The bug was introduced on Sep. 10 and reverted on Sep. 24, so you hit
the lucky window. :)

> The zpool create command successfully creates a raidz2 pool, and
> mkfs.lustre does not complain, but

The pool you created with zpool create was just for testing. I would
recommend destroying that pool, rebuilding your lustre packages from
the latest master (or better yet, a stable tag such as v2_4_1_0), and
starting over with your original mkfs.lustre command. This would
ensure that your pool is properly configured for use with lustre.
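
In case it helps, that sequence would look roughly like the
following -- the source path, tag, and configure options here are
only examples, so adjust them to match how you built lustre
originally:

  # destroy the test pool (irreversible -- double-check the name first)
  zpool destroy lustre-ost0

  # rebuild the lustre packages from a known tag
  cd /path/to/lustre-source
  git checkout v2_4_1_0
  sh autogen.sh
  ./configure --with-zfs
  make rpms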
If you'd prefer to keep this pool, you should set canmount=off on the
root dataset, as mkfs.lustre would have done:

  zfs set canmount=off lustre-ost0

> [root@cajal kar]# zpool list
> NAME          SIZE  ALLOC   FREE   CAP  DEDUP  HEALTH  ALTROOT
> lustre-ost0  36.2T  2.24M  36.2T    0%  1.00x  ONLINE  -
>
> [root@cajal kar]# /usr/sbin/mkfs.lustre --fsname=cajalfs --ost
> --backfstype=zfs --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0

This command seems to be missing the dataset name, i.e. lustre-ost0/ost0
(see the corrected example below).

> [root@cajal kar]# /sbin/service lustre start lustre-ost0
> lustre-ost0 is not a valid lustre label on this node

As mentioned elsewhere, this looks like an ldev.conf configuration error.
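
For completeness, the mkfs.lustre invocation with the dataset name
would look something like this (the dataset name "ost0" is only an
example):

  /usr/sbin/mkfs.lustre --fsname=cajalfs --ost --backfstype=zfs \
      --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0

The init script looks up targets in /etc/ldev.conf, so once the OST is
formatted you would also want an entry roughly like the one below.
The hostname and dataset are illustrative; use your actual node name
and the dataset you formatted:

  # /etc/ldev.conf
  # local  foreign  label            device
  cajal    -        cajalfs-OST0000  zfs:lustre-ost0/ost0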

Ned

_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss