Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-09 Thread Thomas Stibor
Hello Anjana,

I can confirm that this setup works (ZFS-MGS/MDT or LDISKFS-MGS/MDT and
ZFS-OSS/OST).

I used a CentOS 6.4
build: 
2.4.0-RC2-gd3f91c4-PRISTINE-2.6.32-358.6.2.el6_lustre.g230b174.x86_64
and the Lustre Packages from
http://downloads.whamcloud.com/public/lustre/latest-feature-release/el6/server/RPMS/x86_64/

ZFS was downloaded from ZoL (ZFS on Linux) and compiled/installed.
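For reference, the usual ZoL source build is roughly the following (spl
first, then zfs; the --with-spl path is illustrative and depends on where
you unpacked the spl sources):

cd spl && ./configure && make && make install
cd ../zfs && ./configure --with-spl=/usr/src/spl && make && make install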

SPL: Loaded module v0.6.2-1
SPL: using hostid 0x
ZFS: Loaded module v0.6.2-1, ZFS pool version 5000, ZFS filesystem version 5

I first ran into the same problem:

mkfs.lustre --fsname=lustrefs --reformat --ost --backfstype=zfs .
mkfs.lustre FATAL: unable to prepare backend (22)
mkfs.lustre: exiting with 22 (Invalid argument)

and saw that the ZFS libraries in /usr/local/lib were not known to CentOS 6.4.

A quick:

echo /usr/local/lib >> /etc/ld.so.conf.d/zfs.conf
echo /usr/local/lib64 >> /etc/ld.so.conf.d/zfs.conf
ldconfig

solved the problem.
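You can confirm the libraries are now visible to the loader with, e.g.:

ldconfig -p | grep -E 'libzfs|libzpool'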

(LDISKFS)
mkfs.lustre --reformat --mgs /dev/sda16
mkfs.lustre --reformat --fsname=zlust --mgsnode=10.16.0.104@o2ib0 --mdt
--index=0 /dev/sda5

(ZFS)
mkfs.lustre --reformat --mgs --backfstype=zfs mgs/mgs /dev/sda16
mkfs.lustre --reformat --fsname=zlust --mgsnode=10.16.0.104@o2ib0 --mdt
--index=0 --backfstype=zfs mdt0/mdt0 /dev/sda5

is working fine.
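After formatting, the targets mount the usual way; a sketch, assuming
illustrative mount points:

mount -t lustre /dev/sda16 /mnt/mgs   # ldiskfs MGS
mount -t lustre mgs/mgs /mnt/mgs      # zfs MGS
mount -t lustre mdt0/mdt0 /mnt/mdt    # zfs MDT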
The OSS/OST is a Debian Wheezy box with a 70-disk JBOD, kernel
3.6.11-lustre-tstibor-build (patch series 3.x-fc18.series), and
SPL/ZFS v0.6.2-1.

Best,
 Thomas

On 10/08/2013 05:40 PM, Anjana Kar wrote:
 The git checkout was on Sep. 20. Was the patch before or after?

 The zpool create command successfully creates a raidz2 pool, and mkfs.lustre
 does not complain, but

 [root@cajal kar]# zpool list
 NAME         SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
 lustre-ost0  36.2T  2.24M  36.2T  0%   1.00x  ONLINE  -

 [root@cajal kar]# /usr/sbin/mkfs.lustre --fsname=cajalfs --ost 
 --backfstype=zfs --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0

 [root@cajal kar]# /sbin/service lustre start lustre-ost0
 lustre-ost0 is not a valid lustre label on this node

 I think we'll be splitting up the MDS and OSTs on 2 nodes, as some of you
 said there could be other issues down the road, but thanks for all the good
 suggestions.

 -Anjana

 On 10/07/2013 07:24 PM, Ned Bass wrote:
 I'm guessing your git checkout doesn't include this commit:

 * 010a78e Revert LU-3682 tunefs: prevent tunefs running on a mounted device

 It looks like the LU-3682 patch introduced a bug that could cause your issue,
 so it's reverted in the latest master.

 Ned

 On Mon, Oct 07, 2013 at 04:54:13PM -0400, Anjana Kar wrote:
 On 10/07/2013 04:27 PM, Ned Bass wrote:
 On Mon, Oct 07, 2013 at 02:23:32PM -0400, Anjana Kar wrote:
 Here is the exact command used to create a raidz2 pool with 8+2 drives,
 followed by the error messages:

 mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs
 --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2
 /dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm
 /dev/sdo /dev/sdq /dev/sds

 mkfs.lustre FATAL: Invalid filesystem name /dev/sds
 It seems that either the version of mkfs.lustre you are using has a
 parsing bug, or there was some sort of syntax error in the actual
 command entered.  If you are certain your command line is free from
 errors, please post the version of lustre you are using, or report the
 bug in the Lustre issue tracker.

 Thanks,
 Ned
 For building this server, I followed steps from the walk-thru-build*
 for CentOS 6.4,
 and added --with-spl and --with-zfs when configuring lustre.
 *https://wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821

 spl and zfs modules were installed from source for the lustre 2.4 kernel
 2.6.32.358.18.1.el6_lustre2.4

 Device sds appears to be valid, but I will try issuing the command
 using by-path names.

 -Anjana






Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-08 Thread Scott Nolin
I would check to make sure your ldev.conf file is set up properly with the
lustre-ost0 label and the host name.
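For reference, an ldev.conf entry for a ZFS-backed OST looks roughly like
this (fields are local host, failover host or '-', label, device; the label
must be the target's Lustre service name, fsname-OSTindex, not the pool name):

#local  foreign  label            device
cajal   -        cajalfs-OST0000  zfs:lustre-ost0/ost0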


Scott

On 10/8/2013 10:40 AM, Anjana Kar wrote:

The git checkout was on Sep. 20. Was the patch before or after?

The zpool create command successfully creates a raidz2 pool, and mkfs.lustre
does not complain, but

[root@cajal kar]# zpool list
NAME         SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
lustre-ost0  36.2T  2.24M  36.2T  0%   1.00x  ONLINE  -

[root@cajal kar]# /usr/sbin/mkfs.lustre --fsname=cajalfs --ost
--backfstype=zfs --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0

[root@cajal kar]# /sbin/service lustre start lustre-ost0
lustre-ost0 is not a valid lustre label on this node

I think we'll be splitting up the MDS and OSTs on 2 nodes, as some of you
said there could be other issues down the road, but thanks for all the good
suggestions.

-Anjana

On 10/07/2013 07:24 PM, Ned Bass wrote:

I'm guessing your git checkout doesn't include this commit:

* 010a78e Revert LU-3682 tunefs: prevent tunefs running on a mounted device

It looks like the LU-3682 patch introduced a bug that could cause your issue,
so it's reverted in the latest master.

Ned

On Mon, Oct 07, 2013 at 04:54:13PM -0400, Anjana Kar wrote:

On 10/07/2013 04:27 PM, Ned Bass wrote:

On Mon, Oct 07, 2013 at 02:23:32PM -0400, Anjana Kar wrote:

Here is the exact command used to create a raidz2 pool with 8+2 drives,
followed by the error messages:

mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs
--index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2
/dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm
/dev/sdo /dev/sdq /dev/sds

mkfs.lustre FATAL: Invalid filesystem name /dev/sds

It seems that either the version of mkfs.lustre you are using has a
parsing bug, or there was some sort of syntax error in the actual
command entered.  If you are certain your command line is free from
errors, please post the version of lustre you are using, or report the
bug in the Lustre issue tracker.

Thanks,
Ned

For building this server, I followed steps from the walk-thru-build*
for CentOS 6.4,
and added --with-spl and --with-zfs when configuring lustre.
*https://wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821

spl and zfs modules were installed from source for the lustre 2.4 kernel
2.6.32.358.18.1.el6_lustre2.4

Device sds appears to be valid, but I will try issuing the command
using by-path names.

-Anjana










Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-08 Thread Ned Bass
On Tue, Oct 08, 2013 at 11:40:30AM -0400, Anjana Kar wrote:
 The git checkout was on Sep. 20. Was the patch before or after?

The bug was introduced on Sep. 10 and reverted on Sep. 24, so you hit
the lucky window.  :)
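If you want to confirm whether a checkout contains the revert, something
like this should show the commit if it's present (010a78e is the revert
commit mentioned earlier in the thread):

  git log --oneline | grep 010a78e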

 The zpool create command successfully creates a raidz2 pool, and mkfs.lustre
 does not complain, but

The pool you created with zpool create was just for testing.  I would
recommend destroying that pool, rebuilding your lustre packages from the
latest master (or better yet, a stable tag such as v2_4_1_0), and
starting over with your original mkfs.lustre command.  This would ensure
that your pool is properly configured for use with lustre.
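An untested sketch of that sequence, reusing the pool name and the command
from your earlier message:

  zpool destroy lustre-ost0
  mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs \
    --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2 \
    /dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm \
    /dev/sdo /dev/sdq /dev/sds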

If you'd prefer to keep this pool, you should set canmount=off on the
root dataset, as mkfs.lustre would have done:

  zfs set canmount=off lustre-ost0
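You can verify the setting afterwards with:

  zfs get canmount lustre-ost0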

 
 [root@cajal kar]# zpool list
 NAME         SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
 lustre-ost0  36.2T  2.24M  36.2T  0%   1.00x  ONLINE  -
 
 [root@cajal kar]# /usr/sbin/mkfs.lustre --fsname=cajalfs --ost
 --backfstype=zfs --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0

This command seems to be missing the dataset name, i.e. lustre-ost0/ost0
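Presumably it should have been:

  /usr/sbin/mkfs.lustre --fsname=cajalfs --ost --backfstype=zfs \
    --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0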

 
 [root@cajal kar]# /sbin/service lustre start lustre-ost0
 lustre-ost0 is not a valid lustre label on this node

As mentioned elsewhere, this looks like an ldev.conf configuration
error.

Ned


[Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Anjana Kar
Is it possible to configure the MDT as ldiskfs and the OSTs with zfs
in lustre 2.4? The server is running a lustre kernel on a CentOS 6.4
system and has both lustre-osd-ldiskfs and lustre-osd-zfs rpms installed.
The MDT is up as ldiskfs, but I get an error trying to configure the OST:

mkfs.lustre --fsname=lustrefs --reformat --ost --backfstype=zfs .

mkfs.lustre FATAL: unable to prepare backend (22)
mkfs.lustre: exiting with 22 (Invalid argument)

Thanks,
-Anjana Kar


Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Dilger, Andreas
In theory this should be possible, but we have not tested it for a long time,
since it isn't a common configuration.

Note that you need to configure the OST on a separate node from the MDT in this
case. We have not implemented the ability to have multiple OSD types on the
same node. I don't recall the details of why this won't work, but it doesn't.

Cheers, Andreas

On 2013-10-07, at 4:10, Anjana Kar k...@psc.edu wrote:

 Is it possible to configure the MDT as ldiskfs and the OSTs with zfs
 in lustre 2.4? The server is running a lustre kernel on a CentOS 6.4
 system and has both lustre-osd-ldiskfs and lustre-osd-zfs rpms installed.
 The MDT is up as ldiskfs, but I get an error trying to configure the OST:
 
 mkfs.lustre --fsname=lustrefs --reformat --ost --backfstype=zfs .
 
 mkfs.lustre FATAL: unable to prepare backend (22)
 mkfs.lustre: exiting with 22 (Invalid argument)
 
 Thanks,
 -Anjana Kar


Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Jeff Johnson
On 10/7/13 11:23 AM, Anjana Kar wrote:
 Here is the exact command used to create a raidz2 pool with 8+2 drives,
 followed by the error messages:

 mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs --index=0
 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2 /dev/sda /dev/sdc
 /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm /dev/sdo /dev/sdq /dev/sds

An additional suggestion: you should create zpools with persistent device
names like /dev/disk/by-path or /dev/disk/by-id. Standard 'sd' device
names are not persistent and can change after a reboot or hardware
change, which would be bad for a zpool holding data.
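For example, you can list the persistent equivalents of your current disks
and substitute them into the command (paths will differ on your system):

ls -l /dev/disk/by-path/ /dev/disk/by-id/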

Also, I don't know if it's just email formatting, but be sure that command
is all on one line (or uses explicit line continuations):

mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs --index=0 \
--mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2 /dev/sda /dev/sdc \
/dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm /dev/sdo /dev/sdq /dev/sds



--Jeff

-- 
--
Jeff Johnson
Co-Founder
Aeon Computing

jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite D - San Diego, CA 92117

High-performance Computing / Lustre Filesystems / Scale-out Storage



Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Scott Nolin





Here is the exact command used to create a raidz2 pool with 8+2 drives,
followed by the error messages:

mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs --index=0
--mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2 /dev/sda /dev/sdc
/dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm /dev/sdo /dev/sdq /dev/sds

mkfs.lustre FATAL: Invalid filesystem name /dev/sds

mkfs.lustre FATAL: unable to prepare backend (22)
mkfs.lustre: exiting with 22 (Invalid argument)

dmesg shows
ZFS: Loaded module v0.6.2-1, ZFS pool version 5000, ZFS filesystem version 5

Any suggestions on creating the pool separately?


Just make sure you can see /dev/sds in your system - if not, that's your problem.
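A quick check, for example:

ls -l /dev/sds
grep sds /proc/partitions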


I would also suggest considering building this without these top-level
dev names. It is very easy for them to change accidentally. If you're
just testing, that's fine, but over time it will be a problem.


See
http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool

I like using vdev_id.conf with meaningful (to our sysadmins) aliases to
the 'by-path' device names.
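For example, an /etc/zfs/vdev_id.conf along these lines (the PCI paths are
illustrative) gives stable, human-readable names under /dev/disk/by-vdev/:

alias ost0-d0  /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0
alias ost0-d1  /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:1:0

After editing the file, run 'udevadm trigger' to create the links.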


Scott





Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Ned Bass
On Mon, Oct 07, 2013 at 02:23:32PM -0400, Anjana Kar wrote:
 Here is the exact command used to create a raidz2 pool with 8+2 drives,
 followed by the error messages:
 
 mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs
 --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2
 /dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm
 /dev/sdo /dev/sdq /dev/sds
 
 mkfs.lustre FATAL: Invalid filesystem name /dev/sds

It seems that either the version of mkfs.lustre you are using has a
parsing bug, or there was some sort of syntax error in the actual
command entered.  If you are certain your command line is free from
errors, please post the version of lustre you are using, or report the
bug in the Lustre issue tracker.

Thanks,
Ned


Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Anjana Kar
On 10/07/2013 04:27 PM, Ned Bass wrote:
 On Mon, Oct 07, 2013 at 02:23:32PM -0400, Anjana Kar wrote:
 Here is the exact command used to create a raidz2 pool with 8+2 drives,
 followed by the error messages:

 mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs
 --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2
 /dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm
 /dev/sdo /dev/sdq /dev/sds

 mkfs.lustre FATAL: Invalid filesystem name /dev/sds
 It seems that either the version of mkfs.lustre you are using has a
 parsing bug, or there was some sort of syntax error in the actual
 command entered.  If you are certain your command line is free from
 errors, please post the version of lustre you are using, or report the
 bug in the Lustre issue tracker.

 Thanks,
 Ned

For building this server, I followed steps from the walk-thru-build* for
CentOS 6.4,
and added --with-spl and --with-zfs when configuring lustre.
*https://wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821

spl and zfs modules were installed from source for the lustre 2.4 kernel
2.6.32.358.18.1.el6_lustre2.4

Device sds appears to be valid, but I will try issuing the command using
by-path names.

-Anjana


Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Ned Bass
I'm guessing your git checkout doesn't include this commit:

* 010a78e Revert LU-3682 tunefs: prevent tunefs running on a mounted device

It looks like the LU-3682 patch introduced a bug that could cause your issue,
so it's reverted in the latest master.

Ned

On Mon, Oct 07, 2013 at 04:54:13PM -0400, Anjana Kar wrote:
 On 10/07/2013 04:27 PM, Ned Bass wrote:
 On Mon, Oct 07, 2013 at 02:23:32PM -0400, Anjana Kar wrote:
 Here is the exact command used to create a raidz2 pool with 8+2 drives,
 followed by the error messages:
 
 mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs
 --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2
 /dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm
 /dev/sdo /dev/sdq /dev/sds
 
 mkfs.lustre FATAL: Invalid filesystem name /dev/sds
 It seems that either the version of mkfs.lustre you are using has a
 parsing bug, or there was some sort of syntax error in the actual
 command entered.  If you are certain your command line is free from
 errors, please post the version of lustre you are using, or report the
 bug in the Lustre issue tracker.
 
 Thanks,
 Ned
 
 For building this server, I followed steps from the walk-thru-build*
 for CentOS 6.4,
 and added --with-spl and --with-zfs when configuring lustre.
 *https://wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821
 
 spl and zfs modules were installed from source for the lustre 2.4 kernel
 2.6.32.358.18.1.el6_lustre2.4
 
 Device sds appears to be valid, but I will try issuing the command
 using by-path names.
 
 -Anjana