[Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Anjana Kar
Is it possible to configure the MDT as ldiskfs and the OSTs with zfs
in Lustre 2.4? The server is running a Lustre kernel on a CentOS 6.4
system and has both the lustre-osd-ldiskfs and lustre-osd-zfs rpms installed.
The MDT is up as ldiskfs, but I get an error trying to configure the OST:

mkfs.lustre --fsname=lustrefs --reformat --ost --backfstype=zfs .

mkfs.lustre FATAL: unable to prepare backend (22)
mkfs.lustre: exiting with 22 (Invalid argument)

Thanks,
-Anjana Kar


Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Dilger, Andreas
In theory this should be possible, but we have not tested it for a long time
since it isn't a common configuration.

Note that in this case you need to configure the OST on a separate node from
the MDT. We have not implemented the ability to have multiple OSD types on the
same node. I don't recall the details of why this won't work, but it doesn't.

Cheers, Andreas

On 2013-10-07, at 4:10, Anjana Kar k...@psc.edu wrote:

 Is it possible to configure the MDT as ldiskfs and the OSTs with zfs
 in Lustre 2.4? The server is running a Lustre kernel on a CentOS 6.4
 system and has both the lustre-osd-ldiskfs and lustre-osd-zfs rpms installed.
 The MDT is up as ldiskfs, but I get an error trying to configure the OST:
 
 mkfs.lustre --fsname=lustrefs --reformat --ost --backfstype=zfs .
 
 mkfs.lustre FATAL: unable to prepare backend (22)
 mkfs.lustre: exiting with 22 (Invalid argument)
 
 Thanks,
 -Anjana Kar


Re: [Lustre-discuss] Trouble to migrate files from deactivated ost

2013-10-07 Thread Dilger, Andreas
On 2013/09/19 11:39 PM, Arman Khalatyan arm2...@gmail.com wrote:

Hello,
We are removing one OST from the Lustre 2.4.1 filesystem.
Only a few files are written on that OST, without any striping.

First we deactivate it on the MDS:
lctl --device 15 deactivate

then on a client:
lfs_migrate -y /lustre/arm2arm/Projects/EFRE-TESTS/RAID6TEST/arman-io-stresstest/stresstest/run_test.sh

/lustre/arm2arm/Projects/EFRE-TESTS/RAID6TEST/arman-io-stresstest/stresstest/run_test.sh: cannot swap layouts between /lustre/arm2arm/Projects/EFRE-TESTS/RAID6TEST/arman-io-stresstest/stresstest/run_test.sh and a volatile file (Operation not permitted)
error: migrate: migrate stripe file '/lustre/arm2arm/Projects/EFRE-TESTS/RAID6TEST/arman-io-stresstest/stresstest/run_test.sh' failed


What does "a volatile file" mean?

A volatile file is a new concept in Lustre 2.4 - it is a file that is created
in the filesystem but has no name (i.e. it is created as an open-unlinked
file). This is used to implement atomic file migration in 2.4, which is safe
for files that are in use.
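
For reference, the usual sequence for emptying an OST looks roughly like this
(a sketch only; the lctl device number, OST UUID, and mount point are
placeholders rather than values taken from this thread):

# on the MDS: stop new object allocations on the OST being emptied
lctl --device <ost_device_number> deactivate

# on a client: find all files with objects on that OST and migrate them
lfs find --obd <fsname>-OST<index>_UUID <mount_point> | lfs_migrate -y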

Cheers, Andreas
-- 
Andreas Dilger

Lustre Software Architect
Intel High Performance Data Division




Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Jeff Johnson
On 10/7/13 11:23 AM, Anjana Kar wrote:
 Here is the exact command used to create a raidz2 pool with 8+2 drives,
 followed by the error messages:

 mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs --index=0
 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2 /dev/sda /dev/sdc
 /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm /dev/sdo /dev/sdq /dev/sds

An additional suggestion: you should create zfs pools using persistent device
names like /dev/disk/by-path or /dev/disk/by-id. Standard 'sd' device names are
not persistent and could change after a reboot or hardware change, which would
be bad for a zpool holding data.

Also, I don't know if it's just email formatting, but be sure that command is
entered as a single command line:

mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs --index=0 \
--mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2 /dev/sda /dev/sdc \
/dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm /dev/sdo /dev/sdq /dev/sds
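
For example, the same command using persistent names would look something like
this (the by-id entries below are placeholders; substitute the actual links
shown by 'ls -l /dev/disk/by-id' for these drives):

mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs --index=0 \
--mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2 \
/dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2> ... /dev/disk/by-id/<disk10>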



--Jeff

-- 
--
Jeff Johnson
Co-Founder
Aeon Computing

jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite D - San Diego, CA 92117

High-performance Computing / Lustre Filesystems / Scale-out Storage



Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Scott Nolin



Ned


Here is the exact command used to create a raidz2 pool with 8+2 drives,
followed by the error messages:

mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs --index=0
--mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2 /dev/sda /dev/sdc
/dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm /dev/sdo /dev/sdq /dev/sds

mkfs.lustre FATAL: Invalid filesystem name /dev/sds

mkfs.lustre FATAL: unable to prepare backend (22)
mkfs.lustre: exiting with 22 (Invalid argument)

dmesg shows
ZFS: Loaded module v0.6.2-1, ZFS pool version 5000, ZFS filesystem version 5

Any suggestions on creating the pool separately?


Just make sure you can see /dev/sds in your system - if not, that's your 
problem.
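
As for creating the pool separately: one approach (just a sketch, not something
tested in this thread) is to build the zpool first with persistent device names
and then point mkfs.lustre at the existing pool/dataset, leaving the vdev list
off the mkfs.lustre line:

# create the raidz2 pool up front, using persistent device names
zpool create -o cachefile=none lustre-ost0 raidz2 \
/dev/disk/by-path/<disk1> /dev/disk/by-path/<disk2> ... /dev/disk/by-path/<disk10>

# then format the Lustre OST on the pre-existing pool
mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs --index=0 \
--mgsnode=10.10.101.171@o2ib lustre-ost0/ost0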


I would also suggest building this without using these top-level dev names. It
is very easy for them to change accidentally. If you're just testing it's fine,
but over time it will be a problem.


See
http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool

I like using vdev_id.conf with aliases that are meaningful to our sysadmins,
mapped to 'by-path' device names.
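
A minimal /etc/zfs/vdev_id.conf along those lines might look like this (the
alias names and by-path strings below are made-up placeholders):

# /etc/zfs/vdev_id.conf - map meaningful names onto persistent by-path links
alias ost0-d0  /dev/disk/by-path/pci-0000:03:00.0-sas-phy0-lun-0
alias ost0-d1  /dev/disk/by-path/pci-0000:03:00.0-sas-phy1-lun-0
# ...one alias per member disk...

# after running 'udevadm trigger', the aliases appear under /dev/disk/by-vdev/
# and can be used in place of sd names when creating the pool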


Scott





Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Ned Bass
On Mon, Oct 07, 2013 at 02:23:32PM -0400, Anjana Kar wrote:
 Here is the exact command used to create a raidz2 pool with 8+2 drives,
 followed by the error messages:
 
 mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs
 --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2
 /dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm
 /dev/sdo /dev/sdq /dev/sds
 
 mkfs.lustre FATAL: Invalid filesystem name /dev/sds

It seems that either the version of mkfs.lustre you are using has a
parsing bug, or there was some sort of syntax error in the actual
command entered.  If you are certain your command line is free from
errors, please post the version of lustre you are using, or report the
bug in the Lustre issue tracker.

Thanks,
Ned


Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Anjana Kar
On 10/07/2013 04:27 PM, Ned Bass wrote:
 On Mon, Oct 07, 2013 at 02:23:32PM -0400, Anjana Kar wrote:
 Here is the exact command used to create a raidz2 pool with 8+2 drives,
 followed by the error messages:

 mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs
 --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2
 /dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm
 /dev/sdo /dev/sdq /dev/sds

 mkfs.lustre FATAL: Invalid filesystem name /dev/sds
 It seems that either the version of mkfs.lustre you are using has a
 parsing bug, or there was some sort of syntax error in the actual
 command entered.  If you are certain your command line is free from
 errors, please post the version of lustre you are using, or report the
 bug in the Lustre issue tracker.

 Thanks,
 Ned

For building this server, I followed the steps from the walk-thru-build* for
CentOS 6.4, and added --with-spl and --with-zfs when configuring Lustre.
*https://wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821

The spl and zfs modules were installed from source for the Lustre 2.4 kernel
2.6.32.358.18.1.el6_lustre2.4.
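
For what it's worth, a configure invocation along those lines would look
roughly like this (the source paths are placeholders for wherever the
lustre-patched kernel, spl, and zfs trees live on the build host):

./configure --with-linux=/usr/src/kernels/<lustre-patched-kernel> \
            --with-spl=<path-to-spl-source> \
            --with-zfs=<path-to-zfs-source>
make rpms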

Device sds appears to be valid, but I will try issuing the command using
by-path names.

-Anjana


Re: [Lustre-discuss] ldiskfs for MDT and zfs for OSTs?

2013-10-07 Thread Ned Bass
I'm guessing your git checkout doesn't include this commit:

* 010a78e Revert LU-3682 tunefs: prevent tunefs running on a mounted device

It looks like the LU-3682 patch introduced a bug that could cause your issue,
so it has been reverted in the latest master.
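
A quick way to check whether a given checkout has both the patch and its revert
(assuming a git clone of the lustre tree):

# both the original LU-3682 commit and the revert should show up here
git log --oneline | grep LU-3682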

Ned

On Mon, Oct 07, 2013 at 04:54:13PM -0400, Anjana Kar wrote:
 On 10/07/2013 04:27 PM, Ned Bass wrote:
 On Mon, Oct 07, 2013 at 02:23:32PM -0400, Anjana Kar wrote:
 Here is the exact command used to create a raidz2 pool with 8+2 drives,
 followed by the error messages:
 
 mkfs.lustre --fsname=cajalfs --reformat --ost --backfstype=zfs
 --index=0 --mgsnode=10.10.101.171@o2ib lustre-ost0/ost0 raidz2
 /dev/sda /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk /dev/sdm
 /dev/sdo /dev/sdq /dev/sds
 
 mkfs.lustre FATAL: Invalid filesystem name /dev/sds
 It seems that either the version of mkfs.lustre you are using has a
 parsing bug, or there was some sort of syntax error in the actual
 command entered.  If you are certain your command line is free from
 errors, please post the version of lustre you are using, or report the
 bug in the Lustre issue tracker.
 
 Thanks,
 Ned
 
 For building this server, I followed steps from the walk-thru-build*
 for Centos 6.4,
 and added --with-spl and --with-zfs when configuring lustre..
 *https://wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821
 
 spl and zfs modules were installed from source for the lustre 2.4 kernel
 2.6.32.358.18.1.el6_lustre2.4
 
 Device sds appears to be valid, but I will try issuing the command
 using by-path
 names..
 
 -Anjana