Re: [lustre-discuss] Interesting disk usage of tar of many small files on zfs-based Lustre 2.10

2017-08-04 Thread Nathan R.M. Crawford
… MDT, so I am waiting for the zfs 0.7.1 rpms to include https://github.com/zfsonlinux/zfs/pull/6439 before opening the file system up to non-scratch usage. -Nate > Alex. > On Aug 3, 2017, at 7:28 PM, Nathan R.M. Crawford <nrcra...@uci.edu> wrote: > Off-list, it was sug…

Re: [lustre-discuss] Interesting disk usage of tar of many small files on zfs-based Lustre 2.10

2017-08-03 Thread Nathan R.M. Crawford
…17 at 3:07 PM, Nathan R.M. Crawford <nrcra...@uci.edu> wrote: > In testing how to cope with naive users generating millions of tiny files, I noticed some surprising (to me) behavior on a Lustre 2.10/ZFS 0.7.0 system. > The test directory (based on actual user data) c…

[lustre-discuss] Interesting disk usage of tar of many small files on zfs-based Lustre 2.10

2017-08-03 Thread Nathan R.M. Crawford
In testing how to cope with naive users generating millions of tiny files, I noticed some surprising (to me) behavior on a Lustre 2.10/ZFS 0.7.0 system. The test directory (based on actual user data) contains about 4 million files (avg size 8.6K) in three subdirectories. Making tar files of …
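For anyone wanting to reproduce the comparison, a minimal scaled-down sketch, assuming a client mount at /mnt/lustre (all paths hypothetical); "du --apparent-size" vs. plain "du" shows logical size against the blocks actually allocated:

  # Scaled-down test tree: 3 subdirectories of 8 KiB files
  for d in sub1 sub2 sub3; do
    mkdir -p /mnt/lustre/tinytest/$d
    for i in $(seq 1 10000); do
      head -c 8192 /dev/urandom > /mnt/lustre/tinytest/$d/file$i
    done
  done

  # Logical size vs. space actually allocated for the source tree
  du -sh --apparent-size /mnt/lustre/tinytest
  du -sh /mnt/lustre/tinytest

  # Tar the tree back onto Lustre and compare again for the archive
  tar -cf /mnt/lustre/tinytest.tar -C /mnt/lustre tinytest
  du -h --apparent-size /mnt/lustre/tinytest.tar
  du -h /mnt/lustre/tinytest.tar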

Re: [lustre-discuss] lustre-discuss Digest, Vol 136, Issue 26

2017-07-31 Thread Nathan R.M. Crawford
… 1. Re: Install issues on 2.10.0 (John Casu) 2. How does Lustre client side caching work? (Joakim Ziegler) 3. LNET router (2.10.0) recommendations for heterogeneous (mlx5, qib) IB setup (Nathan R.M. Crawford) …

[lustre-discuss] LNET router (2.10.0) recommendations for heterogeneous (mlx5, qib) IB setup

2017-07-25 Thread Nathan R.M. Crawford
Hi All, We are gradually updating a cluster (OS, etc.) in-place, basically switching blocks of nodes from the old head node to the new. Until we can re-arrange the fabric at the next scheduled machine room power shutdown event, we are running two independent InfiniBand subnets. As I can't find …
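Not taken from the thread, but for reference, the usual shape of an LNET router bridging two IB subnets; the interface names (ib0/ib1), network labels (o2ib0/o2ib1), and router NID below are placeholders:

  # Router node, static config (/etc/modprobe.d/lnet.conf): one NI per subnet,
  # forwarding enabled (e.g. mlx5 on ib0, qib on ib1)
  options lnet networks="o2ib0(ib0),o2ib1(ib1)" forwarding="enabled"

  # Clients/servers on the o2ib0 fabric: reach o2ib1 through the router
  # (10.10.0.1@o2ib0 is a placeholder NID)
  options lnet networks="o2ib0(ib0)" routes="o2ib1 10.10.0.1@o2ib0"

  # Equivalent dynamic setup on the router with lnetctl (Lustre 2.7+)
  lnetctl lnet configure
  lnetctl net add --net o2ib0 --if ib0
  lnetctl net add --net o2ib1 --if ib1
  lnetctl set routing 1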

Re: [lustre-discuss] OSTs per OSS with ZFS

2017-07-06 Thread Nathan R.M. Crawford
On a somewhat-related question, what are the expected trade-offs when splitting the "striping" between ZFS (striping over vdevs) and Lustre (striping over OSTs)? Specific example: if one has an OSS with 40 disks and intends to use 10-disk raidz2 vdevs, how do these options compare? A) 4 OSTs, …
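To make the comparison concrete, a sketch of the two extremes on such an OSS; device, pool, and filesystem names plus the MGS NID are placeholders, and only option A comes from the message, the single-pool layout being the other end of the spectrum:

  # Option A: 4 OSTs, one pool per 10-disk raidz2 vdev (repeat for ost1-ost3)
  zpool create ost0pool raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj
  mkfs.lustre --ost --backfstype=zfs --fsname=scratch --index=0 \
      --mgsnode=10.10.0.2@o2ib ost0pool/ost0

  # Other extreme: 1 OST on one pool that stripes across all four raidz2 vdevs
  zpool create ostpool \
      raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
      raidz2 sdk sdl sdm sdn sdo sdp sdq sdr sds sdt \
      raidz2 sdu sdv sdw sdx sdy sdz sdaa sdab sdac sdad \
      raidz2 sdae sdaf sdag sdah sdai sdaj sdak sdal sdam sdan
  mkfs.lustre --ost --backfstype=zfs --fsname=scratch --index=0 \
      --mgsnode=10.10.0.2@o2ib ostpool/ost0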

Re: [lustre-discuss] Lustre-2.9 / ZFS-0.7.0 tuning recommendations

2017-03-24 Thread Nathan R.M. Crawford
…orts running to match the IO potential. > Thanks, > Keith …

Re: [lustre-discuss] Lustre-2.9 / ZFS-0.7.0 tuning recommendations

2017-03-24 Thread Nathan R.M. Crawford
Hi Andreas, Thanks for the clarification! On Fri, Mar 24, 2017 at 12:45 AM, Dilger, Andreas <andreas.dil...@intel.com> wrote: > On Mar 23, 2017, at 18:38, Nathan R.M. Crawford <nrcra...@uci.edu> wrote: > Hi All, > I've been evaluatin…

[lustre-discuss] Lustre-2.9 / ZFS-0.7.0 tuning recommendations

2017-03-23 Thread Nathan R.M. Crawford
Hi All, I've been evaluating some of the newer options that should be available with Lustre 2.9 on top of ZFSonLinux 0.7.0 (currently at rc3). Specifically, trying 16MB RPCs/blocks on the OSTs and large dnodes on the MDTs. I've gathered bits and pieces from discussions in the zfs and lustre …
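For reference, the knobs involved look roughly like the sketch below; pool and dataset names are placeholders, and exact parameter names may differ slightly between releases:

  # ZFS module: raise the record size ceiling to 16 MiB (default cap is 1 MiB)
  echo 16777216 > /sys/module/zfs/parameters/zfs_max_recordsize

  # OST dataset: 16 MiB records (needs the large_blocks pool feature)
  zfs set recordsize=16M ostpool/ost0

  # MDT: large dnodes (large_dnode pool feature plus dnodesize=auto)
  zpool set feature@large_dnode=enabled mdtpool
  zfs set dnodesize=auto mdtpool/mdt0

  # Lustre side: 16 MiB bulk RPCs (server brw_size in MiB, client pages per RPC)
  lctl set_param obdfilter.*.brw_size=16
  lctl set_param osc.*.max_pages_per_rpc=4096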

[lustre-discuss] 2.7.63 compilation problem with OFED-3.18-1-20151109-0727 and ZFS 0.6.5.2; o2ib failure

2015-11-20 Thread Nathan R.M. Crawford
Hi All, I'm attempting to install Lustre on ZFS on EL6.7 using TrueScale HCAs. Not surprisingly, this involves several pre-release versions of various packages. Also not surprisingly, there are some snags. Version info: Scientific Linux (CERN) 6.7 using kernel 2.6.32-573.7.1.el6.x86_64, ZFS …
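For anyone in a similar spot, the general shape of the configure line when building against an external OFED stack and out-of-tree SPL/ZFS is sketched below; all source paths are hypothetical and need to match the local installs:

  # Build Lustre against the running kernel, OFED's compat-rdma tree, and
  # the matching SPL/ZFS source trees (paths are placeholders)
  ./configure --with-linux=/usr/src/kernels/2.6.32-573.7.1.el6.x86_64 \
              --with-o2ib=/usr/src/compat-rdma \
              --with-spl=/usr/src/spl-0.6.5.2 \
              --with-zfs=/usr/src/zfs-0.6.5.2
  make rpms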