Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-17 Thread Ben Evans
I'm guessing that you have more disk bandwidth than network bandwidth. Adding more OSSes and distributing the OSTs among them would probably help the general case, though not necessarily the single-dd case. On 10/14/16, 3:22 PM, "lustre-discuss on behalf of Riccardo Veraldi"
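For the single-stream case, the usual lever is file striping rather than more servers: a file striped across several OSTs can keep more disks busy from one writer. A minimal sketch, assuming a client mount at /mnt/lustre and a hypothetical benchmark directory:

```shell
# Stripe files created in this directory across all OSTs
# (-c -1 = use every available OST) with a 1 MiB stripe size.
lfs setstripe -c -1 -S 1M /mnt/lustre/benchdir

# Confirm the default layout new files in the directory will inherit.
lfs getstripe -d /mnt/lustre/benchdir
```

Striping helps aggregate bandwidth per file, but it does not remove the per-RPC latency penalty of direct I/O discussed below.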

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-15 Thread Riccardo Veraldi
On 14/10/16 14:31, Mark Hahn wrote: anyway if I force direct I/O, for example using oflag=direct in dd, the write performance drops as low as 8 MB/sec with a 1 MB block size, and each write has about 120 ms latency. but that's quite a small block size. Do you approach buffered performance if you write significantly bigger blocks?

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-15 Thread Riccardo Veraldi
*To:* Riccardo Veraldi; lustre-discuss@lists.lustre.org *Subject:* Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance Riccardo, While the difference is extreme, direct I/O write performance will always be poor. Direct I/O

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Patrick Farrell
From: Patrick Farrell <p...@cray.com> Sent: Friday, October 14, 2016 3:12:22 PM To: Riccardo Veraldi; lustre-discuss@lists.lustre.org Subject: Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance Riccardo, While the difference is extreme, direct I/O write performance will always be poor.

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Mark Hahn
anyway if I force direct I/O, for example using oflag=direct in dd, the write performance drops as low as 8 MB/sec with a 1 MB block size, and each write has about 120 ms latency. but that's quite a small block size. Do you approach buffered performance if you write significantly bigger blocks?
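Hahn's suggestion is easy to test by sweeping the dd block size; with a fixed round-trip cost per synchronous write, direct-I/O throughput should grow roughly linearly with block size until something else saturates. A sketch, where the target path is hypothetical and should point at a file on the Lustre client mount:

```shell
# Sweep direct-I/O block sizes against a (hypothetical) Lustre file.
# dd prints its own throughput summary on the last line of stderr.
TARGET=${TARGET:-/mnt/lustre/ddtest}
for bs in 1M 8M 64M; do
  dd if=/dev/zero of="$TARGET" bs="$bs" count=8 oflag=direct 2>&1 | tail -n 1
done
rm -f "$TARGET"
```

Keeping the total bytes written comparable across runs (count scaled inversely to bs would be stricter) makes the per-block-size throughput numbers easier to compare.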

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread John Bauer
> From: Riccardo Veraldi <riccardo.vera...@cnaf.infn.it> > Sent: Friday, October 14, 2016 2:22:32 PM > To: lustre-discuss@lists.lustre.org > Subject: [lustre-discuss] Lustre on ZFS poor direct I/O performance > > Hello, > > I would like to know how I may improve the situation of my Lustre cluster.

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Riccardo Veraldi
*From:* Riccardo Veraldi <riccardo.vera...@cnaf.infn.it> *Sent:* Friday, October 14, 2016 2:22:32 PM *To:* lustre-discuss@lists.lustre.org *Subject:* [lustre-discuss] Lustre on ZFS poor direct I/O performance Hello, I would like to know how I may improve the situation of my Lustre cluster. I have 1 MDS and 1 OSS

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Patrick Farrell
direct I/O. From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of Patrick Farrell <p...@cray.com> Sent: Friday, October 14, 2016 3:12:22 PM To: Riccardo Veraldi; lustre-discuss@lists.lustre.org Subject: Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Patrick Farrell
Subject: [lustre-discuss] Lustre on ZFS poor direct I/O performance Hello, I would like to know how I may improve the situation of my Lustre cluster. I have 1 MDS and 1 OSS with 20 OSTs defined. Each OST is an 8-disk RAIDZ2. Single-process write performance is around 800 MB/sec; however, if I force direct I/O,

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Jones, Peter A
Riccardo, I would imagine that knowing the Lustre and ZFS versions you are using would be useful information for anyone advising you. Peter On 10/14/16, 12:22 PM, "lustre-discuss on behalf of Riccardo Veraldi"

[lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Riccardo Veraldi
Hello, I would like to know how I may improve the situation of my Lustre cluster. I have 1 MDS and 1 OSS with 20 OSTs defined. Each OST is an 8-disk RAIDZ2. Single-process write performance is around 800 MB/sec; however, if I force direct I/O, for example using oflag=direct in dd, the write performance drops as low as 8 MB/sec with a 1 MB block size.
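The numbers in the report are internally consistent: with O_DIRECT each 1 MiB write is synchronous, so the per-write round trip bounds single-stream throughput at roughly block size divided by latency. A quick back-of-the-envelope check (treating MB and MiB as roughly interchangeable):

```shell
# One synchronous 1 MiB write per ~120 ms round trip caps a single
# stream at bs/latency: 1 MiB / 0.120 s ≈ 8.3 MiB/s, which matches
# the observed ~8 MB/sec.
awk 'BEGIN { printf "%.1f MiB/s\n", 1 / 0.120 }'

# Conversely, reaching ~800 MB/sec at 120 ms per round trip would
# need on the order of 800 * 0.120 MB in flight per write.
awk 'BEGIN { printf "%.0f MB per write\n", 800 * 0.120 }'
```

This is why buffered writes (which the client aggregates and pipelines) reach ~800 MB/sec while direct I/O at small block sizes cannot.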