Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-17 Thread Ben Evans
I'm guessing that you have more disk bandwidth than network bandwidth. Adding more OSSes and distributing the OSTs among them would probably help the general case, not necessarily the single dd case. On 10/14/16, 3:22 PM, "lustre-discuss on behalf of Riccardo Veraldi" wrote: >Hello, > >I would l
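Ben's suggestion of spreading OSTs across more OSSes applies to aggregate load; for a single stream, striping one file over several OSTs is the usual lever. A minimal sketch, assuming a client mount at /mnt/lustre (the path, stripe count, and stripe size are illustrative assumptions, not the poster's setup), with a fallback so the command degrades gracefully off-cluster:

```shell
# Stripe a new file across 4 OSTs with a 4 MiB stripe size (hypothetical values);
# lfs is the standard Lustre client utility.
lfs setstripe -c 4 -S 4M /mnt/lustre/bigfile 2>/dev/null \
  || echo "lfs not available: run this on a Lustre client"
```

`lfs getstripe /mnt/lustre/bigfile` would confirm the resulting layout on a real client.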

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-15 Thread Riccardo Veraldi
On 14/10/16 14:31, Mark Hahn wrote: anyway if I force direct I/O, for example using oflag=direct in dd, the write performance drops as low as 8 MB/s with a 1 MB block size, and each write has about 120 ms of latency. but that's quite a small block size. do you approach buffered performance if you

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-15 Thread Riccardo Veraldi
From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of Patrick Farrell <p...@cray.com> Sent: Friday, October 14, 2016 3:12:22 PM To: Riccardo Veraldi; lustre-discuss@lists.lustre.org

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Patrick Farrell
mitigate this, as Andreas suggested. - Patrick From: Dilger, Andreas Sent: Friday, October 14, 2016 4:38:19 PM To: John Bauer; Riccardo Veraldi Cc: lustre-discuss@lists.lustre.org; Patrick Farrell Subject: Re: [lustre-discuss] Lustre on ZFS poor direct I/O per

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Dilger, Andreas
ts.lustre.org Subject: Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance Riccardo, While the difference is extreme, direct I/O write performance will always be poor. Direct I/O writes cannot be asynchronous, since they don't use

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Mark Hahn
anyway if I force direct I/O, for example using oflag=direct in dd, the write performance drops as low as 8 MB/s with a 1 MB block size, and each write has about 120 ms of latency. but that's quite a small block size. do you approach buffered performance if you write significantly bigger blocks (8-
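Mark's question can be checked directly with dd at increasing block sizes. A sketch follows; LUSTRE_DIR should point at the Lustre mount but defaults to /tmp here so the commands run anywhere (the path and the sizes are assumptions, and some filesystems, e.g. tmpfs, reject O_DIRECT entirely, hence the fallback messages):

```shell
# Target file for the throughput comparison (hypothetical location).
dst=${LUSTRE_DIR:-/tmp}/dd_directio_test

# Buffered baseline, flushed at the end so the number is honest.
dd if=/dev/zero of="$dst" bs=1M count=64 conv=fsync

# Direct I/O at the 1 MB block size from the original report,
# then at a much larger block size to amortize per-write latency.
dd if=/dev/zero of="$dst" bs=1M count=64 oflag=direct || echo "O_DIRECT unsupported on this filesystem"
dd if=/dev/zero of="$dst" bs=16M count=4 oflag=direct || echo "O_DIRECT unsupported on this filesystem"

rm -f "$dst"
```

Comparing the reported MB/s across the three runs shows how much of the gap is per-request latency rather than raw bandwidth.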

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread John Bauer
ad) can be outstanding at a time with > direct I/O. > > From: lustre-discuss on behalf of > Patrick Farrell > Sent: Friday, October 14, 2016 3:12:22 PM > To: Riccardo Veraldi; lustre-discuss@lists.lustre.org > Subject: Re: [lustre-discuss] Lustre on ZFS poor direct I/O perf

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Riccardo Veraldi
2:22:32 PM To: lustre-discuss@lists.lustre.org Subject: [lustre-discuss] Lustre on ZFS poor direct I/O performance Hello, I would like to know how I may improve the situation of my Lustre cluster. I have 1 MDS and 1 OSS with 20 OSTs defined. Each OST is an 8-disk RAIDZ2. A single process

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Patrick Farrell
ime with direct I/O. From: lustre-discuss on behalf of Patrick Farrell Sent: Friday, October 14, 2016 3:12:22 PM To: Riccardo Veraldi; lustre-discuss@lists.lustre.org Subject: Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance Riccardo, While the difference is extreme, direct I

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Patrick Farrell
hread you see with buffered I/O. - Patrick From: lustre-discuss on behalf of Riccardo Veraldi Sent: Friday, October 14, 2016 2:22:32 PM To: lustre-discuss@lists.lustre.org Subject: [lustre-discuss] Lustre on ZFS poor direct I/O performance Hello, I would lik

Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Jones, Peter A
Riccardo, I would imagine that knowing the Lustre and ZFS versions you are using would be useful to anyone advising you. Peter On 10/14/16, 12:22 PM, "lustre-discuss on behalf of Riccardo Veraldi" wrote: >Hello, > >I would like to know how I may improve the situation of my Lustre cluster. >
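The versions Peter asks for can be gathered with a couple of commands on the servers. A sketch, with fallbacks so it also exits cleanly off-cluster (the tool names are standard Lustre/ZFS utilities, but verify against your release):

```shell
# Lustre version as reported by the client/server stack.
lctl get_param -n version 2>/dev/null || echo "lctl not found: run on a Lustre node"

# ZFS kernel module version on the OSS.
modinfo zfs 2>/dev/null | grep '^version' || echo "zfs kernel module not present"
```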

[lustre-discuss] Lustre on ZFS poor direct I/O performance

2016-10-14 Thread Riccardo Veraldi
Hello, I would like to know how I may improve the situation of my Lustre cluster. I have 1 MDS and 1 OSS with 20 OSTs defined. Each OST is an 8-disk RAIDZ2. Single-process write performance is around 800 MB/s; however, if I force direct I/O, for example using oflag=direct in dd, the write performanc