I'm guessing that you have more disk bandwidth than network bandwidth.
Adding more OSSes and distributing the OSTs among them would probably help
the general case, though not necessarily the single-dd case.

On 10/14/16, 3:22 PM, "lustre-discuss on behalf of Riccardo Veraldi" wrote:
On 14/10/16 14:31, Mark Hahn wrote:
>> anyway, if I force direct I/O, for example using oflag=direct in dd,
>> the write performance drops as low as 8MB/sec with a 1MB block size,
>> and each write has about 120ms of latency.
> but that's quite a small block size. do you approach buffered
> performance if you write significantly bigger blocks?
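Mark's suggestion could be tried along these lines (a sketch only: the target path is a placeholder that defaults to a local file so the commands run anywhere, but on the real system it would point at a file on the Lustre client mount):

```shell
# Illustrative only: repeat the direct I/O test with a much larger block
# size, as Mark suggests. TARGET is a placeholder path, not from the thread.
TARGET="${TARGET:-./ddtest.bin}"

run() {  # run dd and print only its final throughput summary line
  dd "$@" 2>&1 | tail -n 1
}

# 1MB direct writes (the slow case reported in the thread)
run if=/dev/zero of="$TARGET" bs=1M count=16 oflag=direct

# 64MB direct writes: same amount of data, far fewer synchronous round trips
run if=/dev/zero of="$TARGET" bs=64M count=2 oflag=direct

rm -f "$TARGET"
```

If direct I/O is mostly paying per-write latency, throughput should scale roughly with the block size until the pipe fills.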
From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf of
Patrick Farrell <p...@cray.com>
Sent: Friday, October 14, 2016 3:12:22 PM
To: Riccardo Veraldi; lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

Riccardo,

While the difference is extreme, direct I/O write performance will
always be poor.
From: Riccardo Veraldi <riccardo.vera...@cnaf.infn.it>
Sent: Friday, October 14, 2016 2:22:32 PM
To: lustre-discuss@lists.lustre.org
Subject: [lustre-discuss] Lustre on ZFS poor direct I/O performance
Hello,

I would like to know how I may improve the situation of my Lustre cluster.
I have 1 MDS and 1 OSS with 20 OSTs defined.
Each OST is an 8-disk RAIDZ2.
Single-process write performance is around 800MB/sec;
anyway, if I force direct I/O, for example using oflag=direct in dd,
the write performance drops as low as 8MB/sec with a 1MB block size,
and each write has about 120ms of latency.
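For reference, the two cases being compared can be reproduced with dd roughly as follows (a sketch: on the real cluster TARGET would be a file on the Lustre client mount, which is an assumed path; it defaults to a local file here so the commands run anywhere):

```shell
# Sketch of the buffered-vs-direct comparison described above.
# TARGET is a placeholder; point it at a file on the Lustre mount to
# reproduce the reported numbers.
TARGET="${TARGET:-./lustre-write-test.bin}"

# Buffered write (~800MB/s reported): writes complete into the client
# page cache and are flushed to the OST asynchronously.
dd if=/dev/zero of="$TARGET" bs=1M count=64 2>&1 | tail -n 1

# Direct write (~8MB/s reported): oflag=direct bypasses the page cache,
# so each 1MB write blocks until the data reaches the OST (~120ms each).
dd if=/dev/zero of="$TARGET" bs=1M count=64 oflag=direct 2>&1 | tail -n 1

rm -f "$TARGET"
```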
Riccardo,

I would imagine that knowing the Lustre and ZFS versions you are using
would be useful info for anyone advising you.

Peter

On 10/14/16, 12:22 PM, "lustre-discuss on behalf of Riccardo Veraldi" wrote:
> Hello,
>
> I would like to know how I may improve the situation of my Lustre cluster.
> I have 1 MDS and 1 OSS with 20 OSTs defined.
> Each OST is an 8-disk RAIDZ2.
> Single-process write performance is around 800MB/sec;
> anyway, if I force direct I/O, for example using oflag=direct in dd,
> the write performance drops as low as 8MB/sec with a 1MB block size,
> and each write has about 120ms of latency.
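A sketch of how the versions Peter asks about might be gathered (command availability and paths vary by Lustre/ZFS release; these are common probe locations, not guaranteed on every system, so each one is guarded):

```shell
# Report Lustre and ZFS versions, falling back gracefully on machines
# where neither stack is installed.
if command -v lctl >/dev/null 2>&1; then
  LUSTRE_VER=$(lctl --version)           # Lustre utilities/client version
else
  LUSTRE_VER="lctl not found"
fi
echo "Lustre: $LUSTRE_VER"

if [ -r /sys/module/zfs/version ]; then
  ZFS_VER=$(cat /sys/module/zfs/version) # loaded ZFS kernel module version
else
  ZFS_VER=$(modinfo -F version zfs 2>/dev/null || echo "zfs module version not found")
fi
echo "ZFS: $ZFS_VER"
```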