From: John Bauer
Subject: Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

I'm guessing that you have more disk bandwidth than network bandwidth.
Adding more OSSes and distributing the OSTs among them would probably help
the general case, though not necessarily the single-dd case.
On 10/14/16, 3:22 PM, "lustre-discuss on behalf of Riccardo Veraldi" wrote:

>Hello,
>
>I would like to know how I may improve the situation of my Lustre cluster. [...]
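John's disk-versus-network guess is directly testable. A minimal sketch, not from the thread itself: it assumes iperf is installed on both ends, and the hostname is a placeholder.

  # between a client and the OSS: raw network bandwidth
  iperf -s                # run on the OSS
  iperf -c oss01          # run on a client; "oss01" is a placeholder hostname

  # on the OSS: watch aggregate pool throughput while a client runs dd
  zpool iostat 1

If iperf tops out well below the ~800MB/sec buffered figure, the network is the ceiling; if the pools idle while the client writes, look at the client or LNet side.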
On 14/10/16 14:31, Mark Hahn wrote:
>> anyway if I force direct I/O, for example using oflag=direct in dd,
>> the write performance drops as low as 8MB/sec with 1MB block size,
>> and each write has about 120ms of latency.
> but that's quite a small block size. do you approach buffered
> performance if you write significantly bigger blocks (8MB or more)?
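Mark's question can be answered empirically with a block-size sweep. A sketch only; the file path on the Lustre mount is hypothetical.

  # sweep direct I/O block sizes; dd prints its throughput on the last line
  for bs in 1M 4M 16M 64M; do
    dd if=/dev/zero of=/mnt/lustre/ddtest bs=$bs count=100 oflag=direct 2>&1 | tail -n1
  done

If throughput scales roughly with block size, per-write latency is the bottleneck rather than raw bandwidth.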
[...] mitigate this, as Andreas suggested.

- Patrick
From: Dilger, Andreas
Sent: Friday, October 14, 2016 4:38:19 PM
To: John Bauer; Riccardo Veraldi
Cc: lustre-discuss@lists.lustre.org; Patrick Farrell
Subject: Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

[...]
From: lustre-discuss on behalf of Patrick Farrell
Sent: Friday, October 14, 2016 3:12:22 PM
To: Riccardo Veraldi; lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] Lustre on ZFS poor direct I/O performance

Riccardo,

While the difference is extreme, direct I/O write performance will always
be poor. Direct I/O writes cannot be asynchronous, since they don't use
the page cache; only one write can be outstanding at a time with direct
I/O, so you cannot get the many writes in flight per thread you see with
buffered I/O.

- Patrick
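Patrick's description lines up with Riccardo's numbers. A quick sanity check, assuming exactly one synchronous 1MB write in flight at a time as described: 1MB per ~120ms round trip is 1/0.12 ≈ 8.3MB/sec, which is essentially the 8MB/sec Riccardo measures. Buffered writes see the same per-RPC latency but hide it by keeping many RPCs in flight from the page cache.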
From: lustre-discuss on behalf of Riccardo Veraldi
Sent: Friday, October 14, 2016 2:22:32 PM
To: lustre-discuss@lists.lustre.org
Subject: [lustre-discuss] Lustre on ZFS poor direct I/O performance

Hello,
I would like to know how I may improve the situation of my Lustre cluster.
I have 1 MDS and 1 OSS with 20 OSTs defined.
Each OST is an 8-disk RAIDZ2.
Single-process write performance is around 800MB/sec; however, if I force
direct I/O, for example using oflag=direct in dd, the write performance
drops as low as 8MB/sec with a 1MB block size, and each write has about
120ms of latency.
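For anyone wanting to reproduce the comparison, the pair of commands implied by Riccardo's report looks like this (a sketch; the mount-point path is a placeholder):

  # buffered write: ~800MB/sec reported
  dd if=/dev/zero of=/mnt/lustre/ddtest bs=1M count=1000

  # direct I/O write: ~8MB/sec and ~120ms per 1MB write reported
  dd if=/dev/zero of=/mnt/lustre/ddtest bs=1M count=1000 oflag=direct

One caveat: /dev/zero is maximally compressible, so if ZFS compression is enabled on the OST pools, the buffered figure may flatter the disks.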
Riccardo,

I would imagine that knowing the Lustre and ZFS versions you are using
would be useful to anyone trying to advise you.

Peter
On 10/14/16, 12:22 PM, "lustre-discuss on behalf of Riccardo Veraldi" wrote:

>Hello,
>
>I would like to know how I may improve the situation of my Lustre cluster. [...]
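Both versions Peter asks about can be read straight off the nodes. A minimal sketch, assuming ZFS is loaded as a kernel module (the usual ZFS-on-Linux setup):

  lctl get_param version           # Lustre version, on a client or server
  modinfo zfs | grep -iw version   # ZFS-on-Linux kernel module version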