Sorry, I thought I had given the Lustre and ZFS versions, but I did not.
Lustre version is 2.8.0; ZFS is

I understand that I cannot expect full performance from direct I/O, but on the previous Lustre cluster with ldiskfs and hardware RAID I was getting 300 MB/sec. Probably the HW RAID caching mechanism was helping with that.

On 14/10/16 13:12, Patrick Farrell wrote:


While the difference is extreme, direct I/O write performance will always be poor. Direct I/O writes cannot be asynchronous, since they don't use the page cache. This means Lustre cannot return from one write (and start the next) until it has finished transferring the data to the network.

This means you can only have one I/O in flight at a time. Good write performance from Lustre (or any network filesystem) depends on keeping a lot of data in flight at once.
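A quick way to see this effect (a sketch only; the mount point and sizes are hypothetical) is to run several direct-I/O writers concurrently, so that more than one transfer is in flight at a time:

    # Four concurrent O_DIRECT writers to separate files on the Lustre mount.
    # Aggregate bandwidth should scale far better than a single dd, since
    # several transfers are now in flight at once.
    for i in 1 2 3 4; do
        dd if=/dev/zero of=/mnt/lustre/dio_test.$i bs=1M count=1024 oflag=direct &
    done
    wait    # block until all background dd processes finish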

What sort of direct write performance were you hoping for? It will never match that 800 MB/s from one thread you see with buffered I/O.

- Patrick

*From:* lustre-discuss <> on behalf of Riccardo Veraldi <>
*Sent:* Friday, October 14, 2016 2:22:32 PM
*Subject:* [lustre-discuss] Lustre on ZFS poor direct I/O performance

I would like to know how I can improve the performance of my Lustre cluster.

I have 1 MDS and 1 OSS with 20 OSTs defined.

Each OST is an 8-disk RAIDZ2.

Single-process write performance is around 800 MB/sec.

However, if I force direct I/O, for example using oflag=direct in dd, the write performance drops as low as 8 MB/sec with a 1 MB block size, and each write has about 120 ms of latency.
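For reference, the test was essentially the following (the target path and count are placeholders):

    dd if=/dev/zero of=/mnt/lustre/testfile bs=1M count=1000 oflag=direct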

I used these ZFS settings:

options zfs zfs_prefetch_disable=1
options zfs zfs_txg_history=120
options zfs metaslab_debug_unload=1
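These are zfs module options (set in /etc/modprobe.d/zfs.conf, assuming the usual ZFS-on-Linux location); the values currently in effect can be checked at runtime through sysfs:

    # Check a current value at runtime (standard ZFS-on-Linux sysfs path):
    cat /sys/module/zfs/parameters/zfs_prefetch_disable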

I am quite worried about the low performance.

Any hints or suggestions that might help me improve the situation?

Thank you.

