Re: [zfs-discuss] Re: ZFS direct IO

2007-01-24 Thread Roch - PAE
[EMAIL PROTECTED] writes: Note also that for most applications, the size of their IO operations would often not match the current page size of the buffer, causing additional performance and scalability issues. Thanks for mentioning this, I forgot about it. Since ZFS's default

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-24 Thread Jonathan Edwards
On Jan 24, 2007, at 06:54, Roch - PAE wrote: [EMAIL PROTECTED] writes: Note also that for most applications, the size of their IO operations would often not match the current page size of the buffer, causing additional performance and scalability issues. Thanks for mentioning this, I

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-24 Thread johansen-osdev
And this feature is independent of whether or not the data is DMA'ed straight into the user buffer. I suppose so, however, it seems like it would make more sense to configure a dataset property that specifically describes the caching policy that is desired. When directio implies different
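(ZFS did later gain dataset-level cache-policy properties along the lines suggested here, namely `primarycache` and `secondarycache`, in builds after this thread. A minimal sketch of such a policy; the dataset name `tank/db` is a placeholder:)

```shell
# Keep only metadata for this dataset in the ARC -- useful when
# the application does its own buffering (e.g. a database).
zfs set primarycache=metadata tank/db

# Disable L2ARC caching for the same dataset entirely.
zfs set secondarycache=none tank/db

# Verify the resulting policy.
zfs get primarycache,secondarycache tank/db
```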

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread Jonathan Edwards
Roch I've been chewing on this for a little while and had some thoughts On Jan 15, 2007, at 12:02, Roch - PAE wrote: Jonathan Edwards writes: On Jan 5, 2007, at 11:10, Anton B. Rang wrote: DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given filesystem.

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread johansen-osdev
Basically speaking - there needs to be some sort of strategy for bypassing the ARC or even parts of the ARC for applications that may need to advise the filesystem of either: 1) the delicate nature of imposing additional buffering for their data flow 2) already well optimized applications

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread Bart Smaalders
[EMAIL PROTECTED] wrote: In order to protect the user pages while a DIO is in progress, we want support from the VM that isn't presently implemented. To prevent a page from being accessed by another thread, we have to unmap the TLB/PTE entries and lock the page. There's a cost associated with

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread johansen-osdev
Note also that for most applications, the size of their IO operations would often not match the current page size of the buffer, causing additional performance and scalability issues. Thanks for mentioning this, I forgot about it. Since ZFS's default block size is configured to be larger than
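(The size mismatch described here can be narrowed from the ZFS side by tuning the dataset record size. A hedged sketch; the dataset name `tank/db` is a placeholder:)

```shell
# ZFS's default recordsize is 128K, much larger than a typical
# 4K or 8K VM page. An application doing fixed 8K I/O can set a
# matching recordsize so each write maps onto one ZFS block.
zfs set recordsize=8k tank/db

# Check the effective value.
zfs get recordsize tank/db
```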

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-15 Thread Roch - PAE
Jonathan Edwards writes: On Jan 5, 2007, at 11:10, Anton B. Rang wrote: DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given filesystem. Direct I/O as generally understood (i.e. not UFS-specific) is an optimization which allows data to

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-15 Thread Jason J. W. Williams
Hi Roch, You mentioned improved ZFS performance in the latest Nevada build (60 right now?)...I was curious if one would notice much of a performance improvement between 54 and 60? Also, does anyone think the zfs_arc_max tunable-support will be made available as a patch to S10U3, or would that
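(For reference, the `zfs_arc_max` tunable mentioned here is set from `/etc/system` on Solaris; a minimal sketch, where the 2 GB cap is only an example value:)

```shell
# /etc/system fragment: cap the ARC at 2 GB (value in bytes).
# Takes effect after a reboot.
set zfs:zfs_arc_max = 0x80000000

# After reboot, the live ARC size can be observed with:
#   kstat -p zfs:0:arcstats:size
```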

[zfs-discuss] Re: ZFS direct IO

2007-01-05 Thread Anton B. Rang
DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given filesystem. Direct I/O as generally understood (i.e. not UFS-specific) is an optimization which allows data to be transferred directly between user data buffers and disk, without a memory-to-memory copy.
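(On UFS, the behaviour Anton describes is what `forcedirectio` requests; a sketch of the two usual ways to enable it, with placeholder device and mount paths:)

```shell
# Mount an entire UFS filesystem with direct I/O,
# bypassing the page cache for data transfers:
mount -F ufs -o forcedirectio /dev/dsk/c0t0d0s6 /export/data

# Or per-file from an application, via directio(3C):
#   directio(fd, DIRECTIO_ON);
```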

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-05 Thread Jonathan Edwards
On Jan 5, 2007, at 11:10, Anton B. Rang wrote: DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given filesystem. Direct I/O as generally understood (i.e. not UFS-specific) is an optimization which allows data to be transferred directly between user data