On Jan 24, 2007, at 06:54, Roch - PAE wrote:
[EMAIL PROTECTED] writes:
Note also that for most applications, the size of their IO operations
would often not match the current page size of the buffer, causing
additional performance and scalability issues.
Thanks for mentioning this, I forgot about it.
And this feature is independent of whether or not the data is
DMA'ed straight into the user buffer.
I suppose so; however, it seems like it would make more sense to
configure a dataset property that specifically describes the caching
policy that is desired. When directio implies different
Roch
I've been chewing on this for a little while and had some thoughts
On Jan 15, 2007, at 12:02, Roch - PAE wrote:
Jonathan Edwards writes:
On Jan 5, 2007, at 11:10, Anton B. Rang wrote:
DIRECT IO is a set of performance optimisations to circumvent
shortcomings of a given filesystem.
Basically speaking, there needs to be some sort of strategy for
bypassing the ARC, or even parts of the ARC, for applications that
may need to advise the filesystem of either:
1) the delicate nature of imposing additional buffering on their
data flow, or
2) the fact that they are already well optimized
[EMAIL PROTECTED] wrote:
In order to protect the user pages while a DIO is in progress, we want
support from the VM that isn't presently implemented. To prevent a page
from being accessed by another thread, we have to unmap the TLB/PTE
entries and lock the page. There's a cost associated with
Note also that for most applications, the size of their IO operations
would often not match the current page size of the buffer, causing
additional performance and scalability issues.
Thanks for mentioning this, I forgot about it.
Since ZFS's default block size is configured to be larger than
Hi Roch,
You mentioned improved ZFS performance in the latest Nevada build (60
right now?)...I was curious if one would notice much of a performance
improvement between 54 and 60? Also, does anyone think the zfs_arc_max
tunable support will be made available as a patch to S10U3, or would that
DIRECT IO is a set of performance optimisations to circumvent shortcomings of
a given filesystem.
Direct I/O as generally understood (i.e. not UFS-specific) is an optimization
which allows data to be transferred directly between user data buffers and
disk, without a memory-to-memory copy.