Peter Schuller writes:
I agree about the usefulness of fbarrier() vs. fsync(), BTW. The cool
thing is that on ZFS, fbarrier() is a no-op. It's implicit after
every system call.
That is interesting. Could this account for disproportionate kernel
CPU usage for applications that perform I/O one byte at a time, as
compared to other filesystems? (Never mind that the application
shouldn't do that to begin with.)
I just quickly measured this (overwriting files in
On Mon, 12 Feb 2007, Peter Schuller wrote:
Hello,
Often fsync() is used not because one cares that some piece of data is on
stable storage, but because one wants to ensure the subsequent I/O operations
are performed after previous I/O operations are on stable storage. In these
cases the latency introduced by an
On Mon, 12 Feb 2007, Toby Thain wrote:
[ ... ]
I'm no guru, but would not ZFS already require strict ordering for its
transactions ... which property Peter was exploiting to get fbarrier() for
free?
It achieves this by flushing the disk write cache when there is a need to
barrier. Which
2007/2/12, Frank Hofmann [EMAIL PROTECTED]:
On Mon, 12 Feb 2007, Chris Csanady wrote:
This is true for NCQ with SATA, but SCSI also supports ordered tags,
so it should not be necessary.
At least, that is my understanding.
Except that ZFS doesn't talk SCSI, it talks to a target driver. And
Jeff Bonwick,
Do you agree that there is a major tradeoff if it
builds up a wad of transactions in memory?
We lose the changes if we have an unstable
environment.
Thus, I don't quite understand why a 2-phase
approach to commits isn't done. First,
Do you agree that there is a major tradeoff if it
builds up a wad of transactions in memory?
I don't think so. We trigger a transaction group commit when we
have lots of dirty data, or 5 seconds elapse, whichever comes first.
In other words, we don't let updates get stale.
Jeff
That said, actually implementing the underlying mechanisms may not be
worth the trouble. It is only a matter of time before disks have fast
non-volatile memory like PRAM or MRAM, and then the need to do
explicit cache management basically disappears.
I meant fbarrier() as a syscall exposed
That is interesting. Could this account for disproportionate kernel
CPU usage for applications that perform I/O one byte at a time, as
compared to other filesystems? (Never mind that the application
shouldn't do that to begin with.)
No, this is entirely a matter of CPU efficiency in the current