On Mon, May 11, 2009 at 01:29:06PM -0500, Anthony Liguori wrote:
I don't think we realistically can.
Maybe two fds? One open in O_SYNC and one not. Is such a thing sane?
O_SYNC and O_DIRECT currently behave in exactly the same way. In none
of the filesystems I've looked at is there an
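The "two fds" idea could be sketched at the syscall level like this (Python's os module stands in for the C calls; the path and function name are made up for illustration, this is not qemu code):

```python
import os
import tempfile

def open_pair(path):
    """Two descriptors on the same image: ordinary writes go through
    fd_fast (host page cache), writes that must be durable go through
    fd_sync (O_SYNC).  A sketch of the idea, not qemu code."""
    fd_fast = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    fd_sync = os.open(path, os.O_RDWR | os.O_SYNC)
    return fd_fast, fd_sync

path = os.path.join(tempfile.mkdtemp(), "image.raw")
fd_fast, fd_sync = open_pair(path)
os.write(fd_fast, b"cached write")   # may sit in the host page cache
os.pwrite(fd_sync, b"durable", 0)    # on stable storage when this returns
os.close(fd_fast)
os.close(fd_sync)
```

Whether juggling two descriptors on one image is sane is exactly the question raised above.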
On Tue, May 12, 2009 at 11:35:23AM +0300, Avi Kivity wrote:
The cache size on disks is constantly growing, and if you lose cache
it doesn't really matter how much you lose but what you lose.
Software errors won't cause data loss on a real disk (firmware bugs
will, but the firmware is
On Mon, May 11, 2009 at 07:49:37PM +0300, Avi Kivity wrote:
Maybe we should add a fourth cache= mode then. But
cache=writeback+fsync doesn't correspond to any real world drive; in the
real world you're limited to power failures and a few megabytes of cache
(typically less),
On Mon, May 11, 2009 at 12:47:58PM -0500, Anthony Liguori wrote:
But how do we define the data integrity guarantees to the user of
cache=writeback+fsync? It seems to require a rather detailed knowledge
of Linux's use of T_FLUSH operations.
It does work the same as for disks with writeback
On Mon, May 11, 2009 at 11:38:09AM -0500, Anthony Liguori wrote:
Right now it doesn't, but it probably should.
So then with cache=writeback, fsync behaves itself but O_DIRECT writes
do not.
Right now O_DIRECT does not do an explicit cache flush, but due to the
way barriers are
Christoph Hellwig wrote:
On Mon, May 11, 2009 at 07:49:37PM +0300, Avi Kivity wrote:
Maybe we should add a fourth cache= mode then. But
cache=writeback+fsync doesn't correspond to any real world drive; in the
real world you're limited to power failures and a few megabytes of cache
On Mon, 11 May 2009 06:09:08 pm Christoph Hellwig wrote:
Do we need a new feature flag for this command or can we expect that
all previous barrier support was buggy enough anyway?
You mean reuse the VIRTIO_BLK_F_BARRIER for this as well? Seems fine.
AFAIK only lguest offered that, and lguest
On Tue, 12 May 2009 11:48:36 pm Christian Borntraeger wrote:
On Tuesday 12 May 2009 15:54:14, Rusty Russell wrote:
On Mon, 11 May 2009 06:09:08 pm Christoph Hellwig wrote:
Do we need a new feature flag for this command or can we expect that
all previous barrier support was buggy enough
Currently virtio-blk supports barriers for ordering requests, which
is enough to guarantee filesystem metadata integrity with writeback
caches, but it does not support any way to flush that writeback cache
to guarantee that data is stable on disk after an fsync.
This patch implements a new
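The backend side of such a flush command amounts to calling fsync(2) on the image file when the guest issues it. A toy dispatch sketch (the request-type value 4 matches the later virtio spec's VIRTIO_BLK_T_FLUSH; the function and its signature are invented for illustration, not qemu's implementation):

```python
import os
import tempfile

# Request types as in the virtio-blk protocol; T_FLUSH is the new
# command discussed in the thread (value 4 per the later virtio spec).
VIRTIO_BLK_T_IN = 0
VIRTIO_BLK_T_OUT = 1
VIRTIO_BLK_T_FLUSH = 4

def handle_request(fd, req_type, offset=0, data=b"", length=0):
    """Toy backend dispatch: a flush drains the host's writeback cache
    for the image via fsync(2); reads and writes are ordinary I/O.
    A sketch of the mechanism, not qemu's code."""
    if req_type == VIRTIO_BLK_T_FLUSH:
        os.fsync(fd)          # data stable on disk when this returns
        return b""
    if req_type == VIRTIO_BLK_T_OUT:
        os.pwrite(fd, data, offset)
        return b""
    if req_type == VIRTIO_BLK_T_IN:
        return os.pread(fd, length, offset)
    raise ValueError("unknown request type")
```

A guest fsync then maps end to end: guest filesystem issues the flush command, the backend runs fsync on the image, and completion is reported only once the host cache is drained.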
Christoph Hellwig wrote:
Currently virtio-blk supports barriers for ordering requests, which
is enough to guarantee filesystem metadata integrity with writeback
caches, but it does not support any way to flush that writeback cache
to guarantee that data is stable on disk after an fsync.
This
On Mon, May 11, 2009 at 09:51:40AM -0500, Anthony Liguori wrote:
What typically triggers a flush operation?
fsync.
I would assume an fsync would, but would a flush happen after every
O_DIRECT write?
Right now it doesn't, but it probably should.
If the backend implementation of T_FLUSH is
Christoph Hellwig wrote:
If the backend implementation of T_FLUSH is fsync, I would think that
this would result in rather poor performance for O_DIRECT operations in
the guest.
Right now it's fsync. By the time I submit the backend change it
will still be fsync, but at least called
On Mon, May 11, 2009 at 06:45:50PM +0300, Avi Kivity wrote:
Right now it's fsync. By the time I submit the backend change it
will still be fsync, but at least called from the posix-aio-compat
thread pool.
I think if we have cache=writeback we should ignore this.
It's only needed for
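Moving the blocking fsync off the main loop and into a pool of worker threads, as described above, can be sketched with the stdlib's thread pool standing in for qemu's posix-aio-compat threads (names here are invented for illustration):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# The blocking fsync runs on a worker thread so the main (event) loop
# is not stalled; the future completes once the flush is durable.
# A sketch of the idea behind the posix-aio-compat change, not its code.
pool = ThreadPoolExecutor(max_workers=1)

def submit_flush(fd):
    return pool.submit(os.fsync, fd)

fd = os.open(os.path.join(tempfile.mkdtemp(), "image.raw"),
             os.O_RDWR | os.O_CREAT, 0o600)
os.write(fd, b"pending data")
submit_flush(fd).result()   # the request's completion would be signalled here
os.close(fd)
```

The point of the thread pool is latency, not semantics: the flush still costs a full fsync, but other requests keep flowing while it runs.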
Christoph Hellwig wrote:
On Mon, May 11, 2009 at 09:51:40AM -0500, Anthony Liguori wrote:
What typically triggers a flush operation?
fsync.
I would assume an fsync would, but would a flush happen after every
O_DIRECT write?
Right now it doesn't, but it probably should.
Christoph Hellwig wrote:
On Mon, May 11, 2009 at 06:45:50PM +0300, Avi Kivity wrote:
Right now it's fsync. By the time I submit the backend change it
will still be fsync, but at least called from the posix-aio-compat
thread pool.
I think if we have cache=writeback we should
Avi Kivity wrote:
Christoph Hellwig wrote:
On Mon, May 11, 2009 at 06:45:50PM +0300, Avi Kivity wrote:
Right now it's fsync. By the time I submit the backend change it
will still be fsync, but at least called from the posix-aio-compat
thread pool.
I think if we have
Anthony Liguori wrote:
Avi Kivity wrote:
Christoph Hellwig wrote:
On Mon, May 11, 2009 at 06:45:50PM +0300, Avi Kivity wrote:
Right now it's fsync. By the time I submit the backend change it
will still be fsync, but at least called from the posix-aio-compat
thread pool.
I
Avi Kivity wrote:
Anthony Liguori wrote:
Right now, it's fairly easy to understand. cache=none and
cache=writethrough guarantee that all write operations that the guest
thinks have completed are completed. cache=writeback provides no
such guarantee.
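The guarantees described correspond roughly to the flags each mode uses to open the image on the host (a simplified mapping for illustration; O_DIRECT is Linux-specific and qemu's actual option handling is more involved than this):

```python
import os

def cache_mode_flags(mode):
    """Rough host-side open(2) flags for each qemu cache= mode, as
    discussed in the thread.  Simplified; real qemu does more."""
    base = os.O_RDWR
    if mode == "none":          # bypass the host page cache entirely
        return base | os.O_DIRECT
    if mode == "writethrough":  # every completed write is on stable storage
        return base | os.O_SYNC
    if mode == "writeback":     # host page cache, no durability guarantee
        return base
    raise ValueError(mode)
```

This is why cache=writeback needs the flush command to become safe: nothing in its open flags forces data out of the host cache.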
cache=none is partially broken as
Anthony Liguori wrote:
Avi Kivity wrote:
Anthony Liguori wrote:
Right now, it's fairly easy to understand. cache=none and
cache=writethrough guarantee that all write operations that the
guest thinks have completed are completed. cache=writeback provides
no such guarantee.
cache=none