David Chinner wrote:
you are understanding barriers to be the same as synchronous writes (and
therefore the data is on persistent media before the call returns)
No, I'm describing the high level behaviour that is expected by
a filesystem. The reasons for this are below
You say no, but
David Chinner wrote:
That sounds like a good idea - we can leave the existing
WRITE_BARRIER behaviour unchanged and introduce a new WRITE_ORDERED
behaviour that only guarantees ordering. The filesystem can then
choose which to use where appropriate
So what if you want a synchronous write,
Jens Axboe wrote:
No, Stefan is right, the barrier is both an ordering and integrity
constraint. If a driver completes a barrier request before that request
and previously submitted requests are on STABLE storage, then it
violates that principle. Look at the code and the various ordering
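The rule Jens states can be modeled in a few lines: a driver may not complete a barrier request until that request and every previously submitted request are on stable storage. The following is a toy Python sketch of that constraint only; all names are illustrative, none of this is kernel code.

```python
# Toy model of the barrier completion rule: completing a barrier is legal
# only once it and all earlier requests are on stable media.

class ToyDriver:
    def __init__(self):
        self.submitted = []   # requests accepted, not yet durable
        self.stable = []      # requests known to be on stable media

    def submit(self, req):
        self.submitted.append(req)

    def flush(self):
        """Push everything submitted so far to stable storage."""
        self.stable.extend(self.submitted)
        self.submitted.clear()

    def may_complete_barrier(self, req):
        """True only if completing `req` as a barrier would be legal."""
        # Legal only when req (and hence everything before it) is stable.
        return req in self.stable

drv = ToyDriver()
drv.submit("journal-block")
drv.submit("commit-block")            # the barrier request

# Completing the barrier now would violate the integrity constraint:
print(drv.may_complete_barrier("commit-block"))   # False

drv.flush()   # cache flush (or FUA write) makes everything durable
print(drv.may_complete_barrier("commit-block"))   # True
```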
Stefan Bader wrote:
You got a linear target that consists of two disks. One drive (a)
supports barriers and the other one (b) doesn't. Device-mapper just
maps the requests to the appropriate disk. Now the following sequence
happens:
1. block x gets mapped to drive b
2. block y (with barrier)
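The hazard in Stefan's scenario can be made concrete with a toy simulation (this is illustrative Python, not device-mapper code): a barrier honored by drive (a) says nothing about a write still queued on drive (b), so the intended ordering is silently lost.

```python
# Toy illustration of a linear target over two drives: (a) honors
# barriers, (b) does not. A barrier mapped to (a) cannot order a write
# that was mapped to (b).

class ToyDrive:
    def __init__(self, name, honors_barriers):
        self.name = name
        self.honors_barriers = honors_barriers
        self.queue = []   # writes accepted but not yet on media
        self.media = []   # writes on stable media

    def submit(self, block, barrier=False):
        self.queue.append(block)
        if barrier and self.honors_barriers:
            # A real barrier drains the queue to media before completing.
            self.media.extend(self.queue)
            self.queue.clear()

a = ToyDrive("a", honors_barriers=True)
b = ToyDrive("b", honors_barriers=False)

# 1. block x gets mapped to drive b
b.submit("x")
# 2. block y (with barrier) gets mapped to drive a
a.submit("y", barrier=True)

# Drive a honored the barrier, but x may still sit in b's queue, so y
# can reach media before x -- the ordering the barrier was supposed to
# guarantee does not hold across the composite device.
print(a.media)   # ['y']
print(b.queue)   # ['x']  -- still not durable
```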
Phillip Susi wrote:
Hrm... I may have misunderstood the perspective you were talking from.
Yes, when the bio is completed it must be on the media, but the
filesystem should issue both requests, and then really not care when
they complete. That is to say, the filesystem should not wait
Neil Brown wrote:
md/dm modules could keep count of requests as has been suggested
(though that would be a fairly big change for raid0 as it currently
doesn't know when a request completes - bi_end_io goes directly to the
filesystem).
Are you sure? I believe that dm handles bi_end_io
David Chinner wrote:
Sounds good to me, but how do we test to see if the underlying
device supports barriers? Do we just assume that they do and
only change behaviour if -o nobarrier is specified in the mount
options?
The idea is that ALL block devices will support barriers; if the
underlying
Jens Axboe wrote:
A barrier write will include a flush, but it may also use the FUA bit to
ensure data is on platter. So the only situation where a fallback from a
barrier to flush would be valid, is if the device lied and told you it
could do FUA but it could not and that is the reason why the
Neil Brown wrote:
There is no guarantee that a device can support BIO_RW_BARRIER - it is
always possible that a request will fail with EOPNOTSUPP.
Why is it not the job of the block layer to translate for broken devices
and send them a flush/write/flush?
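The flush/write/flush translation Neil asks about could look roughly like the sketch below. The device API here is invented for illustration; in the kernel this would be block-layer C, and (per Jens's point above) a device advertising FUA could durably write the barrier block itself in one step instead of the trailing flush.

```python
# Hedged sketch of emulating a barrier write on a device that does not
# support barriers: flush prior writes, write the block, flush again.

def barrier_write(dev, block):
    """Issue a barrier write, falling back to flush/write/flush."""
    if dev.supports_barriers:
        dev.write(block, barrier=True)
        return
    dev.flush()          # everything submitted earlier is now stable
    dev.write(block)
    dev.flush()          # the barrier block itself is now stable

class ToyDevice:
    def __init__(self, supports_barriers):
        self.supports_barriers = supports_barriers
        self.log = []     # sequence of commands sent to the device

    def write(self, block, barrier=False):
        self.log.append(("WRITE+BARRIER" if barrier else "WRITE", block))

    def flush(self):
        self.log.append(("FLUSH", None))

dev = ToyDevice(supports_barriers=False)
barrier_write(dev, "commit")
print(dev.log)   # [('FLUSH', None), ('WRITE', 'commit'), ('FLUSH', None)]
```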
These devices would find it very
I second that award. Three threads in as many days, all idiotic
trolling. Can this idiot be banned from the list? Sheesh!
Ronni Nielsen wrote:
[EMAIL PROTECTED] wrote:
[snip arguments FUBAR]
oscar
And the award for Troll Of The Year goes to: johnrobertbanks.
/oscar
/ronni
Pekka J Enberg wrote:
We never want to _abort_ pending updates, only pending reads. So, even with
revoke(), we need to be careful which is why we do do_fsync() in
generic_revoke_file() to make sure pending updates are flushed before we
declare the inode revoked.
But, I haven't looked at
Is this revoke system supported for the filesystem as a whole? I
thought it was just to force specific files closed, not the whole
filesystem. What if the filesystem itself has pending IO to say, update
inodes or block bitmaps? Can these be aborted?
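The asymmetry Pekka describes, abort pending reads but flush pending updates before declaring the inode revoked, can be sketched as a toy model. The class and field names below are illustrative stand-ins, not the actual VFS code; the `fsync()` call plays the role of the do_fsync() step in generic_revoke_file().

```python
# Toy model of revoke() semantics: reads may simply be aborted, but
# dirty data must reach disk before the inode is marked revoked.

class ToyInode:
    def __init__(self):
        self.pending_reads = ["read-1"]   # in-flight reads (abortable)
        self.dirty_pages = ["page-7"]     # pending updates (must flush)
        self.on_disk = []
        self.revoked = False

    def fsync(self):
        """Flush all dirty pages to stable storage."""
        self.on_disk.extend(self.dirty_pages)
        self.dirty_pages.clear()

    def revoke(self):
        self.pending_reads.clear()   # reads are simply aborted
        self.fsync()                 # updates must reach disk first
        self.revoked = True

inode = ToyInode()
inode.revoke()
print(inode.revoked, inode.dirty_pages, inode.on_disk)
# True [] ['page-7']
```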
Pekka Enberg wrote:
FYI, the revoke
Nikolai Joukov wrote:
replication. In case of RAID4 and RAID5-like configurations, RAIF performed
about two times *better* than software RAID and even better than an Adaptec
2120S RAID5 controller. This is because RAIF is located above file system
caches and can cache parity as normal data