} development; [EMAIL PROTECTED]; linux-kernel@vger.kernel.org;
} [EMAIL PROTECTED]; Jens Axboe; David Chinner; Andreas Dilger
} Subject: Re: [dm-devel] Re: [RFD] BIO_RW_BARRIER - what it means for
} devices, filesystems, and dm/md.
}
} On Wed, 11 Jul 2007 18:44:21 EDT, Ric Wheeler said:
} > [EMAIL PROTECTED] wrote:
Ric Wheeler wrote:
>> Don't those thingies usually have NV cache or backed by battery such
>> that ORDERED_DRAIN is enough?
>
> All of the high end arrays have non-volatile cache (read, on power loss,
> it is a promise that it will get all of your data out to permanent
> storage). You don't need to ask this kind of array to drain the cache. In
> fact, it might just...
Tejun Heo wrote:
[ cc'ing Ric Wheeler for storage array thingie. Hi, whole thread is at
http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/3344 ]
I am actually on the list, just really, really far behind in the thread ;-)
[EMAIL PROTECTED] wrote:
> On Fri, 01 Jun 2007 16:16:01 +0900, Tejun Heo said:
>> Don't those thingies usually have NV cache or backed by battery such
>> that ORDERED_DRAIN is enough?
>
> Probably *most* do, but do you really want to bet the user's data on it?
Thought we were talking about high-end arrays...
On Fri, 01 Jun 2007 16:16:01 +0900, Tejun Heo said:
> Don't those thingies usually have NV cache or backed by battery such
> that ORDERED_DRAIN is enough?
Probably *most* do, but do you really want to bet the user's data on it?
> The problem is that the interface between the host and a storage device...
[ cc'ing Ric Wheeler for storage array thingie. Hi, whole thread is at
http://thread.gmane.org/gmane.linux.kernel.device-mapper.devel/3344 ]
Hello,
[EMAIL PROTECTED] wrote:
> but when you consider the self-contained disk arrays it's an entirely
> different story. You can easily have a few gig of...
On Fri, 1 Jun 2007, Tejun Heo wrote:
...but one
thing we should bear in mind is that harddisks don't have humongous
caches or very smart controller / instruction set. No matter how
relaxed an interface the block layer provides, in the end, it just has to
issue a wholesale FLUSH CACHE on the device to...
Jens Axboe wrote:
No, Stephan is right, the barrier is both an ordering and integrity
constraint. If a driver completes a barrier request before that request
and previously submitted requests are on STABLE storage, then it
violates that principle. Look at the code and the various ordering
options.
2007/5/30, Phillip Susi <[EMAIL PROTECTED]>:
Stefan Bader wrote:
>
> Since drive a supports barrier requests we don't get -EOPNOTSUPP but
> the request with block y might get written before block x since the
> disks are independent. I guess the chances of this are quite low since
> at some point a...
On Wed, May 30 2007, Phillip Susi wrote:
> >That would be the exactly how I understand Documentation/block/barrier.txt:
> >
> >"In other words, I/O barrier requests have the following two properties.
> >1. Request ordering
> >...
> >2. Forced flushing to physical medium"
> >
> >"So, I/O barriers need to...
On Thu, May 31, 2007 at 10:46:04AM +1000, Neil Brown wrote:
> If a filesystem cares, it could 'ask' as suggested above.
> What would be a good interface for asking?
XFS already tests:
bd_disk->queue->ordered == QUEUE_ORDERED_NONE
Alasdair
On Thu, May 31, 2007 at 10:46:04AM +1000, Neil Brown wrote:
> What if the truth changes (as can happen with md or dm)?
You get notified in endio() that the barrier had to be emulated?
Phillip Susi wrote:
Hrm... I may have misunderstood the perspective you were talking from.
Yes, when the bio is completed it must be on the media, but the
filesystem should issue both requests, and then really not care when
they complete. That is to say, the filesystem should not wait for block...
Stefan Bader wrote:
You got a linear target that consists of two disks. One drive (a)
supports barriers and the other one (b) doesn't. Device-mapper just
maps the requests to the appropriate disk. Now the following sequence
happens:
1. block x gets mapped to drive b
2. block y (with barrier) gets mapped to drive a...
On Wed, May 30, 2007 at 11:12:37AM +0200, Stefan Bader wrote:
> it might be better to indicate -EOPNOTSUPP right from
> device-mapper.
Indeed we should. For support, on receipt of a barrier, dm core should
send a zero-length barrier to all active underlying paths, and delay
mapping any further I/O...
On Tue, May 29, 2007 at 11:25:42AM +0200, Stefan Bader wrote:
> doing a sort of suspend, issuing the
> barrier request, calling flush to all mapped devices and then wait for
> in-flight I/O to go to zero?
Something like that is needed for some dm targets to support barriers.
(We needn't always wait for *all* in-flight I/O.)
When faced with -EOPNOTSUPP, do all callers fall back to a sync in
the places a barrier would have been used, or are there any more
sophisticated...
(dunno why you explicitly dropped me off the cc/to list when replying to
my email, hence I missed it for 3 days)
On Fri, May 25 2007, Phillip Susi wrote:
> Jens Axboe wrote:
> >A barrier write will include a flush, but it may also use the FUA bit to
> >ensure data is on platter. So the only situation...
On Mon, May 28, 2007 at 11:30:32AM +1000, Neil Brown wrote:
> 1/ A BIO_RW_BARRIER request should never fail with -EOPNOTSUP.
The device-mapper position has always been that we require
a zero-length BIO_RW_BARRIER
(i.e. containing no data to read or write - or emulated, possibly
device-specific...
Jens Axboe wrote:
A barrier write will include a flush, but it may also use the FUA bit to
ensure data is on platter. So the only situation where a fallback from a
barrier to flush would be valid, is if the device lied and told you it
could do FUA but it could not, and that is the reason why the barrier...