On Monday, 20 March 2000 at 20:17:13 +0100, Poul-Henning Kamp wrote:
In message [EMAIL PROTECTED], Matthew Dillon writes:
Well, let me tell you what the fuzzy goal is first and then maybe we
can work backwards.

Eventually all physical I/O needs a physical address. The quickest
way to get to a physical address is to be given an array of vm_page_t's
(which can be trivially translated to physical addresses).

Not all: PIO access to ATA needs virtual access. RAID5 needs
virtual access.

On Monday, 20 March 2000 at 14:04:48 -0800, Matthew Dillon wrote:
If a particular subsystem needs b_data, then that subsystem is obviously
willing to take the virtual mapping / unmapping hit. If you look at
Greg's current code this is, in fact, what is occurring in the critical
On Tuesday, 21 March 2000 at 9:29:56 -0800, Matthew Dillon wrote:
I would think that track-caches and intelligent drives would gain
much if not more of what clustering was designed to do.

Hm. But I'd think that even with modern drives a smaller number of bigger
I/Os is preferable over lots of very small I/Os. Or have I missed the point?

In the last episode (Mar 23), Greg Lehey said:
Agreed. This is on the Vinum wishlist, but it comes at the expense of
reliability (how long do you wait to cluster? What happens if the
system fails in between?). In addition, for Vinum it needs to be done
before entering the hardware
:I have two patches up for test at http://phk.freebsd.dk/misc
:
:I'm looking for reviews and tests, in particular vinum testing
:would be nice since Grog is quasi-offline at the moment.
:
:Poul-Henning
:
:2317 BWRITE-STRATEGY.patch
:
:This patch is machine generated except for the
Kirk and I have already mapped out a plan to drastically update
the buffer cache API which will encapsulate much of the state within
the buffer cache module.
Sounds good. Combined with my stackable BIO plans that sounds like
a really great win for FreeBSD.
--
Poul-Henning Kamp
In message [EMAIL PROTECTED], Matthew Dillon writes:
I think so. I can give -current a quick synopsis of the plan but I've
probably forgotten some of the bits (note: the points below are not
in any particular order):

Thanks for the sketch. It sounds really good.

Is it your intention that drivers which cannot work from the b_pages[]
array will call to map them into VM, or will a flag on the driver/dev_t/
whatever tell the generic code that it should be mapped before calling
the driver?

What about
In message [EMAIL PROTECTED], Alfred Perlstein writes:
Keeping the current cluster code is a bad idea; if the drivers were
taught how to traverse the linked list in the buf struct rather
than just notice "a big buffer", we could avoid a lot of page
twiddling and also allow for massive I/O.
In message [EMAIL PROTECTED], Alfred Perlstein writes:
* Poul-Henning Kamp [EMAIL PROTECTED] [000320 11:45] wrote:
Before we redesign the clustering, I would like to know if we
actually have any recent benchmarks which prove that clustering
is overall beneficial?

Yes, it is really beneficial.

I would like to see some numbers if you have them.
I agree that it is obvious for NFS, but I don't see it as being
obvious at all for (modern) disks, so for that case I would like
to see numbers.

If running without clustering is just as fast for modern disks,
I think the clustering needs rethinking.

Depends on the type of disk drive
* Matthew Dillon [EMAIL PROTECTED] [000320 14:18] wrote:
lock on the bp. With a shared lock you are allowed to issue READ
I/O but you are not allowed to modify the contents of the buffer.
With an exclusive lock you are allowed to issue both READ and WRITE
I/O and you can modify the contents of the buffer.
Just as a perhaps interesting aside on this topic; it'd be quite
neat for controllers that understand scatter/gather to be able to
simply suck N regions of buffer cache which were due for committing
directly into an S/G list...
(wishlist item, I guess 8)
Committing a 64k block would require 8 times the overhead of bundling
up the RPC as well as transmission and reply. It may be possible
to pipeline these commits, because you don't really need to wait
for one to complete before issuing another request, but it's still
8x the amount of traffic.
On Mon, Mar 20, 2000 at 08:21:52PM +0100, Poul-Henning Kamp wrote:
Hm. But I'd think that even with modern drives a smaller number of bigger
I/Os is preferable over lots of very small I/Os.

Not necessarily. It depends upon overhead costs per-i/o. With larger I/Os,
you do pay in interference costs (you can't transfer data for request N
because the 256Kbytes