In message <[EMAIL PROTECTED]>, Greg Lehey writes:
>> Hmm, try to keep vinum/RAID5 in the picture when you look at this
>> code, it complicated matters a lot.
>
>I don't think it's that relevant, in fact.
Yes it is, because the CPU needs to read the buffers to calculate
the parity; it cannot just work from physical addresses.
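To make the RAID5 point concrete: the parity block is the XOR of the data blocks, so the CPU has to read (and write) every byte through kernel virtual mappings; a physical-address-only view of the pages is enough for DMA but not for parity. A minimal sketch in plain C, with no kernel types involved, not Vinum's actual code:

#include <stddef.h>
#include <stdint.h>

/*
 * XOR 'nbufs' equally sized data buffers into 'parity'.  Every pointer
 * here is a kernel *virtual* address: the CPU loads and stores each
 * byte, which is why RAID5 writes cannot be done from vm_page_t's alone.
 */
static void
raid5_compute_parity(uint8_t *parity, uint8_t *const *bufs,
    size_t nbufs, size_t len)
{
	size_t i, b;

	for (i = 0; i < len; i++) {
		uint8_t p = 0;

		for (b = 0; b < nbufs; b++)
			p ^= bufs[b][i];
		parity[i] = p;
	}
}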
On Thursday, 23 March 2000 at 17:44:38 -0600, Dan Nelson wrote:
> In the last episode (Mar 23), Greg Lehey said:
>>
>> Agreed. This is on the Vinum wishlist, but it comes at the expense of
>> reliability (how long do you wait to cluster? What happens if the
>> system fails in between?). In addition, for Vinum it needs to be done
>> before entering the hardware driver.
In the last episode (Mar 23), Greg Lehey said:
>
> Agreed. This is on the Vinum wishlist, but it comes at the expense of
> reliability (how long do you wait to cluster? What happens if the
> system fails in between?). In addition, for Vinum it needs to be done
> before entering the hardware driver.
On Tuesday, 21 March 2000 at 9:29:56 -0800, Matthew Dillon wrote:
>>>
>>> I would think that track-caches and intelligent drives would gain
>>> much if not more of what clustering was designed to gain.
>>
>> Hm. But I'd think that even with modern drives a smaller number of bigger
>> I/Os is preferable over lots of very small I/Os.
On Monday, 20 March 2000 at 22:52:59 +0100, Poul-Henning Kamp wrote:
> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>> * Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 11:45] wrote:
>>> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>>>
Keeping the current cluster code is a bad idea
On Monday, 20 March 2000 at 15:23:31 -0600, Dan Nelson wrote:
> In the last episode (Mar 20), Poul-Henning Kamp said:
>> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>>> * Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 11:45] wrote:
Before we redesign the clustering, I would like to know if we actually
have any recent benchmarks which prove that clustering is overall beneficial?
> >>Eventually all physical I/O needs a physical address. The quickest
> >>way to get to a physical address is to be given an array of vm_page_t's
> >>(which can be trivially translated to physical addresses).
> >
> > Not all: PIO access to ATA needs virtual access. RAID5 needs
> > virtual access as well.
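Dillon's "trivially translated" remark refers to the VM_PAGE_TO_PHYS() translation on the b_pages[] array of a struct buf. A self-contained sketch of how an array of pages plus an offset and length becomes a list of physical segments a controller could DMA from; the structures below are simplified stand-ins, not the real vm_page or buf definitions:

#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE	4096

/* Simplified stand-in for the kernel's vm_page structure. */
struct vm_page {
	uint64_t phys_addr;		/* physical address of the page */
};
typedef struct vm_page *vm_page_t;

/* Stand-in for the kernel's VM_PAGE_TO_PHYS() macro. */
#define VM_PAGE_TO_PHYS(m)	((m)->phys_addr)

struct phys_seg {
	uint64_t addr;			/* physical start of the segment */
	size_t	 len;			/* segment length in bytes */
};

/*
 * Turn an array of pages (as found in bp->b_pages[]), a starting offset
 * into the first page and a residual byte count into physical segments.
 * No virtual mapping is ever created.
 */
static size_t
pages_to_segs(vm_page_t *pages, size_t npages, size_t offset,
    size_t resid, struct phys_seg *segs)
{
	size_t i, nsegs = 0;

	for (i = 0; i < npages && resid > 0; i++) {
		size_t chunk = PAGE_SIZE - offset;

		if (chunk > resid)
			chunk = resid;
		segs[nsegs].addr = VM_PAGE_TO_PHYS(pages[i]) + offset;
		segs[nsegs].len = chunk;
		nsegs++;
		resid -= chunk;
		offset = 0;	/* only the first page can start mid-page */
	}
	return (nsegs);
}

This is the sense in which the translation is "trivial": no pmap operations and no kernel virtual address space, just arithmetic over the page array.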
On Monday, 20 March 2000 at 14:04:48 -0800, Matthew Dillon wrote:
>
> If a particular subsystem needs b_data, then that subsystem is obviously
> willing to take the virtual mapping / unmapping hit. If you look at
> Greg's current code this is, in fact, what is occurring in the critical path.
On Monday, 20 March 2000 at 20:17:13 +0100, Poul-Henning Kamp wrote:
> In message <[EMAIL PROTECTED]>, Matthew Dillon writes:
>
>>Well, let me tell you what the fuzzy goal is first and then maybe we
>>can work backwards.
>>
>>Eventually all physical I/O needs a physical address. The quickest
>>way to get to a physical address is to be given an array of vm_page_t's.
> On Tue, Mar 21, 2000 at 01:14:45PM -0800, Rodney W. Grimes wrote:
> > > On Tue, Mar 21, 2000 at 09:29:56AM -0800, Matthew Dillon wrote:
> > > > :>
> > > > :> I would think that track-caches and intelligent drives would gain
> > > > :> much if not more of what clustering was designed to gain.
On Tue, Mar 21, 2000 at 01:14:45PM -0800, Rodney W. Grimes wrote:
> > On Tue, Mar 21, 2000 at 09:29:56AM -0800, Matthew Dillon wrote:
> > > :>
> > > :> I would think that track-caches and intelligent drives would gain
> > > :> much if not more of what clustering was designed to gain.
> > > :
>
> On Tue, Mar 21, 2000 at 09:29:56AM -0800, Matthew Dillon wrote:
> > :>
> > :> I would think that track-caches and intelligent drives would gain
> > :> much if not more of what clustering was designed to gain.
> > :
> > :Hm. But I'd think that even with modern drives a smaller number of bigger
> > :I/Os is preferable over lots of very small I/Os.
On Tue, Mar 21, 2000 at 09:29:56AM -0800, Matthew Dillon wrote:
> :>
> :> I would think that track-caches and intelligent drives would gain
> :> much if not more of what clustering was designed to do gain.
> :
> :Hm. But I'd think that even with modern drives a smaller number of bigger
> :I/Os is preferable over lots of very small I/Os.
On Mon, Mar 20, 2000 at 11:54:58PM -0800, Matthew Jacob wrote:
> >
> > Hm. But I'd think that even with modern drives a smaller number of bigger
> > I/Os is preferable over lots of very small I/Os.
>
> Not necessarily. It depends upon overhead costs per-i/o. With larger I/Os, you
> do pay in interference costs.
:>
:> I would think that track-caches and intelligent drives would gain
:> much if not more of what clustering was designed to gain.
:
:Hm. But I'd think that even with modern drives a smaller number of bigger
:I/Os is preferable over lots of very small I/Os. Or have I missed the point?
:
:--
:> Hm. But I'd think that even with modern drives a smaller number of bigger
:> I/Os is preferable over lots of very small I/Os.
:
:Not necessarily. It depends upon overhead costs per-i/o. With larger I/Os, you
:do pay in interference costs (you can't transfer data for request N because
:the 256Kbyte transfer for an earlier request is still occupying the channel).
>
> Hm. But I'd think that even with modern drives a smaller number of bigger
> I/Os is preferable over lots of very small I/Os.
Not necessarily. It depends upon overhead costs per-i/o. With larger I/Os, you
do pay in interference costs (you can't transfer data for request N because
the 256Kbyte transfer for an earlier request is still occupying the channel).
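An illustrative back-of-the-envelope comparison (all numbers assumed for the example, not taken from the thread): with about 0.5 ms of fixed per-I/O overhead (command setup, interrupt handling) and a 20 MB/s transfer rate, thirty-two separate 8 KB writes cost roughly 32 x (0.5 ms + 0.4 ms), about 29 ms in total, while one 256 KB write costs roughly 0.5 ms + 12.5 ms, about 13 ms. The single large transfer wins on total time, but it also holds the channel for 12.5 ms straight, which is exactly the interference cost being described: nothing else can transfer while it runs.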
On Mon, Mar 20, 2000 at 08:21:52PM +0100, Poul-Henning Kamp wrote:
> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>
> >Keeping the current cluster code is a bad idea; if the drivers were
> >taught how to traverse the linked list in the buf struct rather
> >than just notice "a big buffer"
> I agree that it is obvious for NFS, but I don't see it as being
> obvious at all for (modern) disks, so for that case I would like
> to see numbers.
>
> If running without clustering is just as fast for modern disks,
> I think the clustering needs to be rethought.
I think it should be pretty obvious
In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>* Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 11:45] wrote:
>> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>>
>> >Keeping the current cluster code is a bad idea; if the drivers were
>> >taught how to traverse the linked list
>>Committing a 64k block would require 8 times the overhead of bundling
>>up the RPC as well as transmission and reply, it may be possible
>>to pipeline these commits because you don't really need to wait
>>for one to complete before issuing another request, but it's still
>>8x the amount of traffic.
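The "8 times" figure follows from the block sizes: assuming the usual 8 KB NFS write size (an assumption, the snippet does not state it), a 64 KB block written unclustered is 64 / 8 = 8 separate write-and-commit RPCs, each paying the bundling, transmission and reply cost. Pipelining the commits overlaps their round-trip latencies, but client and server still build, send and process eight requests' worth of traffic instead of one.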
:>
:>I agree that it is obvious for NFS, but I don't see it as being
:>obvious at all for (modern) disks, so for that case I would like
:>to see numbers.
:>
:>If running without clustering is just as fast for modern disks,
:>I think the clustering needs to be rethought.
:
: Depends on the type of disk
In the last episode (Mar 20), Poul-Henning Kamp said:
> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
> >* Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 11:45] wrote:
> >>
> >> Before we redesign the clustering, I would like to know if we
> >> actually have any recent benchmarks which prove that clustering
> >> is overall beneficial?
:
:* Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 12:03] wrote:
:> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
:> >* Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 11:45] wrote:
:> >> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
:> >>
:> >> >Keeping the current cluster code is a bad idea
Just as a perhaps interesting aside on this topic; it'd be quite
neat for controllers that understand scatter/gather to be able to
simply suck N regions of buffer cache which were due for committing
directly into an S/G list...
(wishlist item, I guess 8)
--
\ Give a man a fish, and you feed him for a day
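Spelling out the wishlist item: rather than remapping or copying N dirty buffer-cache regions into one contiguous buffer, the generic code would hand the controller a scatter/gather table whose entries point straight at those regions. A rough, self-contained sketch of building such a table (hypothetical structures, not an existing driver interface), which also merges physically adjacent regions into fewer, larger entries:

#include <stddef.h>
#include <stdint.h>

/* One entry in a controller scatter/gather table (illustrative layout). */
struct sg_entry {
	uint64_t addr;		/* physical start address */
	uint32_t len;		/* length in bytes */
};

/* A dirty buffer-cache region queued for committing, already resolved
 * to a physical address (hypothetical simplification). */
struct dirty_region {
	uint64_t paddr;
	uint32_t len;
};

/*
 * Build one S/G list covering N dirty regions.  Returns the number of
 * S/G entries produced; if the table fills up, the caller would split
 * the I/O.
 */
static size_t
build_sg_list(const struct dirty_region *r, size_t n,
    struct sg_entry *sg, size_t max_sg)
{
	size_t i, out = 0;

	for (i = 0; i < n; i++) {
		if (out > 0 &&
		    sg[out - 1].addr + sg[out - 1].len == r[i].paddr) {
			sg[out - 1].len += r[i].len;	/* coalesce neighbours */
			continue;
		}
		if (out == max_sg)
			break;
		sg[out].addr = r[i].paddr;
		sg[out].len = r[i].len;
		out++;
	}
	return (out);
}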
Alfred Perlstein wrote:
>
> * Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 11:45] wrote:
> > In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
> >
> > >Keeping the current cluster code is a bad idea; if the drivers were
> > >taught how to traverse the linked list in the buf struct rather
:> lock on the bp. With a shared lock you are allowed to issue READ
:> I/O but you are not allowed to modify the contents of the buffer.
:> With an exclusive lock you are allowed to issue both READ and WRITE
:> I/O and you can modify the contents of the buffer.
:>
:> b
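The locking rule being quoted, written out as code. The helper names are hypothetical stand-ins, not the actual buffer-cache API (the real interface is the lockmgr-based BUF_LOCK machinery, whose exact form has changed over time); the point is only which operations each lock level permits:

/*
 * Shared lock: READ I/O is allowed, the contents may not be modified.
 * Exclusive lock: READ and WRITE I/O are allowed and the contents may
 * be modified.  buf_lock_shared()/buf_lock_excl()/buf_unlock() and the
 * issue_*() helpers are assumed, illustrative names.
 */
struct buf;				/* opaque here */

void	buf_lock_shared(struct buf *bp);
void	buf_lock_excl(struct buf *bp);
void	buf_unlock(struct buf *bp);
void	issue_read(struct buf *bp);
void	issue_write(struct buf *bp);
void	fill_buffer(struct buf *bp);

static void
read_only_path(struct buf *bp)
{
	buf_lock_shared(bp);	/* many holders may share this */
	issue_read(bp);		/* OK: contents are not modified */
	buf_unlock(bp);
}

static void
modify_and_write_path(struct buf *bp)
{
	buf_lock_excl(bp);	/* sole owner: may change the contents */
	fill_buffer(bp);	/* modify the buffer */
	issue_write(bp);	/* WRITE I/O requires the exclusive lock */
	buf_unlock(bp);
}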
* Matthew Dillon <[EMAIL PROTECTED]> [000320 14:18] wrote:
>
> :>lock on the bp. With a shared lock you are allowed to issue READ
> :>I/O but you are not allowed to modify the contents of the buffer.
> :>With an exclusive lock you are allowed to issue both READ and WRITE
> :>I/O
:
:In message <[EMAIL PROTECTED]>, Matthew Dillon writes:
:
:>Well, let me tell you what the fuzzy goal is first and then maybe we
:>can work backwards.
:>
:>Eventually all physical I/O needs a physical address. The quickest
:>way to get to a physical address is to be given an array of vm_page_t's.
In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>> >> Before we redesign the clustering, I would like to know if we
>> >> actually have any recent benchmarks which prove that clustering
>> >> is overall beneficial?
>> >
>> >Yes it is really beneficial.
>>
>> I would like to see some numbers.
* Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 12:03] wrote:
> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
> >* Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 11:45] wrote:
> >> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
> >>
> >> >Keeping the current cluster code is a bad idea
In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>* Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 11:45] wrote:
>> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>>
>> >Keeping the current cluster code is a bad idea; if the drivers were
>> >taught how to traverse the linked list
* Poul-Henning Kamp <[EMAIL PROTECTED]> [000320 11:45] wrote:
> In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>
> >Keeping the current cluster code is a bad idea; if the drivers were
> >taught how to traverse the linked list in the buf struct rather
> >than just notice "a big buffer" we could avoid a lot of page twiddling
In message <[EMAIL PROTECTED]>, Alfred Perlstein writes:
>Keeping the current cluster code is a bad idea; if the drivers were
>taught how to traverse the linked list in the buf struct rather
>than just notice "a big buffer" we could avoid a lot of page
>twiddling and also allow for massive IO clustering.
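What "traverse the linked list in the buf struct" could look like from the driver's side: the bufs stay chained together and the driver turns the chain into DMA segments directly, instead of waiting for the cluster code to fabricate one big, virtually contiguous cluster buffer. The chain field and struct layout below are illustrative stand-ins, not the real struct buf:

#include <stddef.h>
#include <stdint.h>

/* Illustrative, simplified buf: one data segment per buf plus a link. */
struct buf {
	struct buf *b_chain;	/* hypothetical next-buf link */
	uint64_t    b_paddr;	/* physical address of the data */
	uint32_t    b_bcount;	/* byte count for this buf */
};

struct sg_entry {
	uint64_t addr;
	uint32_t len;
};

/*
 * Walk the chain and build the controller's S/G table directly, so no
 * page remapping or copying happens on the way down.  Returns the
 * number of segments produced.
 */
static size_t
chain_to_sg(const struct buf *head, struct sg_entry *sg, size_t max_sg)
{
	const struct buf *bp;
	size_t n = 0;

	for (bp = head; bp != NULL && n < max_sg; bp = bp->b_chain) {
		sg[n].addr = bp->b_paddr;
		sg[n].len = bp->b_bcount;
		n++;
	}
	return (n);
}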
In message <[EMAIL PROTECTED]>, Matthew Dillon writes:
>Well, let me tell you what the fuzzy goal is first and then maybe we
>can work backwards.
>
>Eventually all physical I/O needs a physical address. The quickest
>way to get to a physical address is to be given an array of vm_page_t's.
* Matthew Dillon <[EMAIL PROTECTED]> [000320 10:01] wrote:
>
> :
> :
> :>Kirk and I have already mapped out a plan to drastically update
> :>the buffer cache API which will encapsulate much of the state within
> :>the buffer cache module.
> :
> :Sounds good. Combined with my stackable BIO plans that sounds like
> :a really great win for FreeBSD.
:Thanks for the sketch. It sounds really good.
:
:Is it your intention that drivers which cannot work from the b_pages[]
:array will call to map them into VM, or will a flag on the driver/dev_t/
:whatever tell the generic code that it should be mapped before calling
:the driver ?
:
:What about un
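One way the "flag on the driver" alternative in that question could look, sketched with hypothetical names (D_NEEDS_VA and map_buf_pages()/unmap_buf_pages() are assumed; the real FreeBSD primitives for wiring pages into kernel virtual space are pmap_qenter()/pmap_qremove()):

#include <stdbool.h>
#include <stddef.h>

struct buf;				/* opaque here */

#define D_NEEDS_VA	0x0001		/* hypothetical flag: driver must see
					   the data at a kernel virtual address
					   (PIO ATA, software RAID5 parity) */

struct driver {
	int	 d_flags;
	void	(*d_strategy)(struct buf *);
};

void	map_buf_pages(struct buf *bp);		/* assumed helpers */
void	unmap_buf_pages(struct buf *bp);

static void
generic_strategy(struct driver *drv, struct buf *bp)
{
	bool mapped = false;

	if (drv->d_flags & D_NEEDS_VA) {
		map_buf_pages(bp);	/* give the driver a valid b_data */
		mapped = true;
	}
	drv->d_strategy(bp);		/* driver issues the I/O */
	/* In a real asynchronous path the unmap would be deferred to I/O
	 * completion; it is done inline here only to keep the sketch short. */
	if (mapped)
		unmap_buf_pages(bp);
}

Drivers that can work straight from b_pages[] simply leave the flag clear and never pay the mapping cost.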
In message <[EMAIL PROTECTED]>, Matthew Dillon writes:
>I think so. I can give -current a quick synopsis of the plan but I've
>probably forgotten some of the bits (note: the points below are not
>in any particular order):
Thanks for the sketch. It sounds really good.
Is it your intention that drivers which cannot work from the b_pages[]
array will call to map them into VM, or will a flag on the driver/dev_t/
whatever tell the generic code that it should be mapped before calling
the driver?
:
:
:>Kirk and I have already mapped out a plan to drastically update
:>the buffer cache API which will encapsulate much of the state within
:>the buffer cache module.
:
:Sounds good. Combined with my stackable BIO plans that sounds like
:a really great win for FreeBSD.
:
:--
:Poul-Henning Kamp
>Kirk and I have already mapped out a plan to drastically update
>the buffer cache API which will encapsulate much of the state within
>the buffer cache module.
Sounds good. Combined with my stackable BIO plans that sounds like
a really great win for FreeBSD.
--
Poul-Henning Kamp
:I have two patches up for test at http://phk.freebsd.dk/misc
:
:I'm looking for reviews and tests, in particular vinum testing
:would be nice since Grog is quasi-offline at the moment.
:
:Poul-Henning
:
:2317 BWRITE-STRATEGY.patch
:
:This patch is machine generated except for the ccd
I have two patches up for test at http://phk.freebsd.dk/misc
I'm looking for reviews and tests, in particular vinum testing
would be nice since Grog is quasi-offline at the moment.
Poul-Henning
2317 BWRITE-STRATEGY.patch
This patch is machine generated except for the ccd.c and buf