On Sun, 2007-06-24 at 09:40 -0700, Linus Torvalds wrote:
>
> On Sat, 23 Jun 2007, Peter Zijlstra wrote:
>
> > On Thu, 2007-06-21 at 16:08 -0700, Linus Torvalds wrote:
> > >
> > > The vm_dirty_ratio thing is a global value, and I think we need that
> > > regardless (for the independent issue of memory deadlocks etc), but if we
> > > *additionally* had a per-device throttle
On Thu, 2007-06-21 at 16:08 -0700, Linus Torvalds wrote:
>
> On Thu, 21 Jun 2007, Matt Mackall wrote:
> >
> > Perhaps we want to throw some sliding window algorithms at it. We can
> > bound requests and total I/O and if requests get retired too slowly we
> > can shrink the windows. Alternately, we can grow the window if we're
> > retiring things within our desired
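Matt's sliding window amounts to AIMD congestion control (grow additively
while requests retire on time, halve when they lag), applied to in-flight
writeback. A toy sketch, with every name and constant made up:

/*
 * Toy AIMD window for in-flight writeback requests: grow additively
 * while requests retire within the latency target, shrink
 * multiplicatively when they retire too slowly.
 */
struct wb_window {
	unsigned int limit;		/* current max in-flight requests */
	unsigned int min, max;		/* clamp bounds                   */
};

#define TARGET_MSECS 100		/* desired retirement latency (made up) */

static void wb_request_retired(struct wb_window *w, unsigned int took_msecs)
{
	if (took_msecs <= TARGET_MSECS) {
		if (w->limit < w->max)
			w->limit += 1;	/* additive increase */
	} else {
		w->limit /= 2;		/* multiplicative decrease */
		if (w->limit < w->min)
			w->limit = w->min;
	}
}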
On Thu, 2007-06-21 at 12:54 -0400, Mark Lord wrote:
> Andrew Morton wrote:
> >
> > What do we actually want the kernel to *do*? Stated in terms of "when the
> > dirty memory state is A, do B" and "when userspace does C, the kernel should
> > do D".
>
> When we have dirty pages awaiting write-out,
> and the write-out device is completely idle,
> then we should be writing
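Mark's rule, restated in Andrew's "when state is A, do B" form: A = dirty
pages exist for a device and that device is idle; B = start writing
immediately rather than waiting for thresholds or the periodic flush. A
hypothetical sketch; none of these helpers are real kernel functions:

#include <stdbool.h>

struct device_state {
	unsigned long dirty_pages;	/* dirty pages queued for this device */
	bool          io_in_flight;	/* is the device currently busy?      */
};

/* Stub for illustration; real code would queue writeback work here. */
static void start_background_writeback(struct device_state *dev)
{
	(void)dev;
}

/* Hypothetical hook called from the periodic writeback path. */
static void maybe_kick_writeback(struct device_state *dev)
{
	/* state A: dirty data exists and the device sits idle ... */
	if (dev->dirty_pages > 0 && !dev->io_in_flight)
		start_background_writeback(dev);  /* ... do B: write it out */
}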
Dave Jones wrote:
On Mon, Jun 18, 2007 at 04:47:11PM -0700, Andrew Morton wrote:
> Frankly, I find it very depressing that the kernel defaults matter. These
> things are trivially tunable and you'd think that after all these years,
> distro initscripts would be establishing the settings, based upon expected
On Wed, 20 Jun 2007, Arjan van de Ven wrote:
>
> maybe that needs to be fixed? If you stopped dirtying after the initial
> bump.. is there a reason for the kernel to dump all that data to the
> disk in such a way that it disturbs interactive users?

No. I would argue that the kernel should try to
On Wed, 2007-06-20 at 10:17 -0700, Linus Torvalds wrote:
>
> On Wed, 20 Jun 2007, Peter Zijlstra wrote:
> >
> > Building on the per BDI patches, how about integrating feedback from the
> > full-ness of device queues. That is, when we are happily doing IO and we
> > cannot possibly saturate the active devices (as measured by their queue
> > never reaching 75%?) then we
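A sketch of the feedback loop Peter describes: sample each device queue's
occupancy and, while it never gets near full (the 75% mark he suggests),
leave writers alone; only tighten the throttle once the queue shows real
back-pressure. Names and numbers are illustrative only:

#include <limits.h>
#include <stdbool.h>

struct queue_sample {
	unsigned int used;	/* requests currently in the device queue */
	unsigned int depth;	/* total queue slots                      */
};

#define SATURATION_PCT 75	/* Peter's "never reaching 75%?" mark */

/*
 * If the queue stays below the saturation mark, the device is not the
 * bottleneck and dirtiers need not be throttled on its account.
 */
static bool device_saturated(const struct queue_sample *q)
{
	return q->used * 100 >= q->depth * SATURATION_PCT;
}

static void update_throttle(const struct queue_sample *q,
			    unsigned long *bdi_limit)
{
	if (device_saturated(q)) {
		/* back-pressure: shrink this device's share of dirty memory */
		*bdi_limit -= *bdi_limit / 8;
	} else if (*bdi_limit < ULONG_MAX / 2) {
		/* queue has headroom: let the dirty window grow again */
		*bdi_limit += *bdi_limit / 16 + 1;
	}
}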
On Wed, 2007-06-20 at 01:58 -0700, Andrew Morton wrote:
> On Wed, 20 Jun 2007 10:35:36 +0200 Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> > On Tue, 2007-06-19 at 21:44 -0700, Andrew Morton wrote:
> > >
> > > Anyway, this is all arse-about. What is the design? What algorithms
> > > do we need to implement to do this successfully? Answer me that, then
> > > we can decide upon these implementation details.
> >
> > Building on the per BDI patches, how about
On Wed, Jun 20 2007, Andrew Morton wrote:
> Perhaps our queues are too long - if the VFS _does_ back off, it'll take
> some time for that to have an effect.
>
> Perhaps the fact that the queue size knows nothing about the _size_ of the
> requests in the queue is a problem.

It's complicated, the size
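Andrew's point about request size: a 128-slot queue can hold anywhere from a
few hundred kilobytes to tens of megabytes of writeback depending on how big
each request is. A sketch of accounting queue fullness in bytes as well as
slots; this is illustrative, not the block layer's real structures:

struct byte_aware_queue {
	unsigned int  nr_requests;	/* classic slot accounting       */
	unsigned int  max_requests;	/* e.g. 128 slots                */
	unsigned long bytes_in_flight;	/* what the slots actually hold  */
	unsigned long max_bytes;	/* cap on outstanding data, too  */
};

/* Admit a request only if BOTH the slot and byte budgets allow it. */
static int queue_has_room(const struct byte_aware_queue *q,
			  unsigned long req_bytes)
{
	if (q->nr_requests >= q->max_requests)
		return 0;
	if (q->bytes_in_flight + req_bytes > q->max_bytes)
		return 0;
	return 1;
}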
From: Linus Torvalds <[EMAIL PROTECTED]>
Date: Tue, 19 Jun 2007 12:04:33 -0700 (PDT)
>
> On Tue, 19 Jun 2007, John Stoffel wrote:
> >
> > Shouldn't the vm_dirty_ratio be based on the speed of the device, and
> > not the size of memory?
>
> Yes. It should depend on:
>  - speed of the device(s) in question
>  - seekiness of the workload
>  - wishes of the user as per the latency of other operations.
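Linus's three inputs suggest a formula rather than a fixed ratio: the dirty
data a device may accumulate is roughly its write bandwidth, discounted for
seekiness, times the latency the user will tolerate. A back-of-the-envelope
sketch, with every name and constant invented for illustration:

/*
 * dirty_limit_bytes ~= effective_bandwidth * acceptable_latency
 *
 * e.g. a 40 MB/s disk with a 5 s latency budget => ~200 MB of dirty
 * data; a 1 MB/s iPod with the same budget => only ~5 MB.
 */
static unsigned long device_dirty_limit(unsigned long bw_bytes_per_sec,
					unsigned int seekiness_pct,
					unsigned int latency_secs)
{
	/* discount bandwidth for seeky loads (100% = fully sequential) */
	unsigned long effective_bw = bw_bytes_per_sec * seekiness_pct / 100;

	return effective_bw * latency_secs;
}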
On Tue, 19 Jun 2007, Linus Torvalds wrote:
>
> Yes. It should depend on:
>  - speed of the device(s) in question

Btw, this one can be quite a big deal. Try connecting an iPod and syncing
8GB of data to it. Oops.

So yes, it would be nice to have some per-device logic too. Tested patches would
Andrew Morton <[EMAIL PROTECTED]> writes:
>
> It seems too large. Memory sizes are going up faster than disk throughput
> and it seems wrong to keep vast amounts of dirty data floating about in
> memory like this. It can cause long stalls while the system writes back
> huge amounts of data and is
> "Andrew" == Andrew Morton <[EMAIL PROTECTED]> writes:
Andrew> On Mon, 18 Jun 2007 14:14:30 -0700
Andrew> Tim Chen <[EMAIL PROTECTED]> wrote:
>> IOZone write drops by about 60% when test file size is 50 percent of
>> memory. Rand-write drops by 90%.
Andrew> heh.
Andrew> (Or is that an
> Is it good to keep tons of dirty stuff around? Sure. It allows overwriting
> (and thus avoiding doing the write in the first place), but it also allows
> for a more aggressive IO scheduling, in that you have more writes that you
> can schedule.
it also allows for an elevator that can merge
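The elevator point in that follow-up: with more dirty data buffered,
physically adjacent writes can be merged into fewer, larger requests before
they hit the disk. A minimal sketch of that merge test, not the real
block-layer elevator code:

struct io_request {
	unsigned long sector;	/* start sector      */
	unsigned long nr;	/* length in sectors */
};

/* Back-merge: does 'next' begin exactly where 'req' ends? */
static int can_merge(const struct io_request *req,
		     const struct io_request *next)
{
	return req->sector + req->nr == next->sector;
}

static void merge(struct io_request *req, const struct io_request *next)
{
	req->nr += next->nr;	/* one larger sequential request */
}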
On Mon, 18 Jun 2007, Andrew Morton wrote:
> On Mon, 18 Jun 2007 14:14:30 -0700
> Tim Chen <[EMAIL PROTECTED]> wrote:
>
> > Andrew,
> >
> > The default vm_dirty_ratio changed from 40 to 10
> > for the 2.6.22-rc kernels in this patch:
>
> Yup.
>
> > IOZone write drops by about 60% when test file size is 50 percent of
> > memory. Rand-write drops by 90%.
Andrew,

The default vm_dirty_ratio changed from 40 to 10
for the 2.6.22-rc kernels in this patch:

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=07db59bd6b0f279c31044cba6787344f63be87ea;hp=de46c33745f5e2ad594c72f2cf5f490861b16ce1

IOZone write drops by about 60% when test file size is 50 percent of
memory. Rand-write drops by 90%.
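For reference, vm_dirty_ratio is a percentage of dirtyable memory: once
dirty pages exceed that fraction, writers are throttled into doing writeback
themselves. A simplified model of the threshold the 40 -> 10 change lowers
(the real calculation in mm/page-writeback.c also involves
dirty_background_ratio and per-task adjustments):

/* Simplified: the throttle threshold as a fraction of dirtyable memory. */
static unsigned long dirty_threshold_pages(unsigned long dirtyable_pages,
					   unsigned int dirty_ratio)
{
	return dirtyable_pages * dirty_ratio / 100;
}

/*
 * On a 4 GB machine (~1M dirtyable 4K pages), dropping vm_dirty_ratio
 * from 40 to 10 cuts the allowed dirty data from ~1.6 GB to ~400 MB,
 * which is why write-heavy benchmarks like IOZone notice.
 */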