Hi,
Thanks for your comments.
I'm sorry for my late reply.
Bill Davidsen wrote:
>> Then, what do you think of the following idea?
>>
>> (4) add `dirty_start_writeback_ratio' as percentage of memory,
>> at which a generator of dirty pages itself starts writeback
>> (that is, non-blocking ra
Tomoki Sekiyama wrote:
Hi,
Thanks for your reply.
3) Use "dirty_ratio" as the blocking ratio. And add
"start_writeback_ratio", and start writeback at
start_writeback_ratio(default:90) * dirty_ratio / 100 [%].
In this way, specifying blocking ratio can be done in the same way
as curre
Hi,
Thanks for your reply.
>>3) Use "dirty_ratio" as the blocking ratio. And add
>> "start_writeback_ratio", and start writeback at
>> start_writeback_ratio(default:90) * dirty_ratio / 100 [%].
>> In this way, specifying blocking ratio can be done in the same way
>> as current kernel, but
Tomoki Sekiyama wrote:
Hi,
Thanks for your comments.
I'm sorry for my late reply.
Bill Davidsen wrote:
> Andrew Morton wrote:
>> - I wonder if dirty_limit_ratio is the best name we could choose.
>> vm_dirty_blocking_ratio, perhaps? Dunno.
>>
> I don't like it, but I dislike it less than "dirt
Hi,
Thanks for your comments.
I'm sorry for my late reply.
Bill Davidsen wrote:
> Andrew Morton wrote:
>>> On Wed, 14 Mar 2007 21:42:46 +0900 Tomoki Sekiyama
>>> <[EMAIL PROTECTED]> wrote:
>>>
>>> ...
>>>
>>>
>>> -Solution:
>>>
>>> I consider that all of the dirty pages for the disk have been wr
Andrew Morton wrote:
On Wed, 14 Mar 2007 21:42:46 +0900 Tomoki Sekiyama <[EMAIL PROTECTED]> wrote:
...
-Solution:
I consider that all of the dirty pages for the disk have been written
back and that the disk is clean if a process cannot write 'write_chunk'
pages in balance_dirty_pages().
To a
> On Wed, 14 Mar 2007 21:42:46 +0900 Tomoki Sekiyama <[EMAIL PROTECTED]> wrote:
>
> ...
>
>
> -Solution:
>
> I consider that all of the dirty pages for the disk have been written
> back and that the disk is clean if a process cannot write 'write_chunk'
> pages in balance_dirty_pages().
>
> To av
Hi,
I've been working on an alternative solution (see patch below). However
I haven't posted yet because I'm not quite satisfied and haven't done a
lot of testing.
The patch relies on the per backing dev dirty/writeback counts currently
in -mm to which David Chinner objected. I plan to rework tho
Hi,
I ported the patch sent before to 2.6.21-rc3-mm2, so I'm resending it.
( Previous patch is available at
http://marc.info/?l=linux-kernel&m=117223267512340&w=2 )
-Summary:
I have observed a problem that write(2) can be blocked for a long time
if a system has several disks and is under heavy
Hi,
Thank you for your comments.
Leroy van Logchem wrote:
>The default dirty_ratio on most 2.6 kernels tends to be too large imo.
>If you are going to do sustained writes multiple times the size of
>the memory you have at least two problems.
>
>1) The precious dentry and inodecache will be dropped
Hi,
On Fri, 2007-03-02 at 13:06 +, Leroy van Logchem wrote:
> > I'm sorry to piggy-back this thread.
> >
> > Could it be what I'm experiencing in the following bugzilla report:
> > http://bugzilla.kernel.org/show_bug.cgi?id=7372
> >
> > As I explained in the report, I see this issue only sin
> I'm sorry to piggy-back this thread.
>
> Could it be what I'm experiencing in the following bugzilla report:
> http://bugzilla.kernel.org/show_bug.cgi?id=7372
>
> As I explained in the report, I see this issue only since 2.6.18.
> So if your concern is related to mine, what could have changed b
Hi,
On Thu, 2007-03-01 at 12:47 +, Leroy van Logchem wrote:
> Tomoki Sekiyama hitachi.com> writes:
> > thanks for your comments.
>
> The default dirty_ratio on most 2.6 kernels tends to be too large imo.
> If you are going to do sustained writes multiple times the size of
> the memory you hav
Hi Kamezawa-san,
KAMEZAWA Hiroyuki wrote:
>>> Interesting, but how about adjusting this parameter as below instead of
>>> adding a new control knob? (This kind of knob is not easy to use.)
>>> count_dirty_pages_on_device_limited(bdi, writechunk) above returns
>>> dirty pages on bdi. if # of dirt
Tomoki Sekiyama hitachi.com> writes:
> thanks for your comments.
The default dirty_ratio on most 2.6 kernels tends to be too large imo.
If you are going to do sustained writes multiple times the size of
the memory you have at least two problems.
1) The precious dentry and inodecache will be dro
On Tue, 27 Feb 2007 09:50:16 +0900
Tomoki Sekiyama <[EMAIL PROTECTED]> wrote:
> Hi Kamezawa-san,
>
> thanks for your reply.
>
> KAMEZAWA Hiroyuki wrote:
> > Interesting, but how about adjusting this parameter as below instead of
> > adding a new control knob? (This kind of knob is not easy to use.)
Hi Nikita,
thanks for your comments.
Nikita Danilov wrote:
>> While Dirty+Writeback pages get more than 40% of memory, process-B is
>> blocked in balance_dirty_pages() until writeback of some (`write_chunk',
>> typically = 1536) dirty pages on disk-b is started.
>
> Maybe the simpler solution is
Hi Kamezawa-san,
thanks for your reply.
KAMEZAWA Hiroyuki wrote:
> Interesting, but how about adjusting this parameter as below instead of
> adding a new control knob? (This kind of knob is not easy to use.)
>
> ==
> struct writeback_control wbc = {
> .bdi
Tomoki Sekiyama writes:
> Hi,
Hello,
>
[...]
>
> While Dirty+Writeback pages get more than 40% of memory, process-B is
> blocked in balance_dirty_pages() until writeback of some (`write_chunk',
> typically = 1536) dirty pages on disk-b is started.
Maybe the simpler solution is to use
On Fri, 23 Feb 2007 21:03:37 +0900
Tomoki Sekiyama <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have observed a problem that write(2) can be blocked for a long time
> if a system has several disks and is under heavy I/O pressure. This
> patchset is to avoid the problem.
>
> Example of the problem:
>
Hi,
I have observed a problem that write(2) can be blocked for a long time
if a system has several disks and is under heavy I/O pressure. This
patchset is to avoid the problem.
Example of the problem:
There are two processes on a system which has two disks. Process-A
writes heavily to disk-a, an