>         for (;;) {
>                 struct writeback_control wbc = {
>                         .bdi            = bdi,
>                         .sync_mode      = WB_SYNC_NONE,
>                         .older_than_this = NULL,
>                         .nr_to_write    = write_chunk,
>                         .range_cyclic   = 1,
>                 };
> 
>                 get_dirty_limits(&background_thresh, &dirty_thresh,
>                                 &bdi_thresh, bdi);
>                 bdi_nr_reclaimable = bdi_stat(bdi, BDI_DIRTY) +
>                                         bdi_stat(bdi, BDI_UNSTABLE);
> (A)             if (bdi_nr_reclaimable + bdi_stat(bdi, BDI_WRITEBACK) <=
>                         bdi_thresh)
>                                 break;
> 
>                 /* Note: nr_reclaimable denotes nr_dirty + nr_unstable.
>                  * Unstable writes are a feature of certain networked
>                  * filesystems (i.e. NFS) in which data may have been
>                  * written to the server's write cache, but has not yet
>                  * been flushed to permanent storage.
>                  */
> (B)              if (bdi_nr_reclaimable) {
>                         writeback_inodes(&wbc);
> 
>                         get_dirty_limits(&background_thresh, &dirty_thresh,
>                                        &bdi_thresh, bdi);
>                         bdi_nr_reclaimable = bdi_stat(bdi, BDI_DIRTY) +
>                                                 bdi_stat(bdi, BDI_UNSTABLE);
> (C)                     if (bdi_nr_reclaimable + bdi_stat(bdi, BDI_WRITEBACK) <=
>                                 bdi_thresh)
>                                 break;
> 
>                         pages_written += write_chunk - wbc.nr_to_write;
>                         if (pages_written >= write_chunk)
>                                 break;          /* We've done our duty */
>                 }
>                 congestion_wait(WRITE, HZ/10);
>         }
> 
> I'm thinking that if bdi_nr_reclaimable == 0, A reduces to
> bdi_stat(bdi, BDI_WRITEBACK) <= bdi_thresh and we're still out of the
> loop, no?
> 

Yep, sorry.
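
(Spelling it out with the same identifiers as above, nothing new: with
bdi_nr_reclaimable == 0 the test at A is just

	/* A with bdi_nr_reclaimable == 0 */
	if (bdi_stat(bdi, BDI_WRITEBACK) <= bdi_thresh)
		break;

so the loop exits at A as soon as the bdi's writeback count drops to
bdi_thresh or below.)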

I'll review this patch; there seem to be some odd things, like
calculating dirty_thresh and then never using it...
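
Just to illustrate what I mean (a rough sketch only, not from the
patch: nr_reclaimable here is a hypothetical local for the global
counts, using the mainline global_page_state() counters), dirty_thresh
would normally take part in the break test, roughly:

	/*
	 * Sketch only: back the per-bdi limit with the global
	 * dirty_thresh, the way the vanilla loop uses it.
	 */
	nr_reclaimable = global_page_state(NR_FILE_DIRTY) +
				global_page_state(NR_UNSTABLE_NFS);
	if (bdi_nr_reclaimable + bdi_stat(bdi, BDI_WRITEBACK) <= bdi_thresh &&
	    nr_reclaimable + global_page_state(NR_WRITEBACK) <= dirty_thresh)
		break;

That keeps the global limit as a backstop on top of the per-bdi one.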

Btw, would you mind measuring the performance of my patch with the
slow/fast device mix, to see how it compares to your patchset and the
vanilla kernel?

Thanks,
Miklos