[ Sending again with a trimmed CC list to just the lists. Jeff - cc
lists that large get blocked by mailing lists... ]

On Tue, Apr 07, 2015 at 02:55:13PM -0400, Jeff Moyer wrote:
> The way the on-stack plugging currently works, each nesting level
> flushes its own list of I/Os. This can be less than optimal (read
> awful) for certain workloads. For example, consider an application
> that issues asynchronous O_DIRECT I/Os. It can send down a bunch of
> I/Os together in a single
Hi,

That makes sense to me.

Acked-by: Steven Whitehouse <swhit...@redhat.com>

Steve.
On 07/04/15 18:48, Abhi Das wrote:
> quotad periodically syncs in-memory quotas to the on-disk quota file
> and sets the QDF_REFRESH flag so that a subsequent read of a synced
> quota is re-read from disk.
This looks good, but without the blk_finish_plug argument we're bound
to grow programming mistakes where people forget it. Any chance we
could have annotations similar to, say, rcu_read_lock()/rcu_read_unlock()
or the spinlocks, so that sparse warns about it?
----- Original Message -----
> quotad periodically syncs in-memory quotas to the on-disk quota file
> and sets the QDF_REFRESH flag so that a subsequent read of a synced
> quota is re-read from disk.
>
> gfs2_quota_lock() checks for this flag and sets a 'force' bit to
> force re-read from disk if