I just realised that the original patch did not stick to the migration
bandwidth target. I have attached another version of the patch that
throttles writeback to the migration bandwidth when above the low watermark.
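Roughly, the idea is the following. This is a self-contained sketch
only; the struct and field names are hypothetical stand-ins, not the
dm-cache symbols, and the real change is in the attached patch:

#include <stdbool.h>

/* Hypothetical, simplified state -- not the real struct cache from
 * drivers/md/dm-cache-target.c. */
struct cache_state {
	unsigned long nr_dirty;             /* dirty cache blocks         */
	unsigned long low_watermark_blocks; /* low watermark, in blocks   */
	unsigned long migrated_this_period; /* writebacks issued so far   */
	unsigned long bandwidth_per_period; /* migration bandwidth target */
};

/* Decide whether to issue another writeback migration. */
static bool should_writeback(const struct cache_state *c)
{
	/* Below the low watermark: keep the blocks dirty, do nothing. */
	if (c->nr_dirty <= c->low_watermark_blocks)
		return false;

	/* Above it: write back, but stay within the bandwidth target. */
	return c->migrated_this_period < c->bandwidth_per_period;
}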
regards
Steve
diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
> "Mike" == Mike Snitzer writes:
Mike> Seems not, I think that BZ may be due to the LBPRZ clause in
Mike> commit 397737223 ("sd: Make discard granularity match logical
Mike> block size when LBPRZ=1") ?
Mike> should be: q->limits.discard_granularity = 1 * logical_block_size;
*blush*
Will fix.
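For anyone following along: discard_granularity is expressed in bytes,
so assigning a block count to it is off by a factor of the block size.
A reconstructed sketch of the LBPRZ branch in sd_config_discard()
(treat the exact surrounding lines as an approximation of the tree at
that commit, not a verbatim copy):

/* drivers/scsi/sd.c, sd_config_discard() -- reconstructed context */
if (sdkp->lbprz) {
	q->limits.discard_alignment = 0;
	/* Buggy: the field is in bytes, so this means one byte,
	 * not one logical block:
	 *	q->limits.discard_granularity = 1;
	 * Fixed: */
	q->limits.discard_granularity = 1 * logical_block_size;
}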
Hi,
Further to my previously posted patch, I have developed a patch that
adds a high and a low watermark, as a percentage of the cache disk,
where dm-cache will attempt to keep a percentage of the cache in a
dirty state to avoid writing so much data back to the slow disks.
It co-operates nicely wi
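In pseudo-C, the behaviour I am after is roughly this (a self-contained
sketch with made-up names, not the structures from the patch):

#include <stdbool.h>

/* Hypothetical watermark configuration. */
struct watermarks {
	unsigned long cache_blocks;  /* total blocks in the cache     */
	unsigned int  high_percent;  /* start writing back above this */
	unsigned int  low_percent;   /* stop writing back below this  */
};

/* Hysteresis between the two marks: start writeback when the dirty
 * count crosses the high watermark and stop once it falls under the
 * low one, so a working set of dirty blocks stays in the fast cache. */
static bool want_writeback(const struct watermarks *w,
			   unsigned long nr_dirty, bool writing_back)
{
	unsigned long high = w->cache_blocks * w->high_percent / 100;
	unsigned long low  = w->cache_blocks * w->low_percent / 100;

	if (nr_dirty >= high)
		return true;
	if (nr_dirty <= low)
		return false;
	return writing_back;  /* between the marks: keep current state */
}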
I just realised that I had also made a change to dm-cache-target.c
(attached) to make sure the policy set_dirty function is called every
time we write to a cache block, so that the writeback time is updated
for a hot block.
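The intent, as a self-contained sketch (all names here are stand-ins
for the real dm-cache structures and hooks, not the code in the
attached patch):

#define NR_BLOCKS 1024

/* Stand-in for the policy's per-block writeback bookkeeping. */
struct policy_state {
	unsigned long writeback_due[NR_BLOCKS]; /* per-block deadline */
};

/* Stand-in for the policy's set_dirty hook: refresh the deadline. */
static void policy_set_dirty(struct policy_state *p, unsigned int block,
			     unsigned long now)
{
	p->writeback_due[block] = now + 30; /* push writeback back */
}

/* Write path: call set_dirty on *every* write, not only on the first
 * clean->dirty transition, so a hot block keeps being deferred. */
static void on_write(struct policy_state *p, unsigned int block,
		     unsigned long now)
{
	policy_set_dirty(p, block, now);
}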
Steve
On 03/03/16 22:25, Steven Wilton wrote:
Hi,
Please find
On Fri, Mar 04, 2016 at 10:06:24AM -0600, Benjamin Marzinski wrote:
> On Fri, Feb 12, 2016 at 04:10:23PM +0800, Gris Ge wrote:
>
> This looks good to me. Personally, I would have loved to see
> multipathd actually passing structured data across the IPC
> connection, and have the multipath client co
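As an illustration only of what "structured data across the IPC
connection" could look like (nothing here is multipathd's actual
protocol or API; send_json_reply and the length-prefixed framing are
made up for the example):

#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Send one structured reply: a 4-byte native-endian length header,
 * then the JSON payload. Returns 0 on success, -1 on failure. */
static int send_json_reply(int fd, const char *json)
{
	uint32_t len = (uint32_t)strlen(json);

	if (write(fd, &len, sizeof(len)) != (ssize_t)sizeof(len))
		return -1;
	if (write(fd, json, len) != (ssize_t)len)
		return -1;
	return 0;
}

The client would then read the length, read that many bytes, parse the
JSON, and pick its own output format, instead of the daemon
pre-formatting tables for it.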