On Fri, Oct 16, 2015 at 07:08:37PM -0400, Zygo Blaxell wrote:
> On Fri, Oct 16, 2015 at 06:50:08PM +0200, David Sterba wrote:
> > The 'limit' filter is underdesigned, it should have been a range for
> > [min,max], with some relaxed semantics when one of the bounds is
> > missing. Besides that, using a full u64 for a single value is a waste of
> > bytes.
> 
> What is min for?
> 
> If we have more than 'max' matching chunks, we'd process 'max' and stop.

Right.

> If we have fewer than 'min' chunks, we'd process what we have and run out.

If we have fewer than min, we'll do nothing.

> If we have more than 'min' chunks but fewer than 'max' chunks, why would
> we stop before we reach 'max' chunks?

I must be missing something here. If there are fewer than 'max' chunks
(with all filters applied), how are we supposed to reach 'max'?
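The combined [min,max] semantics described above could be sketched like this (a hypothetical helper for illustration only, not actual kernel or btrfs-progs code; `chunks` stands for the chunks that already passed all other filters, and either bound may be absent):

```python
def apply_limit(chunks, lmin=None, lmax=None):
    """Apply a [min,max] limit filter to an already-filtered chunk list."""
    # Fewer matches than 'min': skip the whole batch, do nothing.
    if lmin is not None and len(chunks) < lmin:
        return []
    # More matches than 'max': process 'max' chunks and stop.
    if lmax is not None:
        return chunks[:lmax]
    return chunks

# limit=2.. with only one matching chunk: nothing is balanced
print(apply_limit(["c1"], lmin=2))            # []
# limit=..2 with three matching chunks: only the first two are processed
print(apply_limit(["c1", "c2", "c3"], lmax=2))  # ['c1', 'c2']
```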

The 'limit' filter is applied after all the other filters. It can be
used standalone, or together with e.g. usage. A use case where I find
it useful:

* we want to make metadata chunks more compact
* let's set usage=30
* avoid unnecessary work (which is a bigger performance hit in the case
  of metadata chunks), i.e. if there's only a single chunk with usage
  < 30%, let's skip it

The command:

 $ btrfs balance start -musage=30,limit=2.. /path

So in case there's only one such chunk, balance would just move it
without any gain. Avoiding the unnecessary work here is IMO a win,
though the effect might seem limited. But I'm going to utilize this in
the btrfsmaintenance package, which among other things does periodic
balancing in the background, and I believe the effects will be
noticeable there.