Well, it's at zero now...

# btrfs fi df /export/
Data, single: total=30.45TiB, used=30.25TiB
System, DUP: total=32.00MiB, used=3.62MiB
Metadata, DUP: total=66.50GiB, used=65.16GiB
GlobalReserve, single: total=512.00MiB, used=0.00B


On 01/12/17 16:47, Duncan wrote:
> Hans van Kranenburg posted on Fri, 01 Dec 2017 18:06:23 +0100 as
> excerpted:
>
>> On 12/01/2017 05:31 PM, Matt McKinnon wrote:
>>> Sorry, I missed your in-line reply:
>>>
>>>> 2) How big is this filesystem? What does your `btrfs fi df
>>>> /mountpoint` say?
>>>
>>> # btrfs fi df /export/
>>> Data, single: total=30.45TiB, used=30.25TiB
>>> System, DUP: total=32.00MiB, used=3.62MiB
>>> Metadata, DUP: total=66.50GiB, used=65.08GiB
>>> GlobalReserve, single: total=512.00MiB, used=53.69MiB
>>
>> Multi-TiB filesystem, check. total/used ratio looks healthy.

> Not so healthy, from here.  Data/metadata are healthy, yes, but...
>
> Any usage at all of global reserve is a red flag indicating that
> something in the filesystem thinks, or thought when it resorted to
> global reserve, that space is running out.
>
> Global reserve usage doesn't really hint at what the problem is, but
> it's definitely a red flag that there /is/ a problem, and it's easily
> overlooked, as it apparently was here.
>
> It's likely an indication of a bug, possibly one of those fixed right
> around 4.12/4.13.  I'll let the devs and better experts take it from
> there, but I'd certainly be worried until global reserve usage drops
> to zero.
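Duncan's advice boils down to watching the `GlobalReserve ... used=` line and treating anything above 0.00B as a warning sign. A minimal sketch of that check, parsing the pasted output above (the helper names are mine, not part of btrfs-progs; on a live system you'd capture the output with `subprocess` instead of a literal string):

```python
import re

# Sample `btrfs fi df` output from the thread; on a real box you would
# capture this via subprocess.run(["btrfs", "fi", "df", "/export"], ...).
SAMPLE = """\
Data, single: total=30.45TiB, used=30.25TiB
System, DUP: total=32.00MiB, used=3.62MiB
Metadata, DUP: total=66.50GiB, used=65.08GiB
GlobalReserve, single: total=512.00MiB, used=53.69MiB
"""

UNITS = {"B": 1, "KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def to_bytes(s):
    """Convert a size like '53.69MiB' or '0.00B' to bytes."""
    num, unit = re.fullmatch(r"([\d.]+)([KMGT]?i?B)", s).groups()
    return float(num) * UNITS[unit]

def reserve_used(output):
    """Return GlobalReserve 'used' in bytes, or None if the line is absent."""
    for line in output.splitlines():
        if line.startswith("GlobalReserve"):
            return to_bytes(re.search(r"used=([\d.]+[KMGT]?i?B)", line).group(1))
    return None

used = reserve_used(SAMPLE)
if used:
    print(f"warning: global reserve in use ({used / 2**20:.2f} MiB)")
```

With the `used=0.00B` output from the top of the thread, `reserve_used` returns 0.0 and nothing is printed, matching the "it's at zero now" state.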

