On 02/15/2018 11:51 AM, Austin S. Hemmelgarn wrote:
> There are scaling performance issues with directory listings on BTRFS for directories with more than a few thousand files, but they're not well documented (most people don't hit them because most applications are designed around the expectation that directory listings will be slow in big directories). I would not expect them to be much of an issue unless you're dealing with tens of thousands of files and particularly slow storage.

Understood -- thanks. The plan, then, is to keep it to around 1k entries per directory. We've done some fairly concrete testing here to find the fall-off point for dirent caching in BTRFS, and the sweet spot between having a large number of small directories cached vs. a few massive directories cached. ~1k seems most palatable for our use-case and directory tree structure.
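For what it's worth, one common way to enforce a per-directory cap like that is hash-based sharding: hash each logical file name into a fixed set of shard subdirectories so no single directory grows past roughly total_files / fanout entries. A minimal sketch (the `FANOUT` value and hashing scheme here are illustrative assumptions, not our actual layout):

```python
import hashlib
from pathlib import Path

FANOUT = 256  # assumption: number of shard subdirectories


def shard_path(root: Path, name: str) -> Path:
    # Hash the logical name into one of FANOUT buckets; the mapping is
    # deterministic, so lookups need no index -- just recompute the hash.
    digest = hashlib.sha1(name.encode()).hexdigest()
    bucket = int(digest[:4], 16) % FANOUT
    return root / f"{bucket:03d}" / name
```

With ~256 shards, a tree of ~250k files stays near the ~1k-entries-per-directory target.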

> I've only ever lost a BTRFS volume to a power failure _once_ in the multiple years I've been using it, and that ended up being because the power failure trashed the storage device pretty severely (it was super-cheap flash storage). I do know, however, that there are people who have had much worse results than me.

Good to know. We'll be running power-fail testing over the next couple of months; I'm currently waiting for some hardware to arrive. We'll power-cycle fairly large filesystems a few thousand times before we deem it safe to ship. If there are still latent power-fail bugs in BTRFS, I can guarantee we'll trip over them...
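On the verification side of a power-cycle loop like that, a simple approach is a checksum manifest: record a digest of every file before cutting power, then compare after remount. A minimal sketch of that idea (function names and layout are illustrative, not our actual harness):

```python
import hashlib
from pathlib import Path


def build_manifest(root: Path) -> dict[str, str]:
    # Record a SHA-256 checksum for every file under root
    # before the power cycle.
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


def verify_manifest(root: Path, manifest: dict[str, str]) -> list[str]:
    # After power restore, return the files that are missing or whose
    # contents no longer match the recorded checksum.
    current = build_manifest(root)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]
```

An empty result from `verify_manifest` after a few thousand cycles is the pass criterion; anything it returns is a file the crash corrupted or lost.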

> It's not exactly a 'general sense' or a hunch; issues with BTRFS on SMR drives have been pretty well demonstrated in practice, hence Duncan's statement, even though it most likely did not apply to you.

Ah, OK, thanks for clarifying. I appreciate the forewarning regardless.

Best,

ellis