On 2018-02-13 18:21, John Ettedgui wrote:
> On Thu, Jul 21, 2016 at 1:19 AM, Qu Wenruo <quwen...@cn.fujitsu.com> wrote:
>>
>> No more.
>>
>> The dump is already good enough for me to dig into for some time.
>>
>> We don't usually get such a large extent tree dump from a real-world
>> use case.
>>
>> It would help us in several ways, from determining how fragmented a
>> block group is to determining whether a defrag will help.
>>
>> Thanks,
>> Qu
>
> Hello there,
>
> have you found anything good since then?
Unfortunately, not really much to speed it up.

This reminds me of the old (and crazy) idea to skip the block group build
for RO mounts, but that doesn't really help here either.

> With a default system, the behavior is pretty much still the same,
> though I have not recreated the partitions since.
>
> Defrag helps, but I think balance helps even more.
> clear_cache may help too, but I'm not really sure as I've not tried it
> on its own.
> I was actually able to get a 4TB partition on a 5400rpm HDD to mount
> in around 500ms, quite a bit faster than even some GB partitions I have
> on SSDs! Alas I wrote some files to it and it's taking over a second
> again, so no more magic there.

The problem is not how much space the filesystem takes, but how many
extents it contains.

For a fresh fs filled with normal data, I'm pretty sure the data extents
will be at their maximum size (256M), putting very little or even no
pressure on the block group search.

> The workarounds do work, so it's still not a major issue, but they're
> slow and sometimes I have to work around the "no space left on device"
> error, which then takes even more time.

And since I moved to SUSE, some mail/info was lost along the way.
Despite that, I have several more assumptions about this problem:

1) Metadata usage bumped by inline files

If there are a lot of small files (<2K by default) and your metadata
usage is quite high (generally speaking, the meta:data ratio should be
way below 1:8), that may be the cause.

If so, try mounting the fs with the "max_inline=0" mount option and then
rewrite such small files.

2) SSD write amplification along with dynamic remapping

To be honest, I'm not really buying this idea, since mount doesn't do
anything write-related. But running fstrim won't harm anyway.

3) Rewrite the existing files (extreme defrag)

In fact, defrag doesn't work well if there are subvolumes/snapshots/
reflinks involved.
The most stupid and mindless way is to write a small script that finds
all regular files, reads them out, and rewrites them back. This should
work much better than a traditional defrag, although it's time-consuming
and makes snapshots completely meaningless. (And since you're already
hitting ENOSPC, I don't think this idea will really work for you.)

Since you're already hitting ENOSPC, it's either caused by unbalanced
meta/data usage, or the fs is really reaching its limit. I would
recommend enlarging the fs or deleting some files to see if that helps.

Thanks,
Qu

> Thank you!
> John
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
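P.S. The "rewrite everything" script mentioned above could look roughly
like the sketch below. It is untested; the function name and the target
path are placeholders, and it assumes a POSIX shell with GNU-style
mktemp available. As noted, it breaks snapshot/reflink sharing (shared
extents get duplicated, so space usage can grow), so it should not be
run on a filesystem that is already near ENOSPC.

```shell
# "Extreme defrag" sketch: read every regular file out and write the same
# bytes back, so btrfs's copy-on-write allocates fresh data extents.
# WARNING: untested sketch; breaks snapshot/reflink sharing and can
# increase space usage. Do not run on an fs that is already near ENOSPC.
rewrite_all() {
    # -xdev: stay on one filesystem; -exec ... + batches files per shell
    find "$1" -xdev -type f -exec sh -c '
        for f in "$@"; do
            tmp=$(mktemp) || exit 1
            # Read the file out, then rewrite the same bytes back over the
            # original inode (truncate + write), so ownership, permissions
            # and timestamps-by-inode semantics are preserved.
            cat -- "$f" > "$tmp" && cat -- "$tmp" > "$f"
            rm -f -- "$tmp"
        done
    ' sh {} +
}

# Example (placeholder mount point):
# rewrite_all /mnt/btrfs
```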