Hi Chao,

On Fri, Apr 21, 2023 at 1:19 AM Chao Yu <c...@kernel.org> wrote:
>
> Hi JuHyung,
>
> Sorry for the delayed reply.
>
> On 2023/4/11 1:03, Juhyung Park wrote:
> > Hi Chao,
> >
> > On Tue, Apr 11, 2023 at 12:44 AM Chao Yu <c...@kernel.org> wrote:
> >>
> >> Hi Juhyung,
> >>
> >> On 2023/4/4 15:36, Juhyung Park wrote:
> >>> Hi everyone,
> >>>
> >>> I want to start a discussion on using f2fs for regular desktops/workstations.
> >>>
> >>> There is growing interest in using f2fs as the general root file-system:
> >>> 2018: https://www.phoronix.com/news/GRUB-Now-Supports-F2FS
> >>> 2020: https://www.phoronix.com/news/Clear-Linux-F2FS-Root-Option
> >>> 2023: https://code.launchpad.net/~nexusprism/curtin/+git/curtin/+merge/439880
> >>> 2023: https://code.launchpad.net/~nexusprism/grub/+git/ubuntu/+merge/440193
> >>>
> >>> I've been personally running f2fs on all of my x86 Linux boxes since 2015, and I have several concerns that I think we need to collectively address for regular non-Android normies to use f2fs:
> >>>
> >>> A. Bootloader and installer support
> >>> B. Host-side GC
> >>> C. Extended node bitmap
> >>>
> >>> I'll go through each one.
> >>>
> >>> === A. Bootloader and installer support ===
> >>>
> >>> It seems that both GRUB and systemd-boot support f2fs without the need for a separate ext4-formatted /boot partition. Some distros are seemingly disabling the f2fs module for GRUB, though, for security reasons:
> >>> https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1868664
> >>>
> >>> It's ultimately up to the distro folks to enable this, and even in the worst-case scenario, they can specify a separate /boot partition and format it as ext4 upon installation.
> >>>
> >>> Installer support for showing f2fs and calling mkfs.f2fs is currently being worked on for Ubuntu. See the 2023 links above.
> >>>
> >>> Nothing for f2fs mainline developers to do here, imo.
> >>>
> >>> === B. Host-side GC ===
> >>>
> >>> f2fs relieves most of the device-side GC but introduces new host-side GC. This is extremely confusing for people who have no background in SSDs and flash storage, let alone the discard/trim/erase complications.
> >>>
> >>> In most consumer-grade blackbox SSDs, device-side GC is handled automatically for various workloads. f2fs, however, leaves that responsibility to userspace with conservative tuning on the
> >>
> >> We've proposed an f2fs feature named "space-aware garbage collection" and shipped it in Huawei/Honor's devices, but forgot to try upstreaming it. :-P
> >>
> >> In this feature, we introduced three modes:
> >> - performance mode: something like write-gc in the FTL; it can trigger background gc more frequently and tune its speed according to the free segment and reclaimable block ratio.
> >> - lifetime mode: slows down background gc to avoid a high WAF when there is less free space.
> >> - balance mode: behaves as usual.
> >>
> >> I guess this may be helpful for Linux desktop distros since there is no such storage service to trigger gc_urgent.
> >
> > That indeed sounds interesting.
> >
> > If you need me to test something out, feel free to ask.
>
> Thanks a lot for that. :)
>
> I'm trying to figure out a patch...
>
> > I manually trigger gc_urgent from time to time on my 2TB SSD laptop (which, as a laptop, isn't left on 24/7, so f2fs has a bit of trouble finding enough idle time to trigger GC sufficiently).
> > If I don't, I run out of free segments within a few weeks.
>
> Have you ever tried to configure /sys/fs/f2fs/<disk>/gc_idle_interval?
>
> Set the value to 0 and check the free segment decrement over one day; from that you can infer whether free segments will be exhausted after a few weeks.
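
Sure, I can check that. Below is a rough, untested sketch of how I'd log it; it assumes the read-only free_segments entry under /sys/fs/f2fs/<disk>/ is available on my kernel, and the device name is only an example:

/* Sample the f2fs free segment count once an hour and print the delta,
 * to estimate how quickly free segments drain without background GC. */
#include <stdio.h>
#include <unistd.h>

static long read_free_segments(const char *dev)
{
	char path[128];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/fs/f2fs/%s/free_segments", dev);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	const char *dev = "nvme0n1p2";	/* example device name, adjust */
	long prev = read_free_segments(dev);

	while (prev >= 0) {
		sleep(3600);	/* sample once an hour */
		long cur = read_free_segments(dev);
		if (cur < 0)
			break;
		printf("free_segments: %ld (%+ld over the last hour)\n",
		       cur, cur - prev);
		prev = cur;
	}
	return 0;
}

Running something like that for a day or two of normal use should show how fast the count drains on my workload.
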
Well, I'm sure I can tune the sysfs tunables so that background GC works sufficiently for my workload. My main concern is that this is a manual process that not all users can do. My proposed way of hooking up GC to fstrim is a dirty but fool-proof way to ensure that f2fs is kept healthy. But if there's a more elegant way of handling this automatically, I'm all for it.

> >>> kernel-side by default. Android handles this by init.rc tunings and separate code running in vold to trigger gc_urgent.
> >>>
> >>> For regular Linux desktop distros, f2fs just runs with the default configuration set in the kernel, and unless it's running 24/7 with plentiful idle time, it quickly runs out of free segments and starts triggering foreground GC. This gives people the wrong impression that f2fs slows down far more drastically than other file-systems, when it's quite the contrary (i.e., less fragmentation over time).
> >>>
> >>> This is almost the equivalent of re-living the nightmare of trim. On SSDs with very small to no over-provisioned space, running a file-system with no discard whatsoever (sadly still a common case when an external SSD is used with no UAS) will also drastically slow
> >>
> >> What does UAS mean?
> >
> > USB Attached SCSI. It's a protocol that sends SCSI commands over USB. Most SATA-to-USB and NVMe-to-USB chips support it.
> >
> > AFAIK, it's the only way of sending trim commands and querying SMART data over USB. (Plus, it's faster.)
> >
> > If either the host or the chip doesn't support it, it's negotiated through "usb-storage" (aka mass-storage), which then prevents anyone from sending trim commands.
> >
> > The external SSD shenanigans are a whole other rant for another day...
>
> Thanks for the explanation.
>
> >>> the performance down. On file-systems with no asynchronous discard,
> >>
> >> There is no such performance issue in f2fs, right? As f2fs enables the discard mount option by default and supports the async discard feature.
> >
> > Yup. It's one of my favorite f2fs features :))
> >
> > Though imo it might be a good idea to explicitly recommend that people NOT disable it, as a lot of "how to improve SSD performance on Linux" guides online tell you to outright disable the "discard" mount option. Like you said, those concerns are invalid on f2fs.
> >
> > btrfs recently added discard=async and enabled it by default too, but I'm not sure if their implementation aims to do the same as what f2fs does.
> >
> >>> mounting a file-system with the discard option adds a non-negligible overhead to every remove/delete operation, so most distros now (thankfully) use a timer job registered to systemd to trigger fstrim:
> >>> https://github.com/util-linux/util-linux/commits/master/sys-utils/fstrim.timer
> >>>
> >>> This is still far from ideal. The default file-system, ext4, slows down drastically, almost to a halt, when fstrim -a is called, especially on SATA. For some reason that is still a mystery to me, people seem to be happy with it. No one has bothered to improve it for years ¯\_(ツ)_/¯.
> >>>
> >>> So here's my proposal: As Linux distros don't have a good mechanism for hinting when to trigger GC, introduce a new Kconfig, CONFIG_F2FS_GC_UPON_FSTRIM, and enable it by default.
> >>> This config will hook up ioctl(FITRIM), which is currently ignored on f2fs -
> >>> https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git/commit/?h=master&id=e555da9f31210d2b62805cd7faf29228af7c3cfb
> >>> - to perform discard and GC on all invalid segments. Userspace configurations with enough f2fs/GC knowledge, such as Android, should disable it.
> >>>
> >>> This will ensure that Linux distros that blindly call fstrim will at least avoid the constant slowdowns of depleted free segments, at the cost of an occasional (once-a-week) slowdown, which *people are already living with on ext4*. I'll even go further and mention that since f2fs GC is a regular R/W workload, it doesn't cause an extreme slowdown comparable to a full file-system trim operation.
> >>>
> >>> If this is acceptable, I'll cook up a patch.
> >>>
> >>> In an ideal world, all Linux distros should have an explicit f2fs GC trigger mechanism (akin to https://github.com/kdave/btrfsmaintenance#distro-integration ), but it's practically unrealistic to expect that, given that the installer doesn't even support f2fs for now.
> >>>
> >>> === C. Extended node bitmap ===
> >>>
> >>> f2fs by default has a very limited number of allowed inodes compared to other file-systems. Just 2 AOSP syncs are enough to exhaust them and result in -ENOSPC.
> >>>
> >>> Here are some of the stats collected from machines that my colleague and I use daily as regular desktops, with a GUI, web browsing and everything:
> >>> 1. Laptop
> >>> Utilization: 68% (182914850 valid blocks, 462 discard blocks)
> >>> - Node: 10234905 (Inode: 10106526, Other: 128379)
> >>> - Data: 172679945
> >>> - Inline_xattr Inode: 2004827
> >>> - Inline_data Inode: 867204
> >>> - Inline_dentry Inode: 51456
> >>>
> >>> 2. Desktop #1
> >>> Utilization: 55% (133310465 valid blocks, 0 discard blocks)
> >>> - Node: 6389660 (Inode: 6289765, Other: 99895)
> >>> - Data: 126920805
> >>> - Inline_xattr Inode: 2253838
> >>> - Inline_data Inode: 1119109
> >>> - Inline_dentry Inode: 187958
> >>>
> >>> 3. Desktop #2
> >>> Utilization: 83% (202222003 valid blocks, 1 discard blocks)
> >>> - Node: 21887836 (Inode: 21757139, Other: 130697)
> >>> - Data: 180334167
> >>> - Inline_xattr Inode: 39292
> >>> - Inline_data Inode: 35213
> >>> - Inline_dentry Inode: 1127
> >>>
> >>> 4. Colleague
> >>> Utilization: 22% (108652929 valid blocks, 362420605 discard blocks)
> >>> - Node: 5629348 (Inode: 5542909, Other: 86439)
> >>> - Data: 103023581
> >>> - Inline_xattr Inode: 655752
> >>> - Inline_data Inode: 259900
> >>> - Inline_dentry Inode: 193000
> >>>
> >>> 5. Android phone (for reference)
> >>> Utilization: 78% (36505713 valid blocks, 1074 discard blocks)
> >>> - Node: 704698 (Inode: 683337, Other: 21361)
> >>> - Data: 35801015
> >>> - Inline_xattr Inode: 683333
> >>> - Inline_data Inode: 237470
> >>> - Inline_dentry Inode: 112177
> >>>
> >>> Chao Yu added functionality to expand this via the -i flag passed to mkfs.f2fs back in 2018 -
> >>> https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git/commit/?id=baaa076b4d576042913cfe34169442dfda651ca4
> >>>
> >>> I occasionally find myself in the weird position of having to tell people "Oh, you should use the -i option from mkfs.f2fs" when they encounter this issue only after they've migrated most of the data, and they ask back "Why isn't this enabled by default?".
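
(A side note for anyone wondering how close they are to hitting this: f2fs reports its node limit through statfs(), so a plain "df -i", or a tiny statvfs() check like the untested sketch below, shows the remaining inode headroom before -ENOSPC. The mount point here is just an example.)

/* Print total/free inode counts for a mounted f2fs volume via statvfs(). */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
	const char *mnt = "/";	/* example mount point, adjust */
	struct statvfs st;

	if (statvfs(mnt, &st) != 0) {
		perror("statvfs");
		return 1;
	}

	printf("inodes: total=%llu free=%llu (%.1f%% used)\n",
	       (unsigned long long)st.f_files,
	       (unsigned long long)st.f_ffree,
	       st.f_files ? 100.0 * (st.f_files - st.f_ffree) / st.f_files : 0.0);
	return 0;
}
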
> >>>
> >>> While this might not be an issue for the foreseeable future in Android, I'd argue that this is a feature that needs to be enabled by default for desktop environments, preferably with a robust testing
> >>
> >> Yes, I guess we need to add some test cases and do some robustness tests for the large nat_bitmap feature first. Since I do remember that its design flaw once corrupted data. :(
> >
> > And I'm glad to report everything's been rock solid ever since that fix :) I'm actively using it on many of my systems.
> >
> > One thing to note here is that my colleague (Alexander Koskovich) ran into an fsck issue with that feature enabled on a large SSD, preventing boot. I didn't encounter it as I never had f2fs-tools installed on my system (which also tells you how robust f2fs has been for years on my setup, with multiple sudden power-offs and without any fsck runs).
> >
> > See here for the fix if you missed it:
> > https://lore.kernel.org/all/cad14+f0fbtxfad_dm-ryfipbaong-b-6hqrms2m4riidx9y...@mail.gmail.com/
>
> Oh, I missed that one.
>
> I added a comment on it, please check it.
>
> > Btw, is there a downside (e.g., more disk usage, slower performance) from using large nat_bitmap except for legacy kernel compatibility? I
>
> - f2fs needs to reserve more NAT space, which may be wasted if there are fewer nodes (including inodes), but I think that is a trade-off.
>
> Do you suffer any performance issues when using nat_bitmap?

Not that I've noticed, but I haven't exactly run any benchmarks either.

Years ago, when I was switching my main system from ext4 to f2fs, the only performance hit that I noticed was slow directory traversal, which was later addressed with readdir_ra. Years after that, I switched to the extended node bitmap, and the only issue there was the data corruption, which was also fixed. I did not notice any performance issues from using the extended node bitmap.

I haven't used ext4 since then personally, but I did recently switch a production server from ext4 to f2fs to avoid performance degradation from the lack of discard. This isn't really related to the extended node bitmap specifically, but rather to how f2fs and ext4 each behave. If you're interested in this workload, read on:

The production server that I've mentioned basically uses an SSD as a temporary cache, as RAM is insufficient. Once a workload is triggered, it writes 50-100 GiB of data to the cache (SSD), and the resulting output (10-30 GiB) is stored to HDD. After the data is stored to HDD, the corresponding cache on the SSD is removed. Multiple instances of said workload can co-exist, so the SSD is subject to an immense write-intensive workload, and we almost immediately hit thermal-throttling territory if we start 3-4 instances. (There's nothing we can do to mitigate the thermal throttling.)

Our major concern here was that, with ext4, the SSD runs out of free blocks as the file-system doesn't perform trim, and future workloads suffer a severe performance penalty. To mitigate that, we either have to use the 'discard' option, which delays each file removal, or run fstrim at the end of each workload, which results in major I/O stalls big enough to impact other workloads running at the same time.

So we deployed f2fs. At first, we didn't see much of a performance gain, as f2fs was too conservative in issuing trim (not GC). We tuned the sysfs idle intervals, but it didn't help much either.
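
To be concrete about what "tuned the sysfs idle intervals" means: we lowered the second-granularity knobs under /sys/fs/f2fs/<disk>/, roughly as in the sketch below (untested as written here; the device name and values are made up for illustration, not our exact settings, and it needs root):

/* Lower the f2fs idle thresholds so background trim/GC kicks in sooner. */
#include <stdio.h>

static int write_sysfs(const char *dev, const char *knob, const char *val)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/fs/f2fs/%s/%s", dev, knob);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%s\n", val);
	return fclose(f);
}

int main(void)
{
	const char *dev = "sda1";	/* example device name, adjust */

	/* illustrative values; these knobs are in seconds */
	write_sysfs(dev, "idle_interval", "1");
	write_sysfs(dev, "discard_idle_interval", "1");
	write_sysfs(dev, "gc_idle_interval", "5");
	return 0;
}
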
Maybe if idle_interval and discard_idle_interval were in milliseconds, they would have allowed us to tune better.

We recently switched to always using gc_urgent low(2). f2fs now aggressively issues trim, the free blocks stay well within our comfort zone, and we didn't see any noticeable performance drop as the number of parallel workloads increased. We've been sticking with gc_urgent low(2) ever since on this particular setup, and we're happy with it :)

> Thanks,
>
> > was guessing not, but might as well ask to be sure.
> >
> > Thanks, regards
> >
> >> Thanks,
> >>
> >>> infrastructure. Guarding this with #ifndef __ANDROID__ doesn't seem to make much sense, as it introduces more complications to how fuzzing/testing should be done.
> >>>
> >>> I'll also add that it's common practice for userspace mkfs tools to introduce default changes that break older kernels (with options to produce a legacy image, of course).
> >>>
> >>> This was a lengthy email, but I hope I was being reasonable.
> >>>
> >>> Jaegeuk and Chao, let me know what you think.
> >>> And as always, thanks for your hard work :)
> >>>
> >>> Thanks,
> >>> regards

_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel