Le 04/12/2018 at 03:52, Chris Murphy wrote:
> On Mon, Dec 3, 2018 at 1:04 PM Lionel Bouton
> <lionel-subscript...@bouton.name> wrote:
>> Le 03/12/2018 at 20:56, Lionel Bouton wrote:
>>> [...]
>>> Note: recently I tried upgrading from 4.9 to 4.14 kernels, various
>>> tuning of the io queue (switching between classic io-schedulers and
>>> blk-mq ones in the virtual machines) and BTRFS mount options
>>> (space_cache=v2,ssd_spread) but there wasn't any measurable improvement
>>> in mount time (I managed to reduce the mount of IO requests
>>
>> Sent too quickly: I meant to write "managed to reduce by half the number
>> of IO write requests for the same amount of data written"
>>
>>> by half on
>>> one server in production though, although more tests are needed to
>>> isolate the cause).
>
> Interesting. I wonder if it's ssd_spread or space_cache=v2 that
> reduces the writes by half, or by how much for each? That's a major
> reduction in writes, and suggests it might be possible for further
> optimization, to help mitigate the wandering trees impact.
Note, the other major changes were:
- upgrading the kernel from 4.9 to 4.14,
- using the multi-queue aware bfq scheduler instead of noop.

If BTRFS IO patterns in our case allow bfq to merge IO requests, this
could be another explanation.

Lionel
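For anyone wanting to reproduce this setup, the knobs discussed above are the
per-device scheduler in sysfs and the BTRFS mount options. Below is a minimal
sketch; the device name (/dev/vda) and mount point (/mnt/data) are placeholders
for illustration, not taken from the original setup, and the exact scheduler
names available depend on the kernel configuration:

```shell
# Check which schedulers the (blk-mq) device offers; the active one is
# shown in brackets, e.g. "[none] mq-deadline bfq":
cat /sys/block/vda/queue/scheduler

# Switch to bfq (requires the bfq module / CONFIG_IOSCHED_BFQ):
echo bfq > /sys/block/vda/queue/scheduler

# Mount with the options tested above. Note that space_cache=v2 converts
# the free-space cache on disk and persists afterwards, while ssd_spread
# must be passed on every mount:
mount -o space_cache=v2,ssd_spread /dev/vda /mnt/data

# Equivalent /etc/fstab entry:
# /dev/vda  /mnt/data  btrfs  space_cache=v2,ssd_spread  0 0

# To compare write request counts before/after a change, field 5 of the
# device's line in /proc/diskstats is the number of completed writes:
grep ' vda ' /proc/diskstats
```

Comparing the write counters from /proc/diskstats over the same workload,
before and after each individual change, would be one way to isolate whether
bfq's request merging or the mount options account for the halved write count.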