January 5, 2021 7:20 PM, cedric.dew...@eclipso.eu wrote:

>>> I was expecting btrfs to do almost all reads from the fast SSD, as both
>>> the data and the metadata are on that drive, so the slow hdd is only
>>> really needed when there's a bitflip on the SSD, and the data has to be
>>> reconstructed.
>>
>> IIRC there will be some read policy feature to do that, but not yet
>> merged, and even merged, you still need to manually specify the
>> priority, as there is no way for btrfs to know which drive is faster
>> (except the non-rotational bit, which is not reliable at all).
> 
> Manually specifying the priority drive would be a big step in the right
> direction. Maybe btrfs could get a routine that benchmarks the sequential
> and random read and write speed of the drives at (for instance) mount time,
> or triggered by an administrator? This could lead to misleading results if
> btrfs doesn't get the whole drive to itself.
> 
>>> Writing has to be done to both drives of course, but I don't expect
>>> slowdowns from that, as the system RAM should cache that.
> 
>> Writes can still slow down the system even if you have tons of memory.
>> Operations like fsync() or sync() will still wait for the writeback,
>> thus in your case, it will also be slowed by the HDD no matter what.
>>
>> In fact, on a real-world desktop, most of the writes come from
>> (sometimes unnecessary) fsync() calls.
>>
>> To get rid of such slowdowns, you have to do something dangerous like
>> disabling barriers, which is never a safe idea.
> 
> I suggest a middle ground, where btrfs returns from fsync when one of the
> copies (instead of all the copies) of the data has been written completely
> to disk. This poses a small data risk, as it creates moments where there's
> only one copy of the data on disk, while the software above btrfs thinks
> all data is written on two disks. One problem I see is if the server is
> told to shut down while there's a big backlog of data to be written to the
> slow drive, while the fast drive is already done. Then the server could cut
> the power while the slow drive is still being written.
>
> I think this setting should be given to the system administrator; it's not
> a good idea to just blindly enable this behavior.
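
For reference, a minimal C sketch of the pattern discussed above (plain POSIX
calls, nothing btrfs-specific): fsync() only returns once the written data has
reached stable storage, so with a raid1 profile the call is gated by the
slowest mirror.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/*
	 * Write a block and force it to stable storage.  fsync() blocks
	 * until writeback completes on every copy, which is why a slow HDD
	 * in the raid1 pair dominates the latency of the call.
	 */
	char buf[4096] = { 0 };
	int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
		perror("write");
		close(fd);
		return 1;
	}
	if (fsync(fd) < 0)	/* waits for the writeback */
		perror("fsync");
	close(fd);
	return 0;
}
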
>>> Is there a way to tell btrfs to leave the slow hdd alone, and to
>>> prioritize the SSD?
>>
>> Not in upstream kernel for now.


I happen to have written a custom patch for my own use for a similar use case:
I have a bunch of slow drives constituting a raid1 FS of dozens of terabytes,
and just one SSD, reserved only for metadata.

My patch adds an entry under sysfs for each FS so that the admin can select
the "metadata_only" devid. This is optional; if it's not set, the usual btrfs
behavior applies. When set, this device is:
- never considered for new data chunk allocations
- preferred for new metadata chunk allocations
- preferred for metadata reads

This way I still have raid1, but the metadata chunks on slow drives are only
there for redundancy and never accessed for reads as long as the SSD metadata
is valid.
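
To give a rough idea of how it is driven (the sysfs attribute name and exact
location below are placeholders rather than the patch's actual interface),
selecting the device boils down to writing its devid, as reported by
"btrfs filesystem show", into a per-filesystem sysfs file:

/*
 * Illustrative helper only: /sys/fs/btrfs/<fsid>/ is the regular per-FS
 * sysfs directory, but the "metadata_only_devid" attribute name is an
 * assumption made up for this sketch.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[256];
	FILE *f;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <fsid> <devid>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path),
		 "/sys/fs/btrfs/%s/metadata_only_devid", argv[1]);

	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%s\n", argv[2]);	/* devid of the SSD */
	return fclose(f) ? 1 : 0;
}
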

This *drastically* improved my snapshot rotation, and even made qgroups usable
again. I think I've been running this for 1-2 years, but obviously I'd love to
see such an option in the vanilla kernel so that I can get rid of the hacky
patch :)

>> 
>> Thus I guess you need something like bcache to do this.
> 
> Agreed. However, one of the problems of bcache is that it can't use 2 SSDs
> in mirrored mode to form a writeback cache in front of many spindles, so
> this structure is impossible:
> +--------------+--------------+--------------+--------------+--------------+--------------+
> |                              btrfs raid 1 (2 copies) /mnt                               |
> +--------------+--------------+--------------+--------------+--------------+--------------+
> | /dev/bcache0 | /dev/bcache1 | /dev/bcache2 | /dev/bcache3 | /dev/bcache4 | /dev/bcache5 |
> +--------------+--------------+--------------+--------------+--------------+--------------+
> |                         Write Cache (2xSSD in raid 1, mirrored)                         |
> |                                 /dev/sda2 and /dev/sda3                                 |
> +--------------+--------------+--------------+--------------+--------------+--------------+
> |     Data     |     Data     |     Data     |     Data     |     Data     |     Data     |
> |  /dev/sda9   |  /dev/sda10  |  /dev/sda11  |  /dev/sda12  |  /dev/sda13  |  /dev/sda14  |
> +--------------+--------------+--------------+--------------+--------------+--------------+
> 
> In order to get a system that has no data loss if a drive fails, the user
> either has to live with only a read cache, or the user has to put a
> separate writeback cache in front of each spindle like this:
> +--------------+--------------+--------------+--------------+
> |               btrfs raid 1 (2 copies) /mnt                |
> +--------------+--------------+--------------+--------------+
> | /dev/bcache0 | /dev/bcache1 | /dev/bcache2 | /dev/bcache3 |
> +--------------+--------------+--------------+--------------+
> | Write Cache  | Write Cache  | Write Cache  | Write Cache  |
> |(Flash Drive) |(Flash Drive) |(Flash Drive) |(Flash Drive) |
> |  /dev/sda5   |  /dev/sda6   |  /dev/sda7   |  /dev/sda8   |
> +--------------+--------------+--------------+--------------+
> |     Data     |     Data     |     Data     |     Data     |
> |  /dev/sda9   |  /dev/sda10  |  /dev/sda11  |  /dev/sda12  |
> +--------------+--------------+--------------+--------------+
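
As an aside, once each flash partition has been formatted with "make-bcache -C"
and each spindle with "make-bcache -B", wiring up the per-spindle layout above
is done through bcache's sysfs interface (attach the cache set, then switch to
writeback). A rough C sketch; the UUIDs are placeholders for whatever
make-bcache reported:

#include <stdio.h>

/* Write a single value to a sysfs attribute, like "echo val > file". */
static int sysfs_write(const char *file, const char *val)
{
	FILE *f = fopen(file, "w");

	if (!f) {
		perror(file);
		return -1;
	}
	fprintf(f, "%s\n", val);
	return fclose(f);
}

int main(void)
{
	/*
	 * One cache set per flash partition (/dev/sda5../dev/sda8), matching
	 * the diagram above.  Replace the placeholders with the cache set
	 * UUIDs printed by "make-bcache -C".
	 */
	const char *cset[4] = {
		"<uuid-of-sda5>", "<uuid-of-sda6>",
		"<uuid-of-sda7>", "<uuid-of-sda8>",
	};
	char attr[128];
	int i;

	/* bcache0..bcache3 are the backing devices /dev/sda9../dev/sda12. */
	for (i = 0; i < 4; i++) {
		snprintf(attr, sizeof(attr),
			 "/sys/block/bcache%d/bcache/attach", i);
		sysfs_write(attr, cset[i]);

		snprintf(attr, sizeof(attr),
			 "/sys/block/bcache%d/bcache/cache_mode", i);
		sysfs_write(attr, "writeback");
	}
	return 0;
}
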
> 
> In the mainline kernel it's impossible to put a bcache on top of a bcache,
> so a user does not have the option to have 4 small write caches below one
> fast, big read cache like this:
> +--------------+--------------+--------------+--------------+
> |               btrfs raid 1 (2 copies) /mnt                |
> +--------------+--------------+--------------+--------------+
> | /dev/bcache4 | /dev/bcache5 | /dev/bcache6 | /dev/bcache7 |
> +--------------+--------------+--------------+--------------+
> |                     Read Cache (SSD)                      |
> |                         /dev/sda4                         |
> +--------------+--------------+--------------+--------------+
> | /dev/bcache0 | /dev/bcache1 | /dev/bcache2 | /dev/bcache3 |
> +--------------+--------------+--------------+--------------+
> | Write Cache  | Write Cache  | Write Cache  | Write Cache  |
> |(Flash Drive) |(Flash Drive) |(Flash Drive) |(Flash Drive) |
> |  /dev/sda5   |  /dev/sda6   |  /dev/sda7   |  /dev/sda8   |
> +--------------+--------------+--------------+--------------+
> |     Data     |     Data     |     Data     |     Data     |
> |  /dev/sda9   |  /dev/sda10  |  /dev/sda11  |  /dev/sda12  |
> +--------------+--------------+--------------+--------------+
> 
>> Thanks,
>> Qu
> 
> Thank you,
> Cedric
