> On Wed, Oct 6, 2021 at 12:16 PM Laurence Perkins <[email protected]> wrote:
> >
> > Another option, depending on exactly what your use case is, would be 
> > to look at your choice of filesystem.  SMR doesn't like random writes 
> > into one of its zones unless it has enough idle time to go back and 
> > straighten things out later.  There are now format options for ext4 
> > to align its metadata to the SMR zones and to avoid random writes as 
> > much as possible.  Additionally, BTRFS, ZFS, and NILFS2 are all 
> > structured such that they tend to write from one end of the disk to 
> > the other and then jump back to the beginning, so they see little if 
> > any degradation from SMR.
> 
> Unless something has changed, it was ZFS rebuilds that caused a lot of the 
> initial fuss on Linux.  Drives were getting dropped from pools due to 
> timeouts and the like during rebuilds.  I'm not sure how sequential the IO 
> is for a ZFS rebuild.  Btrfs seems a bit smarter about scrubs in general.
> 
> --
> Rich
> 

Good to know.  I don't use ZFS myself, and I suspect the benchmarks I looked at 
when I was checking on how best to deal with SMR drives didn't try a pool 
rebuild for ZFS.
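
If the drops Rich mentions are timeout-related, the usual trick is to cap 
the drive's internal error recovery time so the kernel sees an error 
instead of a stalled drive.  Assuming the drive supports SCT ERC at all 
(many cheap SMR drives don't), something like:

  smartctl -l scterc,70,70 /dev/sdX

tells it to give up after 7 seconds (the values are tenths of a second).  
It usually needs re-applying after a power cycle.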

Scrub on BTRFS is read-only unless there are errors that need correcting, which 
should be rare.  BTRFS balance operations might be affected, but those aren't 
needed as often as they used to be and should still operate linearly on whole 
chunks.  
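
For anyone following along, the operations in question are just (assuming 
the filesystem is mounted at /mnt):

  btrfs scrub start /mnt
  btrfs balance start -dusage=50 /mnt

where the -dusage filter restricts the balance to data chunks under 50% 
full, which keeps the rewrite volume, and thus the SMR pain, down.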

I think the thing to look into there would be whether there's a way to 
align the BTRFS chunks to the SMR zones.  But the manufacturers decided to 
continue manufacturing CMR disks for the surveillance industry, so I 
haven't had to worry about it just yet.
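
(For what it's worth, you can at least check what a drive advertises.  On 
a reasonably recent kernel:

  cat /sys/block/sdX/queue/zoned

reports "host-managed" or "host-aware" for zoned drives, though 
drive-managed SMR just reports "none", so it won't catch those.)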

LMP
