On Fri, May 22, 2020 at 12:47 PM antlists <antli...@youngman.org.uk> wrote:
>
> What puzzles me (or rather, it doesn't, it's just cost cutting), is why
> you need a *dedicated* cache zone anyway.
>
> Stick a left-shift register between the LBA track and the hard drive,
> and by switching this on you write to tracks 2,4,6,8,10... and it's a
> CMR zone. Switch the register off and it's an SMR zone writing to all
> tracks.

Disclaimer: I'm not a filesystem/DB design expert.

Well, I'm sure the zones aren't just two tracks wide, but that is
easy enough to work around.  I don't see what this gets you, though.
If you're doing sequential writes you can do them anywhere, as long
as they stay sequential within any particular SMR zone.  If you're
overwriting existing data, then it doesn't matter how you've mapped
the blocks with a static mapping like this: you're still going to
end up with writes landing in the middle of an SMR zone.

> The other thing is, why can't you just stream writes to a SMR zone,
> especially if we try and localise writes so lets say all LBAs in Gig 1
> go to the same zone ... okay - if we run out of zones to re-shingle to,
> then the drive is going to grind to a halt, but it will be much less
> likely to crash into that barrier in the first place.

I'm not 100% following you, but if you're suggesting remapping all
blocks so that all writes are always sequential, like some kind of
log-based filesystem, your biggest problem is going to be metadata.
Logical blocks are only 512 bytes, so there are a LOT of them.  You
can't just freely remap them all, because then you end up with a
huge amount of mapping metadata relative to the data.
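
To put a rough number on it (pure back-of-envelope; the drive size
and map-entry size below are just assumptions):

    # Metadata cost of freely remapping every 512-byte logical block
    # on a large drive (illustrative numbers only).
    DRIVE_BYTES = 20 * 10**12     # assume a 20 TB drive
    BLOCK_BYTES = 512             # logical block size
    ENTRY_BYTES = 8               # assume an 8-byte map entry per block

    blocks = DRIVE_BYTES // BLOCK_BYTES
    table_bytes = blocks * ENTRY_BYTES
    print(f"{blocks:,} blocks -> {table_bytes / 10**9:.0f} GB of map")
    # ~39 billion blocks -> ~310 GB of mapping table, far more than a
    # drive can keep in RAM, which is why fine-grained remapping only
    # happens inside a small cache zone.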

I'm sure they are doing something like that within the cache area,
which is fine for short bursts of writes, but at some point you need
to restructure that data so that blocks are contiguous, or at least
follow some kind of pattern, so that you don't have to literally
remap every single block.  Now, they could still reside in different
locations, so maybe a sequential group of blocks gets remapped as a
unit, but if you write to one block in the middle of a group you
still have to read and rewrite the whole group somewhere.  Maybe a
COW-like mechanism like the one ZFS uses could reduce this somewhat,
but you still need to manage blocks in larger groups so that you
don't have a ton of metadata.
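
Conceptually something like this (toy code, not real firmware; the
group size is an assumption):

    # Group-granular remapping: writing one block in the middle of a
    # group forces a read-modify-write of the whole group to a fresh
    # sequential location, so the map needs one entry per group
    # instead of one per 512-byte block.
    GROUP_BLOCKS = 4096                  # assumed group size (~2 MB)

    group_map = {}                       # group id -> location in 'media'
    media = {}                           # location -> list of blocks
    next_free = 0                        # next sequential write position

    def write_block(lba, data):
        global next_free
        gid, off = divmod(lba, GROUP_BLOCKS)
        old = media.get(group_map.get(gid), [b"\x00" * 512] * GROUP_BLOCKS)
        group = list(old)                # read the whole group back
        group[off] = data                # change the one block
        media[next_free] = group         # rewrite it sequentially elsewhere
        group_map[gid] = next_free       # one metadata update per group
        next_free += 1

    write_block(5000, b"x" * 512)        # dirties (and rewrites) group 1 only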

With host-managed SMR this is much less of a problem, because the
host can use extents and so on to keep the metadata small; it
already needs to map all of this stuff into larger structures like
files/records/etc.  The host is already trying to avoid tracking
individual blocks, so it is counterproductive to re-introduce that
problem at the block layer.
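
The win from extents is easy to see (sketch only; the numbers are
made up):

    # One (logical_start, physical_start, length) extent covers an
    # arbitrarily long contiguous run of blocks with a single entry,
    # instead of one map entry per 512-byte block.
    extents = [
        (0,       1_000_000, 262_144),   # 128 MB of a file in one entry
        (262_144, 5_000_000,  65_536),   # the next 32 MB in another
    ]

    def lookup(lba):
        for lstart, pstart, length in extents:
            if lstart <= lba < lstart + length:
                return pstart + (lba - lstart)
        return None                      # unmapped

    print(lookup(100))                   # -> 1000100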

Really, the simplest host-managed SMR solution is something like
f2fs or some other log-based filesystem that ensures all writes to
the disk are sequential.  The downside is that flash-oriented
filesystems can afford to disregard fragmentation, since random
reads on flash are cheap, but you can't disregard it on an SMR drive
because random disk performance is terrible.
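
The core of any log-structured approach is just "append at the head
of the log and remember where the newest copy lives" (toy
illustration, nothing to do with f2fs internals):

    # Every write is appended to the log, so the media only ever sees
    # sequential writes; a map tracks each block's latest location.
    # Stale copies become garbage a cleaner has to deal with later,
    # which is where fragmentation bites on a spinning disk.
    log = []                     # models the sequential zone stream
    latest = {}                  # logical block -> index in the log

    def log_write(lba, data):
        latest[lba] = len(log)   # newest copy lives at the log head
        log.append(data)

    log_write(42, b"a" * 512)
    log_write(42, b"b" * 512)    # the copy at index 0 is now garbage
    print(latest[42], len(log))  # -> 1 2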

> Even better, if we have two independent heads, we could presumably
> stream updates using one head, and re-shingle with the other. But that's
> more cost ...

Well, sure, or if you're doing things host-managed, you stick the
journal on an SSD and then do the writes to the SMR drive
opportunistically.  You're basically describing a system with
independent devices for the journal and the data area.  Adding an
extra head to a disk (or just having two disks) greatly improves
performance, especially if you're constantly alternating between two
regions.
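
That split is simple to picture (again just a toy, not any
particular implementation):

    # Incoming writes are acknowledged once they hit the fast
    # journal; a background pass later sorts them and streams them
    # into the SMR data area as one big sequential write.
    journal = []                        # fast SSD journal (append-only)
    smr_area = []                       # slow SMR area, sequential only

    def write(lba, data):
        journal.append((lba, data))     # ack as soon as it's journaled

    def drain():
        for entry in sorted(journal):   # one big LBA-ordered pass
            smr_area.append(entry)
        journal.clear()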

-- 
Rich
