On 22/05/2020 18:20, Rich Freeman wrote:
On Fri, May 22, 2020 at 12:47 PM antlists <antli...@youngman.org.uk> wrote:

What puzzles me (or rather, it doesn't, it's just cost cutting) is why
you need a *dedicated* cache zone anyway.

Stick a left-shift register between the LBA track number and the hard
drive: switch it on and you write to tracks 2,4,6,8,10... and it's a
CMR zone. Switch the register off and it's an SMR zone writing to all
tracks.

Disclaimer: I'm not a filesystem/DB design expert.

Well, I'm sure the zones aren't just 2 tracks wide, but that is worked
around easily enough.  I don't see what this gets you though.  If
you're doing sequential writes you can do them anywhere as long as
you're doing them sequentially within any particular SMR zone.  If
you're overwriting data then it doesn't matter how you've mapped them
with a static mapping like this, you're still going to end up with
writes landing in the middle of an SMR zone.

Let's assume each shingled track overwrites half the previous track. Let's also assume a shingled zone is 2GB in size. My method converts that into a 1GB CMR zone, because we're only writing to every second track.

I don't know how these drives cache their writes before re-organising, but this means that ANY disk zone can be used as cache, rather than having a (too small?) dedicated zone...
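
To put rough numbers on that (just a sketch in Python, using the 2GB zone size and the half-track overlap assumed above - nothing to do with any real firmware):

    # A rough sketch of the every-second-track mapping: in "CMR mode"
    # logical track n goes to physical track 2n, so nothing gets shingled
    # over; in "SMR mode" the mapping is 1:1 and every track is used.

    def physical_track(logical_track: int, cmr_mode: bool) -> int:
        # The "left-shift register": shifting left by one doubles the
        # track number, i.e. skips every second track.
        return logical_track << 1 if cmr_mode else logical_track

    # With the 2GB zone assumed above, a zone used as CMR cache holds 1GB,
    # because half of its tracks are skipped.
    SMR_ZONE_BYTES = 2 * 1024**3
    CMR_ZONE_BYTES = SMR_ZONE_BYTES // 2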

So what you could do is allocate one zone of CMR to every four or five zones of SMR, and just reshingle each SMR zone as the CMR fills up. The important point is that zones can switch from CMR cache, to SMR filling up, to full SMR zones decaying as they are re-written.
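
State-wise that's something like the sketch below (my own illustration; the zone names and the 0.5 threshold are made up, the one-in-five ratio is the "four or five" figure above):

    from dataclasses import dataclass
    from enum import Enum, auto

    class ZoneState(Enum):
        FREE = auto()         # not allocated to anything yet
        CMR_CACHE = auto()    # every-second-track mode, takes random writes
        SMR_FILLING = auto()  # being streamed into sequentially
        SMR_FULL = auto()     # fully shingled, decays as blocks are rewritten

    @dataclass
    class Zone:
        state: ZoneState = ZoneState.FREE
        stale_fraction: float = 0.0  # share of the zone invalidated by rewrites

    CACHE_RATIO = 5  # roughly one CMR cache zone per four or five SMR zones

    def needs_reshingle(zone: Zone) -> bool:
        # Once a full SMR zone has decayed past some threshold (0.5 here is
        # arbitrary), stream its live data to a fresh zone and hand this one
        # back to the free pool.
        return zone.state is ZoneState.SMR_FULL and zone.stale_fraction > 0.5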

The other thing is, why can't you just stream writes to an SMR zone,
especially if we try and localise writes so, let's say, all LBAs in Gig 1
go to the same zone ... okay - if we run out of zones to re-shingle to,
then the drive is going to grind to a halt, but it will be much less
likely to crash into that barrier in the first place.

I'm not 100% following you, but if you're suggesting remapping all
blocks so that all writes are always sequential, like some kind of
log-based filesystem, your biggest problem here is going to be
metadata.  Blocks logically are only 512 bytes, so there are a LOT of
them.  You can't just freely remap them all because then you're going
to end up with more metadata than data.

I'm sure they are doing something like that within the cache area,
which is fine for short bursts of writes, but at some point you need
to restructure that data so that blocks are contiguous or otherwise
following some kind of pattern so that you don't have to literally
remap every single block.

Which is why I'd break it down to maybe 2GB zones. If each zone streams writes as it fills, but is then re-organised and re-written properly when time permits, you've not got too large a chunk of metadata. You need a btree to work out where each zone is stored, then each zone has a btree to say where its blocks are stored. Oh - and these drives are probably 4K blocks only - most new drives are.
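
Rough shape of that two-level lookup (dicts standing in for the btrees, 4K blocks and 2GB zones as above):

    BLOCK_SIZE = 4096                          # "probably 4K blocks only"
    ZONE_SIZE = 2 * 1024**3                    # the 2GB zones suggested above
    BLOCKS_PER_ZONE = ZONE_SIZE // BLOCK_SIZE  # 524,288 blocks per zone

    zone_map = {}    # logical zone number  -> physical zone number
    block_maps = {}  # physical zone number -> {logical offset: physical slot}

    def lookup(lba: int):
        # Resolve a logical block address to (physical zone, slot in zone).
        lzone, offset = divmod(lba, BLOCKS_PER_ZONE)
        pzone = zone_map[lzone]
        # Blocks that haven't been written out of order since the last
        # tidy-up sit at their natural offset, so the per-zone map only
        # holds the exceptions.
        return pzone, block_maps[pzone].get(offset, offset)

Only out-of-order blocks need an entry, so after a tidy-up the per-zone map is nearly empty - that's where the metadata stays small.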

Now, they could still reside in different
locations, so maybe some sequential group of blocks is remapped, but
if you have a write to one block in the middle of a group you still
need to read/rewrite all those blocks somewhere.  Maybe you could use a
COW-like mechanism like zfs to reduce this somewhat, but you still
need to manage blocks in larger groups so that you don't have a ton of
metadata.

The problem with drives at the moment is they run out of CMR cache, so they have to rewrite all those blocks WHILE THE USER IS STILL WRITING. The point of my idea is that they can repurpose disk space as SMR or CMR as required, so they don't run out of cache at the wrong time ...

Yes, metadata may bloom under pressure, but give the drives a break and they can grab a new zone, do an SMR-ordered stream, and shrink the metadata.
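
The tidy-up itself is just a sequential copy - something like this sketch, where read_block and write_sequential are placeholders for the real media access:

    def reshingle(old_zone, new_zone, block_map, blocks_per_zone,
                  read_block, write_sequential):
        # Walk the zone in logical order, pulling each block from wherever
        # it currently lives (its natural slot unless the map says
        # otherwise), and stream it append-only into the fresh zone.
        for offset in range(blocks_per_zone):
            data = read_block(old_zone, block_map.get(offset, offset))
            write_sequential(new_zone, data)
        block_map.clear()   # layout is sequential again, exceptions gone
        return new_zone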

With host-managed SMR this is much less of a problem because the host
can use extents/etc to reduce the metadata, because the host already
needs to map all this stuff into larger structures like
files/records/etc.  The host is already trying to avoid having to
track individual blocks, so it is counterproductive to re-introduce
that problem at the block layer.

Really the simplest host-managed SMR solution is something like f2fs
or some other log-based filesystem that ensures all writes to the disk
are sequential.  The downside of flash-oriented filesystems is that they
can afford to disregard fragmentation on flash, but you can't disregard
it on an SMR drive, because random disk performance is terrible.

Which is why you have small(ish) zones, so logically close writes are hopefully physically close as well ...
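
For what it's worth, the per-zone rule on a host-managed (zoned) drive boils down to an append-only write pointer - a toy sketch of the shape of it, not any real zoned-block API:

    class SequentialZone:
        # Toy model of one zone on a zoned (host-managed SMR) device:
        # writes may only land at the write pointer, i.e. append-only.
        def __init__(self, capacity_blocks: int):
            self.capacity = capacity_blocks
            self.write_pointer = 0         # next block that may be written

        def append(self, nblocks: int) -> int:
            if self.write_pointer + nblocks > self.capacity:
                raise ValueError("zone full: open another zone or reset this one")
            start = self.write_pointer
            self.write_pointer += nblocks  # only ever moves forward
            return start                   # where the data landed

        def reset(self) -> None:
            # Rewriting earlier data means resetting the whole zone and
            # streaming it back in from scratch.
            self.write_pointer = 0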

Even better, if we have two independent heads, we could presumably
stream updates using one head, and re-shingle with the other. But that's
more cost ...

Well, sure, or if you're doing things host-managed then you stick the
journal on an SSD and then do the writes to the SMR drive
opportunistically.  You're basically describing a system where you
have independent drives for the journal and the data areas.  Adding an
extra head on a disk (or just having two disks) greatly improves
performance, especially if you're alternating between two regions
constantly.

Except I'm describing a system where journal and data areas are interchangeable :-)

Cheers,
Wol
