> On May 25, 2017, at 11:21 AM, Ken Merry <[email protected]> wrote:
> 
>> 
>> On May 24, 2017, at 6:39 PM, Andrew Gabriel <[email protected]> wrote:
>> 
>> On 24/05/2017 22:10, Ken Merry wrote:
>>> Is anyone working on SMR support for ZFS?
>>> 
>>> I put support into FreeBSD for SMR drives from the block layer (GEOM) 
>>> through the SCSI layer (CAM):
>>> 
>>> https://svnweb.freebsd.org/base?view=revision&revision=300207
>>> 
>>> So far, no filesystems in the FreeBSD tree are using the SMR support.
>>> 
>>> It looks like there are Linux folks working on SMR support:
>>> 
>>> http://events.linuxfoundation.org/sites/events/files/slides/lemoal-Linux-SMR-vault-2017.pdf
>>> 
>>> We (Spectra Logic) are considering starting work on supporting Host Aware 
>>> and Host Managed SMR drives in ZFS, and we’d rather collaborate with other 
>>> folks who are interested instead of duplicating the effort.
>> 
>> There was a discussion about it at the European OpenZFS conference in Paris 
>> a couple of years ago. One of the drive vendors came along and gave a 
>> technical presentation on SMR, with a view to persuading ZFS developers to 
>> modify ZFS to make good use of SMR drives.
>> 
>> The presentation was very good, but the general feedback at the time was 
>> that no one believed SMR would be around for very long, probably not long 
>> enough to get any support in ZFS stable, before hard drives had completely 
>> given way to SSDs.
>> 
>> Things may have changed since then, although I hear less about SMR drives 
>> now than I did back then. How long do you think SMR will be around? What is 
>> your use case for the drives?
> 
> Spectra’s take on it is that Seagate and HGST / WD will likely keep producing 
> high capacity SMR drives for a while.
> 
> From a technical standpoint, SMR lets them increase drive capacity by 
> roughly 20% (the numbers are fuzzy; that isn't a precise figure), and 
> they've been selling SMR drives at a lower cost per GB than traditional 
> fully random access drives.
> 
> SSD prices are coming down and capacities are going up, but SSDs still 
> can't match hard drives on price per GB.  On the other end of the 
> spectrum, tape capacity and throughput are increasing pretty rapidly.  
> (LTO-8 is coming soon.)
> 
> In between those is spinning disk.  SMR disks have lower random write 
> performance, but also lower price, and are more competitive with tape.  With 
> tape capacity increasing, the disk vendors will want to stay somewhat 
> competitive.  SMR is a tool that they can use to compete on the lower price, 
> higher capacity end of the spectrum.
> 
> Spectra’s focus is generally on archive and backup storage.  That means high 
> capacity disk and tape libraries.  Spectra generally only uses SSDs for 
> things like caching, databases, etc.  They’re too expensive for archive 
> storage.
> 
> As for Spectra’s use case for the drives:
> 
> https://www.spectralogic.com/products/arcticblue/
> 
> That’s a 96-drive, 4U JBOD enclosure.  We can put up to 8 Arctic Blue 
> enclosures behind a Black Pearl (S3) box.  Right now we’re using Drive 
> Managed SMR drives with ZFS.  We only had to make some minor modifications 
> to ZFS to get reasonable performance with Drive Managed SMR drives.  The 
> cost (as low as $0.10/GB for the whole system) is very good.
> 
> While one vendor has Drive Managed and Host Aware drives, the other one, as 
> far as I know, only has Host Managed drives.  Host Managed is much harder to 
> deal with from a software standpoint, since the drive can't handle any 
> out-of-order writes within its bands.  To get higher performance with Host 
> Aware drives, and to be able to choose between vendors, we'll have to 
> implement Host Managed support in ZFS, or write a layer below ZFS that does 
> read/modify/write to eliminate any non-sequential writes to Host Managed 
> disks.
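A shim layer like the one Ken mentions would, in outline, track a write pointer per zone and turn any backward write into a read-modify-write of the whole band. A toy sketch (the class and all names are hypothetical, with a plain bytearray standing in for each band; a real shim would manage the zones on a Host Managed disk via REPORT ZONES / RESET WRITE POINTER and issue only sequential writes):

```python
class ZoneShim:
    """Toy model of a shim that keeps writes to each SMR zone sequential.

    Each zone is modeled as a bytearray whose length is the write pointer.
    A real implementation would sit below the filesystem and rewrite the
    band on the disk instead of in memory.
    """

    def __init__(self, zone_size, num_zones):
        self.zone_size = zone_size
        self.media = [bytearray() for _ in range(num_zones)]

    def write(self, lba, data):
        zone, off = divmod(lba, self.zone_size)
        band = self.media[zone]
        if off == len(band):
            # Sequential: the write lands exactly at the write pointer.
            band += data
        else:
            # Out of order: read the whole band back, patch it, and
            # rewrite it sequentially from the start of the zone.
            whole = bytearray(band)
            whole.extend(b"\0" * max(0, off + len(data) - len(whole)))
            whole[off:off + len(data)] = data
            self.media[zone] = whole

    def read(self, lba, length):
        zone, off = divmod(lba, self.zone_size)
        band = self.media[zone]
        # Unwritten space reads back as zeros.
        return bytes(band[off:off + length]).ljust(length, b"\0")
```

In this model every out-of-order write costs a full band rewrite, which is exactly why Host Aware drives, which accept such writes natively (at some performance cost), are attractive.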

As part of the metadata classes discussion a while back, that feature would 
let you easily separate the bulk data from the metadata and place them on 
different vdevs.  At that point, an SMR-optimized allocator makes sense for 
the data, while the metadata can reside on non-SMR drives.
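Schematically, that separation amounts to routing block allocations by class to different vdev groups. A toy sketch (all names are hypothetical and this is not the ZFS allocator, just the routing idea):

```python
class ClassedPool:
    """Toy model of metadata/data separation across vdev classes:
    metadata lands on conventional (non-SMR) vdevs, bulk data on SMR vdevs."""

    def __init__(self, smr_vdevs, meta_vdevs):
        self.vdevs = {"data": list(smr_vdevs), "metadata": list(meta_vdevs)}
        self.cursor = {"data": 0, "metadata": 0}  # round-robin per class

    def allocate(self, block_class):
        """Pick a vdev for a block, round-robin within its class."""
        cls = "metadata" if block_class in ("metadata", "ddt", "spacemap") else "data"
        group = self.vdevs[cls]
        vdev = group[self.cursor[cls] % len(group)]
        self.cursor[cls] += 1
        return vdev
```

With the classes split like this, only the "data" group ever needs an SMR-aware (largely sequential) allocation strategy.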
 -- richard


> 
> Ken
> — 
> Ken Merry
> [email protected] <mailto:[email protected]>
> 
> 
> 
> openzfs-developer | Archives: 
> https://openzfs.topicbox.com/groups/developer/discussions/T1fec2160a70daccc-M63c1e798a0c2c95b02d82ed9 
> | Powered by Topicbox: https://topicbox.com
