>> Why would I when I can get a 18TB Seagate IronWolf for <$600, a 18TB Seagate 
>> Exos for <$500, or a 18TB WD Gold for <$600?  
> 
> IOPS

Some installations don’t care so much about IOPS.

Less-tangible factors include:

* Time to repair and thus to restore redundancy.  When an EC pool of spinners 
takes a *month* to weight up a drive, that’s a significant operational and data 
durability / availability concern.

* RMAs.  They’re a pain, especially if you have to work them through a chassis 
vendor, who will likely be dilatory and make you jump through unreasonable 
hoops, like attaching a BMC web interface screenshot for every drive.  This 
translates to 
each RMA being modeled with a certain shipping / person-hour cost, which means 
that for lower unit-value items it may not be worth the hassle.  It is not 
unreasonable to guesstimate a threshold around USD 500.  So it is not uncommon 
to just trash failed / DOA spinners, or let them stack up indefinitely in a 
corner, instead of recovering their value.
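The rebuild-time point above is easy to sanity-check with a back-of-envelope 
calculation.  All figures here are assumptions for illustration, not 
measurements: an EC pool typically throttles backfill so client I/O isn’t 
starved, which can leave per-drive recovery throughput at a small fraction of 
the raw sequential rate.

```python
# Back-of-envelope: time to refill one failed 18 TB spinner.
# Assumed numbers, not measurements from any specific cluster.
drive_tb = 18
backfill_mb_s = 8  # assumed throttled per-drive recovery rate

seconds = drive_tb * 1e12 / (backfill_mb_s * 1e6)
days = seconds / 86400
print(f"~{days:.0f} days")  # ~26 days at 8 MB/s sustained
```

At a throttled 8 MB/s that is roughly 26 days of degraded redundancy for a 
single drive, which is where the “takes a *month*” figure comes from.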

As I wrote … in 2019 I think it was, with spinners you have some manner of HBA 
in the mix.  If that HBA is a fussy RAID model, you may have significant added 
cost for the RoC, onboard RAM, and supercap/BBU.  Complexity also comes with 
never-ending firmware bugs and cache management nightmares.  Gas gauge 
firmware… don’t even get me started on that.

And consider how many TB of 3.5” spinners you can fit into an RU, compared to 
2.5” or EDSFF flash.  RUs aren’t free, and SATA HBAs will bottleneck a 
relatively dense HDD chassis long before a similar number of NVMe drives will.  
Unless perhaps you have the misfortune of a chassis manufacturer who for some 
reason runs NVMe PCIe lanes *through* an HBA.
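To put rough numbers on the HBA bottleneck claim (all figures assumed, typical 
datasheet-order-of-magnitude values rather than benchmarks of any particular 
chassis):

```python
# Assumed figures: a dense chassis with 36 x 3.5" spinners behind
# a single SAS3 HBA (8 lanes x 12 Gb/s, ~9.6 GB/s before protocol
# overhead), versus NVMe where each drive gets its own PCIe 4.0 x4
# link (~7 GB/s per drive, not shared).
hdds = 36
hdd_seq_mb_s = 250      # assumed per-spinner sequential rate
hba_gb_s = 9.6          # 8 x 12 Gb/s SAS3, optimistic usable figure

aggregate_hdd_gb_s = hdds * hdd_seq_mb_s / 1000
print(aggregate_hdd_gb_s, hba_gb_s)  # 9.0 vs 9.6 GB/s
```

Even with conservative per-drive rates, 36 spinners already approach the 
HBA’s ceiling, while a single PCIe 4.0 x4 NVMe drive has more dedicated 
bandwidth than that entire shared link.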



_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]