On 09/08/2017 16:59, Alan Somers wrote:
On Wed, Aug 9, 2017 at 8:27 AM, Mikhail T. <m...@aldan.algebra.com> wrote:
My server has 8 "hot-plug" slots, that can accept both SATA and SAS drives.
SATA ones tend to be cheaper for the same features (like cache-sizes), what am I getting
for the extra money spent on SAS?
Asking specifically about the protocol differences... It would seem, for
example, SATA can not be as easily hot-plugged, but with camcontrol(8) that
should not be a problem, right? What else? Thank you!
Sent from mobile device, please, pardon shorthand.
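(For what it's worth, a hot-plugged SATA disk behind a SAS HBA can be
picked up from userland with camcontrol(8); a minimal sketch, assuming
the controller driver supports rescans:)

```shell
# Ask CAM to rescan all buses so a newly inserted disk is attached
camcontrol rescan all
# List attached devices to confirm the new drive showed up
camcontrol devlist
```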
Good question. First of all, hot-pluggability has more to do with the
controller than the protocol. Since you have a SAS controller, you
should have no problem hot plugging SATA drives. But SAS drives still
have a few advantages:
1) When a SATA drive goes into error recovery, it can lock up the bus
indefinitely. This won't matter if your drives are directly connected
to a SAS HBA. But if you have an expander with say, 4 SAS lanes going
to the HBA, then a flaky SATA drive can reduce the bandwidth available
to the good drives.
2) Even with NCQ, the SATA protocol is limited to queueing one or more
write commands OR one or more read commands. You can't queue a
mixture of reads and writes at the same time. SAS does not have that
limitation. In this sense, SAS is theoretically more performant.
However, I've never heard of anybody observing a performance problem
that can be definitively blamed on this effect.
3) SAS drives have a lot of fancy features that you may not need or
care about. For example, they often have features that are useful in
multipath setups (dual ports, persistent reservations), their error
reporting capabilities are more sophisticated than SMART, their self
encrypting command set is more sophisticated, etc etc.
4) The SAS activity LED is the opposite of SATA's. With SATA, the LED
is off for an idle drive or blinking for a busy drive. With SAS, it's
on for an idle drive or blinking for a busy drive. This makes it
easier to see at a glance how many SAS drives you have installed. I
think some SATA drives have a way to change the LED's behavior, though.
5) Desktop-class SATA drives can spend an indefinite amount of time in
error recovery mode. If your RAID stack doesn't time out a command,
that can cause your array to hang. But SAS drives and RAID-class
SATA drives will fail any command that spends too much time in error
recovery.
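(On drives that support it, that error-recovery time limit, known as SCT
Error Recovery Control, can be inspected and capped with smartmontools;
a hedged example, the device name is just an illustration:)

```shell
# Show the drive's current SCT Error Recovery Control settings
smartctl -l scterc /dev/ada0
# Cap read and write error recovery at 7 seconds each (values are in
# tenths of a second); cheap desktop drives may not support this at all
smartctl -l scterc,70,70 /dev/ada0
```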
6) But the most important difference isn't something you'll find on
any datasheet or protocol manual. SAS drives are built to a higher
standard of quality than SATA drives, and have accordingly lower
failure rates.
I'm guessing that you don't have an expander (since you only have 8
slots), so item 1 doesn't matter to you. I'll guess that item 3
doesn't matter either, or you wouldn't have asked this question. Item
5 can be dealt with simply by buying the higher end SATA drives. So
item 6 is really the most important. If this system needs to have
very high uptime and consistent bandwidth, or if it will be difficult
to access for maintenance, then you probably want to use SAS drives.
If not, then you can save some money by using SATA. Hope that helps.
Alan makes a good point about SAS expanders and their tendency to stick
when some SATA drives go off on a trip. I'm also assuming Mikhail(?)'s
setup doesn't use one.
On BSD with ZFS, a SATA drive chucking a shoe doesn't make any
difference if they're directly connected to the HBA (same applies to
GEOM RAID/MIRROR). "Drive silent? Detach it."
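(For a directly attached ZFS mirror, dropping a silent drive really is
a one-liner; pool and device names below are made up for illustration:)

```shell
# See which device the pool has flagged as faulted or unresponsive
zpool status tank
# Detach the bad half of the mirror; the pool keeps running degraded
zpool detach tank da3
```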
I'm not at all convinced that SAS is any more reliable than SATA per se.
This is based on 30+ years experience with Winchesters starting with
ST506. In the UK I used to write most of the storage articles for a
couple of major tech publishers, and I spent a lot of time talking to
and visiting the manufacturers and looking around the factories. Some of
this may now be out-of-date (Conner went bust for a start).
The thing is that if you opened a XXX brand SCSI disk and the IDE
version, guess what? They were the same inside. I spoke to the makers,
and apparently the electronics on the SCSI version are a lot more
expensive. Why? Well, we don't sell as many, er, um.
Okay, they don't make cheap and nasty SCSI (or SAS) drives, but they do
make low-end IDE/SATA. They also make some very nice drives that are
only available as SAS. An equivalent quality SAS/SATA drive will be just
as reliable - there's no mechanical reason for them not to be. They come
off the same line.
Then there's the MTBF and the unrecoverable error rates. On high-end
drives the latter is normally claimed to be 10x better than the cheap
ones. Pretty much always, and exactly 10x. This is utter bilge. What
they're saying is that the unrecoverable error rate is this figure or
better, and any study into this has shown that it's usually a lot
better than both figures. So both figures are technically correct; it
just makes the SATA drive look worse. If anyone has any actual evidence
of equivalent SAS and SATA drives having a different error rate, please
get in touch.
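(To put those two spec-sheet figures in perspective: reading a 10 TB
drive end to end, the quoted worst-case rates predict the expected
error counts below. This is just arithmetic on the published "or
better" numbers, not measured behaviour, and the drive size is only an
example:)

```shell
# 10 TB expressed in bits (10^13 bytes * 8)
bits=80000000000000
# Quoted worst case for cheap SATA: 1 unrecoverable error per 1e14 bits
awk -v b="$bits" 'BEGIN { printf "SATA spec floor: %.2f expected errors per full read\n", b / 1e14 }'
# Quoted worst case for SAS/enterprise: 1 per 1e15 bits
awk -v b="$bits" 'BEGIN { printf "SAS  spec floor: %.2f expected errors per full read\n", b / 1e15 }'
```

So even taken at face value, the worst-case figures differ by well under
one expected error per full-drive read.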
MTBF? Okay, SATA drives do fail more quickly. Run a drive 24/7 for a
couple of years in an array and it only spins up once and runs at a
constant speed; doesn't get knocked and has properly organised air
conditioning (no thermal shocks). The SATA drive in a desktop, on the
other hand, gets turned on and off and generally abused. It may be
running for less actual time but the odds are stacked against it. How
many light bulbs fail while they're running vs. how many fail when you
turn them on?
Finally, there's been my experience running a load of drives in data
centres for many years. In some servers there are SAS drives. In others
there are SATA server drives (supplied by Dell at 4x the cost of cheap
ones). And in others there are cheapo drives that were around and
whacked in when half a mirror failed. You know what's coming, don't you?
So I won't say it.
To unsubscribe, send any mail to "freebsd-hardware-unsubscr...@freebsd.org"