Sorry for the delayed reply, SATA vs SAS is a subject many ZFS sysadmins
(including myself until recently) are completely ignorant about. I'm still not
super knowledgeable about STP, but here goes. Normal SAS devices communicate
with your controller via Serial SCSI Protocol (SSP), while SATA can communicate
using SATA Tunneling Protocol (STP). Feel free to look into the specifications
for each to see real differences, but among other things STP is limited by the
command set implemented by the drive in firmware; basically SATA drives have
fewer configuration options and may or may not have a SAS WWN or other
constant GUID. SAS devices implement additional monitoring and
failure modes as well as an awareness and methods of communication with
multiple initiators. As for what this means in the practical world, when a
SATA device is behind a SAS switch (like in a backplane) certain situations
must be handled properly by the interposer since the SATA drive itself has no
concept of 'disconnect initiator A and connect to initiator B' or 'this drive
is now attached to a new initiator, let's make sure it has the same GUID as
before'. Do you know who wrote/debugged the firmware on your interposer?
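To make the "constant GUID" point concrete: a SAS target reports a stable
64-bit WWN in NAA format (per the SCSI SPC device-identification VPD page),
while a SATA drive behind an interposer may only have whatever the interposer
firmware synthesizes. Here's a minimal Python sketch of how such a WWN breaks
down; the example WWN itself is made up for illustration:

```python
def decode_naa_wwn(wwn_hex):
    """Split a 64-bit NAA-format World Wide Name into its fields.

    Layout for NAA type 5 ("IEEE Registered"): 4 bits of NAA format,
    a 24-bit IEEE OUI identifying the vendor, and a 36-bit
    vendor-assigned sequence number.
    """
    value = int(wwn_hex, 16)
    naa_format = value >> 60           # top 4 bits: NAA format (5 = IEEE Registered)
    oui = (value >> 36) & 0xFFFFFF     # next 24 bits: vendor's IEEE OUI
    vendor_seq = value & 0xFFFFFFFFF   # low 36 bits: vendor-assigned serial
    return naa_format, oui, vendor_seq

# Example WWN (made up for illustration):
naa, oui, seq = decode_naa_wwn("5000c50012345678")
print(f"NAA type {naa}, OUI {oui:06x}, vendor sequence {seq:09x}")
# -> NAA type 5, OUI 000c50, vendor sequence 012345678
```

A drive whose WWN changes depending on which expander port or interposer it
sits behind is exactly the failure mode described above.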
I don't have a personal story where an interposer/SAS switch screwed me, but
I have heard multiple folks (Theo @ OmniTI and Garrett @ Nexenta) blame
them for really hard-to-debug problems. I'll leave you with a quote:
"At a significant account, I can say that we (meaning Nexenta) have verified
that SAS/SATA expanders combined with high loads of ZFS activity have proven
conclusively to be highly toxic. So, if you're designing an enterprise storage
solution, please consider using SAS all the way to the disk drives, and just
skip those cheaper SATA options. You may think SATA looks like a bargain, but
when your array goes offline during ZFS scrub or resilver operations because
the expander is choking on cache sync commands, you'll really wish you had
spent the extra cash up front. Really."
So wherever possible I'd try to use all SAS. Where I must use a SATA SSD I'll
connect it directly to the SAS controller on a dedicated channel, not behind a
SAS switch and interposer. I'm not sure that helps, but at least there's one
less set of firmware between ZFS and my disks. That said, I still have a Sun
J4400 JBOD and two Dell MD1000 devices filled with 1TB SATA disks behind SAS
interposers, but when they're retired you can bet the replacement will be 100% SAS.
P.S. Also note, sometimes devices with a SAS interface aren't what they seem.
Supposedly the OCZ Talos SSDs are actually SATA devices with an internal
SATA-to-SAS bridge.
P.P.S. Hadn't seen those SGI JBODs, those are silly high density, I can't
imagine those come cheap and I'm sure they are deeper than most things in your
rack. Dell/DataOn do 60 disks in 4U, but with eleven of the SGI JBODs in a
single rack (44U), that's a whopping 891 4TB disks (~3.6PB raw) per cabinet.
Truly silly silly dense.
On Nov 13, 2012, at 3:57 PM, Cedric Tineo wrote:
> Could you please give info or links to clarify "SATA Tunneling Protocol
> nonsense" issues?
> Also have you considered Supermicro's 45-drive 4U enclosure, the
> SC847E26-RJBOD1? It's cheap too, at around $2000, and based on LSI backplanes.
> SGI has a nice and crazy 81(!) 3.5" disks in 4U JBOD enclosure but they
> refused to sell it to us without them supplying the disks also.
> Anyone aware of a 60-or-more-disks-in-4U enclosure on top of those mentioned
> in this message or the DataOn? We are trying to build
> super-high-density-storage racks.
> Cedric Tineo
> On 13 nov. 2012, at 21:08, Peter Tripp <pe...@psych.columbia.edu> wrote:
>> Hi folks,
>> I'm in the market for a couple of JBODs. Up until now I've been relatively
>> lucky with finding hardware that plays very nicely with ZFS. All my gear
>> currently in production uses LSI SAS controllers (3801e, 9200-16e, 9211-8i)
>> with backplanes powered by LSI SAS expanders (Sun x4250, Sun J4400, etc).
>> But I'm in the market for SAS2 JBODs to support a large number of 3.5-inch
>> SAS
>> disks (60+ 3TB disks to start).
>> I'm aware of potential issues with SATA drives/interposers and the whole
>> SATA Tunneling Protocol (STP) nonsense, so I'm going to stick to a pure SAS
>> setup. Also, since I've had trouble in the past with daisy-chained SAS
>> JBODs I'll probably stick with one SAS 4x cable (SFF-8088) per JBOD, and
>> unless there were a compelling reason for multi-pathing I'd probably stick
>> to a single controller. If possible I'd rather buy 20-packs of enterprise
>> SAS disks with 5yr warranties and have the JBOD come with empty trays, but
>> would also consider buying disks with the JBOD if the price wasn't too crazy.
>> Does anyone have any positive/negative experiences with any of the following
>> with ZFS:
>> * SuperMicro SC826E16-R500LPB (2U 12 drives, dual 500w PS, single LSI
>> SAS2X28 expander)
>> * SuperMicro SC846BE16-R920B (4U 24 drives, dual 920w PS, single unknown expander)
>> * Dell PowerVault MD 1200 (2U 12 drives, dual 600w PS, dual unknown expanders)
>> * HP StorageWorks D2600 (2U 12 drives, dual 460w PS, single/dual unknown expanders)
>> I'm leaning towards the SuperMicro stuff, but every time I order SuperMicro
>> gear there's always something missing or wrongly configured so some of the
>> cost savings gets eaten up with my time figuring out where things went wrong
>> and returning/ordering replacements. The Dell/HP gear I'm sure is fine, but
>> buying disks from them gets pricey quick. The last time I looked they
>> charged $150 extra per disk when the only added value was a proprietary
>> sled and a shorter warranty (3yr vs 5yr).
>> I'm open to other JBOD vendors too; I was just really curious what folks
>> were using when they needed more than two dozen 3.5" SAS disks for use with
>> ZFS.
>> zfs-discuss mailing list