Re: [zfs-discuss] Re: External drive enclosures + Sun Server for mass

2007-01-22 Thread mike

Areca makes excellent PCI Express cards, but they probably have zero
support in Solaris/OpenSolaris. I use them in both Windows and Linux, and
they work natively in FreeBSD too. I believe they're still the fastest
cards on the market.

However, they're probably not appropriate for this since it's a Solaris-based OS :(
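
If someone does want to check on a Solaris box, a few lines of Python along
these lines would show whether any driver binds to the card at all. The
"arcmsr" driver name and the 17d3 PCI vendor ID are guesses carried over from
the Linux/FreeBSD side, not anything I've verified under Solaris:

  # Rough check for an Areca HBA on a Solaris host: scan `prtconf -D` (the
  # device tree with bound driver names) for the card. The driver name and
  # PCI vendor ID below are assumptions from Linux/FreeBSD, not Solaris facts.
  import subprocess

  out = subprocess.run(["prtconf", "-D"], capture_output=True, text=True).stdout
  hits = [line for line in out.splitlines()
          if "arcmsr" in line or "pci17d3" in line.lower()]
  print("\n".join(hits) if hits else "no Areca driver appears to be bound")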

On 1/22/07, David J. Orman <[EMAIL PROTECTED]> wrote:


#2 - They only have PCI Express slots, and I can't find any good external SATA
interface cards on PCI Express.



Re: [zfs-discuss] Re: External drive enclosures + Sun Server for mass

2007-01-22 Thread Jason J. W. Williams

Hi David,

Depending on the I/O you're doing, the X4100/X4200 are much better
suited because of their dual HyperTransport buses. As a storage box with
GigE outputs, you've got a lot more I/O capacity with two HT buses than
with one. On top of that, the X4100 is simply a more solid box. The X2100
M2, while a vast improvement over the X2100 in terms of reliability and
features, is still an OEM'd whitebox. We use X2100 M2s for application
servers, but for anything that needs solid reliability or I/O we go with
the Galaxy line.
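
As a very rough back-of-the-envelope in Python (the HT and GigE figures here
are my own assumptions, not Sun specs), this is the sort of headroom
arithmetic I mean:

  # Back-of-the-envelope I/O budget: aggregate GigE traffic plus the matching
  # disk traffic, measured against one vs. two HyperTransport links. All
  # bandwidth figures are assumptions, not measurements or vendor numbers.
  GIGE_PORTS = 4        # e.g. four onboard GigE ports
  GIGE_MBS = 125        # ~1 Gbit/s in one direction, in MB/s
  HT_LINK_MBS = 4000    # assumed ~4 GB/s per direction per HT link

  network = GIGE_PORTS * GIGE_MBS   # MB/s heading out the NICs
  disk = network                    # the same data has to come off the disks

  for links in (1, 2):
      capacity = links * HT_LINK_MBS
      print(f"{links} HT link(s): {network + disk} MB/s of traffic over "
            f"{capacity} MB/s of link bandwidth "
            f"({(network + disk) / capacity:.0%} used)")

The second link roughly halves the load any one bus has to carry, which is
the headroom I'm getting at.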

Best Regards,
Jason

On 1/22/07, David J. Orman <[EMAIL PROTECTED]> wrote:

> Not to be picky, but the X2100 and X2200 series are NOT designed/targeted
> for disk serving (they don't even have redundant power supplies). They're
> compute boxes. The X4100/X4200 are what you are looking for to get a
> flexible box more oriented towards disk I/O and expansion.

I don't see how those are any better suited to external disks, other than:

#1 - They have the capacity for redundant PSUs, which is irrelevant to my needs.
#2 - They only have PCI Express slots, and I can't find any good external SATA
interface cards on PCI Express.

I can't wrap my head around the idea that I should buy a lot more than I need
when it still doesn't serve my purposes. The four disks in an X4100 still aren't
enough, and the machine is a fair amount more costly. I just need mirrored boot
drives and an external disk array.
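
For concreteness, the external-array side of what I'm after is roughly the
following Python sketch, which just prints the zpool command; the device names
are placeholders, not drives from my actual setup:

  # Sketch of the data pool on the external JBOD: pairs of disks mirrored,
  # with the mirrors striped together into one pool. Device names are
  # placeholders. The mirrored boot drives are a separate, internal matter.
  external_disks = ["c2t0d0", "c2t1d0", "c2t2d0", "c2t3d0",
                    "c2t4d0", "c2t5d0", "c2t6d0", "c2t7d0"]

  cmd = ["zpool", "create", "tank"]
  for a, b in zip(external_disks[0::2], external_disks[1::2]):
      cmd += ["mirror", a, b]   # each pair becomes one mirrored vdev

  print(" ".join(cmd))
  # -> zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 ...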

> That said (if you're set on an X2200 M2), you are probably better off
> getting a PCI-E SCSI controller, and then attaching it to an external
> SCSI->SATA JBOD. There are plenty of external JBODs out there which use
> Ultra320/Ultra160 as a host interface and SATA as a drive interface.
> Sun will sell you a supported SCSI controller with the X2200 M2 (the
> "Sun StorageTek PCI-E Dual Channel Ultra320 SCSI HBA").
>
> SCSI is far better for a host attachment mechanism than eSATA if you
> plan on doing more than a couple of drives, which it sounds like you
> are. While the SCSI HBA is going to cost quite a bit more than an eSATA
> HBA, the external JBODs run about the same, and the total difference is
> going to be $300 or so across the whole setup (which will cost you
> $5000 or more fully populated). So the cost to use SCSI vs eSATA as the
> host-attach is a rounding error.

I understand your comments in some ways; in others I do not. It sounds like
we're moving backwards in time. Exactly why is SCSI "better" than SAS/SATA for
external devices? In my experience (with other OSes/hardware platforms) the
opposite is true. A good SAS/SATA controller with external ports (especially
one that carries multiple SAS/SATA drives over a single cable, whichever
technology you use) works wonderfully for me, and the thin, clean cabling makes
cable management much more "enjoyable" in higher-density situations.

I also don't agree with the logic of "just spend a mere $300 extra to use older
technology!"

$300 may not be much to a large business, but things like this nickel-and-dime
small business owners. There are a lot of things I'd rather spend $300 on than
an expensive SCSI HBA that offers no advantages over a SAS counterpart and, in
fact, has disadvantages.

Your input is of course highly valued, and it's quite possible I'm missing an
important piece of the puzzle here, but I am not convinced this is the ideal
solution; it seems more like a "stick with the old stuff, it's easier" solution,
which I am very much against.

Thanks,
David



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

