[EMAIL PROTECTED] said:
> In general, such tasks would be better served by a T5220 (or the new T5440 :-)
> and J4500s. This would change the data path from:
>   client --<net>-- T5220 --<net>-- X4500 --<SATA>-- disks
> to:
>   client --<net>-- T5440 --<SAS>-- disks
>
> With the J4500 you get the same storage density as the X4500, but with SAS
> access (some would call this direct access). You will have much better
> bandwidth and lower latency between the T5440 (server) and the disks, while
> still having the ability to multi-head the disks. [...]
There's an odd economic factor here if you're in the .edu sector: the Sun Education Essentials promotional price list has the X4540 priced lower than a bare J4500 (which isn't on the promotional list, but gets the standard EDU discount). We have a project under development right now that might be well served by one of these EDU X4540s with a J4400 attached to it.

The spec sheets for the J4400 and J4500 say you can chain enough of them together to make a pool of 192 drives. I'm unsure about the bandwidth of those daisy-chained SAS interconnects, though. Any thoughts on how far an X4540-plus-J4x00 solution might scale? And how does the X4540's internal disk bandwidth compare to that of a (non-RAID) SAS HBA?

Regards,
Marion

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
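[Editorial note: the daisy-chain bandwidth question above lends itself to a back-of-envelope calculation. The sketch below assumes SAS-1 era hardware (3 Gb/s per lane, 8b/10b encoding, x4 wide ports) and an assumed ~70 MB/s sustained streaming rate per 7200 rpm SATA drive of that generation; these figures are illustrative assumptions, not measurements.]

```python
# Back-of-envelope: can one x4 SAS-1 wide port feed a 48-disk J4500 shelf?
# All hardware figures below are assumptions for illustration.

LANES = 4                    # x4 wide port
LINE_RATE_GBPS = 3.0         # SAS-1 line rate per lane, gigabits/s (assumed)
ENCODING_EFFICIENCY = 0.8    # 8b/10b encoding overhead
DISK_MBPS = 70               # assumed sustained streaming rate per drive
DISKS = 48                   # drives in one J4500 shelf

# Usable link ceiling in MB/s: lanes * line rate * encoding efficiency
link_mbps = LANES * LINE_RATE_GBPS * 1000 / 8 * ENCODING_EFFICIENCY

# Aggregate streaming capability of the disks behind that link
aggregate_disk_mbps = DISKS * DISK_MBPS

print(f"x4 wide-port ceiling:   {link_mbps:.0f} MB/s")
print(f"48-disk aggregate:      {aggregate_disk_mbps} MB/s")
print(f"link is the bottleneck: {link_mbps < aggregate_disk_mbps}")
```

Under these assumptions a single x4 SAS-1 uplink (~1200 MB/s) saturates well before 48 streaming disks do (~3360 MB/s), and every shelf added to the same daisy chain shares that one uplink; the X4540's internal layout, with its disks spread across multiple controllers, avoids funneling everything through a single wide port.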