Stephen Nelson-Smith wrote:
> Hi,

> I recommended a ZFS-based archive solution to a client needing to have
> a network-based archive of 15TB of data in a remote datacentre.  I
> based this on an X2200 + J4400, Solaris 10 + rsync.
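For anyone following along, that kind of rsync-to-ZFS archive only takes a few commands. A minimal sketch (pool, disk, dataset, and host names here are placeholders, not from the actual setup):

```shell
# Sketch only: build a raidz2 pool on the J4400 disks (device names are examples)
zpool create archive raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zfs create archive/data
zfs set compression=on archive/data

# Pull the data from the Linux side, then snapshot for point-in-time history
rsync -aH --delete linuxhost:/data/ /archive/data/
zfs snapshot archive/data@`date +%Y%m%d`
```

Cheap snapshots are the big win over a plain rsync mirror: each run only costs the space of the changed blocks.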

> This was enthusiastically received, to the extent that the client is
> now requesting that their live system (15TB data on cheap SAN and
> Linux LVM) be replaced with a ZFS-based system.
>
> The catch is that they're not ready to move their production systems
> off Linux - so web, db and app layer will all still be on RHEL 5.
At some point I am sure you will convince them to see the light! ;)
> As I see it, if they want to benefit from ZFS at the storage layer,
> the obvious solution would be a NAS system, such as a 7210, or
> something built from a JBOD and a head node that does something
> similar.  The 7210 is out of budget - and I'm not quite sure how it
> presents its storage - is it NFS/CIFS?
The 7000 series devices can present NFS, CIFS and iSCSI. They look very nice if you want a good GUI, don't know the command line, or need the analytics. I had a play with one the other day and am hoping to get my mitts on one shortly for testing. I would like to give it a real good crack with VMware for VDI VMs.
> If so, presumably it would be
> relatively easy to build something equivalent, but without the
> (awesome) interface.
For sure, the above gear would be fine for that. If you use standard Solaris 10 10/08 you get NFS and iSCSI capability directly in the OS, with commercial support available via a support contract if needed. The best bet would probably be NFS for the Linux machines, but you would need
to test in *their* environment with *their* workload.
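A minimal sketch of the NFS route (subnet, pool, and host names are placeholders; tune the mount options against the real workload):

```shell
# Solaris 10 side: share a dataset over NFS, restricted to the client subnet
zfs create archive/export
zfs set sharenfs='rw=@10.0.0.0/24,root=@10.0.0.0/24' archive/export

# RHEL 5 side: mount it (NFSv3 over TCP shown; sizes are a starting point only)
mount -t nfs -o vers=3,tcp,rsize=32768,wsize=32768 \
    solarishost:/archive/export /mnt/archive
```

The nice part is that NFS export policy lives in the dataset properties, so it survives pool moves and is inherited by child filesystems.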
> The interesting alternative is to set up Comstar on SXCE, create
> zpools and volumes, and make these available either over a fibre
> infrastructure, or iSCSI.  I'm quite excited by this as a solution,
> but I'm not sure if it's really production ready.
If you want a Fibre Channel target then you will need to use OpenSolaris or SXDE, I believe - it's not available in mainstream Solaris yet. I am personally waiting until it has been *well* tested in the bleeding-edge community. I have too much data to take big risks with it.
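For reference, the COMSTAR iSCSI path on SXCE/OpenSolaris looks roughly like this (volume sizes, pool names, and the GUID are placeholders; on stock Solaris 10 the older `zfs set shareiscsi=on` route applies instead):

```shell
# COMSTAR sketch: back an iSCSI LUN with a ZFS volume
zfs create -V 500g tank/lun0
svcadm enable stmf                               # SCSI target mode framework
stmfadm create-lu /dev/zvol/rdsk/tank/lun0       # prints the LU GUID
stmfadm add-view 600144F0...                     # GUID from create-lu output
itadm create-target                              # create a default iSCSI target
svcadm enable -r svc:/network/iscsi/target:default
```

The FC target uses the same STMF layer with a different port provider, which is the part that hasn't reached mainstream Solaris yet.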
> What other options are there, and what advice/experience can you share?
I do very similar stuff here with J4500s and T2Ks: compliance archives, plus NFS and iSCSI targets for Windows machines. Works fine for me. The biggest system is 48TB on a J4500 for Veritas NetBackup DDT staging volumes. Very good throughput indeed - perfect, in fact, given the large files created in this environment. One of these J4500s can keep 4 LTO4 drives in an SL500 saturated with
data on a T5220 (4 streams at ~160 MB/sec).

I think you have pretty much the right idea though. Certainly if you use Sun kit you will be able to deliver
a commercially supported solution for them.
> Thanks,
>
> S.

--
_________________________________________________________________________

Scott Lawson
Systems Architect
Information Communication Technology Services

Manukau Institute of Technology
Private Bag 94006
South Auckland Mail Centre
Manukau 2240
Auckland
New Zealand

Phone  : +64 09 968 7611
Fax    : +64 09 968 7641
Mobile : +64 27 568 7611

mailto:sc...@manukau.ac.nz

http://www.manukau.ac.nz

__________________________________________________________________________

perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

__________________________________________________________________________



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
