Hi Andras,

No problem writing directly. Answers are inline below. (If there are any typos, it's because it's late and I have had a very long day ;))

andras spitzer wrote:
Scott,

Sorry for writing to you directly, but most likely you have missed my
questions regarding your SW design; whenever you have time, would you
reply to them? I really value your comments and appreciate them, as it
seems you have great experience with ZFS in a professional
environment, and that is not so common today.

That was my e-mail, in response to yours (it's in the thread):

"Scott,

That is an awesome reference you wrote. I totally understand and agree
with your idea of having everything redundant (dual paths, redundant
switches, dual controllers) in the SAN infrastructure. I would have
some questions about the SW design you use, if you don't mind.

- are you using MPxIO as DMP?
Yes, configured via 'stmsboot'. I have used Sun MPxIO for quite a few years now and have found
it works well (it was the SAN Foundation Kit for many years).
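For anyone following along, this is roughly what enabling MPxIO via 'stmsboot' looks like on Solaris. This is a generic sketch, not Scott's exact procedure, and the long WWN-based device name is illustrative only:

```shell
# Enable Solaris I/O multipathing (MPxIO); stmsboot also rewrites
# /etc/vfstab device paths for you. A reboot is required afterwards.
stmsboot -e

# After the reboot, show the mapping from the old per-path device
# names to the new multipathed (scsi_vhci) names.
stmsboot -L

# Confirm that multiple paths to each LUN are visible and online.
mpathadm list lu
mpathadm show lu /dev/rdsk/c0t600A0B80001234d0s2   # illustrative WWN name
```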
- as I understood from your e-mail all of your ZFS pools are ZFS
mirrored? (you don't have non-redundant ZFS configuration)
Certainly the ones built from SAN-based disk. No, there are no non-redundant ZFS configurations; all storage is doubled up. Expensive, but we tend to stick to modular storage for this and spread the cost over many years. The storage budget is at least 50% of the systems group's infrastructure budget.

There are many other ZFS file systems which aren't SAN attached and are in mirrors, RAIDZs, etc. I mentioned the Lokis, aka the J4500, which are in RAIDZs. Very nice, and they have worked very reliably so far. I would strongly advocate these units for ZFS if you want a lot of disk reasonably cheaply
that performs well...
- why did you decide to use ZFS mirror instead of ZFS raidz or raidz2?
Because we already have hardware-based RAID5 from our arrays (Sun 3510, 3511 and 6140s). The ZFS file systems are used mostly for mirroring purposes, but also to take advantage of the other nice things
ZFS brings, like snapshots, cloning, clone promotion, etc.
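The snapshot/clone/promote workflow mentioned above looks roughly like this; the pool and filesystem names here are hypothetical, purely for illustration:

```shell
# Take a point-in-time, read-only snapshot of a filesystem.
zfs snapshot tank/data@before-upgrade

# Create a writable clone from that snapshot, e.g. to test a change
# without touching the live data.
zfs clone tank/data@before-upgrade tank/data-test

# If the clone becomes the version you want to keep, promote it:
# the clone takes over the snapshot lineage, and the original
# filesystem becomes dependent on the clone instead.
zfs promote tank/data-test
```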
- you have RAID 5 protected LUNs from SAN, and you put ZFS mirror on
top of them?
Yes. Covered above I think.
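A minimal sketch of the layering being described, with illustrative device names: each device handed to ZFS is itself a hardware RAID5 LUN from the SAN, and ZFS mirrors a pair of them:

```shell
# c2t0d0 and c3t0d0 are hardware RAID5 LUNs presented over the SAN,
# ideally from two different arrays (or sites). Mirroring them gives
# ZFS its own second copy of every block, so checksum errors can be
# self-healed even though each LUN is already RAID-protected below.
zpool create tank mirror c2t0d0 c3t0d0

# Check pool health and the mirror layout.
zpool status tank
```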
Could you please share some details about your configuration regarding
SAN redundancy vs ZFS redundancy (I guess you use both here), and also
some background on why you decided to go with that?
We have been doing it for many years, and not just with ZFS, but with UFS and VxFS as well, plus quite a large number of NTFS machines. We have two geographically separate data centers which are a few kilometers apart, with redundant dark fibre links over different routes. All core switches are in a full mesh with two cores per site, each with a redundant connection to the two cores at the other site, one via each route.

We believe strongly that storage is the key to our business. Servers are just the processing that works on the data, and they are far easier to replace. We tend to standardize on particular models, buy a bunch of them, and not
necessarily buy maintenance for them.

There are a lot of key things to building a reliable data center. I have been having a lively discussion on this with Toby and Richard, which has raised some interesting points. I firmly believe in getting things right from the ground up: I start with power and environment, and storage comes next in my book.
Regards,
sendai "

One point I'm really interested in is that it seems you deploy ZFS with
ZFS mirroring even when you have RAID redundancy at the HW/SAN level,
which obviously means extra cost for you. I'm looking for a fairly
decisive opinion: is it safe to use a ZFS configuration without
redundancy when you have RAID redundancy in your high-end SAN, or do you
still go with ZFS redundancy (ZFS mirror in your case, not
even raidz or raidz2) because of the extra self-healing feature and
the lowered risk of total pool failure?
I think this has also been covered in recent list posts. The important thing is really to have two copies of each block if you wish to be able to self-heal. The cost, I guess, depends on what value you place on the availability
and reliability of your data.
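The self-healing that two copies buys you can be exercised explicitly with a scrub; a generic sketch, assuming a mirrored pool named tank:

```shell
# Walk every allocated block, verify its checksum, and silently
# repair any bad copy from the good side of the mirror. On a pool
# without ZFS-level redundancy, errors can be detected but not fixed.
zpool scrub tank

# The CKSUM column and the errors summary show what was found
# and repaired.
zpool status -v tank
```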

ZFS mirrors are faster for resilvering as well; much, much faster in my experience. We recently used this during a data center move and rebuild. Our SAN fabric was extended to three sites, and we moved blocks of storage one piece at a time, resyncing them at the new location once they were in place, with zero
disruption to the business.
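One way that kind of online migration is typically done with ZFS mirrors (device names hypothetical; this is a sketch of the general technique, not necessarily the exact procedure used here):

```shell
# Attach a LUN at the new site as an extra side of the mirror;
# ZFS resilvers only live data onto it, with the pool online.
zpool attach tank c2t0d0 c5t0d0    # c5t0d0 = LUN at the new location

# Watch until the resilver completes.
zpool status tank

# Then drop the old side, leaving the data on the new array.
zpool detach tank c2t0d0
```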

I do think the Fishworks stuff is going to prove to be a game changer in the near future for many people, as it will offer many of the features we want in our storage. Once COMSTAR has been integrated into this line I might buy some. (I have a large investment in fibre channel, and I don't trust networking people as far as I can kick them when it comes to understanding the potential problems that can arise from disconnecting block targets that are coming in over Ethernet.)
Also, if you could reply in the thread so that everyone can read about your
experiences, that would be great!

Regards,
sendai

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
