[zfs-discuss] Re: Re: ZFS Support for remote mirroring

2007-05-10 Thread Anantha N. Srirama
To clarify further: the EMC note "EMC Host Connectivity Guide for Solaris" indicates that ZFS is supported on 11/06 (aka Update 3) and onwards. However, they sneak in a cautionary disclaimer that the snapshot and clone features are supported by Sun. If one reads it carefully it appears that they do

[zfs-discuss] Extremely long ZFS destroy operations

2007-05-09 Thread Anantha N. Srirama
We're running Solaris 10 Update 3 (aka 11/06) on an E2900 (24 x 96). On this server we've been running a large SAS environment totalling well over 2TB. We also take daily snapshots of the filesystems and clone them for use by a local zone. This setup has been in use for well over 6 months.

[zfs-discuss] Re: ZFS Support for remote mirroring

2007-05-09 Thread Anantha N. Srirama
For whatever reason, EMC notes (on PowerLink) suggest that ZFS is not supported on their arrays. If one is going to use a ZFS filesystem on top of an EMC array, be warned about support issues. This message posted from opensolaris.org ___ zfs-discuss

[zfs-discuss] Re: Extremely long ZFS destroy operations

2007-05-09 Thread Anantha N. Srirama
I've since stopped making the second clone when I realized that .zfs/snapshot/snapname still exists after the clone operation is completed. So my need for the local clone is met by direct access to the snapshot. However, the question about the poor performance of the destroy is still valid. It is quite

[zfs-discuss] Re: ZFS performance with Oracle

2007-03-18 Thread Anantha N. Srirama
I'm sorry dude, I can't make head or tail of your post. What is your point?

[zfs-discuss] Re: Re: Need help making lsof work with ZFS

2007-02-18 Thread Anantha N. Srirama
I think so. After all, there are features shipped which are not fully baked/guaranteed, like send/receive. Isn't shipping the header files better than letting developers guess their structure and possibly make mistakes? Of course the developer can compile against OpenSolaris source but far

[zfs-discuss] Need help making lsof work with ZFS

2007-02-13 Thread Anantha N. Srirama
I contacted the author of 'lsof' regarding the missing ZFS support. The command works but fails to display any files that are opened by the process in a ZFS filesystem. He indicates that the required ZFS kernel structure definitions (header files) are not shipped with the OS. He further

[zfs-discuss] Re: Need help making lsof work with ZFS

2007-02-13 Thread Anantha N. Srirama
I did find zfs.h and libzfs.h (thanks Eric). However, when I try to compile the latest version (4.87C) of lsof it finds the following files missing: dmu.h zfs_acl.h zfs_debug.h zfs_rlock.h zil.h spa.h zfs_context.h zfs_dir.h zfs_vfsops.h zio.h txg.h zfs_ctldir.h zfs_ioctl.h zfs_znode.h

[zfs-discuss] Re: Disk Failure Rates and Error Rates -- ( Off topic: Jim Gray lost at sea)

2007-02-12 Thread Anantha N. Srirama
Here's another website working on his rescue; my prayers are for a safe return of this CS icon. http://www.helpfindjim.com/

[zfs-discuss] Re: Re: Re: ZFS or UFS - what to do?

2007-01-28 Thread Anantha N. Srirama
You're right that storage-level snapshots are filesystem agnostic. I'm not sure why you believe you won't be able to restore individual files by using a NetApp snapshot. In the case of ZFS you'd take a periodic snapshot and use it to restore files; in the case of NetApp you can do the same (of

[zfs-discuss] Re: Re: ZFS or UFS - what to do?

2007-01-28 Thread Anantha N. Srirama
Agreed, I guess I didn't articulate my point/thought very well. The best config is to present JBODs and let ZFS provide the data protection. This has been a very stimulating conversation thread; it is shedding new light on how to best use ZFS.

[zfs-discuss] Re: ZFS or UFS - what to do?

2007-01-26 Thread Anantha N. Srirama
I've used ZFS since July/August 2006 when Sol 10 Update 2 came out (the first release to integrate ZFS.) I've used it extensively on three servers (an E25K domain and 2 E2900s); two of them are production. I've had over 3TB of storage from an EMC SAN under ZFS management for no less than 6 months. Like

[zfs-discuss] Re: Can you turn on zfs compression when the fs is already populated?

2007-01-24 Thread Anantha N. Srirama
I've used the COMPRESS feature for quite a while and you can flip back and forth without any problem. When you turn compression on, nothing happens to the existing data. However, when you start updating your files, all new blocks will be compressed; so it is possible to have your file be composed
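The flip-back-and-forth behavior described above can be seen directly from the command line. A minimal sketch, assuming a hypothetical pool `tank` with a filesystem `tank/data` (admin commands; they require root and a live pool):

```shell
# Enable compression; only blocks written from this point on are compressed.
zfs set compression=on tank/data

# Existing data is left untouched, so after later updates a single file can
# hold a mix of compressed and uncompressed blocks.

# Turning it back off is equally safe; already-compressed blocks stay
# compressed on disk and remain readable.
zfs set compression=off tank/data

# The achieved ratio across the dataset can be checked at any time:
zfs get compressratio tank/data
```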

[zfs-discuss] Re: Converting home directory from ufs to zfs

2007-01-24 Thread Anantha N. Srirama
No such facility exists to automagically convert an existing UFS filesystem to ZFS. You have to create a new ZFS pool/filesystem and then move your data.
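A rough sketch of such a migration; the device name (`c1t0d0`) and mount points are assumptions, and `ufsdump | ufsrestore` is used because it preserves ownership and metadata better than a plain copy:

```shell
# Create the new pool and filesystem (device name is hypothetical).
zpool create homepool c1t0d0
zfs create homepool/home

# Copy the existing UFS data; run this with the source quiesced
# (ideally mounted read-only) so the dump is consistent.
cd /homepool/home
ufsdump 0f - /export/home | ufsrestore -rf -

# Finally swap the mount points: remove the UFS entry from /etc/vfstab
# and point the ZFS filesystem at the old location.
zfs set mountpoint=/export/home homepool/home
```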

[zfs-discuss] Re: How much do we really want zpool remove?

2007-01-18 Thread Anantha N. Srirama
I can vouch for this situation. I had to go through a long maintenance to accomplish the following: - 50 x 64GB drives in a zpool; needed to separate out 15 of them due to performance issues. There was no need to increase storage capacity. Because I couldn't yank 15 drives from the

[zfs-discuss] Re: Re: Heavy writes freezing system

2007-01-17 Thread Anantha N. Srirama
Bug 6413510 is the root cause. ZFS maestros please correct me if I'm quoting an incorrect bug.

[zfs-discuss] Re: Heavy writes freezing system

2007-01-17 Thread Anantha N. Srirama
Bag-o-tricks-r-us, I suggest the following in such a case: - Two ZFS pools: one for production, one for Education. - Isolate the LUNs feeding the pools if possible; don't share spindles. Remember on EMC/Hitachi you've got logical LUNs created by striping/concat'ing carved-up physical disks,
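The two-pool layout suggested above might look like the sketch below; the LUN names are placeholders, and the point is that each pool is fed by LUNs that do not share spindles on the array:

```shell
# Production pool on its own set of LUNs.
zpool create prodpool c2t0d0 c2t1d0 c2t2d0

# Education/test pool on a physically separate set.
zpool create edupool c3t0d0 c3t1d0

# I/O against edupool now cannot queue behind prodpool at the pool level,
# though true isolation still depends on the array-side LUN-to-spindle layout.
zpool status prodpool edupool
```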

[zfs-discuss] Re: Re: Heavy writes freezing system

2007-01-17 Thread Anantha N. Srirama
I did some straight up Oracle/ZFS testing but not on Zvols. I'll give it a shot and report back, next week is the earliest.

[zfs-discuss] Extremely poor ZFS perf and other observations

2007-01-13 Thread Anantha N. Srirama
I'm observing the following behavior in our environment (Sol10U2, E2900, 24x96, 2x2Gbps, ...) - I've a compressed ZFS filesystem where I'm creating a large tar file. I notice that the tar process is running fine (accumulating CPU, truss shows writes, ...) but for whatever reason the timestamp

[zfs-discuss] Re: Puzzling ZFS behavior with COMPRESS option

2007-01-09 Thread Anantha N. Srirama
I'll see if I can confirm what you are suggesting. Thanks.

[zfs-discuss] Re: Puzzling ZFS behavior with COMPRESS option

2007-01-09 Thread Anantha N. Srirama
I've some important information that should shed some light on this behavior: This evening I created a new filesystem across the very same 50 disks including the COMPRESS attribute. My goal was to isolate some workload to the new filesystem and started moving a 100GB directory tree over to the

[zfs-discuss] Puzzling ZFS behavior with COMPRESS option

2007-01-08 Thread Anantha N. Srirama
Our setup: - E2900 (24 x 96); Solaris 10 Update 2 (aka 06/06) - 2 2Gbps FC HBA - EMC DMX storage - 50 x 64GB LUNs configured in 1 ZFS pool - Many filesystems created with COMPRESS enabled; specifically I've one that is 768GB I'm observing the following puzzling behavior: - We are currently

[zfs-discuss] Re: Puzzling ZFS behavior with COMPRESS option

2007-01-08 Thread Anantha N. Srirama
Quick update, since my original post I've confirmed via DTrace (rwtop script in toolkit) that the application is not generating 150MB/S * compressratio of I/O. What then is causing this much I/O in our system?

[zfs-discuss] Re: ZFS behavior under heavy load (I/O that is)

2006-12-13 Thread Anantha N. Srirama
Thanks, I just downloaded Update 3 and hopefully the problem will go away.

[zfs-discuss] Performance problems during 'destroy' (and bizzare Zone problem as well)

2006-12-12 Thread Anantha N. Srirama
[b]Setting:[/b] We've been operating in the following setup for well over 60 days. - E2900 (24 x 92) - 2 2Gbps FC to EMC SAN - Solaris 10 Update 2 (06/06) - ZFS with compression turned on - Global zone + 1 local zone (sparse) - Local zone is fed ZFS clones from the global zone [b]Daily
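The daily snapshot/clone cycle described above can be sketched as follows; the dataset names and dates are hypothetical:

```shell
# Global zone: snapshot today's data and clone it for the local zone.
zfs snapshot mtdc/sasdata@2006-12-12
zfs clone mtdc/sasdata@2006-12-12 mtdc/sasdata-clone

# Next day: retire yesterday's clone, then its snapshot, before repeating.
zfs destroy mtdc/sasdata-clone        # the step that can run extremely long
zfs destroy mtdc/sasdata@2006-12-11   # a snapshot can't go while a clone holds it
```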

[zfs-discuss] ZFS behavior under heavy load (I/O that is)

2006-12-12 Thread Anantha N. Srirama
I'm observing the following behavior on our E2900 (24 x 92 config), 2 FCs, and ... I've a large filesystem (~758GB) with compression turned on. When this filesystem is under heavy load (150MB/S) I have problems saving files in 'vi'. I posted here about it and recall that the issue is addressed in

[zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-11-28 Thread Anantha N. Srirama
Oh my, one day after I posted my horror story another one strikes. This is validation of the design objectives of ZFS; looks like this type of stuff happens more often than not. In the past we'd have just attributed this type of problem to some application-induced corruption; now ZFS is pinning

[zfs-discuss] Re: Re: Production ZFS Server Death (06/06)

2006-11-28 Thread Anantha N. Srirama
Glad it worked for you. I suspect in your case the corruption happened way down in the tree and you could get around it by pruning the tree (rm the file) below the point of corruption. I suspect this could be due to a very localized corruption, like an alpha-particle problem where a bit was flipped

[zfs-discuss] Another win for ZFS

2006-11-27 Thread Anantha N. Srirama
Today ZFS proved its mettle at our site. We've a set of Sun servers (25k and 2900s) that are all connected to a DMX3500 via a SAN. Different servers use the storage differently; some of the storage on the server side was configured with ZFS while others were configured as UFS filesystems while

[zfs-discuss] Re: Configuring a 3510 for ZFS

2006-10-18 Thread Anantha N. Srirama
Thanks for the stimulating exchange of ideas/thoughts. I've always been a believer in letting s/w do my RAID functions; for example, in the old days of VxVM I always preferred to do mirroring at the s/w level. It is my belief that there is more 'meta' information available at the OS level than

[zfs-discuss] Re: Configuring a 3510 for ZFS

2006-10-16 Thread Anantha N. Srirama
I'm glad you asked this question. We are currently expecting 3511 storage sub-systems for our servers. We were wondering about their configuration as well. This ZFS thing throws a wrench in the old-line thinking ;-) Seriously, we now have to put on a new hat to figure out the best way to leverage

[zfs-discuss] Re: I'm dancin' in the streets

2006-09-27 Thread Anantha N. Srirama
Some people have privately asked me for the configuration details in effect when the problem was encountered. Here they are:

zonecfg:bluenile> info
zonepath: /zones/bluenile
autoboot: false
pool:
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin

[zfs-discuss] Re: I'm dancin' in the streets

2006-09-26 Thread Anantha N. Srirama
I've found a small bug in the ZFS/Zones integration in the Sol10 06/06 release. This evening I started tweaking my configuration to make it consistent (I like orthogonal naming standards) and hit upon this situation: - Set up a ZFS clone as /zfspool/bluenile/cloneapps; this is a clone of my global

[zfs-discuss] I'm dancin' in the streets

2006-09-22 Thread Anantha N. Srirama
Wow! I solved a tricky problem this morning thanks to the Zones/ZFS integration. We have a SAS SPDS database environment running on Sol10 06/06. The SPDS database is unique in that when a table is being updated by one user it is unavailable to the rest of the user community. Our nightly update
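One way to wire a nightly ZFS clone into a local zone, so readers keep working against yesterday's data while the master is updated; every name here (pool, datasets, zone) is an assumption, and a loopback mount is just one of the options:

```shell
# Global zone: clone the nightly snapshot of the SPDS data.
zfs snapshot spdspool/db@nightly
zfs clone spdspool/db@nightly spdspool/db-reporting

# Expose the clone inside the zone as a read-only loopback filesystem.
zonecfg -z bluenile <<'EOF'
add fs
set dir=/spds
set special=/spdspool/db-reporting
set type=lofs
add options ro
end
commit
EOF
```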

[zfs-discuss] Re: Re: Bizzare problem with ZFS filesystem

2006-09-18 Thread Anantha N. Srirama
I don't see a patch for this on the SunSolve website. I've opened a service request to get this patch for Sol10 06/06. Stay tuned.

[zfs-discuss] Re: Re: Bizzare problem with ZFS filesystem

2006-09-13 Thread Anantha N. Srirama
I ran the DTrace script and the resulting output is rather large (1 million lines and 65MB), so I won't burden this forum with that much data. Here are the top 100 lines from the DTrace output. Let me know if you need the full output and I'll figure out a way for the group to get it. dtrace:

[zfs-discuss] Re: Bizzare problem with ZFS filesystem

2006-09-13 Thread Anantha N. Srirama
One more piece of information. I was able to ascertain that the slowdown happens only when ZFS is used heavily, meaning lots of in-flight I/O. This morning when the system was quiet my writes to the /u099 filesystem were excellent, but they have since gone south like I reported earlier. I am currently

[zfs-discuss] Re: zfs and Oracle ASM

2006-09-13 Thread Anantha N. Srirama
I did a non-scientific benchmark of ASM versus ZFS. Just look for my posts and you'll see it. To summarize, it was a statistical tie for simple loads of around 2GB of data, and we've chosen to stick with ASM for a variety of reasons, not the least of which is its ability to rebalance when disks

[zfs-discuss] Bizzare problem with ZFS filesystem

2006-09-12 Thread Anantha N. Srirama
I'm experiencing a bizarre write performance problem while using a ZFS filesystem. Here are the relevant facts:

[b]# zpool list[/b]
NAME     SIZE    USED    AVAIL    CAP    HEALTH    ALTROOT
mtdc     3.27T   502G    2.78T    14%    ONLINE    -
zfspool

[zfs-discuss] Re: Bizzare problem with ZFS filesystem

2006-09-12 Thread Anantha N. Srirama
Here's the information you requested.

Script started on Tue Sep 12 16:46:46 2006
# uname -a
SunOS umt1a-bio-srv2 5.10 Generic_118833-18 sun4u sparc SUNW,Netra-T12
# prtdiag
System Configuration: Sun Microsystems sun4u Sun Fire E2900
System clock frequency: 150 MHZ
Memory size: 96GB

[zfs-discuss] Re: Oracle on ZFS

2006-09-09 Thread Anantha N. Srirama
I finally got around to running a 'benchmark' using the AOL clickstream data (2GB of text files and approximately 36 million rows). Here are the Oracle settings during the test. - Same Oracle settings for all tests - All disks in question are 32GB EMC hypers - I had the standard Oracle

[zfs-discuss] Re: Oracle on ZFS

2006-09-09 Thread Anantha N. Srirama
One correction in the interest of full disclosure: the tests were conducted on a machine different from the server configuration indicated in my original post. Here's the server config used in the tests: - E25K domain (1 board: 4P/8-way x 32GB) - 2 2Gbps FC - MPxIO - Solaris 10 Update 2 (06/06); no

[zfs-discuss] Re: Oracle on ZFS

2006-08-26 Thread Anantha N. Srirama
Good start, I'm now motivated to run the same test on my server. My h/w config for the test will be: - E2900 (24 way x 96GB) - 2 2Gbps QLogic cards - 40 x 64GB EMC LUNs I'll run the AOL deidentified clickstream database. It'll primarily be a write test. I intend to use the following scenarios:

[zfs-discuss] Re: ZFS compression / space efficiency

2006-08-22 Thread Anantha N. Srirama
We're running ZFS with compress=ON on an E2900. I'm hosting SAS/SPDS datasets (files) on these filesystems and am achieving 1:3.87 compression (as reported by zfs). Your mileage will vary depending on the data you are writing. If your data is already compressed (zip files) then don't expect any
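A quick way to check what compression is actually achieving on a given dataset, per the point above; the dataset name is an assumption:

```shell
# compressratio reflects real data written so far, not a projection.
zfs get compression,compressratio tank/sasdata

# du shows post-compression on-disk usage while ls -l shows logical size,
# so comparing the two gives a rough per-file sanity check.
du -h /tank/sasdata/dataset.sas7bdat
ls -lh /tank/sasdata/dataset.sas7bdat
```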

[zfs-discuss] Re: ZFS write performance problem with compression set to ON

2006-08-21 Thread Anantha N. Srirama
I've a few questions: - Does 'zpool iostat' report numbers from the top of the ZFS stack or at the bottom? I've correlated the zpool iostat numbers with the system iostat numbers and they match up. This tells me the numbers are from the 'bottom' of the ZFS stack, right? Having said that, it'd be

[zfs-discuss] Re: ZFS write performance problem with compression set to ON

2006-08-17 Thread Anantha N. Srirama
Therein lies my dilemma: - We know the I/O sub-system is capable of much higher I/O rates. - Under the test setup I've SAS datasets which lend themselves to compression. This should manifest itself as lots of read I/O resulting in much smaller (4x) write I/O due to compression. This

[zfs-discuss] ZFS write performance problem with compression set to ON

2006-08-16 Thread Anantha N. Srirama
Test setup: - E2900 with 12 US-IV+ 1.5GHz processors, 96GB memory, 2x2Gbps FC HBAs, MPxIO in round-robin config. - 50 x 64GB EMC disks presented on both FCs. - ZFS pool defined using all 50 disks. - Multiple ZFS filesystems built on the above pool. I'm observing the following: - When