Re: [zfs-discuss] Monitoring ZFS

2006-12-13 Thread Roch - PAE
The latency issue might improve with this RFE: 6471212 "need reserved I/O scheduler slots to improve I/O latency of critical ops". -r Tom Duell writes: Group, We are running a benchmark with 4000 users simulating a hospital management system running on Solaris 10 6/06 on USIV+ based

Re: [zfs-discuss] Need Clarification on ZFS quota property.

2006-12-13 Thread dudekula mastan
Hi Darren, Thanks for your reply. Please take a closer look at the following command: $ mkfs -F vxfs -o bsize=1024 /dev/rdsk/c5t20d9s2 2048000 The above command creates a VxFS file system on the first 2048000 blocks (each block is 1024 bytes) of /dev/rdsk/c5t20d9s2.

Re: [zfs-discuss] Re: Re: Re: Snapshots impact on performance

2006-12-13 Thread Chris Gerhard
Robert Milkowski wrote: Hello Chris, Wednesday, December 6, 2006, 6:23:48 PM, you wrote: CG One of our file servers internal to Sun reproduces this; CG running nv53, here is the dtrace output: Any conclusions yet? Not yet. We had to delete all the automatic snapshots we had so that
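For reference, a minimal sketch of listing and removing snapshots with the standard ZFS commands (the pool and filesystem names here are hypothetical):

  # list all snapshots under a filesystem
  zfs list -t snapshot -r tank/home
  # destroy one snapshot; repeat (or loop) for the rest
  zfs destroy tank/home@2006-12-06-00:00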

Re[2]: [zfs-discuss] Monitoring ZFS

2006-12-13 Thread Robert Milkowski
Hello Neil, Wednesday, December 13, 2006, 1:59:15 AM, you wrote: NP Tom Duell wrote on 12/12/06 17:11: Group, We are running a benchmark with 4000 users simulating a hospital management system running on Solaris 10 6/06 on USIV+ based SunFire 6900 with 6540 storage array. Are there
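For monitoring a run like this, a minimal sketch using the standard tools (the pool name 'tank' is an assumption):

  # pool-level bandwidth and IOPS, refreshed every 5 seconds
  zpool iostat -v tank 5
  # per-device service times and utilisation
  iostat -xn 5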

[zfs-discuss] Re: ZFS behavior under heavy load (I/O that is)

2006-12-13 Thread Anantha N. Srirama
Thanks, I just downloaded Update 3 and hopefully the problem will go away.

Re: [zfs-discuss] Need Clarification on ZFS quota property.

2006-12-13 Thread Darren Dunham
$ mkfs -F vxfs -o bsize=1024 /dev/rdsk/c5t20d9s2 2048000 The above command creates a VxFS file system on the first 2048000 blocks (each block is 1024 bytes) of /dev/rdsk/c5t20d9s2. Similarly, is there an option to limit the size of a ZFS file system? If so, what is it and how is it used?
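In ZFS the usual way to cap a file system is the quota property rather than sizing the on-disk layout; a minimal sketch, assuming a pool named 'tank' (roughly the 2 GB covered by 2048000 1K blocks):

  # create a filesystem and cap its size
  zfs create tank/data
  zfs set quota=2g tank/data
  # verify the setting
  zfs get quota tank/data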

[zfs-discuss] Re: ZFS Storage Pool advice

2006-12-13 Thread Kory Wheatley
The LUNs will be on separate SPA controllers, not all on the same controller, so that's why I thought that if we split our data across different disks and ZFS storage pools we would get better I/O performance. Correct?

Re: [zfs-discuss] Re: ZFS Storage Pool advice

2006-12-13 Thread Richard Elling
Kory Wheatley wrote: The LUNs will be on separate SPA controllers, not all on the same controller, so that's why I thought that if we split our data across different disks and ZFS storage pools we would get better I/O performance. Correct? The way to think about it is that, in general, for best
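As a rough illustration of the trade-off (device names are hypothetical): a single pool dynamically stripes across all of its vdevs, so both controllers' bandwidth is available to every file system, whereas separate pools each see only their own LUN:

  # one pool striped across LUNs on two controllers
  zpool create tank c2t0d0 c3t0d0
  # versus two pools, each confined to one controller's LUN
  zpool create pool1 c2t0d0
  zpool create pool2 c3t0d0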

[zfs-discuss] ZFS gui to create RAID 1+0 pools

2006-12-13 Thread Neal Weiss
I would like to create the following pool using the ZFS GUI: zpool create tank mirror c0t7d0 c1t7d0 mirror c4t7d0 c5t7d0 mirror c6t7d0 c7t7d0 The GUI does not seem to let me create multiple vdevs in a pool at the same time. I know I can go back and add the mirrors later on, but I would like
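From the command line the same RAID-1+0 layout can be built up incrementally, which may serve as a workaround until the GUI handles multiple vdevs at creation time; a sketch using the device names above:

  # create the pool with the first mirror
  zpool create tank mirror c0t7d0 c1t7d0
  # then grow it one mirrored vdev at a time
  zpool add tank mirror c4t7d0 c5t7d0
  zpool add tank mirror c6t7d0 c7t7d0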

Re: [zfs-discuss] Re: Uber block corruption?

2006-12-13 Thread Richard Elling
Anton B. Rang wrote: Also note that the UB is written to every vdev (4 per disk), so the chances of all UBs being corrupted are rather low. The chances that they're corrupted by the storage system, yes. However, they are all sourced from the same in-memory buffer, so an undetected in-memory
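For anyone who wants to inspect the uberblocks directly, zdb can display a pool's active uberblock (read-only inspection; the pool name 'tank' is an assumption):

  # print the current uberblock for the pool
  zdb -u tank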

Re: [zfs-discuss] Need Clarification on ZFS quota property.

2006-12-13 Thread Darren Dunham
This is probably an attempt to 'short-stroke' a larger disk with the intention of utilising only a small amount of the disk surface; as a technique it used to be quite common for certain apps (notably DBs). Hence you saw deployments of quite large disks but with perhaps only 1/4-1/2
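If the goal really is to dedicate only part of the disk to ZFS, the pool can be built on a slice instead of the whole disk; a sketch, assuming slice 0 has already been sized with format to cover the desired blocks:

  # short-stroke analogue: pool on a small slice rather than the whole disk
  zpool create smallpool c5t20d9s0

Note that when ZFS is given a slice rather than a whole disk it will not enable the disk's write cache automatically.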

Re: [zfs-discuss] ZFS on a damaged disk

2006-12-13 Thread Richard Elling
Bill Sommerfeld wrote: On Tue, 2006-12-12 at 22:49 -0800, Patrick P Korsnick wrote: I have a machine with a disk that has some sort of defect, and I've found that if I partition only half of the disk the machine will still work. I tried to use 'format' to scan the disk and find the bad

Re: [zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS

2006-12-13 Thread Torrey McMahon
Robert Milkowski wrote: Hello Torrey, Tuesday, December 12, 2006, 11:40:42 PM, you wrote: TM Robert Milkowski wrote: Hello Matthew, MCA Also, I am considering what type of zpools to create. I have a MCA SAN with T3Bs and SE3511s. Since neither of these can work as a MCA JBOD (at least

Re: [zfs-discuss] ZFS on a damaged disk

2006-12-13 Thread Bill Sommerfeld
On Wed, 2006-12-13 at 10:24 -0800, Richard Elling wrote: I've seen two cases of disk failure where errors only occurred during random I/O; all blocks were readable sequentially; in both cases, this permitted the disk to be replaced without data loss and without resorting to backups by
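A quick way to check whether every block is still readable sequentially before deciding how to replace the disk (the raw device path is hypothetical):

  # read the whole raw device sequentially; an unreadable block
  # shows up as an I/O error from dd
  dd if=/dev/rdsk/c1t2d0s2 of=/dev/null bs=1024k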

[zfs-discuss] Re: Re: ZFS Usage in Warehousing (no more lengthy intro)

2006-12-13 Thread Jochen M. Kaiser
Robert, It's not that bad with CPU usage. For example, with RAID-Z2 while doing a scrub I get something like 800 MB/s read from the disks (550-600 MB/s from the zpool iostat perspective) and all four cores are mostly consumed - I get something like 10% idle on each CPU. === But in the end this would
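A sketch of how such numbers can be observed while a scrub runs (the pool name 'tank' is an assumption):

  # start the scrub and check its progress
  zpool scrub tank
  zpool status tank
  # per-CPU utilisation during the scrub
  mpstat 5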

[zfs-discuss] Re: ZFS Usage in Warehousing (lengthy intro, now slightly OT)

2006-12-13 Thread Jochen M. Kaiser
Al, snip Being a friend of simplicity I was thinking about using a pair (or more) of 3320 SCSI JBODs with multiple RAID-Z and/or RAID-10 ZFS disk pools on which we'd Have you not heard that SCSI is dead? :) SCSI == slow/dead, well more or less, that is While I understand you don't want
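For completeness, a sketch of the two layouts under discussion across a pair of JBOD trays (device names are hypothetical, two disks per tray shown for brevity):

  # RAID-Z drawing disks from both trays
  zpool create tank raidz c2t0d0 c2t1d0 c3t0d0 c3t1d0
  # or a RAID-1+0 style pool: mirrors paired across the trays
  zpool create tank mirror c2t0d0 c3t0d0 mirror c2t1d0 c3t1d0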

Re: [zfs-discuss] ZFS on a damaged disk

2006-12-13 Thread Nathan Kroenert
On a recent journey of pain and frustration, I had to recover a UFS filesystem from a broken disk. The disk had many bad blocks and more were going bad over time. Sadly, there were just a few files that I wanted, but I could not mount the disk without it killing my system. (PATA disks... PITA