Re: [zfs-discuss] Thinking about splitting a zpool in system and data

2012-01-07 Thread Jim Klimov
Hello, Jesus, I have transitioned a number of systems by roughly the same procedure as you've outlined. Sadly, my notes are not in English, so they wouldn't be of much help directly; but I can report that I had success with similar in-place manual transitions from mirrored SVM (pre-Solaris
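
For anyone attempting the same kind of move, here is a minimal sketch of such an in-place transition on Solaris 10 with Live Upgrade; the metadevice and disk names are illustrative, not taken from the notes mentioned above:

    # free one half of the SVM root mirror and build a ZFS root pool on it
    metadetach d10 d12
    metaclear d12
    zpool create rpool c0t1d0s0
    # copy the running boot environment into the new pool and switch over
    lucreate -n zfsBE -p rpool
    luactivate zfsBE
    init 6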

[zfs-discuss] ZIL on a dedicated HDD slice (1-2 disk systems)

2012-01-07 Thread Jim Klimov
Hello all, For smaller systems such as laptops or low-end servers, which can house 1-2 disks, would it make sense to dedicate a 2-4 GB slice to the ZIL for the data pool, separate from rpool? Example layout (single-disk or mirrored):
    s0 - 16 GB - rpool
    s1 -  4 GB - data-zil
    s3 -  * GB - data pool
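
A sketch of how the single-disk variant could be assembled, assuming the slicing above on an illustrative disk c0t0d0:

    zpool create rpool c0t0d0s0                # 16 GB root pool
    zpool create data c0t0d0s3 log c0t0d0s1    # data pool with a 4 GB log slice

The mirrored case is the same idea with "mirror c0t0d0sX c0t1d0sX" in both places. Whether a log device on the same spindles as the data actually helps is exactly the open question here.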

Re: [zfs-discuss] ZFS Upgrade

2012-01-07 Thread Jim Klimov
2012-01-06 17:49, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ivan Rodriguez Dear list, I'm about to upgrade a zpool from version 10 to version 29; I suppose that this upgrade will address several performance issues
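
For reference, the mechanics of the upgrade itself look like this sketch, with an illustrative pool name "tank" (and note that the upgrade is one-way -- older software cannot import the pool afterwards):

    zpool get version tank   # current on-disk version (10 in this case)
    zpool upgrade -v         # versions this OS release understands
    zpool upgrade tank       # move to the newest version this release supports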

Re: [zfs-discuss] Stress test zfs

2012-01-07 Thread Thomas Nau
Hi Grant, On 01/06/2012 04:50 PM, Richard Elling wrote: Hi Grant, On Jan 4, 2012, at 2:59 PM, grant lowe wrote: Hi all, I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB memory. Right now oracle . I've been trying to load test the box with bonnie++. I can seem
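
For what it's worth, the usual way to keep the ARC on a 128GB box from caching the whole benchmark is to give bonnie++ a working set of at least twice physical memory; a sketch with illustrative path and user:

    bonnie++ -d /pool/bench -s 262144 -r 131072 -u nobody
    # -s  working-set size in MB (256 GB, twice physical memory)
    # -r  RAM size in MB, so bonnie++ refuses sizes that would fit in cache
    # -u  unprivileged user to run as when launched by root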

[zfs-discuss] zfs defragmentation via resilvering?

2012-01-07 Thread Jim Klimov
Hello all, I understand that relatively high fragmentation is inherent to ZFS due to its COW and possible intermixing of metadata and data blocks (of which metadata path blocks are likely to expire and get freed relatively quickly). I believe it was sometimes implied on this list that such
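
One workaround commonly suggested for this (rather than relying on a resilver, which reconstructs blocks at their existing offsets) is to rewrite the data with send/receive into fresh space, so every block gets newly allocated; a sketch with illustrative names:

    zfs snapshot -r tank/data@defrag
    zfs send -R tank/data@defrag | zfs receive -F newpool/data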

Re: [zfs-discuss] ZIL on a dedicated HDD slice (1-2 disk systems)

2012-01-07 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov For smaller systems such as laptops or low-end servers, which can house 1-2 disks, would it make sense to dedicate a 2-4 GB slice to the ZIL for the data pool, separate from

Re: [zfs-discuss] zfs defragmentation via resilvering?

2012-01-07 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov I understand that relatively high fragmentation is inherent to ZFS due to its COW and possible intermixing of metadata and data blocks (of which metadata path blocks are likely

Re: [zfs-discuss] zfs defragmentation via resilvering?

2012-01-07 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
It seems that S11 shadow migration can help :-) On 1/7/2012 9:50 AM, Jim Klimov wrote: Hello all, I understand that relatively high fragmentation is inherent to ZFS due to its COW and possible intermixing of metadata and data blocks (of which metadata path blocks are likely to expire and get
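
If shadow migration does fit, the Solaris 11 usage is roughly this (paths and dataset names are illustrative): a new, empty dataset is created with the shadow property pointing at the old data, and files are pulled across -- and thus freshly allocated -- as they are accessed.

    zfs create -o shadow=file:///tank/olddata tank/newdata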

[zfs-discuss] zfs read-ahead and L2ARC

2012-01-07 Thread Jim Klimov
I wonder if it is possible (currently or in the future as an RFE) to tell ZFS to automatically read ahead some files and cache them in RAM and/or L2ARC? One use-case would be for Home-NAS setups where multimedia (video files or catalogs of images/music) is viewed from a ZFS box. For example, if
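
Nothing like that exists as a dataset property today as far as I know, but a crude manual warm-up is possible with the current knobs; dataset and path below are illustrative:

    # allow the media dataset into ARC and L2ARC in the first place
    zfs set primarycache=all tank/media
    zfs set secondarycache=all tank/media
    # then simply read the files once so the caches get populated
    find /tank/media/incoming -type f -exec sh -c 'cat "$1" > /dev/null' sh {} \;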

Re: [zfs-discuss] ZFS Upgrade

2012-01-07 Thread Bob Friesenhahn
On Sat, 7 Jan 2012, Jim Klimov wrote: I believe in this case it might make sense to boot the target system from this BootCD and use zpool upgrade from this OS image. This way you can be more sure that your recovery software (Solaris BootCD) would be helpful :) Also keep in mind that it would
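
In other words, run the upgrade from the rescue environment itself, so the pool can never get ahead of the media you plan to recover with; a sketch with an illustrative pool name:

    # booted from the Solaris BootCD:
    zpool import -f tank    # bring the pool in under the rescue OS
    zpool upgrade tank      # upgrades only as far as this image supports
    zpool export tank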

Re: [zfs-discuss] ZFS and spread-spares (kinda like GPFS declustered RAID)?

2012-01-07 Thread Bob Friesenhahn
On Sat, 7 Jan 2012, Jim Klimov wrote: Several RAID systems have implemented spread spare drives in the sense that there is not an idling disk waiting to receive a burst of resilver data filling it up, but the capacity of the spare disk is spread among all drives in the array. As a result, the

Re: [zfs-discuss] ZFS and spread-spares (kinda like GPFS declustered RAID)?

2012-01-07 Thread Richard Elling
Hi Jim, On Jan 6, 2012, at 3:33 PM, Jim Klimov wrote: Hello all, I have a new idea up for discussion. Several RAID systems have implemented spread spare drives in the sense that there is not an idling disk waiting to receive a burst of resilver data filling it up, but the capacity of

Re: [zfs-discuss] ZIL on a dedicated HDD slice (1-2 disk systems)

2012-01-07 Thread Richard Elling
On Jan 7, 2012, at 7:12 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov For smaller systems such as laptops or low-end servers, which can house 1-2 disks, would it make sense to dedicate a 2-4 GB

Re: [zfs-discuss] ZFS and spread-spares (kinda like GPFS declustered RAID)?

2012-01-07 Thread Jim Klimov
2012-01-08 5:37, Richard Elling wrote: The big question is whether they are worth the effort. Spares solve a serviceability problem and only impact availability in an indirect manner. For single-parity solutions, spares can make a big difference in MTTDL, but have almost no impact on MTTDL for
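
For readers who want the arithmetic behind that, the textbook first-order approximations (N disks per set, per-disk MTBF, repair time MTTR) are:

    MTTDL_1 \approx \frac{MTBF^2}{N(N-1)\,MTTR}
    MTTDL_2 \approx \frac{MTBF^3}{N(N-1)(N-2)\,MTTR^2}

A hot spare only shortens MTTR (no waiting for a human and a courier). For single parity that is a directly proportional gain in MTTDL, and the single-parity baseline is low enough to care about; for double parity the baseline is already so many orders of magnitude higher that the improvement makes no practical difference.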

Re: [zfs-discuss] ZFS and spread-spares (kinda like GPFS declustered RAID)?

2012-01-07 Thread Tim Cook
On Sat, Jan 7, 2012 at 7:37 PM, Richard Elling richard.ell...@gmail.com wrote: Hi Jim, On Jan 6, 2012, at 3:33 PM, Jim Klimov wrote: Hello all, I have a new idea up for discussion. Several RAID systems have implemented spread spare drives in the sense that there is not an idling