Re: [zfs-discuss] ZFS Upgrade

2012-01-06 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ivan Rodriguez
>
> Dear list,
>
> I'm about to upgrade a zpool from version 10 to 29. I suppose that
> this upgrade will improve several performance issues that are present
> on 10, however
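
For reference, the upgrade itself is a one-liner; a minimal sketch (the pool name "tank" is hypothetical, and note the upgrade is one-way: older releases cannot import the pool afterwards):

    # List the pool versions this system's ZFS supports and their features
    zpool upgrade -v

    # Show which imported pools are below the latest supported version
    zpool upgrade

    # Upgrade one pool to a specific on-disk version
    zpool upgrade -V 29 tank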

Re: [zfs-discuss] Thinking about splitting a zpool in "system" and "data"

2012-01-06 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jesus Cea
>
> Sorry if this list is inappropriate. Pointers welcomed.

Not at all. This is the perfect forum for your question.

> So I am thinking about splitting my full two-disk zpool in t

Re: [zfs-discuss] Thinking about splitting a zpool in "system" and "data"

2012-01-06 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
>
>> c) Currently Solaris decides to activate write caching in the SATA
>> disks, nice. What would happen if I still use the complete disks BUT
>> with two slices instead of
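
Context for that question: ZFS only enables the on-disk write cache automatically when it is given a whole disk (EFI label, no slices); with slices you can inspect and, at your own risk, enable the cache by hand through format's expert mode. A sketch, assuming a hypothetical disk c0t1d0:

    # Expert mode exposes the cache controls
    format -e c0t1d0
    format> cache
    cache> write_cache
    write_cache> display
    write_cache> enable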

Re: [zfs-discuss] ZFS + Dell MD1200's - MD3200 necessary?

2012-01-06 Thread Craig Morgan
Ray,

If you are intending to go Nexenta, speak to your local Nexenta SE; we've got HSL-qualified solutions which cover our h/w support, and we've explicitly qualified some MD1200 configs with Dell for certain deployments to guarantee support via both Dell h/w support and ourselves. If you don't

Re: [zfs-discuss] Stress test zfs

2012-01-06 Thread Richard Elling
Hi Grant,

On Jan 4, 2012, at 2:59 PM, grant lowe wrote:

> Hi all,
>
> I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB
> memory. Right now oracle . I've been trying to load test the box with
> bonnie++. I seem to get 80 to 90K writes, but can't seem to get more
>
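
For anyone reproducing this, a typical bonnie++ invocation for a machine with this much RAM looks like the sketch below (path and user are hypothetical; the file size should be at least twice physical memory, here 128GB, so the ARC cannot cache the whole working set):

    # -d target directory on the pool under test, -s total file size,
    # -n 0 skips the small-file phase, -u runs as an unprivileged user
    bonnie++ -d /pool/bench -s 256g -n 0 -u nobody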

Re: [zfs-discuss] S11 vs illumos zfs compatibility

2012-01-06 Thread Richard Elling
On Jan 5, 2012, at 10:19 AM, Tim Cook wrote:

> Speaking of illumos, what exactly is the deal with the zfs discuss mailing
> list? There's all of 3 posts that show up for all of 2011. Am I missing
> something, or is there just that little traction currently?
> http://www.listbox.com/member/archi

Re: [zfs-discuss] Any HP Servers recommendation for OpenIndiana (Capacity Server)?

2012-01-06 Thread Eric D. Mudama
On Wed, Jan 4 at 13:55, Fajar A. Nugraha wrote:

> Were the Dell cards able to present the disks as JBOD without any
> third-party-flashing involved?

Yes, the ones I have tested (SAS 6/iR) worked as expected (bare drives exposed to ZFS) with no changes to drive firmware. I have not tested the H200
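
A quick way to confirm a controller is exposing bare drives rather than RAID volumes (a sketch; device names vary per system):

    # Each physical disk should show up as its own target
    format < /dev/null

    # Vendor, product, and serial fields should belong to the drive
    # itself, not to a virtual disk created by the controller
    iostat -En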

Re: [zfs-discuss] Fixing txg commit frequency

2012-01-06 Thread Sašo Kiselkov
On 07/01/2011 12:01 AM, Sašo Kiselkov wrote:

> On 06/30/2011 11:56 PM, Sašo Kiselkov wrote:
>> Hm, it appears I'll have to do some reboots and more extensive testing.
>> I tried tuning various settings and then returned everything back to the
>> defaults. Yet, now I can ramp the number of concurren
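
For readers landing on this thread: the txg commit interval on Solaris/illumos is governed by the zfs_txg_timeout tunable. A hedged sketch of the usual ways to inspect and change it (5 seconds is just an example value):

    # Read the current value (seconds) on a live kernel
    echo "zfs_txg_timeout/D" | mdb -k

    # Change it on the fly (0t5 is decimal 5 in mdb notation)
    echo "zfs_txg_timeout/W 0t5" | mdb -kw

    # Or make it persistent across reboots via /etc/system
    set zfs:zfs_txg_timeout = 5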

Re: [zfs-discuss] Thinking about splitting a zpool in "system" and "data"

2012-01-06 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
Maybe one can do the following (assume c0t0d0 and c0t1d0):

1) split the rpool mirror: zpool split rpool newpool c0t1d0s0
1b) zpool destroy newpool
2) partition the 2nd hdd c0t1d0s0 into two slices (s0 and s1)
3) zpool create rpool2 c0t1d0s1
4) use lucreate -c c0t0d0s0 -n new-zfsbe -p c0t1d0s0
5) lustatus c0t0d
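
For concreteness, steps 1) through 3) map to the commands below (a sketch using the names from the post; zpool split detaches one side of each mirror into a new, separately importable pool):

    # Detach the second disk's half of the mirror into a throwaway pool
    zpool split rpool newpool c0t1d0s0

    # The new pool only served to free the disk; get rid of it
    zpool destroy newpool

    # After relabeling the disk into s0 and s1, build the data pool on s1
    zpool create rpool2 c0t1d0s1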

Re: [zfs-discuss] Thinking about splitting a zpool in "system" and "data"

2012-01-06 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
Correction.

On 1/6/2012 3:34 PM, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D." wrote:

> Maybe one can do the following (assume c0t0d0 and c0t1d0):
> 1) split the rpool mirror: zpool split rpool newpool c0t1d0s0
> 1b) zpool destroy newpool
> 2) partition the 2nd hdd c0t1d0s0 into two slices (s0 and s1)
> 3) zpool create rpool2

[zfs-discuss] ZFS and spread-spares (kinda like GPFS declustered RAID)?

2012-01-06 Thread Jim Klimov
Hello all,

I have a new idea up for discussion. Several RAID systems have implemented "spread" spare drives: instead of an idle disk waiting to receive a burst of resilver data filling it up, the capacity of the spare disk is spread among all drives in the array. As a re