Hello all,
I have a new idea up for discussion.
Several RAID systems have implemented "spread" spare drives, in the
sense that there is no idle disk sitting around waiting to receive a
burst of resilver data to fill it up; instead, the capacity of the
spare disk is spread among all drives in the array. As a re…
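For concreteness, this is the model OpenZFS eventually shipped as the
dRAID vdev type, where spare capacity is distributed across every child
disk so a rebuild reads from and writes to all drives at once instead of
funneling into one. A minimal sketch in that (much later) syntax; the
seven disk names and the geometry are placeholders, not anything from
the original post:

    # single parity, 2 data disks per redundancy group, 1 distributed
    # spare, 7 children; the spare's capacity lives on all 7 drives
    zpool create tank draid1:2d:1s:7c \
        c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0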
correction
On 1/6/2012 3:34 PM, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D." wrote:
> may be one can do the following (assume c0t0d0 and c0t1d0)
> 1)split rpool mirror: zpool split rpool newpool c0t1d0s0
> 1b)zpool destroy newpool
> 2)partition 2nd hdd c0t1d0s0 into two slice (s0 and s1)
> 3)zpool create rpool2
maybe one can do the following (assume c0t0d0 and c0t1d0):
1) split the rpool mirror: zpool split rpool newpool c0t1d0s0
1b) zpool destroy newpool
2) partition the 2nd hdd c0t1d0 into two slices (s0 and s1)
3) zpool create rpool2 c0t1d0s1
4) use lucreate -c c0t0d0s0 -n new-zfsbe -p c0t1d0s0
5) lustatus
c0t0d…
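The same sequence in one pass, as a sketch (the device names and the
rpool2/new-zfsbe names come from the steps above; note that lucreate's
-p flag takes a root pool name rather than a slice, so this passes
rpool2 where step 4 wrote c0t1d0s0):

    # 1/1b) detach the mirror half into a throwaway pool, then
    # destroy it so the disk is free to repartition
    zpool split rpool newpool c0t1d0s0
    zpool destroy newpool

    # 2) repartition c0t1d0 into slices s0 and s1 (interactive)
    format c0t1d0

    # 3) create the second pool on the new s1 slice
    zpool create rpool2 c0t1d0s1

    # 4/5) clone the current boot environment and verify
    lucreate -c c0t0d0s0 -n new-zfsbe -p rpool2
    lustatus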
On 07/01/2011 12:01 AM, Sašo Kiselkov wrote:
> On 06/30/2011 11:56 PM, Sašo Kiselkov wrote:
>> Hm, it appears I'll have to do some reboots and more extensive testing.
>> I tried tuning various settings and then returned everything back to the
>> defaults. Yet, now I can ramp the number of concurren…
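The snippet cuts off before naming the settings, but as an illustration
of the kind of knob involved (zfs_vdev_max_pending is my guess at an
example, not something the post states), live inspection on Solaris
goes through mdb:

    # read the current value of a ZFS tunable from the running kernel
    echo "zfs_vdev_max_pending/D" | mdb -k

    # write a new decimal value (the 0t prefix means decimal in mdb)
    echo "zfs_vdev_max_pending/W 0t10" | mdb -kw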
On Wed, Jan 4 at 13:55, Fajar A. Nugraha wrote:
Were the Dell cards able to present the disks as JBOD without any
third-party-flashing involved?
Yes, the ones I have tested (SAS 6/iR) worked as expected (bare drives
exposed to ZFS) with no firmware flashing involved. I have not tested
the H200…
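A quick way to confirm a controller really is handing ZFS bare drives
(standard Solaris commands; the c1 target names are placeholders):

    # bare drives appear as ordinary targets in format's device list
    echo | format

    # whole-disk vdevs then work with no slice suffix; ZFS writes an
    # EFI label and enables the disk write cache itself
    zpool create tank mirror c1t0d0 c1t1d0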
On Jan 5, 2012, at 10:19 AM, Tim Cook wrote:
> Speaking of illumos, what exactly is the deal with the zfs discuss mailing
> list? There's all of 3 posts that show up for all of 2011. Am I missing
> something, or is there just that little traction currently?
> http://www.listbox.com/member/archi
Hi Grant,
On Jan 4, 2012, at 2:59 PM, grant lowe wrote:
> Hi all,
>
> I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB of
> memory, running Oracle right now. I've been trying to load test the box with
> bonnie++. I can get 80 to 90K writes, but can't seem to get more…
>
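One thing to check with 128GB of RAM: bonnie++ needs a working set of at
least twice memory, or the run mostly measures the ARC rather than the
disks. A hedged example invocation (the /pool/bench directory and the
user are placeholders):

    # 256GB data set (-s takes MB) so the 128GB ARC can't cache it all;
    # -f skips the slow per-char phase, -n 0 skips small-file creation
    bonnie++ -d /pool/bench -s 262144 -f -n 0 -u nobody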
Ray,
If you are intending to go Nexenta, then speak to your local Nexenta SE;
we've got HSL-qualified solutions which cover our h/w support, and we've
explicitly qualified some MD1200 configs with Dell for certain deployments
to guarantee support via both Dell h/w support and ourselves.
If you don't…
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
>
> > c) Currently Solaris decides to activate write caching in the SATA
> > disks, nice. What would happen if I still use the complete disks BUT
> > with two slices instead of…
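Background for the question (my summary, not from the post): ZFS turns
the disk write cache on automatically only when it is given the whole
disk; on slices it leaves the cache alone, though it can be toggled by
hand from format's expert mode:

    # interactive: select the disk, then cache -> write_cache ->
    # display / enable
    format -e c0t1d0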
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jesus Cea
>
> Sorry if this list is inappropriate. Pointers welcomed.
Not at all. This is the perfect forum for your question.
> So I am thinking about splitting my full two-disk zpool in t…
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ivan Rodriguez
>
> Dear list,
>
> I'm about to upgrade a zpool from version 10 to version 29. I suppose that
> this upgrade will improve several performance issues that are present
> on version 10; however…
>
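For reference, the commands involved (standard zpool syntax; the pool
name tank is a placeholder). Bear in mind the upgrade is one-way: once
at version 29, the pool can no longer be imported by tools that only
speak version 10:

    # list the supported on-disk versions and what each one added
    zpool upgrade -v

    # show pools below the current version, then upgrade one of them
    zpool upgrade
    zpool upgrade tank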