Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-06 Thread Brandon High
On Mon, Sep 6, 2010 at 2:36 PM, Roy Sigurd Karlsbakk  wrote:
> A 7k2 (7200 RPM) drive for L2ARC?

It wouldn't be great, but you could put an SSD in the bay instead.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-06 Thread Roy Sigurd Karlsbakk
- Original Message -
> On Mon, Sep 6, 2010 at 8:53 AM, hatish  wrote:
> > I'm setting up a server with 20 x 1TB disks. Initially I had thought to
> > set up the disks using 2 RaidZ2 groups of 10 disks. However, I have
> > just read the Best Practices guide, and it says your group shouldn't
> > have > 9 disks. So I'm thinking a better configuration would be 2 x
> > 7-disk RaidZ2 + 1 x 6-disk RaidZ2. However, that's 14TB worth of data
> > instead of 16TB.
> 
> 2 x 10 disk raidz2 should be fine for general storage. It depends on
> what your performance needs are.
> 
> Or go with 3 x 6-disk vdevs, a spare, and an L2ARC.
> 

A 7k2 (7200 RPM) drive for L2ARC?

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms
of foreign origin. In most cases, adequate and relevant synonyms exist in
Norwegian.


Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-06 Thread Brandon High
On Mon, Sep 6, 2010 at 8:53 AM, hatish  wrote:
> I'm setting up a server with 20 x 1TB disks. Initially I had thought to set up 
> the disks using 2 RaidZ2 groups of 10 disks. However, I have just read the 
> Best Practices guide, and it says your group shouldn't have > 9 disks. So I'm 
> thinking a better configuration would be 2 x 7-disk RaidZ2 + 1 x 6-disk RaidZ2. 
> However, that's 14TB worth of data instead of 16TB.

2 x 10 disk raidz2 should be fine for general storage. It depends on
what your performance needs are.

Or go with 3 x 6-disk vdevs, a spare, and an L2ARC.
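
A sketch of that layout, with hypothetical device names (c0t0d0 through
c0t19d0 — substitute whatever your controller enumerates):

```shell
# 3 x 6-disk raidz2 vdevs, one hot spare, one cache (L2ARC) device.
# All device names below are placeholders.
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
  raidz2 c0t6d0 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0 \
  raidz2 c0t12d0 c0t13d0 c0t14d0 c0t15d0 c0t16d0 c0t17d0 \
  spare c0t18d0 \
  cache c0t19d0
```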

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-06 Thread Orvar Korvar
Otherwise you can go with three 6-disc vdevs and keep 2 discs as hot spares.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-06 Thread Orvar Korvar
Can you add another disk? Then you would have three 7-disc vdevs. (Always use raidz2.)


Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-06 Thread Carsten Aulbert
Hi

On Monday 06 September 2010 17:53:44 hatish wrote:
> I'm setting up a server with 20 x 1TB disks. Initially I had thought to set up
> the disks using 2 RaidZ2 groups of 10 disks. However, I have just read the
> Best Practices guide, and it says your group shouldn't have > 9 disks. So
> I'm thinking a better configuration would be 2 x 7-disk RaidZ2 + 1 x 6-disk
> RaidZ2. However, that's 14TB worth of data instead of 16TB.
> 
> What are your suggestions and experiences?

Another consideration is that all vdevs in a pool should be equal, i.e. not mixed 
like 2x7 and 1x6 (you will most likely need to force that configuration anyway).
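
For illustration, forcing the mixed layout would look roughly like this
(device names are placeholders):

```shell
# zpool complains about mismatched raidz2 widths; -f overrides the check.
zpool create -f tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
  raidz2 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 \
  raidz2 c0t14d0 c0t15d0 c0t16d0 c0t17d0 c0t18d0 c0t19d0
```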

First, I'd assess what you want/expect from this file system in the end: 
maximum performance, maximum reliability, or maximum size - as always, pick two 
;)

Cheers

Carsten


[zfs-discuss] Suggested RaidZ configuration...

2010-09-06 Thread hatish
I'm setting up a server with 20 x 1TB disks. Initially I had thought to set up the 
disks using 2 RaidZ2 groups of 10 disks. However, I have just read the Best 
Practices guide, and it says your group shouldn't have > 9 disks. So I'm thinking 
a better configuration would be 2 x 7-disk RaidZ2 + 1 x 6-disk RaidZ2. However, 
that's 14TB worth of data instead of 16TB.

What are your suggestions and experiences?
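
For reference, the usable-capacity arithmetic behind the two layouts can be
sketched as follows (raidz2 spends two disks per vdev on parity; 1TB drives
assumed):

```shell
#!/bin/sh
# Usable data disks = (disks per vdev - 2 parity) summed over the vdevs.
# With 1TB drives, each data disk contributes roughly 1TB of usable space.
cap_2x10=$(( 2 * (10 - 2) ))                   # 2 x 10-disk raidz2
cap_7_7_6=$(( (7 - 2) + (7 - 2) + (6 - 2) ))   # 2 x 7-disk + 1 x 6-disk raidz2
echo "2x10 raidz2: ${cap_2x10}TB, 7+7+6 raidz2: ${cap_7_7_6}TB"
```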


Re: [zfs-discuss] what is zfs doing during a log resilver?

2010-09-06 Thread George Wilson

Arne Jansen wrote:
> Giovanni Tirloni wrote:
>> On Thu, Sep 2, 2010 at 10:18 AM, Jeff Bacon  wrote:
>>> So, when you add a log device to a pool, it initiates a resilver.
>>>
>>> What is it actually doing, though? Isn't the slog a copy of the
>>> in-memory intent log? Wouldn't it just simply replicate the data that's
>>> in the other log, checked against what's in RAM? And presumably there
>>> isn't that much data in the slog, so there isn't that much to check?
>>>
>>> Or is it just doing a generic resilver for the sake of argument because
>>> you changed something?
>>
>> Good question. Here it takes a little over 1 hour to resilver a 32GB SSD
>> in a mirror. I've always wondered what exactly it was doing, since it
>> was supposed to be 30 seconds' worth of data. It also generates lots of
>> checksum errors.
>
> Here it takes more than 2 days to resilver a failed slog SSD. I'd also
> expect it to finish in a few seconds... It seems it resilvers the whole
> pool, 35T worth of data on 22 spindles (RAID-Z2).
>
> We don't get any errors during resilver.
>
> --
> Arne



Resilvering log devices should really be handled differently than other 
devices in the pool, but we don't do that today. This is documented in 
CR 6899591. As a workaround, you can first remove the log device and 
then re-add it to the pool as a mirrored log device.
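
A sketch of that workaround (pool and device names are placeholders):

```shell
# Remove the existing slog, then re-add it as one side of a mirrored log.
zpool remove tank c2t0d0
zpool add tank log mirror c2t0d0 c2t1d0
```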


- George


Re: [zfs-discuss] [mdb-discuss] mdb -k - I/O usage

2010-09-06 Thread Jason Banham

On 06/09/2010 10:56, Piotr Jasiukajtis wrote:

> Hi,
>
> I am looking for ideas on how to check whether the machine was under
> high I/O pressure before it panicked (the panic was triggered manually
> by an NMI). By I/O I mean the disks and the ZFS stack.



Do you believe ZFS was a key component in the I/O pressure?
I've CC'd zfs-discuss@opensolaris.org on my reply.

If you think there was a lot of I/O happening, you could run:

::walk zio_root | ::zio -r

This should give you an idea of the amount of ZIO going through ZFS.
I would also be curious to look at the state of the pool(s) and the
ZFS memory usage:

::spa -ev
::arc




Kind regards,

Jason