From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of hatish
I have just read the Best Practices guide, and it says your group shouldn't have more than 9 disks.
I think the value you can take from this is: why does the BPG say that? What is the reasoning?
Mattias, what you say makes a lot of sense. When I saw *both of the above situations resilver in equal time*, I thought: no way! But like you said, that assumes no bus bottlenecks.
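The equal-resilver-time point can be sketched with a quick calculation. The numbers below (per-disk fill, sustained write speed) are illustrative assumptions, not figures from the thread; the point is only that group width drops out of the estimate when the bus isn't the limit.

```python
# Illustrative sketch (assumed numbers): absent bus bottlenecks, resilver
# time is roughly the data to be rewritten onto the replacement disk
# divided by its sustained write speed -- group width drops out.

def resilver_hours(data_per_disk_gb, write_mb_s):
    """Rough resilver time for one replaced disk, ignoring bus limits."""
    return data_per_disk_gb * 1024 / write_mb_s / 3600

# Same per-disk fill, different group widths: identical estimates.
narrow = resilver_hours(data_per_disk_gb=800, write_mb_s=100)  # e.g. 7-disk group
wide = resilver_hours(data_per_disk_gb=800, write_mb_s=100)    # e.g. 10-disk group
print(round(narrow, 2), round(wide, 2))  # both ~2.28 hours
```

With a shared, saturated bus (like a port multiplier), the write speed term shrinks and the estimate grows accordingly.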
This is my exact breakdown (cheap disks on cheap bus :P):
PCI-E 8X 4-port eSATA RAID controller.
4 x eSATA to 5-SATA port multipliers (each connected to an eSATA port on the controller).
will fail; in my mind that is mitigated by the fact that the drives won't be under major stress during that time. It's a workable solution.
On Thu, Sep 9, 2010 at 3:03 PM, Erik Trimble <erik.trim...@oracle.com> wrote:
On 9/9/2010 5:49 AM, hatish wrote:
Very interesting...
Well, let's see if we can do the numbers... the drives would max out my PM 3 times over.
So my PM will effectively throttle the drives' speed to a third, on account of the PM link being maxed out.
Thanks for the rough IO speed check :)
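The port-multiplier throttling described above can be sketched numerically. The per-drive throughput and usable link bandwidth below are assumptions for illustration (the thread doesn't give exact figures), so the oversubscription factor here comes out near 2x rather than the 3x mentioned; the mechanism is the same either way.

```python
# Hypothetical numbers: per-drive sustained throughput and the usable
# payload of one SATA-II (3 Gbit/s) eSATA link are assumptions, not
# figures from the thread.

DRIVES_PER_PM = 5
DRIVE_MB_S = 130       # assumed sustained throughput per drive
PM_LINK_MB_S = 300     # ~usable payload of a 3 Gbit/s link

demand = DRIVES_PER_PM * DRIVE_MB_S             # what the drives could deliver
oversubscription = demand / PM_LINK_MB_S        # how far over the link they go
per_drive_effective = PM_LINK_MB_S / DRIVES_PER_PM  # each drive's fair share

print(f"demand {demand} MB/s vs link {PM_LINK_MB_S} MB/s "
      f"({oversubscription:.1f}x oversubscribed), "
      f"~{per_drive_effective:.0f} MB/s per drive")
```

The practical consequence is the one discussed in the thread: during a resilver, each drive behind the multiplier runs at the link's bandwidth divided by the number of active drives, not at its native speed.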
On Thu, Sep 9, 2010 at 3:20 PM, Edward Ned Harvey <sh...@nedharvey.com> wrote:
From: Hatish Narotam [mailto:hat
Ahhh! So that's how the formula works. That makes perfect sense.
Let's take my case as a scenario:
Each of my vdevs is a 10-disk RaidZ2 (8 data + 2 parity). Using a 128K stripe, I'll have 128K/8 = 16K per data drive, plus 16K per parity drive. That fits both 512B and 4KB sector sizes.
It works in my
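The per-drive chunk arithmetic above can be checked in a couple of lines. The 128K record size and 10-disk RaidZ2 layout come from the post; the sector-size checks show why it lines up with both 512B and 4KB drives.

```python
# Per-drive chunk arithmetic for the RaidZ2 vdev described above.
record_size = 128 * 1024   # 128K ZFS record, in bytes
disks = 10
parity = 2
data_disks = disks - parity          # 8 data disks

chunk = record_size // data_disks    # bytes written to each data drive
print(chunk, chunk % 512 == 0, chunk % 4096 == 0)  # 16384 True True
```

A layout where the record size doesn't divide evenly by the data-disk count (say, 7 data disks) would leave chunks misaligned with 4KB sectors, which is one of the considerations behind the power-of-two data-disk recommendations.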
Very interesting...
Well, let's see if we can do the numbers for my setup.
From a previous post of mine:
[i]This is my exact breakdown (cheap disks on cheap bus :P):
PCI-E 8X 4-port eSATA RAID controller.
4 x eSATA to 5-SATA port multipliers (each connected to an eSATA port on the controller).[/i]
Rebuild time is not a concern for me. The concern with rebuilding is the stress it puts on the disks for an extended period of time (increasing the chances of another disk failing). The % of data used doesn't matter, as the system will try to get it done at max speed, thus creating the
Thanks for all the replies :)
My mindset is split in two now...
Some detail: I'm using 4 1-to-5 SATA port multipliers connected to a 4-port SATA RAID card.
I only need reliability and size; as long as my performance is the equivalent of one drive, I'm happy.
I'm assuming all the data used in
I'm setting up a server with 20 x 1TB disks. Initially I had thought to set up the disks as 2 RaidZ2 groups of 10 disks. However, I have just read the Best Practices guide, and it says your group shouldn't have more than 9 disks. So I'm thinking a better configuration would be 2 x 7-disk RaidZ2 + 1 x 6-disk RaidZ2.
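The capacity trade-off between the two layouts is easy to tally: each RaidZ2 group gives up 2 disks to parity, regardless of width. A rough comparison (1TB drives, ignoring metadata and slop space):

```python
# Usable data disks for the two 20-disk layouts under discussion;
# every RaidZ2 group loses 2 disks to parity.

def raidz2_data_disks(group_sizes):
    """Total data disks across RaidZ2 groups of the given sizes."""
    return sum(g - 2 for g in group_sizes)

two_by_ten = raidz2_data_disks([10, 10])   # 2 x 10-disk groups
split = raidz2_data_disks([7, 7, 6])       # 2 x 7-disk + 1 x 6-disk
print(two_by_ten, split)  # 16 vs 14 -> ~16TB vs ~14TB usable
```

So following the BPG's width recommendation costs about 2TB of usable space here, in exchange for narrower groups, and a third group's worth of failure tolerance (any 2 disks per group).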