Ugh!  I meant that to go to the list, so I'll probably re-send it for the
benefit of everyone involved in the discussion.  There were parts of it
that I wanted others to read.

From a re-read of Richard's e-mail, maybe he meant that the number of I/Os
queued to a device can be tuned lower, and not the priority of the scrub
(as I took him to mean).  Hopefully Richard can clear that up.  I personally
stand corrected for mis-reading Richard there.
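
If it helps, here is a quick sketch (mine, not anything Richard posted) of
checking the per-device queue depth knob on an OpenSolaris-era box.  The
tunable usually meant by "number of I/Os queued to a device" is
zfs_vdev_max_pending; confirm the name against your build before touching
it, and treat this as an illustration rather than a recommendation:

    # Read a kernel tunable via mdb -k (requires suitable privileges).
    # To lower it persistently, people typically set it in /etc/system,
    # e.g. "set zfs:zfs_vdev_max_pending = 10" -- again, verify against
    # your release first.
    import subprocess

    def read_kernel_tunable(name):
        """Return an integer kernel variable, read in decimal via mdb -k."""
        out = subprocess.run(["mdb", "-k"], input=name + "/D\n",
                             capture_output=True, text=True,
                             check=True).stdout
        # mdb prints e.g. "zfs_vdev_max_pending:   35"; take the last token.
        return int(out.split()[-1])

    if __name__ == "__main__":
        print("zfs_vdev_max_pending =",
              read_kernel_tunable("zfs_vdev_max_pending"))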

Of course the performance of a given system cannot be described until it is
built.  Again, my interpretation of your e-mail was that you were looking
for a model for the performance of concurrent scrub and I/O load of a
RAIDZ2 VDEV that you could scale up from your "test" environment of 11
disks to a 200+ TB behemoth.  As I mentioned several times, I doubt such a
model exists, and I have not seen anything published to that effect.  I
don't know how useful it would be if it did exist because the performance
of your disks would be a critical factor.  (Although *any* model beats no
model any day.)  Let's just face it.  You're using a new storage system
that has not been modeled.  To get the model you seek, you will probably
have to create it yourself.

(It's notable that most of the ZFS models that I have seen have been done
by Richard.  Of course, they were MTTDL models, not scrub vs. I/O
performance models for different VDEV types.)
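
If you do want to start on that model yourself, the crude first step is
just to measure: run your benchmark load with no scrub going, then again
after kicking off "zpool scrub", and compare the bandwidth the pool
actually delivers.  A rough sketch along those lines (the pool name "tank",
the interval, and the sample count are placeholders of mine, not anything
from this thread):

    # Sample a pool's delivered read/write bandwidth from "zpool iostat".
    # Run it once during the baseline workload and once with a scrub
    # running; the difference is the start of your empirical model.
    import subprocess

    _UNITS = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

    def _to_bytes(field):
        """Convert a zpool iostat figure such as '1.5M' or '340K' to bytes."""
        if field in ("-", "0"):
            return 0.0
        if field[-1] in _UNITS:
            return float(field[:-1]) * _UNITS[field[-1]]
        return float(field)

    def sample_pool_bandwidth(pool="tank", interval=5, count=13):
        """Collect (read, write) bytes/sec samples for one pool."""
        out = subprocess.run(["zpool", "iostat", pool,
                              str(interval), str(count)],
                             capture_output=True, text=True,
                             check=True).stdout
        rows = [line.split() for line in out.splitlines()]
        data = [r for r in rows if r and r[0] == pool and len(r) >= 7]
        # The first matching line is the average since import, so drop it.
        return [(_to_bytes(r[5]), _to_bytes(r[6])) for r in data[1:]]

    if __name__ == "__main__":
        samples = sample_pool_bandwidth()
        n = max(len(samples), 1)
        print("avg read  %.1f MB/s" % (sum(r for r, _ in samples) / n / 2**20))
        print("avg write %.1f MB/s" % (sum(w for _, w in samples) / n / 2**20))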

As for your point about building large pools from lots of mirror VDEVs, my
response is "meh".  I've said several times, and maybe you've missed it
several times, that there may be pathologies for which YOU should open
bugs.  RAIDZ3 may exhibit the same kind of pathologies you observed with
RAIDZ2.  Apparently plain RAIDZ does not.  I've also noticed (and I'm sure
I'll be corrected if I'm mistaken) that there is no limit on the number of
VDEVs in a pool, but keeping each RAIDZ VDEV to a single-digit number of
disks is recommended.  So there is nothing preventing you from building
(for example) VDEVs from 1 TB disks.  If you take 9 x 1 TB disks per VDEV
and use RAIDZ2, you get 7 TB usable.  That means about 29 VDEVs to get
200 TB.  Double the disk capacity and you can probably get down to 15 top
level VDEVs.  (And you'll want that RAIDZ2 as well, since I don't know if
you could trust that many disks, whether enterprise or consumer.)  In any
case, that number of top level VDEVs sounds reasonable based on what
others have reported.  What's been proven to be "A Bad Idea(TM)" is
putting lots of disks in a single VDEV.

Remember that ZFS is a *new* software system.  It is complex.  It will have
bugs.  You have chosen ZFS; it didn't choose you.  So I'd say you can
contribute to the community by reporting back your experiences, opening
bugs on things which make sense to open bugs on, testing configurations,
modeling, documenting and sharing.  So far, you just seem to be interested
in taking, without so much as an offer to help the community or developers
understand what works and what doesn't.  All take and no give is not cool.
And if you don't like ZFS, then choose something else.  I'm sure EMC or
NetApp will willingly sell you all the spindles you want.  However, I
think it is still early to write off ZFS as a losing proposition, but
that's my opinion.

So far, you seem to be spending a lot of time complaining about a *new*
software system that you're not paying for.  That's pretty tasteless, IMO.

And now I'll re-send that e-mail...

P.S.: Did you remember to re-read this e-mail?  Read it 2 or 3 times and be
clear about what I said and what I did _not_ say.

On Wed, Mar 17, 2010 at 16:12, Tonmaus <sequoiamo...@gmx.net> wrote:

> Hi,
>
> I got a message from you off-list that doesn't show up in the thread even
> after hours.  Since you mentioned the same aspect here as well, I'll
> respond to it from here:
>
> > Third, as for ZFS scrub prioritization, Richard
> > answered your question about that.  He said it is
> > low priority and can be tuned lower.  However, he was
> > answering within the context of an 11 disk RAIDZ2
> > with slow disks.  His exact words were:
> >
> >
> > This could be tuned lower, but your storage
> > is slow and *any* I/O activity will be
> > noticed.
>
> Richard told us twice that scrub is already as low in priority as it can
> be.  From another message:
>
> "Scrub is already the lowest priority. Would you like it to be lower?"
>
>
> =============================================================================
>
> As for the comparison between "slow" and "fast" storage: I understood
> Richard's message to be that with storage providing better random I/O,
> ZFS priority scheduling will perform significantly better, causing less
> degradation of the concurrent load.  While I am even inclined to buy
> that, nobody will be able to tell me how a given system will behave, and
> to what degree concurrent scrubbing will still be possible, until it has
> been tested.
> Another thing: people are talking a lot about narrow VDEVs and mirrors.
> However, when you need to build a 200 TB pool you end up with a lot of
> disks in the first place.  You will need at least double failure
> resilience for such a pool.  If one were to do that with mirrors, ending
> up with approx. 600 TB gross to provide 200 TB net capacity is definitely
> NOT an option.
>
> Regards,
>
> Tonmaus



-- 
"You can choose your friends, you can choose the deals." - Equity Private

"If Linux is faster, it's a Solaris bug." - Phil Harman

Blog - http://whatderass.blogspot.com/
Twitter - @khyron4eva
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
