On Sun, Nov 17, 2013 at 4:03 PM, Justin T. Gibbs <[email protected]> wrote:
> On Nov 17, 2013, at 2:36 PM, Matthew Ahrens <[email protected]> wrote:
>
>> Can you explain what mm_preferred points to? It looks like it starts
>> "off the end of the array", which seems like a questionable decision.
>> Oh, I see it is off the end of the array, but you allocate a little
>> more space for it. That's pretty tricky / confusing. Is this measurably
>> better performing than doing something straightforward like (a)
>> allocating it in vdev_mirror_child_select(), or (b) walking mm_children
>> again to find the nth child with mc_load == lowest_load? Another way to
>> do this even more efficiently and (probably) more straightforwardly
>> would be to start the loop on a random child and go through the loop
>> twice. Then you don't need mm_preferred or mm_preferred_cnt at all; you
>> can just go with the first (or last) child with the lowest load.
>
> mm_preferred was my doing, in an attempt to simplify, via memoization,
> some of the logic in an early version of the patch that Steven proposed
> on the FreeBSD zfs-devel list. I believe that mc_load was obviated by
> that change.
>
> At the time, I didn't review why this extra randomization step was being
> taken. Looking at it now, I don't see why it is necessary at all. At
> light load (i.e., only one command outstanding at a time), we'll favor
> the first healthy device. But at any other time, the load code should do
> its job and create an even distribution. It's hard for me to believe
> that this would cause premature wear-out of one device due to reads. If
> it did, that might be considered a feature, since you don't want your
> SSDs to fail at exactly the same time! Hopefully this all means that
> mm_preferred* and mc_load can just go, with no change in the result.

Agreed. --matt
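[For readers following along: the "start the loop on a random child" idea Matt describes can be sketched roughly as below. This is a hypothetical simplification with made-up structure and function names, not the actual vdev_mirror code; modular indexing from the random start does the work of the "go through the loop twice" trick in a single pass.]

```c
/*
 * Hypothetical, simplified sketch of the suggested selection: drop the
 * mm_preferred[] memoization and break load ties by starting the scan
 * at a random child.  Names below are illustrative stand-ins.
 */
typedef struct mirror_child {
	int mc_load;	/* outstanding I/O count; lower is better */
	int mc_error;	/* nonzero if the child is unhealthy */
} mirror_child_t;

/*
 * Return the index of the healthy child with the lowest load, scanning
 * from a caller-supplied random starting offset.  Among equally loaded
 * children, the first one reached after the random start wins, so no
 * preferred-child array or second tie-break pass is needed.
 * Returns -1 if no child is healthy.
 */
static int
mirror_child_select(mirror_child_t *mc, int children, int start)
{
	int best = -1;
	int best_load = 0;

	for (int i = 0; i < children; i++) {
		int c = (start + i) % children;

		if (mc[c].mc_error != 0)
			continue;
		if (best == -1 || mc[c].mc_load < best_load) {
			best = c;
			best_load = mc[c].mc_load;
		}
	}
	return (best);
}
```

[In the real code, `start` would presumably come from the kernel's random source, e.g. something like spa_get_random(mm->mm_children).]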
_______________________________________________
developer mailing list
[email protected]
http://lists.open-zfs.org/mailman/listinfo/developer
