On Tue, Jan 07, 2014 at 08:30:12PM +0000, Mel Gorman wrote:
> On Tue, Jan 07, 2014 at 10:54:40AM -0800, Greg KH wrote:
> > On Tue, Jan 07, 2014 at 06:17:15AM -0800, Greg KH wrote:
> > > On Tue, Jan 07, 2014 at 02:00:35PM +0000, Mel Gorman wrote:
> > > > A number of NUMA balancing patches were tagged for -stable but I got a
> > > > number of rejected mails from either Greg or his robot minion. The list
> > > > of relevant patches is
> > > >
> > > > FAILED: patch "[PATCH] mm: numa: serialise parallel get_user_page against THP"
> > > > FAILED: patch "[PATCH] mm: numa: call MMU notifiers on THP migration"
> > > > MERGED: Patch "mm: clear pmd_numa before invalidating"
> > > > FAILED: patch "[PATCH] mm: numa: do not clear PMD during PTE update scan"
> > > > FAILED: patch "[PATCH] mm: numa: do not clear PTE for pte_numa update"
> > > > MERGED: Patch "mm: numa: ensure anon_vma is locked to prevent parallel THP splits"
> > > > MERGED: Patch "mm: numa: avoid unnecessary work on the failure path"
> > > > MERGED: Patch "sched: numa: skip inaccessible VMAs"
> > > > FAILED: patch "[PATCH] mm: numa: clear numa hinting information on mprotect"
> > > > FAILED: patch "[PATCH] mm: numa: avoid unnecessary disruption of NUMA hinting during"
> > > > Patch "mm: fix TLB flush race between migration, and change_protection_range"
> > > > Patch "mm: numa: guarantee that tlb_flush_pending updates are visible before page table updates"
> > > > FAILED: patch "[PATCH] mm: numa: defer TLB flush for THP migration as long as"
> > > >
> > > > Fixing the rejects one at a time may cause other conflicts due to
> > > > ordering issues. Instead, this patch series against 3.12.6 is the
> > > > full list of backported patches in the expected order. Greg,
> > > > unfortunately this means you may have to drop some patches already
> > > > in your stable tree and reapply, but on the plus side they should
> > > > then be in the correct order for bisection purposes and you'll know
> > > > I've tested this combination of patches.
> > >
> > > Many thanks for these, I'll go queue them up in a bit and drop the
> > > others to ensure I got all of this correct.
> >
> > Ok, I've now queued all of these up, in this order, so we should be
> > good.
> >
> > I'll do a -rc2 in a bit as it needs some testing.
> >
>
> Thanks a million. I should be cc'd on some of those so I'll pick up the
> final result and run it through the same tests just to be sure.
>
Ok, tests completed and the results look more or less as expected. That is
not to say the performance results are *good* as such. Workloads that
normally exercise automatic NUMA balancing suffered because of other
patches that were merged (primarily the fair zone allocation policy) which
had interesting side-effects. However, the kernel now does not crash under
heavy stress, and I prefer working a little slowly to crashing fast. NAS at
least looks better.

Other workloads like kernel builds and the page fault microbenchmark looked
good, as expected from the fair zone allocation policy fixes.

The big downside is that ebizzy performance is *destroyed* somewhere in
that rc2 patch:
ebizzy
                   3.12.6                3.12.6            3.12.7-rc2
                  vanilla         backport-v1r2             stablerc2
Mean   1    3278.67 (  0.00%)     3180.67 ( -2.99%)     3212.00 ( -2.03%)
Mean   2    2322.67 (  0.00%)     2294.67 ( -1.21%)     1839.00 (-20.82%)
Mean   3    2257.00 (  0.00%)     2218.67 ( -1.70%)     1664.00 (-26.27%)
Mean   4    2268.00 (  0.00%)     2224.67 ( -1.91%)     1629.67 (-28.15%)
Mean   5    2247.67 (  0.00%)     2255.67 (  0.36%)     1582.33 (-29.60%)
Mean   6    2263.33 (  0.00%)     2251.33 ( -0.53%)     1547.67 (-31.62%)
Mean   7    2273.67 (  0.00%)     2222.67 ( -2.24%)     1545.67 (-32.02%)
Mean   8    2254.67 (  0.00%)     2232.33 ( -0.99%)     1535.33 (-31.90%)
Mean  12    2237.67 (  0.00%)     2266.33 (  1.28%)     1543.33 (-31.03%)
Mean  16    2201.33 (  0.00%)     2252.67 (  2.33%)     1540.33 (-30.03%)
Mean  20    2205.67 (  0.00%)     2229.33 (  1.07%)     1537.33 (-30.30%)
Mean  24    2162.33 (  0.00%)     2168.67 (  0.29%)     1535.33 (-29.00%)
Mean  28    2139.33 (  0.00%)     2107.67 ( -1.48%)     1535.00 (-28.25%)
Mean  32    2084.67 (  0.00%)     2089.00 (  0.21%)     1537.33 (-26.26%)
Mean  36    2002.00 (  0.00%)     2020.00 (  0.90%)     1530.33 (-23.56%)
Mean  40    1972.67 (  0.00%)     1978.67 (  0.30%)     1530.33 (-22.42%)
Mean  44    1951.00 (  0.00%)     1953.67 (  0.14%)     1531.00 (-21.53%)
Mean  48    1931.67 (  0.00%)     1930.67 ( -0.05%)     1526.67 (-20.97%)
Figures are records/sec (more is better) for increasing numbers of threads
up to 48, which is the number of logical CPUs in the machine. Three kernels
were tested:

3.12.6         is self-explanatory
backport-v1r2  is the backported series I sent you
stablerc2      is the rc2 patch I pulled from kernel.org
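
For reference, the percentage deltas in parentheses appear to be computed
against the 3.12.6 vanilla column as the baseline. A minimal sketch of that
arithmetic (the helper name and output format below are mine, for
illustration only, not from the test harness):

# Sketch: reproduce the percentage deltas from the table above, assuming
# each figure is compared against the 3.12.6 vanilla column as baseline.

def pct_delta(baseline, value):
    """Percentage change of value relative to baseline (more is better)."""
    return (value - baseline) / baseline * 100.0

# Records/sec from the 1-thread row: vanilla, backport-v1r2, stablerc2
vanilla, backport, stablerc2 = 3278.67, 3180.67, 3212.00

for name, value in (("backport-v1r2", backport), ("stablerc2", stablerc2)):
    print("%-14s %8.2f (%6.2f%%)" % (name, value, pct_delta(vanilla, value)))

# Prints:
# backport-v1r2   3180.67 ( -2.99%)
# stablerc2       3212.00 ( -2.03%)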
I'm not that familiar with the stable workflow, but stable-queue.git looked
like it had the correct quilt tree, so a bisection is in progress. If I had
to bet money on it, I'd bet it's going to be scheduler or power management
related, mostly because problems in both of those areas have tended to hurt
ebizzy recently.
--
Mel Gorman
SUSE Labs