On Tue 20-11-18 15:53:10, David Rientjes wrote:
> On Tue, 20 Nov 2018, Michal Hocko wrote:
> > On Mon 19-11-18 14:16:24, David Rientjes wrote:
> > > On Mon, 19 Nov 2018, Greg Kroah-Hartman wrote:
> > > 
> > > > 4.4-stable review patch.  If anyone has any objections, please let me 
> > > > know.
> > > > 
> > > 
> > > As I noted when this patch was originally proposed and when I nacked 
> > > it[*] 
> > > because it causes a 13.9% increase in remote memory access latency and up 
> > > to 40% increase in remote memory allocation latency on much of our 
> > > software stack that uses MADV_HUGEPAGE after mremapping the text segment 
> > > to memory backed by hugepages, I don't think this is stable material.
> > 
> > There was a wider consensus that this is the most minimal fix for
> > users who see a regression introduced by 5265047ac301 ("mm, thp: really
> > limit transparent hugepage allocation to local node"). As has been
> > discussed extensively, there is no universal win, but we should always
> > opt for the safer side, which this patch accomplishes. The changelog
> > explains the trade-offs at length, along with numbers. I am not happy
> > that your particular workload is suffering, but this area certainly
> > requires many more changes to satisfy a wider range of users.
> > 
> > > The 4.4 kernel is almost three years old and this changes the NUMA 
> > > locality of any user of MADV_HUGEPAGE.
> > 
> > Yes and we have seen bug reports as we adopted this older kernel only
> > now.
> I think the responsible thing to do would be to allow users to remain
> on their stable kernel that they know works, whether that's 4.4 or any
> of the others this is proposed for, and downgrade from any current
> kernel release that causes their workloads to have such severe
> regressions once they try a kernel with this commit.

But we do know that there are people affected on the 4.4 kernel. Besides
that, we can revert in the stable tree as soon as we see bug reports
against new stable tree releases.

Really, there is no single proper behavior. It was a mistake to merge
5265047ac301. Since then we have been in the unfortunate situation that
some workloads may have started to depend on the new behavior.

But rather than repeating the previous long discussion, I would call for
a new one which actually deals with the fallouts. AFAIR there is a patch
series by Mel to reduce the fragmentation issues, with zero feedback so
far. I also think we should start discussing a new memory policy to
establish the semantics you are after.

Michal Hocko
