On Wed, Apr 14, 2021 at 02:50:53PM +0200, Michal Hocko wrote:
> On Wed 17-03-21 11:40:00, Feng Tang wrote:
> > From: Dave Hansen
> >
> > MPOL_PREFERRED honors only a single node set in the nodemask. Add the
> > bare define for a new mode which will allow more than o
On Wed, Apr 14, 2021 at 02:55:39PM +0200, Michal Hocko wrote:
> On Wed 17-03-21 11:40:01, Feng Tang wrote:
> > From: Dave Hansen
> >
> > Create a helper function (mpol_new_preferred_many()) which is usable
> > both by the old, single-node MPOL_PREFERRED and the
Hi Boris, Srinivas,
On Tue, Apr 13, 2021 at 07:28:27PM +0200, Borislav Petkov wrote:
> On Tue, Apr 13, 2021 at 09:58:01PM +0800, kernel test robot wrote:
> > Greeting,
> >
> > FYI, we noticed a -27.4% regression of stress-ng.msg.ops_per_sec due to
> > commit:
> >
> >
> > commit: 9223d0dccb8f85
On Wed, Apr 14, 2021 at 03:08:19PM +0200, Michal Hocko wrote:
> On Wed 17-03-21 11:40:05, Feng Tang wrote:
> > From: Ben Widawsky
> >
> > Add a helper function which takes care of handling multiple preferred
> > nodes. It will be called by future patch
Hi Michal,
Many thanks for reviewing the whole patchset! We will check them.
On Wed, Apr 14, 2021 at 03:25:34PM +0200, Michal Hocko wrote:
> Please use hugetlb prefix to make it explicit that this is hugetlb
> related.
>
> On Wed 17-03-21 11:40:08, Feng Tang wrote:
> >
sources, as there were cases where both of them had been
wrongly judged as unreliable.
[1]. https://lore.kernel.org/lkml/87eekfk8bd@nanos.tec.linutronix.de/
Suggested-by: Thomas Gleixner
Signed-off-by: Feng Tang
---
Change log:
v2:
* Directly skip the watchdog check without messing with the flag
always on.
[1]. https://lore.kernel.org/lkml/875z286xtk@nanos.tec.linutronix.de/
Signed-off-by: Feng Tang
---
Change log:
v2:
* skip timer setup when tsc_clocksource_reliable==1 (Thomas)
* refine comment and code format (Thomas)
arch/x86/kernel/tsc_s
On Sat, Apr 10, 2021 at 08:46:38PM +0200, Thomas Gleixner wrote:
> Feng,
>
> On Sat, Apr 10 2021 at 22:38, Feng Tang wrote:
> > On Sat, Apr 10, 2021 at 11:27:11AM +0200, Thomas Gleixner wrote:
> >> > +static int __init start_sync_check_timer(void)
> >> &g
Hi Boris,
On Sat, Apr 10, 2021 at 11:47:52AM +0200, Borislav Petkov wrote:
> On Sat, Apr 10, 2021 at 11:27:11AM +0200, Thomas Gleixner wrote:
> > On Tue, Mar 30 2021 at 16:25, Feng Tang wrote:
> > > Normally the tsc_sync will be checked every time system enters idle state,
Hi Thomas,
On Sat, Apr 10, 2021 at 11:27:11AM +0200, Thomas Gleixner wrote:
> On Tue, Mar 30 2021 at 16:25, Feng Tang wrote:
> > Normally the tsc_sync will be checked every time system enters idle state,
> > but there is still caveat that a system won't enter idle, either
el.org/lkml/87eekfk8bd@nanos.tec.linutronix.de/
Suggested-by: Thomas Gleixner
Signed-off-by: Feng Tang
---
arch/x86/kernel/tsc.c | 11 +++
1 file changed, 11 insertions(+)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index f70dffc..3a451e3 100644
--- a/arch/x86/kernel/tsc
Normally tsc_sync is checked every time the system enters an idle state,
but there is still a caveat: a system may never enter idle, either because
it is too busy or because it is purposely configured not to. Set up a
periodic timer to make sure the check is always on.
Signed-off-by:
Hi Thomas,
On Wed, Mar 03, 2021 at 04:50:31PM +0100, Thomas Gleixner wrote:
> On Tue, Mar 02 2021 at 20:06, Feng Tang wrote:
> > On Tue, Mar 02, 2021 at 10:16:37AM +0100, Peter Zijlstra wrote:
> >> On Tue, Mar 02, 2021 at 10:54:24AM +0800, Feng Tang wrote:
> >> > c
On Thu, Mar 25, 2021 at 02:51:42PM +0800, Feng Tang wrote:
> > > Honestly, normally if I were to get a report about "52% regression"
> > > for a commit that is supposed to optimize something, I'd just revert
> > > the commit as a case of
at 11:21:44AM +0800, Feng Tang wrote:
> Hi Linus,
>
> On Mon, Mar 15, 2021 at 01:42:50PM -0700, Linus Torvalds wrote:
> > On Sun, Mar 14, 2021 at 11:30 PM kernel test robot
> > wrote:
> > > in testcase: fxmark
> > > on test machine: 288 threads Intel(R) Xeon
Hi Linus,
On Mon, Mar 15, 2021 at 01:42:50PM -0700, Linus Torvalds wrote:
> On Sun, Mar 14, 2021 at 11:30 PM kernel test robot
> wrote:
> >
> > FYI, we noticed a -52.4% regression of
> > fxmark.hdd_btrfs_DWAL_63_bufferedio.works/sec
>
> That's quite the huge regression.
>
> But:
>
> > due to
To reduce some code duplication.
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 25 +++--
1 file changed, 7 insertions(+), 18 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 18aa7dc..ee99ecc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -201,32 +201,21
rally speaking, this is similar to the way MPOL_BIND works, except
the user will only get a SIGSEGV if all nodes in the system are unable
to satisfy the allocation request.
Link: https://lore.kernel.org/r/20200630212517.308045-13-ben.widaw...@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng
h.
[ feng: add NOWARN flag, and skip the direct reclaim to speedup allocation
in some case ]
Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widaw...@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/hugetlb.c | 26 +++---
mm/mempolicy.c |
MANY
can now be removed with this, too.
All the actual machinery to make this work was part of
("mm/mempolicy: Create a page allocator for policy")
Link: https://lore.kernel.org/r/20200630212517.308045-11-ben.widaw...@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
ot;mm/mempolicy: Create a page allocator for policy")
Link: https://lore.kernel.org/r/20200630212517.308045-10-ben.widaw...@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 11 +++
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/mm/mempol
rnel.org/r/20200630212517.308045-9-ben.widaw...@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 65 ++
1 file changed, 52 insertions(+), 13 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index d945f29..d211
ned-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 22 +++---
1 file changed, 7 insertions(+), 15 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eba207e..d945f29 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -963,8 +963,6 @@ s
do believe it helps demonstrate the exclusivity of the
fields.
Link: https://lore.kernel.org/r/20200630212517.308045-7-ben.widaw...@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
include/linux/mempolicy.h | 6 +--
mm/mempolicy.c| 114
for checkpatch (Ben)
Link: https://lore.kernel.org/r/20200630212517.308045-4-ben.widaw...@intel.com
Co-developed-by: Ben Widawsky
Signed-off-by: Ben Widawsky
Signed-off-by: Dave Hansen
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 46 --
1 file
hich calls mpol_new_preferred_many().
v3:
* fix a stack overflow caused by empty nodemask (Feng)
Link: https://lore.kernel.org/r/20200630212517.308045-5-ben.widaw...@intel.com
Signed-off-by: Dave Hansen
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 21 +++--
1 file c
)
annotate mpol_rebind_preferred_many as unused (Ben)
Link: https://lore.kernel.org/r/20200630212517.308045-6-ben.widaw...@intel.com
Signed-off-by: Dave Hansen
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 29 ++---
1 file changed, 22 insertions(+), 7
nk: https://lore.kernel.org/r/20200630212517.308045-3-ben.widaw...@intel.com
Co-developed-by: Ben Widawsky
Signed-off-by: Dave Hansen
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
.../admin-guide/mm/numa_memory_policy.rst | 6 ++--
include/linux/mempolicy.h
ocation for many preferred
mm/mempolicy: VMA allocation for many preferred
mm/mempolicy: huge-page allocation for many preferred
mm/mempolicy: Advertise new MPOL_PREFERRED_MANY
Dave Hansen (4):
mm/mempolicy: convert single preferred_node to full nodemask
mm/mempolicy: Add MPOL_PREFERRED_MANY for
ining the situations.
v2:
Change comment to refer to mpol_new (Michal)
Link: https://lore.kernel.org/r/20200630212517.308045-2-ben.widaw...@intel.com
Acked-by: Michal Hocko
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 1 +
1 file changed, 1 insertion(+)
diff --gi
Hi Boris and Sean,
On Tue, Mar 16, 2021 at 10:04:44AM -0700, Sean Christopherson wrote:
> On Tue, Mar 16, 2021, Borislav Petkov wrote:
> > On Tue, Mar 16, 2021 at 03:42:23PM +0800, Feng Tang wrote:
> > > Also I'm wondering for some basic leaf and extended leaf which
On Mon, Mar 15, 2021 at 01:59:01PM +0100, Borislav Petkov wrote:
> From: Borislav Petkov
>
> Contains core IDs, node IDs and other topology info.
>
> Signed-off-by: Borislav Petkov
Acked-by: Feng Tang
Also I'm wondering for some basic leaf and extended leaf whi
short name and tokens[5] becomes NULL which
> explodes later in strcpy().
>
> Check its value too before further processing.
Thanks for the fix!
Acked-by: Feng Tang
> Signed-off-by: Borislav Petkov
> ---
> tools/arch/x86/kcpuid/kcpuid.c | 2 ++
> 1 file changed, 2 insertio
16 +0100
> Subject: [PATCH] tools/x86/kcpuid: Add AMD Secure Encryption leaf
>
> Add the 0x8000001f leaf's fields.
>
> Signed-off-by: Borislav Petkov
Acked-by: Feng Tang
Thanks!
> ---
> tools/arch/x86/kcpuid/cpuid.csv | 10 ++
> 1 file changed, 10 insert
On Wed, Mar 10, 2021 at 10:44:11AM +0100, Michal Hocko wrote:
> On Wed 10-03-21 13:19:47, Feng Tang wrote:
> [...]
> > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > index d66c1c0..00b19f7 100644
> > --- a/mm/mempolicy.c
> > +++ b/mm/mempolicy.c
> > @@ -2
On Wed, Mar 03, 2021 at 06:20:45PM +0800, Feng Tang wrote:
> From: Ben Widawsky
>
> MPOL_LOCAL is a bit weird because it is simply a different name for an
> existing behavior (preferred policy with no node mask). It has been this
> way since it was added here:
> commit
ld be nice, short-term, to steer MPOL_PREFERRED_MANY
> behavior toward how we expect it to get used first, I think it's a
> mistake if we do it at the cost of long-term divergence from MPOL_PREFERRED.
Hi All,
Based on the discussion, I update the patch as below, please review, thanks
&
The following commit has been merged into the x86/misc branch of tip:
Commit-ID: c6b2f240bf8d5604e6507aff15d5c441944c2f89
Gitweb:
https://git.kernel.org/tip/c6b2f240bf8d5604e6507aff15d5c441944c2f89
Author:Feng Tang
AuthorDate:Fri, 05 Mar 2021 15:21:18 +08:00
Committer
On Wed, Mar 03, 2021 at 10:51:31PM +0800, Thomas Gleixner wrote:
> On Tue, Mar 02 2021 at 10:52, Feng Tang wrote:
> > There are cases that tsc clocksource are wrongly judged as unstable by
> > clocksource watchdogs like hpet, acpi_pm or 'refined-jiffies'. While
>
Borislav Petkov
Suggested-by: Dave Hansen
Suggested-by: Borislav Petkov
Signed-off-by: Feng Tang
Signed-off-by: Borislav Petkov
Link:
https://lkml.kernel.org/r/1603344083-100742-1-git-send-email-feng.t...@intel.com
---
Changelog:
v5:
* rebased against v5.11
* fix a buffer overflow is
On Thu, Mar 04, 2021 at 03:15:13PM +0100, Thomas Gleixner wrote:
> Feng,
>
> On Thu, Mar 04 2021 at 15:43, Feng Tang wrote:
> > On Wed, Mar 03, 2021 at 04:50:31PM +0100, Thomas Gleixner wrote:
> >> Anything pre TSC_ADJUST wants the watchdog on. With TSC ADJUST available
&g
On Thu, Mar 04, 2021 at 01:59:40PM +0100, Michal Hocko wrote:
> On Thu 04-03-21 16:14:14, Feng Tang wrote:
> > On Wed, Mar 03, 2021 at 09:22:50AM -0800, Ben Widawsky wrote:
> > > On 21-03-03 18:14:30, Michal Hocko wrote:
> > > > On Wed 03-03-21 08:31:41, Ben Widawsk
On Wed, Mar 03, 2021 at 09:22:50AM -0800, Ben Widawsky wrote:
> On 21-03-03 18:14:30, Michal Hocko wrote:
> > On Wed 03-03-21 08:31:41, Ben Widawsky wrote:
> > > On 21-03-03 14:59:35, Michal Hocko wrote:
> > > > On Wed 03-03-21 21:46:44, Feng Tang wrote:
> >
On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > > On Wed, Mar 03, 2021 at 08:07:17PM +0800, Tang, Feng wrote:
> > > > Hi Michal,
> >
On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > On Wed, Mar 03, 2021 at 08:07:17PM +0800, Tang, Feng wrote:
> > > Hi Michal,
> > >
> > > On Wed, Mar 03, 2021 at 12:39:57PM +0100, Michal Hocko wro
On Wed, Mar 03, 2021 at 08:07:17PM +0800, Tang, Feng wrote:
> Hi Michal,
>
> On Wed, Mar 03, 2021 at 12:39:57PM +0100, Michal Hocko wrote:
> > On Wed 03-03-21 18:20:58, Feng Tang wrote:
> > > When doing broader test, we noticed allocation slowness in one test
> >
Hi Michal,
On Wed, Mar 03, 2021 at 12:39:57PM +0100, Michal Hocko wrote:
> On Wed 03-03-21 18:20:58, Feng Tang wrote:
> > When doing broader test, we noticed allocation slowness in one test
> > case that malloc memory with size which is slightly bigger than free
> > memory o
org/r/20200630212517.308045-9-ben.widaw...@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 61 +-
1 file changed, 48 insertions(+), 13 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 80
d-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
Documentation/admin-guide/mm/numa_memory_policy.rst | 16
include/uapi/linux/mempolicy.h | 6 +++---
mm/hugetlb.c| 4 ++--
mm/mempol
To reduce some code duplication.
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 25 +++--
1 file changed, 7 insertions(+), 18 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1438d58..d66c1c0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -201,32 +201,21
_nodemask() only once.
Signed-off-by: Feng Tang
---
include/linux/gfp.h | 9 +++--
mm/mempolicy.c | 2 +-
mm/page_alloc.c | 2 +-
3 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 6e479e9..81bacbe 100644
--- a/include/l
patch.
v3: add __GFP_NOWARN for first try of prefer_many allocation (Feng)
Link: https://lore.kernel.org/r/20200630212517.308045-12-ben.widaw...@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/hugetlb.c | 22 +++---
mm/mempolicy.c | 3 ++-
2 files changed, 21 i
do believe it helps demonstrate the exclusivity of the
fields.
Link: https://lore.kernel.org/r/20200630212517.308045-7-ben.widaw...@intel.com
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
include/linux/mempolicy.h | 6 +--
mm/mempolicy.c| 112
ned-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 22 +++---
1 file changed, 7 insertions(+), 15 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index fe1d83c..80cb554 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -953,8 +953,6 @@ s
ining the situations.
v2:
Change comment to refer to mpol_new (Michal)
Link: https://lore.kernel.org/r/20200630212517.308045-2-ben.widaw...@intel.com
#Acked-by: Michal Hocko
Signed-off-by: Ben Widawsky
Signed-off-by: Feng Tang
---
mm/mempolicy.c | 1 +
1 file changed, 1 insertion(+)
diff --gi
convert single preferred_node to full nodemask
mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes
mm/mempolicy: allow preferred code to take a nodemask
mm/mempolicy: refactor rebind code for PREFERRED_MANY
Feng Tang (2):
mem/mempolicy: unify mpol_new_pref
Hi Peter,
On Tue, Mar 02, 2021 at 10:16:37AM +0100, Peter Zijlstra wrote:
> On Tue, Mar 02, 2021 at 10:54:24AM +0800, Feng Tang wrote:
> > clocksource watchdog runs every 500ms, which creates some OS noise.
> > As the clocksource wreckage (especially for those that has per-cpu
&
On Tue, Mar 02, 2021 at 10:14:01AM +0100, Peter Zijlstra wrote:
> On Tue, Mar 02, 2021 at 10:52:52AM +0800, Feng Tang wrote:
> > @@ -1193,6 +1193,17 @@ static void __init check_system_tsc_reliable(void)
> > #endif
> > if (boot_cpu_has(X86
watchdog
and make everyone else wait
Signed-off-by: Feng Tang
Reviewed-by: Andi Kleen
---
include/linux/clocksource.h | 7 +++
kernel/cpu.c| 3 +++
kernel/time/clocksource.c | 31 +--
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a
thread, and be
more defensive by using a maximum of 2 sockets.
The check is done inside tsc_init() before registering the 'tsc-early' and
'tsc' clocksources, as there were cases where both of them had been
wrongly judged as unreliable.
[1]. https://lore.kernel.org/lkml/87eekfk8bd
Hi Paul,
On Wed, Feb 10, 2021 at 02:21:41AM +0800, Paul Moore wrote:
> On Tue, Feb 9, 2021 at 1:09 PM kernel test robot wrote:
> > tree: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
> > master
> > head: 59fa6a163ffabc1bf25c5e0e33899e268a96d3cc
> > commit: 77d8143a5290b
Hi Michael,
On Tue, Feb 16, 2021 at 08:36:02PM +1100, Michael Ellerman wrote:
> Feng Tang writes:
> > Hi Christophe and Michael,
> >
> > On Mon, Jan 18, 2021 at 10:24:08PM +0800, Christophe Leroy wrote:
> >>
> >> Le 05/01/2021 ? 11:58, kernel test robo
nt number as parameter, which is not met by
show_plbopb_regs(). Changing show_plbopb_regs() from a function to
a macro fixes the error, as in the patch below:
Thanks,
Feng
>From 3bcb9638afc873d0e803aea1aad4f77bf1c2f6f6 Mon Sep 17 00:00:00 2001
From: Feng Tang
Date: Fri, 5 Feb 2021 16:08:43 +0800
Subject: [PATC
On Tue, Jan 19, 2021 at 04:33:50PM +0100, Borislav Petkov wrote:
> On Tue, Jan 19, 2021 at 11:09:03PM +0800, Feng Tang wrote:
> > Yes, that can happen. I started a 4 tasks netperf on a 4C/8T KBL desktop,
> > and also saw around 2% improvement. Both the kernel config and the
>
Hi Boris,
On Tue, Jan 19, 2021 at 02:17:59PM +0100, Borislav Petkov wrote:
> On Tue, Jan 19, 2021 at 08:15:05PM +0800, Feng Tang wrote:
> > On Tue, Jan 19, 2021 at 11:02:55AM +0100, Borislav Petkov wrote:
> > > On Mon, Jan 18, 2021 at 08:27:21PM -0800, Paul E. McKenney wrote:
On Tue, Jan 19, 2021 at 10:11:16AM +0100, Borislav Petkov wrote:
> On Tue, Jan 19, 2021 at 01:19:42PM +0800, Feng Tang wrote:
> > Sorry, after testing on more platforms, the following is needed to fix
> > a potential array overflow ((a full patch with fix is also attached)
> &
On Tue, Jan 19, 2021 at 11:02:55AM +0100, Borislav Petkov wrote:
> On Mon, Jan 18, 2021 at 08:27:21PM -0800, Paul E. McKenney wrote:
> > I bet that the results vary depending on the type of CPU, and also on
> > the kernel address-space layout, which of course also varies based on
> > the Kconfig op
+0.1%    3929    fio.write_bw_MBps
[1].https://lore.kernel.org/lkml/20201102091543.GM31092@shao2-debian/
Signed-off-by: Feng Tang
Reviewed-by: Roman Gushchin
Cc: Johannes Weiner
Cc: Michal Hocko
---
Changelogs:
v2:
* Adjust the format of performance data to be more re
return -1;
func = &range->funcs[index];
Thanks,
Feng
On Mon, Jan 18, 2021 at 03:35:11PM +0800, Feng Tang wrote:
> End users frequently want to know what features their processor
> supports, independent of what the kernel supports.
>
> /proc/cpuinfo is great. It i
Borislav Petkov
Suggested-by: Dave Hansen
Suggested-by: Borislav Petkov
Signed-off-by: Feng Tang
Signed-off-by: Borislav Petkov
Link:
https://lkml.kernel.org/r/1603344083-100742-1-git-send-email-feng.t...@intel.com
---
Changelog:
v4:
* rebase against 5.11-rc4
* Boris helped to find and fix
On Sat, Jan 16, 2021 at 07:34:26AM -0800, Paul E. McKenney wrote:
> On Sat, Jan 16, 2021 at 11:52:51AM +0800, Feng Tang wrote:
> > Hi Boris,
> >
> > On Tue, Jan 12, 2021 at 03:14:38PM +0100, Borislav Petkov wrote:
> > > On Tue, Jan 12, 2021 at 10:21:09PM
Hi Chris,
On Wed, Jan 06, 2021 at 03:43:36AM +, Chris Down wrote:
> Feng Tang writes:
> >One further thought is, there are quite some "BATCH" number in
> >kernel for perf-cpu/global data updating, maybe we can add a
> >global flag 'sysctl
Hi Shakeel,
On Tue, Jan 05, 2021 at 04:47:33PM -0800, Shakeel Butt wrote:
> On Tue, Dec 29, 2020 at 6:35 AM Feng Tang wrote:
> >
> > When profiling memory cgroup involved benchmarking, status update
> > sometimes take quite some CPU cycles. Current MEMCG_CHARGE_BATCH
On Mon, Jan 04, 2021 at 02:15:40PM +0100, Michal Hocko wrote:
> On Tue 29-12-20 22:35:14, Feng Tang wrote:
> > When profiling memory cgroup involved benchmarking, status update
> > sometimes take quite some CPU cycles. Current MEMCG_CHARGE_BATCH
> > is used for both charging
On Mon, Jan 04, 2021 at 03:11:40PM +0100, Michal Hocko wrote:
> On Mon 04-01-21 21:34:45, Feng Tang wrote:
> > Hi Michal,
> >
> > On Mon, Jan 04, 2021 at 02:03:57PM +0100, Michal Hocko wrote:
> > > On Tue 29-12-20 22:35:13, Feng Tang wrote:
> > >
Hi Michal,
On Mon, Jan 04, 2021 at 02:03:57PM +0100, Michal Hocko wrote:
> On Tue 29-12-20 22:35:13, Feng Tang wrote:
> > When checking a memory cgroup related performance regression [1],
> > from the perf c2c profiling data, we found high false sharing for
> > accessin
Hi Roman,
On Tue, Dec 29, 2020 at 09:13:27AM -0800, Roman Gushchin wrote:
> On Tue, Dec 29, 2020 at 10:35:14PM +0800, Feng Tang wrote:
> > When profiling memory cgroup involved benchmarking, status update
> > sometimes take quite some CPU cycles. Current MEMCG_CHARGE_BATCH
>
On Tue, Dec 29, 2020 at 08:56:42AM -0800, Roman Gushchin wrote:
> On Tue, Dec 29, 2020 at 10:35:13PM +0800, Feng Tang wrote:
> > When checking a memory cgroup related performance regression [1],
> > from the perf c2c profiling data, we found high false sharing for
> > accessin
One thought is that it could be dynamically calculated according to
the memcg limit and the number of CPUs; another is to add periodic
syncing of the data for accuracy reasons, similar to vmstat, as
suggested by Ying.
Signed-off-by: Feng Tang
Cc: Shakeel Butt
Cc: Roman Gushchin
---
include/linux/memcontro
are
listed:
fio: +1.8% ~ +8.3%
will-it-scale/malloc1: -4.0% ~ +8.9%
will-it-scale/page_fault1: no change
will-it-scale/page_fault2: +2.4% ~ +20.2%
[1].https://lore.kernel.org/lkml/20201102091543.GM31092@shao2-debian/
Signed-off-by: Feng Tang
Cc:
Hi Thomas,
On Mon, Nov 30, 2020 at 08:21:03PM +0100, Thomas Gleixner wrote:
> Feng,
>
> On Fri, Nov 27 2020 at 14:11, Feng Tang wrote:
> > On Fri, Nov 27, 2020 at 12:27:34AM +0100, Thomas Gleixner wrote:
> >> On Thu, Nov 26 2020 at 09:24, Feng Tang wrote:
> >> Y
Hi Thomas,
On Fri, Nov 27, 2020 at 12:27:34AM +0100, Thomas Gleixner wrote:
> Feng,
>
> On Thu, Nov 26 2020 at 09:24, Feng Tang wrote:
> > On Wed, Nov 25, 2020 at 01:46:23PM +0100, Thomas Gleixner wrote:
> >> Now the more interesting question is why this needs to be a PCI
Hi Thomas,
On Wed, Nov 25, 2020 at 01:46:23PM +0100, Thomas Gleixner wrote:
> On Thu, Nov 19 2020 at 12:19, Bjorn Helgaas wrote:
> > 62187910b0fc ("x86/intel: Add quirk to disable HPET for the Baytrail
> > platform") implemented force_disable_hpet() as a special early quirk.
> > These run before t
On Fri, Nov 20, 2020 at 07:44:24PM +0800, Feng Tang wrote:
> On Fri, Nov 13, 2020 at 03:34:36PM +0800, Feng Tang wrote:
> > > I would rather focus on a more effective mem_cgroup layout. It is very
> > > likely that we are just stumbling over two counters here.
> > &g
On Fri, Nov 20, 2020 at 02:19:44PM +0100, Michal Hocko wrote:
> On Fri 20-11-20 19:44:24, Feng Tang wrote:
> > On Fri, Nov 13, 2020 at 03:34:36PM +0800, Feng Tang wrote:
> > > On Thu, Nov 12, 2020 at 03:16:54PM +0100, Michal Hocko wrote:
> > > > > > > I add
On Fri, Nov 13, 2020 at 03:34:36PM +0800, Feng Tang wrote:
> On Thu, Nov 12, 2020 at 03:16:54PM +0100, Michal Hocko wrote:
> > > > > I add one phony page_counter after the union and re-test, the
> > > > > regression
> > > > > reduced to -1.2%.
On Thu, Nov 19, 2020 at 03:15:12PM +0100, Borislav Petkov wrote:
> On Thu, Nov 19, 2020 at 09:50:10PM +0800, Feng Tang wrote:
> > That's really odd. I tried on 3 baremetal machines: one Skylake NUC device,
> > one Xeon E5-2699 and one Xeon E5-2680.
>
> Ah, sorry, not
On Thu, Nov 19, 2020 at 10:18:15AM +0100, Borislav Petkov wrote:
> On Thu, Nov 19, 2020 at 03:20:55PM +0800, Feng Tang wrote:
> > I just tried the patch on one Debian 9 and 2 Ubuntus (16.04 & 20.10) with
> > different gcc versions, and haven't reproduced it yet.
>
>
Hi Borislav,
Thanks for reviewing and trying.
On Wed, Nov 18, 2020 at 08:15:29PM +0100, Borislav Petkov wrote:
> On Thu, Oct 22, 2020 at 01:21:23PM +0800, Feng Tang wrote:
> > diff --git a/tools/arch/x86/kcpuid/kcpuid.c b/tools/arch/x86/kcpuid/kcpuid.c
> > new file mode 100644
&g
On Sat, Nov 14, 2020 at 01:25:44PM +0100, Greg Kroah-Hartman wrote:
> On Sat, Nov 14, 2020 at 03:19:17PM +0800, Feng Tang wrote:
> > Hi Greg,
> >
> > On Fri, Nov 13, 2020 at 07:46:57AM +0100, Greg Kroah-Hartman wrote:
> > > On Thu, Nov 12, 2020 at 10:06:25PM
On Thu, Nov 12, 2020 at 11:43:45AM -0500, Waiman Long wrote:
> >>We tried below patch to make the 'page_counter' aligned.
> >> diff --git a/include/linux/page_counter.h b/include/linux/page_counter.h
> >> index bab7e57..9efa6f7 100644
> >> --- a/include/linux/page_counter.h
> >> +++ b/inclu
On Thu, Nov 12, 2020 at 03:16:54PM +0100, Michal Hocko wrote:
> On Thu 12-11-20 20:28:44, Feng Tang wrote:
> > Hi Michal,
> >
> > On Wed, Nov 04, 2020 at 09:15:46AM +0100, Michal Hocko wrote:
> > > > > > Hi Michal,
> > > > > >
> >
Hi Michal,
On Wed, Nov 04, 2020 at 09:15:46AM +0100, Michal Hocko wrote:
> > > > Hi Michal,
> > > >
> > > > We used the default configure of cgroups, not sure what configuration
> > > > you
> > > > want,
> > > > could you give me more details? and here is the cgroup info of
> > > > will-it-scal