On 4/3/21 3:55 AM, Alexander Duyck wrote:
> On Fri, Mar 26, 2021 at 2:45 AM Xunlei Pang wrote:
>>
>> We encountered user memory allocation failures (OOM) on our
>> 512MiB tiny instances; they didn't happen after turning off
>> page reporting.
>>
>> After so
On 4/3/21 2:56 AM, Alexander Duyck wrote:
> On Fri, Mar 26, 2021 at 2:45 AM Xunlei Pang wrote:
>>
>> Add new "/sys/kernel/mm/page_reporting/reporting_factor"
>> within [0, 100], and stop page reporting when it reaches
>> the configured threshold. Defaul
On 3/26/21 5:44 PM, Xunlei Pang wrote:
> Add the following knobs in PATCH 1~3:
> /sys/kernel/mm/page_reporting/reported_kbytes
> /sys/kernel/mm/page_reporting/refault_kbytes
> /sys/kernel/mm/page_reporting/reporting_factor
>
> Fix unexpected user OOM in PATCH 4.
>
>
: 32
order-10: 16
Reported-by: Helin Guo
Tested-by: Helin Guo
Signed-off-by: Xunlei Pang
---
mm/page_reporting.c | 89 +++--
1 file changed, 72 insertions(+), 17 deletions(-)
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index
memory has refaulted in after being reported out.
Signed-off-by: Xunlei Pang
---
include/linux/mmzone.h | 3 ++
mm/page_alloc.c | 4 +-
mm/page_reporting.c | 112 +++--
mm/page_reporting.h | 5 +++
4 files changed, 119 insertions(+),
Thus it's reasonable to turn the page reporting off by default and
enable it at runtime as needed.
Signed-off-by: Xunlei Pang
---
Documentation/admin-guide/kernel-parameters.txt | 3 +++
mm/page_reporting.c | 13 +
2 files changed, 16 insertions(+)
diff -
Add the following knobs in PATCH 1~3:
/sys/kernel/mm/page_reporting/reported_kbytes
/sys/kernel/mm/page_reporting/refault_kbytes
/sys/kernel/mm/page_reporting/reporting_factor
Fix unexpected user OOM in PATCH 4.
Xunlei Pang (4):
mm/page_reporting: Introduce free page reported counters
mm
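The three knobs above are plain sysfs files; assuming the series is applied, they can be read with ordinary file I/O. A minimal Python sketch (the paths are the ones proposed in this cover letter, not upstream API; the helper name is illustrative):

```python
from pathlib import Path

# Knobs proposed by PATCH 1~3 of this series (not in mainline).
KNOBS = ["reported_kbytes", "refault_kbytes", "reporting_factor"]
BASE = Path("/sys/kernel/mm/page_reporting")

def read_knobs(base=BASE):
    """Return {knob: int value} for whichever knobs exist; skip the rest."""
    out = {}
    for name in KNOBS:
        p = base / name
        if p.exists():
            out[name] = int(p.read_text())
    return out
```

Per the per-patch descriptions above, reporting_factor takes a percentage in [0, 100] and the *_kbytes files are read-only counters.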
it is also useful for testing, gray-release, etc.
Signed-off-by: Xunlei Pang
---
mm/page_reporting.c | 60 -
1 file changed, 59 insertions(+), 1 deletion(-)
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index ba195ea..86c6479 100644
On 3/18/21 8:18 PM, Vlastimil Babka wrote:
> On 3/17/21 8:54 AM, Xunlei Pang wrote:
>> The node list_lock in count_partial() spends a long time iterating
>> over large partial page lists, which can cause a thundering herd
>> effect on the list_lock contention.
>
On 3/18/21 2:45 AM, Vlastimil Babka wrote:
> On 3/17/21 8:54 AM, Xunlei Pang wrote:
>> The node list_lock in count_partial() spends a long time iterating
>> over large partial page lists, which can cause a thundering herd
>> effect on the list_lock contention.
>
Now the partial counters are ready, let's use them to get rid
of count_partial().
The partial counters are involved in calculating the accurate
partial usage when CONFIG_SLUB_DEBUG_PARTIAL is on; otherwise,
their usage statistics are simply assumed to be zero.
Tested-by: James Wang
Signed-off-by: Xunlei
Performance counter stats for 'hackbench 32 thread 2' (10 runs):
39.681273015 seconds time elapsed
( +- 0.21% )
Performance counter stats for 'hackbench 32 thread 2' (10 runs):
39.681238459 seconds time elapsed
and "num_objs" fields of "/proc/slabinfo" equal.
"cat /sys/kernel/slab/<cache>/partial" displays "0".
Tested-by: James Wang
Signed-off-by: Xunlei Pang
---
init/Kconfig | 13 +
mm/slab.h | 6 ++
mm/slub.c | 63 ++
per_cpu_sum() is useful, and deserves to be exported.
Tested-by: James Wang
Signed-off-by: Xunlei Pang
---
include/linux/percpu-defs.h | 10 ++
kernel/locking/percpu-rwsem.c | 10 --
2 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/include/linux/percpu-defs.h
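As a userspace illustration of the pattern per_cpu_sum() serves (not the kernel macro itself; list slots stand in for per-CPU variables): each CPU owns a private slot that it updates without atomics, and a reader sums every slot for the global total:

```python
NR_CPUS = 4
counters = [0] * NR_CPUS  # one slot per "CPU"

def inc(cpu, n=1):
    # Local update: no shared atomic, no lock, no cacheline bouncing.
    counters[cpu] += n

def per_cpu_sum():
    # Analogue of the kernel's per_cpu_sum(): walk every slot on read.
    return sum(counters)
```

The read side pays the O(NR_CPUS) walk so that the far hotter write side stays contention-free.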
On 3/16/21 7:02 PM, Vlastimil Babka wrote:
> On 3/16/21 11:42 AM, Xunlei Pang wrote:
>> On 3/16/21 2:49 AM, Vlastimil Babka wrote:
>>> On 3/9/21 4:25 PM, Xunlei Pang wrote:
>>>> count_partial() can hold n->list_lock spinlock for quite long, which
>>>>
On 3/16/21 2:49 AM, Vlastimil Babka wrote:
> On 3/9/21 4:25 PM, Xunlei Pang wrote:
>> count_partial() can hold n->list_lock spinlock for quite long, which
>> causes much trouble to the system. This series eliminates the problem.
>
> Before I check the details, I have
Now the partial counters are ready, let's use them directly
and get rid of count_partial().
Tested-by: James Wang
Reviewed-by: Pekka Enberg
Signed-off-by: Xunlei Pang
---
mm/slub.c | 54 ++
1 file changed, 22 insertions(+), 32 deletions
later.
Tested-by: James Wang
Reviewed-by: Pekka Enberg
Signed-off-by: Xunlei Pang
---
mm/slab.h | 4
mm/slub.c | 46 +-
2 files changed, 49 insertions(+), 1 deletion(-)
diff --git a/mm/slab.h b/mm/slab.h
index 076582f..817bfa0 100644
-
( +- 0.21% )
Performance counter stats for 'hackbench 32 thread 2' (10 runs):
39.681238459 seconds time elapsed
( +- 0.09% )
Xunlei Pang (4):
mm/slub: Introduce two counters for partial objects
mm/slub: Get rid of count_partial()
percpu:
Wang
Reviewed-by: Pekka Enberg
Signed-off-by: Xunlei Pang
---
mm/slab.h | 6 --
mm/slub.c | 30 +++---
2 files changed, 27 insertions(+), 9 deletions(-)
diff --git a/mm/slab.h b/mm/slab.h
index 817bfa0..c819597 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -546,16
On 3/2/21 5:14 PM, Christoph Lameter wrote:
> On Mon, 10 Aug 2020, Xunlei Pang wrote:
>
>>
>> diff --git a/mm/slab.h b/mm/slab.h
>> index c85e2fa..a709a70 100644
>> --- a/mm/slab.h
>> +++ b/mm/slab.h
>> @@ -616,7 +616,7 @@ struct kmem_cache_node {
>
2020 at 6:05 PM xunlei wrote:
>>
>> On 2020/8/20 PM 10:02, Pekka Enberg wrote:
>>> On Mon, Aug 10, 2020 at 3:18 PM Xunlei Pang
>>> wrote:
>>>>
>>>> v1->v2:
>>>> - Improved changelog and variable naming for PATCH 1~2.
>>
The following commit has been merged into the sched/core branch of tip:
Commit-ID: df3cb4ea1fb63ff326488efd671ba3c39034255e
Gitweb:
https://git.kernel.org/tip/df3cb4ea1fb63ff326488efd671ba3c39034255e
Author: Xunlei Pang
AuthorDate: Thu, 24 Sep 2020 14:48:47 +08:00
Committer
On 9/24/20 3:18 PM, Vincent Guittot wrote:
> On Thu, 24 Sep 2020 at 08:48, Xunlei Pang wrote:
>>
>> We've seen cases where tasks with a full cpumask
>> (e.g. after being put into a cpuset or set to full affinity)
>> were migrated to our isolated cpu
Fix it by checking the valid domain mask in select_idle_smt().
Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings()")
Reported-by: Wetp Zhang
Reviewed-by: Jiang Biao
Signed-off-by: Xunlei Pang
---
kernel/sched/fair.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
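The idea of the fix, reduced to a userspace sketch (Python sets stand in for cpumasks and the sched-domain span; function and parameter names are illustrative, not the kernel's): a candidate CPU must lie in both the task's allowed mask and the domain being searched, so a task with a full affinity mask can no longer land outside the intended domain:

```python
def select_idle_smt(allowed_cpus, domain_span, idle_cpus):
    """Pick an idle CPU that is valid in BOTH masks, else None."""
    for cpu in idle_cpus:
        if cpu in allowed_cpus and cpu in domain_span:
            return cpu
    return None

# CPU 3 is idle and allowed, but outside the domain: it must be rejected.
assert select_idle_smt({0, 1, 2, 3}, {0, 1}, idle_cpus=[3, 1]) == 1
assert select_idle_smt({0, 1, 2, 3}, {0, 1}, idle_cpus=[3]) is None
```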
On 2020/9/9 AM 2:50, Julius Hemanth Pitti wrote:
> For non root CG, in try_charge(), we keep trying
> to charge until we succeed. On a non-preemptive
> kernel, when we are OOM, this results in holding
> the CPU forever.
>
> On SMP systems, this doesn't create a big problem
> because oom_reaper get a
On 2020/8/24 PM 8:30, Xunlei Pang wrote:
> We've seen cases where tasks with a full cpumask
> (e.g. after being put into a cpuset or set to full affinity)
> were migrated to our isolated cpus in the production environment.
>
> After some analysis, we found that it is due
) to solve this
issue, this will mean that we will get a scheduling point for each
memcg in the reclaimed hierarchy without any dependency on the
reclaimable memory in that memcg, thus making it more predictable.
Acked-by: Chris Down
Acked-by: Michal Hocko
Suggested-by: Michal Hocko
Signed-off-by: X
) to solve this
issue, and any other possible issue like memory.min protection.
Suggested-by: Michal Hocko
Signed-off-by: Xunlei Pang
---
mm/vmscan.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 99e1796..bbdc38b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2617
On 2020/8/26 PM 7:45, xunlei wrote:
> On 2020/8/26 PM 7:00, Michal Hocko wrote:
>> On Wed 26-08-20 18:41:18, xunlei wrote:
>>> On 2020/8/26 PM 4:11, Michal Hocko wrote:
>>>> On Wed 26-08-20 15:27:02, Xunlei Pang wrote:
>>>>> We've met softlockup with &
shrink_lruvec()
to give up the cpu to others.
Signed-off-by: Xunlei Pang
---
mm/vmscan.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 99e1796..349a88e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2449,6 +2449,12 @@ static void shrink_lruvec(struct lruv
mask.
Fix it by checking the valid domain mask in select_idle_smt().
Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings()")
Reported-by: Wetp Zhang
Signed-off-by: Xunlei Pang
---
kernel/sched/fair.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
Now the partial counters are ready, let's use them directly
and get rid of count_partial().
Co-developed-by: Wen Yang
Signed-off-by: Xunlei Pang
---
mm/slub.c | 57 -
1 file changed, 24 insertions(+), 33 deletions(-)
diff --git a/mm
sed
( +- 0.17% )
== patched with patch1~3
Performance counter stats for 'hackbench 20 thread 2' (10 runs):
19.112106847 seconds time elapsed
( +- 0.64% )
Xunlei Pang (3):
mm/slub: Introduce two counters for partial objects
mm/s
The only concern with introducing the partial counter is that
partial_free_objs may cause atomic operation contention
when the same SLUB sees concurrent __slab_free() calls.
This patch changes it to a percpu counter to avoid that.
Co-developed-by: Wen Yang
Signed-off-by: Xunlei Pang
---
mm/slab.h | 2
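A userspace analogue of why the percpu counter helps (threads stand in for CPUs; the slot layout and names are illustrative, not the kernel's): concurrent frees each update a private slot, so there is no atomic read-modify-write on one shared word, and only readers pay the cost of summing:

```python
import threading

NR = 4
slots = [0] * NR  # one private slot per "CPU"

def freer(idx, count):
    # Simulated __slab_free() path: bump only this thread's own slot.
    for _ in range(count):
        slots[idx] += 1

threads = [threading.Thread(target=freer, args=(i, 1000)) for i in range(NR)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# No updates were lost even though all threads ran concurrently,
# because no two threads ever touched the same slot.
assert sum(slots) == 4000
```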
later.
Acked-by: Pekka Enberg
Co-developed-by: Wen Yang
Signed-off-by: Xunlei Pang
---
mm/slab.h | 2 ++
mm/slub.c | 37 -
2 files changed, 38 insertions(+), 1 deletion(-)
diff --git a/mm/slab.h b/mm/slab.h
index 7e94700..c85e2fa 100644
--- a/mm/slab.h
++
performance impact is minimal.
Co-developed-by: Wen Yang
Signed-off-by: Xunlei Pang
---
mm/slab.h | 2 ++
mm/slub.c | 38 +-
2 files changed, 39 insertions(+), 1 deletion(-)
diff --git a/mm/slab.h b/mm/slab.h
index 7e94700..5935749 100644
--- a/mm/slab.h
+++ b/mm/slab.h
Hi Chris,
On 2019/6/16 PM 6:37, Chris Down wrote:
> Hi Xunlei,
>
> Xunlei Pang writes:
>> docker and various types (different memory capacity) of containers
>> are managed by k8s, it's a burden for k8s to maintain those dynamic
>> figures, simply set "max"
Hi Chris,
On 2019/6/16 AM 12:08, Chris Down wrote:
> Hi Xunlei,
>
> Xunlei Pang writes:
>> Currently memory.min|low implementation requires the whole
>> hierarchy has the settings, otherwise the protection will
>> be broken.
>>
>> Our hierarchy is ki
Hi Chris,
On 2019/6/15 PM 11:58, Chris Down wrote:
> Hi Xunlei,
>
> Xunlei Pang writes:
>> There're several cases like resize and force_empty that don't
>> need to account to psi; otherwise it is misleading.
>
> I'm afraid I'm quite confused by this patch. Why do you t
.
Signed-off-by: Xunlei Pang
---
include/linux/swap.h | 3 ++-
mm/memcontrol.c | 13 +++--
mm/vmscan.c | 9 ++---
3 files changed, 15 insertions(+), 10 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4bfb5c4ac108..74b5443877d4 100644
achieve the flexibility.
In order not to break the previous hierarchical behaviour, only
ignore the parent when there's no protected ancestor up
the hierarchy.
Signed-off-by: Xunlei Pang
---
include/linux/page_counter.h | 2 ++
mm/memcontrol.c | 5 +
mm/page_counter.c
since we
> need to start a new timer if the current one is in the process of
> finishing.
>
> Signed-off-by: Ben Segall
> ---
We've also suffered from this performance issue recently:
Reviewed-by: Xunlei Pang
> kernel/sched/fair.c | 7 +++
> kernel/sched/sched.h | 1 +
&g
Hi Roman,
On 2018/12/4 AM 2:00, Roman Gushchin wrote:
> On Mon, Dec 03, 2018 at 04:01:17PM +0800, Xunlei Pang wrote:
>> When usage exceeds min, min usage should be min rather than 0.
>> Apply the same for low.
>>
>> Signed-off-by: Xunlei Pang
>> ---
>> mm
On 2018/12/4 PM 3:25, Michal Hocko wrote:
> On Tue 04-12-18 10:40:29, Xunlei Pang wrote:
>> On 2018/12/4 AM 1:22, Michal Hocko wrote:
>>> On Mon 03-12-18 23:20:31, Xunlei Pang wrote:
>>>> On 2018/12/3 PM 7:56, Michal Hocko wrote:
>>>>> On Mon 03-12-18
On 2018/12/3 PM 7:57, Michal Hocko wrote:
> On Mon 03-12-18 16:01:19, Xunlei Pang wrote:
>> When memcgs get reclaimed after their usage exceeds min, some
>> usages below the min may also be reclaimed in the current
>> implementation; the amount is considerably large dur
On 2018/12/4 AM 1:22, Michal Hocko wrote:
> On Mon 03-12-18 23:20:31, Xunlei Pang wrote:
>> On 2018/12/3 PM 7:56, Michal Hocko wrote:
>>> On Mon 03-12-18 16:01:18, Xunlei Pang wrote:
>>>> There may be cgroup memory overcommitment; it will become
>>>>
On 2018/12/3 PM 7:56, Michal Hocko wrote:
> On Mon 03-12-18 16:01:18, Xunlei Pang wrote:
>> There may be cgroup memory overcommitment; it will become
>> even more common in the future.
>>
>> Let's enable kswapd to reclaim low-protected memory in case
>> of memory pressu
On 2018/12/3 PM 7:54, Michal Hocko wrote:
> On Mon 03-12-18 16:01:17, Xunlei Pang wrote:
>> When usage exceeds min, min usage should be min rather than 0.
>> Apply the same for low.
>
> Why? What is the actual problem.
children_min_usage tracks the total children usages un
-off-by: Xunlei Pang
---
mm/vmscan.c | 8
1 file changed, 8 insertions(+)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 62ac0c488624..3d412eb91f73 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3531,6 +3531,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int
classzone_idx
this part of usages to be reclaimed.
Signed-off-by: Xunlei Pang
---
include/linux/memcontrol.h | 7 +--
mm/memcontrol.c | 9 +++--
mm/vmscan.c | 17 +++--
3 files changed, 27 insertions(+), 6 deletions(-)
diff --git a/include/linux/memcontrol.h b
When usage exceeds min, min usage should be min rather than 0.
Apply the same for low.
Signed-off-by: Xunlei Pang
---
mm/page_counter.c | 12 ++--
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/mm/page_counter.c b/mm/page_counter.c
index de31470655f6..75d53f15f040 100644
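The rule this patch argues for can be stated in one function (an illustrative sketch, not the kernel's page_counter code): the protection credited to a memcg is its setting clamped by its usage, and it does not drop to zero just because usage went over the setting:

```python
def protected(usage, setting):
    """Effective min/low protection credited to one memcg (illustrative)."""
    return min(usage, setting)

# Under the setting: everything in use is protected.
assert protected(usage=50, setting=80) == 50
# Over the setting: still credit the full setting, not 0.
assert protected(usage=120, setting=80) == 80
```

The same clamp applies to both memory.min and memory.low, per the changelog above.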
On 8/1/18 4:55 AM, Cong Wang wrote:
> On Tue, Jul 31, 2018 at 10:13 AM wrote:
>>
>> Xunlei Pang writes:
>>
>>> On 7/31/18 1:55 AM, Cong Wang wrote:
>>>> On Sun, Jul 29, 2018 at 10:29 PM Xunlei Pang
>>>> wrote:
>>>>>
>>
On 7/31/18 1:55 AM, Cong Wang wrote:
> On Sun, Jul 29, 2018 at 10:29 PM Xunlei Pang wrote:
>>
>> Hi Cong,
>>
>> On 7/28/18 8:24 AM, Cong Wang wrote:
>>> Each time we sync cfs_rq->runtime_expires with cfs_b->runtime_expires,
>>> we shoul
em, as expires_seq will get synced in
assign_cfs_rq_runtime().
Thanks,
Xunlei
>
> Fixes: 512ac999d275 ("sched/fair: Fix bandwidth timer clock drift condition")
> Cc: Xunlei Pang
> Cc: Ben Segall
> Cc: Linus Torvalds
> Cc: Peter Zijlstra
> Cc: Thomas Gleixner
On 7/23/18 5:21 PM, Peter Zijlstra wrote:
> On Tue, Jul 17, 2018 at 12:08:36PM +0800, Xunlei Pang wrote:
>> The trace data corresponds to the last sample period:
>> trace entry 1:
>> cat-20755 [022] d... 1370.106496: cputime_adjust: task
>> tick-bas
On 7/17/18 1:41 AM, Ingo Molnar wrote:
>
> * Peter Zijlstra wrote:
>
>> On Sun, Jul 15, 2018 at 04:36:17PM -0700, tip-bot for Xunlei Pang wrote:
>>> Commit-ID: 8d4c00dc38a8aa30dae8402955e55e7b34e74bc8
>>> Gitweb:
>>> https://git.kernel.org/ti
Commit-ID: 8d4c00dc38a8aa30dae8402955e55e7b34e74bc8
Gitweb: https://git.kernel.org/tip/8d4c00dc38a8aa30dae8402955e55e7b34e74bc8
Author: Xunlei Pang
AuthorDate: Mon, 9 Jul 2018 22:58:43 +0800
Committer: Ingo Molnar
CommitDate: Mon, 16 Jul 2018 00:28:31 +0200
sched/cputime: Ensure
Hi Peter,
On 7/9/18 6:48 PM, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 01:52:38PM +0800, Xunlei Pang wrote:
>> Please see the enclosure for the reproducer cputime_adjust.tgz
>
> No, I'm not going to reverse engineer something if you cannot even
> explain what the problem
A new task_cputime type field is added in prev_cputime to record
previous task_cputime so that we can get the elapsed times as the accurate
ratio.
Signed-off-by: Xunlei Pang
---
v1->v2:
- Rewrite the changelog.
include/linux/sched.h | 34
include/linux/sched/cputi
Hi Peter,
On 7/5/18 9:21 PM, Xunlei Pang wrote:
> On 7/5/18 6:46 PM, Peter Zijlstra wrote:
>> On Wed, Jun 27, 2018 at 08:22:42PM +0800, Xunlei Pang wrote:
>>> tick-based whole utime is utime_0, tick-based whole stime
>>> is stime_0, scheduler time is rtime_0.
>>
On 7/5/18 6:46 PM, Peter Zijlstra wrote:
> On Wed, Jun 27, 2018 at 08:22:42PM +0800, Xunlei Pang wrote:
>> tick-based whole utime is utime_0, tick-based whole stime
>> is stime_0, scheduler time is rtime_0.
>
>> For a long time, the process runs mainly in userspace wi
On 7/2/18 11:21 PM, Tejun Heo wrote:
> Hello, Peter.
>
> On Tue, Jun 26, 2018 at 05:49:08PM +0200, Peter Zijlstra wrote:
>> Well, no, because the Changelog is incomprehensible and the patch
>> doesn't really have useful comments, so I'll have to reverse engineer
>> the entire thing, and I've just
Commit-ID: f1d1be8aee6c461652aea8f58bedebaa73d7f4d3
Gitweb: https://git.kernel.org/tip/f1d1be8aee6c461652aea8f58bedebaa73d7f4d3
Author: Xunlei Pang
AuthorDate: Wed, 20 Jun 2018 18:18:34 +0800
Committer: Ingo Molnar
CommitDate: Tue, 3 Jul 2018 09:17:29 +0200
sched/fair: Advance global
Commit-ID: 512ac999d2755d2b7109e996a76b6fb8b888631d
Gitweb: https://git.kernel.org/tip/512ac999d2755d2b7109e996a76b6fb8b888631d
Author: Xunlei Pang
AuthorDate: Wed, 20 Jun 2018 18:18:33 +0800
Committer: Ingo Molnar
CommitDate: Tue, 3 Jul 2018 09:17:29 +0200
sched/fair: Fix bandwidth
On 6/26/18 11:49 PM, Peter Zijlstra wrote:
> On Tue, Jun 26, 2018 at 08:19:49PM +0800, Xunlei Pang wrote:
>> On 6/22/18 3:15 PM, Xunlei Pang wrote:
>>> We use per-cgroup cpu usage statistics similar to "cgroup rstat",
>>> and encountered a problem that user
On 6/22/18 3:15 PM, Xunlei Pang wrote:
> We use per-cgroup cpu usage statistics similar to "cgroup rstat",
> and encountered a problem that user and sys usages are wrongly
> split sometimes.
>
> Run tasks with some random run-sleep pattern for a long time, and
> when t
please drop us a note to
> help improve the system]
>
> url:
> https://github.com/0day-ci/linux/commits/Xunlei-Pang/sched-cputime-Ensure-correct-utime-and-stime-proportion/20180622-153720
> reproduce: make htmldocs
>
> All warnings (new ones prefixed by >>):
>
>
since last parse in cputime_adjust(), and accumulate the
corresponding results calculated into prev_cputime. A new field
of task_cputime type is added in structure prev_cputime to record
previous task_cputime so that we can get the elapsed time deltas.
Signed-off-by: Xunlei Pang
---
include/linux/sched.h
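The split that cputime_adjust() performs can be sketched numerically (field names are illustrative; the kernel additionally guarantees monotonicity via prev_cputime, which this sketch omits): the precise scheduler rtime is divided in the ratio of the tick-based utime/stime samples, and this patch makes that ratio come from the deltas since the last call rather than lifetime totals:

```python
def adjust(rtime, tick_utime, tick_stime):
    """Split precise rtime in the tick-sampled utime:stime ratio."""
    total = tick_utime + tick_stime
    if total == 0:
        # No tick samples in the window: attribute everything to utime.
        return rtime, 0
    utime = rtime * tick_utime // total
    return utime, rtime - utime

# 3:1 tick samples split 1000 units of rtime as 750 user / 250 system.
u, s = adjust(rtime=1000, tick_utime=3, tick_stime=1)
assert (u, s) == (750, 250)
```

Using per-window deltas keeps the ratio reflecting recent behaviour, which is the "accurate proportion" this changelog is after.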
On 6/21/18 4:08 PM, Peter Zijlstra wrote:
> On Thu, Jun 21, 2018 at 11:56:56AM +0800, Xunlei Pang wrote:
>>>> Fixes: 51f2176d74ac ("sched/fair: Fix unlocked reads of some
>>>> cfs_b->quota/period")
>>>> Cc: Ben Segall
>>>
>>>
On 6/21/18 1:01 AM, bseg...@google.com wrote:
> Xunlei Pang writes:
>
>> I noticed the group constantly got throttled even though it consumed
>> low cpu usage; this caused some jitters in the response time
>> of some of our business containers enabling cpu quota.
>>
>
1 - 100 of 1061 matches