e=1344 group_shared=0
numa_faults node=1 task_private=641 task_shared=65 group_private=641 group_shared=65
numa_faults node=2 task_private=512 task_shared=0 group_private=512 group_shared=0
numa_faults node=3 task_private=64 task_shared=1 group_private=64 group_shared=1
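The per-node counters above are plain key=value pairs, so a post-processing script can pick them apart easily. A minimal Python sketch (the field layout is taken from the sample output above and may differ across kernel versions):

```python
def parse_numa_faults(text):
    """Parse 'numa_faults node=N k=v ...' lines into {node: {k: v}}."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0] != "numa_faults":
            continue
        fields = dict(p.split("=", 1) for p in parts[1:])
        node = int(fields.pop("node"))
        stats[node] = {k: int(v) for k, v in fields.items()}
    return stats

sample = "numa_faults node=3 task_private=64 task_shared=1 group_private=64 group_shared=1"
print(parse_numa_faults(sample))
```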
Srikar Dronamraju (3):
o have absolute numbers since
differential migrations between two accesses can be easily calculated.
Signed-off-by: Srikar Dronamraju
---
kernel/sched/debug.c | 38 +-
kernel/sched/fair.c | 22 +-
kernel/sched/sched.h | 10 +-
3
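Since the counters are absolute, the differential migrations between two reads reduce to a subtraction. A sketch of that post-processing (the counter names here are illustrative, not the exact fields the patch exports):

```python
def delta(before, after):
    """Differential stats between two absolute-counter snapshots."""
    return {k: after[k] - before[k] for k in before}

# Two hypothetical snapshots of absolute counters:
before = {"numa_pages_migrated": 1344, "numa_faults": 641}
after = {"numa_pages_migrated": 1600, "numa_faults": 700}
print(delta(before, after))  # {'numa_pages_migrated': 256, 'numa_faults': 59}
```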
Having the numa group id in /proc/sched_debug helps to see how the numa
groups have spread across the system.
Signed-off-by: Srikar Dronamraju
---
kernel/sched/debug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 315c68e
Currently print_cfs_rq() is declared in include/linux/sched.h.
However it's not used outside kernel/sched. Hence move the declaration to
kernel/sched/sched.h
Also some functions are only available for CONFIG_SCHED_DEBUG. Hence move
the declarations within #ifdef.
Signed-off-by: Srikar Dronamraju
Even reverting e1e455f on top of tip/master seems to avoid the problem.
The below patch fixes the problem.
--
Thanks and Regards
Srikar Dronamraju
>8
From 88199ad8a3d6495080eaa016b87a612bc742b1c4 Mon Sep 17 00:00:00 2001
From: Srikar Dronamraju
Date: Wed,
, total
1.173 nsecs/byte/thread runtime
0.853 GB/sec/thread speed
54.562 GB/sec total speed
the workload 3 times (doing only time measurement) and report
> the
> stddev in a human readable form.
>
Thanks again for this hint. Wouldn't system time / user time also matter?
I guess Mel once pointed out that it was important to make sure that
system time and user time don't increase when elapsed time decreases. But
I can't find that email though.
+ numa_has_capacity fix + Rik's modified patch.
(Rik's modified patch == I removed node_isset check before setting
nid as the preferred node)
--
Thanks and Regards
Srikar Dronamraju
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
Please read the FAQ at http://www.tux.org/lkml/
* Rik van Riel [2015-06-16 10:39:13]:
> On 06/16/2015 07:56 AM, Srikar Dronamraju wrote:
> > This is consistent with all other load balancing instances where we
> > absorb unfairness up to env->imbalance_pct. Absorbing unfairness up to
> > env->imbalance_pct all
e. In such a case, we
shouldn't be ruling out migrating the task to the node.
--
Thanks and Regards
Srikar Dronamraju
don't regress in
numa02 (probably a little less with modified patch)
--
Thanks and Regards
Srikar Dronamraju
)
--
Thanks and Regards
Srikar Dronamraju
Commit-ID: 82a0d2762699b95d6ce4114d00dc1865df9b0df3
Gitweb: http://git.kernel.org/tip/82a0d2762699b95d6ce4114d00dc1865df9b0df3
Author: Srikar Dronamraju
AuthorDate: Mon, 8 Jun 2015 13:40:41 +0530
Committer: Ingo Molnar
CommitDate: Fri, 19 Jun 2015 10:03:11 +0200
sched/debug: Add
Commit-ID: c5f3ab1c3b2e277cca6462415038dab02b4ad396
Gitweb: http://git.kernel.org/tip/c5f3ab1c3b2e277cca6462415038dab02b4ad396
Author: Srikar Dronamraju
AuthorDate: Mon, 8 Jun 2015 13:40:40 +0530
Committer: Ingo Molnar
CommitDate: Fri, 19 Jun 2015 10:03:11 +0200
sched/debug: Replace
Commit-ID: 33d6176eb12d1b0ae6d2f672b47367fd90726b91
Gitweb: http://git.kernel.org/tip/33d6176eb12d1b0ae6d2f672b47367fd90726b91
Author: Srikar Dronamraju
AuthorDate: Mon, 8 Jun 2015 13:40:39 +0530
Committer: Ingo Molnar
CommitDate: Fri, 19 Jun 2015 10:03:10 +0200
sched/debug: Properly
             Min        Max        Avg      StdDev  %Change
bopsperJVM: 266774.00 272434.00 269839.60 2083.19 +0.17%
So fix for numa_has_capacity and always setting preferred node based on
fault stats seems to help autonuma and specjbb.
--
Thanks and Regards
Srikar Dronamraju
gether.
> --
Okay, I will do the needful and come back to you.
--
Thanks and Regards
Srikar Dronamraju
t, on a two node system, both nodes end up in the numa_group's
> active_mask.
>
Just to add this was on a 4 node machine.
--
Thanks and Regards
Srikar Dronamraju
2155.02 1916.12 132.57 -4.38%
Testcase:       Min       Max       Avg       StdDev   %Change
total_numa01:   56940.50  77128.20  65935.92  7993.49   17.91%
total_numa02:   1834.97   2227.03   1980.76   136.51    -3.99%
--
Thanks and Regards
Srikar Dronamraju
* Rik van Riel [2015-06-16 11:00:18]:
> On 06/16/2015 07:56 AM, Srikar Dronamraju wrote:
> > In task_numa_migrate(), while evaluating other nodes for group
> > consolidation, env.dst_nid is used instead of using the iterator nid.
> > Using env.dst_nid would mean dist is always the same
sk from moving
> to its preferred nid at a later time (when the good
> nodes are no longer overloaded).
>
> Have you tested this patch with any workload that does
> not consist of tasks that are running at 100% cpu time
> for the duration of the test?
>
Okay, I will try to see if I can run some workloads and get back to you.
This is consistent with all other load balancing instances where we
absorb unfairness up to env->imbalance_pct. Absorbing unfairness up to
env->imbalance_pct allows us to pull and retain tasks on their preferred
nodes.
Signed-off-by: Srikar Dronamraju
---
kernel/sched/fair.c | 5 +++--
1 file c
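The imbalance_pct idea can be illustrated outside the kernel: a destination is treated as acceptable as long as its load does not exceed the source's by more than imbalance_pct percent. This is only a sketch of the comparison, not the actual numa_has_capacity() code:

```python
def has_capacity(src_load, dst_load, imbalance_pct=125):
    # Absorb unfairness up to imbalance_pct: a value of 125 allows the
    # destination load to exceed the source load by up to 25%.
    # Cross-multiplication keeps everything in integer arithmetic.
    return dst_load * 100 <= src_load * imbalance_pct

print(has_capacity(100, 120))  # True: 120 is within 125% of 100
print(has_capacity(100, 130))  # False: 130 exceeds 125% of 100
```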
the above, this commit merges migrate_degrades_locality() and
migrate_improves_locality(). It also replaces 3 sched features NUMA,
NUMA_FAVOUR_HIGHER and NUMA_RESIST_LOWER by a single sched feature NUMA.
Acked-by: Rik van Riel
Signed-off-by: Srikar Dronamraju
---
Changes from previous version:
- Rebased
e.
Signed-off-by: Srikar Dronamraju
---
kernel/sched/fair.c | 11 ++-
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7b23efa..d1aa374 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1503,7 +1503,7 @@
the iterator nid.
Also the task/group weights from the src_nid should be calculated
irrespective of numa topology type.
Signed-off-by: Srikar Dronamraju
---
kernel/sched/fair.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d1aa374
20 10
Srikar Dronamraju (4):
sched/tip:Prefer numa hotness over cache hotness
sched: Consider imbalance_pct when comparing loads in numa_has_capacity
sched: Fix task_numa_migrate to always update preferred node
sched: Use correct nid while evaluating task weights
kernel/sched/fair.c
the above, this commit merges migrate_degrades_locality() and
migrate_improves_locality(). It also replaces 3 sched features NUMA,
NUMA_FAVOUR_HIGHER and NUMA_RESIST_LOWER by a single sched feature NUMA.
Signed-off-by: Srikar Dronamraju
---
kernel/sched/fair.c | 96
: 31455 MB
node distances:
node 0 1 2 3
0: 10 20 40 40
1: 20 10 40 40
2: 40 40 10 20
3: 40 40 20 10
Srikar Dronamraju (1):
sched:Prefer numa hotness over cache hotness
kernel/sched/fair.c | 96 ++---
kernel/sched
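From the distance table above (10 = local, 20 = nearby pair, 40 = remote), the near neighbours of each node fall out directly; a hypothetical helper:

```python
# Node distance matrix copied from the topology dump above.
dist = [
    [10, 20, 40, 40],
    [20, 10, 40, 40],
    [40, 40, 10, 20],
    [40, 40, 20, 10],
]

def near_nodes(node, threshold=20):
    """Nodes within 'threshold' distance, including the node itself."""
    return [n for n, d in enumerate(dist[node]) if d <= threshold]

print(near_nodes(0))  # [0, 1]
print(near_nodes(3))  # [2, 3]
```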
Within runnable tasks in /proc/sched_debug, vruntime is printed twice,
once as tree-key and again as exec-runtime.
Since exec-runtime isn't populated in !CONFIG_SCHEDSTATS, use this field
to print wait_sum.
Signed-off-by: Srikar Dronamraju
---
kernel/sched/debug.c | 4 ++--
1 file changed, 2
120  0.012962  0.163522  17495.280072  0 /
kworker/31:2     712  10136.506903  158  120  5.358470  2.269444  81162.182416  0 /
kworker/31:1H   5904  10047.528224    6  100  0.028400  0.207288  20745.582406  0 /
Srikar
When CONFIG_SCHEDSTATS is enabled, /proc/<pid>/sched prints almost all
sched statistics except sum_sleep_runtime. Since sum_sleep_runtime is
useful information to collect, add it to /proc/<pid>/sched.
Signed-off-by: Srikar Dronamraju
---
kernel/sched/debug.c | 1 +
1 file changed, 1 insertion(+)
diff
With !CONFIG_SCHEDSTATS, runnable tasks in /proc/sched_debug show more
columns than required. Fix this by printing only the appropriate columns.
While at this, print sum_exec_runtime, since this information is
available even in !CONFIG_SCHEDSTATS case.
Signed-off-by: Srikar Dronamraju
---
kernel
* Peter Zijlstra [2015-06-08 09:06:40]:
> On Thu, May 28, 2015 at 03:56:16PM +0530, Srikar Dronamraju wrote:
> >
> > Srikar Dronamraju (3):
> > sched:Properly format runnable tasks in /proc/sched_debug
> > sched:Replace vruntime with wait_sum in /proc/
> At Facebook we have a pretty heavily multi-threaded application that is
> sensitive to latency. We have been pulling forward the old SD_WAKE_IDLE code
> because it gives us a pretty significant performance gain (like 20%). It
> turns
> out this is because there are cases where the scheduler
> +	if (src->load * dst->compute_capacity >
> +			dst->load * src->compute_capacity)
> +		return true;
> +
> +	return false;
> +}
>
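The quoted check compares src->load/src->compute_capacity against dst->load/dst->compute_capacity by cross-multiplying, which avoids integer division and its rounding. The same idea in a standalone sketch:

```python
def load_higher(src_load, src_cap, dst_load, dst_cap):
    # src_load/src_cap > dst_load/dst_cap, rewritten without division
    # (valid because compute capacities are positive).
    return src_load * dst_cap > dst_load * src_cap

print(load_higher(10, 4, 9, 4))   # True: 2.5 > 2.25
print(load_higher(5, 10, 6, 10))  # False: 0.5 < 0.6
```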
--
Thanks and Regards
Srikar Dronamraju
* Peter Zijlstra [2015-05-18 15:06:58]:
> On Mon, 2015-05-18 at 18:30 +0530, Srikar Dronamraju wrote:
> > >
> > > static void account_numa_dequeue(struct rq *rq, struct task_struct *p)
> > > {
> > > +	if (p->nr_cpus_allowed == 1) {
> + * - movable: there are (migratable) tasks
> + * - all: there are tasks
> + *
> + * In order to avoid migrating ideally placed numa tasks,
> + * ignore those when there's better options.
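The quoted comment distinguishes all runnable tasks from the movable subset, and the account_numa_dequeue() fragment above counts a task as pinned when nr_cpus_allowed == 1. A toy sketch of that bookkeeping (field and counter names are illustrative, not the kernel's):

```python
# Hypothetical runqueue snapshot: each task records how many CPUs it may run on.
tasks = [{"nr_cpus_allowed": 1}, {"nr_cpus_allowed": 8}, {"nr_cpus_allowed": 1}]

nr_running = len(tasks)
# A task pinned to a single CPU cannot be migrated by the balancer.
nr_pinned = sum(1 for t in tasks if t["nr_cpus_allowed"] == 1)
nr_movable = nr_running - nr_pinned

print(nr_running, nr_pinned, nr_movable)  # 3 2 1
```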
--
Thanks and Regards
Srikar Dronamraju
its numa faults
but then is pinned to a different cpu, then we can see this warning.
--
Thanks and Regards
Srikar Dronamraju
we can always cleanup
> this later, this is trivial. Right now I'd like to ensure that if the
> same or similar logic can work on powerpc, it only needs to touch the
> code in arch/powerpc.
>
> Oleg.
>
--
Thanks and Regards
Srikar Dronamraju
even seem to use this assumption when kprobe_tracer/uprobe_tracer
fetch arguments from stack. See fetch_kernel_stack_address() /
fetch_user_stack_address() and get_user_stack_nth().
--
Thanks and Regards
Srikar Dronamraju
() can be false positive,
> the stack can grow after longjmp(). Unfortunately, the kernel can't
> 100% solve this problem, but see the next patch.
>
> Signed-off-by: Oleg Nesterov
> ---
> kernel/events/uprobes.c | 13 +++++
Looks good to me.
Acked-by: Srikar Dronamraju
(regs) <= sp;
}
Am I missing something?
>
--
Thanks and Regards
Srikar Dronamraju
e()
> can be fooled by sigaltstack/etc.
>
> Signed-off-by: Oleg Nesterov
> ---
Looks good to me.
Acked-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
know that the probed func
> has already returned.
>
> TODO: this assumes that the probed app can't use multiple stacks (say
> sigaltstack). We will try to improve this logic later.
>
> Signed-off-by: Oleg Nesterov
Looks good to me.
Acked-by: Srikar Dronamraju
>
rov
> ---
Looks good to me.
Acked-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
erov
Looks good to me.
Acked-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
esterov
Looks good to me.
Acked-by: Srikar Dronamraju
> ---
--
Thanks and Regards
Srikar Dronamraju
> It is pointless to return to user mode
> with the corrupted instruction_pointer() which we can't restore.
I agree.
>
> Signed-off-by: Oleg Nesterov
> ---
Acked-by: Srikar Dronamraju
--
Thanks and Regards
Srikar Dronamraju
* Oleg Nesterov [2015-05-04 14:48:54]:
> We can simplify uprobe_free_utask() and handle_uretprobe_chain()
> if we add a simple helper which does put_uprobe/kfree and returns
> the ->next return_instance.
>
> Signed-off-by: Oleg Nesterov
> ---
Looks good to me.
Acked-by: Srikar Dronamraju
* Oleg Nesterov [2015-05-04 14:48:50]:
> Cosmetic. Add the new trivial helper, get_uprobe(). It matches
> put_uprobe() we already have and we can simplify a couple of its
> users.
>
> Signed-off-by: Oleg Nesterov
Looks good to me.
Acked-by: Srikar Dronamraju