On 04/09/2014 06:44 PM, Luck, Tony wrote:
So when the driver sees uncorrected errors, I'm also seeing them in my
memory scanning program - so they correspond nicely. I didn't see anything
logged in /var/log/mcelog, but I will update to the latest when possible.
I wonder if there are some BIOS
On 06/29/2013 03:20 AM, Ingo Molnar wrote:
* jba...@akamai.com jba...@akamai.com wrote:
Hi,
As pointed out by Andi Kleen, some static key users can be racy because they
check the value of key->enabled and then subsequently update the branch
direction. A number of call sites have 'higher'
that led to this. Each change log should be sufficient to stand
on its own.
Explain why this patch is needed. And it's not about the use of a
simpler API.
It actually fixes a real bug.
Signed-off-by: Jason Baron jba...@akamai.com
---
net/ipv4/udp.c |9 -
net/ipv6/udp.c |9
some bounces.
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -4,8 +4,8 @@
/*
* Jump label support
*
- * Copyright (C) 2009-2012 Jason Baron jba...@redhat.com
- * Copyright (C) 2011-2012 Peter Zijlstra pzijl...@redhat.com
+ * Copyright (C) 2009-2012 Red Hat, Inc
If 'arat' is set in the cpuflags, we can avoid the checks for entering/exiting
the tick broadcast code entirely. It would seem that this is a hot enough code
path to make this worthwhile. I ran a few hackbench runs, and consistently see
reduced branches and cycles.
Signed-off-by: Jason Baron jba
On 06/23/2014 10:28 PM, Steven Rostedt wrote:
Cleaning out my INBOX I found this patch series. It seems to have been
forgotten about. It ended up with Ingo and Peter agreeing with the way
things should be done and I thought Jason was going to send an update.
But that seems to never have
Convert to the generic API.
Signed-off-by: Jason Baron jba...@akamai.com
---
drivers/edac/x38_edac.c | 15 ++-
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/drivers/edac/x38_edac.c b/drivers/edac/x38_edac.c
index 4891b45..e644b52 100644
--- a/drivers/edac/x38_edac.c
/processors/xeon/xeon-e3-1200-family-vol-2-datasheet.html
p. 16)
I can confirm this is true via several hard machine lockups.
Thus, add explicit hi_lo_[read|write]q and lo_hi_[read|write]q so that these
uses are spelled out.
Signed-off-by: Jason Baron jba...@akamai.com
---
include/asm-generic/io-64
(sandy bridge)
Signed-off-by: Jason Baron jba...@akamai.com
---
drivers/edac/Kconfig| 7 +
drivers/edac/Makefile | 1 +
drivers/edac/ie31200_edac.c | 542
3 files changed, 550 insertions(+)
create mode 100644 drivers/edac/ie31200_edac.c
)
CPU E3-1270 V2 @ 3.50GHz : 8086:0158 (ivy bridge)
CPU E31270 @ 3.40GHz : 8086:0108 (sandy bridge)
Posted before as: https://lkml.org/lkml/2014/4/4/462
Patch against 3.16.0-rc2
Thanks,
-Jason
Changes since v1:
- Cleanups and formatting based on previous feedback
Jason Baron (3):
readq/writeq
call sites a bit.
Users of static keys should use either the inc/dec or the set_true/set_false
API.
Thanks,
-Jason
Jason Baron (3):
static_keys: Add a static_key_slow_set_true()/false() interface
sched: fix static keys race in sched_feat
udp: make use of static_key_slow_set_true
.
Take the i_mutex around these calls to resolve the race.
Reported-by: Andi Kleen a...@firstfloor.org
Signed-off-by: Jason Baron jba...@akamai.com
---
kernel/sched/core.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3bdf01b..249c865 100644
Jason Baron (2):
sched: remove extra static_key function indirection
sched: fix static_key race with sched_feat
kernel/sched/core.c | 5 +
kernel/sched/sched.h | 12 +---
2 files changed, 6 insertions(+), 11 deletions(-)
--
1.8.2.rc2
--
To unsubscribe from this list: send
I think it's a bit simpler without having to follow an extra layer of static
inline functions. No functional change, just cosmetic.
Signed-off-by: Jason Baron jba...@akamai.com
---
kernel/sched/sched.h | 12 +---
1 file changed, 1 insertion(+), 11 deletions(-)
diff --git a/kernel/sched
On 07/28/2014 04:38 PM, Len Brown wrote:
On Fri, Jul 11, 2014 at 1:54 PM, Jason Baron jba...@akamai.com wrote:
If 'arat' is set in the cpuflags, we can avoid the checks for
entering/exiting
the tick broadcast code entirely. It would seem that this is a hot enough
code
path to make
Yes, that looks good. While at it I grep'd the tree for
'CONFIG_JUMP_LABEL', and found some uses in the
netfilter code which should probably be
'HAVE_JUMP_LABEL' as well.
Thanks,
-Jason
On 08/20/2014 05:29 AM, Zhouyi Zhou wrote:
jump_label_ratelimit.h is split from jump_label.h to enable the
On 08/20/2014 10:58 PM, Zhouyi Zhou wrote:
I have submitted two patches according to you suggestions:
https://lkml.org/lkml/2014/8/20/885
https://lkml.org/lkml/2014/8/20/883
Hope I have made them right
Thanks - they look good to me.
-Jason
if the h/w simply doesn't support that, or if the
driver could be improved here. In any case, adding the 0x3f makes
the driver useful for me. Tested against cpu:
Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
Signed-off-by: Jason Baron jba...@akamai.com
---
drivers/powercap/intel_rapl.c | 1 +
1 file
On 08/13/2014 06:50 PM, Jacob Pan wrote:
On Wed, 13 Aug 2014 20:33:26 + (GMT)
Jason Baron jba...@akamai.com wrote:
I've confirmed that monitoring the package power usage as well
as setting power limits appear to be working as expected. However,
I do see in the logs:
[5.082632
On 08/10/2014 12:07 PM, Borislav Petkov wrote:
On Sun, Aug 10, 2014 at 05:45:15PM +0200, Ingo Molnar wrote:
Indeed - but could we use that interface to cleanly expose the
arch_static_branch_active() code you've written, or do we need new
variants?
We could probably.
The thing is, if we want
On 10/15/2014 11:34 AM, Kees Cook wrote:
On Wed, Oct 15, 2014 at 5:21 AM, Paolo Pisati p.pis...@gmail.com wrote:
Hi,
i keep hitting this with BRIDGE=m, JUMP_LABEL=y and DEBUG_SET_MODULE_RONX=y:
I think my RO/NX patch series solves this. I sent a pull request, but
I haven't seen any movement
On 08/29/2014 12:06 PM, Rob Jones wrote:
Using seq_open_private() removes boilerplate code from ddebug_proc_open()
Looks good.
Acked-by: Jason Baron jba...@akamai.com
On 09/24/2014 02:17 PM, Joe Perches wrote:
The return value is not used by callers of these functions
so change the functions to return void.
Acked-by: Jason Baron jba...@akamai.com
Greg - can you pick this up?
Thanks,
-Jason
Hi Prarit,
On 10/24/2014 08:53 AM, Prarit Bhargava wrote:
There have been several times where I have had to rebuild a kernel to
cause a panic when hitting a WARN() in the code in order to get a crash
dump from a system. Sometimes this is easy to do, other times (such as
in the case of a
On 10/28/2014 08:31 AM, Prarit Bhargava wrote:
There have been several times where I have had to rebuild a kernel to
cause a panic when hitting a WARN() in the code in order to get a crash
dump from a system. Sometimes this is easy to do, other times (such as
in the case of a remote admin) it
Fam Zheng: http://lwn.net/Articles/628828/ since these
patches are somewhat invasive.
Thanks,
-Jason
Jason Baron (5):
epoll: Remove ep_call_nested() from ep_eventpoll_poll()
epoll: Remove ep_call_nested() from ep_poll_safewake()
epoll: Add ep_call_nested_nolock()
epoll: Allow topology
of a global spin_lock()
that was in use in ep_call_nested().
Signed-off-by: Jason Baron jba...@akamai.com
---
fs/eventpoll.c | 59 +++---
1 file changed, 23 insertions(+), 36 deletions(-)
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index d77f944
of per-instance data structures, such that these topology checks can safely
be made in parallel. In addition, we make use of the ep_call_nested_nolock(),
which avoids the nested_calls->lock. This is safe here, since we are already
holding the 'epmutex' across all of these calls.
Signed-off-by: Jason
On 01/15/2015 06:10 PM, Eric Wong wrote:
Jason Baron jba...@akamai.com wrote:
I've done a bit of performance evaluation on a dual socket, 10 core, hyper
threading enabled box: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz. For the
simple epfdN-epfdN-pipefdN topology case where each thread has its
CONFIG_DEBUG_LOCK_ALLOC is generally not used
for production, so this should provide a decent speedup for the cases that
we care about.
Signed-off-by: Jason Baron jba...@akamai.com
---
fs/eventpoll.c | 47 ++-
1 file changed, 18 insertions(+), 29 deletions(-)
diff --git a/fs
Add an ep_call_nested_nolock() variant which functions the same as
the current ep_call_nested(), except it does not acquire any locks. This
call will be used by subsequent patches which provide their own
'external' locking.
Signed-off-by: Jason Baron jba...@akamai.com
---
fs/eventpoll.c | 33
.
Signed-off-by: Jason Baron jba...@akamai.com
---
fs/eventpoll.c| 325 ++
include/linux/eventpoll.h | 52 +---
include/linux/fs.h| 3 +
3 files changed, 311 insertions(+), 69 deletions(-)
diff --git a/fs/eventpoll.c b/fs
On 02/10/2015 04:03 AM, Peter Zijlstra wrote:
On Mon, Feb 09, 2015 at 11:06:17PM -0500, Jason Baron wrote:
On 02/09/2015 04:50 PM, Peter Zijlstra wrote:
On Mon, Feb 09, 2015 at 08:05:57PM +, Jason Baron wrote:
diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index 852143a..17d1039
On 02/17/2015 04:09 PM, Andy Lutomirski wrote:
On Tue, Feb 17, 2015 at 12:33 PM, Jason Baron jba...@akamai.com wrote:
On 02/17/2015 02:46 PM, Andy Lutomirski wrote:
On Tue, Feb 17, 2015 at 11:33 AM, Jason Baron jba...@akamai.com wrote:
When we are sharing a wakeup source among multiple epoll
On 02/18/2015 11:33 AM, Ingo Molnar wrote:
* Jason Baron jba...@akamai.com wrote:
This has two main advantages: firstly it solves the
O(N) (micro-)problem, but it also more evenly
distributes events both between task-lists and within
epoll groups as tasks as well.
It's solving 2 issues
this is not desirable.
The WQ_FLAG_ROUND_ROBIN is restricted to being exclusive as well, otherwise we
do not know who is being woken up.
Signed-off-by: Jason Baron jba...@akamai.com
---
include/linux/wait.h | 11 +++
kernel/sched/wait.c | 10 --
2 files changed, 19 insertions(+), 2 deletions
this additional
heuristic in order to minimize wakeup latencies.
Signed-off-by: Jason Baron jba...@akamai.com
---
fs/eventpoll.c | 25 -
include/uapi/linux/eventpoll.h | 6 ++
2 files changed, 26 insertions(+), 5 deletions(-)
diff --git a/fs/eventpoll.c b/fs
On 02/17/2015 02:46 PM, Andy Lutomirski wrote:
On Tue, Feb 17, 2015 at 11:33 AM, Jason Baron jba...@akamai.com wrote:
When we are sharing a wakeup source among multiple epoll fds, we end up with
thundering herd wakeups, since there is currently no way to add to the
wakeup source exclusively
epoll
fds to a shared wakeup source. Depends on EPOLLEXCLUSIVE being set and
must be specified with an EPOLL_CTL_ADD operation.
Thanks,
-Jason
Jason Baron (2):
sched/wait: add round robin wakeup mode
epoll: introduce EPOLLEXCLUSIVE and EPOLLROUNDROBIN
fs/eventpoll.c
On 02/18/2015 03:07 AM, Ingo Molnar wrote:
* Jason Baron jba...@akamai.com wrote:
Epoll file descriptors that are added to a shared wakeup
source are always added in a non-exclusive manner. That
means that when we have multiple epoll fds attached to a
shared wakeup source they are all
to KBUILD_AFLAGS.
Signed-off-by: Anton Blanchard an...@samba.org
(adding Steven)
Acked-by: Jason Baron jba...@akamai.com
The jump label stuff often gets picked up by Steve, but I guess this
could go through the powerpc tree as well...
Thanks,
-Jason
);
printf("nohit is: %d\n", nohit);
}
Jason Baron (2):
sched/wait: add round robin wakeup mode
epoll: introduce EPOLLEXCLUSIVE and EPOLLROUNDROBIN
fs/eventpoll.c | 25 -
include/linux/wait.h | 11 +++
include/uapi/linux/eventpoll.h | 6
this is not desirable.
The WQ_FLAG_ROUND_ROBIN is restricted to being exclusive as well, otherwise we
do not know who is being woken up.
Signed-off-by: Jason Baron jba...@akamai.com
---
include/linux/wait.h | 11 +++
kernel/sched/wait.c | 5 -
2 files changed, 15 insertions(+), 1 deletion
On 02/09/2015 04:50 PM, Peter Zijlstra wrote:
On Mon, Feb 09, 2015 at 08:05:57PM +, Jason Baron wrote:
diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index 852143a..17d1039 100644
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -71,8 +71,11 @@ static void __wake_up_common
On 02/09/2015 11:49 PM, Eric Wong wrote:
Jason Baron jba...@akamai.com wrote:
On 02/09/2015 05:45 PM, Andy Lutomirski wrote:
On Mon, Feb 9, 2015 at 1:32 PM, Jason Baron jba...@akamai.com wrote:
On 02/09/2015 03:18 PM, Andy Lutomirski wrote:
On 02/09/2015 12:06 PM, Jason Baron wrote:
Epoll
On 02/09/2015 03:18 PM, Andy Lutomirski wrote:
On 02/09/2015 12:06 PM, Jason Baron wrote:
Epoll file descriptors that are added to a shared wakeup source are always
added in a non-exclusive manner. That means that when we have multiple epoll
fds attached to a shared wakeup source they are all
On 02/09/2015 05:45 PM, Andy Lutomirski wrote:
On Mon, Feb 9, 2015 at 1:32 PM, Jason Baron jba...@akamai.com wrote:
On 02/09/2015 03:18 PM, Andy Lutomirski wrote:
On 02/09/2015 12:06 PM, Jason Baron wrote:
Epoll file descriptors that are added to a shared wakeup source are always
added
On 01/07/2015 05:35 AM, Anton Blanchard wrote:
Commit 1bc9e47aa8e4 (powerpc/jump_label: Use HAVE_JUMP_LABEL)
converted uses of CONFIG_JUMP_LABEL to HAVE_JUMP_LABEL in
some assembly files.
HAVE_JUMP_LABEL is defined in linux/jump_label.h, so we need to
include this or we always get the non
On 02/18/2015 12:51 PM, Ingo Molnar wrote:
* Ingo Molnar mi...@kernel.org wrote:
[...] However, I think the userspace API change is less
clear since epoll_wait() doesn't currently have an
'input' events argument as epoll_ctl() does.
... but the change would be a bit clearer and somewhat
On 03/05/2015 04:15 AM, Ingo Molnar wrote:
* Jason Baron jba...@akamai.com wrote:
2) We are using the wakeup in this case to 'assign' work more
permanently to the thread. That is, in the case of a listen socket
we then add the connected socket to the woken up threads local set
of epoll
On 03/09/2015 09:49 PM, Fam Zheng wrote:
Benchmark for epoll_pwait1
==
By running fio tests inside VM with both original and modified QEMU, we can
compare their difference in performance.
With a small VM setup [t1], the original QEMU (ppoll based) has an 4k read
On 03/13/2015 07:31 AM, Fam Zheng wrote:
On Thu, 03/12 11:02, Jason Baron wrote:
On 03/09/2015 09:49 PM, Fam Zheng wrote:
Hi,
So it sounds like you are comparing original qemu code (which was using
ppoll) vs. using epoll with these new syscalls. Curious if you have numbers
comparing
On 02/27/2015 04:10 PM, Andrew Morton wrote:
On Wed, 25 Feb 2015 11:27:04 -0500 Jason Baron jba...@akamai.com wrote:
With Davide Libenzi inactive, eventpoll appears to be without a
dedicated maintainer since 2011 or so. Is there anyone who
knows the code and its usages in detail and does final ABI
Hi,
v3 of this series implements this idea using a different
approach:
http://lkml.iu.edu/hypermail/linux/kernel/1502.3/00667.html
If that still meets your needs it would be helpful to know in
order to move this forward.
Looking back at your posting, I was concerned about the
test case
On 03/04/2015 07:02 PM, Ingo Molnar wrote:
* Andrew Morton a...@linux-foundation.org wrote:
On Fri, 27 Feb 2015 17:01:32 -0500 Jason Baron jba...@akamai.com wrote:
I don't really understand the need for rotation/round-robin. We can
solve the thundering herd via exclusive wakeups, but what
On 02/27/2015 04:31 PM, Jonathan Corbet wrote:
On Fri, 27 Feb 2015 13:10:34 -0800
Andrew Morton a...@linux-foundation.org wrote:
I don't really understand the need for rotation/round-robin. We can
solve the thundering herd via exclusive wakeups, but what is the point
in choosing to wake the
epoll optimization for overflow list
Jason Baron (3):
sched/wait: add __wake_up_rotate()
epoll: limit wakeups to the overflow list
epoll: Add EPOLL_ROTATE mode
fs/eventpoll.c | 52 +++---
include/linux/wait.h | 1 +
include/uapi
results in the same thread being woken up again and again. The first intended user of
this functionality is epoll.
Signed-off-by: Jason Baron jba...@akamai.com
---
include/linux/wait.h | 1 +
kernel/sched/wait.c | 27 +++
2 files changed, 28 insertions(+)
diff --git a/include/linux
.
Signed-off-by: Jason Baron jba...@akamai.com
---
fs/eventpoll.c | 41 +++--
include/uapi/linux/eventpoll.h | 4
2 files changed, 39 insertions(+), 6 deletions(-)
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index da84712..a8b06a1 100644
). However, we wish to
add policies that are stateful (for example rotating wakeups among epoll
sets), and these unnecessary wakeups cause unwanted transitions.
Signed-off-by: Jason Baron jba...@akamai.com
---
fs/eventpoll.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git
On 02/21/2015 07:24 PM, Eric Wong wrote:
Jason Baron jba...@akamai.com wrote:
On 02/18/2015 12:51 PM, Ingo Molnar wrote:
* Ingo Molnar mi...@kernel.org wrote:
[...] However, I think the userspace API change is less
clear since epoll_wait() doesn't currently have an
'input' events argument
On 02/25/2015 02:38 AM, Ingo Molnar wrote:
* Jason Baron jba...@akamai.com wrote:
Hi,
When we are sharing a wakeup source among multiple epoll
fds, we end up with thundering herd wakeups, since there
is currently no way to add to the wakeup source
exclusively. This series introduces
Hi,
Creating a synflood via: 'hping3 -V -S -M 0 -p 80 --flood -y ip addr', when
I have:
CONFIG_NO_HZ_IDLE=y or
CONFIG_NO_HZ_FULL=y set, and
either:
CONFIG_VIRT_CPU_ACCOUNTING_GEN=y or
CONFIG_TICK_CPU_ACCOUNTING=y
shows almost 0% cpu usage via top (or /proc/stat), whereas if I have turbostat
.
Jason Baron was working on this (search LKML archives for
EPOLLEXCLUSIVE, EPOLLROUNDROBIN, EPOLL_ROTATE)
However, I was unconvinced about modifying epoll.
Perhaps I may be more easily convinced about your mqueue case than his
case for listen sockets, though[*]
Yeah, so I implemented
On 08/05/2015 07:06 AM, Madars Vitolins wrote:
Jason Baron @ 2015-08-04 18:02 wrote:
On 08/03/2015 07:48 PM, Eric Wong wrote:
Madars Vitolins m...@silodev.com wrote:
Hi Folks,
I am developing kind of open systems application, which uses
multiple processes/executables where each of them
/lib/test_jump_label.c
@@ -0,0 +1,225 @@
+/*
+ * Kernel module for testing jump labels.
+ *
+ * Copyright 2015 Akamai Technologies Inc. All Rights Reserved
+ *
+ * Authors:
+ * Jason Baron jba...@akamai.com
+ *
+ * This software is licensed under the terms of the GNU General Public
On 07/22/2015 12:24 AM, Borislav Petkov wrote:
On Tue, Jul 21, 2015 at 02:50:25PM -0400, Jason Baron wrote:
hmmm... so this is a case where we need to default the branch
to the out-of-line branch at boot. That is, we can't just enable
the out-of-line branch at boot time, b/c it might be too
On 07/21/2015 12:12 PM, Peter Zijlstra wrote:
On Tue, Jul 21, 2015 at 08:51:51AM -0700, Andy Lutomirski wrote:
To clarify my (mis-)understanding:
There are two degrees of freedom in a static_key. They can start out
true or false, and they can be unlikely or likely. Are those two
degrees
On 07/21/2015 02:15 PM, Borislav Petkov wrote:
On Tue, Jul 21, 2015 at 06:12:15PM +0200, Peter Zijlstra wrote:
Yes, if you start out false, you must be unlikely. If you start out
true, you must be likely.
We could maybe try and untangle that if there really is a good use case,
but this is
On 07/24/2015 08:36 AM, Peter Zijlstra wrote:
On Fri, Jul 24, 2015 at 12:56:02PM +0200, Peter Zijlstra wrote:
On Thu, Jul 23, 2015 at 03:14:13PM -0400, Jason Baron wrote:
+static enum jump_label_type jump_label_type(struct jump_entry *entry)
+{
+ struct static_key *key = static_key_cast
On 07/23/2015 06:42 AM, Peter Zijlstra wrote:
On Wed, Jul 22, 2015 at 01:06:44PM -0400, Jason Baron wrote:
Ok,
So we could add all 4 possible initial states, where the
branches would be:
static_likely_init_true_branch(struct static_likely_init_true_key *key)
static_likely_init_false_branch
On 07/23/2015 10:49 AM, Peter Zijlstra wrote:
On Thu, Jul 23, 2015 at 04:33:08PM +0200, Peter Zijlstra wrote:
Lemme finish this and I'll post it.
Compile tested on x86_64 only..
Please have a look, I think you said I got some of the logic wrong, I've
not double checked that.
I'll go write
On 07/23/2015 01:08 PM, Peter Zijlstra wrote:
On Thu, Jul 23, 2015 at 11:34:50AM -0400, Steven Rostedt wrote:
On Thu, 23 Jul 2015 12:42:15 +0200
Peter Zijlstra pet...@infradead.org wrote:
static __always_inline bool arch_static_branch_jump(struct static_key *key,
bool inv)
{
if (!inv)
On 10/27/2015 03:40 AM, Peter Chen wrote:
> The parse_args will delete space between boot parameters, so
> if we add dyndbg="file drivers/usb/* +p" at bootargs, the parse_args
> will split it as three parameters, and only "file" is for dyndbg,
> then below error will occur at ddebug, it causes all
On 10/28/2015 12:46 PM, Rainer Weikusat wrote:
> Rainer Weikusat <r...@doppelsaurus.mobileactivedefense.com> writes:
>> Jason Baron <jba...@akamai.com> writes:
>
> [...]
>
>>> 2)
>>>
>>> For the case of epoll() in edge triggered mo
Hi Rainer,
> +
> +/* Needs sk unix state lock. After recv_ready indicated not ready,
> + * establish peer_wait connection if still needed.
> + */
> +static int unix_dgram_peer_wake_me(struct sock *sk, struct sock *other)
> +{
> + int connected;
> +
> + connected =
On 11/13/2015 01:51 PM, Rainer Weikusat wrote:
[...]
>
> - if (unix_peer(other) != sk && unix_recvq_full(other)) {
> - if (!timeo) {
> - err = -EAGAIN;
> - goto out_unlock;
> - }
> + if (unix_peer(sk) == other &&
corresponding peer_wait queue. There's no way to forcibly deregister a
> wait queue with epoll.
>
> Based on an idea by Jason Baron, the patch below changes the code such
> that a wait_queue_t belonging to the client socket is enqueued on the
> peer_wait queue of the server wheneve
On 11/15/2015 01:32 PM, Rainer Weikusat wrote:
>
> That was my original idea. The problem with this is that the code
> starting after the _lock and running until the main code path unlock has
> to be executed in one go with the other lock held as the results of the
> tests above this one may
On 11/06/2015 08:06 AM, Dmitry Vyukov wrote:
> On Mon, Oct 12, 2015 at 2:17 PM, Dmitry Vyukov wrote:
>> On Mon, Oct 12, 2015 at 2:14 PM, Eric Dumazet wrote:
>>> On Mon, 2015-10-12 at 14:02 +0200, Michal Kubecek wrote:
>>>
Probably the issue
On 10/18/2015 04:58 PM, Rainer Weikusat wrote:
[...]
>
> The idea behind 'the wait queue' (insofar I'm aware of it) is that it
> will be used as list of threads who need to be notified when the
> associated event occurs. Since you seem to argue that the run-of-the-mill
> algorithm is too slow
On 10/12/2015 04:41 PM, Rainer Weikusat wrote:
> Jason Baron <jba...@akamai.com> writes:
>> On 10/05/2015 12:31 PM, Rainer Weikusat wrote:
>
> [...]
>
>>> Here's a more simple idea which _might_ work. The underlying problem
>>> seems to be that the
Now that connect() permanently registers a callback routine, we can induce
extra overhead in unix_dgram_recvmsg(), which unconditionally wakes up
its peer_wait queue on every receive. This patch makes the wakeup there
conditional on there being waiters.
Signed-off-by: Jason Baron <
comments in 3/3 (Peter Zijlstra)
-clean up unix_dgram_writable() function in 3/3 (Joe Perches)
Jason Baron (3):
net: unix: fix use-after-free in unix_dgram_poll()
net: unix: Convert gc_flags to flags
net: unix: optimize wakeups in unix_dgram_recvmsg()
include/net/af_unix.h | 4 +-
net/unix
ooglemail.com>
Signed-off-by: Jason Baron <jba...@akamai.com>
---
include/net/af_unix.h | 1 +
net/unix/af_unix.c| 32 +++-
2 files changed, 32 insertions(+), 1 deletion(-)
diff --git a/include/net/af_unix.h b/include/net/af_unix.h
index 4a167b3..9698aff 10
Convert gc_flags to flags in preparation for the subsequent patch, which will
make use of a flag bit for a non-gc purpose.
Signed-off-by: Jason Baron <jba...@akamai.com>
---
include/net/af_unix.h | 2 +-
net/unix/garbage.c| 12 ++--
2 files changed, 7 insertions(+), 7 del
>
> X-Signed-Off-By: Rainer Weikusat
>
Hi,
So the patches I've posted and yours both use the idea of relaying
the remote peer wakeup via callbacks that are internal to the net/unix,
such that we avoid exposing the remote peer wakeup to the external
if the peer socket has receive space
v3:
-beef up memory barrier comments in 3/3 (Peter Zijlstra)
-clean up unix_dgram_writable() function in 3/3 (Joe Perches)
Jason Baron (3):
net: unix: fix use-after-free in unix_dgram_poll()
net: unix: Convert gc_flags to flags
net: unix: optimize wakeups
/netdev/msg145533.html
Signed-off-by: Jason Baron <jba...@akamai.com>
---
include/net/af_unix.h | 1 +
net/unix/af_unix.c| 92 +--
2 files changed, 69 insertions(+), 24 deletions(-)
diff --git a/include/net/af_unix.h b/include/net/af_
On 10/11/2015 07:55 AM, David Miller wrote:
> From: Jason Baron <jba...@akamai.com>
> Date: Fri, 9 Oct 2015 00:15:59 -0400
>
>> These patches are against mainline, I can re-base to net-next, please
>> let me know.
>>
>> They have been tested against: https:
On 10/09/2015 10:38 AM, Hannes Frederic Sowa wrote:
> Hi,
>
> Jason Baron <jba...@akamai.com> writes:
>
>> The unix_dgram_poll() routine calls sock_poll_wait() not only for the wait
>> queue associated with the socket s that we are poll'ing against, but al
On 10/05/2015 12:31 PM, Rainer Weikusat wrote:
> Jason Baron <jba...@akamai.com> writes:
>> The unix_dgram_poll() routine calls sock_poll_wait() not only for the wait
>> queue associated with the socket s that we are poll'ing against, but also
>> calls
>> sock_pol
gisters a callback routine, we can induce
extra overhead in unix_dgram_recvmsg(), which unconditionally wakes up
its peer_wait queue on every receive. This patch makes the wakeup there
conditional on there being waiters.
Tested using: http://www.spinics.net/lists/netdev/msg145533.html
Signed-off-b
On 07/08/2015 01:37 PM, Andy Lutomirski wrote:
On Wed, Jul 8, 2015 at 9:07 AM, Peter Zijlstra pet...@infradead.org wrote:
On Wed, Jul 08, 2015 at 11:17:38AM -0400, Mikulas Patocka wrote:
Hi
I found out that the patch a66734297f78707ce39d756b656bfae861d53f62 breaks
the kernel on processors
On 07/10/2015 10:13 AM, Peter Zijlstra wrote:
On Wed, Jul 08, 2015 at 05:36:43PM -0700, Andy Lutomirski wrote:
In what universe is static_key_false a reasonable name for a
function that returns true if a static key is true?
I think the current naming is almost maximally bad. The naming would