it
comes to kernel threads.
One can always use force_sig() or allow_signal() + send_sig() when
it is really needed, like cifs does.
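For reference, the cifs pattern mentioned above can be sketched roughly like this (kernel-side sketch, not compilable on its own; the thread body is illustrative, only allow_signal()/signal_pending() are the point):

```c
/* Sketch of a kthread that opts in to a signal, roughly the cifs
 * (cifsd) pattern: kernel threads ignore signals by default, so the
 * thread must explicitly allow_signal() before send_sig() from
 * outside can have any effect (force_sig() works regardless). */
static int my_kthread(void *data)
{
	allow_signal(SIGKILL);		/* opt in: default is ignored */

	while (!kthread_should_stop()) {
		if (signal_pending(current)) {
			flush_signals(current);
			break;		/* asked to die early */
		}
		/* ... do work, sleep interruptibly ... */
	}
	return 0;
}
```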
On 6/26/07, Oleg Nesterov [EMAIL PROTECTED] wrote:
Personally, I don't think we should do this.
kthread_stop() doesn't always mean kill this thread asap
On 07/02, Jarek Poplawski wrote:
diff -Nurp 2.6.22-rc7-/net/core/netpoll.c 2.6.22-rc7/net/core/netpoll.c
--- 2.6.22-rc7-/net/core/netpoll.c2007-07-02 09:03:27.0 +0200
+++ 2.6.22-rc7/net/core/netpoll.c 2007-07-02 09:32:34.0 +0200
@@ -72,8 +72,7 @@ static void
On 07/02, Jarek Poplawski wrote:
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -72,7 +72,8 @@ static void queue_process(struct work_struct *work)
netif_tx_unlock(dev);
local_irq_restore(flags);
-
On 04/20, Andrew Morton wrote:
On Fri, 20 Apr 2007 11:41:46 +0100
David Howells [EMAIL PROTECTED] wrote:
There are only two non-net patches that AF_RXRPC depends on:
(1) The key facility changes. That's all my code anyway, and shouldn't be a
problem to merge unless someone
On 04/23, David Howells wrote:
We only care when del_timer() returns true. In that case, if the timer
function still runs (possible for single-threaded wqs), it has already
passed __queue_work().
Why do you assume that?
If del_timer() returns true, the timer was pending. This means it
On 04/24, David Howells wrote:
Oleg Nesterov [EMAIL PROTECTED] wrote:
We only care when del_timer() returns true. In that case, if the timer
function still runs (possible for single-threaded wqs), it has already
passed __queue_work().
Why do you assume that?
Sorry, I
On 04/24, David Howells wrote:
Oleg Nesterov [EMAIL PROTECTED] wrote:
The current code uses del_timer_sync(). It will also return 0. However, it
will spin waiting for timer->function() to complete. So we are just wasting
CPU.
That's my objection to using cancel_delayed_work
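For context, the change being argued about can be sketched like this (2007-era workqueue sketch; the exact helper names may differ from the code of that time):

```c
/* Sketch: cancel_delayed_work() only needs to know whether the timer
 * was still pending; del_timer() reports that without spinning until
 * a concurrently running timer->function() completes, which is what
 * del_timer_sync() does. */
static inline int cancel_delayed_work(struct delayed_work *work)
{
	int ret = del_timer(&work->timer);	/* was: del_timer_sync() */

	if (ret)
		work_clear_pending(&work->work);
	return ret;
}
```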
On 04/24, David Howells wrote:
Oleg Nesterov [EMAIL PROTECTED] wrote:
Great. I'll send the s/del_timer_sync/del_timer/ patch.
I didn't say I necessarily agreed that this was a good idea. I just meant that
I agree that it will waste CPU. You must still audit all uses
On 04/24, David Howells wrote:
Oleg Nesterov [EMAIL PROTECTED] wrote:
Sure, I'll grep for cancel_delayed_work(). But unless I missed something,
this change should be completely transparent for all users. Otherwise, it
is buggy.
I guess you will have to make sure
On 04/25, David Howells wrote:
Oleg Nesterov [EMAIL PROTECTED] wrote:
Yes sure. Note that this is documented:
/*
 * Kill off a pending schedule_delayed_work(). Note that the work callback
 * function may still be running on return from cancel_delayed_work(). Run
.
port_carrier_check() uses p->dev->br_port == NULL as indication that
net_bridge_port is under destruction. Otherwise it assumes that
p->dev->br_port == p.
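The invariant being relied on can be sketched as (kernel-side sketch, not the actual br_if.c code):

```c
/* Sketch: dev->br_port is cleared while the port is being torn down;
 * otherwise it points back at the net_bridge_port itself. */
struct net_bridge_port *p = dev->br_port;

if (p == NULL)
	return;			/* port is under destruction */
/* here the invariant p->dev->br_port == p is assumed to hold */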
Signed-off-by: Oleg Nesterov [EMAIL PROTECTED]
Acked-By: David Howells [EMAIL PROTECTED]
--- WQ/net/bridge/br_if.c~5_bridge_uaf 2007-02-18 23:06
On 02/20, Stephen Hemminger wrote:
On Wed, 21 Feb 2007 01:19:41 +0300
Oleg Nesterov [EMAIL PROTECTED] wrote:
static void release_nbp(struct kobject *kobj)
{
struct net_bridge_port *p
= container_of(kobj, struct net_bridge_port, kobj);
+
+ dev_put(p->dev
On 02/21, Stephen Hemminger wrote:
This is what I was suggesting by getting rid of the work queue completely.
I can't comment on this patch, but if we can get rid of the work_struct - good!
-static void port_carrier_check(struct work_struct *work)
+void br_port_carrier_check(struct
On 02/21, Stephen Hemminger wrote:
I would rather put it in a bugfix patchset for 2.6.21 and 2.6.20-stable
OK. Even better. Could you also remove br_private.h:BR_PORT_DEBOUNCE then?
Oleg.
On 10/12, Andrew Morton wrote:
On Fri, 12 Oct 2007 15:47:59 -0400
Mathieu Desnoyers [EMAIL PROTECTED] wrote:
Hi Andrew,
I noticed a regression between 2.6.23-rc8-mm2 and 2.6.23-mm1 (with your
hotfixes). User space threads seem to receive an ERESTART_RESTARTBLOCK
as soon as a thread
On 10/13, Oleg Nesterov wrote:
On 10/12, Andrew Morton wrote:
Bisection shows that this problem is caused by these two patches:
pid-namespaces-allow-cloning-of-new-namespace.patch
This? http://marc.info/?l=linux-mm-commits&m=118712242002039
Pavel, this patch has a subtle difference
On 10/18, Jarek Poplawski wrote:
+/**
+ * flush_work_sync - block until a work_struct's callback has terminated
^^^
Hmm...
+ * Similar to cancel_work_sync() but will only busy wait (without cancel)
+ * if the work is
On 10/22, Jarek Poplawski wrote:
On Fri, Oct 19, 2007 at 09:50:14AM +0200, Jarek Poplawski wrote:
On Thu, Oct 18, 2007 at 07:48:19PM +0400, Oleg Nesterov wrote:
On 10/18, Jarek Poplawski wrote:
+/**
+ * flush_work_sync - block until a work_struct's callback has
terminated
.
Afaics, we can simply remove both set_current_state()'s, and we
could have done this long ago, right after ef87979c273a2 ("pktgen:
better scheduler friendliness") which changed pktgen_thread_worker()
to use wait_event_interruptible_timeout().
Reported-by: Huang Ying ying.hu...@intel.com
Signed-off-by: Oleg
On 08/04, Marcelo Ricardo Leitner wrote:
On Tue, Aug 04, 2015 at 06:33:34PM +0200, Oleg Nesterov wrote:
Commit 1fbe4b46caca ("net: pktgen: kill the Wait for kthread_stop
code in pktgen_thread_worker()") removed (in particular) the final
__set_current_state(TASK_RUNNING) and I didn't notice
Hello,
I am not familiar with this code and I have no idea how to test
these changes, so 2/2 comes as a separate change. 1/2 looks like
the obvious bugfix, and probably candidate for -stable.
Oleg.
net/core/pktgen.c | 9 ++---
1 files changed, 2 insertions(+), 7 deletions(-)
pktgen_thread_worker() is obviously racy, kthread_stop() can come
between the kthread_should_stop() check and set_current_state().
Signed-off-by: Oleg Nesterov o...@redhat.com
Reported-by: Jan Stancek jstan...@redhat.com
Reported-by: Marcelo Leitner mleit...@redhat.com
---
net/core/pktgen.c
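The race being fixed has this classic shape (sketch, not the exact pktgen code):

```c
/* Racy: if kthread_stop() sets should_stop and wakes the thread
 * between the check and set_current_state(), the wakeup is lost and
 * the thread sleeps with should_stop already set. */
while (!kthread_should_stop()) {
	set_current_state(TASK_INTERRUPTIBLE);	/* too late */
	schedule();
}

/* Safe ordering: publish the sleeping state first, then re-check. */
for (;;) {
	set_current_state(TASK_INTERRUPTIBLE);
	if (kthread_should_stop())
		break;
	schedule();
}
__set_current_state(TASK_RUNNING);
```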
pktgen_thread_worker() doesn't need to wait for kthread_stop(), it
can simply exit. Just pktgen_create_thread() and pg_net_exit() should
do get_task_struct()/put_task_struct(). kthread_stop(dead_thread) is
fine.
Signed-off-by: Oleg Nesterov o...@redhat.com
---
net/core/pktgen.c | 11
Hello,
smp_mb__after_atomic() looks wrong and misleading: sock_reset_flag() does the
non-atomic __clear_bit(), and thus it cannot guarantee that the (also
non-atomic) test_bit(SOCK_NOSPACE) won't be reordered.
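In other words (sketch of the ordering problem; the callee name follows the usual tcp_check_space() path but is not quoted from the patch):

```c
/* smp_mb__after_atomic() only orders against a preceding *atomic* RMW
 * operation.  sock_reset_flag() uses the non-atomic __clear_bit(), so
 * nothing stops the subsequent test_bit() from being reordered before
 * the clear; a full smp_mb() is needed to pair with the barrier on
 * the tcp_poll() side. */
sock_reset_flag(sk, SOCK_QUEUE_SHRUNK);	/* non-atomic __clear_bit() */
smp_mb();				/* not smp_mb__after_atomic() */
if (test_bit(SOCK_NOSPACE, &sk->sk_socket->flags))
	tcp_new_space(sk);
```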
It was added by 3c7151275c0c9a "tcp: add memory barriers to write space paths"
and the patch
On 01/23, Eric Dumazet wrote:
>
> On Mon, 2017-01-23 at 11:56 -0500, Jason Baron wrote:
> > On 01/23/2017 09:30 AM, Oleg Nesterov wrote:
> > > Hello,
> > >
> > > smp_mb__after_atomic() looks wrong and misleading, sock_reset_flag() does
> > > th
On 01/23, Jason Baron wrote:
>
> >It was added by 3c7151275c0c9a "tcp: add memory barriers to write space
> >paths"
> >and the patch looks correct in that we need the barriers in tcp_check_space()
> >and tcp_poll() in theory, so it seems tcp_check_space() needs smp_mb() ?
> >
>
> Yes, I think it
On 06/29, Paul E. McKenney wrote:
>
> --- a/kernel/task_work.c
> +++ b/kernel/task_work.c
> @@ -109,7 +109,8 @@ void task_work_run(void)
>* the first entry == work, cmpxchg(task_works) should
>* fail, but it can play with *work and other entries.
>*/
On 06/30, Paul E. McKenney wrote:
>
> > > + raw_spin_lock_irq(>pi_lock);
> > > + raw_spin_unlock_irq(>pi_lock);
>
> I agree that the spin_unlock_wait() implementations would avoid the
> deadlock with an acquisition from an interrupt handler, while also
> avoiding the need to
On 06/30, Paul E. McKenney wrote:
>
> On Fri, Jun 30, 2017 at 05:20:10PM +0200, Oleg Nesterov wrote:
> >
> > I do not think the overhead will be noticeable in this particular case.
> >
> > But I am not sure I understand why do we wa
is ongoing to address this issue.
Reviewed-by: Oleg Nesterov <o...@redhat.com>
Please test your patch with the fix below, in this particular case the
TIF_IA32 check should be fine. Although this is not what we really want,
we should probably use user_64bit_mode(regs) which checks ->c
Yonghong,
The patch looks good to me, but I'll try to read it carefully later.
Just a couple of cosmetic nits for now.
On 11/09, Yonghong Song wrote:
>
> --- a/arch/x86/include/asm/uprobes.h
> +++ b/arch/x86/include/asm/uprobes.h
> @@ -53,6 +53,10 @@ struct arch_uprobe {
>
The patch looks good to me, but I have a question because I know nothing
about insn encoding,
On 11/10, Yonghong Song wrote:
>
> +static int push_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
> +{
> + u8 opc1 = OPCODE1(insn), reg_offset = 0;
> +
> + if (opc1 < 0x50 || opc1
On 11/13, Yonghong Song wrote:
>
> +static int push_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
> +{
> + u8 opc1 = OPCODE1(insn), reg_offset = 0;
> +
> + if (opc1 < 0x50 || opc1 > 0x57)
> + return -ENOSYS;
> +
> + if (insn->length > 2)
> +
On 11/14, Oleg Nesterov wrote:
>
> > +#ifdef CONFIG_X86_64
> > + if (test_thread_flag(TIF_ADDR32))
> > + return -ENOSYS;
> > +#endif
>
> No, this doesn't look right, see my previous email. You should do this
> check in the "if (insn->
On 11/13, Yonghong Song wrote:
>
> On 11/13/17 4:59 AM, Oleg Nesterov wrote:
> >>+ switch (opc1) {
> >>+ case 0x50:
> >>+ reg_offset = offsetof(struct pt_regs, r8);
> >>+ break;
> >>+
On 11/17, Yonghong Song wrote:
>
> On 11/17/17 9:25 AM, Oleg Nesterov wrote:
> >On 11/15, Yonghong Song wrote:
> >>
> >>v3 -> v4:
> >> . Revert most of v3 change as 32bit emulation is not really working
> >> on x86_64 platform
On 11/14, Yonghong Song wrote:
>
> On 11/14/17 7:51 AM, Oleg Nesterov wrote:
> >
> >And test_thread_flag(TIF_ADDR32) is not right too. The caller is not
> >necessarily the probed task. See is_64bit_mm(mm) in
> >arch_uprobe_analyze_insn().
>
> I printed out
On 11/14, Yonghong Song wrote:
>
>
> On 11/14/17 8:03 AM, Oleg Nesterov wrote:
> >Ah, no, sizeof_long() is broken by the same reason, so you can't test it...
>
> Right. I hacked the emulate_push_stack (original name: push_ret_address)
> with sizeof_long = 4, and 32bit
On 11/15, Oleg Nesterov wrote:
>
> So please, check if uprobe_init_insn() fails or not in this case. After that
> we will know whether your patch needs the additional is_64bit_mm() check in
> push_setup_xol_ops() or not.
OK, I did the check for you.
uprobe_init_insn() doesn't fail b
On 11/09, Oleg Nesterov wrote:
>
> And. Do you really need ->post_xol() method to emulate "push"? Why we can't
> simply execute it out-of-line if copy_to_user() fails?
>
> branch_post_xol_op() is needed because we can't execute "call" out-of-line,
> we need
On 11/09, Yonghong Song wrote:
>
> This patch extends the emulation to "push <reg>"
> insns. These insns are typical in the beginning
> of the function. For example, bcc
> in https://github.com/iovisor/bcc repo provides
> tools to measure funclatency, detect memleak, etc.
> The tools will place
On 11/09, Yonghong Song wrote:
>
> + if (insn_class == UPROBE_PUSH_INSN) {
> + src_ptr = get_push_reg_ptr(auprobe, regs);
> + reg_width = sizeof_long();
> + sp = regs->sp;
> + if (copy_to_user((void __user *)(sp - reg_width), src_ptr,
>