Re: [PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression

2023-06-05 Thread Oleg Nesterov
On 06/01, Mike Christie wrote:
>
> --- a/kernel/signal.c
> +++ b/kernel/signal.c
> @@ -1368,7 +1368,9 @@ int zap_other_threads(struct task_struct *p)
>  
>   while_each_thread(p, t) {
>   task_clear_jobctl_pending(t, JOBCTL_PENDING_MASK);
> - count++;
> + /* Don't require de_thread to wait for the vhost_worker */
> + if ((t->flags & (PF_IO_WORKER | PF_USER_WORKER)) != PF_USER_WORKER)
> + count++;

Well if you do this, then you should also change __exit_signal() to
not decrement sig->notify_count. Otherwise de_thread() can succeed
before the "normal" sub-threads exit.

But this can't be right anyway... If nothing else, suppose we have
a process with 3 threads:

M   - the main thread, group leader
T   - sub-thread
V   - vhost worker

T does exec and calls de_thread().

M exits. T takes the leadership and does release_task()

V is still running but V->group_leader points to already freed M.

Or unshare_sighand() after that... If nothing else this means that
lock_task_sighand(T) and lock_task_sighand(V) will take different
locks.

Oleg.

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression

2023-06-05 Thread Oleg Nesterov
On 06/02, Eric W. Biederman wrote:
>
> Oleg Nesterov  writes:
>
> > Hi Mike,
> >
> > sorry, but somehow I can't understand this patch...
> >
> > I'll try to read it with a fresh head on Weekend, but for example,
> >
> > On 06/01, Mike Christie wrote:
> >>
> >>  static int vhost_task_fn(void *data)
> >>  {
> >>struct vhost_task *vtsk = data;
> >> -  int ret;
> >> +  bool dead = false;
> >> +
> >> +  for (;;) {
> >> +  bool did_work;
> >> +
> >> +  /* mb paired w/ vhost_task_stop */
> >> +  if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags))
> >> +  break;
> >> +
> >> +  if (!dead && signal_pending(current)) {
> >> +  struct ksignal ksig;
> >> +  /*
> >> +   * Calling get_signal will block in SIGSTOP,
> >> +   * or clear fatal_signal_pending, but remember
> >> +   * what was set.
> >> +   *
> >> +   * This thread won't actually exit until all
> >> +   * of the file descriptors are closed, and
> >> +   * the release function is called.
> >> +   */
> >> +  dead = get_signal(&ksig);
> >> +  if (dead)
> >> +  clear_thread_flag(TIF_SIGPENDING);
> >
> > this can't be right or I am totally confused.
> >
> > Another signal_wake_up() can come right after clear(SIGPENDING).
>
> Technically yes.

...

> Beyond that clearing TIF_SIGPENDING is just an optimization so
> the thread can sleep in schedule and not spin.

Yes. So if another signal_wake_up() comes after clear(SIGPENDING),
this code will spin in a busy-wait loop waiting for VHOST_TASK_FLAGS_STOP.
Obviously not good, and even deadlockable on UP && !PREEMPT.

Oleg.



Re: [PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression

2023-06-02 Thread Eric W. Biederman
Oleg Nesterov  writes:

> Hi Mike,
>
> sorry, but somehow I can't understand this patch...
>
> I'll try to read it with a fresh head on Weekend, but for example,
>
> On 06/01, Mike Christie wrote:
>>
>>  static int vhost_task_fn(void *data)
>>  {
>>  struct vhost_task *vtsk = data;
>> -int ret;
>> +bool dead = false;
>> +
>> +for (;;) {
>> +bool did_work;
>> +
>> +/* mb paired w/ vhost_task_stop */
>> +if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags))
>> +break;
>> +
>> +if (!dead && signal_pending(current)) {
>> +struct ksignal ksig;
>> +/*
>> + * Calling get_signal will block in SIGSTOP,
>> + * or clear fatal_signal_pending, but remember
>> + * what was set.
>> + *
>> + * This thread won't actually exit until all
>> + * of the file descriptors are closed, and
>> + * the release function is called.
>> + */
>> +dead = get_signal(&ksig);
>> +if (dead)
>> +clear_thread_flag(TIF_SIGPENDING);
>
> this can't be right or I am totally confused.
>
> Another signal_wake_up() can come right after clear(SIGPENDING).

Technically yes.

However please note that prepare_signal does:
if (signal->flags & SIGNAL_GROUP_EXIT)
return false;

In general it is wrong to receive or attempt to process a signal
after task death has been decided.

Strictly speaking that doesn't cover de_thread and coredumping, but
still, receiving any kind of signal at that point is rare and
generally wrong behavior.

Beyond that clearing TIF_SIGPENDING is just an optimization so
the thread can sleep in schedule and not spin.

> Again, I'll try to re-read this patch, but let me ask anyway...
>
> Do we have a plan B? I mean... iirc you have mentioned that you can
> change these code paths to do something like
>
>   if (killed)
>   tell_the_drivers_that_all_callbacks_will_fail();
>
>
> so that vhost_worker() can exit after get_signal() returns SIGKILL.
>
> Probably I misunderstood you, but it would be nice to avoid the changes
> in coredump/etc code just to add a temporary (iiuc!) fix.

One saving grace with the vhost code is that you need to open
device nodes that normally have root-only permissions.

If we are willing to allow races in process shutdown to cause leaks, I
think we can do something better and put the burden of work on the
vhost layer.

I will follow up with a patch doing that.

Eric



Re: [PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression

2023-06-02 Thread Oleg Nesterov
Hi Mike,

sorry, but somehow I can't understand this patch...

I'll try to read it with a fresh head on Weekend, but for example,

On 06/01, Mike Christie wrote:
>
>  static int vhost_task_fn(void *data)
>  {
>   struct vhost_task *vtsk = data;
> - int ret;
> + bool dead = false;
> +
> + for (;;) {
> + bool did_work;
> +
> + /* mb paired w/ vhost_task_stop */
> + if (test_bit(VHOST_TASK_FLAGS_STOP, &vtsk->flags))
> + break;
> +
> + if (!dead && signal_pending(current)) {
> + struct ksignal ksig;
> + /*
> +  * Calling get_signal will block in SIGSTOP,
> +  * or clear fatal_signal_pending, but remember
> +  * what was set.
> +  *
> +  * This thread won't actually exit until all
> +  * of the file descriptors are closed, and
> +  * the release function is called.
> +  */
> + dead = get_signal(&ksig);
> + if (dead)
> + clear_thread_flag(TIF_SIGPENDING);

this can't be right or I am totally confused.

Another signal_wake_up() can come right after clear(SIGPENDING).


Again, I'll try to re-read this patch, but let me ask anyway...

Do we have a plan B? I mean... iirc you have mentioned that you can
change these code paths to do something like

if (killed)
tell_the_drivers_that_all_callbacks_will_fail();


so that vhost_worker() can exit after get_signal() returns SIGKILL.

Probably I misunderstood you, but it would be nice to avoid the changes
in coredump/etc code just to add a temporary (iiuc!) fix.

Oleg.



Re: [PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression

2023-06-01 Thread Eric W. Biederman
Mike Christie  writes:

> When switching from kthreads to vhost_tasks two bugs were added:
> 1. The vhost worker tasks now show up as processes, so scripts doing
> ps or ps a would incorrectly detect the vhost task as another
> process.  2. kthreads disabled freezing by setting PF_NOFREEZE, but
> vhost tasks didn't disable it or add support for it.
>
> To fix both bugs, this switches the vhost task to be a thread in the
> process that does the VHOST_SET_OWNER ioctl, and has vhost_worker call
> get_signal to support SIGKILL/SIGSTOP and freeze signals. Note that
> SIGKILL/STOP support is required because CLONE_THREAD requires
> CLONE_SIGHAND which requires those 2 signals to be supported.
>
> This is a modified version of the patch written by Mike Christie
>  which was a modified version of a patch
> originally written by Linus.
>
> Much of what depended upon PF_IO_WORKER now depends on PF_USER_WORKER.
> Including ignoring signals, setting up the register state, and having
> get_signal return instead of calling do_group_exit.
>
> Tidied up the vhost_task abstraction so that the definition of
> vhost_task only needs to be visible inside of vhost_task.c.  Making
> it easier to review the code and tell what needs to be done where.
> As part of this the main loop has been moved from vhost_worker into
> vhost_task_fn.  vhost_worker now returns true if work was done.

Please see below for a race in that tidying up.

> The main loop has been updated to call get_signal which handles
> SIGSTOP, freezing, and collects the message that tells the thread to
> exit as part of process exit.  This collection clears
> __fatal_signal_pending.  This collection is not guaranteed to
> clear signal_pending() so clear that explicitly so the schedule()
> sleeps.
>
> For now the vhost thread continues to exist and run work until the
> last file descriptor is closed and the release function is called as
> part of freeing struct file.  To avoid hangs in the coredump
> rendezvous and when killing threads in a multi-threaded exec.  The
> coredump code and de_thread have been modified to ignore vhost threads.
>
> Removing the special case for exec appears to require teaching
> vhost_dev_flush how to directly complete transactions in case
> the vhost thread is no longer running.
>
> Removing the special case for coredump rendezvous requires either the
> above fix needed for exec or moving the coredump rendezvous into
> get_signal.



In just fixing the hang after exec I am afraid I may have introduced
something worse.

Two different sighand_struct's (and their associated locks) pointing
at the same signal_struct.  (Should be fixable?)

I am worried about what happens with that vhost task after an exec.
It retains its existing cred (and technically the old mm) but shares
the signal_struct, so it might be possible to use permission checks
against the old vhost task cred to bypass permission checks on the
new task's cred.  In particular for execs that gain privilege.

It doesn't look like that is an issue for signals and suid exec as
kill_ok_by_cred seems to deliberately allow the same thing.


We may be ok, but the way the vhost task remains after exec smells
like the setup for a local privilege escalation.


Oleg do you have any insights?

Does anyone see why using the vhost task to modify the process should
not result in privilege escalation?


> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index a92af08e7864..074273020849 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -256,7 +256,7 @@ void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
>* test_and_set_bit() implies a memory barrier.
>*/
>   llist_add(&work->node, &dev->worker->work_list);
> - wake_up_process(dev->worker->vtsk->task);
> + vhost_task_wake(dev->worker->vtsk);
>   }
>  }
>  EXPORT_SYMBOL_GPL(vhost_work_queue);
> @@ -333,31 +333,19 @@ static void vhost_vq_reset(struct vhost_dev *dev,
>   __vhost_vq_meta_reset(vq);
>  }
>  
> -static int vhost_worker(void *data)
> +static bool vhost_worker(void *data)
>  {
>   struct vhost_worker *worker = data;
>   struct vhost_work *work, *work_next;
>   struct llist_node *node;
>  
> - for (;;) {
> - /* mb paired w/ kthread_stop */
> - set_current_state(TASK_INTERRUPTIBLE);
> -
> - if (vhost_task_should_stop(worker->vtsk)) {
> - __set_current_state(TASK_RUNNING);
> - break;
> - }
> -
> - node = llist_del_all(&worker->work_list);
> - if (!node)
> - schedule();
> -
> + node = llist_del_all(&worker->work_list);
> + if (node) {
>   node = llist_reverse_order(node);
>   /* make sure flag is seen after deletion */
>   smp_wmb();
>   llist_for_each_entry_safe(work, work_next, node, node) {
>   

Re: [PATCH 1/1] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression

2023-06-01 Thread Michael S. Tsirkin
On Thu, Jun 01, 2023 at 01:32:32PM -0500, Mike Christie wrote:
> When switching from kthreads to vhost_tasks two bugs were added:
> 1. The vhost worker tasks now show up as processes, so scripts doing
> ps or ps a would incorrectly detect the vhost task as another
> process.  2. kthreads disabled freezing by setting PF_NOFREEZE, but
> vhost tasks didn't disable it or add support for it.
> 
> To fix both bugs, this switches the vhost task to be a thread in the
> process that does the VHOST_SET_OWNER ioctl, and has vhost_worker call
> get_signal to support SIGKILL/SIGSTOP and freeze signals. Note that
> SIGKILL/STOP support is required because CLONE_THREAD requires
> CLONE_SIGHAND which requires those 2 signals to be supported.
> 
> This is a modified version of the patch written by Mike Christie
>  which was a modified version of a patch
> originally written by Linus.
> 
> Much of what depended upon PF_IO_WORKER now depends on PF_USER_WORKER.
> Including ignoring signals, setting up the register state, and having
> get_signal return instead of calling do_group_exit.
> 
> Tidied up the vhost_task abstraction so that the definition of
> vhost_task only needs to be visible inside of vhost_task.c.  Making
> it easier to review the code and tell what needs to be done where.
> As part of this the main loop has been moved from vhost_worker into
> vhost_task_fn.  vhost_worker now returns true if work was done.
> 
> The main loop has been updated to call get_signal which handles
> SIGSTOP, freezing, and collects the message that tells the thread to
> exit as part of process exit.  This collection clears
> __fatal_signal_pending.  This collection is not guaranteed to
> clear signal_pending() so clear that explicitly so the schedule()
> sleeps.
> 
> For now the vhost thread continues to exist and run work until the
> last file descriptor is closed and the release function is called as
> part of freeing struct file.  To avoid hangs in the coredump
> rendezvous and when killing threads in a multi-threaded exec.  The
> coredump code and de_thread have been modified to ignore vhost threads.
> 
> Removing the special case for exec appears to require teaching
> vhost_dev_flush how to directly complete transactions in case
> the vhost thread is no longer running.
> 
> Removing the special case for coredump rendezvous requires either the
> above fix needed for exec or moving the coredump rendezvous into
> get_signal.
> 
> Fixes: 6e890c5d5021 ("vhost: use vhost_tasks for worker threads")
> Signed-off-by: Eric W. Biederman 
> Co-developed-by: Mike Christie 
> Signed-off-by: Mike Christie 


Acked-by: Michael S. Tsirkin 


> ---
>  arch/x86/include/asm/fpu/sched.h |  2 +-
>  arch/x86/kernel/fpu/context.h|  2 +-
>  arch/x86/kernel/fpu/core.c   |  2 +-
>  drivers/vhost/vhost.c| 22 ++--
>  fs/coredump.c|  4 +-
>  include/linux/sched/task.h   |  1 -
>  include/linux/sched/vhost_task.h | 15 ++
>  kernel/exit.c|  5 +-
>  kernel/fork.c| 13 ++---
>  kernel/signal.c  |  8 +--
>  kernel/vhost_task.c  | 92 +---
>  11 files changed, 89 insertions(+), 77 deletions(-)
> 
> diff --git a/arch/x86/include/asm/fpu/sched.h b/arch/x86/include/asm/fpu/sched.h
> index c2d6cd78ed0c..78fcde7b1f07 100644
> --- a/arch/x86/include/asm/fpu/sched.h
> +++ b/arch/x86/include/asm/fpu/sched.h
> @@ -39,7 +39,7 @@ extern void fpu_flush_thread(void);
>  static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu)
>  {
>   if (cpu_feature_enabled(X86_FEATURE_FPU) &&
> - !(current->flags & (PF_KTHREAD | PF_IO_WORKER))) {
> + !(current->flags & (PF_KTHREAD | PF_USER_WORKER))) {
>   save_fpregs_to_fpstate(old_fpu);
>   /*
>* The save operation preserved register state, so the
> diff --git a/arch/x86/kernel/fpu/context.h b/arch/x86/kernel/fpu/context.h
> index 9fcfa5c4dad7..af5cbdd9bd29 100644
> --- a/arch/x86/kernel/fpu/context.h
> +++ b/arch/x86/kernel/fpu/context.h
> @@ -57,7 +57,7 @@ static inline void fpregs_restore_userregs(void)
>   struct fpu *fpu = &current->thread.fpu;
>   int cpu = smp_processor_id();
>  
> - if (WARN_ON_ONCE(current->flags & (PF_KTHREAD | PF_IO_WORKER)))
> + if (WARN_ON_ONCE(current->flags & (PF_KTHREAD | PF_USER_WORKER)))
>   return;
>  
>   if (!fpregs_state_valid(fpu, cpu)) {
> diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
> index caf33486dc5e..1015af1ae562 100644
> --- a/arch/x86/kernel/fpu/core.c
> +++ b/arch/x86/kernel/fpu/core.c
> @@ -426,7 +426,7 @@ void kernel_fpu_begin_mask(unsigned int kfpu_mask)
>  
>   this_cpu_write(in_kernel_fpu, true);
>  
> - if (!(current->flags & (PF_KTHREAD | PF_IO_WORKER)) &&
> + if (!(current->flags & (PF_KTHREAD | PF_USER_WORKER)) &&
>   !test_thread_flag(TIF_NEED_FPU_LOAD)) {
>