Re: [PATCH 1/2] vdpa: mlx5: prevent cvq work from hogging CPU

2022-03-25 Thread Hillf Danton
On Fri, 25 Mar 2022 15:53:09 +0800 Jason Wang wrote:
>
> Ok, Hillf, does this make sense for you? We want the issue to be fixed
> soon, it's near to our product release.

Feel free to go ahead - product release is important.

BR Hillf
___
Virtualization

Re: [PATCH 1/2] vdpa: mlx5: prevent cvq work from hogging CPU

2022-03-24 Thread Hillf Danton
On Thu, 24 Mar 2022 16:20:34 +0800 Jason Wang wrote:
> On Thu, Mar 24, 2022 at 2:17 PM Michael S. Tsirkin wrote:
> > On Thu, Mar 24, 2022 at 02:04:19PM +0800, Hillf Danton wrote:
> > > On Thu, 24 Mar 2022 10:34:09 +0800 Jason Wang wrote:
> > > > On Thu, Mar 24, 2022

Re: [PATCH 1/2] vdpa: mlx5: prevent cvq work from hogging CPU

2022-03-24 Thread Hillf Danton
On Thu, 24 Mar 2022 10:34:09 +0800 Jason Wang wrote:
> On Thu, Mar 24, 2022 at 8:54 AM Hillf Danton wrote:
> >
> > On Tue, 22 Mar 2022 09:59:14 +0800 Jason Wang wrote:
> > >
> > > Yes, there will be no "infinite" loop, but since the loop is t

Re: [PATCH 1/2] vdpa: mlx5: prevent cvq work from hogging CPU

2022-03-23 Thread Hillf Danton
On Tue, 22 Mar 2022 09:59:14 +0800 Jason Wang wrote:
>
> Yes, there will be no "infinite" loop, but since the loop is triggered
> by userspace. It looks to me it will delay the flush/drain of the
> workqueue forever which is still suboptimal.

Usually it is barely possible to shoot two birds

Re: [PATCH 1/2] vdpa: mlx5: prevent cvq work from hogging CPU

2022-03-21 Thread Hillf Danton
On Mon, 21 Mar 2022 17:00:09 +0800 Jason Wang wrote:
>
> Ok, speak too fast.

Frankly I have fun running faster five days a week.

> So you meant to add a cond_resched() in the loop?

Yes, it is one liner.

Hillf

Re: [PATCH 1/2] vdpa: mlx5: prevent cvq work from hogging CPU

2022-03-21 Thread Hillf Danton
On Mon, 21 Mar 2022 14:04:28 +0800 Jason Wang wrote:
> A userspace triggerable infinite loop could happen in
> mlx5_cvq_kick_handler() if userspace keeps sending a huge amount of
> cvq requests.
>
> Fixing this by introducing a quota and re-queue the work if we're out
> of the budget. While at

Re: [PATCH 7/8] vhost: use kernel_copy_process to check RLIMITs and inherit cgroups

2021-09-19 Thread Hillf Danton
On Thu, 16 Sep 2021 16:20:50 -0500 Mike Christie wrote:
>
>  static int vhost_worker_create(struct vhost_dev *dev)
>  {
> +	DECLARE_COMPLETION_ONSTACK(start_done);

Nit, cut it.

> 	struct vhost_worker *worker;
> 	struct task_struct *task;
> +	char buf[TASK_COMM_LEN];
>

Re: INFO: task hung in lock_sock_nested (2)

2020-02-24 Thread Hillf Danton
On Mon, 24 Feb 2020 11:08:53 +0100 Stefano Garzarella wrote:
> On Sun, Feb 23, 2020 at 03:50:25PM +0800, Hillf Danton wrote:
> >
> > Seems like vsock needs a word to track lock owner in an attempt to
> > avoid trying to lock sock while the current is the lo

Re: INFO: task hung in lock_sock_nested (2)

2020-02-23 Thread Hillf Danton
On Sat, 22 Feb 2020 10:58:12 -0800
> syzbot found the following crash on:
>
> HEAD commit:    2bb07f4e tc-testing: updated tdc tests for basic filter
> git tree:       net-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=122efdede0
> kernel config:

Re: [Spice-devel] Xorg indefinitely hangs in kernelspace

2019-10-03 Thread Hillf Danton
On Thu, 3 Oct 2019 09:45:55 +0300 Jaak Ristioja wrote:
> On 30.09.19 16:29, Frediano Ziglio wrote:
> > Why didn't you update bug at
> > https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1813620?
> > I know it can seem tedious but would help tracking it.
>
> I suppose the lack on

Re: Xorg indefinitely hangs in kernelspace

2019-09-09 Thread Hillf Danton
On Mon, 9 Sep 2019 from Gerd Hoffmann
>
> Hmm, I think the patch is wrong.

Hmm... the change should have been made only in the error path, leaving
the locks for drivers to release when the job completes with no error
returned.

> As far I know it is the qxl drivers's
> job to call ttm_eu_backoff_reservation().

Re: Xorg indefinitely hangs in kernelspace

2019-09-09 Thread Hillf Danton
Hi,

On Mon, 9 Sep 2019 from Gerd Hoffmann
>
> Hmm, I think the patch is wrong. As far I know it is the qxl drivers's
> job to call ttm_eu_backoff_reservation(). Doing that automatically in
> ttm will most likely break other ttm users.
>
Perhaps.

> So I guess the call is missing in the qxl

Re: [Spice-devel] Xorg indefinitely hangs in kernelspace

2019-09-06 Thread Hillf Danton
From Frediano Ziglio
>
> Where does it came this patch?

My fingers tapping the keyboard.

> Is it already somewhere?

No idea yet.

> Is it supposed to fix this issue?

It may do nothing else as far as I can tell.

> Does it affect some other card beside QXL?

Perhaps.

Re: Xorg indefinitely hangs in kernelspace

2019-09-05 Thread Hillf Danton
On Tue, 6 Aug 2019 21:00:10 +0300 From: Jaak Ristioja
> Hello!
>
> I'm writing to report a crash in the QXL / DRM code in the Linux kernel.
> I originally filed the issue on LaunchPad and more details can be found
> there, although I doubt whether these details are useful.
>

Re: [PATCH V5 3/5] iommu/dma-iommu: Handle deferred devices

2019-08-16 Thread Hillf Danton
On Thu, 15 Aug 2019 12:09:41 +0100 Tom Murphy wrote:
>
> Handle devices which defer their attach to the iommu in the dma-iommu api
>
> Signed-off-by: Tom Murphy
> ---
>  drivers/iommu/dma-iommu.c | 27 ++-
>  1 file changed, 26 insertions(+), 1 deletion(-)
>
> diff

Re: INFO: rcu detected stall in vhost_worker

2019-07-27 Thread Hillf Danton
Fri, 26 Jul 2019 08:26:01 -0700 (PDT)

syzbot has bisected this bug to:

commit 0ecfebd2b52404ae0c54a878c872bb93363ada36
Author: Linus Torvalds
Date:   Sun Jul 7 22:41:56 2019 +

    Linux 5.2

bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=118810bfa0
start commit:

Re: Reminder: 2 open syzbot bugs in vhost subsystem

2019-07-24 Thread Hillf Danton
On Tue, 2 Jul 2019 13:30:07 +0800 Jason Wang wrote:
> On 2019/7/2 Eric Biggers wrote:
> > [This email was generated by a script. Let me know if you have any
> > suggestions to make it better, or if you want it re-generated with
> > the latest status.]
> >
> > Of the currently open syzbot

Re: memory leak in vhost_net_ioctl

2019-07-24 Thread Hillf Danton
Hello Syzbot

On Fri, 14 Jun 2019 11:04:03 +0800 syzbot wrote:
>
> Hello,
>
> syzbot has tested the proposed patch but the reproducer still triggered crash:
> memory leak in batadv_tvlv_handler_register
>
> [ 484.626788][ T156] bond0 (unregistering): Releasing backup interface bond_slave_1

Re: memory leak in vhost_net_ioctl

2019-07-24 Thread Hillf Danton
Hello Syzbot

On Fri, 14 Jun 2019 11:04:03 +0800 syzbot wrote:
>
> Hello,
>
> syzbot has tested the proposed patch but the reproducer still triggered crash:
> memory leak in batadv_tvlv_handler_register
>
It is not the ubuf leak addressed in this thread. Good news. I will see this new leak

Re: memory leak in vhost_net_ioctl

2019-07-24 Thread Hillf Danton
Hello Syzbot

On Fri, 14 Jun 2019 02:26:02 +0800 syzbot wrote:
>
> Hello,
>
> syzbot has tested the proposed patch but the reproducer still triggered crash:
> memory leak in vhost_net_ioctl
>
Oh sorry for my poor patch.

> ANGE): hsr_slave_1: link becomes ready
> 2019/06/13 18:24:57 executed

Re: memory leak in vhost_net_ioctl

2019-07-24 Thread Hillf Danton
Hello Dmitry

On Thu, 13 Jun 2019 20:12:06 +0800 Dmitry Vyukov wrote:
> On Thu, Jun 13, 2019 at 2:07 PM Hillf Danton wrote:
> >
> > Hello Jason
> >
> > On Thu, 13 Jun 2019 17:10:39 +0800 Jason Wang wrote:
> > >
> > > This is basically a

Re: memory leak in vhost_net_ioctl

2019-07-24 Thread Hillf Danton
Hello Jason

On Thu, 13 Jun 2019 17:10:39 +0800 Jason Wang wrote:
>
> This is basically a kfree(ubuf) after the second vhost_net_flush() in
> vhost_net_release().

Fairly good catch.

> Could you please post a formal patch?

I'd like very much to do that; but I won't, I am afraid, until I

Re: memory leak in vhost_net_ioctl

2019-07-24 Thread Hillf Danton
On Wed, 05 Jun 2019 16:42:05 -0700 (PDT) syzbot wrote:

Hello,

syzbot found the following crash on:

HEAD commit:    788a0249 Merge tag 'arc-5.2-rc4' of git://git.kernel.org/p..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=15dc9ea6a0
kernel config: