[PATCH net v1 2/2] virtio_net: Close queue pairs using helper function

2023-04-28 Thread Feng Liu via Virtualization
Use the newly introduced helper function, which does exactly the same thing, to close the queue pairs. Signed-off-by: Feng Liu Reviewed-by: William Tu Reviewed-by: Parav Pandit --- drivers/net/virtio_net.c | 7 ++- 1 file changed, 2 insertions(+), 5 deletions(-) diff --git
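The preview cuts off before the diff, but the pattern is the standard one: fold the per-queue-pair teardown steps into a single helper and call it from the close path. A rough sketch (struct members and helper names below are assumptions for illustration, not taken from the patch):

/* Hedged sketch of a "disable one queue pair" helper for virtio_net.
 * The receive/send queue field names are assumed; the point is that
 * the close and error paths can share one teardown routine.
 */
static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
{
	napi_disable(&vi->rq[qp_index].napi);              /* stop RX NAPI */
	virtnet_napi_tx_disable(&vi->sq[qp_index].napi);   /* stop TX NAPI */
	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);     /* drop XDP rxq registration */
}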

[PATCH net v1 1/2] virtio_net: Fix error unwinding of XDP initialization

2023-04-28 Thread Feng Liu via Virtualization
When initializing XDP in virtnet_open(), some rq xdp initialization may hit an error, causing the net device open to fail. However, previous rqs have already initialized XDP and enabled NAPI, which is not the expected behavior. We need to roll back the previous rq initialization to avoid leaks in error
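The fix is the usual unwind-on-error pattern: if rq i fails to set up, undo rqs 0..i-1 before failing the open. A hedged sketch of that pattern (the enable/disable helper names are assumptions; the real patch works on virtio_net's own structures):

static int virtnet_open_sketch(struct virtnet_info *vi)
{
	int i, err;

	for (i = 0; i < vi->max_queue_pairs; i++) {
		err = virtnet_enable_queue_pair(vi, i);  /* assumed helper */
		if (err)
			goto err_unwind;
	}
	return 0;

err_unwind:
	/* roll back the queue pairs that were already brought up */
	while (--i >= 0)
		virtnet_disable_queue_pair(vi, i);
	return err;
}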

[PATCH v7 00/14] vhost: multiple worker support

2023-04-28 Thread michael . christie
The following patches were built over Linux's tree. They allow us to support multiple vhost workers tasks per device. The design is a modified version of Stefan's original idea where userspace has the kernel create a worker and we pass back the pid. In this version instead of passing the pid

[PATCH 14/14] vhost_scsi: add support for worker ioctls

2023-04-28 Thread Mike Christie
This has vhost-scsi support the worker ioctls by calling the vhost_worker_ioctl helper. With a single worker, the single thread becomes a bottleneck when trying to use 3 or more virtqueues like: fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \ --ioengine=libaio --iodepth=128 --numjobs=3

[PATCH 12/14] vhost: replace single worker pointer with xarray

2023-04-28 Thread Mike Christie
The next patch allows userspace to create multiple workers per device, so this patch replaces the vhost_worker pointer with an xarray so we can store multiple workers and look them up. Signed-off-by: Mike Christie --- drivers/vhost/vhost.c | 48 +--
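A minimal sketch of the xarray usage this describes, using the standard <linux/xarray.h> API; indexing workers by a small id and the surrounding field layout are assumptions based on the changelog:

struct vhost_worker;	/* opaque here; defined in drivers/vhost/vhost.h */

/* worker_xa would live in struct vhost_dev and be xa_init()'ed at dev setup */
static int worker_store_sketch(struct xarray *worker_xa, unsigned long id,
			       struct vhost_worker *worker)
{
	/* xa_store() returns the old entry or an errno-encoded pointer */
	return xa_err(xa_store(worker_xa, id, worker, GFP_KERNEL));
}

static struct vhost_worker *worker_find_sketch(struct xarray *worker_xa,
					       unsigned long id)
{
	return xa_load(worker_xa, id);	/* NULL if no worker with this id */
}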

[PATCH 11/14] vhost: add helper to parse userspace vring state/file

2023-04-28 Thread Mike Christie
The next patches add new vhost worker ioctls which will need to get a vhost_virtqueue from a userspace struct which specifies the vq's index. This moves the vhost_vring_ioctl code to do this to a helper so it can be shared. Signed-off-by: Mike Christie --- drivers/vhost/vhost.c | 29
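The shape of such a helper is familiar from vhost_vring_ioctl(): copy the userspace struct, validate the index, and hand back the vq. A hedged sketch (the helper name and exact signature are assumptions; struct vhost_vring_state is the existing UAPI type):

static long vhost_get_vq_from_user_sketch(struct vhost_dev *dev,
					  void __user *argp,
					  struct vhost_virtqueue **vq_out)
{
	struct vhost_vring_state s;

	if (copy_from_user(&s, argp, sizeof(s)))
		return -EFAULT;
	if (s.index >= dev->nvqs)
		return -ENOBUFS;	/* matches vhost's bad-index errno */
	*vq_out = dev->vqs[s.index];
	return 0;
}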

[PATCH 09/14] vhost: remove vhost_work_queue

2023-04-28 Thread Mike Christie
vhost_work_queue is no longer used. Each driver is using the poll or vq based queueing, so remove vhost_work_queue. Signed-off-by: Mike Christie --- drivers/vhost/vhost.c | 6 -- drivers/vhost/vhost.h | 1 - 2 files changed, 7 deletions(-) diff --git a/drivers/vhost/vhost.c

[PATCH 13/14] vhost: allow userspace to create workers

2023-04-28 Thread Mike Christie
For vhost-scsi with 3 vqs or more and a workload that tries to use them in parallel like: fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \ --ioengine=libaio --iodepth=128 --numjobs=3 the single vhost worker thread will become a bottleneck and we are stuck at around 500K IOPS no matter
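From userspace the flow is: ask the device to create a worker, get an id back, then attach one or more vqs to it. A hedged sketch of that flow; the ioctl names, request numbers, and struct layouts below are placeholders standing in for the series' new UAPI, not the real definitions:

#include <sys/ioctl.h>
#include <linux/ioctl.h>

struct vhost_worker_state_sketch { unsigned int worker_id; };
struct vhost_vring_worker_sketch { unsigned int index; unsigned int worker_id; };

#define VHOST_NEW_WORKER_SKETCH \
	_IOR(0xAF, 0x8f, struct vhost_worker_state_sketch)	/* placeholder nr */
#define VHOST_ATTACH_VRING_WORKER_SKETCH \
	_IOW(0xAF, 0x90, struct vhost_vring_worker_sketch)	/* placeholder nr */

static int attach_vq_to_new_worker(int vhost_fd, unsigned int vq_index)
{
	struct vhost_worker_state_sketch w = { 0 };
	struct vhost_vring_worker_sketch vw;

	if (ioctl(vhost_fd, VHOST_NEW_WORKER_SKETCH, &w) < 0)
		return -1;
	vw.index = vq_index;
	vw.worker_id = w.worker_id;
	return ioctl(vhost_fd, VHOST_ATTACH_VRING_WORKER_SKETCH, &vw);
}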

[PATCH 08/14] vhost_scsi: convert to vhost_vq_work_queue

2023-04-28 Thread Mike Christie
Convert from vhost_work_queue to vhost_vq_work_queue. Signed-off-by: Mike Christie --- drivers/vhost/scsi.c | 18 +- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index a77c53bb035a..1668009bd489 100644 ---

[PATCH 10/14] vhost_scsi: flush IO vqs then send TMF rsp

2023-04-28 Thread Mike Christie
With one worker we will always send the scsi cmd responses and then send the TMF rsp, because LIO will always complete the scsi cmds first and then call into us to send the TMF response. With multiple workers, the IO vq workers could be running while the TMF/ctl vq worker is, so this has us do a flush
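In other words: before the TMF response goes out, every IO vq's worker must have drained the completions already queued to it. A hedged sketch of that ordering (the flush helper and the response call are assumptions; VHOST_SCSI_VQ_IO is where the data vqs start in vhost-scsi):

static void send_tmf_resp_after_flush_sketch(struct vhost_scsi *vs,
					     struct vhost_scsi_tmf *tmf)
{
	int i;

	/* wait for completion work already queued on the IO vq workers */
	for (i = VHOST_SCSI_VQ_IO; i < vs->dev.nvqs; i++)
		vhost_vq_flush(&vs->vqs[i].vq);	/* assumed per-vq flush helper */

	scsi_send_tmf_resp_sketch(tmf);		/* stand-in for the actual rsp send;
						 * it can no longer pass the cmds */
}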

[PATCH 06/14] vhost_sock: convert to vhost_vq_work_queue

2023-04-28 Thread Mike Christie
Convert from vhost_work_queue to vhost_vq_work_queue. Signed-off-by: Mike Christie --- drivers/vhost/vsock.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c index 6578db78f0ae..817d377a3f36 100644 --- a/drivers/vhost/vsock.c

[PATCH 05/14] vhost: convert poll work to be vq based

2023-04-28 Thread Mike Christie
This has the drivers pass in their poll to vq mapping and then converts the core poll code to use the vq based helpers. In the next patches we will allow vqs to be handled by different workers, so to allow drivers to execute operations like queue, stop, flush, etc on specific polls/vqs we need to
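The gist is that a poll now remembers which vq it serves, so queueing its work goes through that vq (and, later, that vq's worker). A hedged sketch of the shape (exact signatures are assumptions; vhost_vq_work_queue is the helper named in patch 03):

struct vhost_poll_sketch {
	struct vhost_work	work;
	struct vhost_virtqueue	*vq;	/* new: the vq this poll belongs to */
	/* ... wait-queue plumbing elided ... */
};

static void vhost_poll_queue_sketch(struct vhost_poll_sketch *poll)
{
	/* route through the vq so the work lands on that vq's worker */
	vhost_vq_work_queue(poll->vq, &poll->work);
}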

[PATCH 07/14] vhost_scsi: make SCSI cmd completion per vq

2023-04-28 Thread Mike Christie
This patch separates the scsi cmd completion code paths so we can complete cmds based on their vq instead of having all cmds complete on the same worker/CPU. This will be useful with the next patches that allow us to create multiple worker threads and bind them to different vqs, so we can have
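Concretely, the completion state that used to be per device moves into the per-vq structure, so completions are queued and run on the worker that owns the vq. A hedged sketch of the struct change (field names are assumptions):

struct vhost_scsi_virtqueue_sketch {
	struct vhost_virtqueue	vq;
	struct llist_head	completion_list;	/* per vq, was per device */
	struct vhost_work	completion_work;	/* runs on this vq's worker */
	/* ... existing per-vq fields elided ... */
};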

[PATCH 04/14] vhost: take worker or vq instead of dev for flushing

2023-04-28 Thread Mike Christie
This patch has the core work flush function take a worker. When we support multiple workers we can then flush each worker during device removal, stoppage, etc. Signed-off-by: Mike Christie Acked-by: Jason Wang --- drivers/vhost/vhost.c | 24 +++- 1 file changed, 15
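The flush itself is the classic vhost trick: queue a no-op work item on the worker and wait for it to run, which guarantees everything queued before it has finished. A hedged sketch aimed at a single worker (the per-worker queue helper is an assumption):

struct vhost_flush_struct_sketch {
	struct vhost_work	work;
	struct completion	wait_event;
};

static void vhost_flush_work_sketch(struct vhost_work *work)
{
	struct vhost_flush_struct_sketch *s =
		container_of(work, struct vhost_flush_struct_sketch, work);
	complete(&s->wait_event);
}

static void vhost_worker_flush_sketch(struct vhost_worker *worker)
{
	struct vhost_flush_struct_sketch flush;

	init_completion(&flush.wait_event);
	vhost_work_init(&flush.work, vhost_flush_work_sketch);
	vhost_worker_queue_sketch(worker, &flush.work);	/* assumed per-worker queue */
	wait_for_completion(&flush.wait_event);		/* all earlier work has run */
}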

[PATCH 03/14] vhost: take worker or vq instead of dev for queueing

2023-04-28 Thread Mike Christie
This patch has the core work queueing function take a worker for when we support multiple workers. It also adds a helper that takes a vq during queueing so modules can control which vq/worker to queue work on. This temporarily leaves vhost_work_queue. It will be removed when the drivers are converted in
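So the core function takes a worker, and the vq helper simply picks that vq's worker. A hedged sketch of the layering (the flag and field names follow existing vhost_work conventions but are assumptions here):

static void vhost_worker_work_queue_sketch(struct vhost_worker *worker,
					   struct vhost_work *work)
{
	if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
		llist_add(&work->node, &worker->work_list);	/* hand off to worker */
		wake_up_process(worker->task);			/* assumed wakeup path */
	}
}

static void vhost_vq_work_queue_sketch(struct vhost_virtqueue *vq,
				       struct vhost_work *work)
{
	/* the vq->worker pointer is what patch 01 adds */
	vhost_worker_work_queue_sketch(vq->worker, work);
}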

[PATCH 01/14] vhost: add vhost_worker pointer to vhost_virtqueue

2023-04-28 Thread Mike Christie
This patchset allows userspace to map vqs to different workers. This patch adds a worker pointer to the vq so we can store that info. Signed-off-by: Mike Christie Acked-by: Jason Wang --- drivers/vhost/vhost.c | 24 +--- drivers/vhost/vhost.h | 1 + 2 files changed, 14
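A hedged sketch of the struct change (surrounding fields elided; until userspace remaps anything, every vq would point at the device's default worker):

struct vhost_virtqueue_sketch {
	struct vhost_dev	*dev;
	struct vhost_worker	*worker;	/* new: worker servicing this vq */
	/* ... existing ring and state fields ... */
};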

[PATCH 02/14] vhost, vhost_net: add helper to check if vq has work

2023-04-28 Thread Mike Christie
In the next patches each vq might have different workers so one could have work but others do not. For net, we only want to check specific vqs, so this adds a helper to check if a vq has work pending and converts vhost-net to use it. Signed-off-by: Mike Christie Acked-by: Jason Wang ---
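A hedged sketch of what such a check looks like once each vq knows its worker: test that worker's lockless pending list rather than a device-wide one (field names are assumptions):

static bool vhost_vq_has_work_sketch(struct vhost_virtqueue *vq)
{
	/* llist_empty() is the standard lockless emptiness check */
	return !llist_empty(&vq->worker->work_list);
}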

Re: [PATCH] virtio_net: suppress cpu stall when free_unused_bufs

2023-04-28 Thread Willem de Bruijn
Qi Zheng wrote: > > > On 2023/4/27 16:23, Michael S. Tsirkin wrote: > > On Thu, Apr 27, 2023 at 04:13:45PM +0800, Xuan Zhuo wrote: > >> On Thu, 27 Apr 2023 04:12:44 -0400, "Michael S. Tsirkin" > >> wrote: > >>> On Thu, Apr 27, 2023 at 03:13:44PM +0800, Xuan Zhuo wrote: > On Thu, 27 Apr

Re: [PATCH RFC net-next v2 0/4] virtio/vsock: support datagrams

2023-04-28 Thread Stefano Garzarella
On Sat, Apr 15, 2023 at 07:13:47AM +, Bobby Eshleman wrote: CC'ing virtio-...@lists.oasis-open.org because this thread is starting to touch the spec. On Wed, Apr 19, 2023 at 12:00:17PM +0200, Stefano Garzarella wrote: Hi Bobby, On Fri, Apr 14, 2023 at 11:18:40AM +, Bobby Eshleman

Re: [PATCH RFC net-next v2 3/4] vsock: Add lockless sendmsg() support

2023-04-28 Thread Stefano Garzarella
On Sat, Apr 15, 2023 at 10:30:55AM +, Bobby Eshleman wrote: On Wed, Apr 19, 2023 at 11:30:53AM +0200, Stefano Garzarella wrote: On Fri, Apr 14, 2023 at 12:25:59AM +, Bobby Eshleman wrote: > Because the dgram sendmsg() path for AF_VSOCK acquires the socket lock > it does not scale when