Use the newly introduced helper function, which does exactly the same
job of closing the queue pairs.
Signed-off-by: Feng Liu
Reviewed-by: William Tu
Reviewed-by: Parav Pandit
---
drivers/net/virtio_net.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
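A minimal sketch of what the close path looks like with such a helper;
the helper name virtnet_disable_queue_pair() and the refill-cancel detail
are assumptions here, not the actual hunk:

static int virtnet_close(struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);
	int i;

	/* make sure the delayed refill work is not running */
	cancel_delayed_work_sync(&vi->refill);

	/* one helper call tears down NAPI, XDP info, etc. per pair */
	for (i = 0; i < vi->max_queue_pairs; i++)
		virtnet_disable_queue_pair(vi, i);

	return 0;
}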
When initializing XDP in virtnet_open(), some rq xdp initialization
may hit an error causing the net device open to fail. However, previous
rqs have already initialized XDP and enabled NAPI, which is not the
expected behavior. We need to roll back the previous rq initialization
to avoid leaks in the error unwinding of the init code.
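A minimal sketch of the unwinding the fix calls for, assuming per-pair
enable/disable helpers (names hypothetical): on failure, disable every
queue pair that was already brought up:

static int virtnet_open(struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);
	int i, err;

	for (i = 0; i < vi->max_queue_pairs; i++) {
		err = virtnet_enable_queue_pair(vi, i);
		if (err < 0)
			goto err_enable_qp;
	}
	return 0;

err_enable_qp:
	/* roll back rqs whose XDP info and NAPI were already set up */
	for (i--; i >= 0; i--)
		virtnet_disable_queue_pair(vi, i);
	return err;
}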
The following patches were built over Linus's tree. They allow us to
support multiple vhost worker tasks per device. The design is a modified
version of Stefan's original idea where userspace has the kernel create a
worker and we pass back the pid. In this version, instead of passing the
pid, the kernel hands back a worker_id that userspace can then attach to
virtqueues.
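For context, a hedged userspace sketch of the flow this enables; the
ioctl and struct names below are the ones this series proposes and
should be treated as assumptions:

#include <sys/ioctl.h>
#include <linux/vhost.h>

static int bind_vq_to_new_worker(int vhost_fd, unsigned int vq_index)
{
	struct vhost_worker_state w = {};
	struct vhost_vring_worker vw = {};

	/* kernel creates a worker task and returns its id */
	if (ioctl(vhost_fd, VHOST_NEW_WORKER, &w) < 0)
		return -1;

	vw.index = vq_index;
	vw.worker_id = w.worker_id;
	/* attach the vq to the new worker */
	return ioctl(vhost_fd, VHOST_ATTACH_VRING_WORKER, &vw);
}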
This has vhost-scsi support the worker ioctls by calling the
vhost_worker_ioctl helper.
With a single worker, the single thread becomes a bottleneck when trying
to use 3 or more virtqueues like:
fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
--ioengine=libaio --iodepth=128 --numjobs=3
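A hedged sketch of the delegation described above: the vhost-scsi ioctl
handler tries the generic worker helper and falls through on
-ENOIOCTLCMD (the surrounding details are assumptions):

static long vhost_scsi_ioctl(struct file *f, unsigned int ioctl,
			     unsigned long arg)
{
	struct vhost_scsi *vs = f->private_data;
	void __user *argp = (void __user *)arg;
	long r;

	mutex_lock(&vs->dev.mutex);
	/* generic worker ioctls (create/free/attach) first */
	r = vhost_worker_ioctl(&vs->dev, ioctl, argp);
	if (r == -ENOIOCTLCMD)
		r = vhost_vring_ioctl(&vs->dev, ioctl, argp);
	mutex_unlock(&vs->dev.mutex);
	return r;
}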
The next patch allows userspace to create multiple workers per device,
so this patch replaces the vhost_worker pointer with an xarray so we
can store multiple workers and look them up.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 48 +--
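A hedged sketch of the storage change, with field names assumed: workers
live in an xarray on the device, indexed by an id that can later serve
as the userspace handle; lookup is then just xa_load(&dev->worker_xa, id):

#include <linux/xarray.h>

/* dev->worker_xa is assumed to be set up with
 * xa_init_flags(&dev->worker_xa, XA_FLAGS_ALLOC) at device init. */
static struct vhost_worker *vhost_worker_create(struct vhost_dev *dev)
{
	struct vhost_worker *worker;
	u32 id;

	worker = kzalloc(sizeof(*worker), GFP_KERNEL_ACCOUNT);
	if (!worker)
		return NULL;

	/* store at the next free index; the id becomes the worker's handle */
	if (xa_alloc(&dev->worker_xa, &id, worker, xa_limit_32b, GFP_KERNEL)) {
		kfree(worker);
		return NULL;
	}
	worker->id = id;
	return worker;
}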
The next patches add new vhost worker ioctls which will need to get a
vhost_virtqueue from a userspace struct which specifies the vq's index.
This moves the vhost_vring_ioctl code that does this into a helper so it
can be shared.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 29
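A hedged sketch of the helper (name assumed); it mirrors the bounds
check vhost_vring_ioctl already does:

static struct vhost_virtqueue *vhost_get_vq_from_user(struct vhost_dev *dev,
						      unsigned int idx)
{
	if (idx >= dev->nvqs)
		return ERR_PTR(-ENOBUFS);
	/* callers hold dev->mutex, so the returned vq stays valid */
	return dev->vqs[idx];
}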
vhost_work_queue is no longer used. Each driver is using the poll or vq
based queueing, so remove vhost_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 6 --
drivers/vhost/vhost.h | 1 -
2 files changed, 7 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
For vhost-scsi with 3 vqs or more and a workload that tries to use
them in parallel like:
fio --filename=/dev/sdb --direct=1 --rw=randrw --bs=4k \
--ioengine=libaio --iodepth=128 --numjobs=3
the single vhost worker thread will become a bottleneck and we are stuck
at around 500K IOPs no matter how many virtqueues are added.
Convert from vhost_work_queue to vhost_vq_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/scsi.c | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index a77c53bb035a..1668009bd489 100644
--- a/drivers/vhost/scsi.c
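The shape of each conversion site, as a hedged sketch (field names
assumed, not the actual hunk): work is queued on a specific vq rather
than on the device-wide worker:

-	vhost_work_queue(&vs->dev, &vs->vs_event_work);
+	vhost_vq_work_queue(&vs->vqs[VHOST_SCSI_VQ_EVT].vq, &vs->vs_event_work);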
With one worker we will always send the scsi cmd responses then send the
TMF rsp, because LIO will always complete the scsi cmds first then call
into us to send the TMF response.
With multiple workers, the IO vq workers could be running while the
TMF/ctl vq worker is, so this has us do a flush before sending the TMF
response to make sure the cmd responses are sent first.
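A hedged sketch of that ordering fix, with names assumed: flush the IO
vqs before the TMF response goes out:

static void vhost_scsi_tmf_resp_work(struct vhost_work *work)
{
	struct vhost_scsi_tmf *tmf = container_of(work, struct vhost_scsi_tmf,
						  vwork);
	int i;

	/* IO vqs may be serviced by other workers; drain their queued
	 * completions so cmd responses reach the guest before the TMF rsp */
	for (i = VHOST_SCSI_VQ_IO; i < tmf->vhost->dev.nvqs; i++)
		vhost_vq_flush(&tmf->vhost->vqs[i].vq);

	/* ... then send the TMF response as before ... */
}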
Convert from vhost_work_queue to vhost_vq_work_queue.
Signed-off-by: Mike Christie
---
drivers/vhost/vsock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 6578db78f0ae..817d377a3f36 100644
--- a/drivers/vhost/vsock.c
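The vsock conversion has the same shape; a hedged sketch (fields
assumed, not the actual hunk):

-	vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);
+	vhost_vq_work_queue(&vsock->vqs[VSOCK_VQ_RX], &vsock->send_pkt_work);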
This has the drivers pass in their poll to vq mapping and then converts
the core poll code to use the vq based helpers. In the next patches we
will allow vqs to be handled by different workers, so to allow drivers
to execute operations like queue, stop, flush, etc on specific polls/vqs
we need to know the mappings.
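A hedged sketch of the API change at a vhost-net call site (exact
arguments assumed): vhost_poll_init() grows a vq parameter, so
vhost_poll_queue() can then queue on poll->vq's worker:

-	vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, EPOLLOUT, dev);
+	vhost_poll_init(n->poll + VHOST_NET_VQ_TX, handle_tx_net, EPOLLOUT, dev,
+			vqs[VHOST_NET_VQ_TX]);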
This patch separates the scsi cmd completion code paths so we can complete
cmds based on their vq instead of having all cmds complete on the same
worker/CPU. This will be useful with the next patches that allow us to
create multiple worker threads and bind them to different vqs, so we can
have completions running on different threads/CPUs.
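A hedged sketch of the per-vq completion state this implies (struct and
field names are assumptions):

struct vhost_scsi_virtqueue {
	struct vhost_virtqueue vq;
	/* cmds are queued here and drained by this vq's completion work,
	 * instead of a single device-wide list and work */
	struct llist_head completion_list;
	struct vhost_work completion_work;
};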
This patch has the core work flush function take a worker. When we
support multiple workers we can then flush each worker during device
removal, stoppage, etc.
Signed-off-by: Mike Christie
Acked-by: Jason Wang
---
drivers/vhost/vhost.c | 24 +++-
1 file changed, 15
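A hedged sketch of a worker-scoped flush, reusing the existing
barrier-work pattern (helper names assumed): queue a flush work on that
worker and wait for it to run:

static void vhost_worker_flush(struct vhost_worker *worker)
{
	struct vhost_flush_struct flush;

	init_completion(&flush.wait_event);
	vhost_work_init(&flush.work, vhost_flush_work);
	/* the barrier work runs only after everything already queued */
	vhost_worker_queue(worker, &flush.work);
	wait_for_completion(&flush.wait_event);
}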
This patch has the core work queueing function take a worker for when we
support multiple workers. It also adds a helper that takes a vq during
queueing so modules can control which vq/worker to queue work on.
This temporarily leaves vhost_work_queue. It will be removed when the
drivers are converted in the next patches.
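A hedged sketch of the two queueing helpers described (field names and
the wakeup call are assumptions):

static void vhost_worker_queue(struct vhost_worker *worker,
			       struct vhost_work *work)
{
	if (!test_and_set_bit(VHOST_WORK_QUEUED, &work->flags)) {
		/* the list add must be visible before the task wakes */
		llist_add(&work->node, &worker->work_list);
		wake_up_process(worker->task);	/* wakeup mechanism assumed */
	}
}

void vhost_vq_work_queue(struct vhost_virtqueue *vq, struct vhost_work *work)
{
	vhost_worker_queue(vq->worker, work);
}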
This patchset allows userspace to map vqs to different workers. This
patch adds a worker pointer to the vq so we can store that info.
Signed-off-by: Mike Christie
Acked-by: Jason Wang
---
drivers/vhost/vhost.c | 24 +---
drivers/vhost/vhost.h | 1 +
2 files changed, 14
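A hedged sketch of the struct change (field placement assumed):

struct vhost_virtqueue {
	struct vhost_dev *dev;
	struct vhost_worker *worker;	/* worker this vq's work is queued to */
	/* ... existing fields unchanged ... */
};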
In the next patches each vq might have different workers so one could
have work but others do not. For net, we only want to check specific vqs,
so this adds a helper to check if a vq has work pending and converts
vhost-net to use it.
Signed-off-by: Mike Christie
Acked-by: Jason Wang
---
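A hedged sketch of the helper (names believed to match the series, but
treat them as assumptions):

bool vhost_vq_has_work(struct vhost_virtqueue *vq)
{
	return !llist_empty(&vq->worker->work_list);
}

vhost-net's busy-poll loop can then check vhost_vq_has_work(vq) on the
vq it is polling instead of the device-wide vhost_has_work().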
On Sat, Apr 15, 2023 at 07:13:47AM +0000, Bobby Eshleman wrote:
CC'ing virtio-...@lists.oasis-open.org because this thread is starting
to touch the spec.
> Because the dgram sendmsg() path for AF_VSOCK acquires the socket lock
> it does not scale when many senders share a socket.