[Qemu-devel] [PATCH] vhost-user: verify that number of queues is less than MAX_QUEUE_NUM

2016-02-24 Thread Ilya Maximets
Fix QEMU crash when -netdev vhost-user,queues=n is passed with number of queues greater than MAX_QUEUE_NUM. Signed-off-by: Ilya Maximets --- net/vhost-user.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/net/vhost-user.c b/net/vhost-user.c index 451dbbf..b753b3d

[Qemu-devel] [PATCH 3/4] vhost: check for vhost_net device validity.

2016-03-30 Thread Ilya Maximets
qdisc <...> link/ether 00:16:35:af:aa:4b brd ff:ff:ff:ff:ff:ff [---- cut ---] Signed-off-by: Ilya Maximets --- hw/net/vhost_net.c | 18 +- 1 file changed, 17 insertions(+), 1 deletion(-) diff --git a/hw/net/vhost_net.c b/hw/net/vhost_n

[Qemu-devel] [PATCH 4/4] net: notify about link status only if it changed.

2016-03-30 Thread Ilya Maximets
No need to notify nc->peer if nothing changed. Signed-off-by: Ilya Maximets --- net/net.c | 7 --- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/net/net.c b/net/net.c index 3b5a142..6f6a8ce 100644 --- a/net/net.c +++ b/net/net.c @@ -1385,9 +1385,10 @@ void qmp_set_link(co

[Qemu-devel] [PATCH 1/4] vhost-user: fix crash on socket disconnect.

2016-03-30 Thread Ilya Maximets
io_net_vhost_status #8 virtio_net_set_status #9 virtio_set_status <...> [ cut ---] Fix that by introducing of reference counter for vhost_net device and freeing memory only after dropping of last reference. Signed-off-by: Ilya Maximets --- hw/net/vho

[Qemu-devel] [PATCH 2/4] vhost: prevent double stop of vhost_net device.

2016-03-30 Thread Ilya Maximets
---] In example above assertion will fail when control will be brought back to function at #17 and it will try to free 'eventfd' that was already freed at call #3. Fix that by disallowing execution of vhost_net_stop() if we're already inside of it. Signed-of

[Qemu-devel] [PATCH 0/4] Fix QEMU crash on vhost-user socket disconnect.

2016-03-30 Thread Ilya Maximets
estarted to restore communication after restarting of vhost-user application. Ilya Maximets (4): vhost-user: fix crash on socket disconnect. vhost: prevent double stop of vhost_net device. vhost: check for vhost_net device validity. net: notify about li

Re: [Qemu-devel] [PATCH 0/4] Fix QEMU crash on vhost-user socket disconnect.

2016-03-30 Thread Ilya Maximets
On 30.03.2016 20:01, Michael S. Tsirkin wrote: > On Wed, Mar 30, 2016 at 06:14:05PM +0300, Ilya Maximets wrote: >> Currently QEMU always crashes in following scenario (assume that >> vhost-user application is Open vSwitch with 'dpdkvhostuser' port): > > In fact, wo

Re: [Qemu-devel] [PATCH 0/4] Fix QEMU crash on vhost-user socket disconnect.

2016-03-31 Thread Ilya Maximets
On 31.03.2016 12:21, Michael S. Tsirkin wrote: > On Thu, Mar 31, 2016 at 09:02:01AM +0300, Ilya Maximets wrote: >> On 30.03.2016 20:01, Michael S. Tsirkin wrote: >>> On Wed, Mar 30, 2016 at 06:14:05PM +0300, Ilya Maximets wrote: >>>> Currently QEMU always crashes in f

Re: [Qemu-devel] [PATCH 0/4] Fix QEMU crash on vhost-user socket disconnect.

2016-04-06 Thread Ilya Maximets
--- Original Message --- Sender : Michael S. Tsirkin Date : Apr 05, 2016 13:46 (GMT+03:00) Title : Re: [PATCH 0/4] Fix QEMU crash on vhost-user socket disconnect. > On Thu, Mar 31, 2016 at 09:02:01AM +0300, Ilya Maximets wrote: > > On 30.03.2016 20:01, Michael S. Tsirkin wrote:

Re: [Qemu-devel] [PATCH 0/4] Fix QEMU crash on vhost-user socket disconnect.

2016-04-07 Thread Ilya Maximets
> --- Original Message --- > Sender : Michael S. Tsirkin > Date : Apr 07, 2016 10:01 (GMT+03:00) > Title : Re: Re: [PATCH 0/4] Fix QEMU crash on vhost-user socket disconnect. > > On Wed, Apr 06, 2016 at 11:52:56PM +0000, Ilya Maximets wrote: > > --- Original Me

Re: [PATCH v2] net: add initial support for AF_XDP network backend

2023-07-20 Thread Ilya Maximets
On 7/20/23 09:37, Jason Wang wrote: > On Thu, Jul 6, 2023 at 4:58 AM Ilya Maximets wrote: >> >> AF_XDP is a network socket family that allows communication directly >> with the network device driver in the kernel, bypassing most or all >> of the kernel networking

Re: [PATCH v2] net: add initial support for AF_XDP network backend

2023-08-04 Thread Ilya Maximets
On 7/25/23 08:55, Jason Wang wrote: > On Thu, Jul 20, 2023 at 9:26 PM Ilya Maximets wrote: >> >> On 7/20/23 09:37, Jason Wang wrote: >>> On Thu, Jul 6, 2023 at 4:58 AM Ilya Maximets wrote: >>>> >>>> AF_XDP is a network socket family that allows com

[PATCH v3] net: add initial support for AF_XDP network backend

2023-08-04 Thread Ilya Maximets
: 1.0 Mpps L2 FWD Loopback : 0.7 Mpps Results in skb mode or over the veth are close to results of a tap backend with vhost=on and disabled segmentation offloading bridged with a NIC. Signed-off-by: Ilya Maximets --- Version 3: - Bump requirements to libxdp 1.4.0+. Having that, rem

Re: [Qemu-devel] [PATCH v2 0/4] memfd fixes.

2019-03-11 Thread Ilya Maximets
Best regards, Ilya Maximets. On 27.11.2018 16:50, Ilya Maximets wrote: > Version 2: > * First patch changed to just drop the memfd backend > if seals are not supported. > > Ilya Maximets (4): > hostmem-memfd: disable for systems wihtout sealing support >

[Qemu-devel] [PATCH v3 0/4] memfd fixes.

2019-03-11 Thread Ilya Maximets
Version 3: * Rebase on top of current master. Version 2: * First patch changed to just drop the memfd backend if seals are not supported. Ilya Maximets (4): hostmem-memfd: disable for systems wihtout sealing support memfd: always check for MFD_CLOEXEC memfd: set up correct

[Qemu-devel] [PATCH v3 1/4] hostmem-memfd: disable for systems wihtout sealing support

2019-03-11 Thread Ilya Maximets
em,size=2M,: \ failed to create memfd: Invalid argument and actually breaks the feature on such systems. Let's restrict memfd backend to systems with sealing support. Signed-off-by: Ilya Maximets --- backends/hostmem-memfd.c | 18 -- tests/vhost-user-test.c | 5 +++--

[Qemu-devel] [PATCH v3 2/4] memfd: always check for MFD_CLOEXEC

2019-03-11 Thread Ilya Maximets
QEMU always sets this flag unconditionally. We need to check if it's supported. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/util/memfd.c b/util/memfd.c index 8debd0d037..d74ce4d793 100644

[Qemu-devel] [PATCH v3 3/4] memfd: set up correct errno if not supported

2019-03-11 Thread Ilya Maximets
qemu_memfd_create() prints the value of 'errno' which is not set in this case. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 1 + 1 file changed, 1 insertion(+) diff --git a/util/memfd.c b/util/memfd.c index d74ce4d793..393d23da96 100644 --- a/ut

[Qemu-devel] [PATCH v3 4/4] memfd: improve error messages

2019-03-11 Thread Ilya Maximets
This gives more information about the failure. Additionally 'ENOSYS' returned for non-Linux platforms instead of 'errno', which is not initialized in this case. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 7 ++- 1 file changed


[PATCH] vhost_net: Print feature masks in hex

2022-03-18 Thread Ilya Maximets
"0x2" is much more readable than "8589934592". The change saves one step (conversion) while debugging. Signed-off-by: Ilya Maximets --- hw/net/vhost_net.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/hw/net/vhost_net.c b/hw/net/vhost_ne

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-25 Thread Ilya Maximets
On 9/25/23 16:23, Stefan Hajnoczi wrote: > On Fri, 25 Aug 2023 at 13:04, Ilya Maximets wrote: >> >> We do not need the most up to date number of heads, we only want to >> know if there is at least one. >> >> Use shadow variable as long as it is not equal to th

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-25 Thread Ilya Maximets
On 9/25/23 17:12, Stefan Hajnoczi wrote: > On Mon, 25 Sept 2023 at 11:02, Ilya Maximets wrote: >> >> On 9/25/23 16:23, Stefan Hajnoczi wrote: >>> On Fri, 25 Aug 2023 at 13:04, Ilya Maximets wrote: >>>> >>>> We do not need the most up to date number

Re: [PATCH] virtio: remove unnecessary thread fence while reading next descriptor

2023-09-25 Thread Ilya Maximets
On 9/25/23 16:32, Stefan Hajnoczi wrote: > On Fri, 25 Aug 2023 at 13:02, Ilya Maximets wrote: >> >> It was supposed to be a compiler barrier and it was a compiler barrier >> initially called 'wmb' (??) when virtio core support was introduced. >> Later all th

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-25 Thread Ilya Maximets
On 9/25/23 17:38, Stefan Hajnoczi wrote: > On Mon, 25 Sept 2023 at 11:36, Ilya Maximets wrote: >> >> On 9/25/23 17:12, Stefan Hajnoczi wrote: >>> On Mon, 25 Sept 2023 at 11:02, Ilya Maximets wrote: >>>> >>>> On 9/25/23 16:23, Stefan Hajnoczi

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-25 Thread Ilya Maximets
On 9/25/23 23:24, Michael S. Tsirkin wrote: > On Mon, Sep 25, 2023 at 10:58:05PM +0200, Ilya Maximets wrote: >> On 9/25/23 17:38, Stefan Hajnoczi wrote: >>> On Mon, 25 Sept 2023 at 11:36, Ilya Maximets wrote: >>>> >>>> On 9/25/23 17:12, Stefan Hajnoczi

[PATCH v2] virtio: use shadow_avail_idx while checking number of heads

2023-09-27 Thread Ilya Maximets
itself. The change improves performance of the af-xdp network backend by 2-3%. Signed-off-by: Ilya Maximets --- Version 2: - Changed to not skip error checks and a barrier. - Added comments about the need for a barrier. hw/virtio/virtio.c | 18 +++--- 1 file changed, 15

[PATCH v2 2/2] virtio: remove unused next argument from virtqueue_split_read_next_desc()

2023-09-27 Thread Ilya Maximets
"virtio: combine the read of a descriptor") Remove the unused argument to simplify the code. Also, adding a comment to the function to describe what it is actually doing, as it is not obvious that the 'desc' is both an input and an output argument. Signed-off-by: Ilya Maximets

[PATCH v2 1/2] virtio: remove unnecessary thread fence while reading next descriptor

2023-09-27 Thread Ilya Maximets
't need to be an actual barrier, as its only purpose was to ensure that the value is not read twice. And since commit aa570d6fb6bd ("virtio: combine the read of a descriptor") there is no need for a barrier at all, since we're no longer reading guest memory here, but accessing a local

[PATCH v2 0/2] virtio: clean up of virtqueue_split_read_next_desc()

2023-09-27 Thread Ilya Maximets
Version 2: - Converted into a patch set adding a new patch that removes the 'next' argument. [Stefan] - Completely removing the barrier instead of changing into compiler barrier. [Stefan] Ilya Maximets (2): virtio: remove unnecessary thread fence while reading next

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-27 Thread Ilya Maximets
On 9/26/23 00:24, Michael S. Tsirkin wrote: > On Tue, Sep 26, 2023 at 12:13:11AM +0200, Ilya Maximets wrote: >> On 9/25/23 23:24, Michael S. Tsirkin wrote: >>> On Mon, Sep 25, 2023 at 10:58:05PM +0200, Ilya Maximets wrote: >>>> On 9/25/23 17:38, Stefan Hajnoczi wrote:

Re: [PATCH] virtio: remove unnecessary thread fence while reading next descriptor

2023-09-27 Thread Ilya Maximets
On 9/25/23 20:04, Ilya Maximets wrote: > On 9/25/23 16:32, Stefan Hajnoczi wrote: >> On Fri, 25 Aug 2023 at 13:02, Ilya Maximets wrote: >>> >>> It was supposed to be a compiler barrier and it was a compiler barrier >>> initially called 'wmb' (??) when

Re: [PATCH] virtio: remove unnecessary thread fence while reading next descriptor

2023-09-27 Thread Ilya Maximets
On 9/27/23 17:41, Michael S. Tsirkin wrote: > On Wed, Sep 27, 2023 at 04:06:41PM +0200, Ilya Maximets wrote: >> On 9/25/23 20:04, Ilya Maximets wrote: >>> On 9/25/23 16:32, Stefan Hajnoczi wrote: >>>> On Fri, 25 Aug 2023 at 13:02, Ilya Maximets wrote: >>>&g

Re: [PULL 00/17] Net patches

2023-09-13 Thread Ilya Maximets
On 9/8/23 16:15, Daniel P. Berrangé wrote: > On Fri, Sep 08, 2023 at 04:06:35PM +0200, Ilya Maximets wrote: >> On 9/8/23 14:15, Daniel P. Berrangé wrote: >>> On Fri, Sep 08, 2023 at 02:00:47PM +0200, Ilya Maximets wrote: >>>> On 9/8/23 13:49, Daniel P. Berrangé wrote:

Re: [PULL 00/17] Net patches

2023-09-18 Thread Ilya Maximets
On 9/14/23 10:13, Daniel P. Berrangé wrote: > On Wed, Sep 13, 2023 at 08:46:42PM +0200, Ilya Maximets wrote: >> On 9/8/23 16:15, Daniel P. Berrangé wrote: >>> On Fri, Sep 08, 2023 at 04:06:35PM +0200, Ilya Maximets wrote: >>>> On 9/8/23 14:15, Daniel P. Berrangé wro

Re: [PULL 00/17] Net patches

2023-09-19 Thread Ilya Maximets
On 9/19/23 10:40, Daniel P. Berrangé wrote: > On Mon, Sep 18, 2023 at 09:36:10PM +0200, Ilya Maximets wrote: >> On 9/14/23 10:13, Daniel P. Berrangé wrote: >>> On Wed, Sep 13, 2023 at 08:46:42PM +0200, Ilya Maximets wrote: >>>> On 9/8/23 16:15, Daniel P. Berrangé wro

Re: [PATCH v2] virtio: don't zero out memory region cache for indirect descriptors

2023-09-25 Thread Ilya Maximets
On 8/11/23 16:34, Ilya Maximets wrote: > Lots of virtio functions that are on a hot path in data transmission > are initializing indirect descriptor cache at the point of stack > allocation. It's a 112 byte structure that is getting zeroed out on > each call adding unnecessar

Re: [PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-09-25 Thread Ilya Maximets
On 8/25/23 19:04, Ilya Maximets wrote: > We do not need the most up to date number of heads, we only want to > know if there is at least one. > > Use shadow variable as long as it is not equal to the last available > index checked. This avoids expensive qatomic dereference of the

Re: [PATCH] virtio: remove unnecessary thread fence while reading next descriptor

2023-09-25 Thread Ilya Maximets
On 8/25/23 19:01, Ilya Maximets wrote: > It was supposed to be a compiler barrier and it was a compiler barrier > initially called 'wmb' (??) when virtio core support was introduced. > Later all the instances of 'wmb' were switched to smp_wmb to fix memory > orde

[PATCH] virtio: don't zero out memory region cache for indirect descriptors

2023-08-07 Thread Ilya Maximets
s in terms of 64B packets per second by 6-14 % depending on the case. Tested with a proposed af-xdp network backend and a dpdk testpmd application in the guest, but should be beneficial for other virtio devices as well. Signed-off-by: Ilya Maximets --- hw/virtio/vir

Re: [PATCH] virtio: don't zero out memory region cache for indirect descriptors

2023-08-11 Thread Ilya Maximets
On 8/9/23 04:37, Jason Wang wrote: > On Tue, Aug 8, 2023 at 6:28 AM Ilya Maximets wrote: >> >> Lots of virtio functions that are on a hot path in data transmission >> are initializing indirect descriptor cache at the point of stack >> allocation. It's a 112 byte

Re: [PATCH] virtio: don't zero out memory region cache for indirect descriptors

2023-08-11 Thread Ilya Maximets
On 8/10/23 17:50, Stefan Hajnoczi wrote: > On Tue, Aug 08, 2023 at 12:28:47AM +0200, Ilya Maximets wrote: >> Lots of virtio functions that are on a hot path in data transmission >> are initializing indirect descriptor cache at the point of stack >> allocation. It's a

Re: [PATCH] virtio: don't zero out memory region cache for indirect descriptors

2023-08-11 Thread Ilya Maximets
On 8/11/23 15:58, Stefan Hajnoczi wrote: > > > On Fri, Aug 11, 2023, 08:50 Ilya Maximets <mailto:i.maxim...@ovn.org>> wrote: > > On 8/10/23 17:50, Stefan Hajnoczi wrote: > > On Tue, Aug 08, 2023 at 12:28:47AM +0200, Ilya Maximets wrote: > >>

[PATCH v2] virtio: don't zero out memory region cache for indirect descriptors

2023-08-11 Thread Ilya Maximets
n terms of 64B packets per second by 6-14 % depending on the case. Tested with a proposed af-xdp network backend and a dpdk testpmd application in the guest, but should be beneficial for other virtio devices as well. Signed-off-by: Ilya Maximets --- Version 2: * Introduced an initialization fu

Re: [PATCH 1/2] virtio: use blk_io_plug_call() in virtio_irqfd_notify()

2023-08-16 Thread Ilya Maximets
mit will remove it. I'm likely missing something, but could you explain why it is safe to batch unconditionally here? The current BH code, as you mentioned in the second patch, is only batching if EVENT_IDX is not set. Maybe worth adding a few words in the commit message for people like me, who are a bit

Re: [PATCH 1/2] virtio: use blk_io_plug_call() in virtio_irqfd_notify()

2023-08-16 Thread Ilya Maximets
On 8/16/23 17:30, Stefan Hajnoczi wrote: > On Wed, Aug 16, 2023 at 03:36:32PM +0200, Ilya Maximets wrote: >> On 8/15/23 14:08, Stefan Hajnoczi wrote: >>> virtio-blk and virtio-scsi invoke virtio_irqfd_notify() to send Used >>> Buffer Notifications from an IOThrea

Re: [PATCH v2 3/4] virtio: use defer_call() in virtio_irqfd_notify()

2023-08-21 Thread Ilya Maximets
On 8/17/23 17:58, Stefan Hajnoczi wrote: > virtio-blk and virtio-scsi invoke virtio_irqfd_notify() to send Used > Buffer Notifications from an IOThread. This involves an eventfd > write(2) syscall. Calling this repeatedly when completing multiple I/O > requests in a row is wasteful. > > Use the de

[PATCH] memory: initialize 'fv' in MemoryRegionCache to make Coverity happy

2023-10-09 Thread Ilya Maximets
1631 err_undo_map: 1632 virtqueue_undo_map_desc(out_num, in_num, iov); ** CID 1522370: Memory - illegal accesses (UNINIT) Instead of trying to silence these false positive reports in 4 different places, initializing 'fv' as well, as this doesn't result in any noti

[PATCH v2] net: add initial support for AF_XDP network backend

2023-07-05 Thread Ilya Maximets
Tx only : 1.2 Mpps Rx only : 1.0 Mpps L2 FWD Loopback : 0.7 Mpps Results in skb mode or over the veth are close to results of a tap backend with vhost=on and disabled segmentation offloading bridged with a NIC. Signed-off-by: Ilya Maximets --- Version 2: - Added sup

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-07-07 Thread Ilya Maximets
2023 at 4:15 PM Stefan Hajnoczi >>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> On Wed, 28 Jun 2023 at 09:59, Jason Wang wrote: >>>>>>>>>>> >>>>>>>>>>> On

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-07-10 Thread Ilya Maximets
On 7/10/23 05:51, Jason Wang wrote: > On Fri, Jul 7, 2023 at 7:21 PM Ilya Maximets wrote: >> >> On 7/7/23 03:43, Jason Wang wrote: >>> On Fri, Jul 7, 2023 at 3:08 AM Stefan Hajnoczi wrote: >>>> >>>> On Wed, 5 Jul 2023 at 02:02, Jason Wang wrote: &

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-06-26 Thread Ilya Maximets
On 6/26/23 08:32, Jason Wang wrote: > On Sun, Jun 25, 2023 at 3:06 PM Jason Wang wrote: >> >> On Fri, Jun 23, 2023 at 5:58 AM Ilya Maximets wrote: >>> >>> AF_XDP is a network socket family that allows communication directly >>> with the network device d

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-06-27 Thread Ilya Maximets
On 6/27/23 04:54, Jason Wang wrote: > On Mon, Jun 26, 2023 at 9:17 PM Ilya Maximets wrote: >> >> On 6/26/23 08:32, Jason Wang wrote: >>> On Sun, Jun 25, 2023 at 3:06 PM Jason Wang wrote: >>>> >>>> On Fri, Jun 23, 2023 at 5:58 AM Ilya Maximets wrot

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-06-27 Thread Ilya Maximets
> > Whether you pursue the passthrough approach or not, making -netdev > af-xdp work in an environment where QEMU runs unprivileged seems like > the most important practical issue to solve. Yes, working on it. Doesn't seem to be hard to do, but I need to test. Best regards, Ilya Maximets.

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-06-28 Thread Ilya Maximets
On 6/28/23 05:27, Jason Wang wrote: > On Wed, Jun 28, 2023 at 6:45 AM Ilya Maximets wrote: >> >> On 6/27/23 04:54, Jason Wang wrote: >>> On Mon, Jun 26, 2023 at 9:17 PM Ilya Maximets wrote: >>>> >>>> On 6/26/23 08:32, Jason Wang wrote: >>

Re: [PATCH] net: add initial support for AF_XDP network backend

2023-06-30 Thread Ilya Maximets
On 6/30/23 09:44, Jason Wang wrote: > On Wed, Jun 28, 2023 at 7:14 PM Ilya Maximets wrote: >> >> On 6/28/23 05:27, Jason Wang wrote: >>> On Wed, Jun 28, 2023 at 6:45 AM Ilya Maximets wrote: >>>> >>>> On 6/27/23 04:54, Jason Wang wrote: >>&g

[PATCH] net: add initial support for AF_XDP network backend

2023-06-22 Thread Ilya Maximets
pps L2 FWD Loopback : 0.7 Mpps Results in skb mode or over the veth are close to results of a tap backend with vhost=on and disabled segmentation offloading bridged with a NIC. Signed-off-by: Ilya Maximets --- MAINTAINERS | 4 + hmp-commands.hx

Re: [PULL 00/17] Net patches

2023-09-08 Thread Ilya Maximets
On 9/8/23 13:19, Stefan Hajnoczi wrote: > Hi Ilya and Jason, > There is a CI failure related to a missing Debian libxdp-dev package: > https://gitlab.com/qemu-project/qemu/-/jobs/5046139967 > > I think the issue is that the debian-amd64 container image that QEMU > uses for testing is based on Debi

Re: [PULL 12/17] net: add initial support for AF_XDP network backend

2023-09-08 Thread Ilya Maximets
On 9/8/23 13:48, Daniel P. Berrangé wrote: > On Fri, Sep 08, 2023 at 02:45:02PM +0800, Jason Wang wrote: >> From: Ilya Maximets >> >> AF_XDP is a network socket family that allows communication directly >> with the network device driver in the kernel, bypassing m

Re: [PULL 00/17] Net patches

2023-09-08 Thread Ilya Maximets
On 9/8/23 13:49, Daniel P. Berrangé wrote: > On Fri, Sep 08, 2023 at 01:34:54PM +0200, Ilya Maximets wrote: >> On 9/8/23 13:19, Stefan Hajnoczi wrote: >>> Hi Ilya and Jason, >>> There is a CI failure related to a missing Debian libxdp-dev package: >>> https:/

Re: [PULL 00/17] Net patches

2023-09-08 Thread Ilya Maximets
On 9/8/23 14:15, Daniel P. Berrangé wrote: > On Fri, Sep 08, 2023 at 02:00:47PM +0200, Ilya Maximets wrote: >> On 9/8/23 13:49, Daniel P. Berrangé wrote: >>> On Fri, Sep 08, 2023 at 01:34:54PM +0200, Ilya Maximets wrote: >>>> On 9/8/23 13:19, Stefan Hajnoczi

[PATCH v4 0/2] net: add initial support for AF_XDP network backend

2023-09-13 Thread Ilya Maximets
having 32 MB of RLIMIT_MEMLOCK per queue. - Refined and extended documentation. Ilya Maximets (2): tests: bump libvirt-ci for libasan and libxdp net: add initial support for AF_XDP network backend MAINTAINERS | 4 + hmp-commands.hx

[PATCH v4 1/2] tests: bump libvirt-ci for libasan and libxdp

2023-09-13 Thread Ilya Maximets
This pulls in the fixes for libasan version as well as support for libxdp that will be used for af-xdp netdev in the next commits. Signed-off-by: Ilya Maximets --- tests/docker/dockerfiles/debian-amd64-cross.docker | 2 +- tests/docker/dockerfiles/debian-amd64.docker | 2 +- tests

[PATCH v4 2/2] net: add initial support for AF_XDP network backend

2023-09-13 Thread Ilya Maximets
: 1.0 Mpps L2 FWD Loopback : 0.7 Mpps Results in skb mode or over the veth are close to results of a tap backend with vhost=on and disabled segmentation offloading bridged with a NIC. Signed-off-by: Ilya Maximets --- MAINTAINERS | 4

Re: [PATCH v3 2/4] tests/lcitool: Refresh generated files

2024-01-02 Thread Ilya Maximets
refresh' on current git master that doesn't happen > > FTR since commit cb039ef3d9 libxdp-devel is also being changed on my > host, similarly to libpmem-devel, so I suppose it also has some host > specific restriction. Yeah, many distributions are not building libxdp for non

[PATCH] virtio: remove unnecessary thread fence while reading next descriptor

2023-08-25 Thread Ilya Maximets
't need to be an actual barrier. It's enough for it to stay a compiler barrier as its only purpose is to ensure that the value is not read twice. There is no counterpart read barrier in the drivers, AFAICT. And even if we needed an actual barrier, it shouldn't have been a write bar

[PATCH] virtio: use shadow_avail_idx while checking number of heads

2023-08-25 Thread Ilya Maximets
itself and the subsequent memory barrier. The change improves performance of the af-xdp network backend by 2-3%. Signed-off-by: Ilya Maximets --- hw/virtio/virtio.c | 10 +- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c index

Re: [Qemu-devel] [v5, 09/31] vhost: fix calling vhost_dev_cleanup() after vhost_dev_init()

2016-07-25 Thread Ilya Maximets
dev->nvqs; ++i, ++n_initialized_vqs) {
         r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index + i);
         if (r < 0) {
-            hdev->nvqs = i;
             goto fail;
         }
     }
@@ -1136,6 +1137,7 @@ fail_busyloop:
         vhost_virtqueue_set_busyloop_timeout(hdev, hdev->vq_index + i, 0);
     }
 fail:
+    hdev->nvqs = n_initialized_vqs;
     vhost_dev_cleanup(hdev);
     return r;
 }
--
Best regards, Ilya Maximets.

Re: [Qemu-devel] [v5, 17/31] vhost-user: keep vhost_net after a disconnection

2016-07-25 Thread Ilya Maximets
+42,7 @@ uint64_t vhost_user_get_acked_features(NetClientState *nc)
 {
     VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
     assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_USER);
-    return s->vhost_net ? vhost_net_get_acked_features(s->vhost_net) : 0;
+    return s->acked_features;
 }

 static void vhost_user_stop(int queues, NetClientState *ncs[])
@@ -55,6 +56,11 @@ static void vhost_user_stop(int queues, NetClientState *ncs[])
         s = DO_UPCAST(VhostUserState, nc, ncs[i]);

         if (s->vhost_net) {
+            /* save acked features */
+            uint64_t features = vhost_net_get_acked_features(s->vhost_net);
+            if (features) {
+                s->acked_features = features;
+            }
             vhost_net_cleanup(s->vhost_net);
         }
     }
--
Best regards, Ilya Maximets.

Re: [Qemu-devel] [v5, 09/31] vhost: fix calling vhost_dev_cleanup() after vhost_dev_init()

2016-07-25 Thread Ilya Maximets
-1069,10 +1071,9 @@ int vhost_dev_init(struct vhost_dev *hdev, void
>> *opaque,
>>          goto fail;
>>      }
>>
>> -    for (i = 0; i < hdev->nvqs; ++i) {
>> +    for (i = 0; i < hdev->nvqs; ++i, ++n_initialized_vqs) {
>>          r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index + i);
>>          if (r < 0) {
>> -            hdev->nvqs = i;
>
> Isn't that assignment doing the same thing?

Yes. But assignment to zero (hdev->nvqs = 0) required before all previous
'goto fail;' instructions. I think, it's not a clean solution.

> btw, thanks for the review

>>              goto fail;
>>          }
>>      }
>> @@ -1136,6 +1137,7 @@ fail_busyloop:
>>          vhost_virtqueue_set_busyloop_timeout(hdev, hdev->vq_index + i, 0);
>>      }
>>  fail:
>> +    hdev->nvqs = n_initialized_vqs;
>>      vhost_dev_cleanup(hdev);
>>      return r;
>>  }
>> --
>>
>> Best regards, Ilya Maximets.

Re: [Qemu-devel] [v5, 09/31] vhost: fix calling vhost_dev_cleanup() after vhost_dev_init()

2016-07-25 Thread Ilya Maximets
_init(struct vhost_dev *hdev, void
>>>> *opaque,
>>>>                    VhostBackendType backend_type, uint32_t
>>>> busyloop_timeout)
>>>>  {
>>>>      uint64_t features;
>>>> -    int i, r;
>>>> +    int i, r, n_initialized_vqs;
>>>>
>>>> +    n_initialized_vqs = 0;
>>>>      hdev->migration_blocker = NULL;
>>>>
>>>>      r = vhost_set_backend_type(hdev, backend_type);
>>>>
>>>> @@ -1069,10 +1071,9 @@ int vhost_dev_init(struct vhost_dev *hdev, void
>>>> *opaque,
>>>>          goto fail;
>>>>      }
>>>>
>>>> -    for (i = 0; i < hdev->nvqs; ++i) {
>>>> +    for (i = 0; i < hdev->nvqs; ++i, ++n_initialized_vqs) {
>>>>          r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index +
>>>> i);
>>>>          if (r < 0) {
>>>> -            hdev->nvqs = i;
>>>
>>> Isn't that assignment doing the same thing?
>>
>> Yes.
>> But assignment to zero (hdev->nvqs = 0) required before all previous
>> 'goto fail;' instructions. I think, it's not a clean solution.
>
> Good point, I'll squash your change, Thanks for fixing it.
> should I add your sign-off-by?

I don't mind if you want to.

Best regards, Ilya Maximets.

[Qemu-devel] [PATCH] vhost: check for vhost_ops before using.

2016-08-02 Thread Ilya Maximets
ool -L eth0 combined 2' if vhost disconnected. Signed-off-by: Ilya Maximets --- hw/net/vhost_net.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c index dc61dc1..f2d49ad 100644 --- a/hw/net/vhost_net.c +++ b/hw/net/vhost_net.c @@ -428,7 +

[Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-06 Thread Ilya Maximets
'vhost_net_stop' to avoid any possible double frees and segmentation faults due to using of already freed resources by setting 'vhost_started' flag to zero prior to 'vhost_net_stop' call. Signed-off-by: Ilya Maximets --- This issue was already addressed more than a year ago by th

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-06 Thread Ilya Maximets
On 06.12.2017 19:45, Michael S. Tsirkin wrote: > On Wed, Dec 06, 2017 at 04:06:18PM +0300, Ilya Maximets wrote: >> In case virtio error occurred after vhost_dev_close(), qemu will crash >> in nested cleanup while checking IOMMU flag because dev->vdev already >> set to zero a

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-08 Thread Ilya Maximets
On 07.12.2017 20:27, Michael S. Tsirkin wrote: > On Thu, Dec 07, 2017 at 09:39:36AM +0300, Ilya Maximets wrote: >> On 06.12.2017 19:45, Michael S. Tsirkin wrote: >>> On Wed, Dec 06, 2017 at 04:06:18PM +0300, Ilya Maximets wrote: >>>> In case virtio error occured afte

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-13 Thread Ilya Maximets
On 11.12.2017 07:35, Michael S. Tsirkin wrote: > On Fri, Dec 08, 2017 at 05:54:18PM +0300, Ilya Maximets wrote: >> On 07.12.2017 20:27, Michael S. Tsirkin wrote: >>> On Thu, Dec 07, 2017 at 09:39:36AM +0300, Ilya Maximets wrote: >>>> On 06.12.2017 19:45, Michael S. Ts

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-13 Thread Ilya Maximets
On 13.12.2017 22:48, Michael S. Tsirkin wrote: > On Wed, Dec 13, 2017 at 04:45:20PM +0300, Ilya Maximets wrote: >>>> That >>>> looks very strange. Some of the functions gets 'old_status', others >>>> the 'new_status'. I'm a bit

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-14 Thread Ilya Maximets
of broken guest index. Thanks. Best regards, Ilya Maximets. P.S. Previously I mentioned that I can not reproduce virtio driver crash with "[PATCH] virtio_error: don't invoke status callbacks" applied. I was wrong. I can reproduce now. System was misconfigured. So

Re: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop

2017-12-14 Thread Ilya Maximets
On 14.12.2017 17:31, Ilya Maximets wrote: > One update for the testing scenario: > > No need to kill OVS. The issue reproducible with simple 'del-port' > and 'add-port'. virtio driver in guest could crash on both operations. > Most times it crashes in m

Re: [Qemu-devel] [PATCH v2 1/4] hostmem-memfd: disable for systems wihtout sealing support

2018-12-11 Thread Ilya Maximets
On 10.12.2018 19:18, Igor Mammedov wrote: > On Tue, 27 Nov 2018 16:50:27 +0300 > Ilya Maximets wrote: > > s/wihtout/without/ in subj > >> If seals are not supported, memfd_create() will fail. >> Furthermore, there is no way to disable it in this case because

Re: [Qemu-devel] [PATCH v2 1/4] hostmem-memfd: disable for systems wihtout sealing support

2018-12-11 Thread Ilya Maximets
On 11.12.2018 13:53, Daniel P. Berrangé wrote: > On Tue, Nov 27, 2018 at 04:50:27PM +0300, Ilya Maximets wrote: >> If seals are not supported, memfd_create() will fail. >> Furthermore, there is no way to disable it in this case because >> '.seal' property is not

[Qemu-devel] Are FreeBSD guest images working?

2018-11-15 Thread Ilya Maximets
achine accel=kvm -m 2048 \ -cpu host -enable-kvm -nographic -smp 2 \ -drive if=virtio,file=./FreeBSD-11.2-RELEASE-amd64.qcow2,format=qcow2 Best regards, Ilya Maximets.

[Qemu-devel] [RFC 0/2] vhost+postcopy fixes

2018-10-08 Thread Ilya Maximets
Sending as RFC because it's not fully tested yet. Ilya Maximets (2): migration: Stop postcopy fault thread before notifying vhost-user: Fix userfaultfd leak hw/virtio/vhost-user.c | 7 +++ migration/postcopy-ram.c | 11 ++- 2 files changed, 13 insertions(+), 5 dele

[Qemu-devel] [RFC 2/2] vhost-user: Fix userfaultfd leak

2018-10-08 Thread Ilya Maximets
ed ufd with postcopy") Cc: qemu-sta...@nongnu.org Signed-off-by: Ilya Maximets --- hw/virtio/vhost-user.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index c442daa562..e09bed0e4a 100644 --- a/hw/virtio/vhost-user.c +++ b/h

[Qemu-devel] [RFC 1/2] migration: Stop postcopy fault thread before notifying

2018-10-08 Thread Ilya Maximets
END notify") Cc: qemu-sta...@nongnu.org Signed-off-by: Ilya Maximets --- migration/postcopy-ram.c | 11 ++- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c index 853d8b32ca..e5c02a32c5 100644 --- a/migration/postc

[Qemu-devel] [PATCH] vhost-user: Don't ask for reply on postcopy mem table set

2018-10-02 Thread Ilya Maximets
2c ("vhost+postcopy: Send address back to qemu") Signed-off-by: Ilya Maximets --- hw/virtio/vhost-user.c | 13 + 1 file changed, 1 insertion(+), 12 deletions(-) diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c index b041343632..c442daa562 100644 --- a/hw/virtio/v

[Qemu-devel] Have multiple virtio-net devices, but only one of them receives all traffic

2018-10-02 Thread Ilya Maximets
> Hi, > > I'm using QEMU 3.0.0 and Linux kernel 4.15.0 on x86 machines. I'm > observing pretty weird behavior when I have multiple virtio-net > devices. My KVM VM has two virtio-net devices (vhost=off) and I'm > using a Linux bridge in the host. The two devices have different > MAC/IP addresses. >

[Qemu-devel] [PATCH 1/4] hostmem-memfd: enable seals only if supported

2018-11-27 Thread Ilya Maximets
em,size=2M,: \ failed to create memfd: Invalid argument Signed-off-by: Ilya Maximets --- backends/hostmem-memfd.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/backends/hostmem-memfd.c b/backends/hostmem-memfd.c index b6836b28e5..ee39bdbde6 100644 --- a/backends/hostm

[Qemu-devel] [PATCH 0/4] memfd fixes.

2018-11-27 Thread Ilya Maximets
Ilya Maximets (4): hostmem-memfd: enable seals only if supported memfd: always check for MFD_CLOEXEC memfd: set up correct errno if not supported memfd: improve error messages backends/hostmem-memfd.c | 4 ++-- util/memfd.c | 10 -- 2 files changed, 10 insertions

[Qemu-devel] [PATCH 3/4] memfd: set up correct errno if not supported

2018-11-27 Thread Ilya Maximets
qemu_memfd_create() prints the value of 'errno' which is not set in this case. Signed-off-by: Ilya Maximets --- util/memfd.c | 1 + 1 file changed, 1 insertion(+) diff --git a/util/memfd.c b/util/memfd.c index d74ce4d793..393d23da96 100644 --- a/util/memfd.c +++ b/util/memfd.c @@ -

[Qemu-devel] [PATCH 2/4] memfd: always check for MFD_CLOEXEC

2018-11-27 Thread Ilya Maximets
QEMU always sets this flag unconditionally. We need to check if it's supported. Signed-off-by: Ilya Maximets --- util/memfd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/util/memfd.c b/util/memfd.c index 8debd0d037..d74ce4d793 100644 --- a/util/memfd.c +++ b/util/me

[Qemu-devel] [PATCH 4/4] memfd: improve error messages

2018-11-27 Thread Ilya Maximets
This gives more information about the failure. Additionally, 'ENOSYS' is returned on non-Linux platforms instead of 'errno', which is not initialized in this case. Signed-off-by: Ilya Maximets --- util/memfd.c | 7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) diff

Re: [Qemu-devel] [PATCH 1/4] hostmem-memfd: enable seals only if supported

2018-11-27 Thread Ilya Maximets
On 27.11.2018 14:49, Marc-André Lureau wrote: > Hi > On Tue, Nov 27, 2018 at 3:11 PM Ilya Maximets wrote: >> >> If seals are not supported, memfd_create() will fail. >> Furthermore, there is no way to disable it in this case because >> '.seal' property is n

Re: [Qemu-devel] [PATCH 1/4] hostmem-memfd: enable seals only if supported

2018-11-27 Thread Ilya Maximets
On 27.11.2018 15:00, Marc-André Lureau wrote: > Hi > On Tue, Nov 27, 2018 at 3:56 PM Ilya Maximets wrote: >> >> On 27.11.2018 14:49, Marc-André Lureau wrote: >>> Hi >>> On Tue, Nov 27, 2018 at 3:11 PM Ilya Maximets >>> wrote: >>>>

Re: [Qemu-devel] [PATCH 1/4] hostmem-memfd: enable seals only if supported

2018-11-27 Thread Ilya Maximets
On 27.11.2018 15:29, Marc-André Lureau wrote: > Hi > > On Tue, Nov 27, 2018 at 4:02 PM Ilya Maximets wrote: >> >> On 27.11.2018 15:00, Marc-André Lureau wrote: >>> Hi >>> On Tue, Nov 27, 2018 at 3:56 PM Ilya Maximets >>> wrote: >>>>

Re: [Qemu-devel] [PATCH 1/4] hostmem-memfd: enable seals only if supported

2018-11-27 Thread Ilya Maximets
On 27.11.2018 15:56, Marc-André Lureau wrote: > Hi > > On Tue, Nov 27, 2018 at 4:37 PM Ilya Maximets wrote: >> >> On 27.11.2018 15:29, Marc-André Lureau wrote: >>> Hi >>> >>> On Tue, Nov 27, 2018 at 4:02 PM Ilya Maximets >>> wrot

[Qemu-devel] [PATCH v2 2/4] memfd: always check for MFD_CLOEXEC

2018-11-27 Thread Ilya Maximets
QEMU always sets this flag unconditionally. We need to check if it's supported. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/util/memfd.c b/util/memfd.c index 8debd0d037..d74ce4d793 100644

[Qemu-devel] [PATCH v2 0/4] memfd fixes.

2018-11-27 Thread Ilya Maximets
Version 2: * First patch changed to just drop the memfd backend if seals are not supported. Ilya Maximets (4): hostmem-memfd: disable for systems wihtout sealing support memfd: always check for MFD_CLOEXEC memfd: set up correct errno if not supported memfd: improve error

[Qemu-devel] [PATCH v2 1/4] hostmem-memfd: disable for systems wihtout sealing support

2018-11-27 Thread Ilya Maximets
em,size=2M,: \ failed to create memfd: Invalid argument and actually breaks the feature on such systems. Let's restrict memfd backend to systems with sealing support. Signed-off-by: Ilya Maximets --- backends/hostmem-memfd.c | 18 -- tests/vhost-user-test.c | 6

[Qemu-devel] [PATCH v2 3/4] memfd: set up correct errno if not supported

2018-11-27 Thread Ilya Maximets
qemu_memfd_create() prints the value of 'errno' which is not set in this case. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 1 + 1 file changed, 1 insertion(+) diff --git a/util/memfd.c b/util/memfd.c index d74ce4d793..393d23da96 100644 --- a/ut

[Qemu-devel] [PATCH v2 4/4] memfd: improve error messages

2018-11-27 Thread Ilya Maximets
This gives more information about the failure. Additionally, 'ENOSYS' is returned on non-Linux platforms instead of 'errno', which is not initialized in this case. Signed-off-by: Ilya Maximets Reviewed-by: Marc-André Lureau --- util/memfd.c | 7 ++- 1 file changed
