Fix QEMU crash when -netdev vhost-user,queues=n is passed with number
of queues greater than MAX_QUEUE_NUM.
Signed-off-by: Ilya Maximets
---
net/vhost-user.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/vhost-user.c b/net/vhost-user.c
index 451dbbf..b753b3d
qdisc <...>
link/ether 00:16:35:af:aa:4b brd ff:ff:ff:ff:ff:ff
[---- cut ---]
Signed-off-by: Ilya Maximets
---
hw/net/vhost_net.c | 18 +-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_n
No need to notify nc->peer if nothing changed.
Signed-off-by: Ilya Maximets
---
net/net.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/net/net.c b/net/net.c
index 3b5a142..6f6a8ce 100644
--- a/net/net.c
+++ b/net/net.c
@@ -1385,9 +1385,10 @@ void qmp_set_link(co
io_net_vhost_status
#8 virtio_net_set_status
#9 virtio_set_status
<...>
[ cut ---]
Fix that by introducing a reference counter for the vhost_net device
and freeing the memory only after the last reference is dropped.
Signed-off-by: Ilya Maximets
---
hw/net/vho
---]
In the example above, the assertion will fail when control is returned
to the function at #17 and it tries to free the 'eventfd' that was
already freed at call #3.
Fix that by disallowing execution of vhost_net_stop() if we're
already inside it.
Signed-of
estarted to restore communication after restarting the
vhost-user application.
Ilya Maximets (4):
vhost-user: fix crash on socket disconnect.
vhost: prevent double stop of vhost_net device.
vhost: check for vhost_net device validity.
net: notify about li
On 30.03.2016 20:01, Michael S. Tsirkin wrote:
> On Wed, Mar 30, 2016 at 06:14:05PM +0300, Ilya Maximets wrote:
>> Currently QEMU always crashes in following scenario (assume that
>> vhost-user application is Open vSwitch with 'dpdkvhostuser' port):
>
> In fact, wo
On 31.03.2016 12:21, Michael S. Tsirkin wrote:
> On Thu, Mar 31, 2016 at 09:02:01AM +0300, Ilya Maximets wrote:
>> On 30.03.2016 20:01, Michael S. Tsirkin wrote:
>>> On Wed, Mar 30, 2016 at 06:14:05PM +0300, Ilya Maximets wrote:
>>>> Currently QEMU always crashes in f
--- Original Message ---
Sender : Michael S. Tsirkin
Date : Apr 05, 2016 13:46 (GMT+03:00)
Title : Re: [PATCH 0/4] Fix QEMU crash on vhost-user socket disconnect.
> On Thu, Mar 31, 2016 at 09:02:01AM +0300, Ilya Maximets wrote:
> > On 30.03.2016 20:01, Michael S. Tsirkin wrote:
> --- Original Message ---
> Sender : Michael S. Tsirkin
> Date : Apr 07, 2016 10:01 (GMT+03:00)
> Title : Re: Re: [PATCH 0/4] Fix QEMU crash on vhost-user socket disconnect.
>
> On Wed, Apr 06, 2016 at 11:52:56PM +0000, Ilya Maximets wrote:
> > --- Original Me
On 7/20/23 09:37, Jason Wang wrote:
> On Thu, Jul 6, 2023 at 4:58 AM Ilya Maximets wrote:
>>
>> AF_XDP is a network socket family that allows communication directly
>> with the network device driver in the kernel, bypassing most or all
>> of the kernel networking
On 7/25/23 08:55, Jason Wang wrote:
> On Thu, Jul 20, 2023 at 9:26 PM Ilya Maximets wrote:
>>
>> On 7/20/23 09:37, Jason Wang wrote:
>>> On Thu, Jul 6, 2023 at 4:58 AM Ilya Maximets wrote:
>>>>
>>>> AF_XDP is a network socket family that allows com
: 1.0 Mpps
L2 FWD Loopback : 0.7 Mpps
Results in skb mode or over the veth are close to results of a tap
backend with vhost=on and disabled segmentation offloading bridged
with a NIC.
Signed-off-by: Ilya Maximets
---
Version 3:
- Bump requirements to libxdp 1.4.0+. Having that, rem
Best regards, Ilya Maximets.
On 27.11.2018 16:50, Ilya Maximets wrote:
> Version 2:
> * First patch changed to just drop the memfd backend
> if seals are not supported.
>
> Ilya Maximets (4):
> hostmem-memfd: disable for systems without sealing support
>
Version 3:
* Rebase on top of current master.
Version 2:
* First patch changed to just drop the memfd backend
if seals are not supported.
Ilya Maximets (4):
hostmem-memfd: disable for systems without sealing support
memfd: always check for MFD_CLOEXEC
memfd: set up correct
em,size=2M,: \
failed to create memfd: Invalid argument
and actually breaks the feature on such systems.
Let's restrict memfd backend to systems with sealing support.
Signed-off-by: Ilya Maximets
---
backends/hostmem-memfd.c | 18 --
tests/vhost-user-test.c | 5 +++--
QEMU sets this flag unconditionally. We need to
check whether it's supported.
Signed-off-by: Ilya Maximets
Reviewed-by: Marc-André Lureau
---
util/memfd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/util/memfd.c b/util/memfd.c
index 8debd0d037..d74ce4d793 100644
qemu_memfd_create() prints the value of 'errno' which is not
set in this case.
Signed-off-by: Ilya Maximets
Reviewed-by: Marc-André Lureau
---
util/memfd.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/util/memfd.c b/util/memfd.c
index d74ce4d793..393d23da96 100644
--- a/ut
This gives more information about the failure.
Additionally, 'ENOSYS' is returned on non-Linux platforms instead of
'errno', which is not initialized in this case.
Signed-off-by: Ilya Maximets
Reviewed-by: Marc-André Lureau
---
util/memfd.c | 7 ++-
1 file changed
"0x2" is much more readable than "8589934592".
The change saves one step (conversion) while debugging.
Signed-off-by: Ilya Maximets
---
hw/net/vhost_net.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_ne
On 9/25/23 16:23, Stefan Hajnoczi wrote:
> On Fri, 25 Aug 2023 at 13:04, Ilya Maximets wrote:
>>
>> We do not need the most up to date number of heads, we only want to
>> know if there is at least one.
>>
>> Use shadow variable as long as it is not equal to th
On 9/25/23 17:12, Stefan Hajnoczi wrote:
> On Mon, 25 Sept 2023 at 11:02, Ilya Maximets wrote:
>>
>> On 9/25/23 16:23, Stefan Hajnoczi wrote:
>>> On Fri, 25 Aug 2023 at 13:04, Ilya Maximets wrote:
>>>>
>>>> We do not need the most up to date number
On 9/25/23 16:32, Stefan Hajnoczi wrote:
> On Fri, 25 Aug 2023 at 13:02, Ilya Maximets wrote:
>>
>> It was supposed to be a compiler barrier and it was a compiler barrier
>> initially called 'wmb' (??) when virtio core support was introduced.
>> Later all th
On 9/25/23 17:38, Stefan Hajnoczi wrote:
> On Mon, 25 Sept 2023 at 11:36, Ilya Maximets wrote:
>>
>> On 9/25/23 17:12, Stefan Hajnoczi wrote:
>>> On Mon, 25 Sept 2023 at 11:02, Ilya Maximets wrote:
>>>>
>>>> On 9/25/23 16:23, Stefan Hajnoczi
On 9/25/23 23:24, Michael S. Tsirkin wrote:
> On Mon, Sep 25, 2023 at 10:58:05PM +0200, Ilya Maximets wrote:
>> On 9/25/23 17:38, Stefan Hajnoczi wrote:
>>> On Mon, 25 Sept 2023 at 11:36, Ilya Maximets wrote:
>>>>
>>>> On 9/25/23 17:12, Stefan Hajnoczi
itself.
The change improves performance of the af-xdp network backend by 2-3%.
Signed-off-by: Ilya Maximets
---
Version 2:
- Changed to not skip error checks and a barrier.
- Added comments about the need for a barrier.
hw/virtio/virtio.c | 18 +++---
1 file changed, 15
"virtio: combine the read of a descriptor")
Remove the unused argument to simplify the code.
Also, add a comment to the function describing what it actually
does, as it is not obvious that 'desc' is both an input and an
output argument.
Signed-off-by: Ilya Maximets
doesn't need
to be an actual barrier, as its only purpose was to ensure that the
value is not read twice.
And since commit aa570d6fb6bd ("virtio: combine the read of a descriptor")
there is no need for a barrier at all, since we're no longer reading
guest memory here, but accessing a local
Version 2:
- Converted into a patch set adding a new patch that removes the
'next' argument. [Stefan]
- Completely removing the barrier instead of changing into compiler
barrier. [Stefan]
Ilya Maximets (2):
virtio: remove unnecessary thread fence while reading next
On 9/26/23 00:24, Michael S. Tsirkin wrote:
> On Tue, Sep 26, 2023 at 12:13:11AM +0200, Ilya Maximets wrote:
>> On 9/25/23 23:24, Michael S. Tsirkin wrote:
>>> On Mon, Sep 25, 2023 at 10:58:05PM +0200, Ilya Maximets wrote:
>>>> On 9/25/23 17:38, Stefan Hajnoczi wrote:
On 9/25/23 20:04, Ilya Maximets wrote:
> On 9/25/23 16:32, Stefan Hajnoczi wrote:
>> On Fri, 25 Aug 2023 at 13:02, Ilya Maximets wrote:
>>>
>>> It was supposed to be a compiler barrier and it was a compiler barrier
>>> initially called 'wmb' (??) when
On 9/27/23 17:41, Michael S. Tsirkin wrote:
> On Wed, Sep 27, 2023 at 04:06:41PM +0200, Ilya Maximets wrote:
>> On 9/25/23 20:04, Ilya Maximets wrote:
>>> On 9/25/23 16:32, Stefan Hajnoczi wrote:
>>>> On Fri, 25 Aug 2023 at 13:02, Ilya Maximets wrote:
>>>&g
On 9/8/23 16:15, Daniel P. Berrangé wrote:
> On Fri, Sep 08, 2023 at 04:06:35PM +0200, Ilya Maximets wrote:
>> On 9/8/23 14:15, Daniel P. Berrangé wrote:
>>> On Fri, Sep 08, 2023 at 02:00:47PM +0200, Ilya Maximets wrote:
>>>> On 9/8/23 13:49, Daniel P. Berrangé wrote:
On 9/14/23 10:13, Daniel P. Berrangé wrote:
> On Wed, Sep 13, 2023 at 08:46:42PM +0200, Ilya Maximets wrote:
>> On 9/8/23 16:15, Daniel P. Berrangé wrote:
>>> On Fri, Sep 08, 2023 at 04:06:35PM +0200, Ilya Maximets wrote:
>>>> On 9/8/23 14:15, Daniel P. Berrangé wro
On 9/19/23 10:40, Daniel P. Berrangé wrote:
> On Mon, Sep 18, 2023 at 09:36:10PM +0200, Ilya Maximets wrote:
>> On 9/14/23 10:13, Daniel P. Berrangé wrote:
>>> On Wed, Sep 13, 2023 at 08:46:42PM +0200, Ilya Maximets wrote:
>>>> On 9/8/23 16:15, Daniel P. Berrangé wro
On 8/11/23 16:34, Ilya Maximets wrote:
> Lots of virtio functions that are on a hot path in data transmission
> are initializing indirect descriptor cache at the point of stack
> allocation. It's a 112 byte structure that is getting zeroed out on
> each call adding unnecessar
On 8/25/23 19:04, Ilya Maximets wrote:
> We do not need the most up to date number of heads, we only want to
> know if there is at least one.
>
> Use shadow variable as long as it is not equal to the last available
> index checked. This avoids expensive qatomic dereference of the
On 8/25/23 19:01, Ilya Maximets wrote:
> It was supposed to be a compiler barrier and it was a compiler barrier
> initially called 'wmb' (??) when virtio core support was introduced.
> Later all the instances of 'wmb' were switched to smp_wmb to fix memory
> orde
s in terms of 64B packets per second by 6-14 %
depending on the case. Tested with a proposed af-xdp network backend
and a dpdk testpmd application in the guest, but should be beneficial
for other virtio devices as well.
Signed-off-by: Ilya Maximets
---
hw/virtio/vir
On 8/9/23 04:37, Jason Wang wrote:
> On Tue, Aug 8, 2023 at 6:28 AM Ilya Maximets wrote:
>>
>> Lots of virtio functions that are on a hot path in data transmission
>> are initializing indirect descriptor cache at the point of stack
>> allocation. It's a 112 byte
On 8/10/23 17:50, Stefan Hajnoczi wrote:
> On Tue, Aug 08, 2023 at 12:28:47AM +0200, Ilya Maximets wrote:
>> Lots of virtio functions that are on a hot path in data transmission
>> are initializing indirect descriptor cache at the point of stack
>> allocation. It's a
On 8/11/23 15:58, Stefan Hajnoczi wrote:
>
>
> On Fri, Aug 11, 2023, 08:50 Ilya Maximets wrote:
>
> On 8/10/23 17:50, Stefan Hajnoczi wrote:
> > On Tue, Aug 08, 2023 at 12:28:47AM +0200, Ilya Maximets wrote:
> >>
n terms of 64B packets per second by 6-14 %
depending on the case. Tested with a proposed af-xdp network backend
and a dpdk testpmd application in the guest, but should be beneficial
for other virtio devices as well.
Signed-off-by: Ilya Maximets
---
Version 2:
* Introduced an initialization fu
mit will remove it.
I'm likely missing something, but could you explain why it is safe to
batch unconditionally here? The current BH code, as you mentioned in
the second patch, is only batching if EVENT_IDX is not set.
Maybe worth adding a few words in the commit message for people like
me, who are a bit
On 8/16/23 17:30, Stefan Hajnoczi wrote:
> On Wed, Aug 16, 2023 at 03:36:32PM +0200, Ilya Maximets wrote:
>> On 8/15/23 14:08, Stefan Hajnoczi wrote:
>>> virtio-blk and virtio-scsi invoke virtio_irqfd_notify() to send Used
>>> Buffer Notifications from an IOThrea
On 8/17/23 17:58, Stefan Hajnoczi wrote:
> virtio-blk and virtio-scsi invoke virtio_irqfd_notify() to send Used
> Buffer Notifications from an IOThread. This involves an eventfd
> write(2) syscall. Calling this repeatedly when completing multiple I/O
> requests in a row is wasteful.
>
> Use the de
1631 err_undo_map:
1632 virtqueue_undo_map_desc(out_num, in_num, iov);
** CID 1522370: Memory - illegal accesses (UNINIT)
Instead of trying to silence these false-positive reports in 4
different places, initialize 'fv' as well, as this doesn't result
in any noti
Tx only : 1.2 Mpps
Rx only : 1.0 Mpps
L2 FWD Loopback : 0.7 Mpps
Results in skb mode or over the veth are close to results of a tap
backend with vhost=on and disabled segmentation offloading bridged
with a NIC.
Signed-off-by: Ilya Maximets
---
Version 2:
- Added sup
2023 at 4:15 PM Stefan Hajnoczi
>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> On Wed, 28 Jun 2023 at 09:59, Jason Wang wrote:
>>>>>>>>>>>
>>>>>>>>>>> On
On 7/10/23 05:51, Jason Wang wrote:
> On Fri, Jul 7, 2023 at 7:21 PM Ilya Maximets wrote:
>>
>> On 7/7/23 03:43, Jason Wang wrote:
>>> On Fri, Jul 7, 2023 at 3:08 AM Stefan Hajnoczi wrote:
>>>>
>>>> On Wed, 5 Jul 2023 at 02:02, Jason Wang wrote:
&
On 6/26/23 08:32, Jason Wang wrote:
> On Sun, Jun 25, 2023 at 3:06 PM Jason Wang wrote:
>>
>> On Fri, Jun 23, 2023 at 5:58 AM Ilya Maximets wrote:
>>>
>>> AF_XDP is a network socket family that allows communication directly
>>> with the network device d
On 6/27/23 04:54, Jason Wang wrote:
> On Mon, Jun 26, 2023 at 9:17 PM Ilya Maximets wrote:
>>
>> On 6/26/23 08:32, Jason Wang wrote:
>>> On Sun, Jun 25, 2023 at 3:06 PM Jason Wang wrote:
>>>>
>>>> On Fri, Jun 23, 2023 at 5:58 AM Ilya Maximets wrot
>
> Whether you pursue the passthrough approach or not, making -netdev
> af-xdp work in an environment where QEMU runs unprivileged seems like
> the most important practical issue to solve.
Yes, working on it. Doesn't seem to be hard to do, but I need to test.
Best regards, Ilya Maximets.
On 6/28/23 05:27, Jason Wang wrote:
> On Wed, Jun 28, 2023 at 6:45 AM Ilya Maximets wrote:
>>
>> On 6/27/23 04:54, Jason Wang wrote:
>>> On Mon, Jun 26, 2023 at 9:17 PM Ilya Maximets wrote:
>>>>
>>>> On 6/26/23 08:32, Jason Wang wrote:
>>
On 6/30/23 09:44, Jason Wang wrote:
> On Wed, Jun 28, 2023 at 7:14 PM Ilya Maximets wrote:
>>
>> On 6/28/23 05:27, Jason Wang wrote:
>>> On Wed, Jun 28, 2023 at 6:45 AM Ilya Maximets wrote:
>>>>
>>>> On 6/27/23 04:54, Jason Wang wrote:
>>&g
pps
L2 FWD Loopback : 0.7 Mpps
Results in skb mode or over the veth are close to results of a tap
backend with vhost=on and disabled segmentation offloading bridged
with a NIC.
Signed-off-by: Ilya Maximets
---
MAINTAINERS | 4 +
hmp-commands.hx
On 9/8/23 13:19, Stefan Hajnoczi wrote:
> Hi Ilya and Jason,
> There is a CI failure related to a missing Debian libxdp-dev package:
> https://gitlab.com/qemu-project/qemu/-/jobs/5046139967
>
> I think the issue is that the debian-amd64 container image that QEMU
> uses for testing is based on Debi
On 9/8/23 13:48, Daniel P. Berrangé wrote:
> On Fri, Sep 08, 2023 at 02:45:02PM +0800, Jason Wang wrote:
>> From: Ilya Maximets
>>
>> AF_XDP is a network socket family that allows communication directly
>> with the network device driver in the kernel, bypassing m
On 9/8/23 13:49, Daniel P. Berrangé wrote:
> On Fri, Sep 08, 2023 at 01:34:54PM +0200, Ilya Maximets wrote:
>> On 9/8/23 13:19, Stefan Hajnoczi wrote:
>>> Hi Ilya and Jason,
>>> There is a CI failure related to a missing Debian libxdp-dev package:
>>> https:/
On 9/8/23 14:15, Daniel P. Berrangé wrote:
> On Fri, Sep 08, 2023 at 02:00:47PM +0200, Ilya Maximets wrote:
>> On 9/8/23 13:49, Daniel P. Berrangé wrote:
>>> On Fri, Sep 08, 2023 at 01:34:54PM +0200, Ilya Maximets wrote:
>>>> On 9/8/23 13:19, Stefan Hajnoczi
having 32 MB of RLIMIT_MEMLOCK per queue.
- Refined and extended documentation.
Ilya Maximets (2):
tests: bump libvirt-ci for libasan and libxdp
net: add initial support for AF_XDP network backend
MAINTAINERS | 4 +
hmp-commands.hx
This pulls in the fixes for libasan version as well as support for
libxdp that will be used for af-xdp netdev in the next commits.
Signed-off-by: Ilya Maximets
---
tests/docker/dockerfiles/debian-amd64-cross.docker | 2 +-
tests/docker/dockerfiles/debian-amd64.docker | 2 +-
tests
: 1.0 Mpps
L2 FWD Loopback : 0.7 Mpps
Results in skb mode or over the veth are close to results of a tap
backend with vhost=on and disabled segmentation offloading bridged
with a NIC.
Signed-off-by: Ilya Maximets
---
MAINTAINERS | 4
refresh' on current git master that doesn't happen
>
> FTR since commit cb039ef3d9 libxdp-devel is also being changed on my
> host, similarly to libpmem-devel, so I suppose it also has some host
> specific restriction.
Yeah, many distributions are not building libxdp for non
doesn't need
to be an actual barrier. It's enough for it to stay a compiler barrier
as its only purpose is to ensure that the value is not read twice.
There is no counterpart read barrier in the drivers, AFAICT. And even
if we needed an actual barrier, it shouldn't have been a write bar
itself
and the subsequent memory barrier.
The change improves performance of the af-xdp network backend by 2-3%.
Signed-off-by: Ilya Maximets
---
hw/virtio/virtio.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index
-    for (i = 0; i < hdev->nvqs; ++i) {
+    for (i = 0; i < hdev->nvqs; ++i, ++n_initialized_vqs) {
         r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index + i);
         if (r < 0) {
-            hdev->nvqs = i;
             goto fail;
         }
     }
@@ -1136,6 +1137,7 @@ fail_busyloop:
         vhost_virtqueue_set_busyloop_timeout(hdev, hdev->vq_index + i, 0);
     }
 fail:
+    hdev->nvqs = n_initialized_vqs;
     vhost_dev_cleanup(hdev);
     return r;
 }
--
Best regards, Ilya Maximets.
+42,7 @@ uint64_t vhost_user_get_acked_features(NetClientState *nc)
 {
     VhostUserState *s = DO_UPCAST(VhostUserState, nc, nc);
     assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_USER);
-    return s->vhost_net ? vhost_net_get_acked_features(s->vhost_net) : 0;
+    return s->acked_features;
 }

 static void vhost_user_stop(int queues, NetClientState *ncs[])
@@ -55,6 +56,11 @@ static void vhost_user_stop(int queues, NetClientState *ncs[])
         s = DO_UPCAST(VhostUserState, nc, ncs[i]);
         if (s->vhost_net) {
+            /* save acked features */
+            uint64_t features = vhost_net_get_acked_features(s->vhost_net);
+            if (features) {
+                s->acked_features = features;
+            }
             vhost_net_cleanup(s->vhost_net);
         }
     }
--
Best regards, Ilya Maximets.
-1069,10 +1071,9 @@ int vhost_dev_init(struct vhost_dev *hdev, void
>> *opaque,
>> goto fail;
>> }
>>
>> -for (i = 0; i < hdev->nvqs; ++i) {
>> +for (i = 0; i < hdev->nvqs; ++i, ++n_initialized_vqs) {
>> r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index + i);
>> if (r < 0) {
>> -hdev->nvqs = i;
>
> Isn't that assignment doing the same thing?
Yes.
But an assignment to zero (hdev->nvqs = 0) would be required before all
previous 'goto fail;' statements. I don't think that's a clean solution.
> btw, thanks for the review
>
>> goto fail;
>> }
>> }
>> @@ -1136,6 +1137,7 @@ fail_busyloop:
>> vhost_virtqueue_set_busyloop_timeout(hdev, hdev->vq_index + i, 0);
>> }
>> fail:
>> +hdev->nvqs = n_initialized_vqs;
>> vhost_dev_cleanup(hdev);
>> return r;
>> }
>> --
>>
>> Best regards, Ilya Maximets.
>>
>
>
_init(struct vhost_dev *hdev, void
>>>> *opaque,
>>>> VhostBackendType backend_type, uint32_t
>>>> busyloop_timeout)
>>>> {
>>>> uint64_t features;
>>>> -int i, r;
>>>> +int i, r, n_initialized_vqs;
>>>>
>>>> +n_initialized_vqs = 0;
>>>> hdev->migration_blocker = NULL;
>>>>
>>>> r = vhost_set_backend_type(hdev, backend_type);
>>>>
>>>> @@ -1069,10 +1071,9 @@ int vhost_dev_init(struct vhost_dev *hdev, void
>>>> *opaque,
>>>> goto fail;
>>>> }
>>>>
>>>> -for (i = 0; i < hdev->nvqs; ++i) {
>>>> +for (i = 0; i < hdev->nvqs; ++i, ++n_initialized_vqs) {
>>>> r = vhost_virtqueue_init(hdev, hdev->vqs + i, hdev->vq_index +
>>>> i);
>>>> if (r < 0) {
>>>> -hdev->nvqs = i;
>>>
>>> Isn't that assignment doing the same thing?
>>
>> Yes.
>> But an assignment to zero (hdev->nvqs = 0) would be required before all
>> previous 'goto fail;' statements. I don't think that's a clean solution.
>>
>
> Good point, I'll squash your change,
Thanks for fixing it.
> should I add your sign-off-by?
I don't mind if you want to.
Best regards, Ilya Maximets.
'ethtool -L eth0 combined 2' if vhost disconnected.
Signed-off-by: Ilya Maximets
---
hw/net/vhost_net.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index dc61dc1..f2d49ad 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -428,7 +
'vhost_net_stop' to avoid
any possible double frees and segmentation faults due to the use of
already freed resources, by setting the 'vhost_started' flag to zero prior
to the 'vhost_net_stop' call.
Signed-off-by: Ilya Maximets
---
This issue was already addressed more than a year ago by th
On 06.12.2017 19:45, Michael S. Tsirkin wrote:
> On Wed, Dec 06, 2017 at 04:06:18PM +0300, Ilya Maximets wrote:
>> In case a virtio error occurred after vhost_dev_close(), qemu will crash
>> in nested cleanup while checking the IOMMU flag because dev->vdev is already
>> set to zero a
On 07.12.2017 20:27, Michael S. Tsirkin wrote:
> On Thu, Dec 07, 2017 at 09:39:36AM +0300, Ilya Maximets wrote:
>> On 06.12.2017 19:45, Michael S. Tsirkin wrote:
>>> On Wed, Dec 06, 2017 at 04:06:18PM +0300, Ilya Maximets wrote:
>>>> In case a virtio error occurred afte
On 11.12.2017 07:35, Michael S. Tsirkin wrote:
> On Fri, Dec 08, 2017 at 05:54:18PM +0300, Ilya Maximets wrote:
>> On 07.12.2017 20:27, Michael S. Tsirkin wrote:
>>> On Thu, Dec 07, 2017 at 09:39:36AM +0300, Ilya Maximets wrote:
>>>> On 06.12.2017 19:45, Michael S. Ts
On 13.12.2017 22:48, Michael S. Tsirkin wrote:
> On Wed, Dec 13, 2017 at 04:45:20PM +0300, Ilya Maximets wrote:
>>>> That
>>>> looks very strange. Some of the functions gets 'old_status', others
>>>> the 'new_status'. I'm a bit
of broken guest index.
Thanks.
Best regards, Ilya Maximets.
P.S. Previously I mentioned that I could not reproduce the virtio driver
crash with "[PATCH] virtio_error: don't invoke status callbacks"
applied. I was wrong. I can reproduce it now. The system was misconfigured.
So
On 14.12.2017 17:31, Ilya Maximets wrote:
> One update for the testing scenario:
>
> No need to kill OVS. The issue reproducible with simple 'del-port'
> and 'add-port'. virtio driver in guest could crash on both operations.
> Most times it crashes in m
On 10.12.2018 19:18, Igor Mammedov wrote:
> On Tue, 27 Nov 2018 16:50:27 +0300
> Ilya Maximets wrote:
>
> s/wihtout/without/ in subj
>
>> If seals are not supported, memfd_create() will fail.
>> Furthermore, there is no way to disable it in this case because
On 11.12.2018 13:53, Daniel P. Berrangé wrote:
> On Tue, Nov 27, 2018 at 04:50:27PM +0300, Ilya Maximets wrote:
>> If seals are not supported, memfd_create() will fail.
>> Furthermore, there is no way to disable it in this case because
>> '.seal' property is not
achine accel=kvm -m 2048 \
-cpu host -enable-kvm -nographic -smp 2 \
-drive if=virtio,file=./FreeBSD-11.2-RELEASE-amd64.qcow2,format=qcow2
Best regards, Ilya Maximets.
Sending as RFC because it's not fully tested yet.
Ilya Maximets (2):
migration: Stop postcopy fault thread before notifying
vhost-user: Fix userfaultfd leak
hw/virtio/vhost-user.c | 7 +++
migration/postcopy-ram.c | 11 ++-
2 files changed, 13 insertions(+), 5 dele
ed ufd with postcopy")
Cc: qemu-sta...@nongnu.org
Signed-off-by: Ilya Maximets
---
hw/virtio/vhost-user.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index c442daa562..e09bed0e4a 100644
--- a/hw/virtio/vhost-user.c
+++ b/h
END notify")
Cc: qemu-sta...@nongnu.org
Signed-off-by: Ilya Maximets
---
migration/postcopy-ram.c | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 853d8b32ca..e5c02a32c5 100644
--- a/migration/postc
2c ("vhost+postcopy: Send address back to qemu")
Signed-off-by: Ilya Maximets
---
hw/virtio/vhost-user.c | 13 +
1 file changed, 1 insertion(+), 12 deletions(-)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index b041343632..c442daa562 100644
--- a/hw/virtio/v
> Hi,
>
> I'm using QEMU 3.0.0 and Linux kernel 4.15.0 on x86 machines. I'm
> observing pretty weird behavior when I have multiple virtio-net
> devices. My KVM VM has two virtio-net devices (vhost=off) and I'm
> using a Linux bridge in the host. The two devices have different
> MAC/IP addresses.
>
em,size=2M,: \
failed to create memfd: Invalid argument
Signed-off-by: Ilya Maximets
---
backends/hostmem-memfd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/backends/hostmem-memfd.c b/backends/hostmem-memfd.c
index b6836b28e5..ee39bdbde6 100644
--- a/backends/hostm
Ilya Maximets (4):
hostmem-memfd: enable seals only if supported
memfd: always check for MFD_CLOEXEC
memfd: set up correct errno if not supported
memfd: improve error messages
backends/hostmem-memfd.c | 4 ++--
util/memfd.c | 10 --
2 files changed, 10 insertions
qemu_memfd_create() prints the value of 'errno' which is not
set in this case.
Signed-off-by: Ilya Maximets
---
util/memfd.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/util/memfd.c b/util/memfd.c
index d74ce4d793..393d23da96 100644
--- a/util/memfd.c
+++ b/util/memfd.c
@@ -
QEMU sets this flag unconditionally. We need to
check whether it's supported.
Signed-off-by: Ilya Maximets
---
util/memfd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/util/memfd.c b/util/memfd.c
index 8debd0d037..d74ce4d793 100644
--- a/util/memfd.c
+++ b/util/me
This gives more information about the failure.
Additionally, 'ENOSYS' is returned on non-Linux platforms instead of
'errno', which is not initialized in this case.
Signed-off-by: Ilya Maximets
---
util/memfd.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff
On 27.11.2018 14:49, Marc-André Lureau wrote:
> Hi
> On Tue, Nov 27, 2018 at 3:11 PM Ilya Maximets wrote:
>>
>> If seals are not supported, memfd_create() will fail.
>> Furthermore, there is no way to disable it in this case because
>> '.seal' property is n
On 27.11.2018 15:00, Marc-André Lureau wrote:
> Hi
> On Tue, Nov 27, 2018 at 3:56 PM Ilya Maximets wrote:
>>
>> On 27.11.2018 14:49, Marc-André Lureau wrote:
>>> Hi
>>> On Tue, Nov 27, 2018 at 3:11 PM Ilya Maximets
>>> wrote:
>>>>
On 27.11.2018 15:29, Marc-André Lureau wrote:
> Hi
>
> On Tue, Nov 27, 2018 at 4:02 PM Ilya Maximets wrote:
>>
>> On 27.11.2018 15:00, Marc-André Lureau wrote:
>>> Hi
>>> On Tue, Nov 27, 2018 at 3:56 PM Ilya Maximets
>>> wrote:
>>>>
On 27.11.2018 15:56, Marc-André Lureau wrote:
> Hi
>
> On Tue, Nov 27, 2018 at 4:37 PM Ilya Maximets wrote:
>>
>> On 27.11.2018 15:29, Marc-André Lureau wrote:
>>> Hi
>>>
>>> On Tue, Nov 27, 2018 at 4:02 PM Ilya Maximets
>>> wrot
QEMU sets this flag unconditionally. We need to
check whether it's supported.
Signed-off-by: Ilya Maximets
Reviewed-by: Marc-André Lureau
---
util/memfd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/util/memfd.c b/util/memfd.c
index 8debd0d037..d74ce4d793 100644
Version 2:
* First patch changed to just drop the memfd backend
if seals are not supported.
Ilya Maximets (4):
hostmem-memfd: disable for systems without sealing support
memfd: always check for MFD_CLOEXEC
memfd: set up correct errno if not supported
memfd: improve error
em,size=2M,: \
failed to create memfd: Invalid argument
and actually breaks the feature on such systems.
Let's restrict memfd backend to systems with sealing support.
Signed-off-by: Ilya Maximets
---
backends/hostmem-memfd.c | 18 --
tests/vhost-user-test.c | 6
qemu_memfd_create() prints the value of 'errno' which is not
set in this case.
Signed-off-by: Ilya Maximets
Reviewed-by: Marc-André Lureau
---
util/memfd.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/util/memfd.c b/util/memfd.c
index d74ce4d793..393d23da96 100644
--- a/ut
This gives more information about the failure.
Additionally, 'ENOSYS' is returned on non-Linux platforms instead of
'errno', which is not initialized in this case.
Signed-off-by: Ilya Maximets
Reviewed-by: Marc-André Lureau
---
util/memfd.c | 7 ++-
1 file changed