On Thu, Sep 25, 2025 at 06:36:49PM +0800, Jason Wang wrote:
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER for
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then
On Thu, Sep 25, 2025 at 06:37:08PM +0800, Jason Wang wrote:
> This patch implements in order support for both split virtqueue and
> packed virtqueue. Performance could be gained for the device where the
> memory access could be expensive (e.g. vhost-net or a real PCI device):
>
>
Hi Ulf,
Thanks for reviewing this patch.
On Thu, Sep 25, 2025 at 12:18:39PM +0200, Ulf Hansson wrote:
>On Tue, 23 Sept 2025 at 07:17, Peng Fan wrote:
>>
>> The order of runtime PM API calls in the remove path is wrong.
>> pm_runtime_put() should be called before pm_runti
On Thu, 25 Sept 2025 at 15:12, Peng Fan wrote:
>
> Hi Ulf,
>
> Thanks for reviewing this patch.
>
> On Thu, Sep 25, 2025 at 12:18:39PM +0200, Ulf Hansson wrote:
> >On Tue, 23 Sept 2025 at 07:17, Peng Fan wrote:
> >>
> >> The order of runtim
On Wed, Sep 24, 2025 at 01:38:03PM +0800, Jason Wang wrote:
> On Mon, Sep 22, 2025 at 2:24 AM Michael S. Tsirkin wrote:
> >
> > On Fri, Sep 19, 2025 at 03:31:54PM +0800, Jason Wang wrote:
> > > This patch implements in order support for both split virtqueue and
> >
The order of runtime PM API calls in the remove path is wrong.
pm_runtime_put() should be called before pm_runtime_disable(), per the
runtime PM guidelines. Calling pm_runtime_disable() prematurely can
lead to incorrect reference counting and improper device suspend behavior.
Additionally, proper
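For illustration, a minimal sketch of the corrected ordering (a
hypothetical driver's remove path, not the actual patch):

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

/* Hypothetical remove path: drop the usage-count reference while
 * runtime PM is still enabled, and only then disable it.
 */
static void foo_remove(struct platform_device *pdev)
{
        pm_runtime_put(&pdev->dev);      /* balance the get from probe */
        pm_runtime_disable(&pdev->dev);  /* disable runtime PM last */
}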
On Mon, Sep 22, 2025 at 10:07:08AM -0600, Mathieu Poirier wrote:
>On Wed, Sep 17, 2025 at 09:19:13PM +0800, Peng Fan wrote:
>> The order of runtime PM API calls in the remove path is wrong.
>> pm_runtime_put() should be called before pm_runtime_disable(), per the
>> runtime PM
On Wed, Sep 17, 2025 at 09:19:13PM +0800, Peng Fan wrote:
> The order of runtime PM API calls in the remove path is wrong.
> pm_runtime_put() should be called before pm_runtime_disable(), per the
> runtime PM guidelines.
Where is this mentioned? I have looked in [1] and couldn't
On Mon, Sep 22, 2025 at 1:40 AM Michael S. Tsirkin wrote:
>
> On Fri, Sep 19, 2025 at 03:31:54PM +0800, Jason Wang wrote:
> > This patch implements in order support for both split virtqueue and
> > packed virtqueue. Performance could be gained for the device where the
> >
On Fri, Sep 19, 2025 at 03:31:54PM +0800, Jason Wang wrote:
> This patch implements in order support for both split virtqueue and
> packed virtqueue. Performance could be gained for the device where the
> memory access could be expensive (e.g. vhost-net or a real PCI device):
>
> Ben
On Fri, Sep 19, 2025 at 03:31:54PM +0800, Jason Wang wrote:
> This patch implements in order support for both split virtqueue and
> packed virtqueue. Perfomance
Performance
> could be gained for the device where the
> memory access could be expensive (e.g vhost-net or a rea
On Fri, Sep 19, 2025 at 03:31:54PM +0800, Jason Wang wrote:
> This patch implements in order support for both split virtqueue and
> packed virtqueue. Performance could be gained for the device where the
> memory access could be expensive (e.g. vhost-net or a real PCI device):
>
> Ben
Hello all:
This series tries to implement VIRTIO_F_IN_ORDER for
virtio_ring. This is done by introducing virtqueue ops so we can
implement separate helpers for different virtqueue layout/features
then in-order support is implemented on top.
Tests show 2%-19% improvement with packed virtqueue
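As a sketch of the virtqueue-ops idea (names invented here, not the
series' actual definitions):

/* Sketch only: one ops table per layout/feature combination (split,
 * packed, and their in-order variants), selected once at negotiation
 * time so the datapath avoids per-call feature branches.
 */
struct vq;  /* opaque virtqueue handle for this sketch */

struct virtqueue_ops {
        int   (*add)(struct vq *vq, void *buf, unsigned int len);
        void *(*get_buf)(struct vq *vq, unsigned int *len);
};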
This patch implements in order support for both split virtqueue and
packed virtqueue. Performance could be gained for the device where the
memory access could be expensive (e.g. vhost-net or a real PCI device):
Benchmark with KVM guest:
Vhost-net on the host: (pktgen + XDP_DROP
On Wed, Sep 17, 2025 at 09:19:13PM +0800, Peng Fan wrote:
> The order of runtime PM API calls in the remove path is wrong.
> pm_runtime_put() should be called before pm_runtime_disable(), per the
> runtime PM guidelines. Calling pm_runtime_disable() prematurely can
> lead to incorre
The order of runtime PM API calls in the remove path is wrong.
pm_runtime_put() should be called before pm_runtime_disable(), per the
runtime PM guidelines. Calling pm_runtime_disable() prematurely can
lead to incorrect reference counting and improper device suspend behavior.
Additionally, proper
this discrepancy is fixed one way or the
> > other,
> > but it should most definitely be fixed.
>
> I'm of the same opinion, but if it is fixed on the kernel side, then
> (assuming no device implementation with the wrong order exists) I
> think
> maybe the fix should be
On Fri, Mar 14, 2025 at 10:17:02AM +0100, Luca Weiss wrote:
> During upstreaming the order of clocks was adjusted to match the
> upstream sort order, but mistakenly freq-table-hz wasn't re-ordered
> with the new order.
>
> Fix that by moving the entry for the ICE c
't really care if this discrepancy is fixed one way or the other,
> but it should most definitely be fixed.
I'm of the same opinion, but if it is fixed on the kernel side, then
(assuming no device implementation with the wrong order exists) I think
maybe the fix should be backported
On Fri, 2025-09-12 at 04:40 -0400, Michael S. Tsirkin wrote:
> This reverts commit 5326ab737a47278dbd16ed3ee7380b26c7056ddd.
>
> The problem is that for a long time, the
> Linux kernel used a different field order from what was specified in
> the
> virtio spec. The kernel
On Fri, Sep 12, 2025 at 04:06:37PM +0200, Filip Hejsek wrote:
> On Fri, 2025-09-12 at 04:56 -0400, Michael S. Tsirkin wrote:
> > when a previous version of this
> > patch series was being discussed here on this mailing list in 2020, it
> > was decided that QEMU should match the Linux implementation
On Fri, 2025-09-12 at 04:56 -0400, Michael S. Tsirkin wrote:
> when a previous version of this
> patch series was being discussed here on this mailing list in 2020, it
> was decided that QEMU should match the Linux implementation, and ideally,
> the virtio spec should be changed.
This wording has
This reverts commit 5326ab737a47278dbd16ed3ee7380b26c7056ddd.
The problem is that for a long time, the
Linux kernel used a different field order from what was specified in the
virtio spec. The kernel implementation was apparently merged around 2010,
while the virtio spec came in 2014, so when a
api disables.
>
> For details, during suspend flow of virtio-net,
> the tx queue state is set to "__QUEUE_STATE_DRV_XOFF" by CPU-A.
>
> [...]
Here is the summary with links:
- virtio_net: adjust the execution order of function `virtnet_close` during
freeze
https:
stop_queue`,
once `virtnet_poll` schedules at such a coincidental time,
the tx queue state will be cleared.
To solve this issue, adjust the order of
function `virtnet_close` in `virtnet_freeze_down`.
Co-developed-by: Ying Xu
Signed-off-by: Ying Xu
Signed-off-by: Junnan Wu
Message-Id: <
l be easier to find the Fixes tag. On the face of it the patch
> > > > makes it look like close() doesn't reliably stop the device, which
> > > > is highly odd.
> > >
> > > Yes, you are right. It is really strange that `close()` acts like
> > > th
ably stop the device, which
> > > is highly odd.
> >
> > Yes, you are right. It is really strange that `close()` acts like
> > that, because the current order has worked for a long time. But panic call
> > stack in our env shows that the function `virtnet_close` and
> &
s to be more clearly understood, and then it
> > will be easier to find the Fixes tag. On the face of it the patch
> > makes it look like close() doesn't reliably stop the device, which
> > is highly odd.
>
> Yes, you are right. It is really strange that `close
> will be easier to find the Fixes tag. On the face of it the patch
> makes it look like close() doesn't reliably stop the device, which
> is highly odd.
Yes, you are right. It is really strange that `close()` acts like that, because
the current order has worked for a long time.
But panic ca
> > > kernel test robot noticed "BUG:KASAN:slab-use-after-free_in__inet_hash"
> > > on:
> > >
> > > commit: 859ca60b71ef223e210d3d003a225d9ca70879fd ("[PATCH net v2] net:
> > > ip: order the reuseport socket in __inet_hash")
> > > url:
> &
: 859ca60b71ef223e210d3d003a225d9ca70879fd ("[PATCH net v2] net: ip:
> > order the reuseport socket in __inet_hash")
> > url:
> > https://github.com/intel-lab-lkp/linux/commits/Menglong-Dong/net-ip-order-the-reuseport-socket-in-__inet_has
On Mon, Aug 11, 2025 at 01:27:12PM +0800, kernel test robot wrote:
>
>
> Hello,
>
> kernel test robot noticed "BUG:KASAN:slab-use-after-free_in__inet_hash" on:
>
> commit: 859ca60b71ef223e210d3d003a225d9ca70879fd ("[PATCH net v2] net: ip:
> order the
On Fri, 15 Aug 2025 14:06:15 +0800 Junnan Wu wrote:
> On Fri, 15 Aug 2025 13:38:21 +0800 Jason Wang wrote
> > On Fri, Aug 15, 2025 at 10:24 AM Junnan Wu wrote:
> >
> > > Sorry, I basically mean that the tx napi which is caused by
> > > userspace will not be scheduled during suspend, others can no
On Fri, 15 Aug 2025 13:38:21 +0800 Jason Wang wrote
> On Fri, Aug 15, 2025 at 10:24 AM Junnan Wu wrote:
> >
> > Sorry, I basically mean that the tx napi which is caused by userspace will not
> > be scheduled during suspend,
> > others can not be guaranteed, such as unfinished packets already in tx
On Fri, Aug 15, 2025 at 10:24 AM Junnan Wu wrote:
>
> Sorry, I basically mean that the tx napi which is caused by userspace will not
> be scheduled during suspend,
> others can not be guaranteed, such as unfinished packets already in tx vq etc.
>
> But after this patch, once `virtnet_close` complete
Sorry, I basically mean that the tx napi which is caused by userspace will not be
scheduled during suspend;
others cannot be guaranteed, such as unfinished packets already in the tx vq, etc.
But after this patch, once `virtnet_close` completes,
both tx and rq napi will be disabled, which guarantees their
On Thu, Aug 14, 2025 at 2:44 PM Junnan Wu wrote:
>
> On Thu, 14 Aug 2025 12:01:18 +0800 Jason Wang wrote:
> > On Thu, Aug 14, 2025 at 10:36 AM Junnan Wu wrote:
> > >
> > > On Wed, 13 Aug 2025 17:23:07 -0700 Jakub Kicinski wrote:
> > > > Sounds like a fix people may want to backport. Could you rep
On Thu, 14 Aug 2025 14:49:06 +0800 Jason Wang wrote:
> On Thu, Aug 14, 2025 at 2:44 PM Junnan Wu wrote:
> >
> > On Thu, 14 Aug 2025 12:01:18 +0800 Jason Wang wrote:
> > > On Thu, Aug 14, 2025 at 10:36 AM Junnan Wu
> > > wrote:
> > > >
> > > > On Wed, 13 Aug 2025 17:23:07 -0700 Jakub Kicinski wro
On Thu, Aug 14, 2025 at 2:44 PM Junnan Wu wrote:
>
> On Thu, 14 Aug 2025 12:01:18 +0800 Jason Wang wrote:
> > On Thu, Aug 14, 2025 at 10:36 AM Junnan Wu wrote:
> > >
> > > On Wed, 13 Aug 2025 17:23:07 -0700 Jakub Kicinski wrote:
> > > > Sounds like a fix people may want to backport. Could you rep
On Thu, 14 Aug 2025 12:01:18 +0800 Jason Wang wrote:
> On Thu, Aug 14, 2025 at 10:36 AM Junnan Wu wrote:
> >
> > On Wed, 13 Aug 2025 17:23:07 -0700 Jakub Kicinski wrote:
> > > Sounds like a fix people may want to backport. Could you repost with
> > > an appropriate Fixes tag added, pointing to the
On Thu, Aug 14, 2025 at 10:36 AM Junnan Wu wrote:
>
> On Wed, 13 Aug 2025 17:23:07 -0700 Jakub Kicinski wrote:
> > Sounds like a fix people may want to backport. Could you repost with
> > an appropriate Fixes tag added, pointing to the earliest commit where
> > the problem can be observed?
>
> Thi
On Wed, 13 Aug 2025 17:23:07 -0700 Jakub Kicinski wrote:
> Sounds like a fix people may want to backport. Could you repost with
> an appropriate Fixes tag added, pointing to the earliest commit where
> the problem can be observed?
This issue is caused by commit "7b0411ef4aa69c9256d6a2c289d0a2b320
On Tue, 12 Aug 2025 17:08:17 +0800 Junnan Wu wrote:
> "Use after free" issue appears in suspend once race occurs when
> napi poll scheduls after `netif_device_detach` and before napi disables.
Sounds like a fix people may want to backport. Could you repost with
an appropriate Fixes tag added, poi
virtnet_poll` schedules at such a coincidental time,
the tx queue state will be cleared.
To solve this issue, adjust the order of
function `virtnet_close` in `virtnet_freeze_down`.
Co-developed-by: Ying Xu
Signed-off-by: Ying Xu
Signed-off-by: Junnan Wu
---
drivers/net/virtio_net.c | 7 ---
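Roughly, the reordering being described (a sketch, not the exact driver
code):

/* Sketch: close the device, and thus disable tx/rx NAPI, before
 * detaching it, so a late virtnet_poll can no longer clear the
 * __QUEUE_STATE_DRV_XOFF state that suspend has just set.
 */
static void virtnet_freeze_down_sketch(struct net_device *dev)
{
        if (netif_running(dev))
                virtnet_close(dev);   /* now runs first */
        netif_device_detach(dev);     /* previously ran before close */
}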
Hello,
kernel test robot noticed "BUG:KASAN:slab-use-after-free_in__inet_hash" on:
commit: 859ca60b71ef223e210d3d003a225d9ca70879fd ("[PATCH net v2] net: ip:
order the reuseport socket in __inet_hash")
url:
https://github.com/intel-lab-lkp/linux/commits/Menglong-D
, Jason Wang wrote:
> > > This patch implements in order support for both split virtqueue and
> > > packed virtqueue. Performance could be gained for the device where the
> > > memory access could be expensive (e.g. vhost-net or a real PCI device):
> > >
> > &
e can
> > implement separate helpers for different virtqueue layout/features
> > then in-order support is implemented on top.
> >
> > Tests show 2%-19% improvement with packed virtqueue PPS with KVM guest
> > vhost-net/testpmd on the host.
> >
> > Chang
On Mon, Jul 28, 2025 at 6:17 PM Michael S. Tsirkin wrote:
>
> On Mon, Jul 28, 2025 at 02:41:29PM +0800, Jason Wang wrote:
> > This patch implements in order support for both split virtqueue and
> > packed virtqueue. Performance could be gained for the device where the
> >
On Mon, Jul 14, 2025 at 10:48 AM Jason Wang wrote:
>
> This patch adds basic in order support for vhost. Two optimizations
> are implemented in this patch:
>
> 1) Since the driver uses descriptors in order, vhost can deduce the next
> avail ring head by counting the number of de
On Thu, Jul 24, 2025 at 8:40 AM Jason Wang wrote:
>
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER for
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then the in-o
On Mon, Jul 28, 2025 at 02:41:29PM +0800, Jason Wang wrote:
> This patch implements in order support for both split virtqueue and
> packed virtqueue. Performance could be gained for the device where the
> memory access could be expensive (e.g. vhost-net or a real PCI device):
>
> Ben
This patch implements in order support for both split virtqueue and
packed virtqueue. Performance could be gained for the device where the
memory access could be expensive (e.g. vhost-net or a real PCI device):
Benchmark with KVM guest:
Vhost-net on the host: (pktgen + XDP_DROP
On Sat, Jul 26, 2025 at 4:57 AM Thorsten Blum wrote:
>
> Hi Jason,
>
> On 23. Jul 2025, at 23:40, Jason Wang wrote:
> >
> > This patch implements in order support for both split virtqueue and
> > packed virtqueue. Performance could be gained for the device where
Hi Jason,
On 23. Jul 2025, at 23:40, Jason Wang wrote:
>
> This patch implements in order support for both split virtqueue and
> packed virtqueue. Performance could be gained for the device where the
> memory access could be expensive (e.g. vhost-net or a real PCI device):
>
>
On Thu, Jul 24, 2025 at 02:40:17PM +0800, Jason Wang wrote:
> This patch implements in order support for both split virtqueue and
> packed virtqueue. Performance could be gained for the device where the
> memory access could be expensive (e.g. vhost-net or a real PCI device):
>
> Ben
This patch implements in order support for both split virtqueue and
packed virtqueue. Performance could be gained for the device where the
memory access could be expensive (e.g. vhost-net or a real PCI device):
Benchmark with KVM guest:
Vhost-net on the host: (pktgen + XDP_DROP
Hello all:
This series tries to implement VIRTIO_F_IN_ORDER for
virtio_ring. This is done by introducing virtqueue ops so we can
implement separate helpers for different virtqueue layout/features
then in-order support is implemented on top.
Tests show 2%-19% improvement with packed virtqueue
On 7/18/25 11:29 AM, Michael S. Tsirkin wrote:
> Paolo I'm likely confused. That series is in net-next, right?
> So now it would be work to drop it from there, and invalidate
> all the testing it got there, for little benefit -
> the merge conflict is easy to resolve.
Yes, that series is in net-ne
On Fri, Jul 18, 2025 at 11:19:26AM +0200, Paolo Abeni wrote:
> On 7/18/25 4:04 AM, Jason Wang wrote:
> > On Thu, Jul 17, 2025 at 9:52 PM Paolo Abeni wrote:
> >> On 7/17/25 8:01 AM, Jason Wang wrote:
> >>> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin
> >>> wrote:
> On Thu, Jul 17, 2025
On 7/18/25 4:04 AM, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 9:52 PM Paolo Abeni wrote:
>> On 7/17/25 8:01 AM, Jason Wang wrote:
>>> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin wrote:
On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 8:04 AM
On Thu, Jul 17, 2025 at 9:52 PM Paolo Abeni wrote:
>
> On 7/17/25 8:01 AM, Jason Wang wrote:
> > On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin wrote:
> >> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> >>> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
>
> On
On 7/17/25 8:01 AM, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin wrote:
>> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
>>> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> This seri
On Thu, Jul 17, 2025 at 02:01:06PM +0800, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin wrote:
> >
> > On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> > > On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
> > > >
> > > > On Mon, 14 Jul 2025 16:47:52 +080
On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin wrote:
>
> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> > On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
> > >
> > > On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> > > > This series implements VIRTIO_F_IN_ORDER sup
On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
> >
> > On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> > > This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> > > feature is designed to improve the perfo
On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski wrote:
>
> On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> > This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> > feature is designed to improve the performance of the virtio ring by
> > optimizing descriptor processing.
> >
On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> feature is designed to improve the performance of the virtio ring by
> optimizing descriptor processing.
>
> Benchmarks show a notable improvement. Please see patch 3 for d
of vhost_add_used_ooo()
> - consistent nheads for vhost_add_used_in_order()
> - typo fixes and other tweaks
>
> Thanks
>
> Jason Wang (3):
> vhost: fail early when __vhost_add_used() fails
> vhost: basic in order support
> vhost_net: basic in_order support
>
> driv
This patch adds basic in order support for vhost. Two optimizations
are implemented in this patch:
1) Since the driver uses descriptors in order, vhost can deduce the next
avail ring head by counting the number of descriptors that have been
used in next_avail_head. This eliminates the need to
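As a rough illustration of optimization 1) (invented names, not the
vhost code itself):

#include <stdint.h>

/* With VIRTIO_F_IN_ORDER, heads are consumed sequentially, so the next
 * avail head can come from a running counter instead of a read of the
 * avail ring in guest memory.
 */
struct vq_sketch {
        uint16_t num;              /* ring size, a power of two */
        uint16_t next_avail_head;  /* heads consumed so far */
};

static uint16_t in_order_next_head(struct vq_sketch *vq)
{
        return vq->next_avail_head++ & (vq->num - 1);
}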
early when vhost_add_used() fails
- drop unused parameters of vhost_add_used_ooo()
- consistent nheads for vhost_add_used_in_order()
- typo fixes and other tweaks
Thanks
Jason Wang (3):
vhost: fail early when __vhost_add_used() fails
vhost: basic in order support
vhost_net: basic in_order
On Thu, Jul 10, 2025 at 5:05 PM Eugenio Perez Martin
wrote:
>
> On Tue, Jul 8, 2025 at 8:48 AM Jason Wang wrote:
> >
> > This patch adds basic in order support for vhost. Two optimizations
> > are implemented in this patch:
> >
> > 1) Since driver uses descr
On Tue, Jul 8, 2025 at 8:48 AM Jason Wang wrote:
>
> This patch adds basic in order support for vhost. Two optimizations
> are implemented in this patch:
>
> 1) Since the driver uses descriptors in order, vhost can deduce the next
> avail ring head by counting the number of de
On 7/8/25 2:48 AM, Jason Wang wrote:
This patch adds basic in order support for vhost. Two optimizations
are implemented in this patch:
1) Since the driver uses descriptors in order, vhost can deduce the next
avail ring head by counting the number of descriptors that have been
used in
This patch adds basic in order support for vhost. Two optimizations
are implemented in this patch:
1) Since the driver uses descriptors in order, vhost can deduce the next
avail ring head by counting the number of descriptors that have been
used in next_avail_head. This eliminates the need to
order support
vhost_net: basic in_order support
drivers/vhost/net.c | 88 +-
drivers/vhost/vhost.c | 121 +++---
drivers/vhost/vhost.h | 8 ++-
3 files changed, 170 insertions(+), 47 deletions(-)
--
2.31.1
Jul 1, 2025 at 2:57 PM Michael S. Tsirkin wrote:
> >>>>
> >>>> On Mon, Jun 16, 2025 at 04:25:17PM +0800, Jason Wang wrote:
> >>>>> This patch implements in order support for both split virtqueue and
> >>>>> packed virtqueue.
> >&g
implements in order support for both split virtqueue and
packed virtqueue.
I'd like to see more motivation for this work, documented.
It's not really performance, not as it stands, see below:
Benchmark with KVM guest + testpmd on the host shows:
For split virtqueue: no obvious differ
On Wed, Jul 2, 2025 at 6:57 PM Michael S. Tsirkin wrote:
>
> On Wed, Jul 02, 2025 at 05:29:18PM +0800, Jason Wang wrote:
> > On Tue, Jul 1, 2025 at 2:57 PM Michael S. Tsirkin wrote:
> > >
> > > On Mon, Jun 16, 2025 at 04:25:17PM +0800, Jason Wang wrote:
> >
On Wed, Jul 02, 2025 at 05:29:18PM +0800, Jason Wang wrote:
> On Tue, Jul 1, 2025 at 2:57 PM Michael S. Tsirkin wrote:
> >
> > On Mon, Jun 16, 2025 at 04:25:17PM +0800, Jason Wang wrote:
> > > This patch implements in order support for both split virtqueue and
> > &
On Tue, Jul 1, 2025 at 2:57 PM Michael S. Tsirkin wrote:
>
> On Mon, Jun 16, 2025 at 04:25:17PM +0800, Jason Wang wrote:
> > This patch implements in order support for both split virtqueue and
> > packed virtqueue.
>
> I'd like to see more motivation for this work,
When writing symtypes information, we iterate through the entire hash
table containing type expansions. The key order varies unpredictably
as new entries are added, making it harder to compare symtypes between
builds.
Resolve this by sorting the type expansions by name before output.
Signed-off
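The approach amounts to something like the following (a sketch, not the
actual genksyms code):

#include <stdlib.h>
#include <string.h>

/* Collect the hash-table entries into an array and sort by name so the
 * symtypes output no longer depends on insertion order.
 */
struct expansion {
        const char *name;
        const char *definition;
};

static int cmp_expansion_name(const void *a, const void *b)
{
        const struct expansion *ea = a, *eb = b;
        return strcmp(ea->name, eb->name);
}

static void sort_expansions(struct expansion *arr, size_t n)
{
        qsort(arr, n, sizeof(*arr), cmp_expansion_name);
}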
On Mon, Jun 16, 2025 at 04:24:58PM +0800, Jason Wang wrote:
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER for
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then
On Mon, Jun 16, 2025 at 04:25:17PM +0800, Jason Wang wrote:
> This patch implements in order support for both split virtqueue and
> packed virtqueue.
I'd like to see more motivation for this work, documented.
It's not really performance, not as it stands, see below:
>
> Be
18:51, Masahiro Yamada
> > > wrote:
> > > >
> > > > On Wed, Jun 25, 2025 at 6:52 PM Giuliano Procida
> > > > wrote:
> > > > >
> > > > > When writing symtypes information, we iterate through the entire hash
> > > &
cida
> > > wrote:
> > > >
> > > > When writing symtypes information, we iterate through the entire hash
> > > > table containing type expansions. The key order varies unpredictably
> > > > as new entries are added, making it harder to compare sy
e through the entire hash
> > > table containing type expansions. The key order varies unpredictably
> > > as new entries are added, making it harder to compare symtypes between
> > > builds.
> > >
> > > Resolve this by sorting the type expansions b
Hi.
On Sun, 29 Jun 2025 at 18:51, Masahiro Yamada wrote:
>
> On Wed, Jun 25, 2025 at 6:52 PM Giuliano Procida wrote:
> >
> > When writing symtypes information, we iterate through the entire hash
> > table containing type expansions. The key order varies unpredictabl
On Wed, Jun 25, 2025 at 6:52 PM Giuliano Procida wrote:
>
> When writing symtypes information, we iterate through the entire hash
> table containing type expansions. The key order varies unpredictably
> as new entries are added, making it harder to compare symtypes between
> buil
When writing symtypes information, we iterate through the entire hash
table containing type expansions. The key order varies unpredictably
as new entries are added, making it harder to compare symtypes between
builds.
Resolve this by sorting the type expansions by name before output.
Signed-off
On Mon, Jun 16, 2025 at 10:25 AM Jason Wang wrote:
>
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER for
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> t
This patch implements in order support for both split virtqueue and
packed virtqueue.
Benchmark with KVM guest + testpmd on the host shows:
For split virtqueue: no obvious differences were noticed
For packed virtqueue:
1) RX gets 3.1% PPS improvements from 6.3 Mpps to 6.5 Mpps
2) TX gets 4.6
Hello all:
This series tries to implement VIRTIO_F_IN_ORDER for
virtio_ring. This is done by introducing virtqueue ops so we can
implement separate helpers for different virtqueue layout/features
then in-order support is implemented on top.
Tests show 3%-5% improvement with packed virtqueue PPS
Previously, the order for acquiring the locks required for the migration
function move_enc_context_from() was: 1) memslot lock 2) vCPU lock. This
can trigger a deadlock warning because a vCPU IOCTL modifying memslots
will acquire the locks in reverse order: 1) vCPU lock 2) memslot lock.
This
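In sketch form (hypothetical lock names, not the KVM code), the fix
makes every path take the pair in the same global order:

#include <linux/mutex.h>

static DEFINE_MUTEX(vcpu_lock_sketch);
static DEFINE_MUTEX(memslot_lock_sketch);

/* Both the migration path and the vCPU ioctl path now agree on the
 * order: vCPU lock first, then memslot lock, so no AB-BA deadlock.
 */
static void migration_path_fixed(void)
{
        mutex_lock(&vcpu_lock_sketch);
        mutex_lock(&memslot_lock_sketch);
        /* ... move the encryption context ... */
        mutex_unlock(&memslot_lock_sketch);
        mutex_unlock(&vcpu_lock_sketch);
}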
we can
> > implement separate helpers for different virtqueue layout/features
> > then in-order support is implemented on top.
> >
> > Tests show 3%-5% improvement with packed virtqueue PPS with KVM guest
> > testpmd on the host.
>
> ok this looks quite clean. We are i
On Wed, May 28, 2025 at 02:42:15PM +0800, Jason Wang wrote:
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER for
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then
On Wed, May 28, 2025 at 8:42 AM Jason Wang wrote:
>
> Hello all:
>
> This series tries to implement VIRTIO_F_IN_ORDER for
> virtio_ring. This is done by introducing virtqueue ops so we can
> implement separate helpers for different virtqueue layout/features
> then the in-o
This patch implements in order support for both split virtqueue and
packed virtqueue.
Benchmark with KVM guest + testpmd on the host shows:
For split virtqueue: no obvious differences were noticed
For packed virtqueue:
1) RX gets 3.1% PPS improvements from 6.3 Mpps to 6.5 Mpps
2) TX gets 4.6
Hello all:
This series tries to implement VIRTIO_F_IN_ORDER for
virtio_ring. This is done by introducing virtqueue ops so we can
implement separate helpers for different virtqueue layout/features
then in-order support is implemented on top.
Tests show 3%-5% improvement with packed virtqueue PPS
>
> > > > Tested-by: Lei Yang
> > > >
> > > > On Mon, Mar 24, 2025 at 1:45 PM Jason Wang wrote:
> > > > >
> > > > > Hello all:
> > > > >
> > > > > This series tries to implement VIRTIO_F_IN_OR