Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [PATCH RFC] virtio-net: enable configurable tx queue size

2017-05-25 Thread Michael S. Tsirkin
On Thu, May 25, 2017 at 08:13:31PM +0800, Jason Wang wrote:
> > 
> > The point of using the config field here is, when tomorrow's device is
> > released with a requirement for the driver
> > to use max_chain_size=1022 (not today's 1023), today's driver will
> > naturally support tomorrow's device without
> > any modification, since it reads the max_chain_size from the config
> > field which is filled by the device (either today's
> > device or tomorrow's device with different values).
> 
> I'm not saying there is anything wrong with the config field you introduced.
> But you should answer the following question:
> 
> Is it useful to support more than 1024? If yes, why? If not, introducing a
> VIRTIO_F_SG_1024 is more than enough, I think.
> 
> Thanks

I think it's useful to limit it below queue size too.
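
To illustrate, a rough device-side sketch of that idea (not part of the patch;
MIN() and virtio_queue_get_num() are existing QEMU helpers, while the clamping
function itself and the tx_queue_index parameter are only assumptions about how
a device could enforce this):

    /* Illustrative only: never advertise a chain limit that reaches the
     * negotiated TX queue size; cap it one below the smaller of the two.
     * The result would be what gets written into max_chain_size. */
    static uint16_t virtio_net_clamped_chain_size(VirtIODevice *vdev,
                                                  int tx_queue_index)
    {
        return MIN(VIRTIO_NET_MAX_CHAIN_SIZE,
                   virtio_queue_get_num(vdev, tx_queue_index) - 1);
    }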

-- 
MST




Re: [virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [PATCH RFC] virtio-net: enable configurable tx queue size

2017-05-25 Thread Jason Wang



On 2017年05月25日 19:50, Wei Wang wrote:

On 05/25/2017 03:49 PM, Jason Wang wrote:



On 2017年05月24日 16:18, Wei Wang wrote:

On 05/24/2017 11:19 AM, Jason Wang wrote:



On 2017年05月23日 18:36, Wei Wang wrote:

On 05/23/2017 02:24 PM, Jason Wang wrote:



On 2017年05月23日 13:15, Wei Wang wrote:

On 05/23/2017 10:04 AM, Jason Wang wrote:



On 2017年05月22日 19:52, Wei Wang wrote:

On 05/20/2017 04:42 AM, Michael S. Tsirkin wrote:

On Fri, May 19, 2017 at 10:32:19AM +0800, Wei Wang wrote:
This patch enables the virtio-net tx queue size to be configurable
between 256 (the default queue size) and 1024 by the user. The queue
size specified by the user should be a power of 2.

Setting the tx queue size to be 1024 requires the guest driver to
support the VIRTIO_NET_F_MAX_CHAIN_SIZE feature.
This should be a generic ring feature, not one specific to virtio net.

OK. How about making two more changes below:

1) make the default tx queue size = 1024 (instead of 256).


As has been pointed out, you need a compat entry for the default value too
in this case.


The driver gets the size info from the device, so would it cause any
compatibility issue if we change the default ring size to 1024 in the
vhost case? In other words, is there any software (i.e. any virtio-net
driver) that functions based on the assumption of a 256 queue size?


I don't know. But is it safe if, e.g., we migrate from 1024 to an older
QEMU with 256 as its queue size?


Yes, I think it is safe, because the default queue size is only used when
the device is being set up (e.g. feature negotiation).
During migration (when the device has already been running), the
destination machine will load the device state based on the queue size
that is being used (i.e. vring.num).
The default value is not used any more after the setup phase.


I haven't checked all cases, but there are two obvious things:

- After migration and after a reset, it will go back to 256 on dst.


Please let me clarify what we want first: when QEMU boots and it realizes
the virtio-net device, if tx_queue_size is not given on the command line,
we want to use 1024 as the queue size, that is, virtio_add_queue(,1024,),
which sets vring.num=1024 and vring.num_default=1024.

When migration happens, the vring.num variable (which has been 1024) is
sent to the destination machine, where virtio_load() will assign the
destination side vring.num to that value (1024). So, vring.num=1024
continues to work on the destination machine with the old QEMU. I don't
see an issue here.

If reset happens, I think the device and driver will re-do the
initialization steps. So, if they are with the old QEMU, then they use the
old QEMU realize() function to do virtio_add_queue(,256,), and the driver
will re-do the probe() steps and take vring.num=256, then everything works
fine.
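
A rough sketch of the realize-time call being discussed (simplified from
hw/net/virtio-net.c; the 1024 literal is the proposal, not current code):

    /* virtio_add_queue() records the size as both vring.num and
     * vring.num_default; migration transfers vring.num, so the destination
     * keeps whatever size the source was actually using. */
    n->vqs[i].tx_vq = virtio_add_queue(vdev, 1024, virtio_net_handle_tx_bh);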


Probably works fine, but the size is 256 forever after migration.
Instead of using 1024, which works just one time and may be risky, isn't
it better to just use 256 for old machine types?




If it migrates to the old QEMU, then I think everything should work in
the old QEMU style after reset (not just in our virtio-net case). I think
this should be something natural and reasonable.


The point is it should behave exactly the same not only after reset but 
also before.




Why would the change depend on machine types?





- ABI is changed, e.g -M pc-q35-2.10 returns 1024 on 2.11

Didn't get this. Could you please explain more? Which ABI would be changed,
and why does it affect q35?




Nothing specific to q35, just to point out the machine type of 2.10.

E.g. on 2.10, with -M pc-q35-2.10, vring.num is 256; on 2.11, with
-M pc-q35-2.10, vring.num is 1024.




I think it's not related to the machine type.

Probably we can use the QEMU version to discuss here.
Suppose this change is made in the next version, QEMU 2.10. Then with
QEMU 2.10, when people create a virtio-net device as usual:
-device virtio-net-pci,netdev=net1,mac=52:54:00:00:00:01
it will create a device with queue size = 1024.
If they use QEMU 2.9, then the queue size = 256.

What ABI change did you mean?


See https://fedoraproject.org/wiki/Features/KVM_Stable_Guest_ABI.
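
In short, the guest-visible default has to stay stable per machine type. A
minimal sketch of how that is usually handled (illustrative only: it assumes
the device grows a "tx_queue_size" property, and the entry is modelled on
QEMU's HW_COMPAT_* tables):

    /* Pin the old default for pre-2.10 machine types, so that e.g.
     * "-M pc-q35-2.9" keeps a 256-entry TX queue on newer QEMU. */
    #define HW_COMPAT_2_9 \
        {\
            .driver   = "virtio-net-device",\
            .property = "tx_queue_size",\
            .value    = "256",\
        },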











For live migration, the queue size that is being used will also be
transferred to the destination.




We can reduce the size (to 256) if the MAX_CHAIN_SIZE feature
is not supported by the guest.
In this way, people who apply the QEMU patch can directly use the
largest queue size (1024) without adding a boot command line option.

2) The vhost backend does not use writev, so I think when the vhost
backend is used, using a 1024 queue size should not depend on the
MAX_CHAIN_SIZE feature.


But do we need to consider an even larger queue size now?


Need Michael's feedback on this. Meanwhile, I'll get the next version of
the code ready and check whether a larger queue size would cause any
corner cases.


The problem is, do we really need a new config field for this? Or is
just introducing a flag which means "I support up to 1024 sgs" sufficient?

[virtio-dev] Re: [Qemu-devel] [virtio-dev] Re: [PATCH RFC] virtio-net: enable configurable tx queue size

2017-05-25 Thread Wei Wang

On 05/25/2017 03:49 PM, Jason Wang wrote:



On 2017年05月24日 16:18, Wei Wang wrote:

On 05/24/2017 11:19 AM, Jason Wang wrote:



On 2017年05月23日 18:36, Wei Wang wrote:

On 05/23/2017 02:24 PM, Jason Wang wrote:



On 2017年05月23日 13:15, Wei Wang wrote:

On 05/23/2017 10:04 AM, Jason Wang wrote:



On 2017年05月22日 19:52, Wei Wang wrote:

On 05/20/2017 04:42 AM, Michael S. Tsirkin wrote:

On Fri, May 19, 2017 at 10:32:19AM +0800, Wei Wang wrote:
This patch enables the virtio-net tx queue size to be configurable
between 256 (the default queue size) and 1024 by the user. The queue
size specified by the user should be a power of 2.

Setting the tx queue size to be 1024 requires the guest driver to
support the VIRTIO_NET_F_MAX_CHAIN_SIZE feature.
This should be a generic ring feature, not one specific to virtio net.

OK. How about making two more changes below:

1) make the default tx queue size = 1024 (instead of 256).


As has been pointed out, you need a compat entry for the default value too
in this case.


The driver gets the size info from the device, so would it cause any
compatibility issue if we change the default ring size to 1024 in the
vhost case? In other words, is there any software (i.e. any virtio-net
driver) that functions based on the assumption of a 256 queue size?


I don't know. But is it safe if, e.g., we migrate from 1024 to an older
QEMU with 256 as its queue size?


Yes, I think it is safe, because the default queue size is only used when
the device is being set up (e.g. feature negotiation).
During migration (when the device has already been running), the
destination machine will load the device state based on the queue size
that is being used (i.e. vring.num).
The default value is not used any more after the setup phase.


I haven't checked all cases, but there are two obvious things:

- After migration and after a reset, it will go back to 256 on dst.


Please let me clarify what we want first: when QEMU boots and it realizes
the virtio-net device, if tx_queue_size is not given on the command line,
we want to use 1024 as the queue size, that is, virtio_add_queue(,1024,),
which sets vring.num=1024 and vring.num_default=1024.

When migration happens, the vring.num variable (which has been 1024) is
sent to the destination machine, where virtio_load() will assign the
destination side vring.num to that value (1024). So, vring.num=1024
continues to work on the destination machine with the old QEMU. I don't
see an issue here.

If reset happens, I think the device and driver will re-do the
initialization steps. So, if they are with the old QEMU, then they use the
old QEMU realize() function to do virtio_add_queue(,256,), and the driver
will re-do the probe() steps and take vring.num=256, then everything works
fine.


Probably works fine, but the size is 256 forever after migration.
Instead of using 1024, which works just one time and may be risky, isn't
it better to just use 256 for old machine types?




If it migrates to the old QEMU, then I think everything should work in
the old QEMU style after reset (not just in our virtio-net case). I think
this should be something natural and reasonable.


Why would the change depend on machine types?





- ABI is changed, e.g -M pc-q35-2.10 returns 1024 on 2.11

Didn't get this. Could you please explain more? Which ABI would be changed,
and why does it affect q35?




Nothing specific to q35, just to point out the machine type of 2.10.

E.g. on 2.10, with -M pc-q35-2.10, vring.num is 256; on 2.11, with
-M pc-q35-2.10, vring.num is 1024.




I think it's not related to the machine type.

Probably we can use the QEMU version to discuss here.
Suppose this change is made in the next version, QEMU 2.10. Then with
QEMU 2.10, when people create a virtio-net device as usual:
-device virtio-net-pci,netdev=net1,mac=52:54:00:00:00:01
it will create a device with queue size = 1024.
If they use QEMU 2.9, then the queue size = 256.

What ABI change did you mean?









For live migration, the queue size that is being used will also be
transferred to the destination.




We can reduce the size (to 256) if the MAX_CHAIN_SIZE feature
is not supported by the guest.
In this way, people who apply the QEMU patch can directly use the
largest queue size (1024) without adding a boot command line option.

2) The vhost backend does not use writev, so I think when the vhost
backend is used, using a 1024 queue size should not depend on the
MAX_CHAIN_SIZE feature.


But do we need to consider an even larger queue size now?


Need Michael's feedback on this. Meanwhile, I'll get the next version of
the code ready and check whether a larger queue size would cause any
corner cases.


The problem is, do we really need a new config field for this? Or is
just introducing a flag which means "I support up to 1024 sgs" sufficient?




For now, it also works without the new config field, max_chain_size,
but I would prefer to keep the new config field, because:

without it, the driver will work on an assumed value, 1023.
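
For reference, a sketch of the config layout the patch extends (the existing
fields follow include/standard-headers/linux/virtio_net.h; max_chain_size is
the proposed addition):

    struct virtio_net_config {
        uint8_t  mac[6];              /* ETH_ALEN */
        uint16_t status;
        uint16_t max_virtqueue_pairs;
        uint16_t mtu;
        uint16_t max_chain_size;      /* proposed: largest descriptor chain
                                         the driver may submit on a tx queue */
    };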



Re: [virtio-dev] Re: [PATCH RFC] virtio-net: enable configurable tx queue size

2017-05-24 Thread Wei Wang

On 05/24/2017 11:19 AM, Jason Wang wrote:



On 2017年05月23日 18:36, Wei Wang wrote:

On 05/23/2017 02:24 PM, Jason Wang wrote:



On 2017年05月23日 13:15, Wei Wang wrote:

On 05/23/2017 10:04 AM, Jason Wang wrote:



On 2017年05月22日 19:52, Wei Wang wrote:

On 05/20/2017 04:42 AM, Michael S. Tsirkin wrote:

On Fri, May 19, 2017 at 10:32:19AM +0800, Wei Wang wrote:

This patch enables the virtio-net tx queue size to be configurable
between 256 (the default queue size) and 1024 by the user. The queue
size specified by the user should be a power of 2.

Setting the tx queue size to be 1024 requires the guest driver to
support the VIRTIO_NET_F_MAX_CHAIN_SIZE feature.
This should be a generic ring feature, not one specific to virtio net.

OK. How about making two more changes below:

1) make the default tx queue size = 1024 (instead of 256).


As has been pointed out, you need a compat entry for the default value too
in this case.


The driver gets the size info from the device, so would it cause any
compatibility issue if we change the default ring size to 1024 in the
vhost case? In other words, is there any software (i.e. any virtio-net
driver) that functions based on the assumption of a 256 queue size?


I don't know. But is it safe if, e.g., we migrate from 1024 to an older
QEMU with 256 as its queue size?


Yes, I think it is safe, because the default queue size is only used when
the device is being set up (e.g. feature negotiation).
During migration (when the device has already been running), the
destination machine will load the device state based on the queue size
that is being used (i.e. vring.num).
The default value is not used any more after the setup phase.


I haven't checked all cases, but there are two obvious things:

- After migration and after a reset, it will go back to 256 on dst.


Please let me clarify what we want first: when QEMU boots and it realizes
the virtio-net device, if tx_queue_size is not given on the command line,
we want to use 1024 as the queue size, that is, virtio_add_queue(,1024,),
which sets vring.num=1024 and vring.num_default=1024.

When migration happens, the vring.num variable (which has been 1024) is
sent to the destination machine, where virtio_load() will assign the
destination side vring.num to that value (1024). So, vring.num=1024
continues to work on the destination machine with the old QEMU. I don't
see an issue here.

If reset happens, I think the device and driver will re-do the
initialization steps. So, if they are with the old QEMU, then they use the
old QEMU realize() function to do virtio_add_queue(,256,), and the driver
will re-do the probe() steps and take vring.num=256, then everything works
fine.




- ABI is changed, e.g -M pc-q35-2.10 returns 1024 on 2.11

Didn't get this. Could you please explain more? Which ABI would be changed,
and why does it affect q35?









For live migration, the queue size that is being used will also be
transferred to the destination.




We can reduce the size (to 256) if the MAX_CHAIN_SIZE feature
is not supported by the guest.
In this way, people who apply the QEMU patch can directly use the
largest queue size (1024) without adding a boot command line option.

2) The vhost backend does not use writev, so I think when the vhost
backend is used, using a 1024 queue size should not depend on the
MAX_CHAIN_SIZE feature.


But do we need to consider an even larger queue size now?


Need Michael's feedback on this. Meanwhile, I'll get the next version of
the code ready and check whether a larger queue size would cause any
corner cases.


The problem is, do we really need a new config field for this? Or is
just introducing a flag which means "I support up to 1024 sgs" sufficient?




For now, it also works without the new config field, max_chain_size,
but I would prefer to keep the new config field, because:

without it, the driver will work on an assumed value, 1023.


This is the fact, and it's too late to change legacy drivers.


In the future, if QEMU needs to change it to 1022, then how can the
new QEMU tell the old driver, which supports the MAX_CHAIN_SIZE
feature but works with the old hardcoded value 1023?


Can the config field help in this case? The problem is similar to
ANY_HEADER_SG: the only thing we can do is clarify the limitation for
new drivers.




I think it helps, because the driver will do
virtio_cread_feature(vdev, VIRTIO_NET_F_MAX_CHAIN_SIZE,
                     struct virtio_net_config, max_chain_size, &chain_size);
to get the max_chain_size from the device. So when a new QEMU has a new
value of max_chain_size, the old driver will get the new value.
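
As a rough guest-side sketch (Linux driver style; the feature bit and config
field come from this RFC, and the 1023 fallback is the legacy assumption
discussed above):

    static u16 virtnet_max_chain_size(struct virtio_device *vdev)
    {
        u16 chain_size;

        /* virtio_cread_feature() reads the field only if the feature bit
         * was negotiated; otherwise keep the historically assumed 1023. */
        if (virtio_cread_feature(vdev, VIRTIO_NET_F_MAX_CHAIN_SIZE,
                                 struct virtio_net_config, max_chain_size,
                                 &chain_size) < 0)
            chain_size = 1023;

        return chain_size;
    }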

Best,
Wei









Re: [virtio-dev] Re: [PATCH RFC] virtio-net: enable configurable tx queue size

2017-05-22 Thread Wei Wang

On 05/23/2017 10:04 AM, Jason Wang wrote:



On 2017年05月22日 19:52, Wei Wang wrote:

On 05/20/2017 04:42 AM, Michael S. Tsirkin wrote:

On Fri, May 19, 2017 at 10:32:19AM +0800, Wei Wang wrote:

This patch enables the virtio-net tx queue size to be configurable
between 256 (the default queue size) and 1024 by the user. The queue
size specified by the user should be a power of 2.

Setting the tx queue size to be 1024 requires the guest driver to
support the VIRTIO_NET_F_MAX_CHAIN_SIZE feature.

This should be a generic ring feature, not one specific to virtio net.

OK. How about making two more changes below:

1) make the default tx queue size = 1024 (instead of 256).


As has been pointed out, you need a compat entry for the default value too
in this case.


The driver gets the size info from the device, so would it cause any
compatibility issue if we change the default ring size to 1024 in the
vhost case? In other words, is there any software (i.e. any virtio-net
driver) that functions based on the assumption of a 256 queue size?

For live migration, the queue size that is being used will also be
transferred to the destination.




We can reduce the size (to 256) if the MAX_CHAIN_SIZE feature
is not supported by the guest.
In this way, people who apply the QEMU patch can directly use the
largest queue size (1024) without adding a boot command line option.

2) The vhost backend does not use writev, so I think when the vhost
backend is used, using a 1024 queue size should not depend on the
MAX_CHAIN_SIZE feature.


But do we need to consider an even larger queue size now?


Need Michael's feedback on this. Meanwhile, I'll get the next version of
the code ready and check whether a larger queue size would cause any
corner cases.



Btw, I think it's better to draft a spec patch.



I think it should be easier to draft the spec patch when the code is
almost done.


Best,
Wei








Re: [virtio-dev] Re: [PATCH RFC] virtio-net: enable configurable tx queue size

2017-05-22 Thread Jason Wang



On 2017年05月22日 19:52, Wei Wang wrote:

On 05/20/2017 04:42 AM, Michael S. Tsirkin wrote:

On Fri, May 19, 2017 at 10:32:19AM +0800, Wei Wang wrote:

This patch enables the virtio-net tx queue size to be configurable
between 256 (the default queue size) and 1024 by the user. The queue
size specified by the user should be a power of 2.

Setting the tx queue size to be 1024 requires the guest driver to
support the VIRTIO_NET_F_MAX_CHAIN_SIZE feature.

This should be a generic ring feature, not one specific to virtio net.

OK. How about making two more changes below:

1) make the default tx queue size = 1024 (instead of 256).


As has been pointed out, you need a compat entry for the default value too in this case.


We can reduce the size (to 256) if the MAX_CHAIN_SIZE feature
is not supported by the guest.
In this way, people who apply the QEMU patch can directly use the
largest queue size (1024) without adding a boot command line option.

2) The vhost backend does not use writev, so I think when the vhost
backend is used, using a 1024 queue size should not depend on the
MAX_CHAIN_SIZE feature.


But do we need to consider an even larger queue size now?

Btw, I think it's better to draft a spec patch.

Thanks




Best,
Wei






[virtio-dev] Re: [PATCH RFC] virtio-net: enable configurable tx queue size

2017-05-19 Thread Michael S. Tsirkin
On Fri, May 19, 2017 at 10:32:19AM +0800, Wei Wang wrote:
> This patch enables the virtio-net tx queue size to be configurable
> between 256 (the default queue size) and 1024 by the user. The queue
> size specified by the user should be a power of 2.
> 
> Setting the tx queue size to be 1024 requires the guest driver to
> support the VIRTIO_NET_F_MAX_CHAIN_SIZE feature.

This should be a generic ring feature, not one specific to virtio net.

> This feature restricts
> the guest driver from chaining 1024 vring descriptors, which may cause
> the device side implementation to send more than 1024 iov to writev.
> Currently, the max chain size allowed for the guest driver is set to
> 1023.
> 
> In the case that the tx queue size is set to 1024 and the
> VIRTIO_NET_F_MAX_CHAIN_SIZE feature is not supported by the guest driver,
> the default tx queue size (256) will be used.
> 
> Signed-off-by: Wei Wang 
> ---
>  hw/net/virtio-net.c | 71 +++--
>  include/hw/virtio/virtio-net.h  |  1 +
>  include/standard-headers/linux/virtio_net.h |  3 ++
>  3 files changed, 71 insertions(+), 4 deletions(-)
> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 7d091c9..ef38cb1 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -33,8 +33,12 @@
>  
>  /* previously fixed value */
>  #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
> +#define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
>  /* for now, only allow larger queues; with virtio-1, guest can downsize */
>  #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
> +#define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
> +
> +#define VIRTIO_NET_MAX_CHAIN_SIZE 1023
>  
>  /*
>   * Calculate the number of bytes up to and including the given 'field' of
> @@ -57,6 +61,8 @@ static VirtIOFeature feature_sizes[] = {
>       .end = endof(struct virtio_net_config, max_virtqueue_pairs)},
>      {.flags = 1 << VIRTIO_NET_F_MTU,
>       .end = endof(struct virtio_net_config, mtu)},
> +    {.flags = 1 << VIRTIO_NET_F_MAX_CHAIN_SIZE,
> +     .end = endof(struct virtio_net_config, max_chain_size)},
>      {}
>  };
>  
> @@ -84,6 +90,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
>      virtio_stw_p(vdev, &netcfg.status, n->status);
>      virtio_stw_p(vdev, &netcfg.max_virtqueue_pairs, n->max_queues);
>      virtio_stw_p(vdev, &netcfg.mtu, n->net_conf.mtu);
> +    virtio_stw_p(vdev, &netcfg.max_chain_size, VIRTIO_NET_MAX_CHAIN_SIZE);
>      memcpy(netcfg.mac, n->mac, ETH_ALEN);
>      memcpy(config, &netcfg, n->config_size);
>  }
> @@ -568,6 +575,7 @@ static uint64_t virtio_net_get_features(VirtIODevice *vdev, uint64_t features,
>      features |= n->host_features;
>  
>      virtio_add_feature(&features, VIRTIO_NET_F_MAC);
> +    virtio_add_feature(&features, VIRTIO_NET_F_MAX_CHAIN_SIZE);
>  
>      if (!peer_has_vnet_hdr(n)) {
>          virtio_clear_feature(&features, VIRTIO_NET_F_CSUM);
> @@ -603,6 +611,7 @@ static uint64_t virtio_net_bad_features(VirtIODevice *vdev)
>      virtio_add_feature(&features, VIRTIO_NET_F_HOST_TSO4);
>      virtio_add_feature(&features, VIRTIO_NET_F_HOST_TSO6);
>      virtio_add_feature(&features, VIRTIO_NET_F_HOST_ECN);
> +    virtio_add_feature(&features, VIRTIO_NET_F_MAX_CHAIN_SIZE);
>  
>      return features;
>  }
> @@ -635,6 +644,27 @@ static inline uint64_t virtio_net_supported_guest_offloads(VirtIONet *n)
>      return virtio_net_guest_offloads_by_features(vdev->guest_features);
>  }
>  
> +static bool is_tx(int queue_index)
> +{
> +    return queue_index % 2 == 1;
> +}
> +
> +static void virtio_net_change_tx_queue_size(VirtIONet *n)
> +{
> +    VirtIODevice *vdev = VIRTIO_DEVICE(n);
> +    int i, num_queues = virtio_get_num_queues(vdev);
> +
> +    if (n->net_conf.tx_queue_size == VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE) {
> +        return;
> +    }
> +
> +    for (i = 0; i < num_queues; i++) {
> +        if (is_tx(i)) {
> +            virtio_queue_set_num(vdev, i, n->net_conf.tx_queue_size);
> +        }
> +    }
> +}
> +
>  static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
>  {
>      VirtIONet *n = VIRTIO_NET(vdev);
> @@ -649,6 +679,16 @@ static void virtio_net_set_features(VirtIODevice *vdev, uint64_t features)
>                                 virtio_has_feature(features,
>                                                    VIRTIO_F_VERSION_1));
>  
> +    /*
> +     * Change the tx queue size if the guest supports
> +     * VIRTIO_NET_F_MAX_CHAIN_SIZE. This will restrict the guest from sending
> +     * a very large chain of vring descriptors (e.g. 1024), which may cause
> +     * 1025 iov to be written to writev.
> +     */
> +    if (virtio_has_feature(features, VIRTIO_NET_F_MAX_CHAIN_SIZE)) {
> +        virtio_net_change_tx_queue_size(n);
> +    }
> +
>      if (n->has_vnet_hdr) {
>          n->curr_guest_offloads =
>              virtio_net_guest_offloads_by_features(features);
> @@ -1297,8 +1337,8 @@ static int32_t virtio_net_flush_tx(VirtIONetQueue *q)
>  
>