Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-06 Thread Akihiko Odaki
On Mon, Mar 7, 2022 at 1:52 PM Jason Wang  wrote:
>
> On Mon, Mar 7, 2022 at 12:25 PM Akihiko Odaki  wrote:
> >
> > On Mon, Mar 7, 2022 at 12:59 PM Jason Wang  wrote:
> > >
> > > On Fri, Mar 4, 2022 at 12:37 PM Akihiko Odaki  
> > > wrote:
> > > >
> > > > On 2022/03/04 10:37, Jason Wang wrote:
> > > > > On Thu, Mar 3, 2022 at 11:43 PM Vladislav Yaroshchuk
> > > > >  wrote:
> > > > >>
> > > > >>
> > > > >>
> > > > >> On Tue, Mar 1, 2022 at 11:21 AM Akihiko Odaki 
> > > > >>  wrote:
> > > > >>>
> > > > >>> On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:
> > > >    > I'm not sure that only one field is enough, because
> > > >    > we may have two states on bh execution start:
> > > >    > 1. There are packets in vmnet buffer s->packets_buf
> > > >    >  that were rejected by qemu_send_async and waiting
> > > >    >  to be sent. If this happens, we should complete 
> > > >  sending
> > > >    >  these waiting packets with qemu_send_async firstly,
> > > >    >  and after that we should call vmnet_read to get
> > > >    >  new ones and send them to QEMU;
> > > >    > 2. There are no packets in s->packets_buf to be sent to
> > > >    >  qemu, we only need to get new packets from vmnet
> > > >    >  with vmnet_read and send them to QEMU
> > > > 
> > > >   In case 1, you should just keep calling 
> > > >  qemu_send_packet_async.
> > > >   Actually
> > > >   qemu_send_packet_async adds the packet to its internal queue 
> > > >  and calls
> > > >   the callback when it is consumed.
> > > > 
> > > > 
> > > >  I'm not sure we can keep calling qemu_send_packet_async,
> > > >  because, as the docs in net/queue.c say:
> > > > 
> > > >  /* [...]
> > > > * If a sent callback is provided to send(), the caller must 
> > > >  handle a
> > > > * zero return from the delivery handler by not sending any more 
> > > >  packets
> > > > * until we have invoked the callback. Only in that case will we 
> > > >  queue
> > > > * the packet.
> > > > *
> > > > * If a sent callback isn't provided, we just drop the packet to 
> > > >  avoid
> > > > * unbounded queueing.
> > > > */
> > > > 
> > > >  So after we did vmnet_read and read N packets
> > > >  into temporary s->packets_buf, we begin calling
> > > >  qemu_send_packet_async. If it returns 0, it says
> > > >  "no more packets until sent_cb called please".
> > > >  At this moment we have N packets in s->packets_buf
> > > >  and already queued K < N of them. But packets K..N
> > > >  are not queued and wait for sent_cb before being sent
> > > >  with qemu_send_packet_async.
> > > >  Thus when sent_cb is called, we should finish
> > > >  our transfer of packets K..N from s->packets_buf
> > > >  to QEMU by calling qemu_send_packet_async.
> > > >  I meant this.
> > > > >>>
> > > > >>> I missed the comment. The description contradicts the actual
> > > > >>> code; qemu_net_queue_send_iov appends the packet to the queue
> > > > >>> whenever it cannot send one immediately.
> > > > >>>
> > > > >>
> > > > >> Yes, it appends, but (net/queue.c):
> > > > >> *  qemu_net_queue_send tries to deliver the packet
> > > > >>  immediately. If the packet cannot be delivered, the
> > > > >>  qemu_net_queue_append is called and 0 is returned
> > > > >>  to say the caller "the receiver is not ready, hold on";
> > > > >> *  qemu_net_queue_append does a probe before adding
> > > > >>  the packet to the queue:
> > > > >>  if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
> > > > >>  return; /* drop if queue full and no callback */
> > > > >>  }
> > > > >>
> > > > >> The queue is not infinite, so we have three cases:
> > > > >> 1. The queue is not full -> append the packet, no
> > > > >>  problems here
> > > > >> 2. The queue is full, no callback -> we cannot notify
> > > > >>  a caller when we're ready, so just drop the packet
> > > > >>  if we can't append it.
> > > > >> 3. The queue is full, callback present -> we can notify
> > > > >>  a caller when we are ready, so "let's queue this packet,
> > > > >>  but expect no more (!) packets to be sent until I call
> > > > >>  sent_cb when the queue is ready"
> > > > >>
> > > > >> Therefore if we provide a callback and keep sending
> > > > >> packets if 0 is returned, this may cause unlimited(!)
> > > > >> queue growth. To prevent this, we should stop sending
> > > > >> packets and wait for notification callback to continue.
> > > > >
> > > > > Right.
> > > > >
> > > > >>
> > > > >> I don't see any contradiction with that comment.
> > > > >>
> > > > >>> Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell
> > > > >>> if calling qemu_send_packet_async is allowed after it returns 0?

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-06 Thread Jason Wang
On Mon, Mar 7, 2022 at 12:25 PM Akihiko Odaki  wrote:
>
> On Mon, Mar 7, 2022 at 12:59 PM Jason Wang  wrote:
> >
> > On Fri, Mar 4, 2022 at 12:37 PM Akihiko Odaki  
> > wrote:
> > >
> > > On 2022/03/04 10:37, Jason Wang wrote:
> > > > On Thu, Mar 3, 2022 at 11:43 PM Vladislav Yaroshchuk
> > > >  wrote:
> > > >>
> > > >>
> > > >>
> > > >> On Tue, Mar 1, 2022 at 11:21 AM Akihiko Odaki 
> > > >>  wrote:
> > > >>>
> > > >>> On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:
> > >    > I'm not sure that only one field is enough, because
> > >    > we may have two states on bh execution start:
> > >    > 1. There are packets in vmnet buffer s->packets_buf
> > >    >  that were rejected by qemu_send_async and waiting
> > >    >  to be sent. If this happens, we should complete sending
> > >    >  these waiting packets with qemu_send_async firstly,
> > >    >  and after that we should call vmnet_read to get
> > >    >  new ones and send them to QEMU;
> > >    > 2. There are no packets in s->packets_buf to be sent to
> > >    >  qemu, we only need to get new packets from vmnet
> > >    >  with vmnet_read and send them to QEMU
> > > 
> > >   In case 1, you should just keep calling qemu_send_packet_async.
> > >   Actually
> > >   qemu_send_packet_async adds the packet to its internal queue 
> > >  and calls
> > >   the callback when it is consumed.
> > > 
> > > 
> > >  I'm not sure we can keep calling qemu_send_packet_async,
> > >  because, as the docs in net/queue.c say:
> > > 
> > >  /* [...]
> > > * If a sent callback is provided to send(), the caller must 
> > >  handle a
> > > * zero return from the delivery handler by not sending any more 
> > >  packets
> > > * until we have invoked the callback. Only in that case will we 
> > >  queue
> > > * the packet.
> > > *
> > > * If a sent callback isn't provided, we just drop the packet to 
> > >  avoid
> > > * unbounded queueing.
> > > */
> > > 
> > >  So after we did vmnet_read and read N packets
> > >  into temporary s->packets_buf, we begin calling
> > >  qemu_send_packet_async. If it returns 0, it says
> > >  "no more packets until sent_cb called please".
> > >  At this moment we have N packets in s->packets_buf
> > >  and already queued K < N of them. But packets K..N
> > >  are not queued and wait for sent_cb before being sent
> > >  with qemu_send_packet_async.
> > >  Thus when sent_cb is called, we should finish
> > >  our transfer of packets K..N from s->packets_buf
> > >  to QEMU by calling qemu_send_packet_async.
> > >  I meant this.
> > > >>>
> > > >>> I missed the comment. The description contradicts the actual
> > > >>> code; qemu_net_queue_send_iov appends the packet to the queue whenever
> > > >>> it cannot send one immediately.
> > > >>>
> > > >>
> > > >> Yes, it appends, but (net/queue.c):
> > > >> *  qemu_net_queue_send tries to deliver the packet
> > > >>  immediately. If the packet cannot be delivered, the
> > > >>  qemu_net_queue_append is called and 0 is returned
> > > >>  to say the caller "the receiver is not ready, hold on";
> > > >> *  qemu_net_queue_append does a probe before adding
> > > >>  the packet to the queue:
> > > >>  if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
> > > >>  return; /* drop if queue full and no callback */
> > > >>  }
> > > >>
> > > >> The queue is not infinite, so we have three cases:
> > > >> 1. The queue is not full -> append the packet, no
> > > >>  problems here
> > > >> 2. The queue is full, no callback -> we cannot notify
> > > >>  a caller when we're ready, so just drop the packet
> > > >>  if we can't append it.
> > > >> 3. The queue is full, callback present -> we can notify
> > > >>  a caller when we are ready, so "let's queue this packet,
> > > >>  but expect no more (!) packets to be sent until I call
> > > >>  sent_cb when the queue is ready"
> > > >>
> > > >> Therefore if we provide a callback and keep sending
> > > >> packets if 0 is returned, this may cause unlimited(!)
> > > >> queue growth. To prevent this, we should stop sending
> > > >> packets and wait for notification callback to continue.
> > > >
> > > > Right.
> > > >
> > > >>
> > > >> I don't see any contradiction with that comment.
> > > >>
> > > >>> Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell if
> > > >>> calling qemu_send_packet_async is allowed after it returns 0?
> > > >>>
> > > >>
> > > >> It may be wrong, but I think it's not allowed to send
> > > >> packets after qemu_send_packet_async returns 0.
> > > >>
> > > >> Jason Wang, can you confirm please?
> > > >
> > > > With a cb, we can't do this. All users with cb will disable the source
> > > > polling and depend on the cb to re-read the polling.
> > > > (tap/l2tpv3/socket).

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-06 Thread Akihiko Odaki
On Mon, Mar 7, 2022 at 12:59 PM Jason Wang  wrote:
>
> On Fri, Mar 4, 2022 at 12:37 PM Akihiko Odaki  wrote:
> >
> > On 2022/03/04 10:37, Jason Wang wrote:
> > > On Thu, Mar 3, 2022 at 11:43 PM Vladislav Yaroshchuk
> > >  wrote:
> > >>
> > >>
> > >>
> > >> On Tue, Mar 1, 2022 at 11:21 AM Akihiko Odaki  
> > >> wrote:
> > >>>
> > >>> On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:
> >    > I'm not sure that only one field is enough, because
> >    > we may have two states on bh execution start:
> >    > 1. There are packets in vmnet buffer s->packets_buf
> >    >  that were rejected by qemu_send_async and waiting
> >    >  to be sent. If this happens, we should complete sending
> >    >  these waiting packets with qemu_send_async firstly,
> >    >  and after that we should call vmnet_read to get
> >    >  new ones and send them to QEMU;
> >    > 2. There are no packets in s->packets_buf to be sent to
> >    >  qemu, we only need to get new packets from vmnet
> >    >  with vmnet_read and send them to QEMU
> > 
> >   In case 1, you should just keep calling qemu_send_packet_async.
> >   Actually
> >   qemu_send_packet_async adds the packet to its internal queue and 
> >  calls
> >   the callback when it is consumed.
> > 
> > 
> >  I'm not sure we can keep calling qemu_send_packet_async,
> >  because, as the docs in net/queue.c say:
> > 
> >  /* [...]
> > * If a sent callback is provided to send(), the caller must handle a
> > * zero return from the delivery handler by not sending any more 
> >  packets
> > * until we have invoked the callback. Only in that case will we 
> >  queue
> > * the packet.
> > *
> > * If a sent callback isn't provided, we just drop the packet to 
> >  avoid
> > * unbounded queueing.
> > */
> > 
> >  So after we did vmnet_read and read N packets
> >  into temporary s->packets_buf, we begin calling
> >  qemu_send_packet_async. If it returns 0, it says
> >  "no more packets until sent_cb called please".
> >  At this moment we have N packets in s->packets_buf
> >  and already queued K < N of them. But packets K..N
> >  are not queued and wait for sent_cb before being sent
> >  with qemu_send_packet_async.
> >  Thus when sent_cb is called, we should finish
> >  our transfer of packets K..N from s->packets_buf
> >  to QEMU by calling qemu_send_packet_async.
> >  I meant this.
> > >>>
> > >>> I missed the comment. The description contradicts the actual
> > >>> code; qemu_net_queue_send_iov appends the packet to the queue whenever
> > >>> it cannot send one immediately.
> > >>>
> > >>
> > >> Yes, it appends, but (net/queue.c):
> > >> *  qemu_net_queue_send tries to deliver the packet
> > >>  immediately. If the packet cannot be delivered, the
> > >>  qemu_net_queue_append is called and 0 is returned
> > >>  to say the caller "the receiver is not ready, hold on";
> > >> *  qemu_net_queue_append does a probe before adding
> > >>  the packet to the queue:
> > >>  if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
> > >>  return; /* drop if queue full and no callback */
> > >>  }
> > >>
> > >> The queue is not infinite, so we have three cases:
> > >> 1. The queue is not full -> append the packet, no
> > >>  problems here
> > >> 2. The queue is full, no callback -> we cannot notify
> > >>  a caller when we're ready, so just drop the packet
> > >>  if we can't append it.
> > >> 3. The queue is full, callback present -> we can notify
> > >>  a caller when we are ready, so "let's queue this packet,
> > >>  but expect no more (!) packets to be sent until I call
> > >>  sent_cb when the queue is ready"
> > >>
> > >> Therefore if we provide a callback and keep sending
> > >> packets if 0 is returned, this may cause unlimited(!)
> > >> queue growth. To prevent this, we should stop sending
> > >> packets and wait for notification callback to continue.
> > >
> > > Right.
> > >
> > >>
> > >> I don't see any contradiction with that comment.
> > >>
> > >>> Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell if
> > >>> calling qemu_send_packet_async is allowed after it returns 0?
> > >>>
> > >>
> > >> It may be wrong, but I think it's not allowed to send
> > >> packets after qemu_send_packet_async returns 0.
> > >>
> > >> Jason Wang, can you confirm please?
> > >
> > > With a cb, we can't do this. All users with cb will disable the source
> > > polling and depend on the cb to re-read the polling.
> > > (tap/l2tpv3/socket).
> > >
> > > Without a cb, we can. As analyzed above, qemu_net_queue_append() can
> > > limit the number of packets queued in this case.
> >
> > vmnet can read multiple packets at once. What about such a case? Isn't
> > calling qemu_send_packet_async for the packets already read, and then
> > stopping reading more, fine?

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-06 Thread Jason Wang
On Fri, Mar 4, 2022 at 12:37 PM Akihiko Odaki  wrote:
>
> On 2022/03/04 10:37, Jason Wang wrote:
> > On Thu, Mar 3, 2022 at 11:43 PM Vladislav Yaroshchuk
> >  wrote:
> >>
> >>
> >>
> >> On Tue, Mar 1, 2022 at 11:21 AM Akihiko Odaki  
> >> wrote:
> >>>
> >>> On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:
>    > I'm not sure that only one field is enough, because
>    > we may have two states on bh execution start:
>    > 1. There are packets in vmnet buffer s->packets_buf
>    >  that were rejected by qemu_send_async and waiting
>    >  to be sent. If this happens, we should complete sending
>    >  these waiting packets with qemu_send_async firstly,
>    >  and after that we should call vmnet_read to get
>    >  new ones and send them to QEMU;
>    > 2. There are no packets in s->packets_buf to be sent to
>    >  qemu, we only need to get new packets from vmnet
>    >  with vmnet_read and send them to QEMU
> 
>   In case 1, you should just keep calling qemu_send_packet_async.
>   Actually
>   qemu_send_packet_async adds the packet to its internal queue and 
>  calls
>   the callback when it is consumed.
> 
> 
>  I'm not sure we can keep calling qemu_send_packet_async,
>  because, as the docs in net/queue.c say:
> 
>  /* [...]
> * If a sent callback is provided to send(), the caller must handle a
> * zero return from the delivery handler by not sending any more 
>  packets
> * until we have invoked the callback. Only in that case will we queue
> * the packet.
> *
> * If a sent callback isn't provided, we just drop the packet to avoid
> * unbounded queueing.
> */
> 
>  So after we did vmnet_read and read N packets
>  into temporary s->packets_buf, we begin calling
>  qemu_send_packet_async. If it returns 0, it says
>  "no more packets until sent_cb called please".
>  At this moment we have N packets in s->packets_buf
>  and already queued K < N of them. But packets K..N
>  are not queued and wait for sent_cb before being sent
>  with qemu_send_packet_async.
>  Thus when sent_cb is called, we should finish
>  our transfer of packets K..N from s->packets_buf
>  to QEMU by calling qemu_send_packet_async.
>  I meant this.
> >>>
> >>> I missed the comment. The description contradicts the actual
> >>> code; qemu_net_queue_send_iov appends the packet to the queue whenever
> >>> it cannot send one immediately.
> >>>
> >>
> >> Yes, it appends, but (net/queue.c):
> >> *  qemu_net_queue_send tries to deliver the packet
> >>  immediately. If the packet cannot be delivered, the
> >>  qemu_net_queue_append is called and 0 is returned
> >>  to say the caller "the receiver is not ready, hold on";
> >> *  qemu_net_queue_append does a probe before adding
> >>  the packet to the queue:
> >>  if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
> >>  return; /* drop if queue full and no callback */
> >>  }
> >>
> >> The queue is not infinite, so we have three cases:
> >> 1. The queue is not full -> append the packet, no
> >>  problems here
> >> 2. The queue is full, no callback -> we cannot notify
> >>  a caller when we're ready, so just drop the packet
> >>  if we can't append it.
> >> 3. The queue is full, callback present -> we can notify
> >>  a caller when we are ready, so "let's queue this packet,
> >>  but expect no more (!) packets to be sent until I call
> >>  sent_cb when the queue is ready"
> >>
> >> Therefore if we provide a callback and keep sending
> >> packets if 0 is returned, this may cause unlimited(!)
> >> queue growth. To prevent this, we should stop sending
> >> packets and wait for notification callback to continue.
> >
> > Right.
> >
> >>
> >> I don't see any contradiction with that comment.
> >>
> >>> Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell if
> >>> calling qemu_send_packet_async is allowed after it returns 0?
> >>>
> >>
> >> It may be wrong, but I think it's not allowed to send
> >> packets after qemu_send_packet_async returns 0.
> >>
> >> Jason Wang, can you confirm please?
> >
> > With a cb, we can't do this. All users with cb will disable the source
> > polling and depend on the cb to re-read the polling.
> > (tap/l2tpv3/socket).
> >
> > Without a cb, we can. As analyzed above, qemu_net_queue_append() can
> > limit the number of packets queued in this case.
>
> vmnet can read multiple packets at once. What about such a case? Isn't
> calling qemu_send_packet_async for the packets already read, and then
> stopping reading more, fine?

It should be fine. I remember I asked whether or not the source
could be disabled, but I got the answer that it can't.
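So the agreed-on shape of the bottom half might look like the sketch below. All names here (vmnet_bh, try_send_buffered, the peer_budget counter) are hypothetical stand-ins for illustration only, not code from the actual patch; send_async() models qemu_send_packet_async() returning 0 to mean "queued, hold on":

```c
#include <assert.h>
#include <stddef.h>

#define BATCH 4

static int buf[BATCH];   /* one batch of packets read from vmnet */
static size_t buf_n;     /* packets in the current batch */
static size_t buf_pos;   /* next packet to send */
static size_t delivered; /* packets handed to the peer (sent or queued) */
static int peer_budget;  /* stand-in: packets the peer accepts before pausing */
static int paused;       /* set once the peer asked us to hold off */

/* Stand-in for qemu_send_packet_async(): 0 means "queued, hold on". */
static int send_async(int pkt)
{
    (void)pkt;
    delivered++;
    return peer_budget-- > 0;
}

/* State 1: hand over leftover packets.  A 0 return means the current
 * packet was queued anyway, so advance past it but stop sending. */
static int try_send_buffered(void)
{
    while (buf_pos < buf_n) {
        if (send_async(buf[buf_pos++]) == 0) {
            return 0;
        }
    }
    buf_pos = buf_n = 0;
    return 1;
}

/* Bottom half: finish the previous batch before reading a new one,
 * and never read while the peer asked us to hold off. */
static void vmnet_bh(void)
{
    if (paused || !try_send_buffered()) {
        paused = 1; /* no vmnet_read until sent_cb fires */
        return;
    }
    for (buf_n = 0; buf_n < BATCH; buf_n++) { /* stand-in for vmnet_read */
        buf[buf_n] = (int)buf_n;
    }
    buf_pos = 0;
    if (!try_send_buffered()) {
        paused = 1;
    }
}

/* sent_cb from the peer: its queue drained, sending may continue. */
static void sent_cb(void)
{
    paused = 0;
    vmnet_bh();
}
```

The key property is that a batch already read from vmnet is always finished (possibly across several sent_cb invocations) before vmnet_read is called again.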

Thanks

>
> Regards,
> Akihiko Odaki
>
> >
> > Thanks
> >
> >>
> >> Best Regards,
> 

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-03 Thread Akihiko Odaki

On 2022/03/04 3:34, Akihiko Odaki wrote:

On Fri, Mar 4, 2022 at 12:43 AM Vladislav Yaroshchuk
 wrote:




On Tue, Mar 1, 2022 at 11:21 AM Akihiko Odaki  wrote:


On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:

  > I'm not sure that only one field is enough, because
  > we may have two states on bh execution start:
  > 1. There are packets in vmnet buffer s->packets_buf
  >  that were rejected by qemu_send_async and waiting
  >  to be sent. If this happens, we should complete sending
  >  these waiting packets with qemu_send_async firstly,
  >  and after that we should call vmnet_read to get
  >  new ones and send them to QEMU;
  > 2. There are no packets in s->packets_buf to be sent to
  >  qemu, we only need to get new packets from vmnet
  >  with vmnet_read and send them to QEMU

 In case 1, you should just keep calling qemu_send_packet_async.
 Actually
 qemu_send_packet_async adds the packet to its internal queue and calls
 the callback when it is consumed.


I'm not sure we can keep calling qemu_send_packet_async,
because, as the docs in net/queue.c say:

/* [...]
   * If a sent callback is provided to send(), the caller must handle a
   * zero return from the delivery handler by not sending any more packets
   * until we have invoked the callback. Only in that case will we queue
   * the packet.
   *
   * If a sent callback isn't provided, we just drop the packet to avoid
   * unbounded queueing.
   */

So after we did vmnet_read and read N packets
into temporary s->packets_buf, we begin calling
qemu_send_packet_async. If it returns 0, it says
"no more packets until sent_cb called please".
At this moment we have N packets in s->packets_buf
and already queued K < N of them. But packets K..N
are not queued and wait for sent_cb before being sent
with qemu_send_packet_async.
Thus when sent_cb is called, we should finish
our transfer of packets K..N from s->packets_buf
to QEMU by calling qemu_send_packet_async.
I meant this.


I missed the comment. The description contradicts the actual
code; qemu_net_queue_send_iov appends the packet to the queue whenever
it cannot send one immediately.



Yes, it appends, but (net/queue.c):
*  qemu_net_queue_send tries to deliver the packet
 immediately. If the packet cannot be delivered, the
 qemu_net_queue_append is called and 0 is returned
 to say the caller "the receiver is not ready, hold on";
*  qemu_net_queue_append does a probe before adding
 the packet to the queue:
 if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
 return; /* drop if queue full and no callback */
 }

The queue is not infinite, so we have three cases:
1. The queue is not full -> append the packet, no
 problems here
2. The queue is full, no callback -> we cannot notify
 a caller when we're ready, so just drop the packet
 if we can't append it.
3. The queue is full, callback present -> we can notify
 a caller when we are ready, so "let's queue this packet,
 but expect no more (!) packets to be sent until I call
 sent_cb when the queue is ready"

Therefore if we provide a callback and keep sending
packets if 0 is returned, this may cause unlimited(!)
queue growth. To prevent this, we should stop sending
packets and wait for notification callback to continue.

I don't see any contradiction with that comment.


Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell if
calling qemu_send_packet_async is allowed after it returns 0?



It may be wrong, but I think it's not allowed to send
packets after qemu_send_packet_async returns 0.

Jason Wang, can you confirm please?

Best Regards,

Vladislav Yaroshchuk



Regards,
Akihiko Odaki






The unlimited queue growth would not happen if you stop calling
vmnet_read after qemu_send_packet_async returns 0. So I think the
comment should be amended to say something like:
"Once qemu_send_packet_async returns 0, the client should stop reading
more packets from the underlying NIC to prevent infinite growth of the
queue until the last callback gets called."
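As an illustration only (this is not the vmnet patch; the names and the send_async() stand-in are made up for the sketch), the rule in that amended comment boils down to a loop like this:

```c
#include <assert.h>
#include <stddef.h>

#define BUF_LEN 8

static int packets_buf[BUF_LEN]; /* packets already read from the NIC */
static size_t send_pos;          /* next packet to hand to the peer */
static size_t buffered;          /* number of packets currently in the buffer */
static int peer_budget;          /* stand-in: packets the peer accepts before pausing */

/* Stand-in for qemu_send_packet_async(): returns 0 when the packet was
 * queued and the caller must hold off, nonzero when it was consumed. */
static int send_async(int pkt)
{
    (void)pkt;
    return peer_budget-- > 0;
}

/* Push buffered packets to the peer.  On a 0 return the current packet
 * is already queued, so advance past it but stop sending and, crucially,
 * stop reading more packets from the NIC. */
static void send_buffered(void)
{
    while (send_pos < buffered) {
        if (send_async(packets_buf[send_pos++]) == 0) {
            return; /* wait for sent_cb before touching the NIC again */
        }
    }
    send_pos = buffered = 0; /* drained: reading from the NIC may resume */
}

/* sent_cb from the peer: its queue drained, finish packets K..N. */
static void sent_cb(void)
{
    send_buffered();
}
```

The loop never drops a packet it has already read; it only defers packets K..N until the callback says the queue has room again.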

The unique feature of vmnet is that it can read multiple packets at
once, and I guess it is the reason why the comment in net/queue.c
missed the case. But this is all my guess so I need confirmation from
the maintainer.

Regards,
Akihiko Odaki


I forgot to include Jason Wang in the To field of this email.



Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-03 Thread Akihiko Odaki

On 2022/03/04 10:37, Jason Wang wrote:

On Thu, Mar 3, 2022 at 11:43 PM Vladislav Yaroshchuk
 wrote:




On Tue, Mar 1, 2022 at 11:21 AM Akihiko Odaki  wrote:


On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:

  > I'm not sure that only one field is enough, because
  > we may have two states on bh execution start:
  > 1. There are packets in vmnet buffer s->packets_buf
  >  that were rejected by qemu_send_async and waiting
  >  to be sent. If this happens, we should complete sending
  >  these waiting packets with qemu_send_async firstly,
  >  and after that we should call vmnet_read to get
  >  new ones and send them to QEMU;
  > 2. There are no packets in s->packets_buf to be sent to
  >  qemu, we only need to get new packets from vmnet
  >  with vmnet_read and send them to QEMU

 In case 1, you should just keep calling qemu_send_packet_async.
 Actually
 qemu_send_packet_async adds the packet to its internal queue and calls
 the callback when it is consumed.


I'm not sure we can keep calling qemu_send_packet_async,
because, as the docs in net/queue.c say:

/* [...]
   * If a sent callback is provided to send(), the caller must handle a
   * zero return from the delivery handler by not sending any more packets
   * until we have invoked the callback. Only in that case will we queue
   * the packet.
   *
   * If a sent callback isn't provided, we just drop the packet to avoid
   * unbounded queueing.
   */

So after we did vmnet_read and read N packets
into temporary s->packets_buf, we begin calling
qemu_send_packet_async. If it returns 0, it says
"no more packets until sent_cb called please".
At this moment we have N packets in s->packets_buf
and already queued K < N of them. But packets K..N
are not queued and wait for sent_cb before being sent
with qemu_send_packet_async.
Thus when sent_cb is called, we should finish
our transfer of packets K..N from s->packets_buf
to QEMU by calling qemu_send_packet_async.
I meant this.


I missed the comment. The description contradicts the actual
code; qemu_net_queue_send_iov appends the packet to the queue whenever
it cannot send one immediately.



Yes, it appends, but (net/queue.c):
*  qemu_net_queue_send tries to deliver the packet
 immediately. If the packet cannot be delivered, the
 qemu_net_queue_append is called and 0 is returned
 to say the caller "the receiver is not ready, hold on";
*  qemu_net_queue_append does a probe before adding
 the packet to the queue:
 if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
 return; /* drop if queue full and no callback */
 }

The queue is not infinite, so we have three cases:
1. The queue is not full -> append the packet, no
 problems here
2. The queue is full, no callback -> we cannot notify
 a caller when we're ready, so just drop the packet
 if we can't append it.
3. The queue is full, callback present -> we can notify
 a caller when we are ready, so "let's queue this packet,
 but expect no more (!) packets to be sent until I call
 sent_cb when the queue is ready"

Therefore if we provide a callback and keep sending
packets if 0 is returned, this may cause unlimited(!)
queue growth. To prevent this, we should stop sending
packets and wait for notification callback to continue.


Right.



I don't see any contradiction with that comment.


Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell if
calling qemu_send_packet_async is allowed after it returns 0?



It may be wrong, but I think it's not allowed to send
packets after qemu_send_packet_async returns 0.

Jason Wang, can you confirm please?


With a cb, we can't do this. All users with cb will disable the source
polling and depend on the cb to re-read the polling.
(tap/l2tpv3/socket).

Without a cb, we can. As analyzed above, qemu_net_queue_append() can
limit the number of packets queued in this case.


vmnet can read multiple packets at once. What about such a case? Isn't
calling qemu_send_packet_async for the packets already read, and then
stopping reading more, fine?


Regards,
Akihiko Odaki



Thanks



Best Regards,

Vladislav Yaroshchuk



Regards,
Akihiko Odaki











Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-03 Thread Jason Wang
On Thu, Mar 3, 2022 at 11:43 PM Vladislav Yaroshchuk
 wrote:
>
>
>
> On Tue, Mar 1, 2022 at 11:21 AM Akihiko Odaki  wrote:
>>
>> On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:
>> >  > I'm not sure that only one field is enough, because
>> >  > we may have two states on bh execution start:
>> >  > 1. There are packets in vmnet buffer s->packets_buf
>> >  >  that were rejected by qemu_send_async and waiting
>> >  >  to be sent. If this happens, we should complete sending
>> >  >  these waiting packets with qemu_send_async firstly,
>> >  >  and after that we should call vmnet_read to get
>> >  >  new ones and send them to QEMU;
>> >  > 2. There are no packets in s->packets_buf to be sent to
>> >  >  qemu, we only need to get new packets from vmnet
>> >  >  with vmnet_read and send them to QEMU
>> >
>> > In case 1, you should just keep calling qemu_send_packet_async.
>> > Actually
>> > qemu_send_packet_async adds the packet to its internal queue and calls
>> > the callback when it is consumed.
>> >
>> >
>> > I'm not sure we can keep calling qemu_send_packet_async,
>> > because, as the docs in net/queue.c say:
>> >
>> > /* [...]
>> >   * If a sent callback is provided to send(), the caller must handle a
>> >   * zero return from the delivery handler by not sending any more packets
>> >   * until we have invoked the callback. Only in that case will we queue
>> >   * the packet.
>> >   *
>> >   * If a sent callback isn't provided, we just drop the packet to avoid
>> >   * unbounded queueing.
>> >   */
>> >
>> > So after we did vmnet_read and read N packets
>> > into temporary s->packets_buf, we begin calling
>> > qemu_send_packet_async. If it returns 0, it says
>> > "no more packets until sent_cb called please".
>> > At this moment we have N packets in s->packets_buf
>> > and already queued K < N of them. But packets K..N
>> > are not queued and wait for sent_cb before being sent
>> > with qemu_send_packet_async.
>> > Thus when sent_cb is called, we should finish
>> > our transfer of packets K..N from s->packets_buf
>> > to QEMU by calling qemu_send_packet_async.
>> > I meant this.
>>
>> I missed the comment. The description contradicts the actual
>> code; qemu_net_queue_send_iov appends the packet to the queue whenever
>> it cannot send one immediately.
>>
>
> Yes, it appends, but (net/queue.c):
> *  qemu_net_queue_send tries to deliver the packet
> immediately. If the packet cannot be delivered, the
> qemu_net_queue_append is called and 0 is returned
> to say the caller "the receiver is not ready, hold on";
> *  qemu_net_queue_append does a probe before adding
> the packet to the queue:
> if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
> return; /* drop if queue full and no callback */
> }
>
> The queue is not infinite, so we have three cases:
> 1. The queue is not full -> append the packet, no
> problems here
> 2. The queue is full, no callback -> we cannot notify
> a caller when we're ready, so just drop the packet
> if we can't append it.
> 3. The queue is full, callback present -> we can notify
> a caller when we are ready, so "let's queue this packet,
> but expect no more (!) packets to be sent until I call
> sent_cb when the queue is ready"
>
> Therefore if we provide a callback and keep sending
> packets if 0 is returned, this may cause unlimited(!)
> queue growth. To prevent this, we should stop sending
> packets and wait for notification callback to continue.

Right.

>
> I don't see any contradiction with that comment.
>
>> Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell if
>> calling qemu_send_packet_async is allowed after it returns 0?
>>
>
> It may be wrong, but I think it's not allowed to send
> packets after qemu_send_packet_async returns 0.
>
> Jason Wang, can you confirm please?

With a cb, we can't do this. All users with cb will disable the source
polling and depend on the cb to re-read the polling.
(tap/l2tpv3/socket).

Without a cb, we can. As analyzed above, qemu_net_queue_append() can
limit the number of packets queued in this case.
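To restate the with-cb/without-cb distinction, here is a toy model of the decision described above; it is a simplification for illustration, not the real code in net/queue.c:

```c
#include <assert.h>
#include <stddef.h>

typedef void (*NetPacketSent)(void);

enum SendResult {
    DELIVERED,      /* the receiver consumed the packet immediately */
    QUEUED_HOLD_ON, /* appended to the queue; caller must wait for sent_cb */
    DROPPED         /* queue full and no callback: the packet is discarded */
};

/* Toy model of the delivery decision discussed in this thread. */
static enum SendResult queue_send(int receiver_ready, size_t nq_count,
                                  size_t nq_maxlen, NetPacketSent sent_cb)
{
    if (receiver_ready) {
        return DELIVERED;
    }
    if (nq_count >= nq_maxlen && !sent_cb) {
        return DROPPED; /* queueing stays bounded without a callback */
    }
    /* With a callback the queue could grow without bound, which is why
     * the caller must stop sending until sent_cb runs. */
    return QUEUED_HOLD_ON;
}

static void dummy_cb(void)
{
}
```

Without a cb, the nq_maxlen check caps the queue by dropping; with a cb, the cap is the caller's promise to stop sending after a 0 return.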

Thanks

>
> Best Regards,
>
> Vladislav Yaroshchuk
>
>>
>> Regards,
>> Akihiko Odaki
>
>
>




Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-03 Thread Akihiko Odaki
On Fri, Mar 4, 2022 at 12:43 AM Vladislav Yaroshchuk
 wrote:
>
>
>
> On Tue, Mar 1, 2022 at 11:21 AM Akihiko Odaki  wrote:
>>
>> On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:
>> >  > Not sure that only one field is enough, cause
>> >  > we may have two states on bh execution start:
>> >  > 1. There are packets in vmnet buffer s->packets_buf
>> >  >  that were rejected by qemu_send_async and waiting
>> >  >  to be sent. If this happens, we should complete sending
>> >  >  these waiting packets with qemu_send_async firstly,
>> >  >  and after that we should call vmnet_read to get
>> >  >  new ones and send them to QEMU;
>> >  > 2. There are no packets in s->packets_buf to be sent to
>> >  >  qemu, we only need to get new packets from vmnet
>> >  >  with vmnet_read and send them to QEMU
>> >
>> > In case 1, you should just keep calling qemu_send_packet_async.
>> > Actually
>> > qemu_send_packet_async adds the packet to its internal queue and calls
>> > the callback when it is consumed.
>> >
>> >
>> > I'm not sure we can keep calling qemu_send_packet_async,
>> > because as docs from net/queue.c says:
>> >
>> > /* [...]
>> >   * If a sent callback is provided to send(), the caller must handle a
>> >   * zero return from the delivery handler by not sending any more packets
>> >   * until we have invoked the callback. Only in that case will we queue
>> >   * the packet.
>> >   *
>> >   * If a sent callback isn't provided, we just drop the packet to avoid
>> >   * unbounded queueing.
>> >   */
>> >
>> > So after we did vmnet_read and read N packets
>> > into temporary s->packets_buf, we begin calling
>> > qemu_send_packet_async. If it returns 0 - it says
>> > "no more packets until sent_cb called please".
>> > At this moment we have N packets in s->packets_buf
>> > and already queued K < N of them. But, packets K..N
>> > are not queued and keep waiting for sent_cb to be sent
>> > with qemu_send_packet_async.
>> > Thus when sent_cb called, we should finish
>> > our transfer of packets K..N from s->packets_buf
>> > to qemu calling qemu_send_packet_async.
>> > I meant this.
>>
>> I missed the comment. The description is contradicting with the actual
>> code; qemu_net_queue_send_iov appends the packet to the queue whenever
>> it cannot send one immediately.
>>
>
> Yes, it appends, but (net/queue.c):
> *  qemu_net_queue_send tries to deliver the packet
> immediately. If the packet cannot be delivered, the
> qemu_net_queue_append is called and 0 is returned
> to say the caller "the receiver is not ready, hold on";
> *  qemu_net_queue_append does a probe before adding
> the packet to the queue:
> if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
> return; /* drop if queue full and no callback */
> }
>
> The queue is not infinite, so we have three cases:
> 1. The queue is not full -> append the packet, no
> problems here
> 2. The queue is full, no callback -> we cannot notify
> a caller when we're ready, so just drop the packet
> if we can't append it.
> 3. The queue is full, callback present -> we can notify
> a caller when we are ready, so "let's queue this packet,
> but expect that no more (!) packets are sent until I call
> sent_cb when the queue is ready"
>
> Therefore if we provide a callback and keep sending
> packets after 0 is returned, this may cause unlimited(!)
> queue growth. To prevent this, we should stop sending
> packets and wait for the notification callback to continue.
>
> I don't see any contradiction with that comment.
>
>> Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell if
>> calling qemu_send_packet_async is allowed after it returns 0?
>>
>
> It may be wrong, but I think it's not allowed to send
> packets after qemu_send_packet_async returns 0.
>
> Jason Wang, can you confirm please?
>
> Best Regards,
>
> Vladislav Yaroshchuk
>
>>
>> Regards,
>> Akihiko Odaki
>
>
>

The unlimited queue growth would not happen if you stop calling
vmnet_read after qemu_send_packet_async returns 0. So I think the
comment should be amended to say something like:
"Once qemu_send_packet_async returns 0, the client should stop reading
more packets from the underlying NIC until the last callback gets
called, to prevent unbounded growth of the queue."

The unique feature of vmnet is that it can read multiple packets at
once, and I guess it is the reason why the comment in net/queue.c
missed the case. But this is all my guess so I need confirmation from
the maintainer.
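
The resume-from-callback pattern described above -- stop handing off a
batch once the send function returns 0, and continue with packets K..N
from the completion callback -- can be sketched as a standalone model
(fake_send_async() is a stand-in for qemu_send_packet_async(); QCAP
models the point at which the peer applies backpressure; none of this is
the actual vmnet-common.m code):

```c
#include <assert.h>
#include <stddef.h>

#define NPKTS 8   /* N: packets read from the NIC in one batch */
#define QCAP  3   /* point at which the stand-in peer pushes back */

static int    delivered[NPKTS]; /* order in which packets went out */
static size_t ndelivered;
static size_t inflight;
static void (*pending_cb)(void);

/* Stand-in for qemu_send_packet_async(): accepts the packet but returns
 * 0 ("stop sending until the callback") once its queue is full. */
static long fake_send_async(int pkt, void (*cb)(void))
{
    delivered[ndelivered++] = pkt;
    if (++inflight >= QCAP) {
        pending_cb = cb;
        return 0;
    }
    return 1;
}

struct batch {
    int    pkts[NPKTS];
    size_t next;  /* K: first packet not yet handed off */
    size_t count; /* N */
};

static struct batch *cur; /* batch currently being drained */

static void resume_cb(void);

/* Hand off packets K..N, stopping as soon as the peer returns 0. */
static void send_remaining(struct batch *b)
{
    while (b->next < b->count) {
        int pkt = b->pkts[b->next++];
        if (fake_send_async(pkt, resume_cb) == 0) {
            return; /* backpressure: resume from resume_cb */
        }
    }
}

/* Completion callback: the queue drained, continue with the leftovers. */
static void resume_cb(void)
{
    inflight = 0;
    send_remaining(cur);
}
```

Each time the peer returns 0, send_remaining() stops reading from the
batch; the callback picks up exactly where it left off, so every packet
is delivered once and in order.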

Regards,
Akihiko Odaki



Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-03 Thread Vladislav Yaroshchuk
On Tue, Mar 1, 2022 at 11:21 AM Akihiko Odaki 
wrote:

> On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:
> >  > Not sure that only one field is enough, cause
> >  > we may have two states on bh execution start:
> >  > 1. There are packets in vmnet buffer s->packets_buf
> >  >  that were rejected by qemu_send_async and waiting
> >  >  to be sent. If this happens, we should complete sending
> >  >  these waiting packets with qemu_send_async firstly,
> >  >  and after that we should call vmnet_read to get
> >  >  new ones and send them to QEMU;
> >  > 2. There are no packets in s->packets_buf to be sent to
> >  >  qemu, we only need to get new packets from vmnet
> >  >  with vmnet_read and send them to QEMU
> >
> > In case 1, you should just keep calling qemu_send_packet_async.
> > Actually
> > qemu_send_packet_async adds the packet to its internal queue and
> calls
> > the callback when it is consumed.
> >
> >
> > I'm not sure we can keep calling qemu_send_packet_async,
> > because as docs from net/queue.c says:
> >
> > /* [...]
> >   * If a sent callback is provided to send(), the caller must handle a
> >   * zero return from the delivery handler by not sending any more packets
> >   * until we have invoked the callback. Only in that case will we queue
> >   * the packet.
> >   *
> >   * If a sent callback isn't provided, we just drop the packet to avoid
> >   * unbounded queueing.
> >   */
> >
> > So after we did vmnet_read and read N packets
> > into temporary s->packets_buf, we begin calling
> > qemu_send_packet_async. If it returns 0 - it says
> > "no more packets until sent_cb called please".
> > At this moment we have N packets in s->packets_buf
> > and already queued K < N of them. But, packets K..N
> > are not queued and keep waiting for sent_cb to be sent
> > with qemu_send_packet_async.
> > Thus when sent_cb called, we should finish
> > our transfer of packets K..N from s->packets_buf
> > to qemu calling qemu_send_packet_async.
> > I meant this.
>
> I missed the comment. The description is contradicting with the actual
> code; qemu_net_queue_send_iov appends the packet to the queue whenever
> it cannot send one immediately.
>
>
Yes, it appends, but (net/queue.c):
*  qemu_net_queue_send tries to deliver the packet
immediately. If the packet cannot be delivered,
qemu_net_queue_append is called and 0 is returned
to tell the caller "the receiver is not ready, hold on";
*  qemu_net_queue_append does a probe before adding
the packet to the queue:
if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
return; /* drop if queue full and no callback */
}

The queue is not infinite, so we have three cases:
1. The queue is not full -> append the packet, no
problems here
2. The queue is full, no callback -> we cannot notify
a caller when we're ready, so just drop the packet
if we can't append it.
3. The queue is full, callback present -> we can notify
a caller when we are ready, so "let's queue this packet,
but expect that no more (!) packets are sent until I call
sent_cb when the queue is ready"

Therefore if we provide a callback and keep sending
packets after 0 is returned, this may cause unlimited(!)
queue growth. To prevent this, we should stop sending
packets and wait for the notification callback to continue.

I don't see any contradiction with that comment.

Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell if
> calling qemu_send_packet_async is allowed after it returns 0?
>
>
It may be wrong, but I think it's not allowed to send
packets after qemu_send_packet_async returns 0.

Jason Wang, can you confirm please?

Best Regards,

Vladislav Yaroshchuk


> Regards,
> Akihiko Odaki
>


Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-01 Thread Akihiko Odaki

On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:

 > Not sure that only one field is enough, cause
 > we may have two states on bh execution start:
 > 1. There are packets in vmnet buffer s->packets_buf
 >      that were rejected by qemu_send_async and waiting
 >      to be sent. If this happens, we should complete sending
 >      these waiting packets with qemu_send_async firstly,
 >      and after that we should call vmnet_read to get
 >      new ones and send them to QEMU;
 > 2. There are no packets in s->packets_buf to be sent to
 >      qemu, we only need to get new packets from vmnet
 >      with vmnet_read and send them to QEMU

In case 1, you should just keep calling qemu_send_packet_async.
Actually
qemu_send_packet_async adds the packet to its internal queue and calls
the callback when it is consumed.


I'm not sure we can keep calling qemu_send_packet_async,
because, as the docs in net/queue.c say:

/* [...]
  * If a sent callback is provided to send(), the caller must handle a
  * zero return from the delivery handler by not sending any more packets
  * until we have invoked the callback. Only in that case will we queue
  * the packet.
  *
  * If a sent callback isn't provided, we just drop the packet to avoid
  * unbounded queueing.
  */

So after we call vmnet_read and read N packets
into the temporary s->packets_buf, we begin calling
qemu_send_packet_async. If it returns 0, it says
"no more packets until sent_cb is called, please".
At this moment we have N packets in s->packets_buf
and have already queued K < N of them. But packets K..N
are not queued and keep waiting for sent_cb before being sent
with qemu_send_packet_async.
Thus when sent_cb is called, we should finish
our transfer of packets K..N from s->packets_buf
to qemu by calling qemu_send_packet_async.
I meant this.


I missed the comment. The description contradicts the actual
code; qemu_net_queue_send_iov appends the packet to the queue whenever
it cannot send one immediately.


Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell if 
calling qemu_send_packet_async is allowed after it returns 0?


Regards,
Akihiko Odaki



Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-03-01 Thread Vladislav Yaroshchuk
On Tue, Mar 1, 2022 at 8:52 AM Akihiko Odaki 
wrote:

> On 2022/02/28 20:59, Vladislav Yaroshchuk wrote:
> >
> >
> > On Sat, Feb 26, 2022 at 3:27 PM Akihiko Odaki  > > wrote:
> >
> > On Sat, Feb 26, 2022 at 8:33 PM Vladislav Yaroshchuk
> >  > > wrote:
> >  >
> >  >
> >  >
> >  > On Sat, Feb 26, 2022 at 12:16 PM Akihiko Odaki
> > mailto:akihiko.od...@gmail.com>> wrote:
> >  >>
> >  >> On 2022/02/26 17:37, Vladislav Yaroshchuk wrote:
> >  >> >
> >  >> > Hi Akihiko,
> >  >> >
> >  >> > On Fri, Feb 25, 2022 at 8:46 PM Akihiko Odaki
> > mailto:akihiko.od...@gmail.com>
> >  >> >  > >> wrote:
> >  >> >
> >  >> > On 2022/02/26 2:13, Vladislav Yaroshchuk wrote:
> >  >> >  > Interaction with vmnet.framework in different modes
> >  >> >  > differs only on configuration stage, so we can create
> >  >> >  > common `send`, `receive`, etc. procedures and reuse
> them.
> >  >> >  >
> >  >> >  > vmnet.framework supports iov, but writing more than
> >  >> >  > one iov into vmnet interface fails with
> >  >> >  > 'VMNET_INVALID_ARGUMENT'. Collecting provided iovs into
> >  >> >  > one and passing it to vmnet works fine. That's the
> >  >> >  > reason why receive_iov() left unimplemented. But it
> still
> >  >> >  > works with good enough performance having .receive()
> >  >> >  > net/vmnet: implement shared mode (vmnet-shared)
> >  >> >  >
> >  >> >  > Interaction with vmnet.framework in different modes
> >  >> >  > differs only on configuration stage, so we can create
> >  >> >  > common `send`, `receive`, etc. procedures and reuse
> them.
> >  >> >  >
> >  >> >  > vmnet.framework supports iov, but writing more than
> >  >> >  > one iov into vmnet interface fails with
> >  >> >  > 'VMNET_INVALID_ARGUMENT'. Collecting provided iovs into
> >  >> >  > one and passing it to vmnet works fine. That's the
> >  >> >  > reason why receive_iov() left unimplemented. But it
> still
> >  >> >  > works with good enough performance having .receive()
> >  >> >  > implemented only.
> >  >> >  >
> >  >> >  > Also, there is no way to unsubscribe from vmnet packages
> >  >> >  > receiving except registering and unregistering event
> >  >> >  > callback or simply drop packages just ignoring and
> >  >> >  > not processing them when related flag is set. Here we do
> >  >> >  > using the second way.
> >  >> >  >
> >  >> >  > Signed-off-by: Phillip Tennen  > 
> >  >> > >>
> >  >> >  > Signed-off-by: Vladislav Yaroshchuk
> >  >> >  > 
> >  >> >  > >>
> >  >> >
> >  >> > Thank you for persistently working on this.
> >  >> >
> >  >> >  > ---
> >  >> >  >   net/vmnet-common.m | 302
> >  >> > +
> >  >> >  >   net/vmnet-shared.c |  94 +-
> >  >> >  >   net/vmnet_int.h|  39 +-
> >  >> >  >   3 files changed, 430 insertions(+), 5 deletions(-)
> >  >> >  >
> >  >> >  > diff --git a/net/vmnet-common.m b/net/vmnet-common.m
> >  >> >  > index 56612c72ce..2f70921cae 100644
> >  >> >  > --- a/net/vmnet-common.m
> >  >> >  > +++ b/net/vmnet-common.m
> >  >> >  > @@ -10,6 +10,8 @@
> >  >> >  >*/
> >  >> >  >
> >  >> >  >   #include "qemu/osdep.h"
> >  >> >  > +#include "qemu/main-loop.h"
> >  >> >  > +#include "qemu/log.h"
> >  >> >  >   #include "qapi/qapi-types-net.h"
> >  >> >  >   #include "vmnet_int.h"
> >  >> >  >   #include "clients.h"
> >  >> >  > @@ -17,4 +19,304 @@
> >  >> >  >   #include "qapi/error.h"
> >  >> >  >
> >  >> >  >   #include 
> >  >> >  > +#include 
> >  >> >  >
> >  >> >  > +
> >  >> >  > +static inline void
> > vmnet_set_send_bh_scheduled(VmnetCommonState *s,
> >  >> >  > +   bool
> > enable)
> >  >> >  > +{
> >  >> >  > +qatomic_set(&s->send_scheduled, enable);
> >  >> >  > +}
> >  >> >  > +
> >  >> >  > +
> >  >> >  > +static inline bool
> > vmnet_is_send_bh_scheduled(VmnetCommonState *s)
> >  >> >  > +{
> >  >> >  > +return qatomic_load_acquire(&s->send_scheduled);
> >  >> >  > +}
> >   

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-02-28 Thread Akihiko Odaki

On 2022/02/28 20:59, Vladislav Yaroshchuk wrote:



On Sat, Feb 26, 2022 at 3:27 PM Akihiko Odaki > wrote:


On Sat, Feb 26, 2022 at 8:33 PM Vladislav Yaroshchuk
mailto:vladislav.yaroshc...@jetbrains.com>> wrote:
 >
 >
 >
 > On Sat, Feb 26, 2022 at 12:16 PM Akihiko Odaki
mailto:akihiko.od...@gmail.com>> wrote:
 >>
 >> On 2022/02/26 17:37, Vladislav Yaroshchuk wrote:
 >> >
 >> > Hi Akihiko,
 >> >
 >> > On Fri, Feb 25, 2022 at 8:46 PM Akihiko Odaki
mailto:akihiko.od...@gmail.com>
 >> > >> wrote:
 >> >
 >> >     On 2022/02/26 2:13, Vladislav Yaroshchuk wrote:
 >> >      > Interaction with vmnet.framework in different modes
 >> >      > differs only on configuration stage, so we can create
 >> >      > common `send`, `receive`, etc. procedures and reuse them.
 >> >      >
 >> >      > vmnet.framework supports iov, but writing more than
 >> >      > one iov into vmnet interface fails with
 >> >      > 'VMNET_INVALID_ARGUMENT'. Collecting provided iovs into
 >> >      > one and passing it to vmnet works fine. That's the
 >> >      > reason why receive_iov() left unimplemented. But it still
 >> >      > works with good enough performance having .receive()
 >> >      > net/vmnet: implement shared mode (vmnet-shared)
 >> >      >
 >> >      > Interaction with vmnet.framework in different modes
 >> >      > differs only on configuration stage, so we can create
 >> >      > common `send`, `receive`, etc. procedures and reuse them.
 >> >      >
 >> >      > vmnet.framework supports iov, but writing more than
 >> >      > one iov into vmnet interface fails with
 >> >      > 'VMNET_INVALID_ARGUMENT'. Collecting provided iovs into
 >> >      > one and passing it to vmnet works fine. That's the
 >> >      > reason why receive_iov() left unimplemented. But it still
 >> >      > works with good enough performance having .receive()
 >> >      > implemented only.
 >> >      >
 >> >      > Also, there is no way to unsubscribe from vmnet packages
 >> >      > receiving except registering and unregistering event
 >> >      > callback or simply drop packages just ignoring and
 >> >      > not processing them when related flag is set. Here we do
 >> >      > using the second way.
 >> >      >
 >> >      > Signed-off-by: Phillip Tennen mailto:phil...@axleos.com>
 >> >     >>
 >> >      > Signed-off-by: Vladislav Yaroshchuk
 >> >     mailto:vladislav.yaroshc...@jetbrains.com>
 >> >     >>
 >> >
 >> >     Thank you for persistently working on this.
 >> >
 >> >      > ---
 >> >      >   net/vmnet-common.m | 302
 >> >     +
 >> >      >   net/vmnet-shared.c |  94 +-
 >> >      >   net/vmnet_int.h    |  39 +-
 >> >      >   3 files changed, 430 insertions(+), 5 deletions(-)
 >> >      >
 >> >      > diff --git a/net/vmnet-common.m b/net/vmnet-common.m
 >> >      > index 56612c72ce..2f70921cae 100644
 >> >      > --- a/net/vmnet-common.m
 >> >      > +++ b/net/vmnet-common.m
 >> >      > @@ -10,6 +10,8 @@
 >> >      >    */
 >> >      >
 >> >      >   #include "qemu/osdep.h"
 >> >      > +#include "qemu/main-loop.h"
 >> >      > +#include "qemu/log.h"
 >> >      >   #include "qapi/qapi-types-net.h"
 >> >      >   #include "vmnet_int.h"
 >> >      >   #include "clients.h"
 >> >      > @@ -17,4 +19,304 @@
 >> >      >   #include "qapi/error.h"
 >> >      >
 >> >      >   #include 
 >> >      > +#include 
 >> >      >
 >> >      > +
 >> >      > +static inline void
vmnet_set_send_bh_scheduled(VmnetCommonState *s,
 >> >      > +                                               bool
enable)
 >> >      > +{
 >> >      > +    qatomic_set(&s->send_scheduled, enable);
 >> >      > +}
 >> >      > +
 >> >      > +
 >> >      > +static inline bool
vmnet_is_send_bh_scheduled(VmnetCommonState *s)
 >> >      > +{
 >> >      > +    return qatomic_load_acquire(&s->send_scheduled);
 >> >      > +}
 >> >      > +
 >> >      > +
 >> >      > +static inline void
vmnet_set_send_enabled(VmnetCommonState *s,
 >> >      > +                                          bool enable)
 >> >      > +{
 >> >      > +    if (enable) {
 >> >      > +        vmnet_interface_set_event_callback(
 >> >      > +            s->vmnet_if,
 >> >      > +            VMNET_INTERFACE_PACKETS_AVAILABLE,
 >> >      > +            s->if_queue,
 >> >      > +            

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-02-28 Thread Vladislav Yaroshchuk
On Sat, Feb 26, 2022 at 3:27 PM Akihiko Odaki 
wrote:

> On Sat, Feb 26, 2022 at 8:33 PM Vladislav Yaroshchuk
>  wrote:
> >
> >
> >
> > On Sat, Feb 26, 2022 at 12:16 PM Akihiko Odaki 
> wrote:
> >>
> >> On 2022/02/26 17:37, Vladislav Yaroshchuk wrote:
> >> >
> >> > Hi Akihiko,
> >> >
> >> > On Fri, Feb 25, 2022 at 8:46 PM Akihiko Odaki <
> akihiko.od...@gmail.com
> >> > > wrote:
> >> >
> >> > On 2022/02/26 2:13, Vladislav Yaroshchuk wrote:
> >> >  > Interaction with vmnet.framework in different modes
> >> >  > differs only on configuration stage, so we can create
> >> >  > common `send`, `receive`, etc. procedures and reuse them.
> >> >  >
> >> >  > vmnet.framework supports iov, but writing more than
> >> >  > one iov into vmnet interface fails with
> >> >  > 'VMNET_INVALID_ARGUMENT'. Collecting provided iovs into
> >> >  > one and passing it to vmnet works fine. That's the
> >> >  > reason why receive_iov() left unimplemented. But it still
> >> >  > works with good enough performance having .receive()
> >> >  > net/vmnet: implement shared mode (vmnet-shared)
> >> >  >
> >> >  > Interaction with vmnet.framework in different modes
> >> >  > differs only on configuration stage, so we can create
> >> >  > common `send`, `receive`, etc. procedures and reuse them.
> >> >  >
> >> >  > vmnet.framework supports iov, but writing more than
> >> >  > one iov into vmnet interface fails with
> >> >  > 'VMNET_INVALID_ARGUMENT'. Collecting provided iovs into
> >> >  > one and passing it to vmnet works fine. That's the
> >> >  > reason why receive_iov() left unimplemented. But it still
> >> >  > works with good enough performance having .receive()
> >> >  > implemented only.
> >> >  >
> >> >  > Also, there is no way to unsubscribe from vmnet packages
> >> >  > receiving except registering and unregistering event
> >> >  > callback or simply drop packages just ignoring and
> >> >  > not processing them when related flag is set. Here we do
> >> >  > using the second way.
> >> >  >
> >> >  > Signed-off-by: Phillip Tennen  >> > >
> >> >  > Signed-off-by: Vladislav Yaroshchuk
> >> >  >> > >
> >> >
> >> > Thank you for persistently working on this.
> >> >
> >> >  > ---
> >> >  >   net/vmnet-common.m | 302
> >> > +
> >> >  >   net/vmnet-shared.c |  94 +-
> >> >  >   net/vmnet_int.h|  39 +-
> >> >  >   3 files changed, 430 insertions(+), 5 deletions(-)
> >> >  >
> >> >  > diff --git a/net/vmnet-common.m b/net/vmnet-common.m
> >> >  > index 56612c72ce..2f70921cae 100644
> >> >  > --- a/net/vmnet-common.m
> >> >  > +++ b/net/vmnet-common.m
> >> >  > @@ -10,6 +10,8 @@
> >> >  >*/
> >> >  >
> >> >  >   #include "qemu/osdep.h"
> >> >  > +#include "qemu/main-loop.h"
> >> >  > +#include "qemu/log.h"
> >> >  >   #include "qapi/qapi-types-net.h"
> >> >  >   #include "vmnet_int.h"
> >> >  >   #include "clients.h"
> >> >  > @@ -17,4 +19,304 @@
> >> >  >   #include "qapi/error.h"
> >> >  >
> >> >  >   #include 
> >> >  > +#include 
> >> >  >
> >> >  > +
> >> >  > +static inline void
> vmnet_set_send_bh_scheduled(VmnetCommonState *s,
> >> >  > +   bool enable)
> >> >  > +{
> >> >  > +qatomic_set(&s->send_scheduled, enable);
> >> >  > +}
> >> >  > +
> >> >  > +
> >> >  > +static inline bool
> vmnet_is_send_bh_scheduled(VmnetCommonState *s)
> >> >  > +{
> >> >  > +return qatomic_load_acquire(&s->send_scheduled);
> >> >  > +}
> >> >  > +
> >> >  > +
> >> >  > +static inline void vmnet_set_send_enabled(VmnetCommonState *s,
> >> >  > +  bool enable)
> >> >  > +{
> >> >  > +if (enable) {
> >> >  > +vmnet_interface_set_event_callback(
> >> >  > +s->vmnet_if,
> >> >  > +VMNET_INTERFACE_PACKETS_AVAILABLE,
> >> >  > +s->if_queue,
> >> >  > +^(interface_event_t event_id, xpc_object_t event)
> {
> >> >  > +assert(event_id ==
> >> > VMNET_INTERFACE_PACKETS_AVAILABLE);
> >> >  > +/*
> >> >  > + * This function is being called from a non
> qemu
> >> > thread, so
> >> >  > + * we only schedule a BH, and do the rest of
> the
> >> > io completion
> >> >  > + * handling from vmnet_send_bh() which runs
> in a
> >> > qemu context.
> >> >  > + *
> >> >  > + * Avoid scheduling multiple bottom halves
> >> >  > + */
> >> >  > +

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-02-26 Thread Akihiko Odaki
On Sat, Feb 26, 2022 at 8:33 PM Vladislav Yaroshchuk
 wrote:
>
>
>
> On Sat, Feb 26, 2022 at 12:16 PM Akihiko Odaki  
> wrote:
>>
>> On 2022/02/26 17:37, Vladislav Yaroshchuk wrote:
>> >
>> > Hi Akihiko,
>> >
>> > On Fri, Feb 25, 2022 at 8:46 PM Akihiko Odaki > > > wrote:
>> >
>> > On 2022/02/26 2:13, Vladislav Yaroshchuk wrote:
>> >  > Interaction with vmnet.framework in different modes
>> >  > differs only on configuration stage, so we can create
>> >  > common `send`, `receive`, etc. procedures and reuse them.
>> >  >
>> >  > vmnet.framework supports iov, but writing more than
>> >  > one iov into vmnet interface fails with
>> >  > 'VMNET_INVALID_ARGUMENT'. Collecting provided iovs into
>> >  > one and passing it to vmnet works fine. That's the
>> >  > reason why receive_iov() left unimplemented. But it still
>> >  > works with good enough performance having .receive()
>> >  > net/vmnet: implement shared mode (vmnet-shared)
>> >  >
>> >  > Interaction with vmnet.framework in different modes
>> >  > differs only on configuration stage, so we can create
>> >  > common `send`, `receive`, etc. procedures and reuse them.
>> >  >
>> >  > vmnet.framework supports iov, but writing more than
>> >  > one iov into vmnet interface fails with
>> >  > 'VMNET_INVALID_ARGUMENT'. Collecting provided iovs into
>> >  > one and passing it to vmnet works fine. That's the
>> >  > reason why receive_iov() left unimplemented. But it still
>> >  > works with good enough performance having .receive()
>> >  > implemented only.
>> >  >
>> >  > Also, there is no way to unsubscribe from vmnet packages
>> >  > receiving except registering and unregistering event
>> >  > callback or simply drop packages just ignoring and
>> >  > not processing them when related flag is set. Here we do
>> >  > using the second way.
>> >  >
>> >  > Signed-off-by: Phillip Tennen > > >
>> >  > Signed-off-by: Vladislav Yaroshchuk
>> > > > >
>> >
>> > Thank you for persistently working on this.
>> >
>> >  > ---
>> >  >   net/vmnet-common.m | 302
>> > +
>> >  >   net/vmnet-shared.c |  94 +-
>> >  >   net/vmnet_int.h|  39 +-
>> >  >   3 files changed, 430 insertions(+), 5 deletions(-)
>> >  >
>> >  > diff --git a/net/vmnet-common.m b/net/vmnet-common.m
>> >  > index 56612c72ce..2f70921cae 100644
>> >  > --- a/net/vmnet-common.m
>> >  > +++ b/net/vmnet-common.m
>> >  > @@ -10,6 +10,8 @@
>> >  >*/
>> >  >
>> >  >   #include "qemu/osdep.h"
>> >  > +#include "qemu/main-loop.h"
>> >  > +#include "qemu/log.h"
>> >  >   #include "qapi/qapi-types-net.h"
>> >  >   #include "vmnet_int.h"
>> >  >   #include "clients.h"
>> >  > @@ -17,4 +19,304 @@
>> >  >   #include "qapi/error.h"
>> >  >
>> >  >   #include 
>> >  > +#include 
>> >  >
>> >  > +
>> >  > +static inline void vmnet_set_send_bh_scheduled(VmnetCommonState *s,
>> >  > +   bool enable)
>> >  > +{
>> >  > +qatomic_set(&s->send_scheduled, enable);
>> >  > +}
>> >  > +
>> >  > +
>> >  > +static inline bool vmnet_is_send_bh_scheduled(VmnetCommonState *s)
>> >  > +{
>> >  > +return qatomic_load_acquire(&s->send_scheduled);
>> >  > +}
>> >  > +
>> >  > +
>> >  > +static inline void vmnet_set_send_enabled(VmnetCommonState *s,
>> >  > +  bool enable)
>> >  > +{
>> >  > +if (enable) {
>> >  > +vmnet_interface_set_event_callback(
>> >  > +s->vmnet_if,
>> >  > +VMNET_INTERFACE_PACKETS_AVAILABLE,
>> >  > +s->if_queue,
>> >  > +^(interface_event_t event_id, xpc_object_t event) {
>> >  > +assert(event_id ==
>> > VMNET_INTERFACE_PACKETS_AVAILABLE);
>> >  > +/*
>> >  > + * This function is being called from a non qemu
>> > thread, so
>> >  > + * we only schedule a BH, and do the rest of the
>> > io completion
>> >  > + * handling from vmnet_send_bh() which runs in a
>> > qemu context.
>> >  > + *
>> >  > + * Avoid scheduling multiple bottom halves
>> >  > + */
>> >  > +if (!vmnet_is_send_bh_scheduled(s)) {
>> >  > +vmnet_set_send_bh_scheduled(s, true);
>> >
>> > It can be interrupted between vmnet_is_send_bh_scheduled and
>> > vmnet_set_send_bh_scheduled, which leads to data race.
>> >
>> >
>> > Sorry, I did not clearly understand what you meant. Since 

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-02-26 Thread Vladislav Yaroshchuk
On Sat, Feb 26, 2022 at 12:16 PM Akihiko Odaki 
wrote:

> On 2022/02/26 17:37, Vladislav Yaroshchuk wrote:
> >
> > Hi Akihiko,
> >
> > On Fri, Feb 25, 2022 at 8:46 PM Akihiko Odaki  > > wrote:
> >
> > On 2022/02/26 2:13, Vladislav Yaroshchuk wrote:
> >  > Interaction with vmnet.framework in different modes
> >  > differs only on configuration stage, so we can create
> >  > common `send`, `receive`, etc. procedures and reuse them.
> >  >
> >  > vmnet.framework supports iov, but writing more than
> >  > one iov into vmnet interface fails with
> >  > 'VMNET_INVALID_ARGUMENT'. Collecting provided iovs into
> >  > one and passing it to vmnet works fine. That's the
> >  > reason why receive_iov() left unimplemented. But it still
> >  > works with good enough performance having .receive()
> >  > net/vmnet: implement shared mode (vmnet-shared)
> >  >
> >  > Interaction with vmnet.framework in different modes
> >  > differs only on configuration stage, so we can create
> >  > common `send`, `receive`, etc. procedures and reuse them.
> >  >
> >  > vmnet.framework supports iov, but writing more than
> >  > one iov into vmnet interface fails with
> >  > 'VMNET_INVALID_ARGUMENT'. Collecting provided iovs into
> >  > one and passing it to vmnet works fine. That's the
> >  > reason why receive_iov() left unimplemented. But it still
> >  > works with good enough performance having .receive()
> >  > implemented only.
> >  >
> >  > Also, there is no way to unsubscribe from vmnet packages
> >  > receiving except registering and unregistering event
> >  > callback or simply drop packages just ignoring and
> >  > not processing them when related flag is set. Here we do
> >  > using the second way.
> >  >
> >  > Signed-off-by: Phillip Tennen  > >
> >  > Signed-off-by: Vladislav Yaroshchuk
> >  > >
> >
> > Thank you for persistently working on this.
> >
> >  > ---
> >  >   net/vmnet-common.m | 302
> > +
> >  >   net/vmnet-shared.c |  94 +-
> >  >   net/vmnet_int.h|  39 +-
> >  >   3 files changed, 430 insertions(+), 5 deletions(-)
> >  >
> >  > diff --git a/net/vmnet-common.m b/net/vmnet-common.m
> >  > index 56612c72ce..2f70921cae 100644
> >  > --- a/net/vmnet-common.m
> >  > +++ b/net/vmnet-common.m
> >  > @@ -10,6 +10,8 @@
> >  >*/
> >  >
> >  >   #include "qemu/osdep.h"
> >  > +#include "qemu/main-loop.h"
> >  > +#include "qemu/log.h"
> >  >   #include "qapi/qapi-types-net.h"
> >  >   #include "vmnet_int.h"
> >  >   #include "clients.h"
> >  > @@ -17,4 +19,304 @@
> >  >   #include "qapi/error.h"
> >  >
> >  >   #include 
> >  > +#include 
> >  >
> >  > +
> >  > +static inline void vmnet_set_send_bh_scheduled(VmnetCommonState
> *s,
> >  > +   bool enable)
> >  > +{
> >  > +qatomic_set(&s->send_scheduled, enable);
> >  > +}
> >  > +
> >  > +
> >  > +static inline bool vmnet_is_send_bh_scheduled(VmnetCommonState
> *s)
> >  > +{
> >  > +return qatomic_load_acquire(&s->send_scheduled);
> >  > +}
> >  > +
> >  > +
> >  > +static inline void vmnet_set_send_enabled(VmnetCommonState *s,
> >  > +  bool enable)
> >  > +{
> >  > +if (enable) {
> >  > +vmnet_interface_set_event_callback(
> >  > +s->vmnet_if,
> >  > +VMNET_INTERFACE_PACKETS_AVAILABLE,
> >  > +s->if_queue,
> >  > +^(interface_event_t event_id, xpc_object_t event) {
> >  > +assert(event_id ==
> > VMNET_INTERFACE_PACKETS_AVAILABLE);
> >  > +/*
> >  > + * This function is being called from a non qemu
> > thread, so
> >  > + * we only schedule a BH, and do the rest of the
> > io completion
> >  > + * handling from vmnet_send_bh() which runs in a
> > qemu context.
> >  > + *
> >  > + * Avoid scheduling multiple bottom halves
> >  > + */
> >  > +if (!vmnet_is_send_bh_scheduled(s)) {
> >  > +vmnet_set_send_bh_scheduled(s, true);
> >
> > It can be interrupted between vmnet_is_send_bh_scheduled and
> > vmnet_set_send_bh_scheduled, which leads to data race.
> >
> >
> > Sorry, I did not clearly understand what you meant. Since this
> > callback (block) is submitted on DISPATCH_QUEUE_SERIAL,
> > only one instance of the callback will be executed at any
> > moment of time.
> > 
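
The check-then-set window Akihiko points at -- being interrupted between
vmnet_is_send_bh_scheduled and vmnet_set_send_bh_scheduled -- can be
closed with a single atomic exchange. A minimal C11 sketch (stand-in
names; QEMU's qatomic helpers and qemu_bh_schedule() are modeled here,
not used):

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool send_scheduled;
static int bh_scheduled_count;  /* how many bottom halves were queued */

/* Stand-in for qemu_bh_schedule(). */
static void fake_bh_schedule(void)
{
    bh_scheduled_count++;
}

/* atomic_exchange returns the previous value, so of any number of
 * concurrent callers only the one that flips false -> true schedules
 * the bottom half; there is no gap between "check" and "set". */
static void schedule_send_bh_once(void)
{
    if (!atomic_exchange(&send_scheduled, true)) {
        fake_bh_schedule();
    }
}

/* The bottom half clears the flag before draining packets, so a later
 * packets-available event can schedule it again. */
static void model_send_bh(void)
{
    atomic_store(&send_scheduled, false);
    /* ... read packets from vmnet and forward them to QEMU here ... */
}
```

Whether this is needed in the patch depends on the serial dispatch queue
argument above; the sketch only shows how the window would be closed if
the callback could in fact run concurrently.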

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-02-26 Thread Akihiko Odaki

On 2022/02/26 17:37, Vladislav Yaroshchuk wrote:


Hi Akihiko,

On Fri, Feb 25, 2022 at 8:46 PM Akihiko Odaki > wrote:


On 2022/02/26 2:13, Vladislav Yaroshchuk wrote:
 > Interaction with vmnet.framework in different modes
 > differs only at the configuration stage, so we can create
 > common `send`, `receive`, etc. procedures and reuse them.
 >
 > vmnet.framework supports iov, but writing more than
 > one iov into the vmnet interface fails with
 > 'VMNET_INVALID_ARGUMENT'. Collecting the provided iovs into
 > one and passing it to vmnet works fine. That's the
 > reason why receive_iov() is left unimplemented. But it still
 > works with good enough performance having only .receive()
 > implemented.
 >
 > Also, there is no way to unsubscribe from receiving vmnet
 > packets except by registering and unregistering the event
 > callback, or by simply dropping packets (ignoring and not
 > processing them) when the related flag is set. Here we use
 > the second approach.
 >
 > Signed-off-by: Phillip Tennen <phil...@axleos.com>
 > Signed-off-by: Vladislav Yaroshchuk <vladislav.yaroshc...@jetbrains.com>

Thank you for persistently working on this.

 > ---
 >   net/vmnet-common.m | 302
+
 >   net/vmnet-shared.c |  94 +-
 >   net/vmnet_int.h    |  39 +-
 >   3 files changed, 430 insertions(+), 5 deletions(-)
 >
 > diff --git a/net/vmnet-common.m b/net/vmnet-common.m
 > index 56612c72ce..2f70921cae 100644
 > --- a/net/vmnet-common.m
 > +++ b/net/vmnet-common.m
 > @@ -10,6 +10,8 @@
 >    */
 >
 >   #include "qemu/osdep.h"
 > +#include "qemu/main-loop.h"
 > +#include "qemu/log.h"
 >   #include "qapi/qapi-types-net.h"
 >   #include "vmnet_int.h"
 >   #include "clients.h"
 > @@ -17,4 +19,304 @@
 >   #include "qapi/error.h"
 >
 >   #include <vmnet/vmnet.h>
 > +#include <dispatch/dispatch.h>
 >
 > +
 > +static inline void vmnet_set_send_bh_scheduled(VmnetCommonState *s,
 > +                                               bool enable)
 > +{
 >  > +    qatomic_set(&s->send_scheduled, enable);
 > +}
 > +
 > +
 > +static inline bool vmnet_is_send_bh_scheduled(VmnetCommonState *s)
 > +{
 >  > +    return qatomic_load_acquire(&s->send_scheduled);
 > +}
 > +
 > +
 > +static inline void vmnet_set_send_enabled(VmnetCommonState *s,
 > +                                          bool enable)
 > +{
 > +    if (enable) {
 > +        vmnet_interface_set_event_callback(
 > +            s->vmnet_if,
 > +            VMNET_INTERFACE_PACKETS_AVAILABLE,
 > +            s->if_queue,
 > +            ^(interface_event_t event_id, xpc_object_t event) {
 > +                assert(event_id ==
VMNET_INTERFACE_PACKETS_AVAILABLE);
 > +                /*
 > +                 * This function is being called from a non qemu
thread, so
 > +                 * we only schedule a BH, and do the rest of the
io completion
 > +                 * handling from vmnet_send_bh() which runs in a
qemu context.
 > +                 *
 > +                 * Avoid scheduling multiple bottom halves
 > +                 */
 > +                if (!vmnet_is_send_bh_scheduled(s)) {
 > +                    vmnet_set_send_bh_scheduled(s, true);

It can be interrupted between vmnet_is_send_bh_scheduled and
vmnet_set_send_bh_scheduled, which leads to data race.


Sorry, I did not clearly understand what you meant. Since this
callback (block) is submitted on DISPATCH_QUEUE_SERIAL,
only one instance of the callback will be executed at any
moment of time.
https://developer.apple.com/documentation/dispatch/dispatch_queue_serial 



Also this is the only place where we schedule a bottom half.

After we set the 'send_scheduled' flag, all the other
callback  blocks will do nothing (skip the if block) until
the bottom half is executed and reset 'send_scheduled'.
I don't see any races here :(

Correct me if I'm wrong please.


My explanation that the interruption between vmnet_is_send_bh_scheduled 
and vmnet_set_send_bh_scheduled is problematic 

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-02-26 Thread Vladislav Yaroshchuk
Hi Akihiko,

On Fri, Feb 25, 2022 at 8:46 PM Akihiko Odaki 
wrote:

> On 2022/02/26 2:13, Vladislav Yaroshchuk wrote:
> > Interaction with vmnet.framework in different modes
> > differs only at the configuration stage, so we can create
> > common `send`, `receive`, etc. procedures and reuse them.
> >
> > vmnet.framework supports iov, but writing more than
> > one iov into the vmnet interface fails with
> > 'VMNET_INVALID_ARGUMENT'. Collecting the provided iovs into
> > one and passing it to vmnet works fine. That's the
> > reason why receive_iov() is left unimplemented. But it still
> > works with good enough performance having only .receive()
> > implemented.
> >
> > Also, there is no way to unsubscribe from receiving vmnet
> > packets except by registering and unregistering the event
> > callback, or by simply dropping packets (ignoring and not
> > processing them) when the related flag is set. Here we use
> > the second approach.
> >
> > Signed-off-by: Phillip Tennen 
> > Signed-off-by: Vladislav Yaroshchuk 
>
> Thank you for persistently working on this.
>
> > ---
> >   net/vmnet-common.m | 302 +
> >   net/vmnet-shared.c |  94 +-
> >   net/vmnet_int.h|  39 +-
> >   3 files changed, 430 insertions(+), 5 deletions(-)
> >
> > diff --git a/net/vmnet-common.m b/net/vmnet-common.m
> > index 56612c72ce..2f70921cae 100644
> > --- a/net/vmnet-common.m
> > +++ b/net/vmnet-common.m
> > @@ -10,6 +10,8 @@
> >*/
> >
> >   #include "qemu/osdep.h"
> > +#include "qemu/main-loop.h"
> > +#include "qemu/log.h"
> >   #include "qapi/qapi-types-net.h"
> >   #include "vmnet_int.h"
> >   #include "clients.h"
> > @@ -17,4 +19,304 @@
> >   #include "qapi/error.h"
> >
> >   #include <vmnet/vmnet.h>
> > +#include <dispatch/dispatch.h>
> >
> > +
> > +static inline void vmnet_set_send_bh_scheduled(VmnetCommonState *s,
> > +   bool enable)
> > +{
> > +qatomic_set(&s->send_scheduled, enable);
> > +}
> > +
> > +
> > +static inline bool vmnet_is_send_bh_scheduled(VmnetCommonState *s)
> > +{
> > +return qatomic_load_acquire(&s->send_scheduled);
> > +}
> > +
> > +
> > +static inline void vmnet_set_send_enabled(VmnetCommonState *s,
> > +  bool enable)
> > +{
> > +if (enable) {
> > +vmnet_interface_set_event_callback(
> > +s->vmnet_if,
> > +VMNET_INTERFACE_PACKETS_AVAILABLE,
> > +s->if_queue,
> > +^(interface_event_t event_id, xpc_object_t event) {
> > +assert(event_id == VMNET_INTERFACE_PACKETS_AVAILABLE);
> > +/*
> > + * This function is being called from a non qemu
> thread, so
> > + * we only schedule a BH, and do the rest of the io
> completion
> > + * handling from vmnet_send_bh() which runs in a qemu
> context.
> > + *
> > + * Avoid scheduling multiple bottom halves
> > + */
> > +if (!vmnet_is_send_bh_scheduled(s)) {
> > +vmnet_set_send_bh_scheduled(s, true);
>
> It can be interrupted between vmnet_is_send_bh_scheduled and
> vmnet_set_send_bh_scheduled, which leads to data race.
>
>
Sorry, I did not clearly understand what you meant. Since this
callback (block) is submitted on DISPATCH_QUEUE_SERIAL,
only one instance of the callback will be executed at any
moment of time.
https://developer.apple.com/documentation/dispatch/dispatch_queue_serial

Also this is the only place where we schedule a bottom half.

After we set the 'send_scheduled' flag, all the other
callback  blocks will do nothing (skip the if block) until
the bottom half is executed and reset 'send_scheduled'.
I don't see any races here :(

Correct me if I'm wrong please.


> > +qemu_bh_schedule(s->send_bh);
> > +}
> > +});
> > +} else {
> > +vmnet_interface_set_event_callback(
> > +s->vmnet_if,
> > +VMNET_INTERFACE_PACKETS_AVAILABLE,
> > +NULL,
> > +NULL);
> > +}
> > +}
> > +
> > +
> > +static void vmnet_send_completed(NetClientState *nc, ssize_t len)
> > +{
> > +VmnetCommonState *s = DO_UPCAST(VmnetCommonState, nc, nc);
> > +vmnet_set_send_enabled(s, true);
> > +}
> > +
> > +
> > +static void vmnet_send_bh(void *opaque)
> > +{
> > +NetClientState *nc = (NetClientState *) opaque;
> > 

Re: [PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-02-25 Thread Akihiko Odaki

On 2022/02/26 2:13, Vladislav Yaroshchuk wrote:

Interaction with vmnet.framework in different modes
differs only at the configuration stage, so we can create
common `send`, `receive`, etc. procedures and reuse them.

vmnet.framework supports iov, but writing more than
one iov into the vmnet interface fails with
'VMNET_INVALID_ARGUMENT'. Collecting the provided iovs into
one and passing it to vmnet works fine. That's the
reason why receive_iov() is left unimplemented. But it still
works with good enough performance having only .receive()
implemented.

Also, there is no way to unsubscribe from receiving vmnet
packets except by registering and unregistering the event
callback, or by simply dropping packets (ignoring and not
processing them) when the related flag is set. Here we use
the second approach.

Signed-off-by: Phillip Tennen 
Signed-off-by: Vladislav Yaroshchuk 


Thank you for persistently working on this.


---
  net/vmnet-common.m | 302 +
  net/vmnet-shared.c |  94 +-
  net/vmnet_int.h|  39 +-
  3 files changed, 430 insertions(+), 5 deletions(-)

diff --git a/net/vmnet-common.m b/net/vmnet-common.m
index 56612c72ce..2f70921cae 100644
--- a/net/vmnet-common.m
+++ b/net/vmnet-common.m
@@ -10,6 +10,8 @@
   */
  
  #include "qemu/osdep.h"

+#include "qemu/main-loop.h"
+#include "qemu/log.h"
  #include "qapi/qapi-types-net.h"
  #include "vmnet_int.h"
  #include "clients.h"
@@ -17,4 +19,304 @@
  #include "qapi/error.h"
  
  #include <vmnet/vmnet.h>

+#include <dispatch/dispatch.h>
  
+

+static inline void vmnet_set_send_bh_scheduled(VmnetCommonState *s,
+   bool enable)
+{
+qatomic_set(&s->send_scheduled, enable);
+}
+
+
+static inline bool vmnet_is_send_bh_scheduled(VmnetCommonState *s)
+{
+return qatomic_load_acquire(&s->send_scheduled);
+}
+
+
+static inline void vmnet_set_send_enabled(VmnetCommonState *s,
+  bool enable)
+{
+if (enable) {
+vmnet_interface_set_event_callback(
+s->vmnet_if,
+VMNET_INTERFACE_PACKETS_AVAILABLE,
+s->if_queue,
+^(interface_event_t event_id, xpc_object_t event) {
+assert(event_id == VMNET_INTERFACE_PACKETS_AVAILABLE);
+/*
+ * This function is being called from a non qemu thread, so
+ * we only schedule a BH, and do the rest of the io completion
+ * handling from vmnet_send_bh() which runs in a qemu context.
+ *
+ * Avoid scheduling multiple bottom halves
+ */
+if (!vmnet_is_send_bh_scheduled(s)) {
+vmnet_set_send_bh_scheduled(s, true);


It can be interrupted between vmnet_is_send_bh_scheduled and 
vmnet_set_send_bh_scheduled, which leads to data race.



+qemu_bh_schedule(s->send_bh);
+}
+});
+} else {
+vmnet_interface_set_event_callback(
+s->vmnet_if,
+VMNET_INTERFACE_PACKETS_AVAILABLE,
+NULL,
+NULL);
+}
+}
+
+
+static void vmnet_send_completed(NetClientState *nc, ssize_t len)
+{
+VmnetCommonState *s = DO_UPCAST(VmnetCommonState, nc, nc);
+vmnet_set_send_enabled(s, true);
+}
+
+
+static void vmnet_send_bh(void *opaque)
+{
+NetClientState *nc = (NetClientState *) opaque;
+VmnetCommonState *s = DO_UPCAST(VmnetCommonState, nc, nc);
+
+struct iovec *iov = s->iov_buf;
+struct vmpktdesc *packets = s->packets_buf;
+int pkt_cnt;
+int i;
+
+vmnet_return_t status;
+ssize_t size;
+
+/* read as many packets as present */
+pkt_cnt = VMNET_PACKETS_LIMIT;


There could be more than VMNET_PACKETS_LIMIT. You may call vmnet_read in 
a loop.



+for (i = 0; i < pkt_cnt; ++i) {
+packets[i].vm_pkt_size = s->max_packet_size;
+packets[i].vm_pkt_iovcnt = 1;
+packets[i].vm_flags = 0;
+}
+
+status = vmnet_read(s->vmnet_if, packets, &pkt_cnt);
+if (status != VMNET_SUCCESS) {
+error_printf("vmnet: read failed: %s\n",
+ vmnet_status_map_str(status));
+goto done;
+}
+
+for (i = 0; i < pkt_cnt; ++i) {
+size = qemu_send_packet_async(nc,
+  iov[i].iov_base,
+  packets[i].vm_pkt_size,
+  vmnet_send_completed);
+if (size == 0) {

[PATCH v15 3/8] net/vmnet: implement shared mode (vmnet-shared)

2022-02-25 Thread Vladislav Yaroshchuk
Interaction with vmnet.framework in different modes
differs only at the configuration stage, so we can create
common `send`, `receive`, etc. procedures and reuse them.

vmnet.framework supports iov, but writing more than
one iov into the vmnet interface fails with
'VMNET_INVALID_ARGUMENT'. Collecting the provided iovs into
one and passing it to vmnet works fine. That's the
reason why receive_iov() is left unimplemented. But it still
works with good enough performance having only .receive()
implemented.

Also, there is no way to unsubscribe from receiving vmnet
packets except by registering and unregistering the event
callback, or by simply dropping packets (ignoring and not
processing them) when the related flag is set. Here we use
the second approach.

Signed-off-by: Phillip Tennen 
Signed-off-by: Vladislav Yaroshchuk 
---
 net/vmnet-common.m | 302 +
 net/vmnet-shared.c |  94 +-
 net/vmnet_int.h|  39 +-
 3 files changed, 430 insertions(+), 5 deletions(-)

diff --git a/net/vmnet-common.m b/net/vmnet-common.m
index 56612c72ce..2f70921cae 100644
--- a/net/vmnet-common.m
+++ b/net/vmnet-common.m
@@ -10,6 +10,8 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/main-loop.h"
+#include "qemu/log.h"
 #include "qapi/qapi-types-net.h"
 #include "vmnet_int.h"
 #include "clients.h"
@@ -17,4 +19,304 @@
 #include "qapi/error.h"
 
 #include <vmnet/vmnet.h>
+#include <dispatch/dispatch.h>
 
+
+static inline void vmnet_set_send_bh_scheduled(VmnetCommonState *s,
+   bool enable)
+{
+qatomic_set(&s->send_scheduled, enable);
+}
+
+
+static inline bool vmnet_is_send_bh_scheduled(VmnetCommonState *s)
+{
+return qatomic_load_acquire(&s->send_scheduled);
+}
+
+
+static inline void vmnet_set_send_enabled(VmnetCommonState *s,
+  bool enable)
+{
+if (enable) {
+vmnet_interface_set_event_callback(
+s->vmnet_if,
+VMNET_INTERFACE_PACKETS_AVAILABLE,
+s->if_queue,
+^(interface_event_t event_id, xpc_object_t event) {
+assert(event_id == VMNET_INTERFACE_PACKETS_AVAILABLE);
+/*
+ * This function is being called from a non qemu thread, so
+ * we only schedule a BH, and do the rest of the io completion
+ * handling from vmnet_send_bh() which runs in a qemu context.
+ *
+ * Avoid scheduling multiple bottom halves
+ */
+if (!vmnet_is_send_bh_scheduled(s)) {
+vmnet_set_send_bh_scheduled(s, true);
+qemu_bh_schedule(s->send_bh);
+}
+});
+} else {
+vmnet_interface_set_event_callback(
+s->vmnet_if,
+VMNET_INTERFACE_PACKETS_AVAILABLE,
+NULL,
+NULL);
+}
+}
+
+
+static void vmnet_send_completed(NetClientState *nc, ssize_t len)
+{
+VmnetCommonState *s = DO_UPCAST(VmnetCommonState, nc, nc);
+vmnet_set_send_enabled(s, true);
+}
+
+
+static void vmnet_send_bh(void *opaque)
+{
+NetClientState *nc = (NetClientState *) opaque;
+VmnetCommonState *s = DO_UPCAST(VmnetCommonState, nc, nc);
+
+struct iovec *iov = s->iov_buf;
+struct vmpktdesc *packets = s->packets_buf;
+int pkt_cnt;
+int i;
+
+vmnet_return_t status;
+ssize_t size;
+
+/* read as many packets as present */
+pkt_cnt = VMNET_PACKETS_LIMIT;
+for (i = 0; i < pkt_cnt; ++i) {
+packets[i].vm_pkt_size = s->max_packet_size;
+packets[i].vm_pkt_iovcnt = 1;
+packets[i].vm_flags = 0;
+}
+
+status = vmnet_read(s->vmnet_if, packets, &pkt_cnt);
+if (status != VMNET_SUCCESS) {
+error_printf("vmnet: read failed: %s\n",
+ vmnet_status_map_str(status));
+goto done;
+}
+
+for (i = 0; i < pkt_cnt; ++i) {
+size = qemu_send_packet_async(nc,
+  iov[i].iov_base,
+  packets[i].vm_pkt_size,
+  vmnet_send_completed);
+if (size == 0) {
+vmnet_set_send_enabled(s, false);
+goto done;
+} else if (size < 0) {
+break;
+}
+}
+
+done:
+vmnet_set_send_bh_scheduled(s, false);
+}
+
+
+static void vmnet_bufs_init(VmnetCommonState *s)
+{
+struct vmpktdesc *packets = s->packets_buf;
+struct iovec