On Thu, Mar 3, 2022 at 11:43 PM Vladislav Yaroshchuk
<vladislav.yaroshc...@jetbrains.com> wrote:
>
>
>
> On Tue, Mar 1, 2022 at 11:21 AM Akihiko Odaki <akihiko.od...@gmail.com> wrote:
>>
>> On 2022/03/01 17:09, Vladislav Yaroshchuk wrote:
>> > > Not sure that only one field is enough, because
>> > > we may have two states on bh execution start:
>> > > 1. There are packets in the vmnet buffer s->packets_buf
>> > > that were rejected by qemu_send_async and are waiting
>> > > to be sent. If this happens, we should first finish sending
>> > > these waiting packets with qemu_send_async, and only
>> > > after that call vmnet_read to get new ones and send them
>> > > to QEMU;
>> > > 2. There are no packets in s->packets_buf to be sent to
>> > > qemu; we only need to get new packets from vmnet
>> > > with vmnet_read and send them to QEMU
>> >
>> > In case 1, you should just keep calling qemu_send_packet_async.
>> > Actually qemu_send_packet_async adds the packet to its internal
>> > queue and calls the callback when it is consumed.
>> >
>> >
>> > I'm not sure we can keep calling qemu_send_packet_async,
>> > because as the docs in net/queue.c say:
>> >
>> > /* [...]
>> >  * If a sent callback is provided to send(), the caller must handle a
>> >  * zero return from the delivery handler by not sending any more packets
>> >  * until we have invoked the callback. Only in that case will we queue
>> >  * the packet.
>> >  *
>> >  * If a sent callback isn't provided, we just drop the packet to avoid
>> >  * unbounded queueing.
>> >  */
>> >
>> > So after we do vmnet_read and read N packets
>> > into the temporary s->packets_buf, we begin calling
>> > qemu_send_packet_async. If it returns 0, it says
>> > "no more packets until sent_cb is called, please".
>> > At this moment we have N packets in s->packets_buf
>> > and have already queued K < N of them. But packets K..N
>> > are not queued and wait for sent_cb before they can
>> > be sent with qemu_send_packet_async.
>> > Thus when sent_cb is called, we should finish
>> > our transfer of packets K..N from s->packets_buf
>> > to qemu by calling qemu_send_packet_async.
>> > I meant this.
>>
>> I missed the comment. The description contradicts the actual
>> code; qemu_net_queue_send_iov appends the packet to the queue whenever
>> it cannot send one immediately.
>>
>
> Yes, it appends, but (net/queue.c):
> * qemu_net_queue_send tries to deliver the packet
>   immediately. If the packet cannot be delivered,
>   qemu_net_queue_append is called and 0 is returned
>   to tell the caller "the receiver is not ready, hold on";
> * qemu_net_queue_append does a probe before adding
>   the packet to the queue:
>
>     if (queue->nq_count >= queue->nq_maxlen && !sent_cb) {
>         return; /* drop if queue full and no callback */
>     }
>
> The queue is not infinite, so we have three cases:
> 1. The queue is not full -> append the packet, no
>    problems here
> 2. The queue is full, no callback -> we cannot notify
>    the caller when we're ready, so just drop the packet
>    if we can't append it.
> 3. The queue is full, callback present -> we can notify
>    the caller when we are ready, so "let's queue this packet,
>    but expect no more (!) packets to be sent until I call
>    sent_cb when the queue is ready"
>
> Therefore if we provide a callback and keep sending
> packets after 0 is returned, this may cause unlimited (!)
> queue growth. To prevent this, we should stop sending
> packets and wait for the notification callback to continue.
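>
> A minimal sketch of what I mean (not the exact patch code: the
> helper names and the VmnetState fields here are illustrative, and
> I assume each vmnet packet uses a single iovec):
>
>     static void vmnet_send_completed(NetClientState *nc, ssize_t len);
>
>     /*
>      * Hand buffered packets over to QEMU. Returns true when
>      * s->packets_buf is fully drained, false if the queue asked
>      * us to stop (qemu_send_packet_async returned 0).
>      */
>     static bool vmnet_send_buffered_packets(VmnetState *s)
>     {
>         while (s->packets_send_pos < s->packets_batch_size) {
>             struct vmpktdesc *pkt = &s->packets_buf[s->packets_send_pos];
>             ssize_t size = qemu_send_packet_async(&s->nc,
>                                                   pkt->vm_pkt_iov->iov_base,
>                                                   pkt->vm_pkt_size,
>                                                   vmnet_send_completed);
>             ++s->packets_send_pos;
>             if (size == 0) {
>                 /* Queue full: wait for sent_cb before sending more */
>                 return false;
>             }
>         }
>         return true;
>     }
>
>     static void vmnet_send_completed(NetClientState *nc, ssize_t len)
>     {
>         VmnetState *s = DO_UPCAST(VmnetState, nc, nc);
>
>         /* First finish the interrupted transfer of packets K..N... */
>         if (vmnet_send_buffered_packets(s)) {
>             /* ...and only then go back to vmnet_read for new packets */
>             qemu_bh_schedule(s->send_bh);
>         }
>     }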
Right.

>
> I don't see any contradiction with that comment.
>
>> Jason Wang, I saw you are in the MAINTAINERS for net/. Can you tell if
>> calling qemu_send_packet_async is allowed after it returns 0?
>
> I may be wrong, but I think it's not allowed to send
> packets after qemu_send_packet_async returns 0.
>
> Jason Wang, can you confirm please?

With a cb, we can't do this. All users with cb will disable the source
polling and depend on the cb to re-enable the polling (tap/l2tpv3/socket).

Without a cb, we can. As analyzed above, qemu_net_queue_append() can
limit the number of packets queued in this case.

Thanks

>
> Best Regards,
>
> Vladislav Yaroshchuk
>
>>
>> Regards,
>> Akihiko Odaki
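For reference, tap is the simplest example of the "disable polling,
re-enable from the cb" pattern; condensed from net/tap.c (simplified,
error paths and vnet header handling trimmed, so not verbatim):

    static void tap_send_completed(NetClientState *nc, ssize_t len)
    {
        TAPState *s = DO_UPCAST(TAPState, nc, nc);

        /* The queue drained our packet: resume reading the tap fd */
        tap_read_poll(s, true);
    }

    static void tap_send(void *opaque)
    {
        TAPState *s = opaque;
        int size;

        while (true) {
            size = tap_read_packet(s->fd, s->buf, sizeof(s->buf));
            if (size <= 0) {
                break;
            }

            size = qemu_send_packet_async(&s->nc, s->buf, size,
                                          tap_send_completed);
            if (size == 0) {
                /* Queue full: stop polling until the cb re-enables it */
                tap_read_poll(s, false);
                break;
            }
        }
    }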