mennovf opened a new issue, #17299: URL: https://github.com/apache/nuttx/issues/17299
### Description

I ran into an issue where a single task sequentially reads from and writes to a blocking UDP socket with no timeout. `CONFIG_NET_RECV_BUFSIZE` was 0, so the only place the UDP read-ahead IOB buffers get freed is in `recvfrom()`. If the IOB free list was emptied before a `sendto()` call, that call would block indefinitely waiting for memory, never giving `recvfrom()` a chance to free up IOB buffers.

[From its description in Kconfig](https://nuttx.apache.org/docs/latest/reference/os/iob.html), I got the impression that `CONFIG_IOB_THROTTLE` was created specifically to prevent this kind of deadlock (it only speaks of TCP, but the option is also used in the UDP code). On inspecting the UDP stack: with `CONFIG_IOB_THROTTLE` enabled, UDP reception allocates with `throttled = true`, as expected. However, the send logic of [both TCP](https://github.com/apache/nuttx/blob/cc2cb394fa98bc969497741b0bb5193ad9bb65d8/net/tcp/tcp_wrbuffer.c#L103) and [UDP](https://github.com/apache/nuttx/blob/cc2cb394fa98bc969497741b0bb5193ad9bb65d8/net/udp/udp_wrbuffer.c#L146) also allocates throttled rather than unthrottled, contradicting the `IOB_THROTTLE` description: the send path can never reach the reserved IOBs, so the deadlock is not avoided. [`udp_wrbuffer_alloc()` however DOES allocate unthrottled](https://github.com/apache/nuttx/blob/cc2cb394fa98bc969497741b0bb5193ad9bb65d8/net/udp/udp_wrbuffer.c#L97), but it appears to never get called.

I am not directly in control of the code that sequentially calls `sendto()`/`recvfrom()` on this socket, so I cannot fix the behaviour myself by, e.g., moving the two calls into separate tasks.

My question is whether this `IOB_THROTTLE` behaviour is a bug, or whether there is another intended (config) solution.

### Verification

- [x] I have verified before submitting the report.
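For reference, here is a minimal sketch of the call pattern that triggers the hang. This is hypothetical code, not the actual application; the peer address, port, and payload size are made up. It assumes `CONFIG_NET_RECV_BUFSIZE=0`, a blocking socket with no `SO_SNDTIMEO`/`SO_RCVTIMEO`, and a peer that keeps sending datagrams:

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
  struct sockaddr_in peer;
  socklen_t addrlen;
  uint8_t buf[256];
  int sock;

  /* Blocking UDP socket, no timeouts set */

  sock = socket(AF_INET, SOCK_DGRAM, 0);

  memset(buf, 0, sizeof(buf));
  memset(&peer, 0, sizeof(peer));
  peer.sin_family      = AF_INET;
  peer.sin_port        = htons(5000);            /* hypothetical port */
  peer.sin_addr.s_addr = inet_addr("10.0.0.2");  /* hypothetical peer */

  for (; ; )
    {
      /* If incoming datagrams have already exhausted the IOB free list,
       * this sendto() blocks forever waiting for an IOB...
       */

      sendto(sock, buf, sizeof(buf), 0,
             (struct sockaddr *)&peer, sizeof(peer));

      /* ...and the recvfrom() that would free the read-ahead IOBs (the
       * only place they are freed with CONFIG_NET_RECV_BUFSIZE=0) is
       * never reached: deadlock.
       */

      addrlen = sizeof(peer);
      recvfrom(sock, buf, sizeof(buf), 0,
               (struct sockaddr *)&peer, &addrlen);
    }

  return 0;
}
```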

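To illustrate why a throttled allocation on the send path cannot break the deadlock, here is a small self-contained model of the throttle semantics as I understand them from the Kconfig description. This is illustrative code only, not the actual IOB allocator, and the pool sizes are made up:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_IOBS     8  /* hypothetical total pool size */
#define IOB_THROTTLE 2  /* hypothetical throttle reserve */

static int g_free = NUM_IOBS;

/* Model of the allocator: a throttled request must leave IOB_THROTTLE
 * buffers in the pool; an unthrottled request may drain it completely.
 */

static bool alloc_iob(bool throttled)
{
  int reserve = throttled ? IOB_THROTTLE : 0;

  if (g_free > reserve)
    {
      g_free--;
      return true;
    }

  return false;  /* the real code would block here */
}

int main(void)
{
  /* Read-ahead (throttled) fills the pool down to the reserve */

  while (alloc_iob(true));

  printf("after read-ahead: free=%d\n", g_free);
  printf("throttled send alloc:   %s\n", alloc_iob(true)  ? "ok" : "blocks");
  printf("unthrottled send alloc: %s\n", alloc_iob(false) ? "ok" : "blocks");
  return 0;
}
```

This prints `free=2`, `throttled send alloc: blocks`, `unthrottled send alloc: ok`: the reserve sits unused while a throttled sender blocks, which is why I would expect the send path to allocate unthrottled.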