haitomatic opened a new issue, #9353: https://github.com/apache/nuttx/issues/9353
Hi, I am using NuttX SocketCAN together with UAVCAN in PX4 and ran into a problem where `g_iob_sem.semcount` either exceeds `CONFIG_IOB_NBUFFERS` or drops as low as 1. This happens especially at high data rates (~1000 fps with standard CAN 2.0B frames). When it occurs, the SocketCAN network interface stops functioning and there is no more RX/TX.

I dug into the code and noticed that `iob_update_pktlen`, while trimming, sporadically frees a number of IOBs equal either to `CONFIG_IOB_NBUFFERS` or to the number of currently free IOBs (most of the time that is `CONFIG_IOB_NBUFFERS`):

https://github.com/apache/nuttx/blob/c60dd72a2a8bace64480c73bc824daf8bb9d41a1/net/devif/devif_send.c#L105

This drives `g_iob_sem.semcount` far above `CONFIG_IOB_NBUFFERS`. I changed that part to

```
//iob_update_pktlen(dev->d_iob, offset);

ret = iob_trycopyin(dev->d_iob, buf, len, offset, false);
if (ret < 0 || dev->d_iob->io_pktlen != len)
  {
    netdev_iob_release(dev);
    goto errout;
  }
```

and I was able to get rid of the overflowed semcount problem.

This is really weird, since after `netdev_iob_prepare` the new IOB handle should always have `io_flink` NULL. And it actually does when I add a check just before the `iob_update_pktlen` line, yet somehow, inside that function, the whole pool of IOBs in `g_iob_freelist` gets freed up again.

As for the issue where `g_iob_sem.semcount` drops to 1: when it happens, usually after about 10 minutes of operation at a high rate, the semcount suddenly falls to 1 from a long-stable level around `CONFIG_IOB_NBUFFERS`. This issue is much harder to debug, and I currently have no fix or workaround for it.

Both issues happen regardless of the `CONFIG_IOB_NBUFFERS` value, with or without `CONFIG_IOB_THROTTLE`, and seemingly independently of each other.
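For anyone trying to reproduce this, below is a minimal sketch of the kind of debug guard that would trap both invariants discussed above. It is illustrative only, not a proposed patch: it assumes `CONFIG_DEBUG_ASSERTIONS` is enabled, and it reaches into `g_iob_sem`, which is normally private to `mm/iob` (so its declaration has to be made visible at the call site).

```
/* Illustrative debug guards only (not a proposed patch).  Assumes
 * CONFIG_DEBUG_ASSERTIONS is enabled and that the declaration of
 * g_iob_sem (normally private to mm/iob) is visible here.
 */

#include <nuttx/config.h>
#include <nuttx/mm/iob.h>
#include <nuttx/semaphore.h>
#include <assert.h>

extern sem_t g_iob_sem;  /* Normally private to mm/iob */

static void iob_sanity_check(FAR struct iob_s *iob)
{
  int semcount;

  /* A freshly prepared handle should be a single, unchained buffer */

  DEBUGASSERT(iob != NULL && iob->io_flink == NULL);

  /* The free-pool semaphore count must never exceed the pool size */

  DEBUGVERIFY(nxsem_get_value(&g_iob_sem, &semcount));
  DEBUGASSERT(semcount <= CONFIG_IOB_NBUFFERS);
}
```

The `io_flink` assertion corresponds to the check I mention above; the semcount assertion additionally traps the overflow at its first occurrence instead of only after the interface has already stalled.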
