Hello all,

On 09/30/11 15:52, Marc Kleine-Budde wrote:
> On 09/30/2011 02:32 PM, Michal Sojka wrote:
>> The default value of sk->sk_wmem_alloc is 108544, which means that for
>> CAN this limit is reached (and the application blocks) when it has
>> 542 CAN frames waiting to be sent to the driver. This is of course more
>> than the 10 allowed by dev->tx_queue_len.
>>
>> Therefore, we propose to apply a patch like this:
(..)
>> + dev->tx_queue_len = 22;
(..)
>> + sk->sk_sndbuf = SOCK_MIN_SNDBUF;
>> This sets the minimum possible sk_sndbuf, i.e. 2048, which allows
>> 11 frames to be queued on a socket before the application blocks.
(..)
> What about dynamically calculating sk->sk_sndbuf to provide room for
> a fixed number of CAN frames in the socket, i.e. 10 or so. Maybe even
> make the number of CAN frames configurable at runtime.

If we can modify the rcvbuf size with SO_RCVBUF, we should be able to use
SO_SNDBUF for our needs too. Indeed, I tend to set sk_sndbuf to a size that
allows storing only 3 CAN frames per raw socket by default.

>> It is also necessary to slightly increase the default tx_queue_len.
>> Increasing it to 22 allows using two applications (or better, two
>> sockets) without seeing ENOBUFS. The third application/socket then
>> gets ENOBUFS just for its first write().
>
> Hmmm... 3 applications isn't that much, is it?
> How many other applications are needed to deplete the standard 1000
> tx_queue_len?
>
> 100k snd_buf / 2k skb+data = 50 frames per sock
> 1000 tx_queue_len / 50 socks = 20 apps

IMO the question is which delay we would like to guarantee for applications
on the system. E.g. if we want a maximum delay of 50ms @ 500kbit/s, the
tx_queue_len could be calculated to a value like 50 or so - not very
academic %-)

>> The above described situation is not the only way an application can
>> get ENOBUFS, but I think that in the case of PF_CAN this is the most
>> common situation, and having a blocking behavior as provided by this
>> patch would help the users a lot.

Yes.
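For illustration, the SO_SNDBUF idea could be sketched roughly like this. The 2 KiB-per-frame figure is taken from the "2k skb+data" arithmetic quoted above (an approximation, not an exact kernel number), and the demo uses an ordinary UDP socket so it runs without CAN support; a real application would create its socket with socket(AF_CAN, SOCK_RAW, CAN_RAW) instead:

```python
import socket

# Assumption from the quoted arithmetic: one queued CAN frame accounts
# for roughly 2 KiB of sk_wmem_alloc (skb overhead + data).
BYTES_PER_FRAME = 2048

def sndbuf_for_frames(n_frames):
    """Send-buffer size needed to queue roughly n_frames CAN frames."""
    return n_frames * BYTES_PER_FRAME

# Room for only 3 frames per raw socket, as suggested above.
requested = sndbuf_for_frames(3)

# Plain UDP socket as a stand-in; SO_SNDBUF handling happens at the
# generic sock level, so the clamping behavior is the same.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, requested)

# On Linux the kernel stores double the requested value (see socket(7)),
# and clamps it between SOCK_MIN_SNDBUF and net.core.wmem_max.
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
s.close()

print(requested, effective)
```

Note that because the kernel doubles the value passed to setsockopt(), the application-visible buffer is larger than what was requested, which is worth keeping in mind when sizing for an exact frame count.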
Thanks a lot for this investigation!

Best regards,
Oliver

_______________________________________________
Socketcan-core mailing list
Socketcan-core@lists.berlios.de
https://lists.berlios.de/mailman/listinfo/socketcan-core