On 22. 03. 2018 14:44, Mitja Pirih wrote:
> On 20. 03. 2018 14:57, Jernej Stopinšek wrote:
>> Try playing around with the kernel parameters and increase the values for:
>>
>> net.ipv4.udp_rmem_min
>> net.ipv4.udp_mem
>> net.core.rmem_default
>> net.core.rmem_max
>> net.core.netdev_max_backlog
>> net.core.netdev_budget
>>
>> Also check the interface ring parameters:
>> ethtool -g [interface]
>>
>> and increase RX and TX to the maximum; in my case:
>>
>> ethtool -G eth0 rx 4096 tx 4096
>>
>> Check out this document:
>> https://blog.packagecloud.io/eng/2016/06/22/monitoring-tuning-linux-networking-stack-receiving-data/
>>
>> and enable RPS and RFS.
>>
>> This solved my problem with circular buffer overruns when adding more
>> instances.
> I tested your suggestion, but it does not help. I tested values from 10%
> above the defaults up to 10x the defaults (following a couple of tuning
> guides). The only thing I cannot test is the ring parameters, as I am
> getting all multicast traffic on the lo (loopback) interface. I am still
> checking whether RPS and RFS are of any real use on lo, since it is a
> virtual interface.
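Before raising any of the sysctls quoted above, it helps to record the current values so each change can be compared against a known baseline. A minimal sketch, assuming a Linux box with the standard sysctl(8) tool; the keys are exactly those named in the reply, nothing else is assumed:

```shell
# Print the current values of the sysctls suggested above, so the
# defaults are recorded before raising anything incrementally.
# On non-Linux systems the fallback message is printed instead.
for key in net.ipv4.udp_rmem_min net.ipv4.udp_mem \
           net.core.rmem_default net.core.rmem_max \
           net.core.netdev_max_backlog net.core.netdev_budget; do
    sysctl "$key" 2>/dev/null || echo "$key: not available here"
done
```

Changes made with `sysctl -w` are lost on reboot; to persist them, the same key=value pairs would go into a file under /etc/sysctl.d/.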
Please correct my understanding of a circular buffer overrun: it usually
happens when one experiences poor network performance, in which case the
solution would be to tune the network-related buffers. Are there other
cases in which it happens?

I experience the same error approximately 2-3 times daily. What
distinguishes my configuration is that the tuners (8) and the encoder (1)
are on the same device, so no network is involved at this stage. All
traffic from the tuners is dumped to the same loopback interface. The
loopback interface is hit by an average of 400 Mbit/s of data (200 up +
200 down), while eth0 carries an average of 10 Mbit/s (9 Mbit/s up +
100 kbit/s down).

What would be your steps to correctly diagnose where the problem
(bottleneck?) is in this case?

Thanks.

Br,
Mitja

_______________________________________________
ffmpeg-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
[email protected] with subject "unsubscribe".
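Since the tuner traffic never touches a physical NIC, ethtool ring statistics will not show the drops; the place to look is the UDP socket layer. A diagnostic sketch, assuming a Linux box with iproute2's ss(8) installed; the counters read here are the standard /proc interfaces and are not specific to this setup:

```shell
#!/bin/sh
# Diagnostic sketch for drops on loopback-only UDP traffic.

# 1) Protocol-level counters: a rising RcvbufErrors/InErrors count
#    means datagrams were dropped because a socket receive buffer was
#    full, i.e. the reading process (the encoder) fell behind.
grep -E '^Udp:' /proc/net/snmp 2>/dev/null || echo "no /proc/net/snmp (not Linux?)"

# 2) Per-socket view: a Recv-Q sitting near its buffer limit
#    identifies which listening socket is the bottleneck.
ss -ulnm 2>/dev/null || echo "ss not available"

# 3) Per-CPU backlog drops (second hex column) -- only relevant if
#    netdev_max_backlog is too small for the burst rate.
cat /proc/net/softnet_stat 2>/dev/null || echo "no softnet_stat"
```

Sampling these counters at the moment an overrun is logged (e.g. from a watch(1) loop) shows whether the drop happens in the kernel's socket buffers or inside the application itself.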
