> On Jan 3, 2017, at 9:34 AM, Daniel Pauli <[email protected]> wrote:
> 
> If you have LwIP stats enabled, you can check the memory pools for errors to 
> figure out which one is failing.  You should be able to resolve this by 
> sizing your memory pools to handle the number of supported connections.  For 
> example if you only support 5 simultaneous TCP connections, then your pools 
> should be big enough to allocate 5 send buffers worth of segments.  This is 
> how I configure my products, which typically have plenty of RAM.  Not sure 
> what the recommendation is for very constrained RAM products.
> 
> Using LwIP stats I found that memory allocations from RAW_PCB failed. By 
> increasing MEM_SIZE I was able to avoid the issue. I'm still unsure about a 
> reasonable value here, given that we want to support up to 64 simultaneous 
> TCP connections (MEMP_NUM_TCP_PCB==64). It is not a constrained RAM product, 
> though.
> 

RAW_PCB exhaustion is most likely unrelated to the problem you’re seeing.  When 
TCP segments are allocated in tcp_write() (with TCP_WRITE_FLAG_COPY specified), 
there are two allocations: a PBUF_RAM pbuf (coming from the LwIP heap, 
controlled by MEM_SIZE) and a struct tcp_seg (coming from the MEMP_NUM_TCP_SEG 
static memory pool).

To size these two pools according to what your connections can handle, use a 
worst-case calculation that assumes every connection has a full send buffer's 
worth of MSS-sized segments queued.

MEM_SIZE: should be at least MEMP_NUM_TCP_PCB * TCP_SND_BUF, with some extra 
space for miscellaneous heap allocations.

MEMP_NUM_TCP_SEG: should be at least MEMP_NUM_TCP_PCB * (TCP_SND_BUF / TCP_MSS).

> Yes there is, with SO_LINGER you can perform an abortive closure rather than 
> graceful by setting the timeout to 0.  Typically this is a bad idea.  There’s 
> a decent discussion here on stackoverflow:
> 
> http://stackoverflow.com/questions/3757289/tcp-option-so-linger-zero-when-its-required
> 
>  We are using the default setting of LWIP_SO_LINGER==0. If I understand 
> correctly, this already completely disables linger processing and should 
> correspond to an abortive closure by setting the timeout to 0 as you 
> suggested. Is there another way to tell LWIP to release any resources 
> associated with a socket immediately? 
> 

I haven’t used LWIP_SO_LINGER in my port, but when it is not enabled you get 
the default TCP closure behavior, which is a graceful close with a 20-second 
timeout (see LWIP_TCP_CLOSE_TIMEOUT_MS_DEFAULT).

> Even when increasing MEM_SIZE, I feel uneasy about misbehaving clients 
> (ignoring retransmissions, reconnecting frequently) eating up all server 
> resources over time. In the case of unanswered retransmissions, I observed 
> that the TCP_PCB_LISTEN allocation counter is not decremented after close() 
> for a long time (probably 25 minutes?).

What state is the PCB in after closing the listener?  It seems strange that 
this PCB would hang around.

Joel
_______________________________________________
lwip-users mailing list
[email protected]
https://lists.nongnu.org/mailman/listinfo/lwip-users