On 29/03/2019 15:53, Sergio R. Caprile wrote:
The recommendation is to use separate pools for rx and tx so they cannot
starve each other. Since PBUF_RAM is used for tx, rx is expected to use
PBUF_POOL. I bet you can also use custom pools, but I don't know how.

Last time I checked, PBUF_RAM allocates a single contiguous block of
memory. That is probably why these guys use it for the buffers behind
their DMA descriptors. PBUF_POOL, on the other hand, *might* (not
necessarily *will*, but it will surely bite you if you don't prepare for
it) allocate several buffers in a chain to satisfy your request. That
is, your pbuf will be a chain of pbufs.
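
A minimal sketch of the difference (MY_LEN is an arbitrary request size
I made up for illustration):

    #include "lwip/pbuf.h"

    #define MY_LEN 1600  /* arbitrary request size, illustration only */

    void show_chaining(void)
    {
        /* PBUF_RAM: the payload is one contiguous block, never chained;
           p == NULL on out-of-memory, otherwise p->next == NULL */
        struct pbuf *p = pbuf_alloc(PBUF_RAW, MY_LEN, PBUF_RAM);

        /* PBUF_POOL: the request may be satisfied by chaining several
           fixed-size (PBUF_POOL_BUFSIZE) pool entries */
        struct pbuf *q = pbuf_alloc(PBUF_RAW, MY_LEN, PBUF_POOL);
        if (q != NULL && q->next != NULL) {
            /* chained: q->tot_len covers the whole request,
               q->len only the first segment */
        }

        if (p != NULL) pbuf_free(p);
        if (q != NULL) pbuf_free(q);
    }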

I think I understand. However, the LPC network driver always allocates ENET_ETH_MAX_FLEN-byte pbufs. If PBUF_POOL_BUFSIZE is greater than or equal to ENET_ETH_MAX_FLEN (that is, 1552 bytes), I don't think pbuf_alloc(PBUF_RAW, (u16_t) ENET_ETH_MAX_FLEN, PBUF_POOL) could return a pbuf chain.
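
For reference, the configuration I mean would be something like this in
lwipopts.h (a sketch; 1552 is ENET_ETH_MAX_FLEN from the LPC headers,
and whether your port rounds PBUF_POOL_BUFSIZE up for alignment is worth
double-checking):

    /* lwipopts.h: one pool entry holds a full Ethernet frame, so
       pbuf_alloc(PBUF_RAW, ENET_ETH_MAX_FLEN, PBUF_POOL) returns a
       single, unchained pbuf */
    #define PBUF_POOL_BUFSIZE 1552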


While you can memcpy len bytes into a PBUF_RAM pbuf p where p->tot_len
>= len, you certainly cannot do that with a PBUF_POOL pbuf: you must
loop through the chain with q = q->next, writing up to q->len bytes into
each pbuf. Hope I'm clear...
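
The loop Sergio describes would look something like this (a sketch; src
and len stand for the frame being copied in, and the surrounding checks
are up to the caller):

    #include <string.h>
    #include "lwip/def.h"   /* LWIP_MIN */
    #include "lwip/pbuf.h"

    /* copy len bytes from src into the (possibly chained) pbuf p;
       the caller must ensure p->tot_len >= len */
    static void copy_to_chain(struct pbuf *p, const u8_t *src, u16_t len)
    {
        struct pbuf *q;
        u16_t copied = 0;

        for (q = p; q != NULL && copied < len; q = q->next) {
            u16_t chunk = LWIP_MIN(q->len, (u16_t)(len - copied));
            memcpy(q->payload, src + copied, chunk);
            copied += chunk;
        }
    }

In practice lwIP's pbuf_take(p, src, len) already implements exactly
this loop and returns ERR_MEM if the chain is too short.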

So, you'd better check what your driver is doing.
Keeping the rx side on PBUF_RAM can cause tx and rx to compete for
buffers (unless...).
Moving the rx side to PBUF_POOL when the driver was not properly written
for that can cause memory corruption once the pool gets fragmented and
returned pbufs are chained.
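
If you do move rx to PBUF_POOL, a cheap safety net while auditing the
driver is an assert in the descriptor refill path (a sketch;
rx_descriptor and its packet field are hypothetical stand-ins for the
real LPC descriptor layout, and ENET_ETH_MAX_FLEN comes from the LPC
headers):

    #include "lwip/debug.h"
    #include "lwip/pbuf.h"

    /* hypothetical DMA descriptor, stand-in for the real one */
    struct rx_descriptor {
        void *packet;
    };
    extern struct rx_descriptor rx_desc[];

    void refill_rx_slot(int i)
    {
        struct pbuf *p = pbuf_alloc(PBUF_RAW, ENET_ETH_MAX_FLEN, PBUF_POOL);
        if (p != NULL) {
            /* each descriptor holds a single buffer address, so a
               chained pbuf here would corrupt memory on reception */
            LWIP_ASSERT("rx pbuf must not be chained", p->next == NULL);
            rx_desc[i].packet = p->payload;
            /* the real driver would also keep p itself, to hand the
               frame to lwIP and eventually pbuf_free() it */
        }
    }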
