Hi Loïc,
I have an approach that I've used on a few different microcontroller
platforms. You'll have to see if this approach fits well on your hardware.
What I did was create a struct containing both a buffer and a
pbuf_custom, like this:
typedef struct {
    struct pbuf_custom p;
    uint8_t buffer[RX_BUFFER_SIZE] __attribute__ ((aligned (4)));
} rx_buffer_t;

static rx_buffer_t rx_buffer_array[RX_DESC_COUNT];
where I could control the linker's placement of rx_buffer_array if
necessary. Using separate arrays (of the same length) for the pbufs and
buffers, instead of a single array of structs, is also a possibility.
On initialization I would give my Ethernet peripheral every buffer in
the array.
My Ethernet peripheral's DMA would fill the buffer bytes, and when it
finished with a buffer, my code would set up the associated pbuf and
pass it to LWIP.
When LWIP was finished with the pbuf it would call the
pbuf_custom.custom_free_function, where my code would give the buffer
back to my Ethernet peripheral's DMA controller.
I found this a really easy way to do the bookkeeping necessary for both
LWIP and the DMA controller.
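To make that lifecycle concrete, here is a minimal, self-contained sketch. The lwIP declarations (struct pbuf, struct pbuf_custom) are stubbed so it compiles on its own, and dma_give_buffer() is a hypothetical stand-in for whatever your peripheral's API provides; a real driver would include lwip/pbuf.h, enable LWIP_SUPPORT_CUSTOM_PBUF, and build the pbuf with pbuf_alloced_custom() before handing it to netif->input().

```c
#include <stdint.h>
#include <stddef.h>

#define RX_BUFFER_SIZE 1536
#define RX_DESC_COUNT  4

/* --- stubs for the relevant lwIP declarations (real code: lwip/pbuf.h) --- */
struct pbuf { void *payload; uint16_t len; };
typedef void (*pbuf_free_custom_fn)(struct pbuf *p);
struct pbuf_custom {
    struct pbuf pbuf;
    pbuf_free_custom_fn custom_free_function;
};

/* --- the struct from above, tying each buffer to its pbuf_custom --- */
typedef struct {
    struct pbuf_custom p;          /* must be the first member (see cast below) */
    uint8_t buffer[RX_BUFFER_SIZE] __attribute__ ((aligned (4)));
} rx_buffer_t;

static rx_buffer_t rx_buffer_array[RX_DESC_COUNT];

/* --- hypothetical stand-in for the DMA controller: it just counts how
 * many buffers it currently owns; a real driver would write descriptors --- */
static int dma_owned_buffers;

static void dma_give_buffer(uint8_t *buf)
{
    (void)buf;
    dma_owned_buffers++;
}

/* lwIP calls this when it is finished with the pbuf; the buffer goes
 * straight back to the DMA controller */
static void rx_pbuf_free(struct pbuf *p)
{
    rx_buffer_t *rx = (rx_buffer_t *)p;  /* valid: pbuf_custom is member 0 */
    dma_give_buffer(rx->buffer);
}

/* on initialization, give the Ethernet peripheral every buffer */
void rx_buffers_init(void)
{
    for (int i = 0; i < RX_DESC_COUNT; i++) {
        rx_buffer_array[i].p.custom_free_function = rx_pbuf_free;
        dma_give_buffer(rx_buffer_array[i].buffer);
    }
}

/* called when the DMA signals a filled buffer; real code would call
 * pbuf_alloced_custom(PBUF_RAW, len, PBUF_REF, &rx->p, rx->buffer,
 * RX_BUFFER_SIZE) here instead of filling the fields by hand */
struct pbuf *rx_buffer_to_pbuf(rx_buffer_t *rx, uint16_t len)
{
    dma_owned_buffers--;               /* the DMA no longer owns this buffer */
    rx->p.pbuf.payload = rx->buffer;
    rx->p.pbuf.len = len;
    return &rx->p.pbuf;
}
```

The cast in rx_pbuf_free is what makes the single-struct layout convenient: because the pbuf_custom is the first member, the pbuf pointer lwIP hands back is also a pointer to the whole rx_buffer_t.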
Hope that helps,
Jeff
Jeffrey Nichols
Suprock Technologies, LLC
Phone: 603-479-3408
www.suprocktech.com
On 7/2/2021 2:18 AM, DROZ Loïc wrote:
Hello,
I ran into a problem with the receive thread of my Ethernet driver
working with LwIP and I am looking for some advice. Some context first:
The LwIP documentation suggests that incoming data should be
transferred into PBUF_POOL packet buffers. If I read the code
correctly, such packet buffers are allocated from the MEMP_PBUF_POOL
memory pool. This is a problem when these packet buffers are later
reused to transmit data: the MEMP_PBUF_POOL is allocated as a static
array, which by default lies in memory that my Ethernet hardware
interface cannot access, so the same is true of the payloads of packet
buffers allocated from this pool. As a result, the Ethernet hardware
interface fails to transmit packet buffers that were allocated in the
receive thread.
If I understand correctly, it is possible to relocate the memory
arrays of pools by redefining the LWIP_DECLARE_MEMORY_ALIGNED macro to
place the array it declares in a memory region defined in the linker
script. However, my project uses an OS with a complicated build
procedure, and I would like to avoid changes to the linker script if
possible.
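For concreteness, such a redefinition would typically look something like the following sketch, where ".eth_ram" is a hypothetical output section that would have to be defined in the linker script (which is precisely the change described above):

```c
/* sketch only: place every pool's memory array in a DMA-reachable region */
#define LWIP_DECLARE_MEMORY_ALIGNED(variable_name, size) \
  u8_t variable_name[LWIP_MEM_ALIGN_BUFFER(size)] \
  __attribute__((section(".eth_ram")))
```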
My OS uses all of the memory accessible to the Ethernet hardware
interface as a heap for dynamic memory allocation, and it provides a
malloc-like function for that purpose. Is it possible to use it to
allocate a pool's memory array? I cannot modify
LWIP_DECLARE_MEMORY_ALIGNED directly to declare and initialize the
array using this malloc-like function, as it is a global variable and
requires a constant initializer. I then tried to redefine
LWIP_DECLARE_MEMORY_ALIGNED to:
#define LWIP_DECLARE_MEMORY_ALIGNED(variable_name, size) u8_t variable_name
… and then to allocate the pool's memory array at the beginning of the
memp_init_pool function, modifying the struct memp_desc * argument,
but this is causing problems with allocation from the pools, which I
am currently trying to debug. Do you know if there would be a simpler
way?
Best,
Loïc
_______________________________________________
lwip-users mailing list
lwip-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/lwip-users