Hello,

> Just being pedantic I know, but that isn't strictly true - I'm working on
> one board which can scatter/gather pbufs and where the memory pointed to
> by the buffer descriptors is fixed at 128 bytes.
I don't think that in our case combining fragmented pbufs is going to be the
bottleneck, and we also have the memory to use a larger buffer.

> However if you do have the memory it is more efficient to use the full
> 1516. The question then is whether you have the memory because _every_
> packet received will occupy the full 1516 bytes, and if you're keeping
> them around till you've got the lot, that can add up. If you are receiving
> your 4MB image in 256 byte chunks, you'd need at least 24MB of RAM ;).

If we are pulling the image out while it's coming in, this won't be a
problem; it only becomes one if the link is faster than we are. I expect
we'll be waiting on the data, not the data coming in faster than we can
handle it.

> NB the value of 1516 bytes can depend on your hardware, e.g. if your
> hardware also transfers the CRCs. See
> http://sd.wareonearth.com/~phil/net/jumbo/ for example.

Thanks - interesting - I'd not looked into Jumbo Frames.

> Just to clarify what Simon is saying, pool pbufs consist of a 'struct
> pbuf' immediately followed in memory by the payload - so if you have the
> payload pointer, you know where the struct pbuf is.

This does help - thank you.

> Indeed. I've certainly only been working in terms of (inherently
> preallocated) pool pbufs.

I see this is the way to go. I'll have to figure out how to release the
ETH buffer the pbuf points to once the pbuf has been processed.

Thank you!

Bill

_______________________________________________
lwip-users mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/lwip-users
