Simon: I think you are pretty much correct; generally you are better off with smaller mbufs. However, there are cases where larger mbufs are better (for example, when a very large portion of your data packets are large).
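
If you want to experiment with that trade-off, one option is to register an additional msys pool whose buffers are sized for a single fragment. Below is a rough sketch; the 80-byte figure just echoes the number from this thread, and the pool name, block count, and function name are made up for illustration (double-check the os_mempool/os_mbuf calls against your Mynewt version).

#include "os/os_mbuf.h"
#include "os/os_mempool.h"

/* All of these numbers are illustrative, not recommendations. */
#define FRAG_MBUF_DATA_SIZE    80   /* ~max L2CAP fragment plus overhead */
#define FRAG_MBUF_BLOCK_SIZE   (FRAG_MBUF_DATA_SIZE +              \
                                sizeof (struct os_mbuf) +          \
                                sizeof (struct os_mbuf_pkthdr))
#define FRAG_MBUF_COUNT        32

static os_membuf_t frag_mbuf_mem[
    OS_MEMPOOL_SIZE(FRAG_MBUF_COUNT, FRAG_MBUF_BLOCK_SIZE)];
static struct os_mempool frag_mempool;
static struct os_mbuf_pool frag_mbuf_pool;

static int
frag_mbuf_pool_init(void)
{
    int rc;

    rc = os_mempool_init(&frag_mempool, FRAG_MBUF_COUNT,
                         FRAG_MBUF_BLOCK_SIZE, frag_mbuf_mem,
                         "frag_mbuf_pool");
    if (rc != 0) {
        return rc;
    }

    rc = os_mbuf_pool_init(&frag_mbuf_pool, &frag_mempool,
                           FRAG_MBUF_BLOCK_SIZE, FRAG_MBUF_COUNT);
    if (rc != 0) {
        return rc;
    }

    /* Make the pool available to os_msys_get() / os_msys_get_pkthdr(). */
    return os_msys_register(&frag_mbuf_pool);
}
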
> On Jan 19, 2017, at 11:57 PM, Simon Ratner <[email protected]> wrote:
>
> Thanks Chris,
>
> It appears to me that there is questionable benefit to having mbufs sized
> larger than the largest L2CAP fragment size (plus overhead), i.e. the 80
> bytes that Will mentioned. Is that a reasonable statement, or am I missing
> something?
>
> For incoming data, you always waste memory with larger mbufs, and for
> outgoing data the host will take longer to free the memory (since you
> can't free the payload mbuf until the last fragment, as opposed to freeing
> smaller mbufs as you go), and you don't save on the number of copies in
> the host. You will save something on mbuf allocations and mbuf header
> overhead in the app as you are generating the payload, though.
>
> When allocating mbufs for the payload, is there something I should do to
> reserve enough leading space for the ACL header to make sure the host
> doesn't need to re-allocate it?
>
> Also, at least in theory, it sounds like you could size mbufs to match the
> fragment exactly -- or pre-fragment the mbuf chain as you are generating
> the payload -- and have zero copies in the host. Could be useful in a
> low-memory situation, if the host was smart enough to take advantage of
> that?
>
> On Thu, Jan 19, 2017 at 11:13 AM, Christopher Collins <[email protected]>
> wrote:
>
>> On Thu, Jan 19, 2017 at 10:57:58AM -0800, Christopher Collins wrote:
>>> On Thu, Jan 19, 2017 at 03:46:49AM -0800, Simon Ratner wrote:
>>>> A related question: how does this map to large ATT_MTU and fragmented
>>>> packets at the L2CAP level (assuming no data length extension)? Does
>>>> each fragment get its own mbuf, which are then chained together, or
>>>> does the entire packet get reassembled into a single mbuf if there is
>>>> room?
>>>
>>> If the host needs to send a large packet, it packs the payload into an
>>> mbuf chain. By "packs," I mean each buffer holds as much data as
>>> possible with no regard to the maximum L2CAP fragment size.
>>>
>>> When the host sends an L2CAP fragment, it splits the fragment payload
>>> off from the front of the mbuf chain, constructs an ACL data packet,
>>> and sends it to the controller. If a buffer at the front of the mbuf
>>> chain can be freed, now that data has been removed, the host frees it.
>>>
>>> If you are interested, the function which handles fragmentation and
>>> freeing is mem_split_frag() (util/mem/src/mem.c).
>>
>> I rushed this response a bit, and there are some important details I
>> neglected.
>>
>> * For the final L2CAP fragment in a packet, the host doesn't do any
>>   allocating or copying. Instead, it just prepends an ACL data header to
>>   the mbuf chain and sends it to the controller.
>>
>> * For all L2CAP fragments *other than the last*, the host allocates an
>>   additional mbuf chain to hold the ACL data packet. The host then
>>   copies the fragment data into this new chain, sends it, and frees
>>   buffers from the front of the original chain if possible. The number
>>   of buffers that get allocated for the fragment depends on how the
>>   maximum L2CAP fragment size compares to the msys mbuf size. If an msys
>>   mbuf buffer has sufficient capacity for a maximum-size L2CAP fragment,
>>   then only one buffer will get allocated. If the mbuf capacity is less,
>>   the chain that gets allocated will consist of multiple buffers.
>>
>> * An L2CAP fragment mbuf chain contains the following:
>>     * mbuf pkthdr (8 bytes)
>>     * HCI ACL data header (4 bytes)
>>     * Basic L2CAP header (4 bytes)
>>     * Payload (varies)
>>
>> * For incoming data, the host does not do any packing. The incoming
>>   L2CAP fragments are simply chained together.
>>
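
To make the fragmentation path described above a bit more concrete, here is a simplified sketch of splitting one fragment off the front of a payload chain. This is not the actual mem_split_frag() implementation (the function name and error handling are mine, and the real code differs in details), but the shape matches what Chris describes: hand the chain off as-is for the final fragment, copy-and-trim for everything else.

#include "os/os_mbuf.h"

/*
 * Simplified sketch, not the actual mem_split_frag(): peel the first
 * frag_size bytes off the front of *om, returning them as their own chain.
 */
static struct os_mbuf *
split_one_frag(struct os_mbuf **om, uint16_t frag_size)
{
    struct os_mbuf *frag;
    int rc;

    if (OS_MBUF_PKTLEN(*om) <= frag_size) {
        /* Final fragment: hand off the remainder of the chain as-is;
         * an ACL data header just gets prepended later (no copy). */
        frag = *om;
        *om = NULL;
        return frag;
    }

    /* Non-final fragment: allocate a fresh msys chain for the copy. */
    frag = os_msys_get_pkthdr(frag_size, 0);
    if (frag == NULL) {
        return NULL;
    }

    /* Copy the fragment payload out of the original chain... */
    rc = os_mbuf_appendfrom(frag, *om, 0, frag_size);
    if (rc != 0) {
        os_mbuf_free_chain(frag);
        return NULL;
    }

    /* ...and trim those bytes from the front of the original chain.  The
     * host additionally frees leading buffers that are now fully drained. */
    os_mbuf_adj(*om, frag_size);

    return frag;
}
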
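For reference, the ACL and L2CAP headers in the list above look like this on the wire (both little-endian; the field names are mine rather than NimBLE's, and the 8-byte mbuf pkthdr is host-side bookkeeping that is never transmitted).

#include <stdint.h>

/* HCI ACL data header (4 bytes). */
struct hci_acl_hdr {
    uint16_t handle_flags;  /* 12-bit connection handle + PB/BC flag bits */
    uint16_t data_len;      /* number of data bytes that follow */
};

/* Basic L2CAP header (4 bytes), at the start of each L2CAP PDU. */
struct l2cap_basic_hdr {
    uint16_t len;           /* length of the L2CAP payload */
    uint16_t cid;           /* destination channel ID (0x0004 for ATT) */
};
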
