+1 for mbufs.

Greater potential for a zero-copy stack, easier sizing for different memory 
footprints, etc.
And you can always move the data to a contiguous area when parsing in place 
is inconvenient.
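Flattening a chain into a contiguous area is a short helper. A minimal sketch, using a simplified stand-in struct rather than Mynewt's real `struct os_mbuf` (which carries pool and packet-header fields; `os_mbuf_copydata()` plays this role in the actual OS):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for a chained buffer segment; Mynewt's real
 * struct os_mbuf has more fields (pool pointer, pkthdr, flags, etc.). */
struct mbuf {
    uint8_t *data;      /* start of this segment's payload */
    size_t len;         /* bytes used in this segment */
    struct mbuf *next;  /* next segment in the chain, or NULL */
};

/* Copy up to dst_len bytes from the chain into a contiguous buffer.
 * Returns the number of bytes actually copied. */
static size_t
mbuf_flatten(const struct mbuf *m, uint8_t *dst, size_t dst_len)
{
    size_t off = 0;

    for (; m != NULL && off < dst_len; m = m->next) {
        size_t chunk = m->len;
        if (chunk > dst_len - off) {
            chunk = dst_len - off;   /* clamp to remaining space */
        }
        memcpy(dst + off, m->data, chunk);
        off += chunk;
    }
    return off;
}
```

The parser then works on `dst` as an ordinary flat buffer; the cost is one copy, paid only when in-place parsing is inconvenient.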

> On Jul 19, 2016, at 9:15 AM, Sterling Hughes <[email protected]> wrote:
> 
> +1 for Mbufs vs flat bufs.
> 
> - Mbufs are slightly painful to use, but with good sample code, I think 
> developers won’t have problems.
> 
> - Mbufs are extremely helpful when sizing memory up or down on a 
> system.  Because mbufs are chained structures, the individual mbuf size is 
> tunable, which helps on low-memory systems (e.g. for smaller packet sizes, 
> use smaller mbufs but more of them.)
> 
> - The stacks themselves use mbufs, so you have zero copy from app->radio 
> (theoretically.)
> 
> - Mbufs can be converted to flat buffers (there should be an easy helper to 
> do this) by the app code.
> 
> Sterling
> 
> On 19 Jul 2016, at 8:44, Christopher Collins wrote:
> 
>> On Tue, Jul 19, 2016 at 06:38:02AM -0700, will sanfilippo wrote:
>>> I am +1 for mbufs. While they do take a bit of getting used to, I
>>> think converting the host to use them is the way to go, especially if
>>> they replace a large flat buffer pool.
>>> 
>>> I also think we should mention that mbufs use a fair amount of
>>> overhead. I don't think that applies here, as the flat buffer required
>>> for an ATT buffer is quite large (if I recall). But if folks want to
>>> use mbufs for other things, they should be aware of this.
>> 
>> Good point.  Mbufs impose the following amount of overhead:
>> 
>> * First buffer in chain: 24 bytes (os_mbuf + os_mbuf_pkthdr)
>> * Subsequent buffers:    16 bytes (os_mbuf)
>> 
>> Also, I didn't really explain it, but there is only one flat buffer in
>> the host that is being allocated: the ATT rx buffer.  This single buffer
>> is used for receives of all ATT commands and optionally for application
>> callbacks to populate with read responses.  The host gets by with a
>> single buffer because all receives are funneled into one task.  This
>> buffer is sized according to the largest ATT message the stack is built
>> to receive, with a spec-imposed cap of 515 bytes [*].  So, switching to
>> mbufs would probably save some memory here, but the savings wouldn't be
>> dramatic.
>> 
>> [*] This isn't totally true, as the ATT MTU can be as high as 65535.
>> However, the maximum size of an attribute is 512 bytes, which limits the
>> size of the largest message any ATT bearer would send.  The one
>> exception is the ATT Read Multiple Response, which can contain several
>> attributes.  However, this command is seldom used, and is not practical
>> with large attributes.  So, the current NimBLE code does impose an
>> artificial MTU cap of 515 bytes, but in practice I don't think it would
>> ever be noticed.
>> 
>> Chris
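To put Chris's numbers in perspective, the per-chain RAM cost is easy to model. A back-of-the-envelope sketch: the 24/16-byte overheads and the 515-byte cap come from the thread above, but the 128-byte block size is just an example, and this is a hypothetical model, not Mynewt's actual allocator math:

```c
/* Per-mbuf overheads quoted in the thread (bytes). */
#define PKTHDR_OVERHEAD 24  /* first buffer: os_mbuf + os_mbuf_pkthdr */
#define MBUF_OVERHEAD   16  /* each subsequent buffer: os_mbuf */

/* Total RAM consumed to hold `payload` bytes in a chain of mbufs whose
 * data area is `blk` bytes each (whole blocks are consumed). */
static unsigned
chain_ram(unsigned payload, unsigned blk)
{
    unsigned nbufs = (payload + blk - 1) / blk;  /* ceiling division */

    if (payload == 0) {
        return 0;
    }
    return PKTHDR_OVERHEAD + (nbufs - 1) * MBUF_OVERHEAD + nbufs * blk;
}
```

For instance, a maximum-sized 515-byte ATT message in 128-byte blocks needs 5 mbufs: 24 + 4*16 + 5*128 = 728 bytes, which is more than the single 515-byte flat buffer. The savings Chris mentions come from pooling: the same blocks serve all packets, so idle memory isn't reserved for one worst-case buffer.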
