I'm struggling with some puzzling numbers I'm getting while trying to
maximize transfer throughput over GATT.
I have a simple setup: a MacBook Pro sending data by writing
characteristics to a GATT server running on an nRF52DK board running
Mynewt. The Mac accepts the default ATT MTU of 240, and the
characteristic is set up for Write Without Response.
I am writing 200 bytes of characteristic value in a loop, as fast as
possible.
What I observe doesn't make sense: I get about 45 kB/s (kilobytes).
From basic calculations, the max should be less than 40 kB/s, unless I'm
missing something:

When blasting characteristics that way, the master will send packets
with the max PDU payload size (27 bytes), which means 37 bytes on air
(27 bytes of PDU + 10 bytes of link-layer overhead: 1 preamble + 4
access address + 2 header + 3 CRC). The slave will respond with Empty
PDU packets of 10 bytes each (0 bytes of PDU + 10 bytes of overhead).
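
As a sanity check, here is that packet-size arithmetic in Python (a
sketch; the 10-byte framing figure is for the 1 Mbit/s uncoded PHY):

    # On-air packet sizes for a BLE 4.x data channel packet.
    LL_OVERHEAD = 1 + 4 + 2 + 3   # preamble + access addr + header + CRC
    MAX_PDU = 27                  # max data PDU payload without DLE

    data_packet = MAX_PDU + LL_OVERHEAD   # 37 bytes on air
    empty_packet = 0 + LL_OVERHEAD        # 10 bytes for an Empty PDU
    print(data_packet, empty_packet)      # -> 37 10
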
If we take into account the 150 us interframe spacing (T_IFS), a
master/slave packet cycle should take:
296 + 150 + 80 + 150 = 676 us (at 1 Mbit/s one bit takes 1 us; 37 bytes
is 296 bits and 10 bytes is 80 bits).
With a max PDU of 27 bytes, that's 39940 bytes/s of PDU throughput.
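
In code (same assumptions: 1 Mbit/s PHY, fixed 150 us T_IFS, one data
packet and one Empty PDU per cycle):

    # One master data packet + slave Empty PDU, with T_IFS either side.
    T_IFS = 150                      # us
    data_air = 37 * 8                # 296 us for the 37-byte packet
    empty_air = 10 * 8               # 80 us for the Empty PDU
    cycle_us = data_air + T_IFS + empty_air + T_IFS   # 676 us

    pdu_rate = 27 / (cycle_us * 1e-6)   # ~39940 bytes/s of PDU payload
    print(cycle_us, round(pdu_rate))
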
When writing characteristics with a 200-byte value, the total payload to
transfer is 207 bytes: 200 bytes of value + 3 bytes of ATT opcode and
attribute handle + 4 bytes of L2CAP length/channel ID. So that's a ratio
of 200/207 of effective payload, which gives us 38590 bytes/s of
effective throughput in the best case.
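
The same overhead step, self-contained:

    # Effective application throughput after ATT/L2CAP overhead.
    pdu_rate = 27 / 676e-6        # ~39940 bytes/s of raw PDU payload
    att_l2cap = 3 + 4             # ATT opcode+handle + L2CAP length/CID
    effective = pdu_rate * 200 / (200 + att_l2cap)
    print(round(effective))       # -> ~38590 bytes/s best case
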

So how can I be observing over 45 kB/s in my test? Is it possible
that the interframe spacing isn't always fixed at 150 us? Or is it that
between macOS Sierra and the nRF52/Mynewt stack the PDU isn't capped at
27 bytes (which would imply LE Data Length Extension, which I thought
wasn't supported on macOS)?
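
To see what the DLE hypothesis would mean numerically, here is the same
math parameterized over the LL payload size (hypothetical values: the
251-byte maximum is the 4.2 spec limit, and the 200/207 overhead ratio
is kept as an approximation regardless of how fragments fall):

    def max_throughput(pdu, value=200, att_l2cap=3 + 4, t_ifs=150):
        # Cycle: (pdu+10)-byte data packet + T_IFS + Empty PDU + T_IFS.
        cycle_us = (pdu + 10) * 8 + t_ifs + 10 * 8 + t_ifs
        return (pdu / (cycle_us * 1e-6)) * value / (value + att_l2cap)

    for pdu in (27, 40, 60, 251):
        print(pdu, round(max_throughput(pdu)))
    # -> 27: ~38590, 40: ~49548, 60: ~61671, 251: ~98263
    # So even a modest data length update past 27 bytes would put the
    # ceiling above the observed 45 kB/s.
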

Any clues?
