This is a discussion about the nimble HCI. I want to break this into two
discussions: the HCI API and newt (package?) architecture. Be warned: this
might be a bit of a long read, and I don't give a lot of background details
(otherwise it would have been a much longer read!).
HCI API:
I was thinking about this in the context of a combined host/controller HCI and
a separate HCI that is used by either a controller or host. If you look at an
HCI layer that is used by either a controller or a host, the basic set of
APIs that would need to be provided is: transmit data, handle received data,
transmit commands/events, and handle received commands/events. The HCI itself
does not really care whether there is a host or a controller “attached” to it;
the APIs it provides are the same.
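To make that concrete, a direction-agnostic API might look roughly like the
following. All of these names and signatures are hypothetical, just for
illustration; I am assuming packets get handed around as flat buffers:

    #include <stdint.h>

    /* Received-packet callback; the HCI calls this regardless of whether
     * a host or a controller registered it.
     */
    typedef void (*hci_rx_cb)(uint8_t *pkt, uint16_t len, void *arg);

    /* Transmit paths */
    int hci_tx_data(uint8_t *pkt, uint16_t len);
    int hci_tx_cmd_evt(uint8_t *pkt, uint16_t len);

    /* Receive paths: register functions for the HCI to call */
    void hci_register_rx_cbs(hci_rx_cb rx_data_cb, hci_rx_cb rx_cmd_evt_cb,
                             void *arg);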
If you look at the combined host/controller HCI API, the above APIs would
need one additional piece of information: whether it is the controller or the
host calling them, since the HCI code would need to know the proper event
queue/API to call. What this means is that we would either need a separate set
of APIs that the host and controller call, or the HCI would need to be told on
each call whether the caller is the host or the controller (see the sketch
below).
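For illustration, the “HCI gets notified” alternative might look something
like this (hypothetical names again):

    #include <stdint.h>

    /* One shared API; the caller identifies itself so the HCI can pick
     * the proper event queue/callback set.
     */
    enum hci_caller {
        HCI_CALLER_HOST,
        HCI_CALLER_CONTROLLER,
    };

    int hci_tx_data(enum hci_caller who, uint8_t *pkt, uint16_t len);
    int hci_tx_cmd_evt(enum hci_caller who, uint8_t *pkt, uint16_t len);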
I spoke about this with a colleague and we both preferred the separate API
approach. The API would look something like this:
host_tx_data
host_rx_data_callback
host_tx_cmd
host_rx_event_callback
controller_tx_data
controller_rx_data_callback
controller_tx_event
controller_rx_cmd_callback
NOTE: I used the term “callback” here to denote the fact that we would
probably want to register a function with the HCI to call for received
data/commands/events. We could instead define an API that the host or
controller would need to implement, but that would mean that the HCI would
depend on the host and controller.
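As a rough sketch (hypothetical signatures again), the separate APIs plus
registered callbacks might look like:

    #include <stdint.h>

    typedef void (*hci_rx_cb)(uint8_t *pkt, uint16_t len, void *arg);

    /* Host side */
    int host_hci_tx_data(uint8_t *pkt, uint16_t len);
    int host_hci_tx_cmd(uint8_t *pkt, uint16_t len);
    void host_hci_register_cbs(hci_rx_cb rx_data_cb, hci_rx_cb rx_evt_cb,
                               void *arg);

    /* Controller side */
    int controller_hci_tx_data(uint8_t *pkt, uint16_t len);
    int controller_hci_tx_event(uint8_t *pkt, uint16_t len);
    void controller_hci_register_cbs(hci_rx_cb rx_data_cb,
                                     hci_rx_cb rx_cmd_cb, void *arg);

With the registration approach the dependency points the right way: the host
and controller depend on the HCI, never the reverse.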
Comments?
NEWT ARCHITECTURE:
The other subject I wanted to discuss was how to architect this within the newt
framework. Currently we have the BLE stack in net/nimble. This is separated
into three basic packages: controller, host and “common nimble”. The common HCI
interface code is in the net/nimble/src directory and is part of the
net/nimble/ package. I am considering two basic ideas. The first is that the
application informs the nimble package (somehow) of which HCI it wants; the
combined host/controller HCI and the “physical” HCI interface code would all
reside in net/nimble.
The other option is to do something like this:
net/nimble/host
net/nimble/controller
net/nimble/hci_spi
net/nimble/hci_combined
net/nimble/hci_uart
The app would then specify which of these HCI packages it wanted (a sketch of
what that might look like follows below). The controller and host code would
have a required API, and the hci_xxxx package would provide that API.
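I am not sure yet what the cleanest newt mechanism for that is, but assuming
the selection is expressed through the package dependency list, the app's
package file might simply name the flavor it wants (hypothetical sketch):

    # app package file (sketch): pull in the host plus one HCI flavor
    pkg.deps:
        - net/nimble/host
        - net/nimble/hci_uart

Swapping hci_uart for hci_spi (or hci_combined) would then be a one-line
change in the app.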
One thing that we discussed, but could not decide upon due to our lack of
knowledge of the “physical” HCI specifications, is whether you would have
multiple HCI packages for the various physical interfaces or just one with a
way of specifying which interface you need. This really depends (in our minds)
on how much of the code would be common among these HCI packages. I suspect
quite a bit, but I am not sure. It might also be easier with the newt tool to
have separate HCI packages.
Anyway, starting with separate HCI packages seems like a good idea to me. Note
that these separate HCI packages would depend on hw/hal and would use the
appropriate HAL APIs. This assumes, of course, that the HALs provide the
necessary set of APIs. These HCI packages could also include other HALs or
fancy drivers if desired, and this could be specified by the app or could just
be a different hci_xxx package.
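For example, assuming a HAL UART API along the lines of hal_uart_init_cbs()
and hal_uart_config(), the init code in an hci_uart package might look roughly
like this (the uart number, baud rate and callback internals are all
placeholders):

    #include <stdint.h>
    #include "hal/hal_uart.h"

    #define HCI_UART_NUM    0   /* placeholder */

    /* HAL asks for the next byte to transmit; return -1 when the current
     * HCI packet is done.
     */
    static int
    hci_uart_tx_char(void *arg)
    {
        return -1;
    }

    /* HAL delivers a received byte; feed it to HCI packet reassembly. */
    static int
    hci_uart_rx_char(void *arg, uint8_t byte)
    {
        return 0;
    }

    int
    hci_uart_init(void)
    {
        int rc;

        rc = hal_uart_init_cbs(HCI_UART_NUM, hci_uart_tx_char, NULL,
                               hci_uart_rx_char, NULL);
        if (rc != 0) {
            return rc;
        }

        /* The spec's UART transport (H4) is 8-N-1 with RTS/CTS flow
         * control.
         */
        return hal_uart_config(HCI_UART_NUM, 115200, 8, 1,
                               HAL_UART_PARITY_NONE,
                               HAL_UART_FLOW_CTL_RTS_CTS);
    }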
Comments?