Hi Wayne,
Good question. I'll address the specifics below, but I should mention
that OS event handling has undergone some fairly major changes in recent
days. Later today, I will send a follow-up email to the dev list
detailing these changes and the rationale.
On Fri, Nov 04, 2016 at 12:57:04PM +0000, Wayne Keenan wrote:
> Hi,
>
> I had to move from the commonly found 'statically allocated' os_event
> usage pattern to something dynamic.
>
> This is what I'm using:
>
> // Post event:
>
> struct os_event* my_event = (struct os_event
> *)os_memblock_get(&bleprph_mbuf_mpool);
>
> if (!my_event) {
> // TODO: scream loudly
> return;
> }
> my_event->ev_queued = 0;
> my_event->ev_arg = arg;
> my_event->ev_type = OS_EVENT_T_PERUSER + 1; // *1
> os_eventq_put(&MY_EVENT_QUEUE, my_event);
>
>
>
> // Poll handler:
>
> case OS_EVENT_T_PERUSER+1:
> my_data = ev->ev_arg;
> os_eventq_remove(&btask_evq, ev);
> os_memblock_put(&task_mbuf_mpool, ev); //* 2
> process_data(my_data);
> break;
>
>
> This seems to work and will fail quickly (reset) if step *2 isn't
> performed, but I have some questions.
>
> 1. Is this really correct? I've not seen it in the samples or the docs.
Yes, this is the expected way to manage events when there can be several
of the same type. As in your example, dynamically allocated events are
needed when each has a unique ev_arg value.
My opinion is that the event handler is the correct place to free the
os_event (as you do in your example). It may bug some that the module
that does the allocating is different from the one that does the
freeing. One approach that makes this less objectionable is to create a
function to allocate and enqueue the event as an atomic operation. If
this function lives in the same module where event processing occurs,
there is less uncertainty about event ownership.
> 2. The samples and docs didn't make *2 obvious, e.g. who owns the memory
> pool.
One package which does dynamic event allocation is the NimBLE
controller (net/nimble/controller). An event is allocated and enqueued
when a packet is received (ble_ll_hci_cmd_rx()), and freed after
processing in the main task loop (ble_ll_task()).
> 3. Would it be better if the Task didn't care which pool the memory came
> from? i.e. inter task events.
> 4. I've assumed OS_EVENT_T_PERUSER+n is a valid thing to do; is it? Up
> to what n? Is there a documented range limit?
Yes, using OS_EVENT_T_PERUSER+n was (!) correct. Event types are one of
the things that changed recently.
> It would be nice to have the API that perhaps look something like this:
>
> // Post:
> rc = my_event(&MY_EVENT_QUEUE, OS_EVENT_T_PERUSER+1, arg ); //*1
> if (rc) {
> // TODO: scream loudly
> return;
> }
>
>
> At *1 a default application pool, registered with the OS, is used (which
> would require additional API and setup in the application main startup)
>
>
> // Poll:
> struct os_event ev;
>
> os_eventq_get(&task_evq, &ev); // *1
>
> switch (ev->ev_type) {
> case OS_EVENT_T_PERUSER+1:
> my_data = ev->ev_arg;
> process_data(my_data);
> break;
> }
>
>
> At *1 the event data is copied and the event itself is released by
> 'os_eventq_get'
I think this is a good idea that is safer than the current approach.
However, I believe there are tradeoffs involved. If an event knows how
to free itself, then it must either:
1. contain a pointer to its source pool, or
2. come from the same pool as all other events.
The first option increases the size of every event, including those
that are statically allocated. The second restricts the developer's
control: you can no longer pre-allocate a fixed number of events of a
particular type. Others can certainly chime in, but personally I'm not
sure I'm comfortable with either tradeoff. The expectation is that most
events will be statically allocated, so I think it is important to keep
that path simple and efficient. But I could be missing something; do
you see another option?
Thanks,
Chris