Jan Kiszka wrote:
Philippe Gerum wrote:
Jan Kiszka wrote:
I happened to stumble over this comment. It made me curious,
especially as it is not totally correct (the loop is executed in IRQ-off
context, thus it *is* time-critical).
Critical should be understood here in the sense that IRQs would be off while
the loop workload is high, which is fortunately not the case; hence the comment.
Sure, there is not much to do inside the loop. But it does not scale
very well in case a significant number of elements are registered - and
they are scattered over a larger memory area, so that cache misses strike us.
Compared to what it costs to actually call Linux to release the system
memory which is an operation the syscall will do anyway, those cache
misses account for basically nothing.
It's a bit theoretical, but I also think we can easily resolve it by
using Linux locks as soon as we can sanely sleep inside
xnheap_init/destroy_shared and xnheap_ioctl.
While thinking about the possibility to convert the hard IRQ lock
protection of kheapq into some Linux mutex or whatever, I analysed the
contexts the users of this queue (__validate_heap_addr/xnheap_ioctl,
xnheap_init_shared, xnheap_destroy_shared) execute in. Basically, it is
Linux/secondary mode, but there are unfortunate exceptions:
rt_heap_delete() takes nklock, then calls xnheap_destroy_shared().
The latter will call __unreserve_and_free_heap() which calls Linux
functions like vfree() or kfree() -- I would say: not good! At
least on SMP we could easily get trapped by non-deterministic waiting on
Linux spinlocks inside those functions.
The same applies to rt_queue_delete().
Good spot. Better not to call the heap deletion routines under nklock
protection in the first place. The committed fix does just that for both
rt_heap_delete and rt_queue_delete.
Ok, we no longer have IRQs locked over vfree/kfree, but task scheduling
still suffers from potential delays. Wouldn't it be better to defer
such operations to an asynchronous Linux call?
Do we really want heap creation/deletion to be short time bounded
operations at the expense of added complexity?