Jan Kiszka wrote:
Hi,
while preparing my reworked fast mutex patches for submission and
reviewing them once again, I realized a conceptual problem that the fast
path can introduce: so far, every pthread_mutex_lock or rt_mutex_acquire
forced the caller into primary mode if it was in
Gilles Chanteperdrix wrote:
Jan Kiszka wrote:
Hi,
while preparing my reworked fast mutex patches for submission and
reviewing them once again, I realized a conceptual problem that the fast
path can introduce: so far, every pthread_mutex_lock or rt_mutex_acquire
forced the caller into primary
Jan Kiszka wrote:
As we are already fighting hard to avoid new explicit mode-switch use
cases, and would rather get rid of old ones, I thought it would be
better to keep the existing semantics across the fast mutex changes.
Regarding those shared maps: they are per process, aren't they? But here
we need
Hi,
while preparing my reworked fast mutex patches for submission and
reviewing them once again, I realized a conceptual problem that the fast
path can introduce: so far, every pthread_mutex_lock or rt_mutex_acquire
forced the caller into primary mode if it was in secondary mode before.
Now this will
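To make the fast-path semantics concrete, here is a minimal sketch of
the scheme under discussion: the uncontended case is handled by a
userland compare-and-swap, and only contention falls back to a syscall.
All names here (fast_mutex_t, __xn_lock_syscall) are made up for
illustration, not the actual Xenomai interfaces:

#include <stdatomic.h>

typedef struct {
    atomic_long owner;  /* 0 == unlocked, else an opaque owner handle */
} fast_mutex_t;

/* Hypothetical slow path: a blocking syscall; this is where the
 * kernel would traditionally migrate the caller to primary mode. */
extern int __xn_lock_syscall(fast_mutex_t *m, long self);

static int fast_mutex_lock(fast_mutex_t *m, long self)
{
    long expected = 0;

    /* Fast path: an uncontended acquisition stays entirely in
     * userland - no syscall, hence no forced switch to primary
     * mode for a caller running in secondary mode. */
    if (atomic_compare_exchange_strong(&m->owner, &expected, self))
        return 0;

    /* Slow path: contended, enter the kernel. */
    return __xn_lock_syscall(m, self);
}

The conceptual problem follows directly from the fast path: a
secondary-mode caller that wins the compare-and-swap keeps running in
secondary mode, whereas the old all-syscall path would have migrated it.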
Gilles Chanteperdrix wrote:
Hi,
I am working on using SIGWINCH to trigger priority changes in
user-space, and I am afraid it will never really work:
- Xenomai distinguishes between the base priority of a thread and its
current priority. But pthread_getschedparam should return the base
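For readers following along, a minimal POSIX sketch of the
base-vs-current distinction (plain pthread calls, nothing
Xenomai-specific assumed):

#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t mutex;
    struct sched_param param;
    int policy;

    /* A priority-inheritance mutex: while a low-priority owner
     * holds it, a blocked high-priority waiter boosts the owner's
     * current (effective) priority. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&mutex, &attr);

    pthread_mutex_lock(&mutex);
    /* Even if we were boosted right now, pthread_getschedparam is
     * expected to report the base priority set via
     * pthread_setschedparam, not the boosted one. */
    pthread_getschedparam(pthread_self(), &policy, &param);
    printf("policy=%d base priority=%d\n", policy, param.sched_priority);
    pthread_mutex_unlock(&mutex);

    pthread_mutex_destroy(&mutex);
    pthread_mutexattr_destroy(&attr);
    return 0;
}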
Gilles Chanteperdrix wrote:
Philippe Gerum wrote:
(*) Btw, if you happen to export the current thread state via the
shared heap to userland for the fast mutexes, I guess that we could use
it as well to implement a smarter
pthread_setschedparam/rt_task_set_priority, doing something
Jan Kiszka wrote:
Sigh, this is not my day: Can anyone confirm that we are leaking memory
with current SVN head? I'm seeing a constant increase of the size-1024
slab on an x86_64 box when running arbitrary Xenomai apps in a loop.
I lose 1k per invocation, and this already from applying -lrtdm to
Jan Kiszka wrote:
Sigh, this is not my day: Can anyone confirm that we are leaking memory
with current SVN head? I'm seeing a constant increase of the size-1024
slab on an x86_64 box when running arbitrary Xenomai apps in a loop.
[ Besides that, I have an occasional deadlock in xnpipe_wakeup_proc
Jan Kiszka wrote:
Philippe Gerum wrote:
Jan Kiszka wrote:
Sigh, this is not my day: Can anyone confirm that we are leaking memory
with current SVN head? I'm seeing a constant increase of the size-1024
slab on an x86_64 box when running arbitrary Xenomai apps in a loop.
[ Besides that, I have an
Gilles Chanteperdrix wrote:
Jan Kiszka wrote:
As we are already fighting hard to avoid new explicit mode-switch use
cases, and would rather get rid of old ones, I thought it would be
better to keep the existing semantics across the fast mutex changes.
Regarding those shared maps: they are per process,
Sigh, this is not my day: Can anyone confirm that we are leaking memory
with current SVN head? I'm seeing a constant increase of the size-1024
slab on an x86_64 box when running arbitrary Xenomai apps in a loop.
[ Besides that, I have an occasional deadlock in xnpipe_wakeup_proc due
to an inconsistent
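Such slab growth is easy to sample from a test loop; a small sketch
that pulls the size-1024 line out of /proc/slabinfo between application
runs (reading the file usually requires root):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/slabinfo", "r");

    if (!f) {
        perror("/proc/slabinfo");
        return 1;
    }

    /* Print the active/total object counts for the size-1024
     * cache; a monotonic increase across runs hints at a leak. */
    while (fgets(line, sizeof(line), f))
        if (!strncmp(line, "size-1024 ", 10))
            fputs(line, stdout);

    fclose(f);
    return 0;
}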
Philippe Gerum wrote:
(*) Btw, if you happen to export the current thread state via the
shared heap to userland for the fast mutexes, I guess that we could use
it as well to implement a smarter
pthread_setschedparam/rt_task_set_priority, doing something like:
if
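The quoted message breaks off at "if"; presumably the test is against
the exported state word. A sketch under heavy assumptions - every
identifier here (xeno_shared_state, __xn_setsched_syscall, the XNRELAX
value) is invented for illustration and does not claim to match the
nucleus:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>

#define XNRELAX 0x100  /* "thread is relaxed" bit; value illustrative */

/* Hypothetical: locate this thread's state word in the shared heap. */
extern atomic_uint *xeno_shared_state(pthread_t tid);
/* Hypothetical: the syscall that renices a shadow thread. */
extern int __xn_setsched_syscall(pthread_t tid, int prio);

static int smart_setschedparam(pthread_t tid, int prio)
{
    unsigned int state = atomic_load(xeno_shared_state(tid));

    /* If the target is relaxed anyway, a plain Linux scheduling
     * call does the job - nobody has to be dragged through a
     * Xenomai syscall and the mode switch that comes with it. */
    if (state & XNRELAX) {
        struct sched_param p = { .sched_priority = prio };
        return pthread_setschedparam(tid, SCHED_FIFO, &p);
    }

    return __xn_setsched_syscall(tid, prio);
}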
Jan Kiszka wrote:
Jan Kiszka wrote:
Philippe Gerum wrote:
Jan Kiszka wrote:
Sigh, this is not my day: Can anyone confirm that we are leaking memory
with current SVN head? I'm seeing a constant increase of the size-1024
slab on an x86_64 box when running arbitrary Xenomai apps in a loop.
[
Jan Kiszka wrote:
Gilles Chanteperdrix wrote:
Jan Kiszka wrote:
As we are already fighting hard to avoid new explicit mode-switch use
cases, and would rather get rid of old ones, I thought it would be
better to keep the existing semantics across the fast mutex changes.
Regarding those shared maps: they
The state tracking of Linux tasks queued on xnpipe events was fairly
broken. An easy way to corrupt some xnpipe_state_t object was to kill a
Linux task blocked on opening a /dev/rtpX device (this left the state
object queued in xnpipe_sleep, and hell broke loose on the next
queuing). Another problem
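The kill-while-blocked trigger described here is simple to exercise; a
minimal sketch of such a reproducer (assuming a /dev/rtp0 minor whose
kernel side never connects, so the open blocks):

#include <fcntl.h>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* Blocks waiting for the kernel-side endpoint, which
         * never shows up in this test. */
        open("/dev/rtp0", O_RDWR);
        _exit(0);
    }

    sleep(1);             /* let the child block inside open() */
    kill(pid, SIGKILL);   /* kill it while queued on the pipe */
    waitpid(pid, NULL, 0);

    /* With the broken state tracking, the xnpipe_state_t stayed
     * queued after the task died; the next queuing operation then
     * walked a corrupted list. */
    return 0;
}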
Hi,
xnpipe_setup and the callbacks it sets are dead code for in-tree Xenomai
users. Going back to, well, fusion-0.6.9 (the oldest version we have in
LXR), it was already dead meat back then. Can we remove it?
Jan
Jan Kiszka wrote:
Hi,
xnpipe_setup and the callbacks it sets are dead code for in-tree Xenomai
users. Going back to, well, fusion-0.6.9 (the oldest version we have in
LXR), it was already dead meat back then. Can we remove it?
In case we can:
---
include/nucleus/pipe.h | 5
Jan Kiszka wrote:
Jan Kiszka wrote:
Hi,
xnpipe_setup and the callbacks it sets are dead code for in-tree Xenomai
users. Going back to, well, fusion-0.6.9 (the oldest version we have in
LXR), it was already dead meat back then. Can we remove it?
In case we can:
Sorry, a better
Hi,
looking into the xeno_in_primary_mode thing, I wondered how to make the
thread state quickly retrievable. Going via pthread_getspecific, as we
do for xeno_get_current, appears logical - but not optimal. Though
getspecific is optimized for speed, it still costs a function call, a
few sanity checks,
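One obvious candidate, sketched below: cache the pointer in a __thread
(TLS) variable so retrieval becomes a single thread-pointer-relative
load, keeping a pthread key only for its exit destructor. The names
(struct xn_thread_state and the key setup) are assumptions, not
existing Xenomai code:

#include <pthread.h>

struct xn_thread_state;   /* whatever the skin exports per thread */

/* Fast accessor storage: on common ABIs a __thread access compiles
 * to one %fs/%gs-relative load - no call, no key-range checks. */
static __thread struct xn_thread_state *xeno_current_state;

/* Key created elsewhere with pthread_key_create(); kept solely so
 * its destructor fires on thread exit. Lookups never go through it. */
static pthread_key_t xeno_state_key;

static inline struct xn_thread_state *xeno_get_state(void)
{
    return xeno_current_state;
}

static void xeno_set_state(struct xn_thread_state *state)
{
    xeno_current_state = state;
    pthread_setspecific(xeno_state_key, state);
}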