Re: [Xenomai-core] [ANNOUNCE] Xenomai and I-pipe patch series, LTTng bits

2007-06-05 Thread Philippe Gerum
On Tue, 2007-06-05 at 11:14 +0200, Jan Kiszka wrote:
> Philippe Gerum wrote:
> ...
> >>  This
> >> is a largely orthogonal issue, so please let us come back to my original
> >> point.
> >>
> > 
> > The original proposal suggests an ad hoc solution for a specific class
> > of tracing needs (code flow analysis with low temporal invasiveness)
> > (*).
> > 
> > I suggest that we _also_ take the time to think ahead, about a common
> > infrastructure which would host the basic, well-supported tools people
> > need to debug Xenomai applications. It is always possible to work around
> > temporal invasiveness by simulating external input, there are a few
> > techniques that work pretty well for that purpose, people working with a
> > co-design approach are doing that all the time. But before this, it
> > would be great that the code one relies on does not include silly or
> > less silly basic coding issues. Valgrind is such a framework defining a
> > possible infrastructure.
> > 
> > To sum up, I'm going to follow your work on usystrace to see where it
> > leads us, even if I'm not happy with its potential impact on the code.
> > Whether it gets eventually merged or not really depends on that aspect.
> 
> I'm also in favour of a less invasive approach as sketched earlier.
> 
> > At the same time, or at least in a reasonable future, we should REALLY
> > think about making Xenomai Valgrind-compatible, so that we could cover
> > the rest of the needs for debug tools. This is something I might pick
> > when my TODO list shortens, if nobody did it before.
> 
> Well, nothing is impossible given unlimited resources. I'm just slightly
> sceptical about the impact of some Xeno-Valgrind on the target and the
> related side-effects. Ugly things also depend on timing, and simulating
> the environment accurately is often a full project of its own. Hmm, ok,
> we could help users a bit by collecting and later on replaying events
> and I/O data at standardised interfaces (IPC mechanisms, standard RTDM
> devices, etc.). We could then smartly combine Valgrind with the
> simulator to control timing precisely. Still _a lot_ of work...
> 

Well, Xenomai is the result of a much larger work since July 2001, so
there is hope.

> > 
> > (*) You could avoid passing the function name in the systrace calls, by
> >> relying on the value of __FUNCTION__, with a small hack to trim the
> > __wrap_ prefix when needed. Making tracepoints less hairy would ease my
> > pain reading this stuff.
> 
> OK, but let's not spend brain cycles on this until we know if and how we
> can separate the entry/exit tracing from the traced service. If we could
> leave existing lib code untouched, I would be happier as well.
> 

It does matter because it reduces visual clutter, and that's a
prerequisite for merging; that's why I'm suggesting it.
We already have an automated instrumentation tool for the simulator, but
I'm not sure automatically inserted tracepoints would be relevant in all
cases.

> Jan
> 
-- 
Philippe.



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] [ANNOUNCE] Xenomai and I-pipe patch series, LTTng bits

2007-06-05 Thread Jan Kiszka
Philippe Gerum wrote:
...
>>  This
> >> is a largely orthogonal issue, so please let us come back to my original
>> point.
>>
> 
> The original proposal suggests an ad hoc solution for a specific class
> of tracing needs (code flow analysis with low temporal invasiveness)
> (*).
> 
> I suggest that we _also_ take the time to think ahead, about a common
> infrastructure which would host the basic, well-supported tools people
> need to debug Xenomai applications. It is always possible to work around
> temporal invasiveness by simulating external input, there are a few
> techniques that work pretty well for that purpose, people working with a
> co-design approach are doing that all the time. But before this, it
> would be great that the code one relies on does not include silly or
> less silly basic coding issues. Valgrind is such a framework defining a
> possible infrastructure.
> 
> To sum up, I'm going to follow your work on usystrace to see where it
> leads us, even if I'm not happy with its potential impact on the code.
> Whether it gets eventually merged or not really depends on that aspect.

I'm also in favour of a less invasive approach as sketched earlier.

> At the same time, or at least in a reasonable future, we should REALLY
> think about making Xenomai Valgrind-compatible, so that we could cover
> the rest of the needs for debug tools. This is something I might pick
> when my TODO list shortens, if nobody did it before.

Well, nothing is impossible given unlimited resources. I'm just slightly
sceptical about the impact of some Xeno-Valgrind on the target and the
related side-effects. Ugly things also depend on timing, and simulating
the environment accurately is often a full project of its own. Hmm, ok,
we could help users a bit by collecting and later on replaying events
and I/O data at standardised interfaces (IPC mechanisms, standard RTDM
devices, etc.). We could then smartly combine Valgrind with the
simulator to control timing precisely. Still _a lot_ of work...

> 
> (*) You could avoid passing the function name in the systrace calls, by
> relying on the value of __FUNCTION__, with a small hack to trim the
> __wrap_ prefix when needed. Making tracepoints less hairy would ease my
> pain reading this stuff.

OK, but let's not spend brain cycles on this until we know if and how we
can separate the entry/exit tracing from the traced service. If we could
leave existing lib code untouched, I would be happier as well.

Jan





Re: [Xenomai-core] [ANNOUNCE] Xenomai and I-pipe patch series, LTTng bits

2007-06-05 Thread Philippe Gerum
On Fri, 2007-06-01 at 08:55 +0200, Jan Kiszka wrote:
> > 
> > panic. See the code flow from xntimer_start_*periodic for instance, up
> > to the point where the timer is enqueued.
> 
> With wrong I meant valid sched, but for an incorrect CPU.
> 

As explained earlier, nothing really bad would happen, except a
sub-optimal setup.

> > 
> >>  Anyway, if you say it does matter, my instrumentation would
> >> have caught the first bug in Xenomai! That xnpod_raw_current_sched() is
> >> there because xntimer_init is used in non-atomic contexts (I got
> >> warnings from DEBUG_PREEMPT).
> > 
> > Thread migration in Xenomai only happens upon request from the migrating
> > thread itself, so it becomes an issue if the caller belongs to the Linux
> > domain (i.e. preemption in init_module, the rest is not much
> 
> ...or preemption during shadow thread creation. But this requires that
> the Linux thread to be shadowed has no clearly defined single-CPU
> affinity and might happen to be pushed around by a load balancer during
> init. This sounds like a fatal user error /wrt RT.
> 

The point is that, in any case, this would make no difference from a
shadow with a loose affinity being moved to another CPU while running,
either by a call to sched_setscheduler() or as a result of load
balancing. xnshadow_map per se brings nothing more to the picture with
respect to this issue, except that at some point it really pins the
current task on the current CPU. Per-Xenomai-thread timers have been
initialized before xnshadow_map is entered anyway.

> > 
>  librtutils.patch
> 
>  My original librtprint patch. I now renamed the library to
>  librtutils to express that more stuff beyond rt_print may find its
>  home here in the future. Hopefully acceptable now.
> 
> >>> Will merge, but... since I'm something of a pain, I'm still pondering
> >>> whether rtutils is our best shot for picking up a name here. We already
> >>> have src/utils, so we would add src/librtutils as yet another set of
> >>> utilities; I'm not sure both belong to the same class of helpers.
> >> /me hesitated as well and put the "lib" prefix into the directory name.
> >> So I'm all ears for a better name!
> >>
> > 
> > TssTssTss... _You_ got the brain, _I_ am the cumbersome one.
> 
> But my brain needs some pointers what _you_ would be happy with. Is your
> current problem only related to the clash librtutils vs. utils, or don't
> you like the term "rtutils" at all? If it's the former, would moving
> "utils" to "tools" raise or lower your pain?
> 

I don't like utils, and beyond that, it clashes with other "utils". I
will try to suggest something a bit less generic than utils/tools.

> > 
>  rtsystrace-v2.patch
> 
>  Updated proposal to add rt_print-based Xenomai syscall tracing.
>  Still in early stage, and I'm lacking feedback on this approach, if
>  it makes sense to pursue it.
> 
> >>> Gasp. I really don't like the idea of having tons of explicit trace
> >>> points spread all over the code, this just makes the task of reading it
> >>> a total nightmare. There are some user-space tracing infrastructures
> >>> already, starting with LTTng, or maybe older ones like Ctrace, with or
> >>> without automatic instrumentation of the source files.
> >>> We may want to get some facts about those before reinventing a square
> >>> wheel.
> >> Let me elaborate about the motivation a bit more: My idea is to have an
> >> rt-safe replacement for strace. The latter is based on ptrace, requires
> >> evil signals, and is far to invasive for tracing high-speed RT
> >> applications IMHO (even if we had that RT-safe). But with low-impact,
> >> per-process rt_printf, we have a pure user space mechanism at hand that
> >> you can quickly set up for a single RT application (using your private
> >> libs e.g.). No need to have an LTTng-prepared kernel, no need to trace
> >> the whole system if you only want to know where your damn application
> >> now hangs again, no need to disturb other RT users with your debugging
> >> stuff.
> >>
> >> But I agree, the source code impact on the skin libs is far from being
> >> negligible. Maybe we can find a smarter solution, something that just
> >> hooks into the library calls (reminds me of --wrap...) to capture entry
> >> parameters and then the returned values. This will still require some
> >> work (for each instrumented library function), but we might be able to
> >> do this out-of-line.
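The --wrap idea alluded to above could be sketched roughly as follows. rt_service is a made-up placeholder for a skin call, and the "real" symbol is stubbed locally so the sketch builds without the linker flag:

```c
#include <stdio.h>

/* Hedged sketch of out-of-line entry/exit tracing with the GNU linker's
 * --wrap option. Linking a client with -Wl,--wrap=rt_service redirects
 * every call to rt_service() into __wrap_rt_service(), which can log
 * arguments and the return value around __real_rt_service(), i.e. the
 * original function. rt_service is a hypothetical name; the "real"
 * symbol is stubbed here so this builds without the flag. */

static int __real_rt_service(int arg)  /* stub for the real symbol */
{
    return arg * 2;
}

int __wrap_rt_service(int arg)
{
    fprintf(stderr, "enter rt_service(arg=%d)\n", arg);
    int ret = __real_rt_service(arg);
    fprintf(stderr, "leave rt_service -> %d\n", ret);
    return ret;
}
```

The point of the approach is that the traced library itself stays untouched: all instrumentation lives in the wrapper, selected at link time.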
> > 
> > At some point, I think we should really put some brain cycles into
> > looking into Valgrind, and evaluate the actual cost of adapting it to
> > our needs (for both the existing memory tracker skin, and maybe a trace
> > skin too).
> 
> Valgrind is not a replacement for strace, even if it would work for
> multi-threaded strictly timed RT apps (which I doubt; I _did_ look at it
> already, understood that it requires a complex and temporally invasive VM,

I basically don't care about temporal invasiveness

Re: [Xenomai-core] [ANNOUNCE] Xenomai and I-pipe patch series, LTTng bits

2007-06-02 Thread ROSSIER Daniel

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:xenomai-core-
> [EMAIL PROTECTED] On Behalf Of Jan Kiszka
> Sent: vendredi 1 juin 2007 08:56
> To: [EMAIL PROTECTED]
> Cc: xenomai-core
> Subject: Re: [Xenomai-core] [ANNOUNCE] Xenomai and I-pipe patch series,
> LTTng bits
> 
> Philippe Gerum wrote:
> >> ...
> >>>> xeno-kill-ipipe_processor_id.patch
> >>>> Refreshed cleanup patch to remove ipipe_processor_id completely.
> >>>>
> >>> The logic of this patch is flawed. rthal_processor_id() already
> >>> implies rawness wrt how we fetch the current CPU number, so
> >>> rthal_raw_processor_id() is redundant, and xnarch_raw_current_cpu()
> >>> too
> >> It isn't when you take a look at the corresponding I-pipe patch as
> >> well:
> >> This enables debug instrumentation of smp_processor_id to validate
> >> also the atomicity of Xenomai core code /wrt cpuid handling!
> >>
> >
> > Nope, not for archs which do _not_ have a stack-insensitive
> > smp_processor_id() and get_current() yet (if it's even possible for
> > all of them, since the in-kernel ABI would likely have to be amended
> > for reserving the PDA register). Some archs even have
> > stack-insensitive get_current(), but stack-sensitive
> > current_thread_info(), so we might be toast one way or another.
> >
> > I'm not arguing that the I-pipe changes you suggest have no value in
> > the
> > 2.6.20/x86* contexts for instance, but since they also affect the
> > generic code area, they can't work for all our supported
> > configurations right now. For instance, on an I-pipe setup with a
> > non-heading Xenomai domain and DEBUG_PREEMPT enabled, a kernel-based
> > Xenomai thread calling
> > smp_processor_id() from a stalled context (which would be legal) would
> > _require_ preemption control macros and get_current() to be
> > stack-insensitive. At that point, we would have some problem
> > propagating these assumptions to the generic Xenomai core, specifically
> > for the Blackfin and ARM variants, which do not have stack-insensitive
> > accessors here.
> >
> > Also, my point is that we don't want to change the usability of
> > rthal_processor_id(). So either it does not change with this patch,
> > and the raw* forms are useless, or it changes, and this is a no-go.
> > Keep in mind that a number of external modules may rely on this aspect
> > already; so either we kill rthal_processor_id() completely to force a
> > proper update in depending code, or we keep the basic assumptions
> > about the usability context unchanged, but we just cannot make the
> > situation more fragile for the out of tree code by changing such a
> > fundamental assumption (i.e. that rthal_processor_id() is
> > context-agnostic).
> 
> Ok, I see it's too early. Let's put this back into the drawer until we
> might solve the stack-sensitiveness issue for all archs (as we recently
> started to discuss). I will rework the I-pipe patches so that we can
> make use of some parts of them already now.
> 
> 
> >
> >>> for the same reason. Additionally, xnpod_current_sched() also
> >>> implies to use a raw accessor to the CPU number, so
> >>> xnpod_current_raw_sched() is pointless. For this reason, we must be
> >>> extra-careful about replacing
> >>> rthal_processor_id() with smp_processor_id(), because 1) we may want
> >>> to bypass DEBUG_PREEMPT checks sometimes, 2) not all supported archs
> >>> do have a stack-insensitive smp_processor_id() implementation yet
> >>> (blackfin, ARM, and Linux/x86 2.4).
> >> For 1) see ipipe again, but 2), granted, is a problem. Likely the cut
> >> is too radical and we must leave the infrastructure in place. Still,
> >> making use of DEBUG_PREEMPT also for Xenomai is IMHO worth some
> >> changes that will then at least have effect on a subset of archs.
> >
> > It becomes very useful precisely because ipipe_processor_id() gets
> > killed.
> >
> >>> To answer the inlined question about the need to initialize the
> >>> ->sched field when setting up a timer, I would say yes, we have to
> >>> set this up too. It might seem a bit silly, but nothing would
> >>> prevent a migration call to occur for a just initialized timer, in
> >>> which case, we would need the sched pointer to reflect the current
> >>> situation (i.e. at init time).
> >> But what would be the effect of a wrong sched until the timer is
> >> started?

Re: [Xenomai-core] [ANNOUNCE] Xenomai and I-pipe patch series, LTTng bits

2007-05-31 Thread Jan Kiszka
Philippe Gerum wrote:
>> ...
 xeno-kill-ipipe_processor_id.patch
 Refreshed cleanup patch to remove ipipe_processor_id completely.

>>> The logic of this patch is flawed. rthal_processor_id() already implies
>>> rawness wrt how we fetch the current CPU number, so
>>> rthal_raw_processor_id() is redundant, and xnarch_raw_current_cpu() too
>> It isn't when you take a look at the corresponding I-pipe patch as well:
>> This enables debug instrumentation of smp_processor_id to validate also
>> the atomicity of Xenomai core code /wrt cpuid handling!
>>
> 
> Nope, not for archs which do _not_ have a stack-insensitive
> smp_processor_id() and get_current() yet (if it's even possible for all
> of them, since the in-kernel ABI would likely have to be amended for
> reserving the PDA register). Some archs even have stack-insensitive
> get_current(), but stack-sensitive current_thread_info(), so we might be
> toast one way or another.
> 
> I'm not arguing that the I-pipe changes you suggest have no value in the
> 2.6.20/x86* contexts for instance, but since they also affect the generic
> code area, they can't work for all our supported configurations right
> now. For instance, on an I-pipe setup with a non-heading Xenomai domain
> and DEBUG_PREEMPT enabled, a kernel-based Xenomai thread calling
> smp_processor_id() from a stalled context (which would be legal) would
> _require_ preemption control macros and get_current() to be
> stack-insensitive. At that point, we would have some problem propagating
> these assumptions to the generic Xenomai core, specifically for the
> Blackfin and ARM variants, which do not have stack-insensitive accessors
> here.
> 
> Also, my point is that we don't want to change the usability of
> rthal_processor_id(). So either it does not change with this patch, and
> the raw* forms are useless, or it changes, and this is a no-go. Keep in
> mind that a number of external modules may rely on this aspect already;
> so either we kill rthal_processor_id() completely to force a proper
> update in depending code, or we keep the basic assumptions about the
> usability context unchanged, but we just cannot make the situation more
> fragile for the out of tree code by changing such a fundamental
> assumption (i.e. that rthal_processor_id() is context-agnostic).

Ok, I see it's too early. Let's put this back into the drawer until we
might solve the stack-sensitiveness issue for all archs (as we recently
started to discuss). I will rework the I-pipe patches so that we can
make use of some parts of them already now.

> 
>>> for the same reason. Additionally, xnpod_current_sched() also implies to
>>> use a raw accessor to the CPU number, so xnpod_current_raw_sched() is
>>> pointless. For this reason, we must be extra-careful about replacing
>>> rthal_processor_id() with smp_processor_id(), because 1) we may want to
>>> bypass DEBUG_PREEMPT checks sometimes, 2) not all supported archs do
>>> have a stack-insensitive smp_processor_id() implementation yet
>>> (blackfin, ARM, and Linux/x86 2.4).
>> For 1) see ipipe again, but 2), granted, is a problem. Likely the cut is
>> too radical and we must leave the infrastructure in place. Still, making
>> use of DEBUG_PREEMPT also for Xenomai is IMHO worth some changes that
>> will then at least have effect on a subset of archs.
> 
> It becomes very useful precisely because ipipe_processor_id() gets
> killed.
> 
>>> To answer the inlined question about the need to initialize the ->sched
>>> field when setting up a timer, I would say yes, we have to set this up
>>> too. It might seem a bit silly, but nothing would prevent a migration
>>> call to occur for a just initialized timer, in which case, we would need
>>> the sched pointer to reflect the current situation (i.e. at init time).
>> But what would be the effect of a wrong sched until the timer is
>> started?
> 
> panic. See the code flow from xntimer_start_*periodic for instance, up
> to the point where the timer is enqueued.

With wrong I meant valid sched, but for an incorrect CPU.

> 
>>  Anyway, if you say it does matter, my instrumentation would
>> have caught the first bug in Xenomai! That xnpod_raw_current_sched() is
>> there because xntimer_init is used in non-atomic contexts (I got
>> warnings from DEBUG_PREEMPT).
> 
> Thread migration in Xenomai only happens upon request from the migrating
> thread itself, so it becomes an issue if the caller belongs to the Linux
> domain (i.e. preemption in init_module, the rest is not much

...or preemption during shadow thread creation. But this requires that
the Linux thread to be shadowed has no clearly defined single-CPU
affinity and might happen to be pushed around by a load balancer during
init. This sounds like a fatal user error /wrt RT.

> preemptible), in which case the timer might be queued on a different CPU
> than the thread's. But since we don't migrate timers upon
> sched_setaffinity() requests already, but only upon
> xnpod_migrate_thread()

Re: [Xenomai-core] [ANNOUNCE] Xenomai and I-pipe patch series, LTTng bits

2007-05-31 Thread Philippe Gerum
On Thu, 2007-05-31 at 22:39 +0200, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > On Mon, 2007-05-28 at 13:31 +0200, Jan Kiszka wrote:
> >> Instead of posting yet another stream of individual patches from my
> >> queue, I decided to put them all into a series and upload them. See
> >>
> >>http://www.rts.uni-hannover.de/rtaddon/patches
> >>
> >> for my latest I-pipe, Xenomai, and LTTng enhancements and fixes. Here is
> >> a short overview of the content:
> >>
> > 
> > Note to users: Those patches are all for trunk/v2.4, so don't search for
> > them into the v2.3.x stable branch.
> 
> Of course, I wouldn't dare to shake the stable tree. :)
> 
> ...
> >> fast-tsc-to-ns.patch
> >>
> >> Integration of my scaled-math-based xnarch_tsc_to_ns service for
> >> i386 at least. xnarch_ns_to_tsc remains untouched in order to keep
> >> conversion errors small. clocktest reports no significant precision
> >> regression here, and both code size and execution speed improved.
> >>
> > 
> > Postponed. I need some feedback from Gilles who wrote the generic
> > support for the funky arithmetics we use.
> 
> Yeah, looking forward. I would also be interested to hear if/what this
> approach may buy us on non-x86 architectures.
> 
> ...
> >> xeno-kill-ipipe_processor_id.patch
> >> Refreshed cleanup patch to remove ipipe_processor_id completely.
> >>
> > 
> > The logic of this patch is flawed. rthal_processor_id() already implies
> > rawness wrt how we fetch the current CPU number, so
> > rthal_raw_processor_id() is redundant, and xnarch_raw_current_cpu() too
> 
> It isn't when you take a look at the corresponding I-pipe patch as well:
> This enables debug instrumentation of smp_processor_id to validate also
> the atomicity of Xenomai core code /wrt cpuid handling!
> 

Nope, not for archs which do _not_ have a stack-insensitive
smp_processor_id() and get_current() yet (if it's even possible for all
of them, since the in-kernel ABI would likely have to be amended for
reserving the PDA register). Some archs even have stack-insensitive
get_current(), but stack-sensitive current_thread_info(), so we might be
toast one way or another.

I'm not arguing that the I-pipe changes you suggest have no value in the
2.6.20/x86* contexts for instance, but since they also affect the generic
code area, they can't work for all our supported configurations right
now. For instance, on an I-pipe setup with a non-heading Xenomai domain
and DEBUG_PREEMPT enabled, a kernel-based Xenomai thread calling
smp_processor_id() from a stalled context (which would be legal) would
_require_ preemption control macros and get_current() to be
stack-insensitive. At that point, we would have some problem propagating
these assumptions to the generic Xenomai core, specifically for the
Blackfin and ARM variants, which do not have stack-insensitive accessors
here.

Also, my point is that we don't want to change the usability of
rthal_processor_id(). So either it does not change with this patch, and
the raw* forms are useless, or it changes, and this is a no-go. Keep in
mind that a number of external modules may rely on this aspect already;
so either we kill rthal_processor_id() completely to force a proper
update in depending code, or we keep the basic assumptions about the
usability context unchanged, but we just cannot make the situation more
fragile for the out of tree code by changing such a fundamental
assumption (i.e. that rthal_processor_id() is context-agnostic).

> > for the same reason. Additionally, xnpod_current_sched() also implies to
> > use a raw accessor to the CPU number, so xnpod_current_raw_sched() is
> > pointless. For this reason, we must be extra-careful about replacing
> > rthal_processor_id() with smp_processor_id(), because 1) we may want to
> > bypass DEBUG_PREEMPT checks sometimes, 2) not all supported archs do
> > have a stack-insensitive smp_processor_id() implementation yet
> > (blackfin, ARM, and Linux/x86 2.4).
> 
> For 1) see ipipe again, but 2), granted, is a problem. Likely the cut is
> too radical and we must leave the infrastructure in place. Still, making
> use of DEBUG_PREEMPT also for Xenomai is IMHO worth some changes that
> will then at least have effect on a subset of archs.

It becomes very useful precisely because ipipe_processor_id() gets
killed.

> 
> > 
> > To answer the inlined question about the need to initialize the ->sched
> > field when setting up a timer, I would say yes, we have to set this up
> > too. It might seem a bit silly, but nothing would prevent a migration
> > call to occur for a just initialized timer, in which case, we would need
> > the sched pointer to reflect the current situation (i.e. at init time).
> 
> But what would be the effect of a wrong sched until the timer is
> started?

panic. See the code flow from xntimer_start_*periodic for instance, up
to the point where the timer is enqueued.

>  Anyway, if you say it does matter, my instrumentation would
> have caught the first bug in Xenomai!


Re: [Xenomai-core] [ANNOUNCE] Xenomai and I-pipe patch series, LTTng bits

2007-05-31 Thread Jan Kiszka
Philippe Gerum wrote:
> On Mon, 2007-05-28 at 13:31 +0200, Jan Kiszka wrote:
>> Instead of posting yet another stream of individual patches from my
>> queue, I decided to put them all into a series and upload them. See
>>
>>  http://www.rts.uni-hannover.de/rtaddon/patches
>>
>> for my latest I-pipe, Xenomai, and LTTng enhancements and fixes. Here is
>> a short overview of the content:
>>
> 
> Note to users: Those patches are all for trunk/v2.4, so don't search for
> them into the v2.3.x stable branch.

Of course, I wouldn't dare to shake the stable tree. :)

...
>> fast-tsc-to-ns.patch
>>
>> Integration of my scaled-math-based xnarch_tsc_to_ns service for
>> i386 at least. xnarch_ns_to_tsc remains untouched in order to keep
>> conversion errors small. clocktest reports no significant precision
>> regression here, and both code size and execution speed improved.
>>
> 
> Postponed. I need some feedback from Gilles who wrote the generic
> support for the funky arithmetics we use.

Yeah, looking forward. I would also be interested to hear if/what this
approach may buy us on non-x86 architectures.
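As a rough illustration of the scaled-math approach under discussion (the constants and names below are assumptions for the sketch, not the actual fast-tsc-to-ns.patch code):

```c
#include <stdint.h>

/* Hedged sketch of scaled-math tsc-to-ns conversion: precompute a
 * multiplier and shift for a given TSC frequency so the hot path is one
 * 64-bit multiply plus a shift instead of a division. */
#define TSC_SHIFT 24  /* trade-off: multiplier precision vs. overflow headroom */

static uint32_t tsc_mult;

static void tsc_scale_init(uint64_t tsc_freq_hz)
{
    /* mult = (10^9 << shift) / freq; fits comfortably in 32 bits for
     * frequencies in the hundreds-of-MHz to GHz range. */
    tsc_mult = (uint32_t)((1000000000ULL << TSC_SHIFT) / tsc_freq_hz);
}

static uint64_t tsc_to_ns(uint64_t tsc)
{
    /* Overflows for very large deltas (~2^40 ticks at 1 GHz), so the
     * conversion is meant for bounded intervals. */
    return (tsc * tsc_mult) >> TSC_SHIFT;
}
```

The precision/overflow trade-off sits in the shift: a larger shift makes the multiplier more accurate but overflows sooner on large TSC deltas, which is presumably where the reported conversion-error concerns come in.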

...
>> xeno-kill-ipipe_processor_id.patch
>> Refreshed cleanup patch to remove ipipe_processor_id completely.
>>
> 
> The logic of this patch is flawed. rthal_processor_id() already implies
> rawness wrt how we fetch the current CPU number, so
> rthal_raw_processor_id() is redundant, and xnarch_raw_current_cpu() too

It isn't when you take a look at the corresponding I-pipe patch as well:
This enables debug instrumentation of smp_processor_id to validate also
the atomicity of Xenomai core code wrt cpuid handling!

> for the same reason. Additionally, xnpod_current_sched() also implies to
> use a raw accessor to the CPU number, so xnpod_current_raw_sched() is
> pointless. For this reason, we must be extra-careful about replacing
> rthal_processor_id() with smp_processor_id(), because 1) we may want to
> bypass DEBUG_PREEMPT checks sometimes, 2) not all supported archs do
> have a stack-insensitive smp_processor_id() implementation yet
> (blackfin, ARM, and Linux/x86 2.4).

For 1) see ipipe again, but 2), granted, is a problem. Likely the cut is
too radical and we must leave the infrastructure in place. Still, making
use of DEBUG_PREEMPT also for Xenomai is IMHO worth some changes that
will then at least have effect on a subset of archs.

> 
> To answer the inlined question about the need to initialize the ->sched
> field when setting up a timer, I would say yes, we have to set this up
> too. It might seem a bit silly, but nothing would prevent a migration
> call to occur for a just initialized timer, in which case, we would need
> the sched pointer to reflect the current situation (i.e. at init time).

But what would be the effect of a wrong sched until the timer is
started? Anyway, if you say it does matter, my instrumentation would
have caught the first bug in Xenomai! That xnpod_raw_current_sched() is
there because xntimer_init is used in non-atomic contexts (I got
warnings from DEBUG_PREEMPT).
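The init-time ->sched issue discussed above can be sketched as follows. This is a simplified illustration of why the pointer must be valid from init on; the struct layout and function names are hypothetical stand-ins, not the real Xenomai data structures.

```c
#include <stddef.h>

/* Illustrative model: a timer carries a pointer to a per-CPU scheduler
 * slot, and a migration request may arrive before the timer is ever
 * started, so ->sched must already be consistent at init time. */
struct sched { int cpu; };
struct timer {
    struct sched *sched;  /* must never be NULL once initialized */
    int queued;
};

static struct sched per_cpu_sched[4];

/* xntimer_init()-like setup: bind the timer to the current CPU's
 * scheduler slot immediately... */
static void timer_init(struct timer *t, int current_cpu)
{
    t->queued = 0;
    t->sched = &per_cpu_sched[current_cpu];
}

/* ...so that a migration hitting a just-initialized timer still finds
 * a valid source scheduler to rebind from. */
static void timer_migrate(struct timer *t, int dst_cpu)
{
    t->sched = &per_cpu_sched[dst_cpu];
}
```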

>> librtutils.patch
>>
>> My original librtprint patch. I now renamed the library to
>> librtutils to express that more stuff beyond rt_print may find its
>> home here in the future. Hopefully acceptable now.
>>
> 
> Will merge, but... since I'm something of a pain, I'm still pondering
> whether rtutils is our best shot for picking up a name here. We already
> have src/utils, so we would add src/librtutils as yet another set of
> utilities; I'm not sure both belong to the same class of helpers.

/me hesitated as well and put the "lib" prefix into the directory name.
So I'm all ears for a better name!

> 
>> rtsystrace-v2.patch
>>
>> Updated proposal to add rt_print-based Xenomai syscall tracing.
>> Still in early stage, and I'm lacking feedback on this approach and
>> whether it makes sense to pursue it.
>>
> 
> Gasp. I really don't like the idea of having tons of explicit trace
> points spread all over the code, this just makes the task of reading it
a total nightmare. There are some user-space tracing infrastructures
> already, starting with LTTng, or maybe older ones like Ctrace, with or
> without automatic instrumentation of the source files.
> We may want to get some facts about those before reinventing a square
> wheel.

Let me elaborate about the motivation a bit more: My idea is to have an
rt-safe replacement for strace. The latter is based on ptrace, requires
evil signals, and is far too invasive for tracing high-speed RT
applications IMHO (even if we had that RT-safe). But with low-impact,
per-process rt_printf, we have a pure user space mechanism at hand that
you can quickly set up for a single RT application (using your private
libs e.g.). No need to have an LTTng-prepared kernel, no need to trace
the whole system if you only want to know where your damn application
now hangs again, no need to disturb other RT users with your debugging
stuff.
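The usystrace idea sketched above amounts to wrapping each skin syscall in user space and emitting a trace record through a low-impact, per-process print path. A minimal illustration follows; the real proposal uses rt_printf, but this self-contained sketch substitutes a plain snprintf into a static buffer, and the wrapped call is a hypothetical stub, not an actual skin service.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative per-process trace sink (rt_printf's lock-free buffer in
 * the real proposal; a plain static buffer here). */
static char trace_buf[256];

/* Record the call name and return value at the syscall wrapper exit. */
#define TRACE_SYSCALL(name, ret) \
    snprintf(trace_buf, sizeof(trace_buf), "%s() = %d", (name), (ret))

/* Hypothetical wrapped syscall stub standing in for, e.g., a native
 * skin service: validate, run, then trace the result. */
static int traced_dummy_call(int arg)
{
    int ret = arg < 0 ? -22 /* -EINVAL */ : 0;
    TRACE_SYSCALL("dummy_call", ret);
    return ret;
}
```

Because the trace sink is private to the process, no kernel support and no system-wide tracer are needed, which is exactly the deployment advantage argued above.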

But I agree, the source cod

Re: [Xenomai-core] [ANNOUNCE] Xenomai and I-pipe patch series, LTTng bits

2007-05-31 Thread Philippe Gerum
On Mon, 2007-05-28 at 13:31 +0200, Jan Kiszka wrote:
> Instead of posting yet another stream of individual patches from my
> queue, I decided to put them all into a series and upload them. See
> 
>   http://www.rts.uni-hannover.de/rtaddon/patches
> 
> for my latest I-pipe, Xenomai, and LTTng enhancements and fixes. Here is
> a short overview of the content:
> 

Note to users: Those patches are all for trunk/v2.4, so don't search for
them in the v2.3.x stable branch.

> 
> /xenomai
> 
> 
> rt-safe-skin-dereference.patch
> 
> As posted a few days ago: Fixes the usage of module_put over
> the Xenomai domain.
> 

Merged.

> inline-rt_timer-services.patch
> 
> Inline trivial rt_timer services of the native skin for kernel
> usage. Saves object size, micro-optimises their usage.
> 

Merged.

> uninline-tsc-ns.patch
> 
> Uninlines the huge xnarch_tsc_to_ns and xnarch_ns_to_tsc functions.
> Specifically on low-end boxes with small caches, this appears to buy
> us several microseconds worst-case latency. :)
> 

Will merge.

> fast-tsc-to-ns.patch
> 
> Integration of my scaled-math-based xnarch_tsc_to_ns service for
> i386 at least. xnarch_ns_to_tsc remains untouched in order to keep
> conversion errors small. clocktest reports no significant precision
> regression here, and both code size and execution speed improved.
> 

Postponed. I need some feedback from Gilles who wrote the generic
support for the funky arithmetics we use.

> flatten-timer-irq.patch
> 
> As posted earlier: Refactor the timer IRQ path.
> 

Merged.

> xntimer-start-in-tick.patch
> 
> As posted earlier: Only reprogram the hardware timer once per tick.
> 

Merged.

> optimise-periodic-xntimers.patch
> 
> Simplifies the tests that have to be done in the tick handler in
> order to decide if an xntimer shall be reloaded by introducing a new
> timer state XNTIMER_PERIODIC and testing all states at once.
> 

Merged.
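The reload test simplification described in optimise-periodic-xntimers can be sketched as follows: encode the timer states as bits so the tick handler decides with a single mask comparison instead of several individual checks. The bit values and the helper name are illustrative, not the real XNTIMER_* definitions.

```c
/* Illustrative state bits (NOT the real Xenomai values). */
#define XNTIMER_DEQUEUED  0x1UL
#define XNTIMER_KILLED    0x2UL
#define XNTIMER_PERIODIC  0x4UL

/* Reload only if the timer is periodic and neither dequeued nor
 * killed: all three conditions collapse into one mask test. */
static int timer_needs_reload(unsigned long status)
{
    return (status & (XNTIMER_PERIODIC | XNTIMER_DEQUEUED | XNTIMER_KILLED))
           == XNTIMER_PERIODIC;
}
```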

> xeno-kill-ipipe_processor_id.patch
> Refreshed cleanup patch to remove ipipe_processor_id completely.
> 

The logic of this patch is flawed. rthal_processor_id() already implies
rawness wrt how we fetch the current CPU number, so
rthal_raw_processor_id() is redundant, and xnarch_raw_current_cpu() too
for the same reason. Additionally, xnpod_current_sched() also implies to
use a raw accessor to the CPU number, so xnpod_current_raw_sched() is
pointless. For this reason, we must be extra-careful about replacing
rthal_processor_id() with smp_processor_id(), because 1) we may want to
bypass DEBUG_PREEMPT checks sometimes, 2) not all supported archs do
have a stack-insensitive smp_processor_id() implementation yet
(blackfin, ARM, and Linux/x86 2.4).
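The accessor layering argued here can be illustrated with a toy model: the "raw" accessor reads the CPU number with no validity check by design, while the checked one (what smp_processor_id() becomes under DEBUG_PREEMPT) additionally verifies the caller cannot migrate. Everything below is a hypothetical sketch with made-up names, only meant to show why stacking a second raw variant on top of an already-raw accessor is redundant.

```c
/* Stand-in for the low-level HAL read of the current CPU number. */
static int fake_hard_cpu(void) { return 1; }

/* Counts DEBUG_PREEMPT-style validity checks performed. */
static int preempt_check_calls;

/* Checked accessor: smp_processor_id() semantics under DEBUG_PREEMPT,
 * which verifies atomicity wrt CPU migration before returning. */
static int checked_processor_id(void)
{
    preempt_check_calls++;  /* the real check would warn if preemptible */
    return fake_hard_cpu();
}

/* Raw accessor: rthal_processor_id() semantics, no check by design, so
 * a further "raw" wrapper around it would add nothing. */
static int raw_processor_id(void)
{
    return fake_hard_cpu();
}
```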

To answer the inlined question about the need to initialize the ->sched
field when setting up a timer, I would say yes, we have to set this up
too. It might seem a bit silly, but nothing would prevent a migration
call to occur for a just initialized timer, in which case, we would need
the sched pointer to reflect the current situation (i.e. at init time).

> remove-rthal_rwlock.patch
> 
> Refreshed removal patch for the now unused rthal_rwlocks.
> 

Merged.

> librtutils.patch
> 
> My original librtprint patch. I now renamed the library to
> librtutils to express that more stuff beyond rt_print may find its
> home here in the future. Hopefully acceptable now.
> 

Will merge, but... since I'm something of a pain, I'm still pondering
whether rtutils is our best shot for picking up a name here. We already
have src/utils, so we would add src/librtutils as yet another set of
utilities; I'm not sure both belong to the same class of helpers.

> rtsystrace-v2.patch
> 
> Updated proposal to add rt_print-based Xenomai syscall tracing.
> Still in early stage, and I'm lacking feedback on this approach and
> whether it makes sense to pursue it.
> 

Gasp. I really don't like the idea of having tons of explicit trace
points spread all over the code, this just makes the task of reading it
a total nightmare. There are some user-space tracing infrastructures
already, starting with LTTng, or maybe older ones like Ctrace, with or
without automatic instrumentation of the source files.
We may want to get some facts about those before reinventing a square
wheel.

> lttng.patch
> 
> Very rough patch to make LTTng work with Xenomai again. This patch
> tries to follow Jean-Olivier Villemure's original work very closely
> to get something working first. Needs more cleanups and enhancements
> as I explained earlier in the LTTng announcement.
> 

Will merge to ease further work on this feature. xenoltt.xml might be
left in another directory than src/utils, or at least in a separate
subdir, like can/.

Thanks,

-- 
Philippe.



___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core