Moving to a new thread subject.

The following thread will be dedicated to the development of the automated
performance testing and its presentation.

In the first stage, the infrastructure will be designed and created; later
on, the presentation and automation will be discussed.

Infrastructure design:
Option 1:
- The time measurements will be based on the nucleus API functions.
- On entering a function a timer will be started, and stopped at the
function's exit point.
- When test execution completes, all the time measurements of each function
will be summed up and an average calculated.

- One infrastructure for all skins.
- Skin-specific delays will not be measured.
- Introduces a delay into the underlying system.

Implementation method:
- Either by code integrated into the nucleus, scoped in a precompilation
switch.
- Or by a preloaded or statically linked library which will intercept every
nucleus API function, run its timer, and only then call the original
nucleus function.
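The preloaded-library variant could be sketched as below, assuming the
wrapped call is exported from a shared library the application links
against; rt_api_call() is a hypothetical symbol name, not a real Xenomai
entry point. The wrapper would be built with -ldl and injected via
LD_PRELOAD:

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <time.h>

static long elapsed_ns(const struct timespec *t0, const struct timespec *t1)
{
    return (t1->tv_sec - t0->tv_sec) * 1000000000L
         + (t1->tv_nsec - t0->tv_nsec);
}

static int (*real_rt_api_call)(int arg);

/* Same signature as the original, so the preloaded copy shadows it;
 * the wrapper times the call and only then forwards to the original. */
int rt_api_call(int arg)
{
    struct timespec t0, t1;
    int ret;

    if (!real_rt_api_call)
        real_rt_api_call = (int (*)(int))dlsym(RTLD_NEXT, "rt_api_call");

    clock_gettime(CLOCK_MONOTONIC, &t0);  /* timer on at entry */
    ret = real_rt_api_call(arg);          /* call the original nucleus code */
    clock_gettime(CLOCK_MONOTONIC, &t1);  /* timer off at exit */

    fprintf(stderr, "rt_api_call: %ld ns\n", elapsed_ns(&t0, &t1));
    return ret;
}
```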

Option 2:
- At the application level:
  - Create a library which calculates and presents the results.
  - Using it will require the test to call specific API functions.

- Skin-oriented results.
- No delay in the underlying system.
- Provides any user with the ability to test her own application.

Option 3:
- A combination of 1 and 2:
  - A library which collects the data and later on presents it.
  - An API for the application layer.
  - The same or a different API to integrate into the nucleus.

- More flexibility.
- Can also provide a means to pinpoint the exact delay factor.
- Usable as a development tool.
- Harder to maintain uniform results.

Waiting for your comments/suggestions/corrections.
2007/9/4, Jan Kiszka <[EMAIL PROTECTED]>:

> Ravid Baruch Naali wrote:
> > The automated chart seems exactly the thing for me (I have some testing
> > tools development experience).
> >
> > I'll look into it and will soon send an initial design.
> Hurray! You just deserved an option for a free beer! Now you just need
> to post the first related patch - and manage to meet me personally (the
> latter may be a bit trickier than the former ;)).
> >
> > Of course any pointers/advice/suggestions will be highly appreciated.
> Let's start with creating the infrastructure for the targets first.
> Once we have more tests with unified output locally on the Xenomai
> targets, we can think about how to transfer this data to a central
> database and how to visualise that database on the web.
> Jan

Ravid Baruch Naali
Mobile: 052-5830021