Re: [Xenomai] thread executing udd_register_device() is not rescheduled again

2018-06-18 Thread Pham, Phong


Hi Philippe and all,

While I may have asked too many questions in my previous email, I think what I 
really need is someone to confirm my question #2: is calling 
udd_register_device() (in kernel space, of course) inappropriate in Xenomai 
when the operation is initiated from user space via an ioctl of an RTDM device?

Specifically,

User space:
ioctl(fd_from_mmap_rtdm_device, DO_MY_WORK, NULL);

where the rtdm_device provides my custom ioctl implementation.

Kernel space:
struct udd_device my_new_device;

int rtdm_device_ioctl(struct rtdm_fd *fd, unsigned int request, void *arg)
{
switch (request)
{
  case DO_MY_WORK:
...
udd_register_device(&my_new_device);
...
  break;
}
}

Is this illegal in Xenomai?

Any way of getting around this?
Thanks,
Phong.
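One commonly suggested direction for this kind of problem (a sketch only, not compilable standalone; it assumes Xenomai 3 RTDM, and the handler names my_ioctl_rt/my_ioctl_nrt are illustrative, not from the post) is to let the request fall through to the driver's non-RT ioctl handler, which runs in a regular Linux context where services such as device_create() are legal:

```c
/* Sketch: assumes Xenomai 3 RTDM; names are illustrative. */
static struct udd_device my_new_device;

static int my_ioctl_rt(struct rtdm_fd *fd, unsigned int request,
                       void __user *arg)
{
        /* Returning -ENOSYS from the RT handler asks RTDM to relax
         * the caller and retry through the non-RT handler below. */
        return -ENOSYS;
}

static int my_ioctl_nrt(struct rtdm_fd *fd, unsigned int request,
                        void __user *arg)
{
        switch (request) {
        case DO_MY_WORK:
                /* Runs from a regular Linux context, so the
                 * device_create() path inside udd_register_device()
                 * should be safe to use here. */
                return udd_register_device(&my_new_device);
        default:
                return -EINVAL;
        }
}

/* Wired up in the driver definition:
 * .ops = {
 *         .ioctl_rt  = my_ioctl_rt,
 *         .ioctl_nrt = my_ioctl_nrt,
 * },
 */
```

Whether this fits depends on the driver, so treat it as a starting point rather than a confirmed fix.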

-Original Message-
From: Pham, Phong
Sent: Thursday, June 14, 2018 3:58 PM
To: 'Philippe Gerum'; xenomai@xenomai.org
Cc: Hillman, Robert
Subject: RE: [Xenomai] thread executing udd_register_device() is not 
rescheduled again



Hi Philippe,

I dug a little more; the Linux "current" thread calls the device_create() Linux 
API, which eventually leads to devtmpfs_create_node(); upon 
wake_up_process(kdevtmpfs thread), kdevtmpfs becomes runnable (expected).  
However, after wait_for_completion(), the "current" running thread is no longer 
"current"; instead, "current" is now switched to the initial process I invoked, 
which was sleeping when device_create() was called.  The thread that was 
current while executing device_create() eventually calls xnthread_suspend() and 
hangs.  The new "current" thread doesn't do anything either.

1) Does the detailed description above fit with what you mentioned about 
"secondary mode"?
2) Does it mean that udd_register_device() usage is inappropriate when called 
from user space via an ioctl of an RTDM device?

3) I noticed there are nrt calls in kernel/xenomai/posix/memory.c.  Would this 
be a good example of an ioctl that creates an RTDM device mapper?  (I haven't 
read the code in detail and hope you can provide a quick yes/no before I 
proceed, to save time.)
4) If (3) is not such an example, does it mean I have to create a Linux file 
mapper (instead of an RTDM file mapper) without Xenomai involvement (i.e. 
equivalent to udd_register_device() functionality but strictly Linux)?

Phong.

-Original Message-
From: Philippe Gerum [mailto:r...@xenomai.org]
Sent: Wednesday, June 13, 2018 11:10 PM
To: Pham, Phong; xenomai@xenomai.org
Cc: Hillman, Robert
Subject: Re: [Xenomai] thread executing udd_register_device() is not 
rescheduled again

On 06/13/2018 10:26 PM, Pham, Phong wrote:
>
> Hi,
>
> I currently have a bunch of RTDM devices.  For one of the devices, I use 
> ioctl() from user space to get into kernel space.  While in kernel space, I 
> create another RTDM device (using the udd_register_device() API).  The API 
> returns success without any issue.  However, upon exiting kernel space, 
> handle_root_syscall() executes xnthread_relax(), xnthread_suspend(),
>
>   /*
>   * If the current thread is being relaxed, we must have been
>   * called from xnthread_relax(), in which case we introduce an
>   * opportunity for interrupt delivery right before switching
>   * context, which shortens the uninterruptible code path.
>   *
>   * We have to shut irqs off before __xnsched_run() though: if
>   * an interrupt could preempt us in ___xnsched_run() right
>   * after the call to xnarch_escalate() but before we grab the
>   * nklock, we would enter the critical section in
>   * xnsched_run() while running in secondary mode, which would
>   * defeat the purpose of xnarch_escalate().
>   */
>   if (likely(thread == sched->curr)) {
>  xnsched_set_resched(sched);
>  if (unlikely(mask & XNRELAX)) {
>xnlock_clear_irqon();
>splmax();
>__xnsched_run(sched);
>return;
>  }
>  /*
>  * If the thread is running on another CPU,
>  * xnsched_run will trigger the IPI as required.
>  */
>  __xnsched_run(sched);
>  goto out;
>   }
>
> and the current thread never runs again (i.e. it never exits xnthread_suspend()).
>
> I noticed that udd_register_device() eventually uses the device_create() Linux 
> API to create the files.  If I do not call device_create() but execute 
> everything else, then my current thread does not hang (i.e. it runs to 
> completion).  Any insight into what in the Xenomai implementation causes the 
> scheduler to behave this way when Linux 

Re: [Xenomai] rpi3 - very high negative latency for simple xeno task

2018-06-18 Thread Greg Gallagher
Hi Pintu,
   I just started doing some work recently with the RPI3 to see how
Xenomai performs, but I haven't made it too far at the moment.  How
did you tune the system?

-Greg

On Sun, Jun 17, 2018 at 10:07 AM, Pintu Kumar  wrote:
> Dear Greg,
>
> Do you have any comment about this?
> On Wed, Jun 6, 2018 at 11:11 AM Pintu Kumar  wrote:
>>
>> Hi,
>>
>> I have a simple demo program which just creates one rt_task (using the
>> native API); inside the task, I rt_printf "some logs" 10 times, with a
>> 100 us interval.
>>
>> Today I checked this program first time on Raspberry Pi 3, Model B.
>> Xenomai: 3.0.6
>> ARCH = arm32
>> Kernel: 4.9.80 for rpi3
>>
>> Here is the output:
>> -
>> native $ sudo ./simple_xeno_task 10
>> main: creating task name: task0, priority: 99
>> Task: 0, Iteration: 10, Sleep duration: 100 us
>>
>> Task[0] - Avg Latency: -9.630 us
>> Task[0] - Max Latency: 0.052 us
>> Task[0] - Min Latency: -67.448 us
>> 1 32.552 -67.448
>> 2 75.261 -24.739
>> 3 97.657 -2.343
>> 4 99.479 -0.521
>> 5 99.791 -0.209
>> 6 99.740 -0.260
>> 7 100.052 0.052
>> 8 99.635 -0.365
>> 9 99.740 -0.260
>> 10 99.792 -0.208
>> ALL FINISHED...!!!
>> ---
>>
>> As you can see, the first 2 iterations have very large negative values.
>> What could be the cause of this?
>> On an RTOS, I expect the output to always be close to 100 us.
>>
>>
>> When I run the same program on x86_64 machine I get the below output:
>> 
>> ./simple_xeno_task 10
>> main: creating task name: task0, priority: 99
>> Task: 0, Iteration: 10, Sleep duration: 100 us
>>
>> Task[0] - Avg Latency: -0.680 us
>> Task[0] - Max Latency: 0.043 us
>> Task[0] - Min Latency: -5.868 us
>> 1 94.132 -5.868
>> 2 99.127 -0.873
>> 3 99.955 -0.045
>> 4 100.043 0.043
>> 5 99.990 -0.010
>> 6 99.959 -0.041
>> 7 100.017 0.017
>> 8 99.977 -0.023
>> 9 99.965 -0.035
>> 10 100.039 0.039
>> ALL FINISHED...!!!
>> 
>>
>> So, I wonder what could be the cause of negative latency on Rpi3 with 
>> Xenomai.
>>
>>
>> This is the latency output on Rpi3:
>> --
>> sudo /usr/xenomai/bin/latency
>> == Sampling period: 1000 us
>> == Test mode: periodic user-mode task
>> == All results in microseconds
>> warming up...
>> RTT|  00:00:01  (periodic user-mode task, 1000 us period, priority 99)
>> RTH|lat min|lat avg|lat max|-overrun|---msw|---lat best|--lat worst
>> RTD|  0.625|  1.374|  5.312|   0| 0|  0.625|  5.312
>> RTD|  0.624|  1.376|  5.468|   0| 0|  0.624|  5.468
>> RTD|  0.624|  1.373|  5.676|   0| 0|  0.624|  5.676
>> RTD|  0.623|  1.372|  9.426|   0| 0|  0.623|  9.426
>> RTD|  0.467|  1.355|  4.582|   0| 0|  0.467|  9.426
>> RTD|  0.623|  1.384|  9.113|   0| 0|  0.467|  9.426
>> RTD|  0.622|  1.379|  5.935|   0| 0|  0.467|  9.426
>> RTD|  0.622|  1.359|  4.581|   0| 0|  0.467|  9.426
>> RTD|  0.622|  1.372|  7.028|   0| 0|  0.467|  9.426
>> RTD|  0.621|  1.361|  6.038|   0| 0|  0.467|  9.426
>> RTD|  0.621|  1.360|  4.267|   0| 0|  0.467|  9.426
>> RTD|  0.673|  1.364| 10.256|   0| 0|  0.467| 10.256
>> RTD|  0.672|  1.363|  6.453|   0| 0|  0.467| 10.256
>> RTD|  0.620|  1.352|  4.161|   0| 0|  0.467| 10.256
>> RTD|  0.619|  1.373|  7.234|   0| 0|  0.467| 10.256
>> RTD|  0.619|  1.352|  6.140|   0| 0|  0.467| 10.256
>> RTD|  0.671|  1.352|  4.733|   0| 0|  0.467| 10.256
>> RTD|  0.566|  1.363|  5.514|   0| 0|  0.467| 10.256
>> RTD|  0.618|  1.368|  6.712|   0| 0|  0.467| 10.256
>> RTD|  0.618|  1.353|  4.420|   0| 0|  0.467| 10.256
>> RTD|  0.617|  1.368|  6.815|   0| 0|  0.467| 10.256
>> ^C---|---|---|---||--|-
>>
>> --
>>
>>
>> Thanks,
>> Pintu
>
> ___
> Xenomai mailing list
> Xenomai@xenomai.org
> https://xenomai.org/mailman/listinfo/xenomai



Re: [Xenomai] Xenomai 3 Multi-core Semaphore latency

2018-06-18 Thread Jeff Melvile


Hi Philippe,

On Thu, 14 Jun 2018, Philippe Gerum wrote:

> On 06/12/2018 06:18 PM, Jeff Melvile wrote:
> > Dmitriy (and Philippe),
> > 
> > Thanks for looking into this. I'm working with Raman.
> > 
> > On Tue, 22 May 2018, Dmitriy Cherkasov wrote:
> > 
> >> On 05/20/2018 08:07 AM, Philippe Gerum wrote:
> >>> On 05/18/2018 06:24 PM, Singh, Raman wrote:
>  Environment: ARM Cortex-A53 quad-core processor (ARM 64-bit) on a
>  Zynq Ultrascale+ ZCU102 dev board, Xenomai 3 next branch from May 
>  14, 2018 (SHA1: 410a4cc1109ba4e0d05b7ece7b4a5210287e1183 ), 
>  Cobalt configuration with POSIX skin, Linux Kernel version 4.9.24
> 
>  I've been having issues with semaphore latency when threads access
>  semaphores while executing on different cores. When both threads accessing
>  a semaphore execute on the same processor core, the latency between
>  one thread posting a semaphore and another waking up after waiting on it
>  is fairly small. However, as soon as one of the threads is moved to a
>  different core, the latency between a semaphore post from one thread to a
>  waiting thread waking up in response starts to become large enough to
>  affect real time performance.  The latencies I've been seeing are on the
>  order of 100's of milliseconds.
> 
> >>>
> >>> Reproduced on hikey here: the rescheduling IPIs Xenomai is sending for
> >>> waking up threads on remote CPUs don't flow to the other end properly
> >>> (ipipe_send_ipi()), which explains the behavior you have been seeing.
> >>>
> >>> @Dmitriy: this may be an issue with the range of SGIs available to the
> >>> kernel when a secure firmware is enabled, which may be restricted to
> >>> SGI[0-7].
> >>>
> >>> For the rescheduling IPI on ARM64, the interrupt pipeline attempts to
> >>> trigger SGI8 which may be reserved by the ATF in secure mode, therefore
> >>> may never be received on the remote end.
> >>>
> >>> Fixing this will require some work in the interrupt pipeline, typically
> >>> for multiplexing our IPIs on a single SGI below SGI8. As a matter of
> >>> fact, the same issue exists on the ARM side, but since running a secure
> >>> firmware there is uncommon for Xenomai users, this went unnoticed (at
> >>> least not reported yet AFAIR). We need to sync up on this not to
> >>> duplicate work.
> >>>
> >>
> >> I see this on Hikey with the latest ipipe-arm64 tree as well. I can
> >> confirm the reschedule IPI isn't being received although it is sent.
> >> Rearranging the IPIs to move reschedule up a few spots resolves the
> >> issue, so I think this confirms the root cause.
> > 
> > Short term - what is the consequence of naively rearranging the IPIs? What 
> > else breaks? FWIW secure firmware is not in use. Is your test patch 
> > something we can apply to be able to test the multi-core aspects of our 
> > software?
> > 
> > Let me know if there is anything either of us can do to help. We have 
> > kernel development experience but admittedly not quite at this level.
> > 
> 
> This issue may affect the ARM port in some cases as well, so I took a stab at 
> it for ARM64 since the related code is very similar. Could you test that 
> patch? TIA,

Thanks for the patch. We ended up applying it on top of 
a kernel patched with ipipe-core-4.9.24-arm64-2.patch, manually resolving 
the conflicts (contained to smp.c IIRC). Clearly this is a little different 
than applying it on top of the ipipe HEAD and generating a fresh patch. 

The fix did resolve the high latencies we were seeing in our application 
across cores. Thanks again for the fix and let me know if you'd 
like us to do any additional testing.

Thanks,
Jeff 

> 
> commit 765aa7853642b46e1c13fd1f21dfcb9d049f5bfa (HEAD -> wip/arm64-ipi-4.9)
> Author: Philippe Gerum 
> Date:   Wed Jun 13 19:16:27 2018 +0200
> 
> arm64/ipipe: multiplex IPIs
> 
> SGI8-15 can be reserved for the exclusive use of the firmware. The
> ARM64 kernel currently uses six of them (NR_IPI), and the pipeline
> needs to define three more for conveying out-of-band events
> (i.e. reschedule, hrtimer and critical IPIs). Therefore we have to
> multiplex nine inter-processor events over eight SGIs (SGI0-7).
> 
> This patch changes the IPI management in order to multiplex all
> regular (in-band) IPIs over SGI0, reserving SGI1-3 for out-of-band
> events.
> 
> diff --git a/arch/arm64/include/asm/ipipe.h b/arch/arm64/include/asm/ipipe.h
> index b16f03b508d6..8e756be01906 100644
> --- a/arch/arm64/include/asm/ipipe.h
> +++ b/arch/arm64/include/asm/ipipe.h
> @@ -32,6 +32,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #define IPIPE_CORE_RELEASE   4
>  
> @@ -165,7 +166,7 @@ static inline void ipipe_unmute_pic(void)
>  void __ipipe_early_core_setup(void);
>  void __ipipe_hook_critical_ipi(struct ipipe_domain *ipd);
>  void __ipipe_root_localtimer(unsigned int irq, void *cookie);
> 

Re: [Xenomai] xenomai modules - xeno_rtdm, xeno_hal, xeno_nucleus. not getting installed

2018-06-18 Thread Greg Gallagher
Did you turn them on in menuconfig when you built your kernel?

-Greg

On Mon, Jun 18, 2018 at 1:40 AM, Ashok kumar  wrote:
> Hi,
>
> I have patched Linux 3.18.20 with Xenomai 2.6.4, installed the patched 
> kernel, and compiled the Xenomai library using the commands below:
>
> cd /usr/src
> sudo mkdir build_xenomai-2.6.4
> cd build_xenomai-2.6.4
>
> sudo ../xenomai-2.6.4/configure --enable-shared --enable-smp --enable-x86-sep
> sudo make -j8
> sudo make install
>
> In /usr/xenomai/ I am not able to find the modules directory, and the xenomai 
> modules (xeno_rtdm, xeno_hal, xeno_nucleus) are not available.
>
> I used the commands below to load the modules:
>
> sudo modprobe xeno_rtdm
> sudo modprobe xeno_hal
> sudo modprobe xeno_nucleus
>
> but the modules are not getting loaded.
>
>
> Is there any modification that should be made in the makefile, or any
> other option that should be enabled in the configure step?
>
> Kindly help me get the xenomai modules available.
>
>
> Thank you
> R.Ashokkumar
>



Re: [Xenomai] ipipe-4.4.y LTS

2018-06-18 Thread Greg Gallagher
Are there still plans to have an ipipe patch for the CIP kernel?  I
think this was brought up at the last meetup.  Maybe the bigger
question (which is a lot more work) is: do we plan on maintaining ipipe
for 4.4, 4.9 and older?

-Greg

On Mon, Jun 18, 2018 at 8:52 AM, Radu Rendec  wrote:
> Hi all,
>
> Are there any plans to maintain the ipipe-4.4.y branch and update it to
> later versions of kernel 4.4? It would be nice to have, since at this
> point kernel 4.4 has the longest projected EOL.
>
> Currently the latest thing that can be merged cleanly on top of
> ipipe-core-4.4.71-powerpc-8 is kernel 4.4.73, which is only 2 releases
> later than what's already in there.
>
> Thanks,
> Radu Rendec
>



[Xenomai] ipipe-4.4.y LTS

2018-06-18 Thread Radu Rendec
Hi all,

Are there any plans to maintain the ipipe-4.4.y branch and update it to
later versions of kernel 4.4? It would be nice to have, since at this
point kernel 4.4 has the longest projected EOL.

Currently the latest thing that can be merged cleanly on top of
ipipe-core-4.4.71-powerpc-8 is kernel 4.4.73, which is only 2 releases
later than what's already in there.

Thanks,
Radu Rendec
