[Xenomai-core] MSI support in Xenomai

2011-03-16 Thread krishna m

Hi All,
I wanted to know whether the latest Xenomai [version xenomai-2.5.6] supports MSI 
interrupts. I am using the Adeos patch [version adeos-ipipe-2.6.37-x86-2.9-00] 
and the corresponding Linux kernel [version linux-2.6.37]. I have read in the 
Xenomai FAQ that MSI is not supported by the I-pipe patch; I just wanted to know 
whether the latest Xenomai and I-pipe support MSI. 

thanks,
Krishna 
  ___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] MSI support in Xenomai

2011-03-21 Thread krishna m

> Date: Thu, 17 Mar 2011 09:30:37 +0100
> From: jan.kis...@siemens.com
> To: gilles.chanteperd...@xenomai.org; krishnamurth...@hotmail.com
> CC: xenomai-core@gna.org
> Subject: Re: MSI support in Xenomai
> 
> On 2011-03-16 14:26, Gilles Chanteperdrix wrote:
> > krishna m wrote:
> >> Hi All, I wanted to know whether the latest Xenomai [version
> >> xenomai-2.5.6] supports MSI interrupts. I am using the Adeos patch
> >> [version adeos-ipipe-2.6.37-x86-2.9-00] and the corresponding Linux
> >> kernel [version linux-2.6.37]. I have read in the Xenomai FAQ that
> >> MSI is not supported by the I-pipe patch; I just wanted to know
> >> whether the latest Xenomai and I-pipe support MSI.
> > 
> > No.
> 
> Strictly speaking, yes, but it works in practice. See
> 
> http://thread.gmane.org/gmane.linux.real-time.xenomai.users/12135
> 
> Jan
> 
> -- 
> Siemens AG, Corporate Technology, CT T DE IT 1
> Corporate Competence Center Embedded Linux
 
Hi Jan,

Thanks for the reply. I got the following info from your post and have a few 
questions related to it:
 


>> - MSI[-X] usage for RTDM (i.e. RT) drivers is basically fine as long as you 
>> avoid enabling/disabling from RT context (also used for quite a while here, 
>> no known issues under this restriction)
 
My question: are you saying here that I must not call pci_enable_msi() and 
pci_disable_msi() inside the RTDM driver? If not, where should these calls 
be made? 
 
Right now I have ported my standard Linux driver for the PCIe card to an RTDM 
driver. This driver uses MSI. I am not enabling or disabling MSI or the IRQ 
anywhere in the driver while processing [i.e. in the ISR or read/write]. I am 
enabling MSI in the PCI probe function and disabling it in the PCI remove 
function. The problem I am facing is that as soon as I get the first MSI 
interrupt the kernel oopses and I get the following dump:

Mar 21 21:08:34 barch kernel: BUG: unable to handle kernel paging request 
at 7f45402d
Mar 21 21:08:34 barch kernel: Oops:  [#1] PREEMPT SMP 

 
 [Note: the normal Linux driver works without any problem.] 
 
Please let me know your thoughts. Also, is there an example PCIe RTDM driver 
using MSI that I can refer to in order to understand more about this topic?
 
thanks,
krishna 


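For orientation while reading the replies below, the split that turns out to be 
the recommended one -- MSI enabled/disabled only from Linux context (probe and 
remove), with only rtdm_irq_request/rtdm_irq_free on the RT side -- would look 
roughly like the following sketch. This is a hypothetical minimal driver, not 
the poster's code: names such as mydrv_* and ctx are made up, and error 
unwinding is trimmed to the essentials.

#include <linux/pci.h>
#include <rtdm/rtdm_driver.h>

static struct {
	rtdm_irq_t irq_handle;
} ctx;

static int mydrv_isr(rtdm_irq_t *irq_handle)
{
	/* RT context: acknowledge the device here, never touch MSI state */
	return RTDM_IRQ_HANDLED;
}

static int mydrv_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
	int ret;

	ret = pci_enable_device(dev);
	if (ret)
		return ret;

	ret = pci_enable_msi(dev);	/* Linux (non-RT) context only */
	if (ret) {
		pci_disable_device(dev);
		return ret;
	}

	/* dev->irq now holds the MSI vector, not the legacy INTx line */
	ret = rtdm_irq_request(&ctx.irq_handle, dev->irq, mydrv_isr,
			       0, "mydrv", &ctx);
	if (ret) {
		pci_disable_msi(dev);
		pci_disable_device(dev);
	}
	return ret;
}

static void mydrv_remove(struct pci_dev *dev)
{
	rtdm_irq_free(&ctx.irq_handle);
	pci_disable_msi(dev);		/* Linux (non-RT) context only */
	pci_disable_device(dev);
}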


Re: [Xenomai-core] MSI support in Xenomai

2011-03-22 Thread krishna m

> Date: Tue, 22 Mar 2011 13:13:16 +0100
> From: jan.kis...@siemens.com
> To: krishnamurth...@hotmail.com
> CC: gilles.chanteperd...@xenomai.org; xenomai-core@gna.org
> Subject: Re: MSI support in Xenomai
> 
> On 2011-03-21 13:48, krishna m wrote:
> > > [...]
> > 
> > Hi Jan,
> > 
> > Thanks for the reply. I got the following info from your post and have 
> > a few questions related to it:
> > 
> > 
> > 
> > >> - MSI[-X] usage for RTDM (i.e. RT) drivers is basically fine as long 
> > >> as you avoid enabling/disabling from RT context (also used for quite 
> > >> a while here, no known issues under this restriction)
> > 
> > My question: are you saying here that I must not call pci_enable_msi() 
> > and pci_disable_msi() inside the RTDM driver? If not, where should these 
> > calls be made?
> 
> pci_enable/disable_msi are initialization services that only need to be
> executed in Linux context - also in RTDM drivers.
> 
> > 
> > Right now I have ported my standard Linux driver for the PCIe card to an 
> > RTDM driver. This driver uses MSI. I am not enabling or disabling MSI or 
> > the IRQ anywhere in the driver while processing [i.e. in the ISR or 
> > read/write]. I am enabling MSI in the PCI probe function and disabling 
> > it in the PCI remove function. The problem I am facing is that as soon 
> > as I get the first MSI interrupt the kernel oopses and I get the 
> > following dump:
> > 
> > Mar 21 21:08:34 barch kernel: BUG: unable to handle kernel paging request 
> > at 7f45402d
> > Mar 21 21:08:34 barch kernel: Oops:  [#1] PREEMPT SMP
> > 
> 
> We would need the full backtrace to have at least a chance to help.
> 
> > 
> > [Note: the normal Linux driver works without any problem.]
> > 
> > Please let me know your thoughts. Also, is there an example PCIe RTDM 
> > driver using MSI that I can refer to in order to understand more about 
> > this topic?
> 
> Check RTnet's IGB driver. It works with MSI-X interrupts on x86.
> 
> Jan
> 
> -- 
> Siemens AG, Corporate Technology, CT T DE IT 1
> Corporate Competence Center Embedded Linux
 
Hi Jan,
Thanks again for the reply. Here is the full backtrace:
 
Mar 22 21:12:47 localhost kernel: test_dev: Probe for device function=0
Mar 22 21:12:47 localhost kernel: test_dev :15:00.0: found PCI INT A -> IRQ 
10
Mar 22 21:12:47 localhost kernel: pci_bar = 0xfe00 
Mar 22 21:12:47 localhost kernel: Xenomai: RTDM: RT open handler is deprecated, 
driver requires update.
Mar 22 21:12:47 localhost kernel: Xenomai: RTDM: RT close handler is 
deprecated, driver requires update.
Mar 22 21:12:47 localhost kernel: test_dev: IRQ 16 successfully assigned to the 
device.
Mar 22 21:12:47 localhost kernel: test_dev: Probe for device function=1
Mar 22 21:12:47 localhost kernel: test_dev: Successfully added test_dev Device 
Driver.
Mar 22 21:13:58 localhost kernel: BUG: unable to handle kernel paging request 
at d84050c0
Mar 22 21:13:58 localhost kernel: IP: [] 
__ipipe_set_irq_pending+0x36/0x49
Mar 22 21:13:58 localhost kernel: *pde =  
Mar 22 21:13:58 localhost kernel: Oops: 0002 [#1] PREEMPT SMP 
Mar 22 21:13:58 localhost kernel: last sysfs file: 
/sys/devices/pci:00/:00:1a.0/:1f:00.0/:20:05.0/:2d:00.0/class
Mar 22 21:13:58 localhost kernel: Modules linked in: test_dev autofs4 hidp 
l2cap crc16 bluetooth sunrpc ip6t_REJECT xt_tcpudp ip6table_filter ip6_tabl
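As an aside on the two RTDM deprecation warnings in the log above: in Xenomai 
2.5 the separate RT open/close handlers were deprecated, and a driver is 
expected to register the non-RT variants instead. In sketch form, against the 
Xenomai 2.x rtdm_device layout (the my_* names are hypothetical):

static struct rtdm_device my_rtdm_device = {
	.struct_version = RTDM_DEVICE_STRUCT_VER,
	.device_flags = RTDM_NAMED_DEVICE,
	.open_nrt = my_open,		/* instead of .open_rt */
	.ops = {
		.close_nrt = my_close,	/* instead of .close_rt */
		/* read/write/ioctl handlers ... */
	},
	/* device_name, proc_name, etc. ... */
};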

Re: [Xenomai-core] MSI support in Xenomai

2011-03-22 Thread krishna m

> Date: Tue, 22 Mar 2011 18:03:43 +0100
> From: gilles.chanteperd...@xenomai.org
> To: jan.kis...@siemens.com
> CC: krishnamurth...@hotmail.com; xenomai-core@gna.org
> Subject: Re: MSI support in Xenomai
> 
> Jan Kiszka wrote:
> > On 2011-03-22 16:55, krishna m wrote:
> 
> >> Mar 22 21:12:47 localhost kernel: test_dev: Probe for device function=0
> >> Mar 22 21:12:47 localhost kernel: test_dev :15:00.0: found PCI INT A 
> >> -> IRQ 10
> > 
> > Here you get IRQ 10...
> > 
> >> Mar 22 21:12:47 localhost kernel: pci_bar = 0xfe00
> >> Mar 22 21:12:47 localhost kernel: Xenomai: RTDM: RT open handler is 
> >> deprecated, driver requires update.
> >> Mar 22 21:12:47 localhost kernel: Xenomai: RTDM: RT close handler is 
> >> deprecated, driver requires update.
> > 
> > [ These messages also have a meaning, though unrelated to the crash. ]
> > 
> >> Mar 22 21:12:47 localhost kernel: test_dev: IRQ 16 successfully assigned 
> >> to the device.
> > 
> > ...but here IRQ 16 is assigned. Broken output or a real inconsistency?
> > 
> > Also, where is the MSI? You should see log messages about MSI/MSI-X IRQ
> > number assignment when properly enabling the support at PCI level.
> 
> Maybe pci_enable_msi was called after request_irq ?
> 
> 
> -- 
> Gilles.
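Background to Gilles' question: pci_enable_msi() switches the device from INTx 
to MSI and rewrites dev->irq to the newly allocated MSI vector, so an IRQ 
requested before the MSI enable would latch the stale INTx number. The 
required order, in sketch form (ctx and my_isr are hypothetical names):

	ret = pci_enable_msi(dev);	/* dev->irq becomes the MSI vector */
	if (ret)
		goto fail;
	ret = rtdm_irq_request(&ctx.irq_handle, dev->irq, my_isr,
			       0, "mydrv", &ctx);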
 
I just checked my code: I do enable MSI (pci_enable_msi) before 
rtdm_irq_request. I have pasted below the relevant part of my PCI probe 
function in the device driver:
 
...
	rtdm_dev = kmalloc(sizeof(struct rtdm_device), GFP_KERNEL);
	if (!rtdm_dev) {
		printk(KERN_WARNING "RUBICON: kmalloc failed\n");
		ret = -ENOMEM; /* insufficient storage space available */
		goto fail_pci;
	}

	/* copy the template structure into the newly allocated memory */
	memcpy(rtdm_dev, &rubicon_rtdm_driver, sizeof(struct rtdm_device));

	/* create the device filename */
	snprintf(rtdm_dev->device_name,
		 RTDM_MAX_DEVNAME_LEN, "rtser%d", 0 /* i */);
	rtdm_dev->device_id = 0; /* i */

	/* define two other members of the rtdm_device structure */
	rtdm_dev->proc_name = rtdm_dev->device_name;

	ret = rtdm_dev_register(rtdm_dev);
	if (ret < 0) {
		printk(KERN_WARNING "RUBICON: cannot register device\n");
		goto fail_pci;
	}
	g_rubicon_context.xen_device = rtdm_dev;

	ret = pci_enable_msi(dev);
	if (ret) {
		printk("RUBICON: Enabling MSI failed with error code: 0x%x\n", ret);
		goto fail_pci;
	}

	g_rubicon_context.irq = dev->irq; /* save the allocated (MSI) IRQ */
	g_rubicon_context.dev = dev;      /* save the PCI h/w device context */
	pci_set_drvdata(dev, &g_rubicon_context);
	/* request the IRQ for the device */
	ret = rtdm_irq_request(&g_rubicon_context.irq_handle,
			       g_rubicon_context.irq,
			       rubicon_irq_handler,
			       RTDM_IRQTYPE_SHARED,
			       "rubicon",
			       (void *)&g_rubicon_context);

 
From the IRQ number perspective:
Mar 22 21:12:47 localhost kernel: test_dev :15:00.0: found PCI INT A -> IRQ 
10  >>>> this is the print from the kernel PCI probe, and 
Mar 22 21:12:47 localhost kernel: test_dev: IRQ 16 successfully assigned >>>>>> 
this is my print after enabling MSI and registering for the IRQ.
 
I get a similar print with the plain Linux kernel, where the IRQ assigned with 
MSI enabled is IRQ 20. [Below I have pasted the kernel log of the plain Linux 
kernel.] I have tested the MSI interrupts there and they work fine. 
Mar 22 23:47:49 localhost kernel: test_dev: Probe for device function=0
Mar 22 23:47:49 localhost kernel: test_dev :15:00.0: found PCI INT A -> IRQ 
10
Mar 22 23:47:49 localhost kernel: pci_bar = 0xfe00 
Mar 22 23:47:49 localhost kernel: test_dev: IRQ 20 successfully assigned to the 
device.
Mar 22 23:47:49 localhost kernel: test_dev: Probe for device function=1
Mar 22 23:47:49 localhost kernel: test_dev: Successfully added test_dev Device 
Driver.

thanks
-krishna 


Re: [Xenomai-core] MSI support in Xenomai

2011-03-25 Thread krishna m

 

From: krishnamurth...@hotmail.com
To: gilles.chanteperd...@xenomai.org; jan.kis...@siemens.com
Date: Tue, 22 Mar 2011 23:50:27 +0530
CC: xenomai-core@gna.org
Subject: Re: [Xenomai-core] MSI support in Xenomai




[...]
 
I just wanted to add 3 more questions:
1. Which versions of Linux, the Adeos patch and Xenomai are tested for stable 
MSI functionality? Please let me know.
2. I am using linux-2.6.37, xenomai-2.5.6 and adeos-ipipe-2.6.37-x86-2.9-00. 
Is this combination fine?
3. I looked at /proc/interrupts: without Xenomai loaded [i.e. on the default 
Linux kernel] I see my card being assigned an MSI-edge interrupt, while with 
the Xenomai kernel I don't see the entry in /proc/interrupts, but in 
/proc/xenomai/irq I see this 

Re: [Xenomai-core] MSI support in Xenomai

2011-03-29 Thread krishna m

From: krishnamurth...@hotmail.com
To: gilles.chanteperd...@xenomai.org; jan.kis...@siemens.com
CC: xenomai-core@gna.org
Subject: RE: [Xenomai-core] MSI support in Xenomai
Date: Fri, 25 Mar 2011 17:51:06 +0530

[...]

[Xenomai-core] Tiny Core Linux + xenomai/RTAI

2011-04-05 Thread krishna m

Has anyone tried applying the Xenomai or RTAI patch to Tiny Core Linux? Will 
it give better performance compared to plain vanilla Linux, given that the 
kernel footprint is small?


[Xenomai-core] Backfire: User <-> Kernel latency measurement tool on Xenomai

2011-04-06 Thread krishna m

I ported the backfire tool from the OSADL site 
[https://www.osadl.org/backfire-4.backfire.0.html] to measure user to/from 
kernel latency. I wanted to measure the difference between the RT_PREEMPT 
kernel and the Xenomai kernel. Surprisingly, I see RT_PREEMPT performing 
better than Xenomai. 
 
Here are a few points to note:
1. The thread priority of the "sendme" tool of backfire under RT_PREEMPT is 99 
[highest].
2. I have set the thread priority to 99 for the rt_task that I spawn [part of 
the ported sendme]:
ret = rt_task_shadow(&rt_task_desc, NULL, 99, 0);
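For reference, the shadowing pattern used in such a port looks roughly like 
the sketch below. It is a minimal example against the Xenomai 2.x native skin; 
the actual round trip through the backfire device is left abstract, since its 
ioctl/signal interface is not shown in this thread.

#include <stdio.h>
#include <stdlib.h>
#include <native/task.h>
#include <native/timer.h>

int main(void)
{
	RT_TASK rt_task_desc;
	RTIME t0, t1;

	/* turn the current Linux thread into a Xenomai task at priority 99 */
	if (rt_task_shadow(&rt_task_desc, NULL, 99, 0))
		exit(1);

	t0 = rt_timer_read();	/* timestamp in nanoseconds */
	/* ... trigger the user -> kernel -> user round trip here ... */
	t1 = rt_timer_read();

	printf("round trip: %lld ns\n", (long long)(t1 - t0));
	return 0;
}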
 
My questions:
* I wanted to know whether anyone has done such measurements using backfire, 
and how does Xenomai fare against RT_PREEMPT?
* Is there any tool like backfire in the Xenomai tool set that does similar 
measurements?
* Do I need to do more Xenomai-specific optimization in the "sendme" and 
"backfire" code to get better performance?
 
Thanks - Krishna 


Re: [Xenomai-core] Tiny Core Linux + xenomai/RTAI

2011-04-06 Thread krishna m


> Date: Tue, 5 Apr 2011 21:12:36 +0200
> From: gilles.chanteperd...@xenomai.org
> To: gm...@reliableembeddedsystems.com
> CC: xenomai-core@gna.org
> Subject: Re: [Xenomai-core] Tiny Core Linux + xenomai/RTAI
>
> Robert Berger wrote:
> > Hi Gilles,
> >
> > On 04/05/2011 08:17 PM, Gilles Chanteperdrix wrote:
> >> Ok, we are on Xenomai-core, so, let us discuss. If we admit that the OP
> >> is indeed talking about latencies (a quantifiable measure of
> >> determinism), suggesting that the effect on cache of the Linux kernel
> >> might influence the latencies is not completely irrelevant: the
> >> benchmarks we make with Xenomai tend to consistently show that cache
> >> thrashing by the Linux kernel has an effect on latencies.
> >
> > Yes, this did not immediately come to my mind. Linux cache thrashing
> > affects latencies of threads running under Xenomai (user and/or kernel
> > space), but as you point out (
> > http://permalink.gmane.org/gmane.linux.real-time.xenomai.devel/8167 )
> > that is distro independent.
>
> Yes, and I completely agree with your answer; I was just replying for
> the sake of completeness.

Thanks for replying. I got to read important points that affect the performance 
of a realtime system. I was indeed talking about the determinism of the RTOS 
when i referred to the performance. I am right now looking at interrupt 
determinism, task switch latency and jitter performance of the RTOS
[Xenomai in this case]on x86 platform. Since there is no Cache locking 
mechanism on x86. i wanted to know if a smaller footprint kernel would 
improve the performance of the above mentioned parameters of RTOS. 

 
>
> >
> > I know that the answer to this question might not be trivial, but what
> > would you suggest could be done to minimize cache thrashing?
>
> If we are talking embedded systems, you have control over the non
> real-time activities you run, so you can try to be frugal in the way
> they use the cache (I am not sure anyone really does that; I, for one,
> tried optimizing a toy application for cache usage and saw that the
> effect is impressive).
>
> The other way is not to minimize cache thrashing, but to minimize its
> effect on latencies. You can do that by increasing the frequency of the
> "critical" real-time task. By increasing its frequency, you make it more
> likely to remain in cache, and so decrease its latency. This is why, for
> instance, on most (*) platforms you get a better latency with smaller
> periods.
>
> On some embedded platforms, you also have the choice to lock some cache
> lines, or move some data or code to some fast memory essentially as fast
> as cache (TCMs or SRAMs on ARM). This is a promising solution, at least
> on ARM, where many SOCs have such special memory, but as far as I know,
> nobody has tried it yet and reported success or failure.
>
> (*) the exception being armv4 or armv5 without the FCSE extension, where
> the cache is flushed all the time anyway.
>
> >
> >> Also, having shorter latencies means that we cover a larger range of
> >> user-application needs. So, we try to have short latencies.
> >>
> >
> > Whoever wants to see Xenomai latencies in action can compile cyclictest
> > with and without Xenomai and see the differences. On the platforms I've
> > tried so far the differences are clearly visible;)
>
> Again, I agree with your answer: determinism, i.e. worst-case latency,
> is what matters for a real-time system, but:
> - smaller worst case latency covers a broader range of applications;
> - smaller average case latency means more CPU cycles for the Linux
> kernel to run, so dual-kernel-based solutions still try to preserve
> the average case latency and are a bit special with regard to that
> particular question.
>
> --
> Gilles.
>
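As a concrete way to observe the period effect Gilles describes, Xenomai's 
bundled latency test takes the sampling period as a parameter, so runs at 
different periods can be compared directly (invocations given as an example; 
periods are in microseconds):

	latency -p 1000
	latency -p 100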
___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core