Re: one question about MSI-X support for vPCI

2020-11-19 Thread Peter pan
Hi, Jan,

The following is my debug log:

On root cell, enable root cell:
[  267.783070] uio_ivshmem 0003:00:00.0: output_section at 
0xfb70a000, size 0x2000
[  267.792364] ivshmem-net 0003:00:01.0: enabling device ( -> 0002)
[  267.798798] ivshmem-net 0003:00:01.0: TX memory at 0xfb801000, 
size 0x0007f000
[  267.807443] ivshmem-net 0003:00:01.0: RX memory at 0xfb88, 
size 0x0007f000
[  267.816408] ivshm_net_state_change:state=0,peer_state=0  // Then the
root cell NIC state changes to INIT; the inmate NIC is RESET
[  267.816471] The Jailhouse is opening.

Then load and start the Linux inmate cell:
[  673.503776] ivshm_net_state_change:state=1,peer_state=1  // Then the
root cell NIC state changes to READY; the inmate NIC is INIT
[  673.510338] ivshm_net_state_change:state=2,peer_state=2  // Then the
root cell NIC state changes to READY; the inmate NIC is READY, and then the
carrier is set on
[  673.516315] *set carrier on

For the inmate cell, during kernel boot and driver probe:

[1.649054] ivshmem-net :00:01.0: enabling device ( -> 0002)
[1.655516] ivshmem-net :00:01.0: TX memory at 0xfb88, 
size 0x0007f000
[1.664142] ivshmem-net :00:01.0: RX memory at 0xfb801000, 
size 0x0007f000
[1.673180] ivshm_net_state_change:state=0,peer_state=1 // Then the
inmate cell NIC state changes to INIT; the root cell NIC is INIT
[1.673579] uio_ivshmem :00:00.0: enabling device ( -> 0002)
[1.685477] ivshm_net_state_change:state=1,peer_state=2 // Then the
inmate cell NIC state changes to READY; the root cell NIC is READY. After
that, and before the NIC is brought up with ifconfig, ivshm_net_state_change
is not called again, so the carrier is never set on.
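
For reference, the numeric state values printed by ivshm_net_state_change
above map to the driver's state constants, which, as far as I can read them
from drivers/net/ivshmem-net.c, are:

/* State encoding used in the ivshm_net_state_change traces above
 * (per my reading of drivers/net/ivshmem-net.c). */
#define IVSHM_NET_STATE_RESET	0	/* interface closed / reset */
#define IVSHM_NET_STATE_INIT	1	/* shared-memory queues initialized */
#define IVSHM_NET_STATE_READY	2	/* ready, waiting for the peer */
#define IVSHM_NET_STATE_RUN	3	/* interface opened and running */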


We can see that before the virtual NIC is brought up (opened) with ifconfig,
although the states of the NICs in both the root cell and the inmate cell are
READY, the carrier in the root cell is on while in the inmate cell it is off.
So I don't think the virtual NICs in the root cell and the inmate cell behave
fully symmetrically.

Thanks.
Jiafei.

On Thursday, November 19, 2020 at 4:28:48 PM UTC+8 wrote:

> On 19.11.20 08:52, Peter pan wrote: 
> > Hi, Jan, 
> > 
> > After some investigation, I found the root cause of the issue: the 
> > carrier is not set on if we open the virtual NIC in the inmate first. 
> > The attached patch fixes this issue; please help to review it. By the 
> > way, where can I upstream this patch? 
>
> Thanks for the patch! 
>
> I'm just wondering, given that ivshmem-net is conceptually fully 
> symmetric, what is causing this issue to only happen in one way. Guess I 
> need to study the scenario in detail. 
>
> Jan 
>
> > Thanks. 
> > 
> > Best Regards, 
> > Jiafei. 
> > 
> > On Wednesday, November 18, 2020 at 6:01:51 PM UTC+8 wrote: 
> > 
> > On 18.11.20 10:50, Peter pan wrote: 
> > > Hi, Jan, 
> > > 
> > > I have one new issue and am not sure whether it is a known issue. 
> > > 
> > > The issue is: when I ifconfig up the ivshmem-net NIC in the root cell 
> > > first and then ifconfig up the ivshmem NIC in the inmate cell (running 
> > > Linux), I can ping between the two NICs. But if I ifconfig up the NIC 
> > > in the inmate cell before bringing up the NIC in the root cell, I 
> > > can't ping between the two NICs: the NIC in the inmate can only 
> > > receive packets sent from the root cell NIC, while the NIC in the root 
> > > cell doesn't receive any packets, and no IRQ is received for the 
> > > ivshmem NIC either. 
> > > 
> > 
> > The link states of both virtual NICs are up (ethtool)? Is there any 
> > ivshmem-net interrupt received at all on the root side? There should be 
> > a few during setup at least. 
> > 
> > Check that the interrupt line on the root side is really free, and also 
> > that GICD is properly intercepted by Jailhouse (check mappings). 
> > 
> > Jan 
> > 
> > -- 
> > Siemens AG, T RDA IOT 
> > Corporate Competence Center Embedded Linux 
> > 
>
>
> -- 
> Siemens AG, T RDA IOT 
> Corporate Competence Center Embedded Linux 
>



Re: one question about MSI-X support for vPCI

2020-11-19 Thread Jan Kiszka
On 19.11.20 08:52, Peter pan wrote:
> Hi, Jan,
> 
> After some investigation, I found the root cause of the issue: the
> carrier is not set on if we open the virtual NIC in the inmate first.
> The attached patch fixes this issue; please help to review it. By the
> way, where can I upstream this patch?

Thanks for the patch!

I'm just wondering, given that ivshmem-net is conceptually fully
symmetric, what is causing this issue to only happen in one way. Guess I
need to study the scenario in detail.

Jan

> Thanks.
> 
> Best Regards,
> Jiafei.
> 
> On Wednesday, November 18, 2020 at 6:01:51 PM UTC+8 wrote:
> 
> On 18.11.20 10:50, Peter pan wrote:
> > Hi, Jan,
> >
> > I have one new issue and am not sure whether it is a known issue.
> >
> > The issue is: when I ifconfig up the ivshmem-net NIC in the root cell
> > first and then ifconfig up the ivshmem NIC in the inmate cell (running
> > Linux), I can ping between the two NICs. But if I ifconfig up the NIC
> > in the inmate cell before bringing up the NIC in the root cell, I
> > can't ping between the two NICs: the NIC in the inmate can only
> > receive packets sent from the root cell NIC, while the NIC in the root
> > cell doesn't receive any packets, and no IRQ is received for the
> > ivshmem NIC either.
> >
> 
> The link states of both virtual NICs are up (ethtool)? Is there any
> ivshmem-net interrupt received at all on the root side? There should be
> a few during setup at least.
> 
> Check that the interrupt line on the root side is really free, and also
> that GICD is properly intercepted by Jailhouse (check mappings).
> 
> Jan
> 
> -- 
> Siemens AG, T RDA IOT
> Corporate Competence Center Embedded Linux
> 

-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux



Re: one question about MSI-X support for vPCI

2020-11-18 Thread Peter pan
Hi, Jan,

After some investigation, I found the root cause of the issue: the carrier 
is not set on if we open the virtual NIC in the inmate first. The attached 
patch fixes this issue; please help to review it. By the way, where can I 
upstream this patch?
Thanks.

Best Regards,
Jiafei.

On Wednesday, November 18, 2020 at 6:01:51 PM UTC+8 wrote:

> On 18.11.20 10:50, Peter pan wrote: 
> > Hi, Jan, 
> > 
> > I have one new issue and am not sure whether it is a known issue. 
> > 
> > The issue is: when I ifconfig up the ivshmem-net NIC in the root cell 
> > first and then ifconfig up the ivshmem NIC in the inmate cell (running 
> > Linux), I can ping between the two NICs. But if I ifconfig up the NIC in 
> > the inmate cell before bringing up the NIC in the root cell, I can't 
> > ping between the two NICs: the NIC in the inmate can only receive 
> > packets sent from the root cell NIC, while the NIC in the root cell 
> > doesn't receive any packets, and no IRQ is received for the ivshmem NIC 
> > either. 
> > 
>
> The link states of both virtual NICs are up (ethtool)? Is there any 
> ivshmem-net interrupt received at all on the root side? There should be 
> a few during setup at least. 
>
> Check that the interrupt line on the root side is really free, and also 
> that GICD is properly intercepted by Jailhouse (check mappings). 
>
> Jan 
>
> -- 
> Siemens AG, T RDA IOT 
> Corporate Competence Center Embedded Linux 
>

From 828eecf2c696410a30bfd7c5c7a0d384c4bec7c5 Mon Sep 17 00:00:00 2001
From: Jiafei Pan 
Date: Thu, 19 Nov 2020 15:34:47 +0800
Subject: [PATCH] ivshmem-net: set carrier on if device has been opened

When the virtual NIC is opened in the inmate first, and the
virtual NIC in the root cell is opened afterwards, the inmate's
virtual NIC cannot transmit any packets. The root cause is that
although the local state is already changed to RUN during "open",
the carrier also needs to be set on once the peer's state changes
to RUN; otherwise the network stack will not hand packets to the
virtual NIC.

Signed-off-by: Jiafei Pan 
---
 drivers/net/ivshmem-net.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/ivshmem-net.c b/drivers/net/ivshmem-net.c
index 18d5a15dbec2..8aab21b3febe 100644
--- a/drivers/net/ivshmem-net.c
+++ b/drivers/net/ivshmem-net.c
@@ -623,6 +623,12 @@ static void ivshm_net_state_change(struct work_struct *work)
 			netif_carrier_off(ndev);
 			ivshm_net_do_stop(ndev);
 		}
+		/* This side may already be open (state is RUN); set the
+		 * carrier on once the peer also reaches RUN.
+		 */
+		if (peer_state == IVSHM_NET_STATE_RUN)
+			netif_carrier_on(ndev);
+
 		break;
 	}
 
-- 
2.17.1



Re: one question about MSI-X support for vPCI

2020-11-18 Thread Jan Kiszka
On 18.11.20 10:50, Peter pan wrote:
> Hi, Jan,
> 
> I have one new issue and am not sure whether it is a known issue.
> 
> The issue is: when I ifconfig up the ivshmem-net NIC in the root cell
> first and then ifconfig up the ivshmem NIC in the inmate cell (running
> Linux), I can ping between the two NICs. But if I ifconfig up the NIC in
> the inmate cell before bringing up the NIC in the root cell, I can't
> ping between the two NICs: the NIC in the inmate can only receive
> packets sent from the root cell NIC, while the NIC in the root cell
> doesn't receive any packets, and no IRQ is received for the ivshmem NIC
> either.
> 

The link states of both virtual NICs are up (ethtool)? Is there any
ivshmem-net interrupt received at all on the root side? There should be
a few during setup at least.

Check that the interrupt line on the root side is really free, and also
that GICD is properly intercepted by Jailhouse (check mappings).

Jan

-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux



Re: one question about MSI-X support for vPCI

2020-11-18 Thread Peter pan
Hi, Jan,

I have one new issue and am not sure whether it is a known issue.

The issue is: when I ifconfig up the ivshmem-net NIC in the root cell first 
and then ifconfig up the ivshmem NIC in the inmate cell (running Linux), I 
can ping between the two NICs. But if I ifconfig up the NIC in the inmate 
cell before bringing up the NIC in the root cell, I can't ping between the 
two NICs: the NIC in the inmate can only receive packets sent from the root 
cell NIC, while the NIC in the root cell doesn't receive any packets, and 
no IRQ is received for the ivshmem NIC either.

Thanks.

Best Regards,
Jiafei.

On Friday, November 13, 2020 at 5:55:52 PM UTC+8 wrote:

> Thanks Jan, now it works with INTx.
>
> Jiafei.
>
> On Thursday, November 12, 2020 at 9:05:15 PM UTC+8 wrote:
>
>> On 12.11.20 09:50, Peter pan wrote: 
>> > Dear Jailhouse Community, 
>> > 
>> > I am running Jailhouse on kernel v5.4 and have ported some ivshmem 
>> > patches from http://git.kiszka.org/?p=linux.git;a=summary 
>> > 
>> > The issue I have is that uio_ivshmem and ivshmem-net fail to probe when 
>> > I use MSI-X mode for vPCI after I run the Jailhouse enable command. 
>> > Please find the following log: 
>> > 
>> > [   21.581019] jailhouse: loading out-of-tree module taints kernel. 
>> > 
>> > [   30.000988] pci-host-generic fb50.pci: host bridge /pci@0 
>> ranges: 
>> > 
>> > [   30.000997] pci-host-generic fb50.pci:   MEM 
>> > 0xfb60..0xfb603fff -> 0xfb60 
>> > 
>> > [   30.001028] pci-host-generic fb50.pci: ECAM at [mem 
>> > 0xfb50-0xfb5f] for [bus 00] 
>> > 
>> > [   30.001081] pci-host-generic fb50.pci: PCI host bridge to bus 
>> 0003:00 
>> > 
>> > [   30.001085] pci_bus 0003:00: root bus resource [bus 00] 
>> > 
>> > [   30.001087] pci_bus 0003:00: root bus resource [mem 
>> > 0xfb60-0xfb603fff] 
>> > 
>> > [   30.001105] pci 0003:00:00.0: [110a:4106] type 00 class 0xff 
>> > 
>> > [   30.001128] pci 0003:00:00.0: reg 0x10: [mem 0x-0x0fff] 
>> > 
>> > [   30.001136] pci 0003:00:00.0: reg 0x14: [mem 0x-0x01ff] 
>> > 
>> > [   30.001340] pci 0003:00:01.0: [110a:4106] type 00 class 0xff0001 
>> > 
>> > [   30.001359] pci 0003:00:01.0: reg 0x10: [mem 0x-0x0fff] 
>> > 
>> > [   30.001368] pci 0003:00:01.0: reg 0x14: [mem 0x-0x01ff] 
>> > 
>> > [   30.002389] pci 0003:00:00.0: BAR 0: assigned [mem 
>> 0xfb60-0xfb600fff] 
>> > 
>> > [   30.002397] pci 0003:00:01.0: BAR 0: assigned [mem 
>> 0xfb601000-0xfb601fff] 
>> > 
>> > [   30.002403] pci 0003:00:00.0: BAR 1: assigned [mem 
>> 0xfb602000-0xfb6021ff] 
>> > 
>> > [   30.002409] pci 0003:00:01.0: BAR 1: assigned [mem 
>> 0xfb602200-0xfb6023ff] 
>> > 
>> > [   30.002478] uio_ivshmem 0003:00:00.0: enabling device ( -> 0002) 
>> > 
>> > [   30.002505] uio_ivshmem 0003:00:00.0: state_table at 
>> > 0xfb70, size 0x1000 
>> > 
>> > [   30.002512] uio_ivshmem 0003:00:00.0: rw_section at 
>> > 0xfb701000, size 0x9000 
>> > 
>> > [   30.002520] uio_ivshmem 0003:00:00.0: input_sections at 
>> > 0xfb70a000, size 0x6000 
>> > 
>> > [   30.002524] uio_ivshmem 0003:00:00.0: output_section at 
>> > 0xfb70a000, size 0x2000 
>> > 
>> > [   30.002576] uio_ivshmem: probe of 0003:00:00.0 failed with error -28 
>> > 
>> > [   30.002620] ivshmem-net 0003:00:01.0: enabling device ( -> 0002) 
>> > 
>> > [   30.002664] ivshmem-net 0003:00:01.0: TX memory at 
>> > 0xfb801000, size 0x0007f000 
>> > 
>> > [   30.002667] ivshmem-net 0003:00:01.0: RX memory at 
>> > 0xfb88, size 0x0007f000 
>> > 
>> > [   30.047630] ivshmem-net: probe of 0003:00:01.0 failed with error -28 
>> > 
>> > [   30.047714] The Jailhouse is opening. 
>> > 
>> > 
>> > After some investigation I found that the DT node of the vPCI bus is 
>> > added to the root cell using vpci_template.dts. The Jailhouse driver's 
>> > create_vpci_of_overlay() in driver/pci.c does not add an "msi-parent" 
>> > phandle to this PCI node, but the kernel driver of the virtual PCI 
>> > device uses the following call: 
>> > ret = pci_alloc_irq_vectors(pdev, 1, 2, PCI_IRQ_LEGACY | PCI_IRQ_MSIX); 
>> > So the driver will try to set up an MSI-X IRQ for the device. Because 
>> > no MSI controller is specified for the vPCI node, no irq-domain is 
>> > provided for this PCI bus and the IRQ allocation fails. 
>> > 
>> > So how can this issue be fixed? Appreciate any comments and 
>> > suggestions, thanks. 
>>
>> The vPCI support in Jailhouse injects interrupts as legacy INTx. For 
>> that, you need to provide up to 4 (less if you have less ivshmem 
>> devices) consecutive SPIs that are not in use by real devices (in any 
>> cell). See other arm64 configs, specifically look for vpci_irq_base. 
>>
>> Jan 
>> -- 
>> Siemens AG, T RDA IOT 
>> Corporate Competence Center Embedded Linux 
>>
>>


Re: one question about MSI-X support for vPCI

2020-11-13 Thread Peter pan
Thanks Jan, now it works with INTx.

Jiafei.

On Thursday, November 12, 2020 at 9:05:15 PM UTC+8 wrote:

> On 12.11.20 09:50, Peter pan wrote:
> > Dear Jailhouse Community,
> > 
> > I am running Jailhouse on kernel v5.4 and have ported some ivshmem
> > patches from http://git.kiszka.org/?p=linux.git;a=summary
> > 
> > The issue I have is that uio_ivshmem and ivshmem-net fail to probe when
> > I use MSI-X mode for vPCI after I run the Jailhouse enable command.
> > Please find the following log:
> > 
> > [   21.581019] jailhouse: loading out-of-tree module taints kernel.
> > 
> > [   30.000988] pci-host-generic fb50.pci: host bridge /pci@0 ranges:
> > 
> > [   30.000997] pci-host-generic fb50.pci:   MEM
> > 0xfb60..0xfb603fff -> 0xfb60
> > 
> > [   30.001028] pci-host-generic fb50.pci: ECAM at [mem
> > 0xfb50-0xfb5f] for [bus 00]
> > 
> > [   30.001081] pci-host-generic fb50.pci: PCI host bridge to bus 
> 0003:00
> > 
> > [   30.001085] pci_bus 0003:00: root bus resource [bus 00]
> > 
> > [   30.001087] pci_bus 0003:00: root bus resource [mem
> > 0xfb60-0xfb603fff]
> > 
> > [   30.001105] pci 0003:00:00.0: [110a:4106] type 00 class 0xff
> > 
> > [   30.001128] pci 0003:00:00.0: reg 0x10: [mem 0x-0x0fff]
> > 
> > [   30.001136] pci 0003:00:00.0: reg 0x14: [mem 0x-0x01ff]
> > 
> > [   30.001340] pci 0003:00:01.0: [110a:4106] type 00 class 0xff0001
> > 
> > [   30.001359] pci 0003:00:01.0: reg 0x10: [mem 0x-0x0fff]
> > 
> > [   30.001368] pci 0003:00:01.0: reg 0x14: [mem 0x-0x01ff]
> > 
> > [   30.002389] pci 0003:00:00.0: BAR 0: assigned [mem 
> 0xfb60-0xfb600fff]
> > 
> > [   30.002397] pci 0003:00:01.0: BAR 0: assigned [mem 
> 0xfb601000-0xfb601fff]
> > 
> > [   30.002403] pci 0003:00:00.0: BAR 1: assigned [mem 
> 0xfb602000-0xfb6021ff]
> > 
> > [   30.002409] pci 0003:00:01.0: BAR 1: assigned [mem 
> 0xfb602200-0xfb6023ff]
> > 
> > [   30.002478] uio_ivshmem 0003:00:00.0: enabling device ( -> 0002)
> > 
> > [   30.002505] uio_ivshmem 0003:00:00.0: state_table at
> > 0xfb70, size 0x1000
> > 
> > [   30.002512] uio_ivshmem 0003:00:00.0: rw_section at
> > 0xfb701000, size 0x9000
> > 
> > [   30.002520] uio_ivshmem 0003:00:00.0: input_sections at
> > 0xfb70a000, size 0x6000
> > 
> > [   30.002524] uio_ivshmem 0003:00:00.0: output_section at
> > 0xfb70a000, size 0x2000
> > 
> > [   30.002576] uio_ivshmem: probe of 0003:00:00.0 failed with error -28
> > 
> > [   30.002620] ivshmem-net 0003:00:01.0: enabling device ( -> 0002)
> > 
> > [   30.002664] ivshmem-net 0003:00:01.0: TX memory at
> > 0xfb801000, size 0x0007f000
> > 
> > [   30.002667] ivshmem-net 0003:00:01.0: RX memory at
> > 0xfb88, size 0x0007f000
> > 
> > [   30.047630] ivshmem-net: probe of 0003:00:01.0 failed with error -28
> > 
> > [   30.047714] The Jailhouse is opening.
> > 
> > 
> > After some investigation I found that the DT node of the vPCI bus is
> > added to the root cell using vpci_template.dts. The Jailhouse driver's
> > create_vpci_of_overlay() in driver/pci.c does not add an "msi-parent"
> > phandle to this PCI node, but the kernel driver of the virtual PCI
> > device uses the following call:
> > ret = pci_alloc_irq_vectors(pdev, 1, 2, PCI_IRQ_LEGACY | PCI_IRQ_MSIX);
> > So the driver will try to set up an MSI-X IRQ for the device. Because
> > no MSI controller is specified for the vPCI node, no irq-domain is
> > provided for this PCI bus and the IRQ allocation fails.
> > 
> > So how can this issue be fixed? Appreciate any comments and suggestions,
> > thanks.
>
> The vPCI support in Jailhouse injects interrupts as legacy INTx. For
> that, you need to provide up to 4 (less if you have less ivshmem
> devices) consecutive SPIs that are not in use by real devices (in any
> cell). See other arm64 configs, specifically look for vpci_irq_base.
>
> Jan
> -- 
> Siemens AG, T RDA IOT
> Corporate Competence Center Embedded Linux
>
>



Re: one question about MSI-X support for vPCI

2020-11-12 Thread Jan Kiszka
On 12.11.20 09:50, Peter pan wrote:
> Dear Jailhouse Community,
> 
> I am running Jailhouse on kernel v5.4 and have ported some ivshmem
> patches from http://git.kiszka.org/?p=linux.git;a=summary
> 
> The issue I have is that uio_ivshmem and ivshmem-net fail to probe when
> I use MSI-X mode for vPCI after I run the Jailhouse enable command.
> Please find the following log:
> 
> [   21.581019] jailhouse: loading out-of-tree module taints kernel.
> 
> [   30.000988] pci-host-generic fb50.pci: host bridge /pci@0 ranges:
> 
> [   30.000997] pci-host-generic fb50.pci:   MEM
> 0xfb60..0xfb603fff -> 0xfb60
> 
> [   30.001028] pci-host-generic fb50.pci: ECAM at [mem
> 0xfb50-0xfb5f] for [bus 00]
> 
> [   30.001081] pci-host-generic fb50.pci: PCI host bridge to bus 0003:00
> 
> [   30.001085] pci_bus 0003:00: root bus resource [bus 00]
> 
> [   30.001087] pci_bus 0003:00: root bus resource [mem
> 0xfb60-0xfb603fff]
> 
> [   30.001105] pci 0003:00:00.0: [110a:4106] type 00 class 0xff
> 
> [   30.001128] pci 0003:00:00.0: reg 0x10: [mem 0x-0x0fff]
> 
> [   30.001136] pci 0003:00:00.0: reg 0x14: [mem 0x-0x01ff]
> 
> [   30.001340] pci 0003:00:01.0: [110a:4106] type 00 class 0xff0001
> 
> [   30.001359] pci 0003:00:01.0: reg 0x10: [mem 0x-0x0fff]
> 
> [   30.001368] pci 0003:00:01.0: reg 0x14: [mem 0x-0x01ff]
> 
> [   30.002389] pci 0003:00:00.0: BAR 0: assigned [mem 0xfb60-0xfb600fff]
> 
> [   30.002397] pci 0003:00:01.0: BAR 0: assigned [mem 0xfb601000-0xfb601fff]
> 
> [   30.002403] pci 0003:00:00.0: BAR 1: assigned [mem 0xfb602000-0xfb6021ff]
> 
> [   30.002409] pci 0003:00:01.0: BAR 1: assigned [mem 0xfb602200-0xfb6023ff]
> 
> [   30.002478] uio_ivshmem 0003:00:00.0: enabling device ( -> 0002)
> 
> [   30.002505] uio_ivshmem 0003:00:00.0: state_table at
> 0xfb70, size 0x1000
> 
> [   30.002512] uio_ivshmem 0003:00:00.0: rw_section at
> 0xfb701000, size 0x9000
> 
> [   30.002520] uio_ivshmem 0003:00:00.0: input_sections at
> 0xfb70a000, size 0x6000
> 
> [   30.002524] uio_ivshmem 0003:00:00.0: output_section at
> 0xfb70a000, size 0x2000
> 
> [   30.002576] uio_ivshmem: probe of 0003:00:00.0 failed with error -28
> 
> [   30.002620] ivshmem-net 0003:00:01.0: enabling device ( -> 0002)
> 
> [   30.002664] ivshmem-net 0003:00:01.0: TX memory at
> 0xfb801000, size 0x0007f000
> 
> [   30.002667] ivshmem-net 0003:00:01.0: RX memory at
> 0xfb88, size 0x0007f000
> 
> [   30.047630] ivshmem-net: probe of 0003:00:01.0 failed with error -28
> 
> [   30.047714] The Jailhouse is opening.
> 
> 
> After some investigation I found that the DT node of the vPCI bus is
> added to the root cell using vpci_template.dts. The Jailhouse driver's
> create_vpci_of_overlay() in driver/pci.c does not add an "msi-parent"
> phandle to this PCI node, but the kernel driver of the virtual PCI
> device uses the following call:
> ret = pci_alloc_irq_vectors(pdev, 1, 2, PCI_IRQ_LEGACY | PCI_IRQ_MSIX);
> So the driver will try to set up an MSI-X IRQ for the device. Because
> no MSI controller is specified for the vPCI node, no irq-domain is
> provided for this PCI bus and the IRQ allocation fails.
> 
> So how can this issue be fixed? Appreciate any comments and suggestions, thanks.

The vPCI support in Jailhouse injects interrupts as legacy INTx. For
that, you need to provide up to 4 (less if you have less ivshmem
devices) consecutive SPIs that are not in use by real devices (in any
cell). See other arm64 configs, specifically look for vpci_irq_base.

Jan
-- 
Siemens AG, T RDA IOT
Corporate Competence Center Embedded Linux



one question about MSI-X support for vPCI

2020-11-12 Thread Peter pan
Dear Jailhouse Community,

I am running Jailhouse on kernel v5.4 and have ported some ivshmem patches 
from http://git.kiszka.org/?p=linux.git;a=summary

The issue I have is that uio_ivshmem and ivshmem-net fail to probe when I 
use MSI-X mode for vPCI after I run the Jailhouse enable command.  Please 
find the following log:

[   21.581019] jailhouse: loading out-of-tree module taints kernel.

[   30.000988] pci-host-generic fb50.pci: host bridge /pci@0 ranges:

[   30.000997] pci-host-generic fb50.pci:   MEM 0xfb60..0xfb603fff 
-> 0xfb60

[   30.001028] pci-host-generic fb50.pci: ECAM at [mem 
0xfb50-0xfb5f] for [bus 00]

[   30.001081] pci-host-generic fb50.pci: PCI host bridge to bus 0003:00

[   30.001085] pci_bus 0003:00: root bus resource [bus 00]

[   30.001087] pci_bus 0003:00: root bus resource [mem 
0xfb60-0xfb603fff]

[   30.001105] pci 0003:00:00.0: [110a:4106] type 00 class 0xff

[   30.001128] pci 0003:00:00.0: reg 0x10: [mem 0x-0x0fff]

[   30.001136] pci 0003:00:00.0: reg 0x14: [mem 0x-0x01ff]

[   30.001340] pci 0003:00:01.0: [110a:4106] type 00 class 0xff0001

[   30.001359] pci 0003:00:01.0: reg 0x10: [mem 0x-0x0fff]

[   30.001368] pci 0003:00:01.0: reg 0x14: [mem 0x-0x01ff]

[   30.002389] pci 0003:00:00.0: BAR 0: assigned [mem 0xfb60-0xfb600fff]

[   30.002397] pci 0003:00:01.0: BAR 0: assigned [mem 0xfb601000-0xfb601fff]

[   30.002403] pci 0003:00:00.0: BAR 1: assigned [mem 0xfb602000-0xfb6021ff]

[   30.002409] pci 0003:00:01.0: BAR 1: assigned [mem 0xfb602200-0xfb6023ff]

[   30.002478] uio_ivshmem 0003:00:00.0: enabling device ( -> 0002)

[   30.002505] uio_ivshmem 0003:00:00.0: state_table at 0xfb70, 
size 0x1000

[   30.002512] uio_ivshmem 0003:00:00.0: rw_section at 0xfb701000, 
size 0x9000

[   30.002520] uio_ivshmem 0003:00:00.0: input_sections at 
0xfb70a000, size 0x6000

[   30.002524] uio_ivshmem 0003:00:00.0: output_section at 
0xfb70a000, size 0x2000

[   30.002576] uio_ivshmem: probe of 0003:00:00.0 failed with error -28

[   30.002620] ivshmem-net 0003:00:01.0: enabling device ( -> 0002)

[   30.002664] ivshmem-net 0003:00:01.0: TX memory at 0xfb801000, 
size 0x0007f000

[   30.002667] ivshmem-net 0003:00:01.0: RX memory at 0xfb88, 
size 0x0007f000

[   30.047630] ivshmem-net: probe of 0003:00:01.0 failed with error -28

[   30.047714] The Jailhouse is opening.

After some investigation I found that the DT node of the vPCI bus is added 
to the root cell using vpci_template.dts. The Jailhouse driver's 
create_vpci_of_overlay() in driver/pci.c does not add an "msi-parent" 
phandle to this PCI node, but the kernel driver of the virtual PCI device 
uses the following call:
ret = pci_alloc_irq_vectors(pdev, 1, 2, PCI_IRQ_LEGACY | PCI_IRQ_MSIX);
So the driver will try to set up an MSI-X IRQ for the device. Because no 
MSI controller is specified for the vPCI node, no irq-domain is provided 
for this PCI bus and the IRQ allocation fails.
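
For clarity, the call above boils down to something like the sketch below.
This is not the actual ivshmem-net code (the example_* names are made up);
it only illustrates that PCI_IRQ_LEGACY | PCI_IRQ_MSIX asks the PCI core to
try MSI-X first and fall back to a legacy INTx vector, and that without an
"msi-parent" (hence without an MSI irq-domain) only the INTx path can work:

#include <linux/interrupt.h>
#include <linux/pci.h>

static irqreturn_t example_irq_handler(int irq, void *dev_id)
{
	/* device-specific interrupt handling would go here */
	return IRQ_HANDLED;
}

static int example_request_irq(struct pci_dev *pdev)
{
	int nvec;

	/* Try MSI-X (up to 2 vectors), fall back to the single INTx vector. */
	nvec = pci_alloc_irq_vectors(pdev, 1, 2,
				     PCI_IRQ_LEGACY | PCI_IRQ_MSIX);
	if (nvec < 0)
		return nvec;	/* no interrupt vector could be set up */

	/* pci_irq_vector() maps vector index 0 to a Linux IRQ number. */
	return request_irq(pci_irq_vector(pdev, 0), example_irq_handler,
			   IRQF_SHARED, "example", pdev);
}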

So how can this issue be fixed? Appreciate any comments and suggestions, thanks.

Best Regards,
Jiafei.



Re: Question about MSI-X

2018-10-01 Thread Henning Schild
On Mon, 1 Oct 2018 17:38:27 +0200,
Claudio Scordino wrote:

> Hi Jan,
> 
> Il 28/09/2018 14:11, Jan Kiszka ha scritto:
> > On 28.09.18 12:07, Claudio Scordino wrote:  
> >> Dear all,
> >>
> >> I'm implementing a minimal inmate driver for i210 and I wonder if
> >> I have understood correctly the usage of the MSI-X functions.
> >>
> >> Once the correct MSI-X bar (BAR3, in my case) has been mapped, I
> >> need to invoke both:
> >>
> >>  int_set_handler(IRQ_VECTOR, irq_handler);
> >>  pci_msix_set_vector(bdf, IRQ_VECTOR, 0);
> >>
> >> Is IRQ_VECTOR the value reported by lspci ("pin A routed to IRQ
> >> 18") or is it the value reported by /proc/interrupts (129, in my
> >> specific case) ?  
> > 
> > No, it is a free APIC vector in your setup. Can be anything >= 32.
> >   
> >>
> >> BTW, what is the third argument of pci_msix_set_vector() supposed
> >> to contain ?  
> > 
> > If your device is able and configured to generate multiple MSI-X
> > vectors (e.g. one vector per queue, one for maintenance etc.), this
> > links them to the desired APIC vector.  
> 
> Thank you for the clarifications.
> 
> The device seems to be able to either use a single or a multiple
> MSI-X vector (based on the value of a register). However, in both
> cases I can't get any interrupt on reception and I can't figure out a
> simple way for understanding if it is a device or a cell
> misconfiguration. The platform is x86 with Intel i210, programmed to
> use only the first queue. Polling (similar to the e1000-demo) works
> fine.
> 
> This is the lspci output (MSI-X  is available on BAR3):
> 
>   03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
>   Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
>   Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
>   Latency: 0
>   Interrupt: pin A routed to IRQ 18
>   Region 0: Memory at df10 (32-bit, non-prefetchable) [size=1M]
>   Region 2: I/O ports at e000 [disabled] [size=32]
>   Region 3: Memory at df20 (32-bit, non-prefetchable) [size=16K]
>   Expansion ROM at df00 [disabled] [size=1M]
>   Capabilities: [40] Power Management version 3
>       Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
>       Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
>   Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
>       Address:   Data:
>       Masking:   Pending:
>   Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
>       Vector table: BAR=3 offset=
>       PBA: BAR=3 offset=2000
>   Capabilities: [a0] Express (v2) Endpoint, MSI 00
>       DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
>               ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
>       DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
>               RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
>               MaxPayload 256 bytes, MaxReadReq 512 bytes
>       DevSta: CorrErr- UncorrErr+ FatalErr- UnsuppReq+ AuxPwr+ TransPend-
>       LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <2us, L1 <16us
>               ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
>       LnkCtl: ASPM L1 Enabled; RCB 64 bytes Disabled- CommClk+
>               ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
>       LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
>       DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported
>       DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
>       LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
>                Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
>                Compliance De-emphasis: -6dB
>       LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
>                EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
>   Capabilities: [100 v2] Advanced Error Reporting
>       UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
>       UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
>       UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
>       CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
>       CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
>       AERCap: First Error Pointer: 14, GenCap+ CGenEn- ChkCap+ ChkEn-
>   Capabilities: [140 v1] Device Serial Number c4-00-ad-ff-ff-06-d9-72
>   Capabilities: [1a0 v1] Transaction Processing Hints
>       Device specific mode supported
>       Steering table in TPH capability structure
>   Kernel driver in use: igb
>   Kernel modules: igb
> 
> 
> Most of the settings of the inmate cell have been taken from the root
> cell and checked against the datasheet:
> 
> .pci_devices = {
>{ /* Ethernet @03:00.0 */
>.type 

Re: Question about MSI-X

2018-10-01 Thread Jan Kiszka

On 01.10.18 17:38, Claudio Scordino wrote:

Hi Jan,

Il 28/09/2018 14:11, Jan Kiszka ha scritto:

On 28.09.18 12:07, Claudio Scordino wrote:

Dear all,

I'm implementing a minimal inmate driver for i210 and I wonder if I have
understood correctly the usage of the MSI-X functions.

Once the correct MSI-X bar (BAR3, in my case) has been mapped, I need to invoke
both:

 int_set_handler(IRQ_VECTOR, irq_handler);
 pci_msix_set_vector(bdf, IRQ_VECTOR, 0);

Is IRQ_VECTOR the value reported by lspci ("pin A routed to IRQ 18") or is it
the value reported by /proc/interrupts (129, in my specific case) ?


No, it is a free APIC vector in your setup. Can be anything >= 32.



BTW, what is the third argument of pci_msix_set_vector() supposed to contain ?


If your device is able and configured to generate multiple MSI-X vectors (e.g.
one vector per queue, one for maintenance etc.), this links them to the desired
APIC vector.


Thank you for the clarifications.

The device seems to be able to either use a single or a multiple MSI-X vector 
(based on the value of a register).
However, in both cases I can't get any interrupt on reception and I can't figure 
out a simple way for understanding if it is a device or a cell misconfiguration.
The platform is x86 with Intel i210, programmed to use only the first queue. 
Polling (similar to the e1000-demo) works fine.


This is the lspci output (MSI-X  is available on BAR3):

 03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network 
Connection (rev 03)
 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx+
 Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR- 
 Latency: 0
 Interrupt: pin A routed to IRQ 18
 Region 0: Memory at df10 (32-bit, non-prefetchable) [size=1M]
 Region 2: I/O ports at e000 [disabled] [size=32]
 Region 3: Memory at df20 (32-bit, non-prefetchable) [size=16K]
 Expansion ROM at df00 [disabled] [size=1M]
 Capabilities: [40] Power Management version 3
     Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA 
PME(D0+,D1-,D2-,D3hot+,D3cold+)
     Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
 Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
     Address:   Data: 
     Masking:   Pending: 
 Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
     Vector table: BAR=3 offset=
     PBA: BAR=3 offset=2000
 Capabilities: [a0] Express (v2) Endpoint, MSI 00
     DevCap:    MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 
<64us
     ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 
0.000W
     DevCtl:    Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
     RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
     MaxPayload 256 bytes, MaxReadReq 512 bytes
     DevSta:    CorrErr- UncorrErr+ FatalErr- UnsuppReq+ AuxPwr+ TransPend-
     LnkCap:    Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency 
L0s <2us, L1 <16us

     ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
     LnkCtl:    ASPM L1 Enabled; RCB 64 bytes Disabled- CommClk+
     ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
     LnkSta:    Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- 
BWMgmt- ABWMgmt-
     DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not 
Supported
     DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF 
Disabled

     LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
  Transmit Margin: Normal Operating Range, EnterModifiedCompliance- 
ComplianceSOS-

  Compliance De-emphasis: -6dB
     LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, 
EqualizationPhase1-

  EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
 Capabilities: [100 v2] Advanced Error Reporting
     UESta:    DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- 
MalfTLP- ECRC- UnsupReq+ ACSViol-
     UEMsk:    DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- 
MalfTLP- ECRC- UnsupReq- ACSViol-
     UESvrt:    DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ 
MalfTLP+ ECRC- UnsupReq- ACSViol-

     CESta:    RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
     CEMsk:    RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
     AERCap:    First Error Pointer: 14, GenCap+ CGenEn- ChkCap+ ChkEn-
 Capabilities: [140 v1] Device Serial Number c4-00-ad-ff-ff-06-d9-72
 Capabilities: [1a0 v1] Transaction Processing Hints
     Device specific mode supported
     Steering table in TPH capability structure
 Kernel driver in use: igb
 Kernel modules: igb


Most of the settings of the inmate cell have been taken from the root cell and 
checked against the datasheet:


    .pci_devices = 

Re: Question about MSI-X

2018-10-01 Thread Claudio Scordino

Hi Jan,

Il 28/09/2018 14:11, Jan Kiszka ha scritto:

On 28.09.18 12:07, Claudio Scordino wrote:

Dear all,

I'm implementing a minimal inmate driver for i210 and I wonder if I have
understood correctly the usage of the MSI-X functions.

Once the correct MSI-X bar (BAR3, in my case) has been mapped, I need to invoke
both:

 int_set_handler(IRQ_VECTOR, irq_handler);
 pci_msix_set_vector(bdf, IRQ_VECTOR, 0);

Is IRQ_VECTOR the value reported by lspci ("pin A routed to IRQ 18") or is it
the value reported by /proc/interrupts (129, in my specific case) ?


No, it is a free APIC vector in your setup. Can be anything >= 32.



BTW, what is the third argument of pci_msix_set_vector() supposed to contain ?


If your device is able and configured to generate multiple MSI-X vectors (e.g.
one vector per queue, one for maintenance etc.), this links them to the desired
APIC vector.


Thank you for the clarifications.

The device seems to be able to either use a single or a multiple MSI-X vector 
(based on the value of a register).
However, in both cases I can't get any interrupt on reception and I can't 
figure out a simple way for understanding if it is a device or a cell 
misconfiguration.
The platform is x86 with Intel i210, programmed to use only the first queue. 
Polling (similar to the e1000-demo) works fine.

This is the lspci output (MSI-X  is available on BAR3):

03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network 
Connection (rev 03)
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-


Re: Question about MSI-X

2018-09-28 Thread Jan Kiszka
On 28.09.18 12:07, Claudio Scordino wrote:
> Dear all,
> 
> I'm implementing a minimal inmate driver for i210 and I wonder if I have
> understood correctly the usage of the MSI-X functions.
> 
> Once the correct MSI-X bar (BAR3, in my case) has been mapped, I need to 
> invoke
> both:
> 
> int_set_handler(IRQ_VECTOR, irq_handler);
> pci_msix_set_vector(bdf, IRQ_VECTOR, 0);
> 
> Is IRQ_VECTOR the value reported by lspci ("pin A routed to IRQ 18") or is it
> the value reported by /proc/interrupts (129, in my specific case) ?

No, it is a free APIC vector in your setup. Can be anything >= 32.

> 
> BTW, what is the third argument of pci_msix_set_vector() supposed to contain ?

If your device is able and configured to generate multiple MSI-X vectors (e.g.
one vector per queue, one for maintenance etc.), this links them to the desired
APIC vector.
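
Putting both answers together, a minimal sketch for an x86 inmate (modeled
on the existing inmate demos; IRQ_VECTOR and example_setup_msix are only
illustrative names, not something from the tree) could look like this:

#include <inmate.h>

/* Any free APIC vector >= 32 works here; this is not the host IRQ number
 * shown by lspci or /proc/interrupts. */
#define IRQ_VECTOR	42

static void irq_handler(void)
{
	printk("got MSI-X interrupt\n");
	/* acknowledge the device-specific interrupt cause here */
}

static void example_setup_msix(u16 bdf)
{
	int_init();
	int_set_handler(IRQ_VECTOR, irq_handler);
	/* third argument: index of the device's MSI-X table entry to route */
	pci_msix_set_vector(bdf, IRQ_VECTOR, 0);
	asm volatile("sti");
}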

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux



Question about MSI-X

2018-09-28 Thread Claudio Scordino

Dear all,

I'm implementing a minimal inmate driver for i210 and I wonder if I have 
understood correctly the usage of the MSI-X functions.

Once the correct MSI-X bar (BAR3, in my case) has been mapped, I need to invoke 
both:

int_set_handler(IRQ_VECTOR, irq_handler);
pci_msix_set_vector(bdf, IRQ_VECTOR, 0);

Is IRQ_VECTOR the value reported by lspci ("pin A routed to IRQ 18") or is it 
the value reported by /proc/interrupts (129, in my specific case) ?

BTW, what is the third argument of pci_msix_set_vector() supposed to contain ?

Many thanks and best regards,

 Claudio
