[PATCH v6 0/1] PCI: kirin: Add MSI support

2018-07-11 Thread Xiaowei Song


Before Version Patches
======================
patch v5
https://patchwork.kernel.org/patch/10493797/
patch v4
https://patchwork.kernel.org/patch/10402399/
patch v3
https://www.spinics.net/lists/linux-pci/msg72322.html
patch v2
https://www.spinics.net/lists/kernel/msg2797610.html

patch v1
https://www.spinics.net/lists/kernel/msg2796410.html

Changes between V6 and V5
=========================
1. Fix the bug pointed out by Lorenzo.
   (1) Fixed the check of the return value of kirin_pcie_add_msi(),
       as pointed out by Lorenzo.
2. Test the patch on HiKey960.
   (1) The patch is tested on a HiKey960 board with a COLORFUL CN600 M.2 SSD
       connected.
   (2) As I cannot cat /proc/interrupts because "adb shell" is not working,
       some debug logging was added to test this patch, as follows.
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1966,8 +1966,13 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
 		pci_free_irq_vectors(pdev);
 		result = pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
 				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
-		if (result <= 0)
+		if (result <= 0) {
+			printk(KERN_ERR "NVMe: alloc irq Fail.\n");
 			return -EIO;
+		}
+		else
+			printk(KERN_ERR "PCIe Device NVMe enable MSI IRQ Success.\n");
+

--- a/drivers/pci/dwc/pcie-kirin.c
+++ b/drivers/pci/dwc/pcie-kirin.c
@@ -462,6 +462,9 @@ static int kirin_pcie_add_msi(struct dw_pcie *pci,
 		}
 
 		pci->pp.msi_irq = ret;
+
+		dev_err(pci->dev,
+			"Kirin PCIe MSI IRQ No.(%d)\n", ret);
 	}
 
 	return ret;

(3) The running log is shown below:
...
[0.567993] kirin-pcie f4000000.pcie: Kirin PCIe MSI IRQ No.(55)
[0.568012] PCI: OF: host bridge /soc/pcie@f4000000 ranges:
[0.568026] PCI: OF:   MEM 0xf6000000..0xf7ffffff -> 0x00000000
[0.587310] kirin-pcie f4000000.pcie: PCI host bridge to bus 0000:00
[0.587319] pci_bus 0000:00: root bus resource [bus 00-ff]
[0.587326] pci_bus 0000:00: root bus resource [mem 0xf6000000-0xf7ffffff] (bus address [0x00000000-0x01ffffff])
[0.587362] pci 0000:00:00.0: [19e5:3660] type 01 class 0x060400
[0.587409] pci 0000:00:00.0: reg 0x10: [mem 0xf6000000-0xf6ffffff 64bit]
[0.587514] pci 0000:00:00.0: supports D1 D2
[0.587519] pci 0000:00:00.0: PME# supported from D0 D1 D2 D3hot
[0.589065] pci 0000:01:00.0: [10ec:5760] type 00 class 0x010802
[0.589643] pci 0000:01:00.0: reg 0x10: [mem 0xf6000000-0xf6003fff 64bit]
[0.590142] pci 0000:01:00.0: reg 0x24: [mem 0xf6000000-0xf6001fff]
[0.604141] pci 0000:00:00.0: BAR 0: assigned [mem 0xf6000000-0xf6ffffff 64bit]
[0.604158] pci 0000:00:00.0: BAR 14: assigned [mem 0xf7000000-0xf70fffff]
[0.604166] pci 0000:01:00.0: BAR 0: assigned [mem 0xf7000000-0xf7003fff 64bit]
[0.604344] pci 0000:01:00.0: BAR 5: assigned [mem 0xf7004000-0xf7005fff]
[0.604400] pci 0000:00:00.0: PCI bridge to [bus 01-ff]
[0.604409] pci 0000:00:00.0:   bridge window [mem 0xf7000000-0xf70fffff]
[0.604750] pcieport 0000:00:00.0: Signaling PME with IRQ 62
[0.604853] pcieport 0000:00:00.0: AER enabled with IRQ 62
...
[0.623179] nvme nvme0: pci function 0000:01:00.0
[0.623252] nvme 0000:01:00.0: enabling device (0000 -> 0002)
...
[0.974765] PCIe Device NVMe enable MSI IRQ Success.
...
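Since /proc/interrupts is unavailable, the IRQ number can be pulled out of the
added "Kirin PCIe MSI IRQ No.(%d)" log line instead. A minimal sketch of such a
check (the sample line is embedded here for illustration; on the board one
would pipe `dmesg` in instead of `printf`):

```shell
# Extract the MSI IRQ number from the debug line added by this patch.
printf '[0.567993] kirin-pcie f4000000.pcie: Kirin PCIe MSI IRQ No.(55)\n' |
    sed -n 's/.*MSI IRQ No\.(\([0-9]*\)).*/\1/p'
# prints: 55
```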

Changes between V5 and V4
=========================
1. Rebase the patch on the Linux -next branch.
2. Fix issues according to review comments from Andy Shevchenko and Lorenzo.
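Once "adb shell" works again, MSI delivery would normally be confirmed from
/proc/interrupts rather than from added printk lines. A sketch of that check,
with an assumed sample line (the exact chip name and counts will differ on the
board; on the device one would read /proc/interrupts directly):

```shell
# Pick the IRQ number of an MSI line out of /proc/interrupts-style output.
printf ' 55:  1842  0  0  0  ITS-MSI 524288 Edge  nvme0q0\n' |
    awk '/MSI/ { sub(":", "", $1); print $1 }'
# prints: 55
```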
   
