Re: Purpose of pci_remap_iospace

2016-07-15 Thread Arnd Bergmann
On Friday, July 15, 2016 5:21:20 AM CEST Bharat Kumar Gogada wrote:
> > On Thu, Jul 14, 2016 at 01:32:13PM +, Bharat Kumar Gogada wrote:
> >
> > [...]
> >
> > > Hi Lorenzo,
> > >
> > > I missed something in my device tree now I corrected it.
> > >
> > > ranges = <0x0100 0x 0xe000 0x 0xe000 0
> > 0x0001   //io
> >
> > You have not missed anything; you changed the PCI bus address at which
> > your host bridge responds to IO space, and it must match your configuration.
> > At what PCI bus address does your host bridge map IO space?
> >
> Our host bridge does not have a dedicated address space mapped for IO
> transactions. Generating IO transactions requires some register read and
> write operations in the bridge logic.
> 
> So the above PCI address does not come into the picture. Also, is there an
> alternate way to handle IO BARs with our kind of hardware architecture?

Hisilicon has a similar thing on one of their LPC bridges, and
Rongrong Zou has implemented something for it in the past, but I
think it never got merged.

https://lkml.org/lkml/2015/12/29/154 has one version of his
proposal, not sure if that was the latest one or if something
newer exists.

Arnd
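
As a rough sketch of what such "indirect" IO generation can look like when the
bridge has no memory-mapped IO window (register offsets and semantics below are
made up for illustration only, not the Hisilicon LPC or Xilinx NWL programming
model):

#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical register layout, for illustration only. */
#define BRIDGE_IO_ADDR	0x00	/* hypothetical: target PCI IO address     */
#define BRIDGE_IO_DATA	0x04	/* hypothetical: data in/out register      */
#define BRIDGE_IO_CMD	0x08	/* hypothetical: 0 = IO read, 1 = IO write */

static u32 bridge_indirect_io_read(void __iomem *regs, unsigned long port)
{
	writel(port, regs + BRIDGE_IO_ADDR);
	writel(0, regs + BRIDGE_IO_CMD);	/* kick off an IO read cycle */
	return readl(regs + BRIDGE_IO_DATA);
}

static void bridge_indirect_io_write(void __iomem *regs, u32 val,
				     unsigned long port)
{
	writel(port, regs + BRIDGE_IO_ADDR);
	writel(val, regs + BRIDGE_IO_DATA);
	writel(1, regs + BRIDGE_IO_CMD);	/* kick off an IO write cycle */
}

The point of the proposal linked above was to hook accessors of this kind in
behind the generic inb()/outb() helpers instead of a memory-mapped IO window.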


RE: Purpose of pci_remap_iospace

2016-07-14 Thread Bharat Kumar Gogada
> On Thu, Jul 14, 2016 at 01:32:13PM +, Bharat Kumar Gogada wrote:
>
> [...]
>
> > Hi Lorenzo,
> >
> > I missed something in my device tree now I corrected it.
> >
> > ranges = <0x0100 0x 0xe000 0x 0xe000 0
> 0x0001   //io
>
> You have not missed anything; you changed the PCI bus address at which
> your host bridge responds to IO space, and it must match your configuration.
> At what PCI bus address does your host bridge map IO space?
>
Our host bridge does not have a dedicated address space mapped for IO
transactions. Generating IO transactions requires some register read and
write operations in the bridge logic.

So the above PCI address does not come into the picture. Also, is there an
alternate way to handle IO BARs with our kind of hardware architecture?

Thanks & Regards,
Bharat





Re: Purpose of pci_remap_iospace

2016-07-14 Thread Lorenzo Pieralisi
On Thu, Jul 14, 2016 at 05:12:01PM +0200, Arnd Bergmann wrote:
> On Thursday, July 14, 2016 3:56:24 PM CEST Lorenzo Pieralisi wrote:
> > On Thu, Jul 14, 2016 at 01:32:13PM +, Bharat Kumar Gogada wrote:
> > 
> > [...]
> > 
> > > Hi Lorenzo,
> > > 
> > > I missed something in my device tree now I corrected it.
> > > 
> > > ranges = <0x0100 0x 0xe000 0x 0xe000 0 
> > > 0x0001   //io
> > 
> > You have not missed anything, you changed the PCI bus address at
> > which your host bridge responds to IO space and it must match
> > your configuration.
> 
> I'd always recommend mapping the I/O space to PCI address zero, but
> evidently the hardware is not configured that way here.

+1, and it is a message that must be heeded by the Xilinx folks before
merging the host controller changes and the respective DT bindings/dts.

Lorenzo


Re: Purpose of pci_remap_iospace

2016-07-14 Thread Lorenzo Pieralisi
On Thu, Jul 14, 2016 at 03:05:40PM +, Bharat Kumar Gogada wrote:

[...]

> > On Thu, Jul 14, 2016 at 01:32:13PM +, Bharat Kumar Gogada wrote:
> > > ranges = <0x0100 0x 0xe000 0x 0xe000 0
> > 0x0001   //io
> >
> > You have not missed anything; you changed the PCI bus address at which
> > your host bridge responds to IO space, and it must match your configuration.
> > At what PCI bus address does your host bridge map IO space?
> >
> > >  0x0200 0x 0xe010 0x
> > > 0xe010 0 0x0ef0>; //non prefetchabe memory
> > >
> > > [2.389498] nwl-pcie fd0e.pcie: Link is UP
> > > [2.389541] PCI host bridge /amba/pcie@fd0e ranges:
> > > [2.389558]   No bus range found for /amba/pcie@fd0e, using [bus
> > 00-ff]
> > > [2.389583]IO 0xe000..0xe000 -> 0xe000
> > > [2.389624]   MEM 0xe010..0xeeff -> 0xe010
> > > [2.389803] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> > > [2.389822] pci_bus :00: root bus resource [bus 00-ff]
> > > [2.389839] pci_bus :00: root bus resource [io  0x-0x] (bus
> > address [0xe000-0xe000])
> > > [2.389863] pci_bus :00: root bus resource [mem 0xe010-
> > 0xeeff]
> > > [2.390094] pci :00:00.0: cannot attach to SMMU, is it on the same
> > bus?
> > > [2.390110] iommu: Adding device :00:00.0 to group 1
> > > [2.390274] pci :01:00.0: reg 0x20: initial BAR value 0x 
> > > invalid
> > > [2.390481] pci :01:00.0: cannot attach to SMMU, is it on the same
> > bus?
> > > [2.390496] iommu: Adding device :01:00.0 to group 1
> > > [2.390533] in pci_bridge_check_ranges io 101
> > > [2.390545] in pci_bridge_check_ranges io 2 101
> > > [2.390575] pci :00:00.0: BAR 8: assigned [mem 0xe010-
> > 0xe02f]
> > > [2.390592] pci :00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
> > > [2.390609] pci :00:00.0: BAR 6: assigned [mem 0xe030-
> > 0xe03007ff pref]
> > > [2.390636] pci :01:00.0: BAR 0: assigned [mem 
> > > 0xe010-0xe01f
> > 64bit]
> > > [2.390669] pci :01:00.0: BAR 2: assigned [mem 
> > > 0xe020-0xe02f
> > 64bit]
> > > [2.390702] pci :01:00.0: BAR 4: assigned [io  0x1000-0x103f]
> > > [2.390721] pci :00:00.0: PCI bridge to [bus 01-0c]
> > > [2.390785] pci :00:00.0:   bridge window [io  0x1000-0x1fff]
> > > [2.390823] pci :00:00.0:   bridge window [mem 0xe010-
> > 0xe02f]
> > >
> Thanks a lot Lorenzo for your kind and clear explanation, I will dig
> through the hardware and correct my device tree.
> 
> From the above log, why is IO space allocated as only 4k even though I'm
> allocating 64k through the device tree?

You are not allocating anything in the device tree, you are just
defining the physical memory window at which your PCI host bridge
address decoders "map" PCI IO cycles.

PCI core code, while assigning resources, sizes the PCI bridge
IO window BAR by sizing the downstream PCI devices BARs:

See:

pbus_size_io()

PCI core won't allocate an IO window to your PCI bridge window BARs
bigger than what's necessary (according to downstream devices), keeping
alignment in mind.

Is that clear ?

> This email and any attachments are intended for the sole use of the named 
> recipient(s) and contain(s) confidential information that may be proprietary, 
> privileged or copyrighted under applicable law. If you are not the intended 
> recipient, do not read, copy, or forward this email message or any 
> attachments. Delete this email message and any attachments immediately.

This disclaimer should disappear if you want to discuss patches on
public mailing lists.

Thanks,
Lorenzo
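
As a rough illustration of why the endpoint's 64-byte IO BAR still produces a
4K bridge window here (a sketch of the sizing rule only, not the kernel's
pbus_size_io() code): a PCI-to-PCI bridge's IO base/limit registers decode in
4K granularity, so the window is the sum of the downstream IO BAR sizes rounded
up to at least 4K, regardless of how large the host bridge IO aperture is.

#include <stdio.h>

/* Sketch only: mimics the idea behind pbus_size_io(), not the kernel code. */
static unsigned long bridge_io_window(const unsigned long *bar_sizes, int n)
{
	unsigned long total = 0;
	int i;

	for (i = 0; i < n; i++)
		total += bar_sizes[i];

	/* round up to the bridge's 4K IO window granularity */
	return (total + 0xfff) & ~0xfffUL;
}

int main(void)
{
	unsigned long ep_bars[] = { 0x40 };	/* the endpoint's 64-byte IO BAR */

	printf("bridge IO window: 0x%lx\n", bridge_io_window(ep_bars, 1));
	/* prints 0x1000: a 4K window, matching "[io 0x1000-0x1fff]" above */
	return 0;
}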


Re: Purpose of pci_remap_iospace

2016-07-14 Thread Arnd Bergmann
On Thursday, July 14, 2016 3:56:24 PM CEST Lorenzo Pieralisi wrote:
> On Thu, Jul 14, 2016 at 01:32:13PM +, Bharat Kumar Gogada wrote:
> 
> [...]
> 
> > Hi Lorenzo,
> > 
> > I missed something in my device tree now I corrected it.
> > 
> > ranges = <0x0100 0x 0xe000 0x 0xe000 0 
> > 0x0001   //io
> 
> You have not missed anything, you changed the PCI bus address at
> which your host bridge responds to IO space and it must match
> your configuration.

I'd always recommend mapping the I/O space to PCI address zero, but
evidently the hardware is not configured that way here.

Arnd
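
For illustration, the kind of ranges entry this recommendation points to, with
the 64K IO window at PCI bus address 0 (a sketch only; the 0xe0000000 CPU
address and the memory window are taken from the logs in this thread and may
not match the final hardware configuration):

ranges = <0x01000000 0x0 0x00000000  0x0 0xe0000000  0x0 0x00010000   /* IO: 64K at PCI bus address 0x0 */
          0x02000000 0x0 0xe0100000  0x0 0xe0100000  0x0 0x0ef00000>; /* non-prefetchable memory */

With this layout, IO port 0x1000 on the bus corresponds to CPU physical address
0xe0001000, and no non-zero IO offset has to be carried around by the host
bridge driver.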


RE: Purpose of pci_remap_iospace

2016-07-14 Thread Bharat Kumar Gogada
> On Thu, Jul 14, 2016 at 01:32:13PM +, Bharat Kumar Gogada wrote:
>
> [...]
>
> > Hi Lorenzo,
> >
> > I missed something in my device tree now I corrected it.
> >
> > ranges = <0x0100 0x 0xe000 0x 0xe000 0
> 0x0001   //io
>
> You have not missed anything; you changed the PCI bus address at which
> your host bridge responds to IO space, and it must match your configuration.
> At what PCI bus address does your host bridge map IO space?
>
> >  0x0200 0x 0xe010 0x
> > 0xe010 0 0x0ef0>; //non prefetchabe memory
> >
> > [2.389498] nwl-pcie fd0e.pcie: Link is UP
> > [2.389541] PCI host bridge /amba/pcie@fd0e ranges:
> > [2.389558]   No bus range found for /amba/pcie@fd0e, using [bus
> 00-ff]
> > [2.389583]IO 0xe000..0xe000 -> 0xe000
> > [2.389624]   MEM 0xe010..0xeeff -> 0xe010
> > [2.389803] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> > [2.389822] pci_bus :00: root bus resource [bus 00-ff]
> > [2.389839] pci_bus :00: root bus resource [io  0x-0x] (bus
> address [0xe000-0xe000])
> > [2.389863] pci_bus :00: root bus resource [mem 0xe010-
> 0xeeff]
> > [2.390094] pci :00:00.0: cannot attach to SMMU, is it on the same
> bus?
> > [2.390110] iommu: Adding device :00:00.0 to group 1
> > [2.390274] pci :01:00.0: reg 0x20: initial BAR value 0x 
> > invalid
> > [2.390481] pci :01:00.0: cannot attach to SMMU, is it on the same
> bus?
> > [2.390496] iommu: Adding device :01:00.0 to group 1
> > [2.390533] in pci_bridge_check_ranges io 101
> > [2.390545] in pci_bridge_check_ranges io 2 101
> > [2.390575] pci :00:00.0: BAR 8: assigned [mem 0xe010-
> 0xe02f]
> > [2.390592] pci :00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
> > [2.390609] pci :00:00.0: BAR 6: assigned [mem 0xe030-
> 0xe03007ff pref]
> > [2.390636] pci :01:00.0: BAR 0: assigned [mem 0xe010-0xe01f
> 64bit]
> > [2.390669] pci :01:00.0: BAR 2: assigned [mem 0xe020-0xe02f
> 64bit]
> > [2.390702] pci :01:00.0: BAR 4: assigned [io  0x1000-0x103f]
> > [2.390721] pci :00:00.0: PCI bridge to [bus 01-0c]
> > [2.390785] pci :00:00.0:   bridge window [io  0x1000-0x1fff]
> > [2.390823] pci :00:00.0:   bridge window [mem 0xe010-
> 0xe02f]
> >
Thanks a lot Lorenzo for your kind and clear explanation, I will dig through
the hardware and correct my device tree.

From the above log, why is IO space allocated as only 4k even though I'm
allocating 64k through the device tree?

Regards,
Bharat






Re: Purpose of pci_remap_iospace

2016-07-14 Thread Lorenzo Pieralisi
On Thu, Jul 14, 2016 at 01:32:13PM +, Bharat Kumar Gogada wrote:

[...]

> Hi Lorenzo,
> 
> I missed something in my device tree now I corrected it.
> 
> ranges = <0x0100 0x 0xe000 0x 0xe000 0 0x0001 
>   //io

You have not missed anything; you changed the PCI bus address at
which your host bridge responds to IO space, and it must match
your configuration. At what PCI bus address does your host bridge
map IO space?

>  0x0200 0x 0xe010 0x 0xe010 0 
> 0x0ef0>; //non prefetchabe memory
> 
> [2.389498] nwl-pcie fd0e.pcie: Link is UP
> [2.389541] PCI host bridge /amba/pcie@fd0e ranges:
> [2.389558]   No bus range found for /amba/pcie@fd0e, using [bus 00-ff]
> [2.389583]IO 0xe000..0xe000 -> 0xe000
> [2.389624]   MEM 0xe010..0xeeff -> 0xe010
> [2.389803] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> [2.389822] pci_bus :00: root bus resource [bus 00-ff]
> [2.389839] pci_bus :00: root bus resource [io  0x-0x] (bus 
> address [0xe000-0xe000])
> [2.389863] pci_bus :00: root bus resource [mem 0xe010-0xeeff]
> [2.390094] pci :00:00.0: cannot attach to SMMU, is it on the same bus?
> [2.390110] iommu: Adding device :00:00.0 to group 1
> [2.390274] pci :01:00.0: reg 0x20: initial BAR value 0x 
> invalid
> [2.390481] pci :01:00.0: cannot attach to SMMU, is it on the same bus?
> [2.390496] iommu: Adding device :01:00.0 to group 1
> [2.390533] in pci_bridge_check_ranges io 101
> [2.390545] in pci_bridge_check_ranges io 2 101
> [2.390575] pci :00:00.0: BAR 8: assigned [mem 0xe010-0xe02f]
> [2.390592] pci :00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
> [2.390609] pci :00:00.0: BAR 6: assigned [mem 0xe030-0xe03007ff 
> pref]
> [2.390636] pci :01:00.0: BAR 0: assigned [mem 0xe010-0xe01f 
> 64bit]
> [2.390669] pci :01:00.0: BAR 2: assigned [mem 0xe020-0xe02f 
> 64bit]
> [2.390702] pci :01:00.0: BAR 4: assigned [io  0x1000-0x103f]
> [2.390721] pci :00:00.0: PCI bridge to [bus 01-0c]
> [2.390785] pci :00:00.0:   bridge window [io  0x1000-0x1fff]
> [2.390823] pci :00:00.0:   bridge window [mem 0xe010-0xe02f]
> 
> Lspci on bridge:
> 00:00.0 PCI bridge: Xilinx Corporation Device a024 (prog-if 00 [Normal 
> decode])
> Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- 
> Stepping- SERR- FastB2B- DisINTx-
> Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- 
> SERR-  Interrupt: pin A routed to IRQ 224
> Bus: primary=00, secondary=01, subordinate=0c, sec-latency=0
> I/O behind bridge: e0001000-e0001fff
> Memory behind bridge: e010-e02f
> 
> Here my IO space is showing as 4k, but what I'm providing is 64k? (In the
> above boot log the IO space length is also 4k.)
> 
> Lspci on EP:
> 01:00.0 Memory controller: Xilinx Corporation Device d024
> Subsystem: Xilinx Corporation Device 0007
> Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- 
> Stepping- SERR- FastB2B- DisINTx-
> Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- 
> SERR-  Interrupt: pin A routed to IRQ 224
> Region 0: Memory at e010 (64-bit, non-prefetchable) [disabled] 
> [size=1M]
> Region 2: Memory at e020 (64-bit, non-prefetchable) [disabled] 
> [size=1M]
> Region 4: I/O ports at 1000 [disabled] [size=64]
> 
> On the EP, from where is it getting this 1000 address? It should be within
> the I/O behind bridge range, right?


The CPU physical address in the DT range for the PCI IO range is the
address at which your host bridge responds to PCI IO space cycles
(through memory-mapped accesses, to emulate x86 IO port behaviour).

The PCI bus address in the range is the address to which your
host bridge converts the incoming CPU physical address when it
drives the PCI bus transactions.

Is your host bridge programmed with its address decoder
set-up according to what I say above (and your DT bindings) ?

If yes, on to the virtual address space.

On ARM, for IO space, we map the CPU physical address I
mention above to a chunk of virtual address space allocated
for PCI IO space; that's what pci_remap_iospace() is meant
for.

That physical address is mapped to a fixed virtual address range
(starting with PCI_IOBASE).

The value you see in the IO BAR above is an offset into that chunk
of virtual addresses so that, when you do e.g. inb(offset) in a driver,
the code behind it translates that access into a memory-mapped access
within the virtual address space allocated to PCI IO space (which you
previously mapped through pci_remap_iospace()).

The offset allocated starts from 0x1000, since that's the
value of PCIBIOS_MIN_IO, which the code assigning resources
uses to preserve the range 
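
As a minimal sketch of the flow described above, assuming the driver keeps the
usual of_pci_get_host_bridge_resources() + pci_remap_iospace() pattern already
quoted in this thread (names here are illustrative, not a complete driver):

#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/pci.h>

/*
 * Sketch: "res" is the IORESOURCE_IO entry parsed from the DT "ranges"
 * (port offsets such as 0x0000-0xffff), and "io_phys" is the CPU physical
 * address of the host bridge IO aperture. pci_remap_iospace() wires that
 * physical window into the fixed virtual IO window starting at PCI_IOBASE.
 */
static int example_map_io_window(struct resource *res, phys_addr_t io_phys)
{
	return pci_remap_iospace(res, io_phys);
}

/*
 * After that, inb(0x1000) from an endpoint driver becomes, roughly, a
 * readb() at PCI_IOBASE + 0x1000, i.e. a memory-mapped access into the
 * window mapped above, which the host bridge turns into a PCI IO cycle.
 */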


RE: Purpose of pci_remap_iospace

2016-07-14 Thread Bharat Kumar Gogada
> > Subject: Re: Purpose of pci_remap_iospace
> >
> > On Wed, Jul 13, 2016 at 12:30:44PM +, Bharat Kumar Gogada wrote:
> >
> > [...]
> >
> > > err = of_pci_get_host_bridge_resources(node, 0, 0xff, , );
> > > if (err) {
> > > pr_err("Getting bridge resources failed\n");
> > > return err;
> > > }
> > > resource_list_for_each_entry(window, ) {//code for io
> resource
> > > struct resource *res = window->res;
> > > u64 restype = resource_type(res);
> > >
> > > switch (restype) {
> > > case IORESOURCE_IO:
> > > err = pci_remap_iospace(res, iobase);
> > > if(err)
> > > pr_info("FAILED TO IPREMAP RESOURCE\n");
> > > break;
> > > default:
> > > dev_err(pcie->dev, "invalid resource %pR\n",
> > > res);
> > >
> > > }
> > > }
> > >
> > > Other than above code I haven't done any change in driver.
> > >
> > Here is your PCI bridge mem space window assignment. I do not see an
> > IO window assignment which makes me think that IO cycles and relative
> > IO window is not enabled through the bridge, that's the reason you
> > can't assign IO space to the endpoint, because it has no parent IO window
> enabled IIUC.
> >
>
> We sorted this out, enabled the IO base limit / upper 16bit registers in the
> bridge for 32 bit decode.
> However my IO address being assigned to EP is different than what I provide
> in device tree.
>

Hi Lorenzo,

I missed something in my device tree; I have now corrected it.

ranges = <0x0100 0x 0xe000 0x 0xe000 0 0x0001   
//io
 0x0200 0x 0xe010 0x 0xe010 0 
0x0ef0>; //non prefetchabe memory

[2.389498] nwl-pcie fd0e.pcie: Link is UP
[2.389541] PCI host bridge /amba/pcie@fd0e ranges:
[2.389558]   No bus range found for /amba/pcie@fd0e, using [bus 00-ff]
[2.389583]IO 0xe000..0xe000 -> 0xe000
[2.389624]   MEM 0xe010..0xeeff -> 0xe010
[2.389803] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
[2.389822] pci_bus :00: root bus resource [bus 00-ff]
[2.389839] pci_bus :00: root bus resource [io  0x-0x] (bus 
address [0xe000-0xe000])
[2.389863] pci_bus :00: root bus resource [mem 0xe010-0xeeff]
[2.390094] pci :00:00.0: cannot attach to SMMU, is it on the same bus?
[2.390110] iommu: Adding device :00:00.0 to group 1
[2.390274] pci :01:00.0: reg 0x20: initial BAR value 0x invalid
[2.390481] pci :01:00.0: cannot attach to SMMU, is it on the same bus?
[2.390496] iommu: Adding device :01:00.0 to group 1
[2.390533] in pci_bridge_check_ranges io 101
[2.390545] in pci_bridge_check_ranges io 2 101
[2.390575] pci :00:00.0: BAR 8: assigned [mem 0xe010-0xe02f]
[2.390592] pci :00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
[2.390609] pci :00:00.0: BAR 6: assigned [mem 0xe030-0xe03007ff 
pref]
[2.390636] pci :01:00.0: BAR 0: assigned [mem 0xe010-0xe01f 
64bit]
[2.390669] pci :01:00.0: BAR 2: assigned [mem 0xe020-0xe02f 
64bit]
[2.390702] pci :01:00.0: BAR 4: assigned [io  0x1000-0x103f]
[2.390721] pci :00:00.0: PCI bridge to [bus 01-0c]
[2.390785] pci :00:00.0:   bridge window [io  0x1000-0x1fff]
[2.390823] pci :00:00.0:   bridge window [mem 0xe010-0xe02f]

Lspci on bridge:
00:00.0 PCI bridge: Xilinx Corporation Device a024 (prog-if 00 [Normal decode])
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR- TAbort- SERR- 


RE: Purpose of pci_remap_iospace

2016-07-14 Thread Bharat Kumar Gogada
> Subject: Re: Purpose of pci_remap_iospace
>
> On Wed, Jul 13, 2016 at 12:30:44PM +, Bharat Kumar Gogada wrote:
>
> [...]
>
> > err = of_pci_get_host_bridge_resources(node, 0, 0xff, , );
> > if (err) {
> > pr_err("Getting bridge resources failed\n");
> > return err;
> > }
> > resource_list_for_each_entry(window, ) {//code for io 
> > resource
> > struct resource *res = window->res;
> > u64 restype = resource_type(res);
> >
> > switch (restype) {
> > case IORESOURCE_IO:
> > err = pci_remap_iospace(res, iobase);
> > if(err)
> > pr_info("FAILED TO IPREMAP RESOURCE\n");
> > break;
> > default:
> > dev_err(pcie->dev, "invalid resource %pR\n",
> > res);
> >
> > }
> > }
> >
> > Other than above code I haven't done any change in driver.
> >
> Here is your PCI bridge mem space window assignment. I do not see an IO
> window assignment which makes me think that IO cycles and relative IO
> window is not enabled through the bridge, that's the reason you can't assign
> IO space to the endpoint, because it has no parent IO window enabled IIUC.
>

We sorted this out: we enabled the IO base/limit and upper 16-bit registers in
the bridge for 32-bit decode.
However, the IO address being assigned to the EP is different from what I
provide in the device tree.

Device tree property:
ranges = <0x0100 0x 0x 0x 0xe000 0 0x0001   
//io
  0x0200 0x 0xe010 0x 0xe010 0 
0x0ef0>; //non prefetchabe memory

Here is the boot log:
[2.312504] nwl-pcie fd0e.pcie: Link is UP
[2.312548] PCI host bridge /amba/pcie@fd0e ranges:
[2.312565]   No bus range found for /amba/pcie@fd0e, using [bus 00-ff]
[2.312591]IO 0xe000..0xe000 -> 0x
[2.312610]   MEM 0xe010..0xeeff -> 0xe010
[2.312711] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
[2.312729] pci_bus :00: root bus resource [bus 00-ff]
[2.312745] pci_bus :00: root bus resource [io  0x-0x]
[2.312761] pci_bus :00: root bus resource [mem 0xe010-0xeeff]
[2.312993] pci :00:00.0: cannot attach to SMMU, is it on the same bus?
[2.313009] iommu: Adding device :00:00.0 to group 1
[2.313363] pci :01:00.0: cannot attach to SMMU, is it on the same bus?
[2.313379] iommu: Adding device :01:00.0 to group 1
[2.313434] pci :00:00.0: BAR 8: assigned [mem 0xe010-0xe02f]
[2.313452] pci :00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
[2.313469] pci :00:00.0: BAR 6: assigned [mem 0xe030-0xe03007ff 
pref]
[2.313495] pci :01:00.0: BAR 0: assigned [mem 0xe010-0xe01f 
64bit]
[2.313529] pci :01:00.0: BAR 2: assigned [mem 0xe020-0xe02f 
64bit]
[2.313561] pci :01:00.0: BAR 4: assigned [io  0x1000-0x103f]
[2.313581] pci :00:00.0: PCI bridge to [bus 01-0c]
[2.313597] pci :00:00.0:   bridge window [io  0x1000-0x1fff]
[2.313614] pci :00:00.0:   bridge window [mem 0xe010-0xe02f]

If we are mapping our IO space to 0xe000 with a 64k size, why is the kernel
showing 0x1000-0x1fff, which is 4k?

Lspci of bridge :
00:00.0 PCI bridge: Xilinx Corporation Device a024 (prog-if 00 [Normal decode])
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR- TAbort- SERR- 


Re: Purpose of pci_remap_iospace

2016-07-13 Thread Lorenzo Pieralisi
On Wed, Jul 13, 2016 at 03:16:21PM +, Bharat Kumar Gogada wrote:
> > Subject: Re: Purpose of pci_remap_iospace
> >
> > On Wednesday, July 13, 2016 12:30:44 PM CEST Bharat Kumar Gogada wrote:
> > >  > On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada
> > wrote:
> > > > > > Subject: Re: Purpose of pci_remap_iospace
> > > >
> > > > I notice you have 1MB of I/O space here
> > > >
> > > > > Kernel Boot log:
> > > > > [2.345294] nwl-pcie fd0e.pcie: Link is UP
> > > > > [2.345339] PCI host bridge /amba/pcie@fd0e ranges:
> > > > > [2.345356]   No bus range found for /amba/pcie@fd0e, using
> > [bus
> > > > 00-ff]
> > > > > [2.345382]IO 0xe000..0xe00f -> 0x
> > > > > [2.345401]   MEM 0xe010..0xeeff -> 0xe010
> > > > > [2.345498] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> > > > > [2.345517] pci_bus :00: root bus resource [bus 00-ff]
> > > > > [2.345533] pci_bus :00: root bus resource [io  0x-0xf]
> > > >
> > > > and all of it gets mapped by the PCI core. Usually you only have 64K
> > > > of I/O space per host bridge, and the PCI core should perhaps not
> > > > try to map all of it, though I don't think this is actually your 
> > > > problem here.
> > > >
> > > > > [2.345550] pci_bus :00: root bus resource [mem 0xe010-
> > > > 0xeeff]
> > > > > [2.345770] pci :00:00.0: cannot attach to SMMU, is it on the 
> > > > > same
> > > > bus?
> > > > > [2.345786] iommu: Adding device :00:00.0 to group 1
> > > > > [2.346142] pci :01:00.0: cannot attach to SMMU, is it on the 
> > > > > same
> > > > bus?
> > > > > [2.346158] iommu: Adding device :01:00.0 to group 1
> > > > > [2.346213] pci :00:00.0: BAR 8: assigned [mem 0xe010-
> > > > 0xe02f]
> > > > > [2.346234] pci :01:00.0: BAR 0: assigned [mem 0xe010-
> > 0xe01f
> > > > 64bit]
> > > > > [2.346268] pci :01:00.0: BAR 2: assigned [mem 0xe020-
> > 0xe02f
> > > > 64bit]
> > > > > [2.346300] pci :01:00.0: BAR 4: no space for [io  size 0x0040]
> > > > > [2.346316] pci :01:00.0: BAR 4: failed to assign [io  size 
> > > > > 0x0040]
> > > > > [2.346333] pci :00:00.0: PCI bridge to [bus 01-0c]
> > > > > [2.346350] pci :00:00.0:   bridge window [mem 0xe010-
> > > > 0xe02f]
> > > > >
> > > > > IO assignment fails.
> > > >
> > > > I would guess that the I/O space is not registered correctly. Is
> > > > this drivers/pci/host/pcie-xilinx.c ? We have had problems with this
> > > > in the past, since almost nobody uses I/O space and it requires
> > > > several steps to all be done correctly.
> > > >
> > > Thanks Arnd.
> > >
> > > we are testing using drivers/pci/host/pcie-xilinx-nwl.c.
> >
> > According to Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt,
> > this hardware does not support I/O space.
> 
> We received a newer IP version with IO support, so we are trying to test this 
> feature.
> >
> > Is this on ARM or microblaze?
> 
> It is ARM 64-bit.
> 
> > This has neither the PCI memory nor the I/O resource, it looks like you 
> > never
> > call pci_add_resource_offset() to start with, or maybe it fails for some
> > reason.
> 
> I see that above API is used in ARM drivers, do we need to do it in
> ARM64 also ?

It is called in of_pci_get_host_bridge_resources(); since you
are using that API, there is nothing more you have to do. The problem
with the resources in /proc/iomem and /proc/ioports is that you
do not request the host bridge apertures in your host controller
driver; see drivers/pci/host/pci-host-common.c (devm_request_resource())
for how to do it.

And as I said previously in this thread none of this is related to
your IO BAR assignment failures IMHO.

Lorenzo
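
A minimal sketch of what requesting the apertures amounts to (modelled loosely
on the pci-host-common.c approach mentioned above; error paths trimmed, names
illustrative):

#include <linux/device.h>
#include <linux/ioport.h>
#include <linux/pci.h>

/*
 * Sketch: request the host bridge windows against the global resource
 * trees so they show up in /proc/iomem and /proc/ioports. "resources" is
 * the list filled in by of_pci_get_host_bridge_resources().
 */
static int example_request_host_windows(struct device *dev,
					 struct list_head *resources)
{
	struct resource_entry *win;
	int err;

	resource_list_for_each_entry(win, resources) {
		struct resource *parent, *res = win->res;

		switch (resource_type(res)) {
		case IORESOURCE_IO:
			parent = &ioport_resource;
			break;
		case IORESOURCE_MEM:
			parent = &iomem_resource;
			break;
		default:
			continue;	/* e.g. the bus number range */
		}

		err = devm_request_resource(dev, parent, res);
		if (err)
			return err;
	}
	return 0;
}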


Re: Purpose of pci_remap_iospace

2016-07-13 Thread Arnd Bergmann
On Wednesday, July 13, 2016 3:16:21 PM CEST Bharat Kumar Gogada wrote:
> 
> > This has neither the PCI memory nor the I/O resource, it looks like you 
> > never
> > call pci_add_resource_offset() to start with, or maybe it fails for some
> > reason.
> 
> I see that above API is used in ARM drivers, do we need to do it in ARM64 
> also ?
> 

Yes, all architectures need it.

Arnd
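
For reference, a sketch of what that registration amounts to (not the driver's
actual code, since of_pci_get_host_bridge_resources() does it internally): the
offset passed is the resource address minus the PCI bus address of the window,
which is what the later "offset 0" printks in this thread correspond to.

#include <linux/ioport.h>
#include <linux/pci.h>

/*
 * Sketch: register a host bridge window with the PCI core. For the
 * [mem 0xe010.. -> 0xe010..] window in this thread, CPU and PCI bus
 * addresses are identical, hence offset 0.
 */
static void example_add_window(struct list_head *resources,
			       struct resource *res,
			       resource_size_t res_addr,
			       resource_size_t pci_addr)
{
	pci_add_resource_offset(resources, res, res_addr - pci_addr);
}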



Re: Purpose of pci_remap_iospace

2016-07-13 Thread liviu.du...@arm.com
On Wed, Jul 13, 2016 at 05:28:47PM +0200, Arnd Bergmann wrote:
> On Wednesday, July 13, 2016 3:16:21 PM CEST Bharat Kumar Gogada wrote:
> > 
> > > This has neither the PCI memory nor the I/O resource, it looks like you 
> > > never
> > > call pci_add_resource_offset() to start with, or maybe it fails for some
> > > reason.
> > 
> > I see that above API is used in ARM drivers, do we need to do it in ARM64 
> > also ?
> > 
> 
> Yes, all architectures need it.

of_pci_get_host_bridge_resources() calls it for him.

Liviu

> 
>   Arnd
> 

-- 

| I would like to |
| fix the world,  |
| but they're not |
| giving me the   |
 \ source code!  /
  ---
¯\_(ツ)_/¯


RE: Purpose of pci_remap_iospace

2016-07-13 Thread Bharat Kumar Gogada
> Subject: Re: Purpose of pci_remap_iospace
>
> On Wednesday, July 13, 2016 12:30:44 PM CEST Bharat Kumar Gogada wrote:
> >  > On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada
> wrote:
> > > > > Subject: Re: Purpose of pci_remap_iospace
> > >
> > > I notice you have 1MB of I/O space here
> > >
> > > > Kernel Boot log:
> > > > [2.345294] nwl-pcie fd0e.pcie: Link is UP
> > > > [2.345339] PCI host bridge /amba/pcie@fd0e ranges:
> > > > [2.345356]   No bus range found for /amba/pcie@fd0e, using
> [bus
> > > 00-ff]
> > > > [2.345382]IO 0xe000..0xe00f -> 0x
> > > > [2.345401]   MEM 0xe010..0xeeff -> 0xe010
> > > > [2.345498] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> > > > [2.345517] pci_bus :00: root bus resource [bus 00-ff]
> > > > [2.345533] pci_bus :00: root bus resource [io  0x-0xf]
> > >
> > > and all of it gets mapped by the PCI core. Usually you only have 64K
> > > of I/O space per host bridge, and the PCI core should perhaps not
> > > try to map all of it, though I don't think this is actually your problem 
> > > here.
> > >
> > > > [2.345550] pci_bus :00: root bus resource [mem 0xe010-
> > > 0xeeff]
> > > > [2.345770] pci :00:00.0: cannot attach to SMMU, is it on the 
> > > > same
> > > bus?
> > > > [2.345786] iommu: Adding device :00:00.0 to group 1
> > > > [2.346142] pci :01:00.0: cannot attach to SMMU, is it on the 
> > > > same
> > > bus?
> > > > [2.346158] iommu: Adding device :01:00.0 to group 1
> > > > [2.346213] pci :00:00.0: BAR 8: assigned [mem 0xe010-
> > > 0xe02f]
> > > > [2.346234] pci :01:00.0: BAR 0: assigned [mem 0xe010-
> 0xe01f
> > > 64bit]
> > > > [2.346268] pci :01:00.0: BAR 2: assigned [mem 0xe020-
> 0xe02f
> > > 64bit]
> > > > [2.346300] pci :01:00.0: BAR 4: no space for [io  size 0x0040]
> > > > [2.346316] pci :01:00.0: BAR 4: failed to assign [io  size 
> > > > 0x0040]
> > > > [2.346333] pci :00:00.0: PCI bridge to [bus 01-0c]
> > > > [2.346350] pci :00:00.0:   bridge window [mem 0xe010-
> > > 0xe02f]
> > > >
> > > > IO assignment fails.
> > >
> > > I would guess that the I/O space is not registered correctly. Is
> > > this drivers/pci/host/pcie-xilinx.c ? We have had problems with this
> > > in the past, since almost nobody uses I/O space and it requires
> > > several steps to all be done correctly.
> > >
> > Thanks Arnd.
> >
> > we are testing using drivers/pci/host/pcie-xilinx-nwl.c.
>
> According to Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt,
> this hardware does not support I/O space.

We received a newer IP version with IO support, so we are trying to test this 
feature.
>
> Is this on ARM or microblaze?

It is ARM 64-bit.

> This has neither the PCI memory nor the I/O resource, it looks like you never
> call pci_add_resource_offset() to start with, or maybe it fails for some
> reason.

I see that the above API is used in ARM drivers; do we need to do it on ARM64 also?

Regards,
Bharat






Re: Purpose of pci_remap_iospace

2016-07-13 Thread Lorenzo Pieralisi
On Wed, Jul 13, 2016 at 12:30:44PM +, Bharat Kumar Gogada wrote:

[...]

> err = of_pci_get_host_bridge_resources(node, 0, 0xff, , );
> if (err) {
> pr_err("Getting bridge resources failed\n");
> return err;
> }
> resource_list_for_each_entry(window, ) {//code for io resource
> struct resource *res = window->res;
> u64 restype = resource_type(res);
> 
> switch (restype) {
> case IORESOURCE_IO:
> err = pci_remap_iospace(res, iobase);
> if(err)
> pr_info("FAILED TO IPREMAP RESOURCE\n");
> break;
> default:
> dev_err(pcie->dev, "invalid resource %pR\n", res);
> 
> }
> }
> 
> Other than above code I haven't done any change in driver.
> 
> Here is the printk added boot log:
> [2.308680] nwl-pcie fd0e.pcie: Link is UP
> [2.308724] PCI host bridge /amba/pcie@fd0e ranges:
> [2.308741]   No bus range found for /amba/pcie@fd0e, using [bus 00-ff]
> [2.308755] in pci_add_resource_offset res->start 0   offset 0
> [2.308774]IO 0xe000..0xe00f -> 0x
> [2.308795] in pci_add_resource_offset res->start 0   offset 0
> [2.308805]   MEM 0xe010..0xeeff -> 0xe010
> [2.308824] in pci_add_resource_offset res->start e010offset 0
> [2.308834] nwl-pcie fd0e.pcie: invalid resource [bus 00-ff]
> [2.308870] nwl-pcie fd0e.pcie: invalid resource [mem 
> 0xe010-0xeeff]
> [2.308979] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> [2.308998] pci_bus :00: root bus resource [bus 00-ff]
> [2.309014] pci_bus :00: root bus resource [io  0x-0xf]
> [2.309030] pci_bus :00: root bus resource [mem 0xe010-0xeeff]
> [2.309253] pci :00:00.0: cannot attach to SMMU, is it on the same bus?
> [2.309269] iommu: Adding device :00:00.0 to group 1
> [2.309625] pci :01:00.0: cannot attach to SMMU, is it on the same bus?
> [2.309641] iommu: Adding device :01:00.0 to group 1
> [2.309697] pci :00:00.0: BAR 8: assigned [mem 0xe010-0xe02f]

Here is your PCI bridge mem space window assignment. I do not see
an IO window assignment, which makes me think that IO cycles and the
corresponding IO window are not enabled through the bridge. That is the
reason you can't assign IO space to the endpoint: it has no parent IO
window enabled, IIUC.

You can add some debug info into pci_bridge_check_ranges(), in
particular around the reading of the PCI_IO_BASE register, to confirm
what I am saying above, thanks.
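As an illustration, something along these lines in drivers/pci/setup-bus.c would show whether the root port claims an I/O window at all (the surrounding pci_bridge_check_ranges() code is paraphrased from memory of the 4.x source, so treat it as a sketch, not a patch):

	pci_read_config_word(bridge, PCI_IO_BASE, &io);
	pr_info("%s: %s PCI_IO_BASE reads %#06x\n",
		__func__, pci_name(bridge), io);
	if (!io) {
		/* Probe: write a value and see whether the bridge latches it. */
		pci_write_config_word(bridge, PCI_IO_BASE, 0xe0f0);
		pci_read_config_word(bridge, PCI_IO_BASE, &io);
		pci_write_config_word(bridge, PCI_IO_BASE, 0x0);
		pr_info("%s: %s PCI_IO_BASE after probe %#06x\n",
			__func__, pci_name(bridge), io);
	}
	if (io)
		b_res[0].flags |= IORESOURCE_IO;	/* bridge has an I/O window */

If both prints show zero, the root port does not decode an I/O window and no I/O BAR behind it can be assigned.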

Lorenzo

> [2.309718] pci :01:00.0: BAR 0: assigned [mem 0xe010-0xe01f 
> 64bit]
> [2.309752] pci :01:00.0: BAR 2: assigned [mem 0xe020-0xe02f 
> 64bit]
> [2.309784] pci :01:00.0: BAR 4: no space for [io  size 0x0040]
> [2.309800] pci :01:00.0: BAR 4: failed to assign [io  size 0x0040]
> [2.309816] pci :00:00.0: PCI bridge to [bus 01-0c]
> [2.309833] pci :00:00.0:   bridge window [mem 0xe010-0xe02f]
> 
> Here is the output of ioports and iomem:
> 
> root@:~# cat /proc/iomem
> -7fff : System RAM
>   0008-00a76fff : Kernel code
>   01c72000-01d4bfff : Kernel data
> fd0c-fd0c1fff : /amba/ahci@fd0c
> fd0e-fd0e0fff : breg
> fd48-fd480fff : pcireg
> ff00-ff000fff : xuartps
> ff01-ff010fff : xuartps
> ff02-ff020fff : /amba/i2c@ff02
> ff03-ff030fff : /amba/i2c@ff03
> ff07-ff070fff : /amba/can@ff07
> ff0a-ff0a0fff : /amba/gpio@ff0a
> ff0f-ff0f0fff : /amba/spi@ff0f
> ff17-ff170fff : mmc0
> ffa6-ffa600ff : /amba/rtc@ffa6
> 80-8000ff : cfg
> root@:~# cat /proc/ioports
> root@:~#
> 
> /proc/ioports is empty.
> 
> Thanks & Regards,
> Bharat
> 


Re: Purpose of pci_remap_iospace

2016-07-13 Thread Arnd Bergmann
On Wednesday, July 13, 2016 12:30:44 PM CEST Bharat Kumar Gogada wrote:
>  > On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada wrote:
> > > > Subject: Re: Purpose of pci_remap_iospace
> >
> > I notice you have 1MB of I/O space here
> >
> > > Kernel Boot log:
> > > [2.345294] nwl-pcie fd0e.pcie: Link is UP
> > > [2.345339] PCI host bridge /amba/pcie@fd0e ranges:
> > > [2.345356]   No bus range found for /amba/pcie@fd0e, using [bus
> > 00-ff]
> > > [2.345382]IO 0xe000..0xe00f -> 0x
> > > [2.345401]   MEM 0xe010..0xeeff -> 0xe010
> > > [2.345498] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> > > [2.345517] pci_bus :00: root bus resource [bus 00-ff]
> > > [2.345533] pci_bus :00: root bus resource [io  0x-0xf]
> >
> > and all of it gets mapped by the PCI core. Usually you only have 64K of I/O
> > space per host bridge, and the PCI core should perhaps not try to map
> > all of it, though I don't think this is actually your problem here.
> >
> > > [2.345550] pci_bus :00: root bus resource [mem 0xe010-
> > 0xeeff]
> > > [2.345770] pci :00:00.0: cannot attach to SMMU, is it on the same
> > bus?
> > > [2.345786] iommu: Adding device :00:00.0 to group 1
> > > [2.346142] pci :01:00.0: cannot attach to SMMU, is it on the same
> > bus?
> > > [2.346158] iommu: Adding device :01:00.0 to group 1
> > > [2.346213] pci :00:00.0: BAR 8: assigned [mem 0xe010-
> > 0xe02f]
> > > [2.346234] pci :01:00.0: BAR 0: assigned [mem 
> > > 0xe010-0xe01f
> > 64bit]
> > > [2.346268] pci :01:00.0: BAR 2: assigned [mem 
> > > 0xe020-0xe02f
> > 64bit]
> > > [2.346300] pci :01:00.0: BAR 4: no space for [io  size 0x0040]
> > > [2.346316] pci :01:00.0: BAR 4: failed to assign [io  size 0x0040]
> > > [2.346333] pci :00:00.0: PCI bridge to [bus 01-0c]
> > > [2.346350] pci :00:00.0:   bridge window [mem 0xe010-
> > 0xe02f]
> > >
> > > IO assignment fails.
> >
> > I would guess that the I/O space is not registered correctly. Is this
> > drivers/pci/host/pcie-xilinx.c ? We have had problems with this in the
> > past, since almost nobody uses I/O space and it requires several
> > steps to all be done correctly.
> >
> Thanks Arnd.
> 
> we are testing using drivers/pci/host/pcie-xilinx-nwl.c.

According to Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt,
this hardware does not support I/O space.

Is this on ARM or microblaze?

> Here is the code I added to driver in probe:
> ..
> err = of_pci_get_host_bridge_resources(node, 0, 0xff, , );
> if (err) {
> pr_err("Getting bridge resources failed\n");
> return err;
> }
> resource_list_for_each_entry(window, ) {//code for io resource
> struct resource *res = window->res;
> u64 restype = resource_type(res);
> 
> switch (restype) {
> case IORESOURCE_IO:
> err = pci_remap_iospace(res, iobase);
> if(err)
> pr_info("FAILED TO IPREMAP RESOURCE\n");
> break;
> default:
> dev_err(pcie->dev, "invalid resource %pR\n", res);
> 
> }
> }
> 
> Other than above code I haven't done any change in driver.
> 
> Here is the printk added boot log:
> [2.308680] nwl-pcie fd0e.pcie: Link is UP
> [2.308724] PCI host bridge /amba/pcie@fd0e ranges:
> [2.308741]   No bus range found for /amba/pcie@fd0e, using [bus 00-ff]
> [2.308755] in pci_add_resource_offset res->start 0   offset 0
> [2.308774]IO 0xe000..0xe00f -> 0x
> [2.308795] in pci_add_resource_offset res->start 0   offset 0
> [2.308805]   MEM 0xe010..0xeeff -> 0xe010
> [2.308824] in pci_add_resource_offset res->start e010offset 0
> [2.308834] nwl-pcie fd0e.pcie: invalid resource [bus 00-ff]
> [2.308870] nwl-pcie fd0e.pcie: invalid resource [mem 
> 0xe010-0xeeff]
> [2.308979] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> [2.308998] pci_bus :00: root bus resource [bus 00-ff]
> [2.309014] pci_bus :00: root bus resource [io  0x-0xf]
> [

Re: Purpose of pci_remap_iospace

2016-07-13 Thread liviu.du...@arm.com
On Wed, Jul 13, 2016 at 08:11:56AM +, Bharat Kumar Gogada wrote:
> > Subject: Re: Purpose of pci_remap_iospace
> >
> > On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > > Hi,
> > >
> > > I have a query.
> > >
> > > Can anyone explain the purpose of the pci_remap_iospace function in the
> > > root port driver?
> > >
> > > What is its dependency with architecture ?
> > >
> > > Here is my understanding, the above API takes PCIe IO resource and its
> > > to be mapped CPU address from ranges property and remaps into virtual
> > address space.
> > >
> > > So my question is who uses this virtual addresses ?
> >
> > The inb()/outb() functions declared in asm/io.h
> >
> > > When End Point requests for IO BARs doesn't it get from the above
> > > resource range (first parameter of API) and do ioremap to access this
> > > region ?
> >
> > Device drivers generally do not ioremap() the I/O BARs but they use
> > inb()/outb() directly. They can also call pci_iomap() and do
> > ioread8()/iowrite8() on the pointer returned from that function, but
> > generally the call to pci_iomap() then returns a pointer into the virtual
> > address that is already mapped.
> >
> > > But why root complex driver is mapping this address region ?
> >
> > The PCI core does not know that the I/O space is memory mapped.
> > On x86 and a few others, I/O space is not memory mapped but requires the
> > use of special CPU instructions.
> >
> Thanks Arnd.
> 
> I'm facing issue in testing IO bars on our SoC.
> 
> I added following ranges in our device tree :
> ranges = <0x0100 0x 0x 0x 0xe000 0 0x0010 
>   //io
>  0x0200 0x 0xe010 0x 0xe010 0 
> 0x0ef0>;   //non prefetchabe memory
> 
> And I'm using above API to map the res and cpu physical address in my driver.
> 
> Kernel Boot log:
> [2.345294] nwl-pcie fd0e.pcie: Link is UP
> [2.345339] PCI host bridge /amba/pcie@fd0e ranges:
> [2.345356]   No bus range found for /amba/pcie@fd0e, using [bus 00-ff]
> [2.345382]IO 0xe000..0xe00f -> 0x
> [2.345401]   MEM 0xe010..0xeeff -> 0xe010
> [2.345498] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> [2.345517] pci_bus :00: root bus resource [bus 00-ff]
> [2.345533] pci_bus :00: root bus resource [io  0x-0xf]
> [2.345550] pci_bus :00: root bus resource [mem 0xe010-0xeeff]
> [2.345770] pci :00:00.0: cannot attach to SMMU, is it on the same bus?
> [2.345786] iommu: Adding device :00:00.0 to group 1
> [2.346142] pci :01:00.0: cannot attach to SMMU, is it on the same bus?
> [2.346158] iommu: Adding device :01:00.0 to group 1
> [2.346213] pci :00:00.0: BAR 8: assigned [mem 0xe010-0xe02f]
> [2.346234] pci :01:00.0: BAR 0: assigned [mem 0xe010-0xe01f 
> 64bit]
> [2.346268] pci :01:00.0: BAR 2: assigned [mem 0xe020-0xe02f 
> 64bit]
> [2.346300] pci :01:00.0: BAR 4: no space for [io  size 0x0040]

Can you try to print the value of ret in pci_assign_resource() when it is 
printing the above message?

I would try debugging that function and the __pci_assign_resource() function to
figure out where it fails. Maybe it fails because the IO region is 1MB?
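A sketch of what that could look like at the tail of pci_assign_resource() in drivers/pci/setup-res.c (surrounding code paraphrased from the 4.x source; only the "ret=%d" additions are new):

	size = resource_size(res);
	ret = _pci_assign_resource(dev, resno, size, align);

	if (ret < 0) {
		dev_info(&dev->dev, "BAR %d: no space for %pR (ret=%d)\n",
			 resno, res, ret);
		ret = pci_revert_fw_address(res, dev, resno, size);
	}

	if (ret < 0) {
		dev_info(&dev->dev, "BAR %d: failed to assign %pR (ret=%d)\n",
			 resno, res, ret);
		return ret;
	}

The printed value shows why the allocation failed, for example when there is no parent I/O window to allocate the 0x40 bytes from.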

Best regards,
Liviu

> [2.346316] pci :01:00.0: BAR 4: failed to assign [io  size 0x0040]
> [2.346333] pci :00:00.0: PCI bridge to [bus 01-0c]
> [2.346350] pci :00:00.0:   bridge window [mem 0xe010-0xe02f]
> 
> IO assignment fails.
> 
> On End Point:
> 01:00.0 Memory controller: Xilinx Corporation Device a024
> Subsystem: Xilinx Corporation Device 0007
> Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- 
> Stepping- SERR- FastB2B- DisINTx-
> Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- 
> SERR-  Interrupt: pin A routed to IRQ 224
> Region 0: Memory at e010 (64-bit, non-prefetchable) [disabled] 
> [size=1M]
> Region 2: Memory at e020 (64-bit, non-prefetchable) [disabled] 
> [size=1M]
> Region 4: I/O ports at  [disabled]
> 
> When I tested the same End Point on an x86 machine, the I/O address is
> assigned, but it is an I/O port mapped address.
> 
> So my doubt is: why are the memory mapped IO addresses not assigned to the EP
> on the SoC?
> 
> Do we need to have port mapped addresses on the SoC as well for the PCI IO BARs?
> 
> Please let me know if I'm doing something wrong or missing something.

RE: Purpose of pci_remap_iospace

2016-07-13 Thread Bharat Kumar Gogada
 > On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada wrote:
> > > Subject: Re: Purpose of pci_remap_iospace
> > >
> > > On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > > > Hi,
> > > >
> > > > I have a query.
> > > >
> > > > Can anyone explain the purpose of the pci_remap_iospace function in the
> > > > root port driver?
> > > >
> > > > What is its dependency with architecture ?
> > > >
> > > > Here is my understanding, the above API takes PCIe IO resource and its
> > > > to be mapped CPU address from ranges property and remaps into
> virtual
> > > address space.
> > > >
> > > > So my question is who uses this virtual addresses ?
> > >
> > > The inb()/outb() functions declared in asm/io.h
> > >
> > > > When End Point requests for IO BARs doesn't it get from the above
> > > > resource range (first parameter of API) and do ioremap to access this
> > > > region ?
> > >
> > > Device drivers generally do not ioremap() the I/O BARs but they use
> > > inb()/outb() directly. They can also call pci_iomap() and do
> > > ioread8()/iowrite8() on the pointer returned from that function, but
> > > generally the call to pci_iomap() then returns a pointer into the virtual
> > > address that is already mapped.
> > >
> > > > But why root complex driver is mapping this address region ?
> > >
> > > The PCI core does not know that the I/O space is memory mapped.
> > > On x86 and a few others, I/O space is not memory mapped but requires
> the
> > > use of special CPU instructions.
> > >
> > Thanks Arnd.
> >
> > I'm facing issue in testing IO bars on our SoC.
> >
> > I added following ranges in our device tree :
> > ranges = <0x0100 0x 0x 0x 0xe000 0
> 0x0010   //io
> >  0x0200 0x 0xe010 0x 0xe010 0
> 0x0ef0>;   //non prefetchabe memory
> >
> > And I'm using above API to map the res and cpu physical address in my
> driver.
>
> I notice you have 1MB of I/O space here
>
> > Kernel Boot log:
> > [2.345294] nwl-pcie fd0e.pcie: Link is UP
> > [2.345339] PCI host bridge /amba/pcie@fd0e ranges:
> > [2.345356]   No bus range found for /amba/pcie@fd0e, using [bus
> 00-ff]
> > [2.345382]IO 0xe000..0xe00f -> 0x
> > [2.345401]   MEM 0xe010..0xeeff -> 0xe010
> > [2.345498] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> > [2.345517] pci_bus :00: root bus resource [bus 00-ff]
> > [2.345533] pci_bus :00: root bus resource [io  0x-0xf]
>
> and all of it gets mapped by the PCI core. Usually you only have 64K of I/O
> space per host bridge, and the PCI core should perhaps not try to map
> all of it, though I don't think this is actually your problem here.
>
> > [2.345550] pci_bus :00: root bus resource [mem 0xe010-
> 0xeeff]
> > [2.345770] pci :00:00.0: cannot attach to SMMU, is it on the same
> bus?
> > [2.345786] iommu: Adding device :00:00.0 to group 1
> > [2.346142] pci :01:00.0: cannot attach to SMMU, is it on the same
> bus?
> > [2.346158] iommu: Adding device :01:00.0 to group 1
> > [2.346213] pci :00:00.0: BAR 8: assigned [mem 0xe010-
> 0xe02f]
> > [2.346234] pci :01:00.0: BAR 0: assigned [mem 0xe010-0xe01f
> 64bit]
> > [2.346268] pci :01:00.0: BAR 2: assigned [mem 0xe020-0xe02f
> 64bit]
> > [2.346300] pci :01:00.0: BAR 4: no space for [io  size 0x0040]
> > [2.346316] pci :01:00.0: BAR 4: failed to assign [io  size 0x0040]
> > [2.346333] pci :00:00.0: PCI bridge to [bus 01-0c]
> > [2.346350] pci :00:00.0:   bridge window [mem 0xe010-
> 0xe02f]
> >
> > IO assignment fails.
>
> I would guess that the I/O space is not registered correctly. Is this
> drivers/pci/host/pcie-xilinx.c ? We have had problems with this in the
> past, since almost nobody uses I/O space and it requires several
> steps to all be done correctly.
>
Thanks Arnd.

we are testing using drivers/pci/host/pcie-xilinx-nwl.c.

Here is the code I added to driver in probe:
..
err = of_pci_get_host_bridge_resources(node, 0, 0xff, , );
if (err) {
pr_err("Getting bridge resources failed\n");
return err;
}
resource_list_for_each

Re: Purpose of pci_remap_iospace

2016-07-13 Thread Arnd Bergmann
On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada wrote:
> > Subject: Re: Purpose of pci_remap_iospace
> >
> > On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > > Hi,
> > >
> > > I have a query.
> > >
> > > Can anyone explain the purpose of the pci_remap_iospace function in the
> > > root port driver?
> > >
> > > What is its dependency with architecture ?
> > >
> > > Here is my understanding, the above API takes PCIe IO resource and its
> > > to be mapped CPU address from ranges property and remaps into virtual
> > address space.
> > >
> > > So my question is who uses this virtual addresses ?
> >
> > The inb()/outb() functions declared in asm/io.h
> >
> > > When End Point requests for IO BARs doesn't it get from the above
> > > resource range (first parameter of API) and do ioremap to access this
> > > region ?
> >
> > Device drivers generally do not ioremap() the I/O BARs but they use
> > inb()/outb() directly. They can also call pci_iomap() and do
> > ioread8()/iowrite8() on the pointer returned from that function, but
> > generally the call to pci_iomap() then returns a pointer into the virtual
> > address that is already mapped.
> >
> > > But why root complex driver is mapping this address region ?
> >
> > The PCI core does not know that the I/O space is memory mapped.
> > On x86 and a few others, I/O space is not memory mapped but requires the
> > use of special CPU instructions.
> >
> Thanks Arnd.
> 
> I'm facing issue in testing IO bars on our SoC.
> 
> I added following ranges in our device tree :
> ranges = <0x0100 0x 0x 0x 0xe000 0 0x0010 
>   //io
>  0x0200 0x 0xe010 0x 0xe010 0 
> 0x0ef0>;   //non prefetchabe memory
> 
> And I'm using above API to map the res and cpu physical address in my driver.

I notice you have 1MB of I/O space here

> Kernel Boot log:
> [2.345294] nwl-pcie fd0e.pcie: Link is UP
> [2.345339] PCI host bridge /amba/pcie@fd0e ranges:
> [2.345356]   No bus range found for /amba/pcie@fd0e, using [bus 00-ff]
> [2.345382]IO 0xe000..0xe00f -> 0x
> [2.345401]   MEM 0xe010..0xeeff -> 0xe010
> [2.345498] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
> [2.345517] pci_bus :00: root bus resource [bus 00-ff]
> [2.345533] pci_bus :00: root bus resource [io  0x-0xf]

and all of it gets mapped by the PCI core. Usually you only have 64K of I/O
space per host bridge, and the PCI core should perhaps not try to map
all of it, though I don't think this is actually your problem here.

> [2.345550] pci_bus :00: root bus resource [mem 0xe010-0xeeff]
> [2.345770] pci :00:00.0: cannot attach to SMMU, is it on the same bus?
> [2.345786] iommu: Adding device :00:00.0 to group 1
> [2.346142] pci :01:00.0: cannot attach to SMMU, is it on the same bus?
> [2.346158] iommu: Adding device :01:00.0 to group 1
> [2.346213] pci :00:00.0: BAR 8: assigned [mem 0xe010-0xe02f]
> [2.346234] pci :01:00.0: BAR 0: assigned [mem 0xe010-0xe01f 
> 64bit]
> [2.346268] pci :01:00.0: BAR 2: assigned [mem 0xe020-0xe02f 
> 64bit]
> [2.346300] pci :01:00.0: BAR 4: no space for [io  size 0x0040]
> [2.346316] pci :01:00.0: BAR 4: failed to assign [io  size 0x0040]
> [2.346333] pci :00:00.0: PCI bridge to [bus 01-0c]
> [2.346350] pci :00:00.0:   bridge window [mem 0xe010-0xe02f]
> 
> IO assignment fails.

I would guess that the I/O space is not registered correctly. Is this
drivers/pci/host/pcie-xilinx.c ? We have had problems with this in the
past, since almost nobody uses I/O space and it requires several
steps to all be done correctly.

The line "  IO 0xe000..0xe00f -> 0x" from your log actually
comes from the driver parsing the DT, and that seems to be correct.

Can you add a printk to pci_add_resource_offset() to show which resources
actually get added and what the offset is? Also, please show the contents
of /proc/ioports and /proc/iomem.
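A sketch of such a print in drivers/pci/bus.c (the body of pci_add_resource_offset() is paraphrased from the 4.x source; only the pr_info() is the addition):

void pci_add_resource_offset(struct list_head *resources, struct resource *res,
			     resource_size_t offset)
{
	struct resource_entry *entry;

	entry = resource_list_create_entry(res, 0);
	if (!entry) {
		printk(KERN_ERR "PCI: can't add host bridge window %pR\n", res);
		return;
	}

	/* Added: log every window handed to the PCI core and its offset. */
	pr_info("%s: adding %pR, offset %#llx\n", __func__, res,
		(unsigned long long)offset);

	entry->offset = offset;
	resource_list_add_tail(entry, resources);
}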

Arnd


RE: Purpose of pci_remap_iospace

2016-07-13 Thread Bharat Kumar Gogada
> Subject: Re: Purpose of pci_remap_iospace
>
> On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > Hi,
> >
> > I have a query.
> >
> > Can anyone explain the purpose of the pci_remap_iospace function in the
> > root port driver?
> >
> > What is its dependency with architecture ?
> >
> > Here is my understanding, the above API takes PCIe IO resource and its
> > to be mapped CPU address from ranges property and remaps into virtual
> address space.
> >
> > So my question is who uses this virtual addresses ?
>
> The inb()/outb() functions declared in asm/io.h
>
> > When End Point requests for IO BARs doesn't it get from the above
> > resource range (first parameter of API) and do ioremap to access this
> > region ?
>
> Device drivers generally do not ioremap() the I/O BARs but they use
> inb()/outb() directly. They can also call pci_iomap() and do
> ioread8()/iowrite8() on the pointer returned from that function, but
> generally the call to pci_iomap() then returns a pointer into the virtual
> address that is already mapped.
>
> > But why root complex driver is mapping this address region ?
>
> The PCI core does not know that the I/O space is memory mapped.
> On x86 and a few others, I/O space is not memory mapped but requires the
> use of special CPU instructions.
>
Thanks Arnd.

I'm facing an issue in testing IO BARs on our SoC.

I added following ranges in our device tree :
ranges = <0x0100 0x 0x 0x 0xe000 0 0x0010   
//io
 0x0200 0x 0xe010 0x 0xe010 0 
0x0ef0>;   //non prefetchabe memory

And I'm using above API to map the res and cpu physical address in my driver.

Kernel Boot log:
[2.345294] nwl-pcie fd0e.pcie: Link is UP
[2.345339] PCI host bridge /amba/pcie@fd0e ranges:
[2.345356]   No bus range found for /amba/pcie@fd0e, using [bus 00-ff]
[2.345382]IO 0xe000..0xe00f -> 0x
[2.345401]   MEM 0xe010..0xeeff -> 0xe010
[2.345498] nwl-pcie fd0e.pcie: PCI host bridge to bus :00
[2.345517] pci_bus :00: root bus resource [bus 00-ff]
[2.345533] pci_bus :00: root bus resource [io  0x-0xf]
[2.345550] pci_bus :00: root bus resource [mem 0xe010-0xeeff]
[2.345770] pci :00:00.0: cannot attach to SMMU, is it on the same bus?
[2.345786] iommu: Adding device :00:00.0 to group 1
[2.346142] pci :01:00.0: cannot attach to SMMU, is it on the same bus?
[2.346158] iommu: Adding device :01:00.0 to group 1
[2.346213] pci :00:00.0: BAR 8: assigned [mem 0xe010-0xe02f]
[2.346234] pci :01:00.0: BAR 0: assigned [mem 0xe010-0xe01f 
64bit]
[2.346268] pci :01:00.0: BAR 2: assigned [mem 0xe020-0xe02f 
64bit]
[2.346300] pci :01:00.0: BAR 4: no space for [io  size 0x0040]
[2.346316] pci :01:00.0: BAR 4: failed to assign [io  size 0x0040]
[2.346333] pci :00:00.0: PCI bridge to [bus 01-0c]
[2.346350] pci :00:00.0:   bridge window [mem 0xe010-0xe02f]

IO assignment fails.

On End Point:
01:00.0 Memory controller: Xilinx Corporation Device a024
Subsystem: Xilinx Corporation Device 0007
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
Interrupt: pin A routed to IRQ 224
Region 0: Memory at e010 (64-bit, non-prefetchable) [disabled] [size=1M]
Region 2: Memory at e020 (64-bit, non-prefetchable) [disabled] [size=1M]
Region 4: I/O ports at  [disabled]

When I tested the same End Point on an x86 machine, the I/O address is assigned,
but it is an I/O port mapped address.

So my doubt is: why are the memory mapped IO addresses not assigned to the EP on
the SoC?

Do we need to have port mapped addresses on the SoC as well for the PCI IO BARs?

Please let me know if I'm doing something wrong or missing something.

Thanks & Regards,
Bharat



RE: Purpose of pci_remap_iospace

2016-07-12 Thread Bharat Kumar Gogada
> Subject: Re: Purpose of pci_remap_iospace
>
> On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> > Hi,
> >
> > I have a query.
> >
> > Can anyone explain the purpose of the pci_remap_iospace function in the
> > root port driver?
> >
> > What is its dependency with architecture ?
> >
> > Here is my understanding, the above API takes PCIe IO resource and its
> > to be mapped CPU address from ranges property and remaps into virtual
> address space.
> >
> > So my question is who uses this virtual addresses ?
>
> The inb()/outb() functions declared in asm/io.h
>
> > When End Point requests for IO BARs doesn't it get from the above
> > resource range (first parameter of API) and do ioremap to access this
> > region ?
>
> Device drivers generally do not ioremap() the I/O BARs but they use
> inb()/outb() directly. They can also call pci_iomap() and do
> ioread8()/iowrite8() on the pointer returned from that function, but
> generally the call to pci_iomap() then returns a pointer into the virtual
> address that is already mapped.
>
> > But why root complex driver is mapping this address region ?
>
> The PCI core does not know that the I/O space is memory mapped.
> On x86 and a few others, I/O space is not memory mapped but requires the
> use of special CPU instructions.
>
Thanks Bergmann


Re: Purpose of pci_remap_iospace

2016-07-12 Thread Arnd Bergmann
On Tuesday, July 12, 2016 6:57:10 AM CEST Bharat Kumar Gogada wrote:
> Hi,
> 
> I have a query.
> 
> Can anyone explain the purpose of the pci_remap_iospace function in the root
> port driver?
> 
> What is its dependency with architecture ?
> 
> Here is my understanding, the above API takes PCIe IO resource and its to be 
> mapped CPU address from
> ranges property and remaps into virtual address space.
> 
> So my question is who uses this virtual addresses ?

The inb()/outb() functions declared in asm/io.h

> When End Point requests for IO BARs doesn't it get
> from the above resource range (first parameter of API) and
> do ioremap to access this region ?

Device drivers generally do not ioremap() the I/O BARs but they
use inb()/outb() directly. They can also call pci_iomap() and
do ioread8()/iowrite8() on the pointer returned from that function,
but generally the call to pci_iomap() then returns a pointer into
the virtual address that is already mapped.
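As a rough illustration of the two access styles from a device driver's point of view (a hypothetical snippet, not taken from any in-tree driver; pdev is the already-enabled endpoint):

	void __iomem *regs;
	unsigned long port;
	u8 val;

	/* Style 1: classic port I/O on the BAR's port number. */
	port = pci_resource_start(pdev, 4);	/* assume BAR 4 is an I/O BAR */
	outb(0x01, port);
	val = inb(port + 1);

	/* Style 2: pci_iomap() plus ioread/iowrite; the same code works
	 * for both memory BARs and I/O BARs. */
	regs = pci_iomap(pdev, 4, 0);
	if (!regs)
		return -ENOMEM;
	iowrite8(0x01, regs);
	val = ioread8(regs + 1);
	pci_iounmap(pdev, regs);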
 
> But why root complex driver is mapping this address region ?

The PCI core does not know that the I/O space is memory mapped.
On x86 and a few others, I/O space is not memory mapped but requires
the use of special CPU instructions.
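On arm64 and other fully memory-mapped architectures, this is exactly where pci_remap_iospace() comes in: the generic port accessors are plain MMIO accesses at a fixed virtual window, roughly like this (paraphrasing include/asm-generic/io.h from memory, not verbatim):

	static inline u8 inb(unsigned long addr)
	{
		return readb(PCI_IOBASE + addr);	/* PCI_IOBASE is a fixed VA */
	}

	static inline void outb(u8 value, unsigned long addr)
	{
		writeb(value, PCI_IOBASE + addr);
	}

pci_remap_iospace() installs the page tables so that accesses at PCI_IOBASE + port actually hit the host bridge's I/O window in CPU physical address space.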

Arnd

