Re: REGRESSION: Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-11-12 Thread Thomas Gleixner
On Thu, Nov 12 2020 at 15:15, Thomas Gleixner wrote:
> On Thu, Nov 12 2020 at 08:55, Jason Gunthorpe wrote:
>> On Wed, Aug 26, 2020 at 01:16:28PM +0200, Thomas Gleixner wrote:
>> They were unable to bisect further into the series because some of the
>> interior commits don't boot :(
>>
>> When we try to load the mlx5 driver on a bare metal VF it gets this:
>>
>> [Thu Oct 22 08:54:51 2020] DMAR: DRHD: handling fault status reg 2
>> [Thu Oct 22 08:54:51 2020] DMAR: [INTR-REMAP] Request device [42:00.2] fault 
>> index 1600 [fault reason 37] Blocked a compatibility format interrupt request
>> [Thu Oct 22 08:55:04 2020] mlx5_core 0000:42:00.1 eth4: Link down
>> [Thu Oct 22 08:55:11 2020] mlx5_core 0000:42:00.1 eth4: Link up
>> [Thu Oct 22 08:55:54 2020] mlx5_core 0000:42:00.2: 
>> mlx5_cmd_eq_recover:264:(pid 3390): Recovered 1 EQEs on cmd_eq
>> [Thu Oct 22 08:55:54 2020] mlx5_core 0000:42:00.2: 
>> wait_func_handle_exec_timeout:1051:(pid 3390): cmd0: CREATE_EQ(0x301) 
>> recovered after timeout
>> [Thu Oct 22 08:55:54 2020] DMAR: DRHD: handling fault status reg 102
>> [Thu Oct 22 08:55:54 2020] DMAR: [INTR-REMAP] Request device [42:00.2] fault 
>> index 1600 [fault reason 37] Blocked a compatibility format interrupt request
>>
>> If you have any idea Ziyad and Itay can run any debugging you like.
>>
>> I suppose it is because this series is handing out compatibility
>> addr/data pairs while the IOMMU is set up to only accept remapped ones
>> from SRIOV VFs?
>
> So the issue seems to be that the VF device has the default irq domain
> assigned and not the remapping domain. Let me stare into the code to see
> how these VF devices are set up and registered with the IOMMU/remap
> unit.

Found the reason. Will fix it after walking the dogs. Brain needs some
fresh air.

Thanks,

tglx

Re: REGRESSION: Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-11-12 Thread Thomas Gleixner
Jason,

(trimmed CC list a bit)

On Thu, Nov 12 2020 at 08:55, Jason Gunthorpe wrote:
> On Wed, Aug 26, 2020 at 01:16:28PM +0200, Thomas Gleixner wrote:
> They were unable to bisect further into the series because some of the
> interior commits don't boot :(
>
> When we try to load the mlx5 driver on a bare metal VF it gets this:
>
> [Thu Oct 22 08:54:51 2020] DMAR: DRHD: handling fault status reg 2
> [Thu Oct 22 08:54:51 2020] DMAR: [INTR-REMAP] Request device [42:00.2] fault 
> index 1600 [fault reason 37] Blocked a compatibility format interrupt request
> [Thu Oct 22 08:55:04 2020] mlx5_core 0000:42:00.1 eth4: Link down
> [Thu Oct 22 08:55:11 2020] mlx5_core 0000:42:00.1 eth4: Link up
> [Thu Oct 22 08:55:54 2020] mlx5_core 0000:42:00.2: 
> mlx5_cmd_eq_recover:264:(pid 3390): Recovered 1 EQEs on cmd_eq
> [Thu Oct 22 08:55:54 2020] mlx5_core 0000:42:00.2: 
> wait_func_handle_exec_timeout:1051:(pid 3390): cmd0: CREATE_EQ(0x301) 
> recovered after timeout
> [Thu Oct 22 08:55:54 2020] DMAR: DRHD: handling fault status reg 102
> [Thu Oct 22 08:55:54 2020] DMAR: [INTR-REMAP] Request device [42:00.2] fault 
> index 1600 [fault reason 37] Blocked a compatibility format interrupt request
>
> If you have any idea Ziyad and Itay can run any debugging you like.
>
> I suppose it is because this series is handing out compatibility
> addr/data pairs while the IOMMU is set up to only accept remapped ones
> from SRIOV VFs?

So the issue seems to be that the VF device has the default irq domain
assigned and not the remapping domain. Let me stare into the code to see
how these VF devices are set up and registered with the IOMMU/remap
unit.

Thanks,

tglx
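
For readers following along, a minimal sketch of the mechanism Thomas is
pointing at: on x86 each struct device can carry a pointer to the MSI irq
domain that should allocate its interrupts. Only dev_get_msi_domain() below
is a real kernel API; the helper and its fallback are hypothetical glue,
illustrating the failure mode rather than the actual x86 code paths.

#include <linux/device.h>
#include <linux/irqdomain.h>

/*
 * Hypothetical illustration: if the IOMMU/remap code never attached its
 * MSI parent domain to the VF's struct device, interrupt allocation
 * falls back to the default (compatibility format) domain. With
 * interrupt remapping enforced, the hardware then rejects the resulting
 * messages, which is what the DMAR faults in the log show.
 */
static struct irq_domain *pick_msi_domain(struct device *dev,
					  struct irq_domain *default_domain)
{
	struct irq_domain *d = dev_get_msi_domain(dev);

	/* VFs hit the fallback when the remap unit did not claim them */
	return d ? d : default_domain;
}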


REGRESSION: Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-11-12 Thread Jason Gunthorpe
On Wed, Aug 26, 2020 at 01:16:28PM +0200, Thomas Gleixner wrote:
> This is the second version of providing a base to support device MSI (non
> PCI based) and on top of that support for IMS (Interrupt Message Storm)
> based devices in a halfways architecture independent way.

Hi Thomas,

Our test team has been struggling with a regression on bare metal
SRIOV VFs since -rc1 that they were able to bisect to this series

This commit tests good:

5712c3ed549e ("Merge tag 'armsoc-fixes' of 
git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc")

This commit tests bad:

981aa1d366bf ("PCI: MSI: Fix Kconfig dependencies for PCI_MSI_ARCH_FALLBACKS")

They were unable to bisect further into the series because some of the
interior commits don't boot :(

When we try to load the mlx5 driver on a bare metal VF it gets this:

[Thu Oct 22 08:54:51 2020] DMAR: DRHD: handling fault status reg 2
[Thu Oct 22 08:54:51 2020] DMAR: [INTR-REMAP] Request device [42:00.2] fault 
index 1600 [fault reason 37] Blocked a compatibility format interrupt request
[Thu Oct 22 08:55:04 2020] mlx5_core 0000:42:00.1 eth4: Link down
[Thu Oct 22 08:55:11 2020] mlx5_core 0000:42:00.1 eth4: Link up
[Thu Oct 22 08:55:54 2020] mlx5_core 0000:42:00.2: mlx5_cmd_eq_recover:264:(pid 
3390): Recovered 1 EQEs on cmd_eq
[Thu Oct 22 08:55:54 2020] mlx5_core 0000:42:00.2: 
wait_func_handle_exec_timeout:1051:(pid 3390): cmd0: CREATE_EQ(0x301) 
recovered after timeout
[Thu Oct 22 08:55:54 2020] DMAR: DRHD: handling fault status reg 102
[Thu Oct 22 08:55:54 2020] DMAR: [INTR-REMAP] Request device [42:00.2] fault 
index 1600 [fault reason 37] Blocked a compatibility format interrupt request

If you have any idea Ziyad and Itay can run any debugging you like.

I suppose it is because this series is handing out compatibility
addr/data pairs while the IOMMU is set up to only accept remapped ones
from SRIOV VFs?

Thanks,
Jason
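
To make the hypothesis above concrete: on Intel VT-d, bit 4 of the MSI
address selects between the two message formats. The field layout below
follows the VT-d specification; the helper functions themselves are
illustrative sketches, not kernel APIs.

#include <linux/bits.h>
#include <linux/types.h>

#define MSI_ADDR_BASE		0xfee00000u
#define MSI_ADDR_REMAPPABLE	BIT(4)	/* 0: compatibility, 1: remappable */

/* Compatibility format: the APIC destination ID sits in address bits 19:12. */
static u32 msi_addr_compat(u32 dest_apicid)
{
	return MSI_ADDR_BASE | ((dest_apicid & 0xff) << 12);
}

/*
 * Remappable format: the address carries an IRTE handle instead of a
 * destination; handle[14:0] sits in bits 19:5, handle[15] in bit 2, and
 * bit 3 (SHV) says whether the data register holds a subhandle.
 */
static u32 msi_addr_remapped(u16 irte_handle, bool shv)
{
	return MSI_ADDR_BASE | MSI_ADDR_REMAPPABLE |
	       ((u32)(irte_handle & 0x7fff) << 5) |
	       ((u32)(irte_handle >> 15) << 2) |
	       (shv ? BIT(3) : 0);
}

With remapping enforced, any message arriving with bit 4 clear is rejected
as fault reason 37, which is precisely the "Blocked a compatibility format
interrupt request" line in the log above.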

Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-30 Thread Thomas Gleixner
Megha,

On Wed, Sep 30 2020 at 10:25, Megha Dey wrote:
> On 9/30/2020 8:20 AM, Thomas Gleixner wrote:
>>>> Your IMS patches? Why do you need something special again?
>
> By IMS patches, I meant your IMS driver patch that was updated (as it 
> was untested, it had some compile errors and we removed the IMS_QUEUE
> parts) :

Ok.

> The whole patchset can be found here:
>
> https://lore.kernel.org/lkml/f4a085f1-f6de-2539-12fe-c7308d243...@intel.com/
>
> It would be great if you could review the IMS patches :)

It somehow slipped through the cracks. I'll have a look.

> We were hoping to get IMS in the 5.10 merge window :)

Hope dies last, right?

>>> We might be able to put together a mockup just to prove it
>> If that makes Megha's stuff going that would of course be appreciated,
>> but we can defer the IMS_QUEUE part for later. It's orthogonal to the
>> IMS_ARRAY stuff.
>
> In our patch series, we have removed the IMS_QUEUE stuff and retained
> only the IMS_ARRAY parts as that was sufficient for us.

That works. We can add that back when Jason has his puzzle pieces
sorted.

Thanks,

tglx


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-30 Thread Dey, Megha

Hi Thomas/Jason,

On 9/30/2020 8:20 AM, Thomas Gleixner wrote:
> On Wed, Sep 30 2020 at 08:43, Jason Gunthorpe wrote:
>> On Wed, Sep 30, 2020 at 08:41:48AM +0200, Thomas Gleixner wrote:
>>> On Tue, Sep 29 2020 at 16:03, Megha Dey wrote:
>>>> On 8/26/2020 4:16 AM, Thomas Gleixner wrote:
>>>>> #9 is obviously just for the folks interested in IMS
>>>>
>>>> I see that the tip tree (as of 9/29) has most of these patches but
>>>> notice that the DEV_MSI related patches haven't made it. I have tested
>>>> the tip tree (x86/irq branch) with your DEV_MSI infra patches and our
>>>> IMS patches with the IDXD driver and was
>>>
>>> Your IMS patches? Why do you need something special again?

By IMS patches, I meant your IMS driver patch that was updated (as it
was untested, it had some compile errors and we removed the IMS_QUEUE
parts):

https://lore.kernel.org/lkml/160021246221.67751.16280230469654363209.st...@djiang5-desk3.ch.intel.com/

and some iommu related changes required by IMS:

https://lore.kernel.org/lkml/160021246905.67751.1674517279122764758.st...@djiang5-desk3.ch.intel.com/

The whole patchset can be found here:

https://lore.kernel.org/lkml/f4a085f1-f6de-2539-12fe-c7308d243...@intel.com/

It would be great if you could review the IMS patches :)

>>>> wondering if we should push out those patches as part of our patchset?
>>>
>>> As I don't have any hardware to test that, I was waiting for you and
>>> Jason to confirm that this actually works for the two different IMS
>>> implementations.
>>
>> How urgently do you need this? The code looked good from what I
>> understood. It will be a while before we have all the parts to send an
>> actual patch though.
>
> I personally do not need it at all :) Megha might have different
> thoughts...

I have tested these patches and it works fine (I had to add a couple of
EXPORT_SYMBOLS).

We were hoping to get IMS in the 5.10 merge window :)

>> We might be able to put together a mockup just to prove it
>
> If that makes Megha's stuff going that would of course be appreciated,
> but we can defer the IMS_QUEUE part for later. It's orthogonal to the
> IMS_ARRAY stuff.

In our patch series, we have removed the IMS_QUEUE stuff and retained
only the IMS_ARRAY parts as that was sufficient for us.

> Thanks,
>
> tglx



Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-30 Thread Thomas Gleixner
On Wed, Sep 30 2020 at 08:43, Jason Gunthorpe wrote:
> On Wed, Sep 30, 2020 at 08:41:48AM +0200, Thomas Gleixner wrote:
>> On Tue, Sep 29 2020 at 16:03, Megha Dey wrote:
>> > On 8/26/2020 4:16 AM, Thomas Gleixner wrote:
>> >> #9 is obviously just for the folks interested in IMS
>> >>
>> >
>> > I see that the tip tree (as of 9/29) has most of these patches but 
>> > notice that the DEV_MSI related patches
>> >
>> > haven't made it. I have tested the tip tree(x86/irq branch) with your
>> > DEV_MSI infra patches and our IMS patches with the IDXD driver and was
>> 
>> Your IMS patches? Why do you need something special again?
>> 
>> > wondering if we should push out those patches as part of our patchset?
>> 
>> As I don't have any hardware to test that, I was waiting for you and
>> Jason to confirm that this actually works for the two different IMS
>> implementations.
>
> How urgently do you need this? The code looked good from what I
> understood. It will be a while before we have all the parts to send an
> actual patch though.

I personally do not need it at all :) Megha might have different
thoughts... 

> We might be able to put together a mockup just to prove it

If that makes Megha's stuff going that would of course be appreciated,
but we can defer the IMS_QUEUE part for later. It's orthogonal to the
IMS_ARRAY stuff.

Thanks,

tglx


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-30 Thread Jason Gunthorpe
On Wed, Sep 30, 2020 at 08:41:48AM +0200, Thomas Gleixner wrote:
> On Tue, Sep 29 2020 at 16:03, Megha Dey wrote:
> > On 8/26/2020 4:16 AM, Thomas Gleixner wrote:
> >> #9 is obviously just for the folks interested in IMS
> >>
> >
> > I see that the tip tree (as of 9/29) has most of these patches but 
> > notice that the DEV_MSI related patches
> >
> > haven't made it. I have tested the tip tree(x86/irq branch) with your
> > DEV_MSI infra patches and our IMS patches with the IDXD driver and was
> 
> Your IMS patches? Why do you need something special again?
> 
> > wondering if we should push out those patches as part of our patchset?
> 
> As I don't have any hardware to test that, I was waiting for you and
> Jason to confirm that this actually works for the two different IMS
> implementations.

How urgently do you need this? The code looked good from what I
understood. It will be a while before we have all the parts to send an
actual patch though.

We might be able to put together a mockup just to prove it

Jason


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-30 Thread Thomas Gleixner
On Tue, Sep 29 2020 at 16:03, Megha Dey wrote:
> On 8/26/2020 4:16 AM, Thomas Gleixner wrote:
>> #9   is obviously just for the folks interested in IMS
>>
>
> I see that the tip tree (as of 9/29) has most of these patches but 
> notice that the DEV_MSI related patches
>
> haven't made it. I have tested the tip tree(x86/irq branch) with your
> DEV_MSI infra patches and our IMS patches with the IDXD driver and was

Your IMS patches? Why do you need something special again?

> wondering if we should push out those patches as part of our patchset?

As I don't have any hardware to test that, I was waiting for you and
Jason to confirm that this actually works for the two different IMS
implementations.

Thanks,

tglx


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-29 Thread Dey, Megha

Hi Thomas,

On 8/26/2020 4:16 AM, Thomas Gleixner wrote:

This is the second version of providing a base to support device MSI (non
PCI based) and on top of that support for IMS (Interrupt Message Storm)
based devices in a halfways architecture independent way.

The first version can be found here:

 https://lore.kernel.org/r/20200821002424.119492...@linutronix.de

It's still a mixed bag of bug fixes, cleanups and general improvements
which are worthwhile independent of device MSI.

There are quite a bunch of issues to solve:

   - X86 does not use the device::msi_domain pointer for historical reasons
 and due to XEN, which makes it impossible to create an architecture
 agnostic device MSI infrastructure.

   - X86 has its own msi_alloc_info data type which is pointlessly
 different from the generic version and does not allow to share code.

   - The logic of composing MSI messages in a hierarchy is busted at the
 core level and of course some (x86) drivers depend on that.

   - A few minor shortcomings as usual

This series addresses that in several steps:

  1) Accidental bug fixes

   iommu/amd: Prevent NULL pointer dereference

  2) Janitoring

   x86/init: Remove unused init ops
   PCI: vmd: Dont abuse vector irqomain as parent
   x86/msi: Remove pointless vcpu_affinity callback

  3) Sanitizing the composition of MSI messages in a hierarchy
  
   genirq/chip: Use the first chip in irq_chip_compose_msi_msg()

   x86/msi: Move compose message callback where it belongs

  4) Simplification of the x86 specific interrupt allocation mechanism

   x86/irq: Rename X86_IRQ_ALLOC_TYPE_MSI* to reflect PCI dependency
   x86/irq: Add allocation type for parent domain retrieval
   iommu/vt-d: Consolidate irq domain getter
   iommu/amd: Consolidate irq domain getter
   iommu/irq_remapping: Consolidate irq domain lookup

  5) Consolidation of the X86 specific interrupt allocation mechanism to be as 
close
 as possible to the generic MSI allocation mechanism which allows to get rid
 of quite a bunch of x86'isms which are pointless

   x86/irq: Prepare consolidation of irq_alloc_info
   x86/msi: Consolidate HPET allocation
   x86/ioapic: Consolidate IOAPIC allocation
   x86/irq: Consolidate DMAR irq allocation
   x86/irq: Consolidate UV domain allocation
   PCI/MSI: Rework pci_msi_domain_calc_hwirq()
   x86/msi: Consolidate MSI allocation
   x86/msi: Use generic MSI domain ops

   6) x86 specific cleanups to remove the dependency on arch_*_msi_irqs()

   x86/irq: Move apic_post_init() invocation to one place
   x86/pci: Reducde #ifdeffery in PCI init code
   x86/irq: Initialize PCI/MSI domain at PCI init time
   irqdomain/msi: Provide DOMAIN_BUS_VMD_MSI
   PCI: vmd: Mark VMD irqdomain with DOMAIN_BUS_VMD_MSI
   PCI/MSI: Provide pci_dev_has_special_msi_domain() helper
   x86/xen: Make xen_msi_init() static and rename it to xen_hvm_msi_init()
   x86/xen: Rework MSI teardown
   x86/xen: Consolidate XEN-MSI init
   irqdomain/msi: Allow to override msi_domain_alloc/free_irqs()
   x86/xen: Wrap XEN MSI management into irqdomain
   iommm/vt-d: Store irq domain in struct device
   iommm/amd: Store irq domain in struct device
   x86/pci: Set default irq domain in pcibios_add_device()
   PCI/MSI: Make arch_.*_msi_irq[s] fallbacks selectable
   x86/irq: Cleanup the arch_*_msi_irqs() leftovers
   x86/irq: Make most MSI ops XEN private
   iommu/vt-d: Remove domain search for PCI/MSI[X]
   iommu/amd: Remove domain search for PCI/MSI

   7) X86 specific preparation for device MSI

   x86/irq: Add DEV_MSI allocation type
   x86/msi: Rename and rework pci_msi_prepare() to cover non-PCI MSI

   8) Generic device MSI infrastructure
   platform-msi: Provide default irq_chip::ack
   genirq/proc: Take buslock on affinity write
   genirq/msi: Provide and use msi_domain_set_default_info_flags()
   platform-msi: Add device MSI infrastructure
   irqdomain/msi: Provide msi_alloc/free_store() callbacks

   9) POC of IMS (Interrupt Message Storm) irq domain and irqchip
  implementations for both device array and queue storage.

   irqchip: Add IMS (Interrupt Message Storm) driver - NOT FOR MERGING
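
Item 3 in the list above is the subtle one, so a sketch may help. Before
the series, the hierarchy walk let the deepest parent chip with an
irq_compose_msi_msg() callback compose the message; the fix stops at the
first chip found. The following is modelled on kernel/irq/chip.c after
the change, with the CONFIG_IRQ_DOMAIN_HIERARCHY guard elided for brevity:

#include <linux/irq.h>
#include <linux/msi.h>

int irq_chip_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
	struct irq_data *pos;

	/*
	 * Walk from the MSI level down the parent chain, but stop at the
	 * FIRST chip that can compose a message instead of letting a
	 * deeper parent chip silently override it.
	 */
	for (pos = NULL; !pos && data; data = data->parent_data) {
		if (data->chip && data->chip->irq_compose_msi_msg)
			pos = data;
	}
	if (!pos)
		return -ENOSYS;

	pos->chip->irq_compose_msi_msg(pos, msg);
	return 0;
}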

Changes vs. V1:

- Addressed various review comments and addressed the 0day fallout.
  - Corrected the XEN logic (Jürgen)
  - Make the arch fallback in PCI/MSI opt-in not opt-out (Bjorn)

- Fixed the compose MSI message inconsistency

- Ensure that the necessary flags are set for device SMI

- Make the irq bus logic work for affinity setting to prepare
  support for IMS storage in queue memory. It turned out to be
  less scary than I feared.

- Remove leftovers in iommu/intel|amd

- Reworked the IMS POC driver to cover queue storage so Jason can have a
  look whether that fits the needs of 

Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-25 Thread Thomas Gleixner
On Fri, Sep 25 2020 at 17:49, Peter Zijlstra wrote:
> Here it looks like this:
>
> [1.830276] BUG: kernel NULL pointer dereference, address: 0000000000000000
> [1.838043] #PF: supervisor instruction fetch in kernel mode
> [1.844357] #PF: error_code(0x0010) - not-present page
> [1.850090] PGD 0 P4D 0
> [1.852915] Oops: 0010 [#1] SMP
> [1.856419] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 
> 5.9.0-rc6-00700-g0248dedd12d4 #419
> [1.865447] Hardware name: Intel Corporation S2600GZ/S2600GZ, BIOS 
> SE5C600.86B.02.02.0002.122320131210 12/23/2013
> [1.876902] RIP: 0010:0x0
> [1.879824] Code: Bad RIP value.
> [1.883423] RSP: :82803da0 EFLAGS: 00010282
> [1.889251] RAX:  RBX: 8282b980 RCX: 
> 82803e40
> [1.897241] RDX: 0001 RSI: 82803e40 RDI: 
> 8282b980
> [1.905201] RBP: 88842f331000 R08:  R09: 
> 0001
> [1.913162] R10: 0001 R11:  R12: 
> 0048
> [1.921123] R13: 82803e40 R14: 8282b9c0 R15: 
> 
> [1.929085] FS:  () GS:88842f40() 
> knlGS:
> [1.938113] CS:  0010 DS:  ES:  CR0: 80050033
> [1.944524] CR2: ffd6 CR3: 02811001 CR4: 
> 000606b0
> [1.952484] Call Trace:
> [1.955214]  msi_domain_alloc+0x36/0x130

Hrm. That looks like an uninitialized mandatory callback. Confused.

Is this on -next and if so, does this happen on tip:x86/irq as well?

Can you provide your config please?

Thanks,

tglx
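
A NULL RIP reached from msi_domain_alloc+0x36 fits that reading: almost the
first thing the function does is an indirect call through a mandatory
msi_domain_ops callback. An abbreviated sketch modelled on kernel/irq/msi.c
of that era (the per-interrupt allocation logic is elided):

#include <linux/irqdomain.h>
#include <linux/msi.h>

static int msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
			    unsigned int nr_irqs, void *arg)
{
	struct msi_domain_info *info = domain->host_data;
	struct msi_domain_ops *ops = info->ops;
	irq_hw_number_t hwirq;

	/*
	 * ops->get_hwirq is mandatory. If the domain was assembled without
	 * the default domain ops filled in, this is an indirect call
	 * through a NULL pointer: "RIP: 0010:0x0", as in the oops above.
	 */
	hwirq = ops->get_hwirq(info, arg);

	if (irq_find_mapping(domain, hwirq) > 0)
		return -EEXIST;

	/* ... per-interrupt init via ops->msi_init() follows ... */
	return 0;
}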


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-25 Thread Peter Zijlstra
On Fri, Sep 25, 2020 at 11:29:13AM -0400, Qian Cai wrote:

> It looks like the crashes happen in the interrupt remapping code where they 
> are
> only able to generate partial call traces.

> [8.466614][T0] BUG: kernel NULL pointer dereference, address: 0000000000000000
> [8.474295][T0] #PF: supervisor instruction fetch in kernel mode
> [8.480669][T0] #PF: error_code(0x0010) - not-present page
> [8.486518][T0] PGD 0 P4D 0 
> [8.489757][T0] Oops: 0010 [#1] SMP KASAN PTI
> [8.494476][T0] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G  I
>5.9.0-rc6-next-20200925 #2
> [8.503987][T0] Hardware name: HPE ProLiant DL560 Gen10/ProLiant DL560 
> Gen10, BIOS U34 11/13/2019
> [8.513238][T0] RIP: 0010:0x0
> [8.516562][T0] Code: Bad RIP v

Here it looks like this:

[1.830276] BUG: kernel NULL pointer dereference, address: 0000000000000000
[1.838043] #PF: supervisor instruction fetch in kernel mode
[1.844357] #PF: error_code(0x0010) - not-present page
[1.850090] PGD 0 P4D 0
[1.852915] Oops: 0010 [#1] SMP
[1.856419] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 
5.9.0-rc6-00700-g0248dedd12d4 #419
[1.865447] Hardware name: Intel Corporation S2600GZ/S2600GZ, BIOS 
SE5C600.86B.02.02.0002.122320131210 12/23/2013
[1.876902] RIP: 0010:0x0
[1.879824] Code: Bad RIP value.
[1.883423] RSP: :82803da0 EFLAGS: 00010282
[1.889251] RAX:  RBX: 8282b980 RCX: 82803e40
[1.897241] RDX: 0001 RSI: 82803e40 RDI: 8282b980
[1.905201] RBP: 88842f331000 R08:  R09: 0001
[1.913162] R10: 0001 R11:  R12: 0048
[1.921123] R13: 82803e40 R14: 8282b9c0 R15: 
[1.929085] FS:  () GS:88842f40() 
knlGS:
[1.938113] CS:  0010 DS:  ES:  CR0: 80050033
[1.944524] CR2: ffd6 CR3: 02811001 CR4: 000606b0
[1.952484] Call Trace:
[1.955214]  msi_domain_alloc+0x36/0x130
[1.959594]  __irq_domain_alloc_irqs+0x165/0x380
[1.964748]  dmar_alloc_hwirq+0x9a/0x120
[1.969127]  dmar_set_interrupt.part.0+0x1c/0x60
[1.974281]  enable_drhd_fault_handling+0x2c/0x6c
[1.979532]  apic_intr_mode_init+0xfa/0x100
[1.984191]  x86_late_time_init+0x20/0x30
[1.988662]  start_kernel+0x723/0x7e6
[1.992748]  secondary_startup_64_no_verify+0xa6/0xab
[1.998386] Modules linked in:
[2.001794] CR2: 
[2.005510] ---[ end trace 837dc60d7c66efa2 ]---



Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-25 Thread Qian Cai
On Wed, 2020-08-26 at 13:16 +0200, Thomas Gleixner wrote:
> This is the second version of providing a base to support device MSI (non
> PCI based) and on top of that support for IMS (Interrupt Message Storm)
> based devices in a halfways architecture independent way.
> 
> The first version can be found here:
> 
> https://lore.kernel.org/r/20200821002424.119492...@linutronix.de
> 
> It's still a mixed bag of bug fixes, cleanups and general improvements
> which are worthwhile independent of device MSI.

Reverting the part of this patchset on top of today's linux-next fixed a
boot issue on HPE ProLiant DL560 Gen10, i.e.,

$ git revert --no-edit 13b90cadfc29..bc95fd0d7c42

.config: https://gitlab.com/cailca/linux-mm/-/blob/master/x86.config

It looks like the crashes happen in the interrupt remapping code where they are
only able to generate partial call traces.

[1.912386][T0] ACPI: X2APIC_NMI (uid[0xf5] high level 9983][T0] ... 
MAX_LOCK_DEPTH:  48
[7.914876][T0] ... MAX_LOCKDEP_KEYS:8192
[7.919942][T0] ... CLASSHASH_SIZE:  4096
[7.925009][T0] ... MAX_LOCKDEP_ENTRIES: 32768
[7.930163][T0] ... MAX_LOCKDEP_CHAINS:  65536
[7.935318][T0] ... CHAINHASH_SIZE:  32768
[7.940473][T0]  memory used by lock dependency info: 6301 kB
[7.946586][T0]  memory used for stack traces: 4224 kB
[7.952088][T0]  per task-struct memory footprint: 1920 bytes
[7.968312][T0] mempolicy: Enabling automatic NUMA balancing. Configure 
with numa_balancing= or the kernel.numa_balancing sysctl
[7.980281][T0] ACPI: Core revision 20200717
[7.993343][T0] clocksource: hpet: mask: 0x max_cycles: 
0x, max_idle_ns: 79635855245 ns
[8.003270][T0] APIC: Switch to symmetric I/O mode setup
[8.008951][T0] DMAR: Host address width 46
[8.013512][T0] DMAR: DRHD base: 0x00e5ffc000 flags: 0x0
[8.019680][T0] DMAR: dmar0: reg_base_addr e5ffc000 ver 1:0 cap 
8d2078c106f0466 [T0] DMAR-IR: IOAPIC id 15 under DRHD base  0xe5ffc000 
IOMMU 0
[8.420990][T0] DMAR-IR: IOAPIC id 8 under DRHD base  0xddffc000 IOMMU 15
[8.428166][T0] DMAR-IR: IOAPIC id 9 under DRHD base  0xddffc000 IOMMU 15
[8.435341][T0] DMAR-IR: HPET id 0 under DRHD base 0xddffc000
[8.441456][T0] DMAR-IR: Queued invalidation will be enabled to support 
x2apic and Intr-remapping.
[8.457911][T0] DMAR-IR: Enabled IRQ remapping in x2apic mode
[8.466614][T0] BUG: kernel NULL pointer dereference, address: 0000000000000000
[8.474295][T0] #PF: supervisor instruction fetch in kernel mode
[8.480669][T0] #PF: error_code(0x0010) - not-present page
[8.486518][T0] PGD 0 P4D 0 
[8.489757][T0] Oops: 0010 [#1] SMP KASAN PTI
[8.494476][T0] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G  I  
 5.9.0-rc6-next-20200925 #2
[8.503987][T0] Hardware name: HPE ProLiant DL560 Gen10/ProLiant DL560 
Gen10, BIOS U34 11/13/2019
[8.513238][T0] RIP: 0010:0x0
[8.516562][T0] Code: Bad RIP v

or

[2.906744][T0] ACPI: X2API32, address 0xfec68000, GSI 128-135
[2.907063][T0] IOAPIC[15]: apic_id 29, version 32, address 0xfec7, 
GSI 136-143
[2.907071][T0] IOAPIC[16]: apic_id 30, version 32, address 0xfec78000, 
GSI 144-151
[2.907079][T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[2.907084][T0] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high 
level)
[2.907100][T0] Using ACPI (MADT) for SMP configuration information
[2.907105][T0] ACPI: HPET id: 0x8086a701 base: 0xfed0
[2.907116][T0] ACPI: SPCR: console: uart,mmio,0x0,115200
[2.907121][T0] TSC deadline timer available
[2.907126][T0] smpboot: Allowing 144 CPUs, 0 hotplug CPUs
[2.907163][T0] [mem 0xd000-0xfdff] available for PCI devices
[2.907175][T0] clocksource: refined-jiffies: mask: 0x 
max_cycles: 0x, max_idle_ns: 1911260446275 ns
[2.914541][T0] setup_percpu: NR_CPUS:256 nr_cpumask_bits:144 
nr_cpu_ids:144 nr_node_ids:4
[2.926109][   466 ecap f020df
[9.134709][T0] DMAR: DRHD base: 0x00f5ffc000 flags: 0x0
[9.140867][T0] DMAR: dmar8: reg_base_addr f5ffc000 ver 1:0 cap 
8d2078c106f0466 ecap f020df
[9.149610][T0] DMAR: DRHD base: 0x00f7ffc000 flags: 0x0
[9.155762][T0] DMAR: dmar9: reg_base_addr f7ffc000 ver 1:0 cap 
8d2078c106f0466 ecap f020df
[9.164491][T0] DMAR: DRHD base: 0x00f9ffc000 flags: 0x0
[9.170645][T0] DMAR: dmar10: reg_base_addr f9ffc000 ver 1:0 cap 
8d2078c106f0466 ecap f020df
[9.179476][T0] DMAR: DRHD base: 0x00fbffc000 flags: 0x0
[9.185626][T0] DMAR: dmar11: reg_base_addr fbffc000 ver 1:0 cap 
8d2078c106f0466 ecap f020df
[9.194442][T0] DMAR: DRHD base: 0x00dfffc000 flags: 0x0
[9.200587][T0] DMAR: dmar12: 

Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-08 Thread Russ Anderson
On Wed, Aug 26, 2020 at 01:16:28PM +0200, Thomas Gleixner wrote:
> This is the second version of providing a base to support device MSI (non
> PCI based) and on top of that support for IMS (Interrupt Message Storm)
> based devices in a halfways architecture independent way.

Booted with quick testing on a 32 socket, 1536 CPU, 12 TB memory
Cascade Lake system and an 8 socket, 144 CPU, 3 TB memory
Cooper Lake system without any obvious regression.


-- 
Russ Anderson,  SuperDome Flex Linux Kernel Group Manager
HPE - Hewlett Packard Enterprise (formerly SGI)  r...@hpe.com


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-03 Thread Thomas Gleixner
Ashok,

On Thu, Sep 03 2020 at 09:35, Ashok Raj wrote:
> On Wed, Aug 26, 2020 at 01:16:28PM +0200, Thomas Gleixner wrote:
>> This is the second version of providing a base to support device MSI (non
>> PCI based) and on top of that support for IMS (Interrupt Message Storm)
>
> s/Storm/Store
>
> maybe pun intended :-)

Maybe? :)

>> based devices in a halfways architecture independent way.
>
> You mean "halfways" because the message addr and data follow guidelines
> per arch (x86 or such), but the location of the storage isn't dictated
> by architecture? or did you have something else in mind?

Yes, the actual message address and data format are architecture
specific, but we also have an x86 specific allocation info format which
needs an arch callback unfortunately.

>>- Ensure that the necessary flags are set for device SMI
>
> is that supposed to be MSI? 

Of course, but SMI is a better match for Message Storm :)

Thanks,

tglx
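
The "halfways" split is visible in the data structure itself. The generic
MSI core only transports an opaque address/data triplet; which bits mean
what is decided by the architecture code that composes it. The struct below
matches linux/msi.h of that time; the per-field comments describe the x86
conventions and are editorial annotation, not part of the kernel source.

#include <linux/types.h>

/* Generic container, identical on every architecture. */
struct msi_msg {
	u32	address_lo;	/* x86: 0xFEExxxxx doorbell plus format/dest bits */
	u32	address_hi;	/* x86: upper destination bits, usually 0 */
	u32	data;		/* x86: vector, delivery and trigger mode */
};

The allocation side is where x86 stays special, as Thomas notes: the arch
still fills in its own allocation info through a prepare callback before
any message is composed.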


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-03 Thread Raj, Ashok
Hi Thomas,

Thanks a ton for jumping in helping on straightening it for IMS!!!


On Wed, Aug 26, 2020 at 01:16:28PM +0200, Thomas Gleixner wrote:
> This is the second version of providing a base to support device MSI (non
> PCI based) and on top of that support for IMS (Interrupt Message Storm)

s/Storm/Store

maybe pun intended :-)

> based devices in a halfways architecture independent way.

You mean "halfways" because the message addr and data follow guidelines
per arch (x86 or such), but the location of the storage isn't dictated
by architecture? or did you have something else in mind? 

> 
> The first version can be found here:
> 
> https://lore.kernel.org/r/20200821002424.119492...@linutronix.de
> 

[snip]

> 
> Changes vs. V1:
> 
>- Addressed various review comments and addressed the 0day fallout.
>  - Corrected the XEN logic (Jürgen)
>  - Make the arch fallback in PCI/MSI opt-in not opt-out (Bjorn)
> 
>- Fixed the compose MSI message inconsistency
> 
>- Ensure that the necessary flags are set for device SMI

is that supposed to be MSI? 

Cheers,
Ashok


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-09-01 Thread Boqun Feng
Hi Thomas,

On Wed, Aug 26, 2020 at 01:16:28PM +0200, Thomas Gleixner wrote:
[...]
> 
> The whole lot is also available from git:
> 
>git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git device-msi
> 
> This has been tested on Intel/AMD/KVM but lacks testing on:
> 
> - HYPERV (-ENODEV)

FWIW, I did a build and boot test in a hyperv guest with your
development branch, the latest commit is 71cbf478eb6f ("irqchip: Add
IMS (Interrupt Message Storm) driver - NOT FOR MERGING"). And everything
seemed working fine.

If you want me to set/unset a particular CONFIG option or run some
command for testing purposes, please let me know ;-)

Regards,
Boqun

> - VMD enabled systems (-ENODEV)
> - XEN (-ENOCLUE)
> - IMS (-ENODEV)
> 
> - Any non-X86 code which might depend on the broken compose MSI message
>   logic. Marc expects not much fallout, but agrees that we need to fix
>   it anyway.
> 
> #1 - #3 should be applied unconditionally for obvious reasons
> #4 - #6 are worthwhile cleanups which should be done independent of device MSI
> 
> #7 - #8 look promising to cleanup the platform MSI implementation
>   independent of #8, but I neither had cycles nor the stomach to
>   tackle that.
> 
> #9is obviously just for the folks interested in IMS
> 
> Thanks,
> 
>   tglx


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-08-31 Thread Lu Baolu

Hi Thomas,

On 2020/8/31 15:10, Thomas Gleixner wrote:
> On Mon, Aug 31 2020 at 08:51, Lu Baolu wrote:
>> On 8/26/20 7:16 PM, Thomas Gleixner wrote:
>>> This is the second version of providing a base to support device MSI
>>> (non PCI based) and on top of that support for IMS (Interrupt Message
>>> Storm) based devices in a halfways architecture independent way.
>>
>> After applying this patch series, the dmar_alloc_hwirq() helper doesn't
>> work anymore during boot. This causes the IOMMU driver to fail to
>> register the DMA fault handler and abort the IOMMU probe processing.
>> Is this a known issue?
>
> See replies to patch 15/46 or pull the git tree. It has the issue fixed.

Ah! Yes. Sorry for the noise.

Best regards,
baolu


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-08-31 Thread Thomas Gleixner
On Mon, Aug 31 2020 at 08:51, Lu Baolu wrote:
> On 8/26/20 7:16 PM, Thomas Gleixner wrote:
>> This is the second version of providing a base to support device MSI (non
>> PCI based) and on top of that support for IMS (Interrupt Message Storm)
>> based devices in a halfways architecture independent way.
>
> After applying this patch series, the dmar_alloc_hwirq() helper doesn't
> work anymore during boot. This causes the IOMMU driver to fail to
> register the DMA fault handler and abort the IOMMU probe processing.
> Is this a known issue?

See replies to patch 15/46 or pull the git tree. It has the issue fixed.

Thanks,

tglx
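
For context, the failing path is the DMAR fault-reporting setup, the
in-kernel user of the helper Lu Baolu mentions. Abridged from
drivers/iommu/intel/dmar.c of that era:

int dmar_set_interrupt(struct intel_iommu *iommu)
{
	int irq, ret;

	/* The fault interrupt may already be initialized */
	if (iommu->irq)
		return 0;

	/* This is the allocation that started failing during boot */
	irq = dmar_alloc_hwirq(iommu->seq_id, iommu->node, iommu);
	if (irq > 0) {
		iommu->irq = irq;
	} else {
		pr_err("No free IRQ vectors\n");
		return -EINVAL;
	}

	ret = request_irq(irq, dmar_fault, IRQF_NO_THREAD, iommu->name, iommu);
	if (ret)
		pr_err("Can't request irq\n");
	return ret;
}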


Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-08-30 Thread Lu Baolu

Hi Thomas,

On 8/26/20 7:16 PM, Thomas Gleixner wrote:

This is the second version of providing a base to support device MSI (non
PCI based) and on top of that support for IMS (Interrupt Message Storm)
based devices in a halfways architecture independent way.


After applying this patch series, the dmar_alloc_hwirq() helper doesn't
work anymore during boot. This causes the IOMMU driver to fail to
register the DMA fault handler and abort the IOMMU probe processing.
Is this a known issue?

Best regards,
baolu



The first version can be found here:

 https://lore.kernel.org/r/20200821002424.119492...@linutronix.de

It's still a mixed bag of bug fixes, cleanups and general improvements
which are worthwhile independent of device MSI.

There are quite a bunch of issues to solve:

   - X86 does not use the device::msi_domain pointer for historical reasons
 and due to XEN, which makes it impossible to create an architecture
 agnostic device MSI infrastructure.

   - X86 has its own msi_alloc_info data type which is pointlessly
 different from the generic version and does not allow to share code.

   - The logic of composing MSI messages in a hierarchy is busted at the
 core level and of course some (x86) drivers depend on that.

   - A few minor shortcomings as usual

This series addresses that in several steps:

  1) Accidental bug fixes

   iommu/amd: Prevent NULL pointer dereference

  2) Janitoring

   x86/init: Remove unused init ops
   PCI: vmd: Dont abuse vector irqomain as parent
   x86/msi: Remove pointless vcpu_affinity callback

  3) Sanitizing the composition of MSI messages in a hierarchy
  
   genirq/chip: Use the first chip in irq_chip_compose_msi_msg()

   x86/msi: Move compose message callback where it belongs

  4) Simplification of the x86 specific interrupt allocation mechanism

   x86/irq: Rename X86_IRQ_ALLOC_TYPE_MSI* to reflect PCI dependency
   x86/irq: Add allocation type for parent domain retrieval
   iommu/vt-d: Consolidate irq domain getter
   iommu/amd: Consolidate irq domain getter
   iommu/irq_remapping: Consolidate irq domain lookup

  5) Consolidation of the X86 specific interrupt allocation mechanism to be as 
close
 as possible to the generic MSI allocation mechanism which allows to get rid
 of quite a bunch of x86'isms which are pointless

   x86/irq: Prepare consolidation of irq_alloc_info
   x86/msi: Consolidate HPET allocation
   x86/ioapic: Consolidate IOAPIC allocation
   x86/irq: Consolidate DMAR irq allocation
   x86/irq: Consolidate UV domain allocation
   PCI/MSI: Rework pci_msi_domain_calc_hwirq()
   x86/msi: Consolidate MSI allocation
   x86/msi: Use generic MSI domain ops

   6) x86 specific cleanups to remove the dependency on arch_*_msi_irqs()

   x86/irq: Move apic_post_init() invocation to one place
   x86/pci: Reducde #ifdeffery in PCI init code
   x86/irq: Initialize PCI/MSI domain at PCI init time
   irqdomain/msi: Provide DOMAIN_BUS_VMD_MSI
   PCI: vmd: Mark VMD irqdomain with DOMAIN_BUS_VMD_MSI
   PCI/MSI: Provide pci_dev_has_special_msi_domain() helper
   x86/xen: Make xen_msi_init() static and rename it to xen_hvm_msi_init()
   x86/xen: Rework MSI teardown
   x86/xen: Consolidate XEN-MSI init
   irqdomain/msi: Allow to override msi_domain_alloc/free_irqs()
   x86/xen: Wrap XEN MSI management into irqdomain
   iommm/vt-d: Store irq domain in struct device
   iommm/amd: Store irq domain in struct device
   x86/pci: Set default irq domain in pcibios_add_device()
   PCI/MSI: Make arch_.*_msi_irq[s] fallbacks selectable
   x86/irq: Cleanup the arch_*_msi_irqs() leftovers
   x86/irq: Make most MSI ops XEN private
   iommu/vt-d: Remove domain search for PCI/MSI[X]
   iommu/amd: Remove domain search for PCI/MSI

   7) X86 specific preparation for device MSI

   x86/irq: Add DEV_MSI allocation type
   x86/msi: Rename and rework pci_msi_prepare() to cover non-PCI MSI

   8) Generic device MSI infrastructure
   platform-msi: Provide default irq_chip::ack
   genirq/proc: Take buslock on affinity write
   genirq/msi: Provide and use msi_domain_set_default_info_flags()
   platform-msi: Add device MSI infrastructure
   irqdomain/msi: Provide msi_alloc/free_store() callbacks

   9) POC of IMS (Interrupt Message Storm) irq domain and irqchip
  implementations for both device array and queue storage.

   irqchip: Add IMS (Interrupt Message Storm) driver - NOT FOR MERGING

Changes vs. V1:

- Addressed various review comments and addressed the 0day fallout.
  - Corrected the XEN logic (Jürgen)
  - Make the arch fallback in PCI/MSI opt-in not opt-out (Bjorn)

- Fixed the compose MSI message inconsistency

- Ensure that the necessary flags are set for device SMI

- Make the irq bus logic work for affinity setting to prepare
  

Re: [patch V2 00/46] x86, PCI, XEN, genirq ...: Prepare for device MSI

2020-08-28 Thread Joerg Roedel
On Wed, Aug 26, 2020 at 01:16:28PM +0200, Thomas Gleixner wrote:
> This is the second version of providing a base to support device MSI (non
> PCI based) and on top of that support for IMS (Interrupt Message Storm)
> based devices in a halfways architecture independent way.
> 
> The first version can be found here:
> 
> https://lore.kernel.org/r/20200821002424.119492...@linutronix.de
> 
> It's still a mixed bag of bug fixes, cleanups and general improvements
> which are worthwhile independent of device MSI.
> 
> There are quite a bunch of issues to solve:
> 
>   - X86 does not use the device::msi_domain pointer for historical reasons
> and due to XEN, which makes it impossible to create an architecture
> agnostic device MSI infrastructure.
> 
>   - X86 has its own msi_alloc_info data type which is pointlessly
> different from the generic version and does not allow to share code.
> 
>   - The logic of composing MSI messages in a hierarchy is busted at the
> core level and of course some (x86) drivers depend on that.
> 
>   - A few minor shortcomings as usual
> 
> This series addresses that in several steps:

For all IOMMU changes:

Acked-by: Joerg Roedel 
