Re: Intel TigerLake NVMe vmd: Adding Support & Debugging a Patch
On 12/31/20 2:40 PM, Chuck Tuffli wrote:
> On Wed, Dec 30, 2020 at 4:38 PM Neel Chauhan wrote:
>> Hi Chuck,
>>
>> On 2020-12-30 10:04, Chuck Tuffli wrote:
>>> What is the output from
>>> # pciconf -rb pci0:0:14:0 0x40:0x48
>>
>> The output is:
>>
>> 01 00 00 00 01 2e 68 02 00
>
> Perfect. The Linux driver says the 8086:9a0b device you have "... may
> provide root port configuration information which limits bus
> numbering", which causes the code to read the VM Capability register
> (0x40) and the VM Configuration register (0x44). Here, VMCAP = 0x0001,
> where bit 0 set appears to mean the config register has starting bus
> number information. VMCFG = 0x2e01, where bits 9:8 give the coded start
> number of bus 224 (0xe0), which matches the PCI bridge shown in the
> lspci output (i.e. 10000:e0:06.0).
>
> I wonder if mirroring the logic in [1] and setting
> 	bus->rman.rm_start = 224;
> in vmd_attach() might help.
>
>> I was also able to stop kernel panics by adding:
>>
>> rman_fini(&sc->vmd_bus.rman);
>>
>> under the fail: label in vmd_attach().
>>
>> But I still cannot detect the SSD.
>
> [1] https://github.com/torvalds/linux/blob/master/drivers/pci/controller/vmd.c#L507

You will also need to subtract that starting bus number from the bus
number used to compute the offset into the PCI-express region for
config register reads and writes, as this code does:

https://github.com/torvalds/linux/blob/master/drivers/pci/controller/vmd.c#L339

Also, that means vmd_bus.c can't hardcode reading from bus 0. Instead,
vmd(4) might need to export an IVAR to vmd_bus(4) that is the starting
bus number, and vmd_bus needs to use that instead of hardcoding 0.

-- 
John Baldwin

___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
Re: Intel TigerLake NVMe vmd: Adding Support & Debugging a Patch
On Wed, Dec 30, 2020 at 4:38 PM Neel Chauhan wrote:
>
> Hi Chuck,
>
> On 2020-12-30 10:04, Chuck Tuffli wrote:
> > What is the output from
> > # pciconf -rb pci0:0:14:0 0x40:0x48
>
> The output is:
>
> 01 00 00 00 01 2e 68 02 00

Perfect. The Linux driver says the 8086:9a0b device you have "... may
provide root port configuration information which limits bus
numbering", which causes the code to read the VM Capability register
(0x40) and the VM Configuration register (0x44). Here, VMCAP = 0x0001,
where bit 0 set appears to mean the config register has starting bus
number information. VMCFG = 0x2e01, where bits 9:8 give the coded start
number of bus 224 (0xe0), which matches the PCI bridge shown in the
lspci output (i.e. 10000:e0:06.0).

I wonder if mirroring the logic in [1] and setting
	bus->rman.rm_start = 224;
in vmd_attach() might help.

> I was also able to stop kernel panics by adding:
>
> rman_fini(&sc->vmd_bus.rman);
>
> under the fail: label in vmd_attach().
>
> But I still cannot detect the SSD.

[1] https://github.com/torvalds/linux/blob/master/drivers/pci/controller/vmd.c#L507

--chuck
Re: Intel TigerLake NVMe vmd: Adding Support & Debugging a Patch
On 2020-12-30 21:04, Neel Chauhan wrote:
> It is likely because VMD uses PCI domains at 0x10000 and above but we
> aren't looking at this.

The 0x10000 is purely a Linux construct. It seems the PCI domains are
virtual.

Source: https://lists.x.org/archives/xorg-devel/2016-August/050590.html

-Neel
Re: Intel TigerLake NVMe vmd: Adding Support & Debugging a Patch
I think I found the issue: This PCIe controller is not detected:

10000:e0:1d.0 PCI bridge [0604]: Intel Corporation Tiger Lake-LP PCI Express Root Port #9 [8086:a0b0] (rev 20)

I believe the above PCIe controller is exposed by VMD (as it is on
Linux), but FreeBSD vmd/vmd_bus is unable to attach this controller.
It is likely because VMD uses PCI domains at 0x10000 and above but we
aren't looking at this.

Source: https://github.com/torvalds/linux/blob/master/drivers/pci/controller/vmd.c#L437

Don't yet have a patch though. Sorry for the number of emails earlier.

-Neel

On 2020-12-30 10:04, Chuck Tuffli wrote:
> On Tue, Dec 29, 2020 at 6:30 PM Neel Chauhan wrote:
>> Hi freebsd-hackers@, CC'd freebsd-current@,
>>
>> I hope you all had a wonderful holiday season.
>>
>> I recently got a HP Spectre x360 13t-aw200, which is an Intel
>> TigerLake-based laptop. It has the Intel "Evo" branding and an
>> "Optane" SSD, which I disabled (so I can get a "second" SSD).
>>
>> On the Spectre, the NVMe is not detected: https://imgur.com/a/ighTwHQ
>>
>> I don't know if it is HP or Intel, but the VMD device ID is
>> 8086:9a0b. I'm guessing Intel, since Dell laptops (XPS, Vostro) also
>> have this device ID [1].
>>
>> Sadly, NVMe RAID is forced on this laptop.
>>
>> I wrote a rough patch to add the device IDs, and the patch is below:
>
> FWIW, that is the same change I would have made. Peeking at the Linux
> vmd driver, it doesn't appear to do anything special for 8086:9a0b as
> compared to the 8086:28c0 device the FreeBSD driver already supports.
> That said, the Linux driver reads a capability register to determine
> the bus number start (vmd_bus_number_start()), which I don't see in
> the FreeBSD driver. This is curious because, looking at the "lspci
> all" output from the XPS link you provided, the NVMe device shows up
> in PCI domain 0x10000 (i.e. not 0x0000). Which (and I have no direct
> experience with this device or code) only happens if the bus number
> start function returns a non-zero value.
>
> What is the output from
> # pciconf -rb pci0:0:14:0 0x40:0x48
>
> --chuck
Re: Intel TigerLake NVMe vmd: Adding Support & Debugging a Patch
I have attached two files:

* pcidump.txt: A dump of `pciconf -lv`
* acpidump.txt: A dump of `acpidump`

Hope this can help.

-Neel

On 2020-12-30 17:21, Neel Chauhan wrote:
> To extend, I am getting an issue with `pci_read_device()` where it
> returns a `vid` (PCI Vendor ID) of 0x0000. This ends up returning
> "Cannot allocate dinfo!" from vmd.
>
> Log (via grep): https://imgur.com/a/tAmmY7i
>
> -Neel
>
> On 2020-12-30 16:38, Neel Chauhan wrote:
>> Hi Chuck,
>>
>> On 2020-12-30 10:04, Chuck Tuffli wrote:
>>> What is the output from
>>> # pciconf -rb pci0:0:14:0 0x40:0x48
>>> --chuck
>>
>> The output is:
>>
>> 01 00 00 00 01 2e 68 02 00
>>
>> I was also able to stop kernel panics by adding:
>>
>> rman_fini(&sc->vmd_bus.rman);
>>
>> under the fail: label in vmd_attach().
>>
>> But I still cannot detect the SSD.
>>
>> -Neel

pcidump.txt:

hostb0@pci0:0:0:0:	class=0x060000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x9a14 subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    device     = '11th Gen Core Processor Host Bridge/DRAM Registers'
    class      = bridge
    subclass   = HOST-PCI
vgapci0@pci0:0:2:0:	class=0x030000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x9a49 subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    device     = 'UHD Graphics'
    class      = display
    subclass   = VGA
none0@pci0:0:4:0:	class=0x118000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x9a03 subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    class      = dasp
pcib1@pci0:0:7:0:	class=0x060400 rev=0x01 hdr=0x01 vendor=0x8086 device=0x9a23 subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    device     = 'Tiger Lake-LP Thunderbolt PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib2@pci0:0:7:1:	class=0x060400 rev=0x01 hdr=0x01 vendor=0x8086 device=0x9a25
subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    device     = 'Tiger Lake-LP Thunderbolt PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
none1@pci0:0:8:0:	class=0x088000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x9a11 subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    class      = base peripheral
xhci0@pci0:0:13:0:	class=0x0c0330 rev=0x01 hdr=0x00 vendor=0x8086 device=0x9a13 subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    device     = 'Tiger Lake-LP Thunderbolt USB Controller'
    class      = serial bus
    subclass   = USB
none2@pci0:0:13:2:	class=0x0c0340 rev=0x01 hdr=0x00 vendor=0x8086 device=0x9a1b subvendor=0x0000 subdevice=0x0000
    vendor     = 'Intel Corporation'
    device     = 'Tiger Lake-LP Thunderbolt NHI'
    class      = serial bus
    subclass   = USB
none3@pci0:0:14:0:	class=0x010400 rev=0x00 hdr=0x00 vendor=0x8086 device=0x9a0b subvendor=0x8086 subdevice=0x0000
    vendor     = 'Intel Corporation'
    device     = 'Volume Management Device NVMe RAID Controller'
    class      = mass storage
    subclass   = RAID
none4@pci0:0:18:0:	class=0x070000 rev=0x20 hdr=0x00 vendor=0x8086 device=0xa0fc subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    device     = 'Tiger Lake-LP Integrated Sensor Hub'
    class      = simple comms
    subclass   = UART
xhci1@pci0:0:20:0:	class=0x0c0330 rev=0x20 hdr=0x00 vendor=0x8086 device=0xa0ed subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    device     = 'Tiger Lake-LP USB 3.2 Gen 2x1 xHCI Host Controller'
    class      = serial bus
    subclass   = USB
none5@pci0:0:20:2:	class=0x050000 rev=0x20 hdr=0x00 vendor=0x8086 device=0xa0ef subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    device     = 'Tiger Lake-LP Shared SRAM'
    class      = memory
    subclass   = RAM
none6@pci0:0:20:3:	class=0x028000 rev=0x20 hdr=0x00 vendor=0x8086 device=0xa0f0 subvendor=0x8086 subdevice=0x0074
    vendor     = 'Intel Corporation'
    device     = 'Wi-Fi 6 AX201'
    class      = network
ig4iic0@pci0:0:21:0:	class=0x0c8000 rev=0x20 hdr=0x00 vendor=0x8086 device=0xa0e8 subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    device     =
'Tiger Lake-LP Serial IO I2C Controller'
    class      = serial bus
ig4iic1@pci0:0:21:1:	class=0x0c8000 rev=0x20 hdr=0x00 vendor=0x8086 device=0xa0e9 subvendor=0x103c subdevice=0x8709
    vendor     = 'Intel Corporation'
    device     = 'Tiger Lake-LP Serial IO I2C Controller'
    class      = serial bus
none7@pci0:0:22:0:	class=0x078000 rev=0x20 hdr=0x00
Re: Intel TigerLake NVMe vmd: Adding Support & Debugging a Patch
To extend, I am getting an issue with `pci_read_device()` where it
returns a `vid` (PCI Vendor ID) of 0x0000. This ends up returning
"Cannot allocate dinfo!" from vmd.

Log (via grep): https://imgur.com/a/tAmmY7i

-Neel

On 2020-12-30 16:38, Neel Chauhan wrote:
> Hi Chuck,
>
> On 2020-12-30 10:04, Chuck Tuffli wrote:
>> What is the output from
>> # pciconf -rb pci0:0:14:0 0x40:0x48
>> --chuck
>
> The output is:
>
> 01 00 00 00 01 2e 68 02 00
>
> I was also able to stop kernel panics by adding:
>
> rman_fini(&sc->vmd_bus.rman);
>
> under the fail: label in vmd_attach().
>
> But I still cannot detect the SSD.
>
> -Neel
Re: Intel TigerLake NVMe vmd: Adding Support & Debugging a Patch
Hi Chuck,

On 2020-12-30 10:04, Chuck Tuffli wrote:
> What is the output from
> # pciconf -rb pci0:0:14:0 0x40:0x48
> --chuck

The output is:

01 00 00 00 01 2e 68 02 00

I was also able to stop kernel panics by adding:

rman_fini(&sc->vmd_bus.rman);

under the fail: label in vmd_attach().

But I still cannot detect the SSD.

-Neel
Re: Intel TigerLake NVMe vmd: Adding Support & Debugging a Patch
On Tue, Dec 29, 2020 at 6:30 PM Neel Chauhan wrote:
>
> Hi freebsd-hackers@, CC'd freebsd-current@,
>
> I hope you all had a wonderful holiday season.
>
> I recently got a HP Spectre x360 13t-aw200, which is an Intel
> TigerLake-based laptop. It has the Intel "Evo" branding and an
> "Optane" SSD, which I disabled (so I can get a "second" SSD).
>
> On the Spectre, the NVMe is not detected: https://imgur.com/a/ighTwHQ
>
> I don't know if it is HP or Intel, but the VMD device ID is 8086:9a0b.
> I'm guessing Intel, since Dell laptops (XPS, Vostro) also have this
> device ID [1].
>
> Sadly, NVMe RAID is forced on this laptop.
>
> I wrote a rough patch to add the device IDs, and the patch is below:

FWIW, that is the same change I would have made. Peeking at the Linux
vmd driver, it doesn't appear to do anything special for 8086:9a0b as
compared to the 8086:28c0 device the FreeBSD driver already supports.
That said, the Linux driver reads a capability register to determine
the bus number start (vmd_bus_number_start()), which I don't see in the
FreeBSD driver. This is curious because, looking at the "lspci all"
output from the XPS link you provided, the NVMe device shows up in PCI
domain 0x10000 (i.e. not 0x0000). Which (and I have no direct
experience with this device or code) only happens if the bus number
start function returns a non-zero value.

What is the output from

# pciconf -rb pci0:0:14:0 0x40:0x48

--chuck
Intel TigerLake NVMe vmd: Adding Support & Debugging a Patch
Hi freebsd-hackers@, CC'd freebsd-current@,

I hope you all had a wonderful holiday season.

I recently got a HP Spectre x360 13t-aw200, which is an Intel
TigerLake-based laptop. It has the Intel "Evo" branding and an "Optane"
SSD, which I disabled (so I can get a "second" SSD).

On the Spectre, the NVMe is not detected: https://imgur.com/a/ighTwHQ

I don't know if it is HP or Intel, but the VMD device ID is 8086:9a0b.
I'm guessing Intel, since Dell laptops (XPS, Vostro) also have this
device ID [1].

Sadly, NVMe RAID is forced on this laptop.

I wrote a rough patch to add the device IDs, and the patch is below:

--- a/sys/dev/vmd/vmd.c
+++ b/sys/dev/vmd/vmd.c
@@ -66,13 +66,20 @@ struct vmd_type {
 #define INTEL_VENDOR_ID		0x8086
 #define INTEL_DEVICE_ID_VMD	0x201d
 #define INTEL_DEVICE_ID_VMD2	0x28c0
+#define INTEL_DEVICE_ID_VMD3	0x9a0b
 
 static struct vmd_type vmd_devs[] = {
         { INTEL_VENDOR_ID, INTEL_DEVICE_ID_VMD,  "Intel Volume Management Device" },
         { INTEL_VENDOR_ID, INTEL_DEVICE_ID_VMD2, "Intel Volume Management Device" },
+        { INTEL_VENDOR_ID, INTEL_DEVICE_ID_VMD3, "Intel Volume Management Device" },
         { 0, 0, NULL }

However, I get a panic whenever I use this patch: https://imgur.com/a/XUQksOi

Without this patch, I am able to boot fine but can't see the SSD or any
nvd* devices beyond a "none" device in `pciconf -lv`.

For those who know about PCI/ACPI subsystems, can you please tell me
what's going wrong? I'm still debugging in the meanwhile, but am no
expert on PCI/ACPI subsystems. I may know more than most PC builders or
CS grads, but not really enough to do it full-time.

The Spectre's SSD works fine with Windows 10 (obviously) and Linux
(Fedora and Debian tested).
Best,

Neel Chauhan

Sources:

[1]: Linux probes:
* Vostro: https://certification.ubuntu.com/hardware/202007-28047
* XPS: https://linux-hardware.org/?probe=ba53f6e513