Starting from the head of next, I got the MMU with LPAE support working.
I can prepare a patch for the MMU LPAE support and, later, a patch for the
LS1021A PCIE support.
I have not tested the code on QEMU yet.

Do you require the code to be tested in QEMU before I send it?

> -----Original Message-----
> From: barebox <[email protected]> On Behalf Of Renaud
> Barbier
> Sent: 28 January 2026 16:40
> To: Ahmad Fatoum <[email protected]>; Barebox List
> <[email protected]>
> Cc: Lucas Stach <[email protected]>
> Subject: RE: PCIE on LS1021A
> 
> Just to let you know, I was developing against barebox 2024.09, as this
> was a requirement for our product.
> I have started to move the LPAE support over and to follow the next branch.
> Barebox is booting, but it currently fails to probe the PCIe NVME device.
> 
> A bit more debugging and, hopefully, I will have something soon.
> 
> > -----Original Message-----
> > From: Ahmad Fatoum <[email protected]>
> > Sent: 20 January 2026 13:41
> > To: Renaud Barbier <[email protected]>; Barebox List
> > <[email protected]>
> > Cc: Lucas Stach <[email protected]>
> > Subject: Re: PCIE on LS1021A
> > 
> > Hello Renaud,
> >
> > On 1/13/26 7:26 PM, Renaud Barbier wrote:
> > > After moving the NVME to the PCIe2 bus and fixing a few things in the
> > > MMU support, I am now able to detect the NVME:
> > >
> > > nvme pci-126f:2263.0: serial: A012410180629000000
> > > nvme pci-126f:2263.0: model: SM681GEF AGS
> > > nvme pci-126f:2263.0: firmware: TFX7GB
> > >
> > > barebox:/ ls /dev/nvme0n1
> > > barebox:/ ls /dev/nvme0n1*
> > > /dev/nvme0n1                        /dev/nvme0n1.0
> > > /dev/nvme0n1.1                      /dev/nvme0n1.2
> > > /dev/nvme0n1.3                      /dev/nvme0n1.4
> > > ...
> > >
> > > Thanks to the following remapping:
> > >
> > > /* PCIe1 config and memory area remapping */
> > > map_io_sections(0x4000000000ULL, IOMEM(0x24000000), 192 << 20); /* PCIe1 conf space */
> > > //map_io_sections(0x4040000000ULL, IOMEM(0x40000000), 128 << 20); /* PCIe1 mem space */
> > >
> > > /* PCIe2 config and memory area remapping */
> > > map_io_sections(0x4800000000ULL, IOMEM(0x34000000), 192 << 20); /* PCIe2 conf space */
> > > map_io_sections(0x4840000000ULL, IOMEM(0x50000000), 128 << 20); /* PCIe2 mem space */
> > >
> > > For some reason, I had to comment out the remapping of the PCIe1 MEM
> > > space, as the system hangs just after detecting the NVME device.
> > > The PCIe1 device node is not even enabled.
> > > If you have a clue, let me know.
> >
> > I don't have an idea off the top of my head, sorry.
> > If you have something roughly working, it would be good if you could
> > check that it works with qemu-system-arm -M virt,highmem=on and send
> > an initial patch series.
> >
> > Cheers,
> > Ahmad
> >
> > >
> > > Cheers,
> > > Renaud
> > >
> > >
> > >
> > >
> > >> -----Original Message-----
> > >> From: barebox <[email protected]> On Behalf Of
> > >> Renaud Barbier
> > >> Sent: 07 January 2026 09:44
> > >> To: Ahmad Fatoum <[email protected]>; Barebox List
> > >> <[email protected]>
> > >> Cc: Lucas Stach <[email protected]>
> > >> Subject: RE: PCIE on LS1021A
> > >> 
> > >> Based on your information and U-Boot, I have started to work on the
> > >> LPAE support. So far it is full of debugging and hacks.
> > >>
> > >> It is based on the mmu_32.c file. As I have failed to use the three
> > >> MMU tables, at present I am using only two, as in U-Boot.
> > >> The 64-bit PCI space is remapped with:
> > >>
> > >> map_io_sections(0x4000000000ULL, IOMEM(0x24000000UL), 192 << 20);
> > >>
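> > >> Roughly, the two-level layout I am using looks like this (a sketch
> > >> only, not the actual code; the names are mine and the descriptor
> > >> attribute bits are simplified):
> > >>
> > >> #include <linux/types.h>
> > >>
> > >> #define TYPE_TABLE 0x3ULL        /* level-1 entry points to a level-2 table */
> > >> #define TYPE_BLOCK 0x1ULL        /* level-2 entry maps a 2MB block */
> > >> #define ATTR_AF    (1ULL << 10)  /* access flag */
> > >>
> > >> /* T0SZ=0: 4GB of VA, i.e. four 1GB level-1 entries */
> > >> static u64 lvl1[4] __attribute__((aligned(4096)));
> > >> static u64 lvl2[4][512] __attribute__((aligned(4096)));
> > >>
> > >> /* Map one 2MB block at 32-bit VA 'virt' to PA 'phys'. With LPAE the
> > >>  * block descriptor carries a 40-bit output address, which is what
> > >>  * lets a 32-bit VA reach 0x40.0000.0000. */
> > >> static void map_block_2mb(u32 virt, u64 phys, u64 attr)
> > >> {
> > >>         unsigned int i1 = (virt >> 30) & 0x3;    /* 1GB slot */
> > >>         unsigned int i2 = (virt >> 21) & 0x1ff;  /* 2MB slot */
> > >>
> > >>         lvl1[i1] = (u64)(unsigned long)lvl2[i1] | TYPE_TABLE;
> > >>         lvl2[i1][i2] = (phys & ~0x1fffffULL) | attr | ATTR_AF | TYPE_BLOCK;
> > >> }
> > >>
> > >> The map_io_sections() call above then amounts to calling
> > >> map_block_2mb() 96 times (192MB in 2MB steps) with device attributes.
> > >>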
> > >> To detect the NVME device, the virtual address 0x24000000 is
> > >> hard-coded into the functions dw_pcie_[wr|rd]_other_conf of
> > >> drivers/pci/pcie-designware-host.c as follows:
> > >>
> > >> if (bus->primary == pp->root_bus_nr) {
> > >>         type = PCIE_ATU_TYPE_CFG0;
> > >>         cpu_addr = pp->cfg0_base;
> > >>         cfg_size = pp->cfg0_size;
> > >>         pp->va_cfg0_base = IOMEM(0x24000000); /* XXX */
> > >>         va_cfg_base = pp->va_cfg0_base;
> > >>
> > >> What is the method to pass the address to the driver?
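> > >>
> > >> One option I have in mind (just a sketch; none of these helpers exist
> > >> and the names are made up) is to let the board code register the
> > >> virtual base it mapped for a given physical window, and have the
> > >> driver look it up instead of hard-coding it:
> > >>
> > >> #include <linux/types.h>
> > >>
> > >> /* Hypothetical registry, filled by board code after map_io_sections() */
> > >> static struct {
> > >>         u64 phys;
> > >>         void __iomem *virt;
> > >> } pci_cfg_map[2];
> > >>
> > >> void pci_register_cfg_mapping(int idx, u64 phys, void __iomem *virt)
> > >> {
> > >>         pci_cfg_map[idx].phys = phys;
> > >>         pci_cfg_map[idx].virt = virt;
> > >> }
> > >>
> > >> void __iomem *pci_cfg_virt(u64 phys)
> > >> {
> > >>         int i;
> > >>
> > >>         for (i = 0; i < 2; i++)
> > >>                 if (pci_cfg_map[i].phys == phys)
> > >>                         return pci_cfg_map[i].virt;
> > >>         return NULL;
> > >> }
> > >>
> > >> The driver would then do pp->va_cfg0_base = pci_cfg_virt(pp->cfg0_base)
> > >> instead of the hard-coded IOMEM(0x24000000).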
> > >>
> > >> And I get the following:
> > >> layerscape-pcie pcie@3400000: host bridge /soc/pcie@3400000 ranges:
> > >> layerscape-pcie pcie@3400000: Parsing ranges property...
> > >> layerscape-pcie pcie@3400000:   IO 0x4000010000..0x400001ffff -> 0x0000000000
> > >> layerscape-pcie pcie@3400000:  MEM 0x4040000000..0x407fffffff -> 0x0040000000
> > >>
> > >> ERROR: io_bus_addr = 0x0, io_base = 0x4000010000
> > >> ERROR: mem_bus_addr = 0x4040000000
> > >>   -> Based on the Linux output, mem_bus_addr should be 0x4000.0000,
> > >>      as that is what gets programmed into the ATU target register
> > >>      (see the ATU sketch further below).
> > >> ERROR: mem_base = 0x4040000000, offset = 0x0
> > >>
> > >> ERROR: layerscape-pcie pcie@3400000: iATU unroll: disabled
> > >>
> > >> pci: pci_scan_bus for bus 0
> > >> pci:  last_io = 0x00010000, last_mem = 0x40000000, last_mem_pref = 0x00000000
> > >> pci: class = 00000604, hdr_type = 00000001
> > >> pci: 00:00 [1957:0e0a]
> > >> pci: pci_scan_bus for bus 1
> > >> pci:  last_io = 0x00010000, last_mem = 0x40000000, last_mem_pref = 0x00000000
> > >>
> > >> pci: class = 00000108, hdr_type = 00000000
> > >> pci: 01:00 [126f:2263] -> NVME device found
> > >> pci: pbar0: mask=ffffc004 NP-MEM 16384 bytes
> > >> ERROR: pci: &&&  sub = 0x2263, 0x126f kind = NP-MEM&&&
> > >> ERROR: pci: &&& write BAR 0x10 = 0x40000000 &&& ...
> > >> pci: pci_scan_bus returning with max=02
> > >> pci: bridge NP limit at 0x40100000
> > >> pci: bridge IO limit at 0x00010000
> > >> pci: pbar0: mask=ff000000 NP-MEM 16777216 bytes
> > >> pci: pbar1: mask=fc000000 NP-MEM 67108864 bytes
> > >> pci: pci_scan_bus returning with max=02
> > >> ERROR: nvme pci-126f:2263.0: enabling bus mastering
> > >>
> > >> Then the system hangs on the readl() shown below:
> > >>
> > >> ERROR: nvme_pci_enable : 0x4000001c
> > >>   -> Fails to access the NVME CSTS register. It does not matter if
> > >>      mem_bus_addr is set to 0x4000.0000 to program the ATU to
> > >>      translate the address 0x40.4000.0000 to 0x4000.0000.
> > >>
> > >> if (readl(dev->bar + NVME_REG_CSTS) == -1)
> > >>
> > >> 0x4000.0000 is also the QuadSPI memory area, so I guess I should
> > >> remap that access too.
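> > >>
> > >> For reference, the outbound window programming I keep referring to
> > >> boils down to the following (a sketch of what the common DesignWare
> > >> driver does through its iATU viewport registers; 'dbi' is the
> > >> controller register base, and the values are for the PCIe1 MEM
> > >> window, none of this verified on LS1021A):
> > >>
> > >> #include <io.h> /* writel(), barebox */
> > >>
> > >> #define PCIE_ATU_VIEWPORT        0x900
> > >> #define PCIE_ATU_REGION_OUTBOUND 0x0
> > >> #define PCIE_ATU_CR1             0x904
> > >> #define PCIE_ATU_TYPE_MEM        0x0
> > >> #define PCIE_ATU_CR2             0x908
> > >> #define PCIE_ATU_ENABLE          (0x1u << 31)
> > >> #define PCIE_ATU_LOWER_BASE      0x90c
> > >> #define PCIE_ATU_UPPER_BASE      0x910
> > >> #define PCIE_ATU_LIMIT           0x914
> > >> #define PCIE_ATU_LOWER_TARGET    0x918
> > >> #define PCIE_ATU_UPPER_TARGET    0x91c
> > >>
> > >> /* Outbound window 0: CPU 0x40.4000.0000 (+128MB) -> bus 0x4000.0000 */
> > >> static void pcie1_prog_ob_mem(void __iomem *dbi)
> > >> {
> > >>         writel(PCIE_ATU_REGION_OUTBOUND | 0, dbi + PCIE_ATU_VIEWPORT);
> > >>         writel(0x40000000, dbi + PCIE_ATU_LOWER_BASE);   /* CPU addr, low  */
> > >>         writel(0x40,       dbi + PCIE_ATU_UPPER_BASE);   /* CPU addr, high */
> > >>         writel(0x47ffffff, dbi + PCIE_ATU_LIMIT);        /* base + 128MB - 1 */
> > >>         writel(0x40000000, dbi + PCIE_ATU_LOWER_TARGET); /* bus addr, low  */
> > >>         writel(0x0,        dbi + PCIE_ATU_UPPER_TARGET);
> > >>         writel(PCIE_ATU_TYPE_MEM, dbi + PCIE_ATU_CR1);
> > >>         writel(PCIE_ATU_ENABLE,   dbi + PCIE_ATU_CR2);
> > >> }
> > >>
> > >> If the target register gets 0x4040000000 (the mem_bus_addr above)
> > >> instead of 0x4000.0000, reads through the window go to the wrong bus
> > >> address, which would explain the hang on the CSTS readl().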
> > >>
> > >> Unfortunately, my work has now come to a stop, as there is a
> > >> hardware failure on my system.
> > >>
> > >> Note: the MMU may not be set up properly, as the out-of-band
> > >> interface fails on TX timeout. I can still reach the prompt after
> > >> the NVME probing has failed.
> > >>
> > >>
> > >>
> > >
> >
> > --
> > Pengutronix e.K.                  |                             |
> > Steuerwalder Str. 21              | http://www.pengutronix.de/  |
> > 31137 Hildesheim, Germany         | Phone: +49-5121-206917-0    |
> > Amtsgericht Hildesheim, HRA 2686  | Fax:   +49-5121-206917-5555 |
