I think UEFI would be a matter of the bootloader that the firmware starts. I don't know whether the UEFI bootstrap part would be handled by seL4 itself or by something like GRUB/GRUB2.
My wild-ass guess is that anything after seL4 takes over would be handled by a driver process that has the required capabilities to interface with the corresponding hardware.

On Fri, Sep 29, 2017 at 9:07 AM, Edward Sandberg <[email protected]> wrote:
> Hello,
>
> I built Genode 17.08 with run option "image/uefi" and platform option "sel4_x86_32". I have tried booting the resulting image on a variety of machines but am not getting the behavior I expect. The machines I have tried are:
>
> * UP board (a Windows-compatible Atom-based system)
> * Dell M3800 laptop
> * Dell Latitude 7280 laptop
> * HP ProLiant DL380 Gen9 server
> * Dell OptiPlex 990
> * Asus P9X79 WS motherboard with Intel Core i7-3930K
>
> The furthest I have gotten was on the UP board, which only supports UEFI boot:
>
> https://www.intel.com/content/www/us/en/support/emerging-technologies/intel-realsense-technology/000022699.html
>
> I saw serial output but no graphical output (see below). The other machines I tried to boot on didn't even give me serial output. The serial output seems to indicate that an attempt to create a framebuffer failed.
>
> Is this the correct build procedure?
> What hardware have you used to test UEFI boot?
> Does the trace below suggest any experiments to try?
>
> *****************************************************************
>
> WARNING: no console will be available to OS
> Bender: Hello World.
>
> Boot config: parsing cmdline ''
> Boot config: console_port = 0x3f8
> Boot config: debug_port = 0x3f8
> Boot config: disable_iommu = false
>
> Boot config: parsing cmdline 'sel4 disable_iommu'
> Boot config: console_port = 0x3f8
> Boot config: debug_port = 0x3f8
> Boot config: disable_iommu = true
> module #0: start=0xeaba000 end=0xf0ff968 size=0x645968 name='image.elf'
> Physical Memory Region from 0 size 8f000 type 1
> Physical Memory Region from 8f000 size 1000 type 4
> Physical Memory Region from 90000 size e000 type 1
> Physical Memory Region from 9e000 size 2000 type 2
> Physical Memory Region from 100000 size 1ef00000 type 1
> Adding physical memory region 0x100000-0x1f000000
> Physical Memory Region from 1f000000 size 1200000 type 2
> Physical Memory Region from 20200000 size 3af29000 type 1
> Physical Memory Region from 5b129000 size 30000 type 2
> Physical Memory Region from 5b159000 size 25000 type 3
> Physical Memory Region from 5b17e000 size 5d2000 type 4
> Physical Memory Region from 5b750000 size 2b9000 type 2
> Physical Memory Region from 5ba09000 size 79000 type 20
> Physical Memory Region from 5ba82000 size 57e000 type 1
> Physical Memory Region from e0000000 size 4000000 type 2
> Physical Memory Region from fea00000 size 100000 type 2
> Physical Memory Region from fec00000 size 1000 type 2
> Physical Memory Region from fed01000 size 1000 type 2
> Physical Memory Region from fed03000 size 1000 type 2
> Physical Memory Region from fed06000 size 1000 type 2
> Physical Memory Region from fed08000 size 2000 type 2
> Physical Memory Region from fed1c000 size 1000 type 2
> Physical Memory Region from fed80000 size 40000 type 2
> Physical Memory Region from fee00000 size 1000 type 2
> Physical Memory Region from ffc00000 size 400000 type 2
> Kernel loaded to: start=0x200000 end=0x281000 size=0x81000 entry=0x20007e
> ACPI: RSDP paddr=0x3c490
> ACPI: RSDP vaddr=0xdfc3c490
> ACPI: RSDT paddr=0x5b161028
> ACPI: RSDT vaddr=0xdfd61028
> ACPI: FADT paddr=0x5b1611a8
> ACPI: FADT vaddr=0xdfd611a8
> ACPI: FADT flags=0x421
> ACPI: MADT paddr=0x5b177380
> ACPI: MADT vaddr=0xdfd77380
> ACPI: MADT apic_addr=0xfee00000
> ACPI: MADT flags=0x1
> ACPI: MADT_APIC apic_id=0x0
> ACPI: MADT_APIC apic_id=0x2
> ACPI: MADT_APIC apic_id=0x4
> ACPI: MADT_APIC apic_id=0x6
> ACPI: MADT_IOAPIC ioapic_id=1 ioapic_addr=0xfec00000 gsib=0
> ACPI: MADT_ISO bus=0 source=0 gsi=2 flags=0x0
> ACPI: MADT_ISO bus=0 source=9 gsi=9 flags=0xd
> ACPI: 4 CPU(s) detected
> Detected 1 boot module(s):
> ELF-loading userland images from boot modules:
>   size=0xd87000 v_entry=0x2000000 v_start=0x2000000 v_end=0x2d87000 p_start=0xf100000 p_end=0xfe87000
> Moving loaded userland images to final location: from=0xf100000 to=0x281000 size=0xd87000
> Starting node #0 with APIC ID 0
>
> Starting node #1 with APIC ID 2
> Starting node #2 with APIC ID 4
> Starting node #3 with APIC ID 6
> Booting all finished, dropped to user space
> virtual address layout of core:
>   overall    [00002000,c0000000)
>   core image [02000000,02d87000)
>   ipc buffer [02d87000,02d88000)
>   boot_info  [02d88000,02d8a000)
>   stack area [40000000,50000000)
> Warning: need physical memory, but Platform object not constructed yet
> Warning: need physical memory, but Platform object not constructed yet
> Warning: need physical memory, but Platform object not constructed yet
> Warning: need physical memory, but Platform object not constructed yet
> :phys_alloc: Allocator 0x2800df8 dump:
> Block: [00200000,00201000) size=4K avail=0 max_avail=0
> Block: [00201000,00202000) size=4K avail=0 max_avail=0
> Block: [00202000,00203000) size=4K avail=0 max_avail=0
> Block: [00203000,00204000) size=4K avail=0 max_avail=0
> Block: [00204000,00205000) size=4K avail=0 max_avail=0
> Block: [00205000,00206000) size=4K avail=0 max_avail=0
> Block: [00206000,00207000) size=4K avail=0 max_avail=0
> Block: [00207000,00208000) size=4K avail=0 max_avail=0
> Block: [00208000,00209000) size=4K avail=0 max_avail=0
> Block: [00209000,0020a000) size=4K avail=0 max_avail=0
> Block: [0020a000,0020b000) size=4K avail=0 max_avail=0
> Block: [0020b000,0020c000) size=4K avail=0 max_avail=0
> Block: [0020c000,0020d000) size=4K avail=0 max_avail=0
> Block: [01008000,01009000) size=4K avail=0 max_avail=0
> Block: [01009000,0100a000) size=4K avail=0 max_avail=0
> Block: [0100a000,0100b000) size=4K avail=0 max_avail=0
> Block: [0100b000,0100c000) size=4K avail=0 max_avail=0
> Block: [0100c000,0100d000) size=4K avail=0 max_avail=0
> Block: [0100d000,0100e000) size=4K avail=0 max_avail=0
> Block: [0100e000,0100f000) size=4K avail=0 max_avail=0
> Block: [0100f000,01010000) size=4K avail=0 max_avail=0
> Block: [01010000,01011000) size=4K avail=0 max_avail=0
> Block: [01011000,01012000) size=4K avail=0 max_avail=0
> Block: [01012000,01013000) size=4K avail=0 max_avail=0
> Block: [01013000,01014000) size=4K avail=0 max_avail=0
> Block: [01014000,01015000) size=4K avail=0 max_avail=0
> Block: [01015000,01016000) size=4K avail=0 max_avail=0
> Block: [01016000,01017000) size=4K avail=0 max_avail=0
> Block: [01017000,01018000) size=4K avail=0 max_avail=0
> Block: [01018000,01019000) size=4K avail=0 max_avail=0
> Block: [01019000,0101a000) size=4K avail=0 max_avail=0
> Block: [0101a000,0101b000) size=4K avail=0 max_avail=0
> Block: [0101b000,0101c000) size=4K avail=0 max_avail=0
> Block: [0101c000,0101d000) size=4K avail=0 max_avail=0
> Block: [0101d000,0101e000) size=4K avail=0 max_avail=0
> Block: [0101e000,0101f000) size=4K avail=0 max_avail=0
> Block: [0101f000,01020000) size=4K avail=0 max_avail=0
> Block: [01020000,01021000) size=4K avail=0 max_avail=0
> Block: [01021000,01022000) size=4K avail=0 max_avail=0
> Block: [01022000,01023000) size=4K avail=0 max_avail=0
> Block: [01023000,01024000) size=4K avail=0 max_avail=0
> Block: [01024000,01025000) size=4K avail=0 max_avail=0
> Block: [01025000,01026000) size=4K avail=0 max_avail=0
> Block: [01026000,01027000) size=4K avail=0 max_avail=0
> Block: [01027000,01028000) size=4K avail=0 max_avail=0
> Block: [01028000,01029000) size=4K avail=0 max_avail=0
> Block: [01029000,0102a000) size=4K avail=0 max_avail=0
> Block: [0102a000,0102b000) size=4K avail=0 max_avail=0
> Block: [0102b000,0102c000) size=4K avail=0 max_avail=0
> Block: [0102c000,0102d000) size=4K avail=0 max_avail=400M
> Block: [0102d000,0102e000) size=4K avail=0 max_avail=0
> Block: [0102e000,0102f000) size=4K avail=0 max_avail=0
> Block: [0102f000,01030000) size=4K avail=0 max_avail=0
> Block: [01030000,01031000) size=4K avail=0 max_avail=0
> Block: [01031000,01032000) size=4K avail=0 max_avail=0
> Block: [01032000,01033000) size=4K avail=0 max_avail=0
> Block: [01033000,01034000) size=4K avail=0 max_avail=0
> Block: [01034000,01035000) size=4K avail=0 max_avail=0
> Block: [01035000,01036000) size=4K avail=0 max_avail=0
> Block: [01036000,01037000) size=4K avail=0 max_avail=0
> Block: [01037000,01038000) size=4K avail=0 max_avail=0
> Block: [01038000,01039000) size=4K avail=0 max_avail=0
> Block: [01039000,0103a000) size=4K avail=0 max_avail=0
> Block: [0103a000,0103b000) size=4K avail=0 max_avail=0
> Block: [0103b000,0103c000) size=4K avail=0 max_avail=0
> Block: [0103c000,0103d000) size=4K avail=0 max_avail=0
> Block: [0103d000,0103e000) size=4K avail=0 max_avail=0
> Block: [0103e000,0103f000) size=4K avail=0 max_avail=0
> Block: [0103f000,01040000) size=4K avail=0 max_avail=0
> Block: [01040000,01041000) size=4K avail=0 max_avail=0
> Block: [01041000,01042000) size=4K avail=0 max_avail=0
> Block: [01042000,01043000) size=4K avail=0 max_avail=0
> Block: [01043000,01044000) size=4K avail=0 max_avail=0
> Block: [01044000,01045000) size=4K avail=0 max_avail=400M
> Block: [01045000,01046000) size=4K avail=0 max_avail=0
> Block: [01046000,01047000) size=4K avail=0 max_avail=0
> Block: [01047000,01048000) size=4K avail=0 max_avail=0
> Block: [01048000,01049000) size=4K avail=0 max_avail=0
> Block: [01049000,0104a000) size=4K avail=0 max_avail=0
> Block: [0104a000,0104b000) size=4K avail=0 max_avail=0
> Block: [0104b000,0104c000) size=4K avail=0 max_avail=0
> Block: [0104c000,0104d000) size=4K avail=0 max_avail=0
> Block: [0104d000,0104e000) size=4K avail=0 max_avail=0
> Block: [0104e000,0104f000) size=4K avail=0 max_avail=0
> Block: [0104f000,01050000) size=4K avail=0 max_avail=0
> Block: [01050000,01051000) size=4K avail=0 max_avail=400M
> Block: [01051000,01052000) size=4K avail=0 max_avail=0
> Block: [01052000,01053000) size=4K avail=0 max_avail=3756K
> Block: [01053000,01054000) size=4K avail=0 max_avail=0
> Block: [01054000,01055000) size=4K avail=0 max_avail=3756K
> Block: [01055000,01400000) size=3756K avail=3756K max_avail=3756K
> Block: [01800000,01801000) size=4K avail=0 max_avail=400M
> Block: [01801000,02000000) size=8188K avail=8188K max_avail=8188K
> Block: [03000000,1c000000) size=400M avail=400M max_avail=400M
> Block: [1e000000,1ebe0000) size=12160K avail=12160K max_avail=12160K
> => mem_size=444485632 (423 MB) / mem_avail=444112896 (423 MB)
>
> :unused_phys_alloc: Allocator 0x28063b8 dump:
> Block: [00100000,00200000) size=1M avail=1M max_avail=1M
> Block: [0020d000,01008000) size=14316K avail=14316K max_avail=14316K
> Block: [01400000,01800000) size=4M avail=4M max_avail=4M
> Block: [02000000,03000000) size=16M avail=16M max_avail=32M
> Block: [1c000000,1e000000) size=32M avail=32M max_avail=32M
> Block: [1ebe0000,1f000000) size=4224K avail=4224K max_avail=32M
> Block: [fec00000,fec01000) size=4K avail=4K max_avail=4K
> Block: [fee00000,fee01000) size=4K avail=4K max_avail=64K
> Block: [ffff0000,ffffffff] size=64K avail=64K max_avail=64K
> => mem_size=74633216 (71 MB) / mem_avail=74633216 (71 MB)
>
> :unused_virt_alloc: Allocator 0x2807424 dump:
> Block: [00002000,02000000) size=32760K avail=32760K max_avail=32760K
> Block: [02d8a000,04d8a000) size=32M avail=0 max_avail=0
> Block: [04d8a000,40000000) size=969176K avail=969176K max_avail=1792M
> Block: [50000000,c0000000) size=1792M avail=1792M max_avail=1792M
> => mem_size=2938585088 (2802 MB) / mem_avail=2905030656 (2770 MB)
>
> :virt_alloc: Allocator 0x2801e64 dump:
> Block: [0283e000,0283f000) size=4K avail=0 max_avail=0
> Block: [0283f000,02840000) size=4K avail=0 max_avail=0
> Block: [02840000,02841000) size=4K avail=0 max_avail=32M
> Block: [02841000,02842000) size=4K avail=0 max_avail=0
> Block: [02842000,02d87000) size=5396K avail=5396K max_avail=32M
> Block: [02d8a000,04d8a000) size=32M avail=32M max_avail=32M
> => mem_size=39096320 (37 MB) / mem_avail=39079936 (37 MB)
>
> :io_mem_alloc: Allocator 0x2802edc dump:
> Block: [00000000,00100000) size=1M avail=1M max_avail=1M
> Block: [1f000000,fec00000) size=3580M avail=3580M max_avail=3580M
> Block: [fec01000,fee00000) size=2044K avail=2044K max_avail=18364K
> Block: [fee01000,ffff0000) size=18364K avail=18364K max_avail=18364K
> => mem_size=3775848448 (3600 MB) / mem_avail=3775848448 (3600 MB)
>
> boot module 'acpi_drv' (96740 bytes)
> boot module 'fb_drv' (305596 bytes)
> boot module 'status_bar' (114120 bytes)
> boot module 'init' (254488 bytes)
> boot module 'platform_drv' (289772 bytes)
> boot module 'ps2_drv' (110324 bytes)
> boot module 'testnit' (84060 bytes)
> boot module 'config' (6549 bytes)
> boot module 'launchpad.config' (594 bytes)
> boot module 'ld.lib.so' (702684 bytes)
> boot module 'rom_filter' (86348 bytes)
> boot module 'timer' (89704 bytes)
> boot module 'xray_trigger' (112640 bytes)
> boot module 'pointer' (71144 bytes)
> boot module 'report_rom' (101328 bytes)
> boot module 'nitpicker' (265900 bytes)
> boot module 'scout' (1703896 bytes)
> boot module 'liquid_fb' (248988 bytes)
> boot module 'launchpad' (740624 bytes)
> boot module 'nitlog' (124736 bytes)
> Warning: need physical memory, but Platform object not constructed yet
> Warning: need physical memory, but Platform object not constructed yet
> Genode 17.08
> 423 MiB RAM and 261142 caps assigned to init
> Warning: void Genode::Rpc_cap_factory::free(Genode::Native_capability) not implemented - resources leaked: 0x1
> Warning: void Genode::Rpc_cap_factory::free(Genode::Native_capability) not implemented - resources leaked: 0x2
> Warning: void Genode::Rpc_cap_factory::free(Genode::Native_capability) not implemented - resources leaked: 0x4
> Warning: void Genode::Rpc_cap_factory::free(Genode::Native_capability) not implemented - resources leaked: 0x8
> Warning: void Genode::Rpc_cap_factory::free(Genode::Native_capability) not implemented - resources leaked: 0x10
> [init] child "nitpicker_config" announces service "ROM"
>
> [init] child "acpi_report_rom" announces service "Report"
> [init] child "report_rom" announces service "Report"
> [init] child "acpi_report_rom" announces service "ROM"
> [init] child "report_rom" announces service "ROM"
> [init] child "timer" announces service "Timer"
> Warning: void Genode::Rpc_cap_factory::free(Genode::Native_capability) not implemented - resources leaked: 0x20
> [init -> nitpicker_config] Warning: top-level node <xray> missing in input ROM xray
> [init -> nitpicker_config] Warning: could not obtain input value for input xray_enabled
> [init -> acpi_drv] Found MADT
>
> [init -> acpi_drv] MADT IRQ 0 -> GSI 2 flags: 0
> [init -> acpi_drv] MADT IRQ 9 -> GSI 9 flags: 13
> [init -> acpi_drv] Found MCFG
> [init -> acpi_drv] MCFG BASE 0xe0000000 seg 0x0 bus 0x0-0xff
> Warning: void Genode::Rpc_cap_factory::free(Genode::Native_capability) not implemented - resources leaked: 0x40
> Warning: unmapping of managed dataspaces not yet supported
>
> Warning: void Genode::Rpc_cap_factory::free(Genode::Native_capability) not implemented - resources leaked: 0x80
> [init] child "platform_drv" announces service "Platform"
>
> [init -> fb_drv] Found PCI VGA at 00:02.0
> [init -> fb_drv] fb mapped to 0x2000
> [init] child "fb_drv" announces service "Framebuffer"
> [init -> fb_drv] Warning: VBE Bios not present
> [init -> fb_drv] Warning: Could not set vesa mode 0x0@16
> [init -> ps2_drv] Error: no data available
> [init -> nitpicker] Error: Framebuffer-session creation failed (ram_quota=8192, cap_quota=3)
> [init -> nitpicker] Error: __cxa_guard_abort called
>
> Kernel: Thread 'ep' died because of an uncaught exception
> [init -> nitpicker] Error: Uncaught exception of type 'Genode::Service_denied'
> [init -> nitpicker] Warning: abort called - thread: ep
>
> [init] child "nitpicker" exited with exit value 1
> [init -> ps2_drv] Error: no data available
> [init -> ps2_drv] i8042: self test failed (0x23)
> [init -> ps2_drv] Error: failed to read from port
> [init -> ps2_drv] Warning: scan code setting not supported
> [init -> ps2_drv] Using keyboard with scan code set 1
> [init -> ps2_drv] Error: failed to read from port
> [init -> ps2_drv] Warning: could not reset mouse (missing ack)
> [init -> ps2_drv] Error: failed to read from port
> [init -> ps2_drv] Warning: could not reset mouse (unexpected response)
> [init -> ps2_drv] Error: failed to read from port
> [init -> ps2_drv] Error: failed to read from port
> [init -> ps2_drv] Warning: could not enable stream
> [init -> ps2_drv] Error: failed to read from port
> [init -> ps2_drv] Error: failed to read from port
> [init -> ps2_drv] Error: failed to read from port
> [init -> platform_drv] PS2 uses IRQ, vector 0x1
> [init -> platform_drv] PS2 uses IRQ, vector 0xc
> [init] child "ps2_drv" announces service "Input"
>
> On 08/25/2017 08:31 AM, Alexander Boettcher wrote:
> > Hello,
> >
> > since last week we successfully added UEFI support to Genode/seL4 for x86.
> >
> > In this course we extended the seL4 6.0 kernel (besides the NOVA kernel and our own kernel, Genode/hw) to also be a Multiboot2 (MBI2) kernel. The MBI2 specification [3] provides the ACPI RSDP information to the kernel, which was the main reason to add MBI2 support. Together with GRUB2 (as a UEFI-capable bootloader) we were able to get our setups running on various native x86 machines and on Qemu.
> >
> > Additionally, we extended the 3 kernels to propagate the ACPI RSDP information further to userland, since the ACPI driver there also failed to look up the RSDP.
> >
> > The patches for the seL4 kernel are currently on our staging branch (our automatically tested branch) and will eventually show up in the upcoming release next week.
> >
> > Currently the patches are tied to Genode, but I can open up a feature issue on the seL4 GitHub if you are fine with the general direction - adding MBI2 support in general. Or you would rather go in another direction, like writing your own UEFI boot loader, which may be more minimal compared to GRUB2, etc.
> >
> > I have to admit that the code added to the seL4 kernel is far from optimal - the amount of code (redundant MBI1 vs. MBI2 handling), the correctness of the code (I'm not super familiar with the internals of the seL4 kernel), missing framebuffer information, etc. - but we can discuss this in more detail on GitHub, if wanted.
> >
> > Cheers,
> >
> > Alex.
> >
> > [0] https://github.com/genodelabs/genode/commit/b9aa16eb3e671a7e3c1474b076a244c7c97e5dea
> >     "sel4: kernel patch to get ACPI information"
> > [1] https://github.com/genodelabs/genode/commit/c09783eed9a52ad72e8a1a986b832303574612ba
> >     "sel4: add uefi boot support via mbi2"
> > [3] http://git.savannah.gnu.org/cgit/grub.git/tree/doc/multiboot.texi?h=multiboot2
> >
> > On 10.08.2017 16:50, [email protected] wrote:
> >> Hi Edward,
> >>
> >> In the near future? Unfortunately not. UEFI support is definitely something that we talk about every so often, but just never makes it high enough up the priority list for us internally.
> >>
> >> A configuration option for overriding the RSDP search doesn't sound too unreasonable in cases where there isn't a BIOS region to search. At least until we can retrieve the address from the UEFI runtime.
> >>
> >> It is entirely possible that any number of tables and initialization steps need to be handled before the timer, or other hardware, will work. Currently the ACPI tables here are just being used to find the base address of the HPET, and it is assumed that it is in a working state and no further setup needs to be done.
> >>
> >> As for the IRQ numbers: in seL4 you are seeing the local CPU vector delivery number, not the source I/O APIC interrupt number or GSI. To determine the IRQ source you could check the x86KSIRQState for the local CPU vector (in this case 125), unpack the x86_irq_state_t type, and see where it came from.
> >>
> >> The user code, though, first tries to use the HPET, and if it cannot find that (i.e. it's not in the ACPI tables) it falls back to the PIT. If it finds a HPET it will try to use FSB (i.e. MSI) delivery, and failing that fall back to I/O APIC delivery. If you want to work out which of these it's using you could either infer it from the x86KSIRQState as mentioned above or instrument https://github.com/seL4/util_libs/blob/fff76a36a02b8ccef3aa0b201751c57b62ac3621/libplatsupport/src/plat/pc99/ltimer.c#L225 and https://github.com/seL4/util_libs/blob/fff76a36a02b8ccef3aa0b201751c57b62ac3621/libplatsupport/src/plat/pc99/ltimer.c#L306 to see what exactly it is doing.
> >>
> >> Adrian
> >>
> >> On Thu 10-Aug-2017 4:56 AM, Edward Sandberg wrote:
> >>
> >> Is there a plan to add UEFI support in seL4 for x86 hardware in the near future? Newer x86 boards are frequently UEFI only.
It is possible to get around the lack of UEFI support, as I have done with the UP board:
> >>
> >> https://up-community.org/wiki/Hardware_Specification
> >>
> >> but I am hitting problems which I will detail below.
> >>
> >> When I compile using ia32_debug_xml_defconfig and boot using the resulting images, the board fails to find the RSDP location. To fix this I had to modify the source code a bit:
> >>
> >> * seL4test/projects/util_libs/libplatsupport/src/plat/pc99/acpi/acpi.h
> >>
> >>   + #define UPBOARD_RSDP 0x5B161000
> >>
> >> * seL4test/projects/util_libs/libplatsupport/src/plat/pc99/acpi/acpi.c
> >>
> >>   - acpi->rsdp = acpi_sig_search(acpi, ACPI_SIG_RSDP, strlen(ACPI_SIG_RSDP),
> >>   -                              (void *) BIOS_PADDR_START, (void *) BIOS_PADDR_END);
> >>   + acpi->rsdp = (void *)UPBOARD_RSDP;
> >>
> >> * seL4test/kernel/src/plat/pc99/machine/acpi.c
> >>
> >>   - for (addr = (char*)BIOS_PADDR_START; addr < (char*)BIOS_PADDR_END; addr += 16) {
> >>   + for (addr = (char*)0; addr < (char*)PPTR_BASE; addr += 16) {
> >>
> >> It would be handy to have this as a kernel parameter to cover cases where it is not successfully discovered automatically. With these changes I can boot the board and several tests pass, but then I get stuck on INTERRUPT0001 (Test interrupts with timer). I don't get a test failure or an error; the board just sits and makes no more progress. Someone had that test fail in this post:
> >>
> >> https://sel4.systems/pipermail/devel/2017-February/001328.html
> >>
> >> and the first recommendation was to check whether the IRQ of the timer was correctly found. I booted the board into Linux to find the correct IRQ, which was listed as 0 in /proc/interrupts. I added a printf to handleInterrupt in the kernel source, recompiled, and when I booted seL4 I found that the IRQ reported to handleInterrupt is 125 (which seL4 reports as the max interrupt value) every time that function is called.
> >> Adding this printf also showed me that when the test hangs the board hasn't crashed or locked up, as calls to handleInterrupt are made continuously.
> >>
> >> At this point I noticed that before any tests start to run, several ACPI tables are not recognized:
> >>
> >> Parsing ACPI tables
> >> Skipping table FPDTD, unknown
> >> Skipping table FIDT<9c>, unknown
> >> Skipping table UEFIB, unknown
> >> Skipping table TPM24, unknown
> >> Skipping table LPIT^D^A, unknown
> >> Skipping table BCFG9^A, unknown
> >> Skipping table PRAM0, unknown
> >> Skipping table CSRTL^A, unknown
> >> Skipping table BCFG9^A, unknown
> >> Skipping table OEM0<84>, unknown
> >> Skipping table OEM1@, unknown
> >> Skipping table PIDVÜ, unknown
> >> Skipping table RSCI,, unknown
> >> Skipping table WDAT^D^A, unknown
> >> Warning: skipping table ACPI XSDT
> >>
> >> Maybe one or more of those tables needs to be loaded to handle interrupts properly. The LPIT table is conspicuous in the case of the timer test, but I think other tests are likely to depend on the other tables.
> >>
> >> Any suggestions about porting this type of hardware?
_______________________________________________
Devel mailing list
[email protected]
https://sel4.systems/lists/listinfo/devel
