On 26.12.21 17:21, jiajun huang wrote:
> Happy New Year,
> 
> I successfully created the root cell and an inmate cell on QEMU, with
> NuttX running in the inmate cell. I also added an ivshmem-net device to
> both the root cell and the inmate cell and loaded the NIC driver, then
> configured the NICs with IPs 172.16.0.1 and 172.16.0.2. But when I run
> ping 172.16.0.2 in the root cell, I get "From 172.16.0.1 icmp_seq=1
> Destination Host Unreachable". Attached are the drivers I used in Linux
> and NuttX, respectively. The network driver uses the virtio interface.
> I added logging to the driver and found that control flow never enters
> ndo_start_xmit().

Looking at the hypervisor logs, the memory configuration of your virtual
interfaces now appears to be correct: the shared memory between the cells
is detected and the device probing succeeds.

I guess you don't receive interrupts for your virtual devices. Could you
verify that by looking into /proc/interrupts?
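One quick way to check is to watch the interrupt counters while a ping is running. A minimal sketch (the device name shown in /proc/interrupts depends on your driver; "ivshmem-net" here is an assumption, and the sample data is made up for illustration):

```shell
# On the live system you would run:
#   grep -i ivshmem /proc/interrupts
# The sample below stands in for that output so the snippet is
# self-contained; IRQ numbers, counts and the device name are
# illustrative only.
sample='           CPU0       CPU1
  24:          0          0   PCI-MSI 7340032-edge      ivshmem-net
  25:        113         42   PCI-MSI 7340033-edge      ivshmem-net'

# Sum the per-CPU counts for each matching line; a total that stays
# at 0 while you ping means the interrupt is never delivered.
echo "$sample" | awk '/ivshmem/ { print $1, $2 + $3 }'
```

If the totals do not increase between two ping attempts, the problem is interrupt delivery rather than the ring buffers.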

Are you still running on QEMU? All your devices have .iommu = 0 set, but
the .irqchip configuration (for the root cell as well as the inmate)
differs from the qemu example that we ship.
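For comparison, an x86 root-cell config describes the IOAPIC with an .irqchips entry shaped like the fragment below. The field values here are placeholders, not the real ones; take .id and .pin_bitmap from configs/x86/qemu-x86.c (or from your hardware's ACPI tables) rather than from this sketch:

```c
/* Fragment of a cell config; all values are illustrative placeholders. */
.irqchips = {
	/* IOAPIC */
	{
		.address = 0xfec00000,      /* IOAPIC MMIO base (typical x86 default) */
		.id = 0xff00,               /* placeholder: identifies the IOAPIC, platform-specific */
		.pin_bitmap = { 0xffffff }, /* placeholder: IOAPIC pins this cell may use */
	},
},
```

If the inmate's pin_bitmap does not cover the pins used by its devices (or the MSI vectors are not reserved), the inmate never sees their interrupts, which would match the symptom above.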

> 
> ping
> PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data.
> From 172.16.0.1 icmp_seq=1 Destination Host Unreachable
> From 172.16.0.1 icmp_seq=2 Destination Host Unreachable
> From 172.16.0.1 icmp_seq=3 Destination Host Unreachable
> 
> route -n
> Kernel IP routing table
> Destination Gateway Genmask Flags Metric Ref Use Iface
> 0.0.0.0 10.0.2.2 0.0.0.0 UG 100 0 0 enp0s2
> 10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s2
> 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 enp0s2
> 172.16.0.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s14
> 
> ifconfig
> enp0s2 Link encap:Ethernet HWaddr 52:54:00:12:34:56
>           inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
>           inet6 addr: fec0::8070:776d:7dfd:da1/64 Scope:Site
>           inet6 addr: fec0::1493:dcc2:ea12:8774/64 Scope:Site
>           inet6 addr: fec0::7c68:51e0:8aab:db34/64 Scope:Site
>           inet6 addr: fe80::feb9:1534:861b:722f/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>           RX packets:675167977 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:248205 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes: 52687970572 (52.6 GB) TX bytes: 49989174072 (49.9 GB)
>           Interrupt:22 Memory:feb80000-feba0000
> 
> enp0s14 Link encap:Ethernet HWaddr 3e:27:50:f3:c5:16
>           inet addr:172.16.0.1 Bcast:172.16.0.255 Mask:255.255.255.0
>           UP BROADCAST RUNNING MULTICAST MTU:16384 Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
> 
> lo Link encap:Local Loopback
>           inet addr:127.0.0.1 Mask:255.0.0.0
>           inet6 addr: ::1/128 Scope:Host
>           UP LOOPBACK RUNNING MTU:65536 Metric:1
>           RX packets: 491565 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:491565 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes: 29522474 (29.5 MB) TX bytes: 29522474 (29.5 MB)
> 
> arp
> ? (172.16.0.2) at <incomplete> on enp0s14
> ? (10.0.2.3) at 52:55:0a:00:02:03 [ether] on enp0s2
> ? (10.0.2.2) at 52:55:0a:00:02:02 [ether] on enp0s2
> 
> jailhouse output
> Initializing Jailhouse hypervisor v0.12 (5-g06ba27d-dirty) on CPU 2
> Code location: 0xfffffffff0000050
> Using x2APIC
> Page pool usage after early setup: mem 108/32207, remap 0/131072
> Initializing processors:
>  CPU 2... (APIC ID 2) OK
>  CPU 1... (APIC ID 1) OK
>  CPU 3... (APIC ID 3) OK
>  CPU 0... (APIC ID 0) OK
> Initializing unit: VT-d
> DMAR unit @0xfed90000/0x1000
> Reserving 24 interrupt(s) for device ff:00.0 at index 0
> Initializing unit: IOAPIC
> Initializing unit: Cache Allocation Technology
> Initializing unit: PCI
> Adding virtual PCI device 00:0d.0 to cell "RootCell"
> Adding virtual PCI device 00:0e.0 to cell "RootCell"
> Adding PCI device 00:00.0 to cell "RootCell"
> Adding PCI device 00:01.0 to cell "RootCell"
> Adding PCI device 00:02.0 to cell "RootCell"
> Reserving 5 interrupt(s) for device 00:02.0 at index 24
> Adding PCI device 00:1b.0 to cell "RootCell"
> Reserving 1 interrupt(s) for device 00:1b.0 at index 29
> Adding PCI device 00:1f.0 to cell "RootCell"
> Adding PCI device 00:1f.2 to cell "RootCell"
> Reserving 1 interrupt(s) for device 00:1f.2 at index 30
> Adding PCI device 00:1f.3 to cell "RootCell"
> Page pool usage after late setup: mem 339/32207, remap 65542/131072
> Activating hypervisor
> Reserving 1 interrupt(s) for device 00:00.0 at index 31
> Adding virtual PCI device 00:0d.0 to cell "nuttx"
> Shared memory connection established, peer cells:
>  "RootCell"
> Adding virtual PCI device 00:0e.0 to cell "nuttx"
> Shared memory connection established, peer cells:
>  "RootCell"
> Created cell "nuttx"
> Page pool usage after cell creation: mem 871/32207, remap 65543/131072
> Cell "nuttx" can be loaded
> CPU 3 received SIPI, vector 100
> Started cell "nuttx"
> 
> nuttx output
> x86_rng_initialize: Initializing RNG
> pci_enumerate: [00:0d.0] Found 110a:4106, class/reversion 00000200
> pci_enumerate: [00:0d.0] Jailhouse Shadow process memory and pipe
> shadow_probe: Shadow[0] mapped bar[0]: 0xf0000000
> shadow_probe: Shadow[0] mapped bar[1]: 0xf0001000
> pci_enable_device: 00:0d.0, CMD: 0 -> 6
> shadow_probe: Shadow[0] shared memory base: 0xf0000000, size: 0x1000
> shadow_probe: Shadow[0] State Table phy_addr: 0x176000000 virt_addr:
> 0xf0002000, size: 0x1000
> shadow_probe: Shadow[0] R/W  region phy_addr: 0x1000 virt_addr: 0x1000,
> size: 0x3ffff000
> shadow_probe: Shadow[0] I    region phy_addr: 0x1b6001000 virt_addr:
> 0xf0003000, size: 0x3000
> shadow_probe: Shadow[0] O    region phy_addr: 0x1b6005000 virt_addr:
> 0xf0007000, size: 0x3000
> shadow_probe: Initialized Shadow[0]
> pci_enumerate: [00:0e.0] Found 110a:4106, class/reversion 00000100
> pci_enumerate: [00:0e.0] Jailhouse Ivshmem-net
> ivshmnet_probe: Ivshmem-net[0] mapped bar[0]: 0xf000b000
> ivshmnet_probe: Ivshmem-net[0] mapped bar[1]: 0xf000c000
> pci_enable_device: 00:0e.0, CMD: 0 -> 6
> ivshmnet_probe: Ivshmem-net[0] State Table phy_addr:0x1b6205000
> virt_addr: 0xf000d000, size: 0x1000
> ivshmnet_probe: Ivshmem-net[0] TX region phy_addr: 0x1b6285000
> virt_addr: 0xf000e000, size: 0x7f000
> ivshmnet_probe: Ivshmem-net[0] RX region phy_addr: 0x1b6206000
> virt_addr: 0xf008d000, size: 0x7f000
> ivshmnet_probe: Initialized Ivshmem-net[1]
> shadow_state_change: Remote state: 0
> 
> cRTOS Daemon: Starting...
> 
> cRTOS Daemon: Initializing Network (eth0)...
> set ip
> set router
> set mask
>  ip  up
> cRTOS: Initialized! port: 42
> 
> cRTOS: Waiting for client
> 
> On Wednesday, December 22, 2021 at 22:46:01 UTC+8, Bezdeka, Florian wrote:
> 
>     On Wed, 2021-12-22 at 06:33 -0800, jiajun huang wrote:
>     > Hi,
>     > I will try as you suggest.
>     > Currently I am trying to run this open-source project:
>     > https://github.com/fixstars/cRTOS/blob/master/Installation.md
>     > Following the guidelines of this project, I successfully ran
>     > Linux + NuttX on QEMU, but I found that the ivshmem-net device on
>     > QEMU does not seem to work. I suspect this is because the MMIO
>     > area created by Jailhouse for the ivshmem-net devices is not
>     > registered in QEMU, so I decided to try running this project on
>     > the server. I would like to know whether the ivshmem device is
>     > supported by QEMU.
>     >
> 
>     There are examples with ivshmem on qemu; if you get the memory
>     mapping right, it will work. Normally you can't re-use the same
>     hypervisor configuration on real hardware; the IOAPIC/iommu setup
>     is normally different.
> 
>     Root-Cell:
>     https://github.com/siemens/jailhouse/blob/master/configs/x86/qemu-x86.c
> 
>     Inmate/Linux:
>     https://github.com/siemens/jailhouse/blob/master/configs/x86/linux-x86-demo.c
> 
> 
>     >
>     > On Wednesday, December 22, 2021 at 22:17:37 UTC+8, Bezdeka, Florian wrote:
>     > > On Wed, 2021-12-22 at 05:39 -0800, jiajun huang wrote:
>     > > > Dear Jailhouse community,
>     > > > This bug occurred when I tried to start NuttX in a non-root
>     > > > cell on the server. I added two ivshmem devices for NuttX.
>     > > > Below is my configuration file. I am not sure whether there
>     > > > is a problem with the MMIO area in the configuration file.
>     > > > What is the communication area? In addition, if Jailhouse
>     > > > runs in QEMU, can two virtual machines communicate with each
>     > > > other through ivshmem-net?
>     > > >
>     > > > Below are my root-cell and nuttx configurations and the log
>     > > > output from the port.
>     > >
>     > > Have you validated your cell configurations with the jailhouse
>     > > config checker? I did not look into your configuration in
>     > > detail, but nearly all of your inmate memory blocks are tagged
>     > > with "JAILHOUSE_MEM_ROOTSHARED", which seems uncommon.
>     > >
>     > > I would start step by step: begin from a configuration where
>     > > you know that both cells boot up, add the virtual NICs
>     > > afterwards, and make sure that IRQs are delivered to the
>     > > ivshmem devices.
>     > >
>     > > Are you able to follow the boot log of your inmate? Hopefully
>     > > you will see the reason for the VM exit there.
>     > >
>     > > HTH,
>     > > Florian
>     > >
>     > > >
>     > > > Best regards,
>     > > >
>     > > > Jiajun Huang
>     > > >
>     > >
> 

-- 
You received this message because you are subscribed to the Google Groups 
"Jailhouse" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jailhouse-dev/514e2fa3-9e58-5b79-038c-fe4ac21e803e%40siemens.com.
