On 2017-05-05 15:02, jonas wrote:
>>>>> Hi,
>>>>>
>>>>> I'm also experimenting with ivshmem between the root-cell and a
>>>>> bare metal cell. In my case, however, on BananaPi M1.
>>>>>
>>>>> Could you elaborate on modifying the functions
>>>>> pci_(read|write)_config to use mmio instead of pio?
>>>>>
>>>>> I guess it's a matter of accessing the appropriate memory-mapped
>>>>> PCI configuration space of the (virtual) PCI devices available to
>>>>> the guest/inmate instead of accessing PCI_REG_ADDR_PORT and
>>>>> PCI_REG_DATA_PORT using the functions (out|in)[bwl]?
>>>>
>>>> Exactly. mmio = memory-mapped IO, pio = port IO (in|out). The outs
>>>> and ins will not work; instead, the whole config space is in
>>>> physical memory. Its location can be found in the root-cell
>>>> configuration under .pci_mmconfig_base.
>>>> Some more information can be found here:
>>>> http://wiki.osdev.org/PCI
>>>>
>>>> The method currently implemented is called method #1 on that wiki.
>>>> Make sure to keep your accesses aligned to the requested size.
>>>>
>>>> Code similar to what you will need can be found in the hypervisor:
>>>> hypervisor/pci.c and include/jailhouse/mmio.h.
>>>>
>>>> Henning
>>>>
>>>>   
>>>>> Best regards - Jonas Weståker
>>>>>  
>>>
>>> Thanks for the fast response.
>>> I've got a bit further in porting ivshmem-demo.c from x86 to arm, but
>>> a few new questions arise: When scanning the configuration area of
>>> the (virtual) PCI device, the following is reported: "IVSHMEM ERROR:
>>> device is not MSI-X capable" - is this a problem?
>>
>> If you see that, the example will not do anything. Your PCI access
>> code might still not work. You can remove that sanity check to provoke
>> more accesses.
>>
> 
> Yes, I commented out the 'return;' after the printk.
> 
>> Does the rest of the output look like the pci-code is reading sane
>> values?
> 
> IVSHMEM: Found 1af4:1110 at 00:00.0
> IVSHMEM ERROR: device is not MSI-X capable
> IVSHMEM: shmem is at 0x7bf00000
> IVSHMEM: bar0 is at 0x7c000000
> IVSHMEM: bar2 is at 0x7c004000
> IVSHMEM: mapped shmem and bars, got position 0x00000001
> IVSHMEM: Enabled IRQ:0x20
> IVSHMEM: Vector set for PCI MSI-X.
> IVSHMEM: 00:00.0 sending IRQ
> IVSHMEM: waiting for interrupt.
> 
>> What did you set num_msix_vectors to?
>>
> 
> '.num_msix_vectors = 1,'

Needs to be 0 for INTx operation.
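A hypothetical excerpt of the corresponding cell-config entry (field names as in Jailhouse cell configs; the surrounding values are placeholders):

```c
.pci_devices = {
	{
		.type = JAILHOUSE_PCI_TYPE_IVSHMEM,
		.bdf = 0x00,
		/* 0 => no MSI-X vectors, device falls back to INTx */
		.num_msix_vectors = 0,
	},
},
```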

> 
>>> jailhouse/inmates/lib/x86/mem.c:map_range() is used to map the
>>> IVSHMEM region and registers. Got any pointers to code doing the
>>> equivalent for ARM?
>>
>> I think on ARM the inmates run without paging, so the implementation
>> would be empty.
>>
> 
> OK. That simplifies/explains things... I commented out the call to 
> 'map_pages()' as well.
> 
>>> What is the expected behaviour when accessing unmapped memory in an
>>> inmate?
>>
>> As I said, I think you are running on physical memory, so everything
>> is visible.
>>
>>> (E.g., I can see the inmate/cell gets shut down when touching memory
>>> outside .pci_mmconfig_base + 0x100000): # Unhandled data read at
>>> 0x2100000(2) FATAL: unhandled trap (exception class 0x24)
>>> pc=0x00000ff4 cpsr=0x60000153 hsr=0x93400006
>>> r0=0x00001834 r1=0x0000000d r2=0x00000000 r3=0x00006ed1 
>>> r4=0x02100000 r5=0x00000000 r6=0x00000002 r7=0x0000ffff 
>>> r8=0x00001000 r9=0x00000000 r10=0x00000000 r11=0x00000000 
>>> r12=0x00000000 r13=0x00006f80 r14=0x00000fc4 
>>> Parking CPU 1 (Cell: "ivshmem-demo")
>>
>> This is an access outside of memory that the hypervisor gave to the
>> cell.
>>  
>>> What memory areas are made available by Jailhouse for a cell/inmate
>>> to access?
>>
>> They are described in the cell config. However, the virtual PCI bus
>> is special: only the base is in the config, and the size is
>> calculated. In hypervisor/pci.c, pci_init(), you can see the
>> 0x100000; it is 1*256*4096.
>>
> 
> Actually, I think I spotted a bug here. In 
> inmates/lib/pci.c:find_pci_device() there is a loop 'for (bdf = start_bdf; 
> bdf < 0x10000; bdf++)', which will touch memory outside PCI_CFG_BASE_ADDR + 
> 0x100000, hence the unhandled trap. Changing the loop to 'for (bdf = 
> start_bdf; bdf < 0x1000; bdf++)' fixes the problem (0x1000 == 4096).
> 
> Why does this work on x86? Are bigger pages used by the hypervisor to map the 
> PCI configuration area?

On x86, the full mmconfig space is always accessible. On ARM, you need
to check what platform_info.pci_mmconfig_end_bus is set to. When we
emulate PCI, we keep it at 0, i.e. a single bus. The inmate lib is not
yet aware of such restrictions.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA ITP SES-DE
Corporate Competence Center Embedded Linux
