On 01.11.18 13:44, Chung-Fan Yang wrote:
On Thursday, November 1, 2018 at 6:41:23 PM UTC+9, Chung-Fan Yang wrote:
Hello,

I am working on ivshmem-net to achieve communication between the host and a 
cell. Therefore, I applied the ivshmem2 patches to both Jailhouse and the 
root-cell Linux kernel (4.9).

The root side's ivshmem-net driver starts correctly and I can set up an IP 
address without problems. The non-root side is lacking an Ethernet driver, so 
it has not been checked yet.

At the same time, I want my old implementation based on uio_ivshmem and code 
ported from ivshmem_demo to keep working on the new model.

Therefore, both root and non-root have two ivshmem PCI devices: one for 
supporting the old model, the other for ivshmem-net.

I read the commits and found out that the memory regions in ivshmem2 have 
expanded to three per PCI device. I adapted to this and changed both the 
uio_ivshmem driver and the non-root ivshmem driver. I can read and write the 
shared memory without problems.

But I am currently having problems sending and receiving IPIs between the 
root and non-root cells. I noticed the following things in the ivshmem2 
commits, which might affect interrupts:

1. The MMIO register layout, including the doorbell register's location, has 
   changed. (64f5b8fe)
2. The MMIO region is expanded to 4K. (27d3ad63)
3. BAR4 is moved to BAR2. (b4e9474c)

For 1, I changed the offset used to access the MMIO doorbell register from 12 
to 4, but it is still not working.

For 2, I don't really think it's a problem, because the mapped region is larger 
than before.

For 3, I read the code of ivshmem_demo. It used to use BAR4 to set up the 
msix_table for the APIC / MSI-X mapping. I suspect that moving BAR4 to BAR2 
somehow breaks this process, but I have no idea what happened and how to fix 
it.

Has anyone used the old UIO driver on ivshmem2 and successfully talked with 
other cells before?
The configs are attached; I can attach part of the code if requested.
I could really use some help.

Yang

Hello everyone,

I am glad that I figured it out myself.

Here are the steps. Unless specified otherwise, they apply only to the custom 
non-root driver.

First, the shared memory address and size are no longer fixed at 0x40 and 
0x48. You need to decide which regions you want to use by reading out the PCI 
vendor-specific capability; anything with size 0 is a non-populated memory 
region. Do this on both sides. (Reference commit ID: d2e21a50)
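
For illustration, here is a minimal sketch of that discovery step in C. It 
assumes an inmate-library-style pci_read_config() accessor, and the field 
offsets inside the vendor capability are placeholders only; take the real 
layout from commit d2e21a50.

#include <stdint.h>

#define PCI_CAP_ID_VNDR         0x09    /* vendor-specific capability ID */

/* assumed config-space accessor (e.g. from the Jailhouse inmate library) */
extern uint32_t pci_read_config(uint16_t bdf, unsigned int addr,
                                unsigned int size);

/* placeholder offsets inside the vendor capability -- check the spec */
#define VNDR_REGION_ADDR(cap, n)        ((cap) + 0x08 + (n) * 16)
#define VNDR_REGION_SIZE(cap, n)        ((cap) + 0x10 + (n) * 16)

static int find_vendor_cap(uint16_t bdf)
{
        uint8_t pos = pci_read_config(bdf, 0x34, 1);    /* capability pointer */

        while (pos) {
                if (pci_read_config(bdf, pos, 1) == PCI_CAP_ID_VNDR)
                        return pos;
                pos = pci_read_config(bdf, pos + 1, 1); /* next capability */
        }
        return -1;
}

static void read_shmem_regions(uint16_t bdf)
{
        int cap = find_vendor_cap(bdf);
        uint64_t addr, size;
        int n;

        if (cap < 0)
                return;

        for (n = 0; n < 3; n++) {       /* ivshmem2 describes up to 3 regions */
                addr = pci_read_config(bdf, VNDR_REGION_ADDR(cap, n), 4) |
                       (uint64_t)pci_read_config(bdf,
                                VNDR_REGION_ADDR(cap, n) + 4, 4) << 32;
                size = pci_read_config(bdf, VNDR_REGION_SIZE(cap, n), 4) |
                       (uint64_t)pci_read_config(bdf,
                                VNDR_REGION_SIZE(cap, n) + 4, 4) << 32;
                if (size == 0)
                        continue;       /* size 0 => region not populated */
                /* map and use (addr, size) here */
                (void)addr;
        }
}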

Second, the MSI-X table is populated at BAR2, not BAR4 as before. The UIO 
driver can handle this, but the code in ivshmem_demo cannot. You have to 
change the "map_shmem_and_bars" step: the MSI-X table address should be 
written to PCI_CFG_BAR + 8, which is BAR2 (it was hard-coded to +16, pointing 
to BAR4).
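
Roughly, the changed part looks like the sketch below. It assumes a 
pci_write_config() helper like the one in the inmate library and a 64-bit 
BAR2; it is not the exact diff, just the idea.

#include <stdint.h>

#define PCI_CFG_BAR     0x10    /* offset of BAR0 in config space */

/* assumed config-space writer (e.g. from the Jailhouse inmate library) */
extern void pci_write_config(uint16_t bdf, unsigned int addr, uint32_t value,
                             unsigned int size);

static void map_msix_bar(uint16_t bdf, uint64_t msix_table_addr)
{
        /*
         * ivshmem2 places the MSI-X table behind BAR2 (config offset
         * PCI_CFG_BAR + 8 = 0x18); the old demo code hard-coded
         * PCI_CFG_BAR + 16, i.e. BAR4.
         */
        pci_write_config(bdf, PCI_CFG_BAR + 8, (uint32_t)msix_table_addr, 4);
        pci_write_config(bdf, PCI_CFG_BAR + 12,
                         (uint32_t)(msix_table_addr >> 32), 4);
}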

Third, the MMIO registers changed offsets: IVPosition, a.k.a. the ID 
register, is now at offset 0 instead of 8, and the doorbell register is at 
offset 4 instead of 12 (counting in bytes). Change the non-root driver 
accordingly. On root Linux, mmapped regions are typically used to poke the 
MMIO region, so change where you poke accordingly.
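
To make the offsets concrete, here is a sketch of the new layout and a 
doorbell kick. The register names are mine, and the doorbell value encoding 
(peer ID in the upper half, vector in the lower half) is the classic ivshmem 
convention, so verify it against the ivshmem2 code you are actually running.

#include <stdint.h>

/* new ivshmem2 MMIO register offsets (in bytes); names are illustrative */
#define IVSHMEM2_REG_ID         0x00    /* was 0x08: own position / ID */
#define IVSHMEM2_REG_DOORBELL   0x04    /* was 0x0c: doorbell */

static inline uint32_t mmio_read32(void *addr)
{
        return *(volatile uint32_t *)addr;
}

static inline void mmio_write32(void *addr, uint32_t value)
{
        *(volatile uint32_t *)addr = value;
}

/* regs = pointer to the mapped MMIO region (BAR0, or the mmapped UIO map) */
static void ring_peer(void *regs, uint32_t target_id, uint16_t vector)
{
        uint32_t own_id = mmio_read32((char *)regs + IVSHMEM2_REG_ID);

        (void)own_id;   /* e.g. for a sanity check or logging */

        mmio_write32((char *)regs + IVSHMEM2_REG_DOORBELL,
                     (target_id << 16) | vector);
}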

Finally, on the non-root side, don't forget to make the probing process 
recognize the right ivshmem version. The class and revision you read out will 
have been ORed with an additional 0x2. On root Linux, you can just force the 
uio_ivshmem driver to use Jailhouse mode by commenting out some code.
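
The probe-side tweak, again only as a sketch: the expected value is an 
assumption based on the "ORed with 0x2" observation above, so check it 
against your device's config space before relying on it.

#include <stdint.h>

/* assumed config-space accessor, as before */
extern uint32_t pci_read_config(uint16_t bdf, unsigned int addr,
                                unsigned int size);

#define PCI_CFG_CLASS_REV       0x08    /* class code + revision ID dword */

static int is_ivshmem2(uint16_t bdf)
{
        uint32_t class_rev = pci_read_config(bdf, PCI_CFG_CLASS_REV, 4);

        /*
         * Illustrative check only: per the observation above, the value now
         * carries an additional 0x2 compared to what the old probe expected,
         * so a probe that compares against the old value exactly will skip
         * the device. Here we just test for that extra bit; verify the
         * precise value on your own setup.
         */
        return (class_rev & 0x2) != 0;
}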

Hope someone can benefit from this.
If you have any problems making the changes, I would be happy to help.


Note that "ivshmem 2.0" is a prototype, nothing officially supported yet. It is supposed to demonstrate potential improvements of the current ivshmem variant we have in Jailhouse, provided we follow that path (not unlikely, just not decided yet).

What is your motivation to use this model, rather than the current one?

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
