Re: [PATCH 1/1] hw/intc/riscv_aclint: Change the way to get CPUState from hartid-base to cpu_index
On 9/11/23 23:26, Philippe Mathieu-Daudé wrote:
> Hi Leo,
>
> First, I can't find your patch in my mailbox, so I'm replying to
> Dongxue's review.
>
> On 9/11/23 03:41, Dongxue Zhang wrote:
>> Reviewed-by: Dongxue Zhang
>>
>>> On Thu, Nov 9, 2023 at 10:22 AM Leo Hou wrote:
>>>
>>> From: Leo Hou
>>>
>>> cpu_by_arch_id() uses hartid-base as the index to obtain the
>>> corresponding CPUState structure. qemu_get_cpu() uses cpu_index as
>>> the index to obtain the corresponding CPUState structure.
>>>
>>> In heterogeneous-CPU or multi-socket scenarios, multiple ACLINTs
>>> need to be instantiated, and the hartid-base of the CPUs bound to
>>> each ACLINT can start from 0. If cpu_by_arch_id() is still used in
>>> this case, every ACLINT will bind to the earliest-initialized hart
>>> with hartid-base 0, causing conflicts.
>>>
>>> So use cpu_index as the index instead: fetch the CPUState with
>>> qemu_get_cpu() and connect the ACLINT interrupt line to the hart
>>> indexed by cpu_index (whose hartid-base can still start at 0).
>>> This is more reasonable.
>>>
>>> Signed-off-by: Leo Hou
>>> ---
>>> hw/intc/riscv_aclint.c | 16 ++++++++--------
>>> 1 file changed, 8 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/hw/intc/riscv_aclint.c b/hw/intc/riscv_aclint.c
>>> index ab1a0b4b3a..be8f539fcb 100644
>>> --- a/hw/intc/riscv_aclint.c
>>> +++ b/hw/intc/riscv_aclint.c
>>> @@ -130,7 +130,7 @@ static uint64_t riscv_aclint_mtimer_read(void *opaque, hwaddr addr,
>>>          addr < (mtimer->timecmp_base + (mtimer->num_harts << 3))) {
>>>          size_t hartid = mtimer->hartid_base +
>>>                          ((addr - mtimer->timecmp_base) >> 3);
>>> -        CPUState *cpu = cpu_by_arch_id(hartid);
>>> +        CPUState *cpu = qemu_get_cpu(hartid);
>
> There is some code smell here. qemu_get_cpu() shouldn't be called by
> device models, but only by accelerators.

Yes, qemu_get_cpu() is designed to be called by accelerators.
But there is currently no new API that supports multi-socket and
heterogeneous processor architectures, and sifive_plic was already
designed around qemu_get_cpu(). Please refer to:

[1] https://lore.kernel.org/qemu-devel/1519683480-33201-16-git-send-email-...@sifive.com/
[2] https://lore.kernel.org/qemu-devel/20200825184836.1282371-3-alistair.fran...@wdc.com/

> Maybe the timer should get a link of the hart array it belongs to,
> and offset to this array base hartid?

The same problem exists not only in the timer, but in the ACLINT as a
whole. There needs to be a general approach to this problem.

> I'm going to NACK this patch until further review / clarifications.

Regards,
Leo Hou
Re: Reply: Fw: Mail from Leo Hou
On 2023/11/2 12:46, leohou wrote:
> On 2023/11/2 11:33, leohou1...@gmail.com wrote:
>> On 31/10/23 16:13:32, Philippe Mathieu-Daudé wrote:
>>> Hi Leo,
>>>
>>> On 31/10/23 04:10, Leo Hou wrote:
>>>> Hi all,
>>>> Does QEMU plan to support CPU heterogeneity?
>>>
>>> Short answer is yes. When this will be available is yet to be
>>> determined, as a lot of work is required.
>>>
>>> I'm going to talk about the challenges and possible roadmap later
>>> today; feel free to join the call scheduled at 2pm CET on
>>> https://meet.jit.si/kvmcallmeeting.
>>> (See https://lore.kernel.org/qemu-devel/calendar-1ad16449-09cc-40fb-ab4a-24eafcc62...@google.com/)
>>
>> Hi Philippe,
>>
>> Thank you for your reply. I didn't check my email in time because of
>> a mailbox problem, so I am now replying from a different email
>> address. Regarding your discussion, is it convenient to announce the
>> results now? Is there a need for an architecture with a main CPU and
>> several coprocessors?

Examples include the SCP and MCP on the ARM N2 platform, or an ARM host
machine containing a RISC-V coprocessor.
Re: Reply: Fw: Mail from Leo Hou
On 2023/11/2 11:33, leohou1...@gmail.com wrote:

On 31/10/23 16:13:32, Philippe Mathieu-Daudé wrote:
> Hi Leo,
>
> On 31/10/23 04:10, Leo Hou wrote:
>> Hi all,
>> Does QEMU plan to support CPU heterogeneity?
>
> Short answer is yes. When this will be available is yet to be
> determined, as a lot of work is required.
>
> I'm going to talk about the challenges and possible roadmap later
> today; feel free to join the call scheduled at 2pm CET on
> https://meet.jit.si/kvmcallmeeting.
> (See https://lore.kernel.org/qemu-devel/calendar-1ad16449-09cc-40fb-ab4a-24eafcc62...@google.com/)

Hi Philippe,

Thank you for your reply. I didn't check my email in time because of a
mailbox problem, so I am now replying from a different email address.
Regarding your discussion, is it convenient to announce the results
now? Is there a need for an architecture with a main CPU and several
coprocessors?
Reply: Mail from Leo Hou
Hi all,
Does QEMU plan to support CPU heterogeneity?
support SR-IOV for virtio-net
Hi all,
Why can't I receive a subscription reply email?
support SR-IOV for virtio-net
Hi all,
I want to add SR-IOV support to virtio-net. What do you think about this feature?
Re: Re: Address mapping for vIOMMU
At 2022-03-24 12:27:46, "Jason Wang" wrote:
> On Thu, Mar 24, 2022 at 12:15 PM leohou wrote:
>>
>> Hi all,
>> When I use DPDK in the guest OS and configure the VM with a vIOMMU,
>> I found that when sending the gVA (guest virtual address) to the
>> hardware device, the device cannot find the real data; but when
>> sending the gPA (guest physical address), it can.
>>
>> Environment:
>> OS: Linux version 5.4.0-104-generic (buildd@ubuntu) (gcc version 9.3.0
>> (Ubuntu 9.3.0-17ubuntu1~20.04)) #118-Ubuntu SMP Wed Mar 2 19:02:41 UTC 2022
>> QEMU: QEMU emulator version 4.2.1 (Debian 1:4.2-3ubuntu6.21)
>> Device: virtio-net
>>
>> Question:
>> Does the vIOMMU not work?
>> I know virtio-net does not have real DMA hardware, so when virtio-net
>> and DPDK are combined, is an IOMMU not needed?
>
> vIOMMU + virtio-net works for me like a charm.
>
> DPDK has supported vIOMMU with virtio-net for a long time.
>
> Make sure your vIOMMU is enabled in the guest (intel_iommu=on on the
> guest kernel command line, and enable_unsafe_noiommu_mode is *not* 1).
>
> Thanks

Hi Jason,
I'm sure my vIOMMU is enabled in the guest OS (intel_iommu=on on the
guest kernel command line, and enable_unsafe_noiommu_mode is "0"), but
it only works when I configure DPDK in physical-address (PA) mode.
So I think QEMU's emulation of virtio-net has no IOMMU-aware DMA, and
the virtqueue registers in the virtio-net PCIe space can only be
configured with the physical address of the virtqueue. Can I take it
this way?
Thanks!
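Jason's checklist can be verified from inside the guest with a few commands. This is a diagnostic sketch, not from the thread: the paths are standard Linux procfs/sysfs locations, and the PCI address 0000:00:03.0 is a placeholder for wherever the virtio-net device actually sits in the guest.

```shell
# Inside the guest: verify the vIOMMU is actually in effect for DPDK.

# 1. intel_iommu=on must be on the guest kernel command line
grep -o 'intel_iommu=[^ ]*' /proc/cmdline

# 2. vfio must NOT be in unsafe no-IOMMU mode (should print 0 or N)
cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

# 3. the NIC should belong to an IOMMU group; an empty/missing
#    iommu_group means DMA remapping is not applied to this device
#    (0000:00:03.0 is an example BDF -- substitute your NIC's address)
ls /sys/bus/pci/devices/0000:00:03.0/iommu_group/devices
```

If step 3 shows no IOMMU group, the device is not covered by the vIOMMU, which would explain why only gPA-mode DPDK works.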
Address mapping for vIOMMU
Hi all,
When I use DPDK in the guest OS and configure the VM with a vIOMMU, I
found that when sending the gVA to the hardware device, the device
cannot find the real data; but when sending the gPA, it can.

Environment:
OS: Linux version 5.4.0-104-generic (buildd@ubuntu) (gcc version 9.3.0
(Ubuntu 9.3.0-17ubuntu1~20.04)) #118-Ubuntu SMP Wed Mar 2 19:02:41 UTC 2022
QEMU: QEMU emulator version 4.2.1 (Debian 1:4.2-3ubuntu6.21)
Device: virtio-net

Question:
Does the vIOMMU not work?
I know virtio-net does not have real DMA hardware, so when virtio-net
and DPDK are combined, is an IOMMU not needed?
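For reference, a host-side QEMU invocation that exposes a vIOMMU to the guest and lets virtio-net honor it looks roughly like the sketch below. This is an assumption about the poster's setup, not taken from the thread: the machine type, image path, and netdev id are placeholders, while `intel-iommu` with `intremap=on`/`device-iotlb=on` (which requires a split irqchip) and `iommu_platform=on,ats=on` on the virtio device are the documented knobs for this configuration.

```shell
# Sketch only: adjust image path, netdev backend, and BDFs to your setup.
qemu-system-x86_64 \
  -M q35,accel=kvm,kernel-irqchip=split \
  -device intel-iommu,intremap=on,device-iotlb=on \
  -netdev user,id=net0 \
  -device virtio-net-pci,netdev=net0,iommu_platform=on,ats=on \
  -drive file=guest.img,format=qcow2
```

Without `iommu_platform=on`, the virtio device bypasses the vIOMMU entirely and only guest-physical addresses will work, which matches the symptom described above.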