Hi Nicolin,
On 1/27/26 1:04 AM, Nicolin Chen wrote:
> On Mon, Jan 26, 2026 at 02:48:55PM +0100, Eric Auger wrote:
>
>>> +    name = g_strdup_printf("%s vcmdq", memory_region_name(&cmdqv->mmio_cmdqv));
>>> +    memory_region_init_ram_device_ptr(&cmdqv->mmio_vcmdq_page,
>>> +                                      memory_region_owner(&cmdqv->mmio_cmdqv),
>>> +                                      name, 0x10000, cmdqv->vcmdq_page0);
>>> +    memory_region_add_subregion_overlap(&cmdqv->mmio_cmdqv, 0x10000,
>>> +                                        &cmdqv->mmio_vcmdq_page, 1);
>>> +    g_free(name);
>>> +
>>> +    name = g_strdup_printf("%s vintf", memory_region_name(&cmdqv->mmio_cmdqv));
>>> +    memory_region_init_ram_device_ptr(&cmdqv->mmio_vintf_page,
>>> +                                      memory_region_owner(&cmdqv->mmio_cmdqv),
>>> +                                      name, 0x10000, cmdqv->vcmdq_page0);
>> I don't get why we need/have 2 RAM devices pointing to the same @ptr
>> = cmdqv->vcmdq_page0. Is 0x10000 ~ VCMDQ_REG_PAGE_SIZE?
> The first one is for "vcmdq" and the second one is for "vintf".
> Explaining below...
>
>> The names of the MRs are quite confusing. If my understanding is correct,
>> we have:
>>
>> cmdqv->mmio_cmdqv (0x50000 sized), which acts as the container for the 2
>> subregions.
>> Then within that one we have 2 subregions, one at offset 0x10000
>> (mmio_vcmdq_page), one at offset 0x30000 (cmdqv->mmio_vintf_page).
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c?h=v6.19-rc7#n19
> It's defined in the kernel driver (yea, we should clarify in the
> QEMU code as well).
>
> So, the MMIO regions look like this:
> 1st 64 KB page -- Global CMDQV registers
> 2nd 64 KB page -- Global VCMDQ registers Page0 (mmap)
> 3rd 64 KB page -- Global VCMDQ registers Page1 (trap)
> 4th 64 KB page -- VINTF0 Logical VCMDQ registers Page0 (mmap)
> 5th 64 KB page -- VINTF0 Logical VCMDQ registers Page1 (trap)
This kind of layout representation is really helpful.
>
> In real hardware, there will be a 6th 64 KB page and beyond, for VINTF1
> and others. But we're omitting those here in QEMU, as only VINTF0 will
> be supported -- the kernel only exposes one VINTF per VM as well.
> (Yes, we should clarify this too.)
OK
>
>> I have difficulty linking that with the commit message:
>>
>> "Tegra241 CMDQV assigns each VINTF a 128 KB MMIO region split into two
>> 64 KB pages:
>> - Page0: guest accessible control/status registers for all VCMDQs
>> - Page1: configuration registers ../.."
>> Are those 2 pages part of mmio_vcmdq_page?
> Not exactly. Both the global VCMDQ region and VINTF region have
> their own pages (at 0x10000 and 0x30000).
>
> Here, it duplicates the mapping to 0x10000 (Global VCMDQ page0)
> and 0x30000 (VINTF0 page0) for simplification, because we only
> support VINTF0 in this case.
So Global VCMDQ registers Page0 and VINTF0 Logical VCMDQ registers Page0
are basically the same?
I would recommend using cmdqv->mmio_vcmdq_page0 and
cmdqv->mmio_vintf_page0 to avoid any misunderstanding.
>
> There is a little catch in this implementation. The real physical
> mapping between a global VCMDQ and a logical VCMDQ happens when
> QEMU calls the HW_QUEUE ioctl. So the mmap'd page0 doesn't have any
> real VCMDQ backing the emulated VCMDQ. So, perhaps QEMU should
> trap the page0 and delay the memory_region_init_ram_device_ptr()
> until the HW_QUEUE ioctl is done?
might be safer indeed.
>
> There might also be a corner case: when the kernel exposes two
> physical VCMDQs but the guest OS only uses one, i.e. QEMU only
> allocates one HW_QUEUE for VCMDQ0 but doesn't allocate VCMDQ1.
> In such a case, the VINTF0 page0 should be able to control the
> logical VCMDQ0 only, while the global page0 should control both.
you lost me. Need to look at the kernel or spec ;-)
>
> We get away with this corner case with any guest OS running a
> Linux kernel, because it only accesses the VINTF pages. But likely
> we should do something about it.
>
>> Then you talk about 0x30000: the VINTF registers. I guess this is the
>> second subregion, cmdqv->mmio_vintf_page.
>>
>> Well, I am confused at this stage of the reading.
>>
>> Also, without any spec, this is difficult to understand. Is there any
>> public doc?
> It seems that Red Hat can get the doc under an NDA.
OK thanks
Eric
>
> Thanks
> Nicolin
>