On 15.04.2025 12:29, Oleksii Kurochko wrote:
> 
> On 4/10/25 5:13 PM, Jan Beulich wrote:
>> On 08.04.2025 17:57, Oleksii Kurochko wrote:
>>> Based on the RISC-V unprivileged spec (Version 20240411):
>>> ```
>>> For implementations that conform to the RISC-V Unix Platform Specification,
>>> I/O devices and DMA operations are required to access memory coherently and
>>> via strongly ordered I/O channels. Therefore, accesses to regular main memory
>>> regions that are concurrently accessed by external devices can also use the
>>> standard synchronization mechanisms. Implementations that do not conform
>>> to the Unix Platform Specification and/or in which devices do not access
>>> memory coherently will need to use mechanisms
>>> (which are currently platform-specific or device-specific) to enforce
>>> coherency.
>>>
>>> I/O regions in the address space should be considered non-cacheable
>>> regions in the PMAs for those regions. Such regions can be considered coherent
>>> by the PMA if they are not cached by any agent.
>>> ```
>>> and [1]:
>>> ```
>>> The current riscv linux implementation requires SOC system to support
>>> memory coherence between all I/O devices and CPUs. But some SOC systems
>>> cannot maintain the coherence and they need support cache clean/invalid
>>> operations to synchronize data.
>>>
>>> Current implementation is no problem with SiFive FU540, because FU540
>>> keeps all IO devices and DMA master devices coherence with CPU. But to a
>>> traditional SOC vendor, it may already have a stable non-coherency SOC
>>> system, the need is simply to replace the CPU with RV CPU and rebuild
>>> the whole system with IO-coherency is very expensive.
>>> ```
>>>
>>> and the fact that all CPUs known to me that support the H-extension and are
>>> going to be supported by Xen provide memory coherency between all I/O devices
>>> and CPUs, it is currently safe to use the PAGE_HYPERVISOR attribute.
>>> However, if a platform does not provide memory coherency, it is expected to
>>> support the CMO extensions and Svpbmt; in that case ioremap() will need to be
>>> updated. For now, a compilation error is generated to ensure that the need to
>>> update ioremap() is not overlooked.
>>>
>>> [1] https://patchwork.kernel.org/project/linux-riscv/patch/1555947870-23014-1-git-send-email-guo...@kernel.org/
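
For illustration only, a minimal sketch of what such a guarded ioremap() could
look like (PAGE_HYPERVISOR is taken from the patch description; ioremap_attr()
and the exact #error wording are assumptions, not the actual code):
```
/*
 * Sketch, not the actual patch: map MMIO with the regular PAGE_HYPERVISOR
 * attributes and rely on the platform PMAs to make the region non-cacheable.
 * The build-time guard forces this function to be revisited as soon as a
 * platform selecting HAS_SVPBMT (i.e. one whose PMAs alone are not enough)
 * is introduced.
 */
void __iomem *ioremap(paddr_t pa, size_t len)
{
#ifdef CONFIG_HAS_SVPBMT
# error "ioremap() needs updating to set Svpbmt attributes for I/O mappings"
#endif
    return ioremap_attr(pa, len, PAGE_HYPERVISOR);
}
```
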
>> But MMIO access correctness isn't just a matter of coherency. In most cases
>> there must not be any caching involved at all, or else you may observe
>> significantly delayed or even dropped (folded with later ones) writes, and
>> reads may be serviced from the cache instead of going to actual MMIO.
>> Therefore ...
>>
>>> --- a/xen/arch/riscv/Kconfig
>>> +++ b/xen/arch/riscv/Kconfig
>>> @@ -15,6 +15,18 @@ config ARCH_DEFCONFIG
>>>     string
>>>     default "arch/riscv/configs/tiny64_defconfig"
>>>   
>>> +config HAS_SVPBMT
>>> +   bool
>>> +   help
>>> +     This config enables usage of the Svpbmt ISA extension (Supervisor-mode:
>>> +     page-based memory types).
>>> +
>>> +     The memory type for a page contains a combination of attributes
>>> +     that indicate the cacheability, idempotency, and ordering
>>> +     properties for access to that page.
>>> +
>>> +     The Svpbmt extension is only available on 64-bit CPUs.
>> ... I kind of expect this extension (or anything else that there might be)
>> will need making use of.
> 
> In cases where the Svpbmt extension isn't available, PMA (Physical Memory
> Attributes) is used to control which memory regions are cacheable,
> non-cacheable, readable, writable, etc. PMA is configured in M-mode by the
> firmware (e.g., OpenSBI), as is done in Andes cores, or it can be fixed at
> design time, as in SiFive cores.
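
For contrast, on a platform that does implement Svpbmt the memory type can be
chosen per page from S-mode: the extension adds a 2-bit PBMT field in PTE bits
[62:61] which overrides the PMA-derived type for that page. A rough sketch (the
PTE_PBMT_* names and the pte_mkio() helper are made up for illustration; only
the bit positions and encodings come from the Svpbmt spec):
```
/* Svpbmt is RV64-only; PTE bits 62:61 hold the PBMT field. */
#define PTE_PBMT_SHIFT  61
#define PTE_PBMT_PMA    (0UL << PTE_PBMT_SHIFT) /* use the platform PMAs */
#define PTE_PBMT_NC     (1UL << PTE_PBMT_SHIFT) /* non-cacheable, idempotent memory */
#define PTE_PBMT_IO     (2UL << PTE_PBMT_SHIFT) /* non-cacheable, strongly-ordered I/O */

/* Hypothetical helper: mark a PTE as an I/O mapping when Svpbmt is available. */
static inline unsigned long pte_mkio(unsigned long pte)
{
    return pte | PTE_PBMT_IO;
}
```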

How would things work if there was a need to map a RAM page uncacheable (via
ioremap() or otherwise)?

Jan
