Thank you very much for the clarification! I found that tlb_set_page with size != TARGET_PAGE_SIZE makes translation far too slow; Linux doesn't seem to boot.
If that's the only way to reduce PMP granularity below TARGET_PAGE_SIZE, can we just set the default PMP granularity to TARGET_PAGE_SIZE, as it was before? Or can we bypass the partial-match violation when the size is unknown (i.e., check the starting address only)? I think neither option exactly matches the ISA specification, but given that size=0 always causes the problem, I'd like it fixed as soon as possible. Any thoughts would be appreciated!

On Mon, Oct 7, 2019, 6:00 AM Richard Henderson <richard.hender...@linaro.org> wrote:

> On 10/6/19 10:28 PM, Dayeol Lee wrote:
> > riscv_cpu_tlb_fill() uses the `size` parameter to check for PMP
> > violations using pmp_hart_has_privs().
> > However, the size passed from tlb_fill(), which is called by
> > get_page_addr_code(), is always the hard-coded value 0.
> > This causes a false PMP violation if an instruction sits on a
> > PMP boundary.
> >
> > To fix this, simply correct the size to 4 if the access_type is
> > MMU_INST_FETCH.
>
> That's not correct.
>
> In general, size 0 means "unknown size". In this case, the one tlb
> lookup is going to be used by lots of instructions -- everything that
> fits on the page.
>
> If you want to support PMP on things that are not page boundaries, then
> you will also have to call tlb_set_page with size != TARGET_PAGE_SIZE.
>
> Fixing that will cause instructions within that page to be executed one
> at a time, which also means they will be tlb_fill'd one at a time, which
> means that you'll get the correct size value.
>
> Which will be 2 or 4, depending on whether the configuration supports
> the Compressed extension, and not just 4.
>
>
> r~