Hi,
On 18/07/17 13:25, Sergej Proskurin wrote:
+ /*
+ * The starting level is the number of strides (grainsizes[gran] - 3)
+ * needed to consume the input address (ARM DDI 0487B.a J1-5924).
+ */
+ level = 4 - DIV_ROUND_UP((input_size - grainsizes[gran]),
+ (grainsizes[gran] - 3));
+
+ /* Get the IPA output_size. */
+ ret = get_ipa_output_size(d, tcr, &output_size);
+ if ( ret )
+ return -EFAULT;
+
+ /* Make sure the base address does not exceed its configured size. */
+ ret = check_base_size(output_size, ttbr);
+ if ( !ret )
+ return -EFAULT;
+
+ /*
+ * Compute the base address of the first level translation table that is
+ * given by TTBRx_EL1 (ARM DDI 0487B.a D4-2024 and J1-5926).
+ */
+ mask = GENMASK_ULL(47, grainsizes[gran]);
+ paddr = (ttbr & mask);
+
+ for ( ; ; level++ )
+ {
+ /*
+ * Add offset given by the GVA to the translation table base address.
+ * Shift the offset by 3 as it is 8-byte aligned.
+ */
+ paddr |= offsets[gran][level] << 3;
+
+ /* Access the guest's memory to read only one PTE. */
+ ret = access_guest_memory_by_ipa(d, paddr, &pte, sizeof(lpae_t),
+ false);
While working on another bit of Xen, it occurred to me that
access_guest_memory_by_ipa will take the p2m lock. However, it is
already taken by another caller in the stack (see get_page_from_gva).
This means you rely on the p2m lock being recursive. I don't think we
make this assumption anywhere else in the p2m code at the moment,
although it is fine with the current locking (we are using a read-write
lock, so nested read locks do not deadlock).
I am not a big fan of nested locks, but I can't see how to do this
properly here. Nevertheless, I would like a comment on top of the p2m
rwlock explaining that some places take it recursively. That way, if we
ever decide to modify the lock, we will not get caught by a deadlock in
the memaccess code.
I will review the rest of the patch later.
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel