On 28.06.19 17:36, 'Pratyush Yadav' via Jailhouse wrote:
Right now, page_alloc_aligned() can fail to give aligned pages when more
than one page is being allocated. This is because the aligned_start
calculation is flawed.
Taking an example from a test case, say 8 pages need to be allocated. This
means alignment to a 15-bit (8-page) boundary, so the page-number mask is
0x7. If the page pool's base address is 0xffffc021f000, the current
calculation gives aligned_start = 0x7. That value is clearly not aligned
to an 8-page boundary (3 bits after shifting by PAGE_SHIFT); it is exactly
the opposite: the masked-in low bits that should be zero. It will never be
aligned unless the page pool start itself happens to fall on that boundary.
In the above example, the pointer returned was 0xffffc026e000, which is
clearly not aligned to a 15-bit (32 KB) boundary.
This change fixes the problem. First, zero out the mask bits of pool_start.
The result is less than or equal to the pool start, so add mask + 1 to it.
Since zeroing the bottom mask bits reduces the address by at most mask
(when all those bits are 1), adding mask + 1 guarantees the result is
greater than the pool start; in fact, it is the first aligned address
after the pool start.
That calculation is wrong when pool_start is already aligned: it still
yields an aligned address, but the first aligned address after pool_start
rather than pool_start itself. So skip the adjustment in that case.
Signed-off-by: Pratyush Yadav <[email protected]>
---
hypervisor/paging.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/hypervisor/paging.c b/hypervisor/paging.c
index 16687a89..5f5d0066 100644
--- a/hypervisor/paging.c
+++ b/hypervisor/paging.c
@@ -105,13 +105,21 @@ static unsigned long find_next_free_page(struct page_pool *pool,
static void *page_alloc_internal(struct page_pool *pool, unsigned int num,
unsigned long align_mask)
{
- /* The pool itself might not be aligned as required. */
- unsigned long aligned_start =
- ((unsigned long)pool->base_address >> PAGE_SHIFT) & align_mask;
- unsigned long next = aligned_start;
- unsigned long start, last;
+ unsigned long aligned_start, pool_start, next, start, last;
unsigned int allocated;
+ pool_start = (unsigned long)pool->base_address >> PAGE_SHIFT;
+
+ /* The pool itself might not be aligned as required. */
+ if (pool_start & align_mask)
+ aligned_start = (pool_start & (~align_mask)) + align_mask + 1;
+ else
+ aligned_start = pool_start;
+
+ /* We need an offset from page pool start. */
+ aligned_start -= pool_start;
+ next = aligned_start;
+
restart:
/* Forward the search start to the next aligned page. */
if ((next - aligned_start) & align_mask)
Good catch! Can be done a little bit simpler, though:
aligned_start = ((pool_start + align_mask) & ~align_mask) - pool_start;
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux