> > allocates it on a 2M boundary. I suspect you actually want
> > (base % 2M) == 1M. Aligning on a 1M boundary will only DTRT half
> > the time.
>
> The 1M end is a hypothetical worry that came to mind as I was
> discussing the issue with you. Basically my point is that if the pc.c
> code changes and starts calling qemu_ram_alloc for the 0-640k and
> 1M-4G ranges with two separate calls (this is _not_ what qemu does
> right now), the alignment in qemu_ram_alloc that works right now
> would then stop working.
>
> This is why I thought it might be more correct (and less
> virtual-ram-wasteful) to move the alignment into the caller, even if
> the patch grows in size and becomes pc.c specific (which it wouldn't
> need to be if other archs supported transparent hugepages).
>
> I think with what you're saying above you're basically agreeing with
> me that I should move the alignment into the caller. Correct me if I
> misunderstood.
I don't think the target-specific code should know or care about this.

Anthony recently proposed a different API for allocating guest RAM that would potentially make some of this information available to common code. However, that has significant issues once you try to use it for anything other than the trivial PC machine. In particular, I don't believe it is reasonable to assume RAM is always mapped at a fixed guest address.

Paul