Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
On 26/03/2015 15:17, Ingo Molnar wrote:
> 
> * Laurent Dufour wrote:
> 
>>> I argue we should use the right condition to clear vdso_base: if
>>> the vDSO gets at least partially unmapped. Otherwise there's
>>> little point in the whole patch: either correctly track whether
>>> the vDSO is OK, or don't ...
>>
>> That's a good option, but it may be hard to achieve in the case the
>> vDSO area has been split in multiple pieces.
>>
>> Not sure there is a right way to handle that; this is a best effort
>> here, allowing a process to unmap its vDSO and have the sigreturn
>> call done through the stack area (it has to make it executable).
>>
>> Anyway, I'll dig into that, assuming that the vdso_base pointer
>> should be cleared if a part of the vDSO is moved or unmapped. The
>> patch will be larger since I'll have to get the vDSO size, which is
>> private to the vdso.c file.
> 
> At least for munmap() I don't think that's a worry: once unmapped
> (even if just partially), vdso_base becomes zero and won't ever be set
> again.
> 
> So no need to track the zillion pieces, should there be any: Humpty
> Dumpty won't be whole again, right?

My idea is to clear vdso_base if at least part of the vDSO is unmapped.
But since part of the vDSO may have been moved first and unmapped
later, to be complete the patch has to handle partial mremap() of the
vDSO too. Otherwise a scenario like this will not be detected:

	new_area = mremap(vdso_base + page_size, ...);
	munmap(new_area, ...);

>>> There's also the question of mprotect(): can users mprotect() the
>>> vDSO on PowerPC?
>>
>> Yes, mprotect() on the vDSO is allowed on PowerPC, as it is on x86
>> and certainly all the other architectures. Furthermore, if it is
>> done on only part of the vDSO it splits the vma...
> 
> btw., CRIU's main purpose here is to reconstruct a vDSO that was
> originally randomized, but whose address must now be reproduced as-is,
> right?

You're right, CRIU has to move the vDSO to the same address it had at
checkpoint time.

> In that sense detecting the 'good' mremap() as your patch does should
> do the trick and is certainly not objectionable IMHO - I was just
> wondering whether we could make a perfect job very simply.

I'll try to address the perfect job; this may make the patch more
complex, especially because the vDSO's size is not recorded in the
PowerPC mm_context structure. Not sure it is a good idea to extend that
structure...

Thanks,
Laurent.
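For illustration, a minimal userspace sketch of the scenario described
above; vdso_base and page_size are placeholders, and the choice of
moving the second page is purely for the example:

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/*
 * Sketch of the case a plain old_start == vdso_base check misses:
 * a page of the vDSO other than its first one is moved with mremap()
 * and the moved piece is then unmapped.  Neither call starts at
 * vdso_base, so the kernel-side tracking never notices.
 */
void break_vdso_undetected(char *vdso_base, size_t page_size)
{
	void *new_area;

	/* Move the second page of the vDSO somewhere else... */
	new_area = mremap(vdso_base + page_size, page_size, page_size,
			  MREMAP_MAYMOVE);
	if (new_area == MAP_FAILED)
		return;

	/* ...then drop it: the vDSO is now broken, yet vdso_base in
	 * the mm context still looks valid. */
	munmap(new_area, page_size);
}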
Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
On Thu, 2015-03-26 at 10:43 +0100, Ingo Molnar wrote:
> * Benjamin Herrenschmidt wrote:
> 
> > On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
> > > * Ingo Molnar wrote:
> > > 
> > > > > +#define __HAVE_ARCH_REMAP
> > > > > +static inline void arch_remap(struct mm_struct *mm,
> > > > > +			      unsigned long old_start, unsigned long old_end,
> > > > > +			      unsigned long new_start, unsigned long new_end)
> > > > > +{
> > > > > +	/*
> > > > > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > > > > +	 * check to old_start == vdso_base.
> > > > > +	 */
> > > > > +	if (old_start == mm->context.vdso_base)
> > > > > +		mm->context.vdso_base = new_start;
> > > > > +}
> > > > 
> > > > mremap() doesn't allow moving multiple vmas, but it allows the
> > > > movement of multi-page vmas and it also allows partial mremap()s,
> > > > where it will split up a vma.
> > > 
> > > I.e. mremap() supports the shrinking (and growing) of vmas. In that
> > > case mremap() will unmap the end of the vma and will shrink the
> > > remaining vDSO vma.
> > > 
> > > Doesn't that result in a non-working vDSO that should zero out
> > > vdso_base?
> > 
> > Right. Now we can't completely prevent the user from shooting itself
> > in the foot I suppose, though there is a legit usage scenario which
> > is to move the vDSO around which it would be nice to support. I
> > think it's reasonable to put the onus on the user here to do the
> > right thing.
> 
> I argue we should use the right condition to clear vdso_base: if the
> vDSO gets at least partially unmapped. Otherwise there's little point
> in the whole patch: either correctly track whether the vDSO is OK, or
> don't ...

Well, if we are going to clear it at all, yes, we should probably be a
bit smarter about it. My point however was that we probably don't need
to be super robust about dealing with any crazy scenario userspace
might conceive.

> There's also the question of mprotect(): can users mprotect() the vDSO
> on PowerPC?

Nothing prevents it. But here too, I wouldn't bother. The user might be
doing it on purpose, expecting to catch the resulting signal for
example (though arguably a signal from a sigreturn frame is ... odd).

Cheers,
Ben.
Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
* Benjamin Herrenschmidt wrote:

> > > +#define __HAVE_ARCH_REMAP
> > > +static inline void arch_remap(struct mm_struct *mm,
> > > +			      unsigned long old_start, unsigned long old_end,
> > > +			      unsigned long new_start, unsigned long new_end)
> > > +{
> > > +	/*
> > > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > > +	 * check to old_start == vdso_base.
> > > +	 */
> > > +	if (old_start == mm->context.vdso_base)
> > > +		mm->context.vdso_base = new_start;
> > > +}
> > 
> > mremap() doesn't allow moving multiple vmas, but it allows the
> > movement of multi-page vmas and it also allows partial mremap()s,
> > where it will split up a vma.
> > 
> > In particular, what happens if an mremap() is done with
> > old_start == vdso_base, but a shorter end than the end of the vDSO?
> > (i.e. a partial mremap() with fewer pages than the vDSO size)
> 
> Is there a way to forbid splitting? Does x86 deal with that case at
> all, or does it not have to for some other reason?

So we use _install_special_mapping() - maybe PowerPC does that too?

That adds VM_DONTEXPAND which ought to prevent some - but not all - of
the VM API weirdnesses.

On x86 we'll just dump core if someone unmaps the vdso.

Thanks,

	Ingo
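For reference, a rough sketch of the "special mapping" setup being
referred to here; the function name and the vdso_base, vdso_pages and
vdso_pagelist parameters are placeholders, not the actual PowerPC code
(which lives in arch/powerpc/kernel/vdso.c):

#include <linux/mm.h>
#include <linux/mm_types.h>

/*
 * Sketch: wire the vDSO pages into a process as a special mapping.
 * The helper itself ORs in VM_DONTEXPAND, which is what keeps
 * mremap() from growing the resulting vma.
 */
static int map_vdso_sketch(struct mm_struct *mm, unsigned long vdso_base,
			   unsigned long vdso_pages,
			   struct page **vdso_pagelist)
{
	return install_special_mapping(mm, vdso_base,
				       vdso_pages << PAGE_SHIFT,
				       VM_READ | VM_EXEC |
				       VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
				       vdso_pagelist);
}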
Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
* Ingo Molnar wrote:

> > +#define __HAVE_ARCH_REMAP
> > +static inline void arch_remap(struct mm_struct *mm,
> > +			      unsigned long old_start, unsigned long old_end,
> > +			      unsigned long new_start, unsigned long new_end)
> > +{
> > +	/*
> > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > +	 * check to old_start == vdso_base.
> > +	 */
> > +	if (old_start == mm->context.vdso_base)
> > +		mm->context.vdso_base = new_start;
> > +}
> 
> mremap() doesn't allow moving multiple vmas, but it allows the
> movement of multi-page vmas and it also allows partial mremap()s,
> where it will split up a vma.

I.e. mremap() supports the shrinking (and growing) of vmas. In that
case mremap() will unmap the end of the vma and will shrink the
remaining vDSO vma.

Doesn't that result in a non-working vDSO that should zero out
vdso_base?

Thanks,

	Ingo
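A minimal userspace sketch of the shrinking case described above;
vdso_base, vdso_len and page_size are placeholders for illustration:

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/*
 * Shrinking a mapping in place: mremap() with a smaller new_size
 * unmaps the tail of the vma.  Applied to the vDSO, this leaves a
 * truncated, non-working vDSO behind while vdso_base still points
 * at it.
 */
void shrink_vdso(void *vdso_base, size_t vdso_len, size_t page_size)
{
	/* Keep only the first page; the rest of the vDSO is unmapped. */
	mremap(vdso_base, vdso_len, page_size, 0);
}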
Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
On Wed, 2015-03-25 at 19:33 +0100, Ingo Molnar wrote:
> * Laurent Dufour wrote:
> 
> > +static inline void arch_unmap(struct mm_struct *mm,
> > +			      struct vm_area_struct *vma,
> > +			      unsigned long start, unsigned long end)
> > +{
> > +	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
> > +		mm->context.vdso_base = 0;
> > +}
> 
> So AFAICS PowerPC can have multi-page vDSOs, right?
> 
> So what happens if I munmap() the middle or end of the vDSO? The above
> condition only seems to cover unmaps that affect the first page. I
> think 'affects any page' ought to be the right condition? (But I know
> nothing about PowerPC so I might be wrong.)

You are right, we have at least two pages.

> > +#define __HAVE_ARCH_REMAP
> > +static inline void arch_remap(struct mm_struct *mm,
> > +			      unsigned long old_start, unsigned long old_end,
> > +			      unsigned long new_start, unsigned long new_end)
> > +{
> > +	/*
> > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > +	 * check to old_start == vdso_base.
> > +	 */
> > +	if (old_start == mm->context.vdso_base)
> > +		mm->context.vdso_base = new_start;
> > +}
> 
> mremap() doesn't allow moving multiple vmas, but it allows the
> movement of multi-page vmas and it also allows partial mremap()s,
> where it will split up a vma.
> 
> In particular, what happens if an mremap() is done with
> old_start == vdso_base, but a shorter end than the end of the vDSO?
> (i.e. a partial mremap() with fewer pages than the vDSO size)

Is there a way to forbid splitting? Does x86 deal with that case at
all, or does it not have to for some other reason?

Cheers,
Ben.
Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
> * Ingo Molnar wrote:
> 
> > > +#define __HAVE_ARCH_REMAP
> > > +static inline void arch_remap(struct mm_struct *mm,
> > > +			      unsigned long old_start, unsigned long old_end,
> > > +			      unsigned long new_start, unsigned long new_end)
> > > +{
> > > +	/*
> > > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > > +	 * check to old_start == vdso_base.
> > > +	 */
> > > +	if (old_start == mm->context.vdso_base)
> > > +		mm->context.vdso_base = new_start;
> > > +}
> > 
> > mremap() doesn't allow moving multiple vmas, but it allows the
> > movement of multi-page vmas and it also allows partial mremap()s,
> > where it will split up a vma.
> 
> I.e. mremap() supports the shrinking (and growing) of vmas. In that
> case mremap() will unmap the end of the vma and will shrink the
> remaining vDSO vma.
> 
> Doesn't that result in a non-working vDSO that should zero out
> vdso_base?

Right. Now we can't completely prevent the user from shooting itself
in the foot I suppose, though there is a legit usage scenario which
is to move the vDSO around which it would be nice to support. I
think it's reasonable to put the onus on the user here to do the
right thing.

Cheers,
Ben.
Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
* Benjamin Herrenschmidt wrote:

> On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
> > * Ingo Molnar wrote:
> > 
> > > > +#define __HAVE_ARCH_REMAP
> > > > +static inline void arch_remap(struct mm_struct *mm,
> > > > +			      unsigned long old_start, unsigned long old_end,
> > > > +			      unsigned long new_start, unsigned long new_end)
> > > > +{
> > > > +	/*
> > > > +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> > > > +	 * check to old_start == vdso_base.
> > > > +	 */
> > > > +	if (old_start == mm->context.vdso_base)
> > > > +		mm->context.vdso_base = new_start;
> > > > +}
> > > 
> > > mremap() doesn't allow moving multiple vmas, but it allows the
> > > movement of multi-page vmas and it also allows partial mremap()s,
> > > where it will split up a vma.
> > 
> > I.e. mremap() supports the shrinking (and growing) of vmas. In that
> > case mremap() will unmap the end of the vma and will shrink the
> > remaining vDSO vma.
> > 
> > Doesn't that result in a non-working vDSO that should zero out
> > vdso_base?
> 
> Right. Now we can't completely prevent the user from shooting itself
> in the foot I suppose, though there is a legit usage scenario which
> is to move the vDSO around which it would be nice to support. I
> think it's reasonable to put the onus on the user here to do the
> right thing.

I argue we should use the right condition to clear vdso_base: if the
vDSO gets at least partially unmapped. Otherwise there's little point
in the whole patch: either correctly track whether the vDSO is OK, or
don't ...

There's also the question of mprotect(): can users mprotect() the vDSO
on PowerPC?

Thanks,

	Ingo
Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
On 26/03/2015 10:48, Ingo Molnar wrote:
> 
> * Benjamin Herrenschmidt wrote:
> 
>>>> +#define __HAVE_ARCH_REMAP
>>>> +static inline void arch_remap(struct mm_struct *mm,
>>>> +			      unsigned long old_start, unsigned long old_end,
>>>> +			      unsigned long new_start, unsigned long new_end)
>>>> +{
>>>> +	/*
>>>> +	 * mremap() doesn't allow moving multiple vmas so we can limit the
>>>> +	 * check to old_start == vdso_base.
>>>> +	 */
>>>> +	if (old_start == mm->context.vdso_base)
>>>> +		mm->context.vdso_base = new_start;
>>>> +}
>>>
>>> mremap() doesn't allow moving multiple vmas, but it allows the
>>> movement of multi-page vmas and it also allows partial mremap()s,
>>> where it will split up a vma.
>>>
>>> In particular, what happens if an mremap() is done with
>>> old_start == vdso_base, but a shorter end than the end of the vDSO?
>>> (i.e. a partial mremap() with fewer pages than the vDSO size)
>>
>> Is there a way to forbid splitting? Does x86 deal with that case at
>> all, or does it not have to for some other reason?
> 
> So we use _install_special_mapping() - maybe PowerPC does that too?
> That adds VM_DONTEXPAND which ought to prevent some - but not all - of
> the VM API weirdnesses.

The same is done on PowerPC. So calling mremap() to extend the vDSO
fails, but splitting it or unmapping part of it is allowed and leads to
an unusable vDSO.

> On x86 we'll just dump core if someone unmaps the vdso.

On PowerPC, you'll get the same result.

Should we prevent the user from breaking its vDSO?

Thanks,
Laurent.
Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
* Laurent Dufour wrote:

> > I argue we should use the right condition to clear vdso_base: if
> > the vDSO gets at least partially unmapped. Otherwise there's
> > little point in the whole patch: either correctly track whether
> > the vDSO is OK, or don't ...
> 
> That's a good option, but it may be hard to achieve in the case the
> vDSO area has been split in multiple pieces.
> 
> Not sure there is a right way to handle that; this is a best effort
> here, allowing a process to unmap its vDSO and have the sigreturn
> call done through the stack area (it has to make it executable).
> 
> Anyway, I'll dig into that, assuming that the vdso_base pointer
> should be cleared if a part of the vDSO is moved or unmapped. The
> patch will be larger since I'll have to get the vDSO size, which is
> private to the vdso.c file.

At least for munmap() I don't think that's a worry: once unmapped
(even if just partially), vdso_base becomes zero and won't ever be set
again.

So no need to track the zillion pieces, should there be any: Humpty
Dumpty won't be whole again, right?

> > There's also the question of mprotect(): can users mprotect() the
> > vDSO on PowerPC?
> 
> Yes, mprotect() on the vDSO is allowed on PowerPC, as it is on x86
> and certainly all the other architectures. Furthermore, if it is
> done on only part of the vDSO it splits the vma...

btw., CRIU's main purpose here is to reconstruct a vDSO that was
originally randomized, but whose address must now be reproduced as-is,
right?

In that sense detecting the 'good' mremap() as your patch does should
do the trick and is certainly not objectionable IMHO - I was just
wondering whether we could make a perfect job very simply.

Thanks,

	Ingo
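For context, the 'good' mremap() a CRIU-style restorer would issue
looks roughly like the following userspace sketch; the parameter names
and where the addresses come from are assumptions (a real restorer
reads them from the checkpoint image and /proc/<pid>/maps):

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/*
 * Move the whole vDSO back to the address it had at checkpoint time.
 * Because old_address == vdso_base and the full mapping is moved, the
 * arch_remap() hook in the patch can follow it.
 */
void *restore_vdso(void *current_vdso, size_t vdso_len, void *saved_vdso)
{
	return mremap(current_vdso, vdso_len, vdso_len,
		      MREMAP_MAYMOVE | MREMAP_FIXED, saved_vdso);
}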
Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
* Laurent Dufour wrote:

> +static inline void arch_unmap(struct mm_struct *mm,
> +			      struct vm_area_struct *vma,
> +			      unsigned long start, unsigned long end)
> +{
> +	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
> +		mm->context.vdso_base = 0;
> +}

So AFAICS PowerPC can have multi-page vDSOs, right?

So what happens if I munmap() the middle or end of the vDSO? The above
condition only seems to cover unmaps that affect the first page. I
think 'affects any page' ought to be the right condition? (But I know
nothing about PowerPC so I might be wrong.)

> +#define __HAVE_ARCH_REMAP
> +static inline void arch_remap(struct mm_struct *mm,
> +			      unsigned long old_start, unsigned long old_end,
> +			      unsigned long new_start, unsigned long new_end)
> +{
> +	/*
> +	 * mremap() doesn't allow moving multiple vmas so we can limit the
> +	 * check to old_start == vdso_base.
> +	 */
> +	if (old_start == mm->context.vdso_base)
> +		mm->context.vdso_base = new_start;
> +}

mremap() doesn't allow moving multiple vmas, but it allows the
movement of multi-page vmas and it also allows partial mremap()s,
where it will split up a vma.

In particular, what happens if an mremap() is done with
old_start == vdso_base, but a shorter end than the end of the vDSO?
(i.e. a partial mremap() with fewer pages than the vDSO size)

Thanks,

	Ingo
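A minimal sketch of the 'affects any page' condition suggested above;
it assumes a hypothetical mm->context.vdso_len field holding the vDSO
size, which the PowerPC mm_context does not currently provide:

/*
 * Sketch only (fragment of an asm/mmu_context.h-style header): clear
 * vdso_base whenever the unmapped range overlaps any page of the vDSO,
 * not just its first one.  vdso_len is a hypothetical field; in the
 * posted patch the vDSO size is private to arch/powerpc/kernel/vdso.c.
 */
static inline void arch_unmap(struct mm_struct *mm,
			      struct vm_area_struct *vma,
			      unsigned long start, unsigned long end)
{
	unsigned long vdso_end = mm->context.vdso_base +
				 mm->context.vdso_len;

	if (start < vdso_end && end > mm->context.vdso_base)
		mm->context.vdso_base = 0;
}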
Re: [uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
On 26/03/2015 10:43, Ingo Molnar wrote:
> 
> * Benjamin Herrenschmidt wrote:
> 
>> On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
>>> * Ingo Molnar wrote:
>>>
>>>>> +#define __HAVE_ARCH_REMAP
>>>>> +static inline void arch_remap(struct mm_struct *mm,
>>>>> +			      unsigned long old_start, unsigned long old_end,
>>>>> +			      unsigned long new_start, unsigned long new_end)
>>>>> +{
>>>>> +	/*
>>>>> +	 * mremap() doesn't allow moving multiple vmas so we can limit the
>>>>> +	 * check to old_start == vdso_base.
>>>>> +	 */
>>>>> +	if (old_start == mm->context.vdso_base)
>>>>> +		mm->context.vdso_base = new_start;
>>>>> +}
>>>>
>>>> mremap() doesn't allow moving multiple vmas, but it allows the
>>>> movement of multi-page vmas and it also allows partial mremap()s,
>>>> where it will split up a vma.
>>>
>>> I.e. mremap() supports the shrinking (and growing) of vmas. In that
>>> case mremap() will unmap the end of the vma and will shrink the
>>> remaining vDSO vma.
>>>
>>> Doesn't that result in a non-working vDSO that should zero out
>>> vdso_base?
>>
>> Right. Now we can't completely prevent the user from shooting itself
>> in the foot I suppose, though there is a legit usage scenario which
>> is to move the vDSO around which it would be nice to support. I
>> think it's reasonable to put the onus on the user here to do the
>> right thing.
> 
> I argue we should use the right condition to clear vdso_base: if the
> vDSO gets at least partially unmapped. Otherwise there's little point
> in the whole patch: either correctly track whether the vDSO is OK, or
> don't ...

That's a good option, but it may be hard to achieve in the case the
vDSO area has been split in multiple pieces.

Not sure there is a right way to handle that; this is a best effort
here, allowing a process to unmap its vDSO and have the sigreturn call
done through the stack area (it has to make it executable).

Anyway, I'll dig into that, assuming that the vdso_base pointer should
be cleared if a part of the vDSO is moved or unmapped. The patch will
be larger since I'll have to get the vDSO size, which is private to the
vdso.c file.

> There's also the question of mprotect(): can users mprotect() the vDSO
> on PowerPC?

Yes, mprotect() on the vDSO is allowed on PowerPC, as it is on x86 and
certainly all the other architectures. Furthermore, if it is done on
only part of the vDSO it splits the vma...
[uml-user] [PATCH v3 2/2] powerpc/mm: Tracking vDSO remap
Some processes (CRIU) move the vDSO area using the mremap system call.
As a consequence, the kernel reference to the vDSO base address is no
longer valid, and the signal return frame built once the vDSO has been
moved does not point to the new sigreturn address.

This patch handles vDSO remapping and unmapping.

Signed-off-by: Laurent Dufour
---
 arch/powerpc/include/asm/mmu_context.h | 36 +++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 73382eba02dc..7d315c1898d4 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -8,7 +8,6 @@
 #include
 #include
 #include
-#include
 #include
 
 /*
@@ -109,5 +108,40 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
 #endif
 }
 
+static inline void arch_dup_mmap(struct mm_struct *oldmm,
+				 struct mm_struct *mm)
+{
+}
+
+static inline void arch_exit_mmap(struct mm_struct *mm)
+{
+}
+
+static inline void arch_unmap(struct mm_struct *mm,
+			      struct vm_area_struct *vma,
+			      unsigned long start, unsigned long end)
+{
+	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
+		mm->context.vdso_base = 0;
+}
+
+static inline void arch_bprm_mm_init(struct mm_struct *mm,
+				     struct vm_area_struct *vma)
+{
+}
+
+#define __HAVE_ARCH_REMAP
+static inline void arch_remap(struct mm_struct *mm,
+			      unsigned long old_start, unsigned long old_end,
+			      unsigned long new_start, unsigned long new_end)
+{
+	/*
+	 * mremap() doesn't allow moving multiple vmas so we can limit the
+	 * check to old_start == vdso_base.
+	 */
+	if (old_start == mm->context.vdso_base)
+		mm->context.vdso_base = new_start;
+}
+
 #endif /* __KERNEL__ */
 #endif /* __ASM_POWERPC_MMU_CONTEXT_H */
-- 
1.9.1
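Not part of the posted patch: a rough sketch of the stricter remap
tracking discussed in the replies above, again assuming a hypothetical
mm->context.vdso_len field holding the vDSO size:

/*
 * Sketch only: follow a move of the whole vDSO, and give up tracking
 * on any remap that only partially overlaps it.  vdso_len is a
 * hypothetical per-mm field.
 */
static inline void arch_remap(struct mm_struct *mm,
			      unsigned long old_start, unsigned long old_end,
			      unsigned long new_start, unsigned long new_end)
{
	unsigned long vdso_end = mm->context.vdso_base +
				 mm->context.vdso_len;

	if (old_start == mm->context.vdso_base && old_end == vdso_end)
		mm->context.vdso_base = new_start;	/* whole vDSO moved */
	else if (old_start < vdso_end && old_end > mm->context.vdso_base)
		mm->context.vdso_base = 0;		/* partial: give up */
}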