On Wed, Jul 08, 2020 at 07:10:27PM +0300, Jarkko Sakkinen wrote:
> On Wed, Jul 08, 2020 at 03:37:08PM +0100, Matthew Wilcox wrote:
> > On Wed, Jul 08, 2020 at 05:33:20PM +0300, Jarkko Sakkinen wrote:
> > > I get the point but I don't think that your proposal could work given
> > > that the mprotect callback takes neither 'prev' nor 'newflags' as its
> > > parameters. The current callback has no means to call mprotect_fixup()
> > > properly.
> > > 
> > > It would have to be extended
> > > 
> > >   int (*mprotect)(struct vm_area_struct *vma,
> > >                   struct vm_area_struct **pprev, unsigned long start,
> > >                   unsigned long end, unsigned long prot,
> > >                   unsigned long newflags);
> > > 
> > > Is this what you want?
> > 
> > https://lore.kernel.org/linux-mm/[email protected]/
> 
> Ugh, it's there as it should be. I'm sorry - I just misread the code.
> 
> I think that should work, and we do not have to do any extra conversion
> inside the callback.
> 
> There is still one thing that I'm wondering about. 'page->vm_max_prot_bits'
> contains a subset of {VM_READ, VM_WRITE, VM_EXEC}, while 'newflags' can
> have other bits set too.
> 
> The old implementation of sgx_vma_mprotect() is like this:
> 
> static int sgx_vma_mprotect(struct vm_area_struct *vma, unsigned long start,
>                           unsigned long end, unsigned long prot)
> {
>       return sgx_encl_may_map(vma->vm_private_data, start, end,
>                               calc_vm_prot_bits(prot, 0));
> }
> 
> The new one should probably be implemented along the lines of:
> 
> static int sgx_vma_mprotect(struct vm_area_struct *vma,
>                           struct vm_area_struct **pprev, unsigned long start,
>                           unsigned long end, unsigned long newflags)
> {
>       unsigned long masked_newflags = newflags &
>                                       (VM_READ | VM_WRITE | VM_EXEC);
>       int ret;
> 
>       ret = sgx_encl_may_map(vma->vm_private_data, start, end,
>                                  masked_newflags);
>       if (ret)
>               return ret;
> 
>       return mprotect_fixup(vma, pprev, start, end, newflags);
> }
> 
> Alternatively, the filtering could be done inside sgx_encl_may_map().
> Perhaps that is a better place for it; this was just the easier function
> in which to demonstrate the idea.

It turned out that the mmap() handler was already masking with RWX, so I
removed the masking from there and instead do it as the first step in
sgx_encl_may_map(), which is called by both handlers:

int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
                     unsigned long end, unsigned long vm_flags)
{
        unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
        unsigned long idx, idx_start, idx_end;
        struct sgx_encl_page *page;

Also renamed the last function parameter from vm_prot_bits to vm_flags.
It kind of makes the flow more understandable (i.e. vm_prot_bits is purely
an internal representation, not something the caller needs to be concerned
with).
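
For reference, the rest of the function continues roughly along these
lines. Treat this as a sketch rather than the final code; it assumes the
enclave pages are still tracked in the 'encl->page_tree' radix tree and
that each sgx_encl_page carries its 'vm_max_prot_bits':

        idx_start = PFN_DOWN(start);
        idx_end = PFN_DOWN(end - 1);

        for (idx = idx_start; idx <= idx_end; ++idx) {
                mutex_lock(&encl->lock);
                page = radix_tree_lookup(&encl->page_tree, idx);
                mutex_unlock(&encl->lock);

                /* Every page in the range must allow the requested bits. */
                if (!page || (~page->vm_max_prot_bits & vm_prot_bits))
                        return -EACCES;
        }

        return 0;
}

With the masking done here, the mmap() handler can pass vma->vm_flags and
the mprotect handler can pass newflags straight through without touching
the bits themselves.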

/Jarkko
