On Mon, 2021-09-13 at 20:37 +0000, Sean Christopherson wrote:
> On Mon, Sep 13, 2021, Jarkko Sakkinen wrote:
> > On Fri, 2021-09-10 at 17:10 +0200, Paolo Bonzini wrote:
> > > On 19/07/21 13:21, Yang Zhong wrote:
> > > > +void sgx_memory_backend_reset(HostMemoryBackend *backend, int fd,
> > > > +                              Error **errp)
> > > > +{
> > > > +    MemoryRegion *mr = &backend->mr;
> > > > +
> > > > +    mr->enabled = false;
> > > > +
> > > > +    /* destroy the old memory region if it exist */
> > > > +    if (fd > 0 && mr->destructor) {
> > > > +        mr->destructor(mr);
> > > > +    }
> > > > +
> > > > +    sgx_epc_backend_memory_alloc(backend, errp);
> > > > +}
> > > > +
> > > 
> > > Jarkko, Sean, Kai,
> > > 
> > > this I think is problematic because it has a race window while 
> > > /dev/sgx_vepc is closed and then reopened.  First, the vEPC space could 
> > > be exhausted by somebody doing another mmap in the meanwhile.  Second, 
> > > somebody might (for whatever reason) remove /dev/sgx_vepc while QEMU runs.
> > 
> > 1: Why is it a problem that mmap() could fail?
> 
> The flow in question is QEMU's emulation of a guest RESET.  If mmap() fails,
> QEMU either has to kill the VM or disable SGX.  In either case, it's fatal
> to a running workload/VM.

Thanks for the explanations.

Isn't this more about badly configured systems/workloads? That's,
for me at least, the essential question.

I'm interested in legit workloads where this behaviour could still
cause issues.

I'd guess that in, e.g., a data center environment, you'd have fairly
strict orchestration for this type of resource, so that you know that
workloads have appropriate bandwidth.
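Just to make sure we're looking at the same window: below is roughly the
close-and-reopen sequence I'm picturing on guest RESET. This is a minimal
sketch only; the helper name and error handling are made up for
illustration and are not the actual QEMU code.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Illustrative helper, not QEMU's sgx_epc_backend_memory_alloc(). */
static void *vepc_reopen(int *fd, size_t size)
{
    /* The old mapping/fd are torn down first... */
    close(*fd);

    /*
     * ...and here is the window: another process can grab vEPC pages,
     * or /dev/sgx_vepc can disappear, before we get them back.
     */
    *fd = open("/dev/sgx_vepc", O_RDWR);
    if (*fd < 0) {
        perror("open /dev/sgx_vepc");   /* device gone: fatal to the guest */
        return NULL;
    }

    void *epc = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_SHARED, *fd, 0);
    if (epc == MAP_FAILED) {
        perror("mmap vEPC");   /* EPC exhausted in the meanwhile: also fatal */
        return NULL;
    }
    return epc;
}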

/Jarkko
