On 1/21/19 2:13 AM, Roman Penyaev wrote:
> On 2019-01-18 17:12, Jens Axboe wrote:
> 
> [...]
> 
>> +
>> +static int io_uring_create(unsigned entries, struct io_uring_params *p,
>> +                       bool compat)
>> +{
>> +{
>> +    struct user_struct *user = NULL;
>> +    struct io_ring_ctx *ctx;
>> +    int ret;
>> +
>> +    if (entries > IORING_MAX_ENTRIES)
>> +            return -EINVAL;
>> +
>> +    /*
>> +     * Use twice as many entries for the CQ ring. It's possible for the
>> +     * application to drive a higher depth than the size of the SQ ring,
>> +     * since the sqes are only used at submission time. This allows for
>> +     * some flexibility in overcommitting a bit.
>> +     */
>> +    p->sq_entries = roundup_pow_of_two(entries);
>> +    p->cq_entries = 2 * p->sq_entries;
>> +
>> +    if (!capable(CAP_IPC_LOCK)) {
>> +            user = get_uid(current_user());
>> +            ret = __io_account_mem(user, ring_pages(p->sq_entries,
>> +                                                    p->cq_entries));
>> +            if (ret) {
>> +                    free_uid(user);
>> +                    return ret;
>> +            }
>> +    }
>> +
>> +    ctx = io_ring_ctx_alloc(p);
>> +    if (!ctx)
>> +            return -ENOMEM;
> 
> Hi Jens,
> 
> It seems the pages should be "unaccounted" back here and the uid freed
> if the path under "if (!capable(CAP_IPC_LOCK))" above was taken.

Thanks, yes that is leaky. I'll fix that up.

> But really, could someone please explain to me what is wrong with
> allocating all the urings in mmap() without touching RLIMIT_MEMLOCK at
> all?  Then all the memory would be accounted to the calling app, and if
> the app is greedy it would be killed by the OOM killer.  What am I
> missing?

I don't really see what that'd change, whether we do it off ->mmap() or
when we set up the io_uring instance with io_uring_setup(2). We need this
memory to be pinned; we can't fault on it.

-- 
Jens Axboe
