On 2016/03/12 at 12:49, Dave Young wrote:
> Hi, Andrew
>
> On 03/11/16 at 12:27pm, Andrew Morton wrote:
>> On Fri, 11 Mar 2016 16:42:48 +0800 Dave Young <dyo...@redhat.com> wrote:
>>
>>> On an i686 PAE enabled machine the contiguous physical area can be large,
>>> and it can cause the variables in the calculation below, in read_vmcore()
>>> and mmap_vmcore(), to be truncated:
>>>
>>>     tsz = min_t(size_t, m->offset + m->size - *fpos, buflen);
>>>
>>> Then the real size passed down is no longer correct.
>>> Suppose m->offset + m->size - *fpos is truncated to 0 while buflen > 0; then
>>> we get tsz = 0, which is of course not the expected result.
>> I don't really understand this.
>>
>> vmcore.offset is loff_t, which is 64-bit
>> vmcore.size is long long
>> *fpos is loff_t
>>
>> so the expression should all be done with 64-bit arithmetic anyway.
> #define min_t(type, x, y) ({                    \
>         type __min1 = (x);                      \
>         type __min2 = (y);                      \
>         __min1 < __min2 ? __min1: __min2; })
>
> Here x = m->offset + m->size - *fpos; it is true that the expression itself
> is done with 64-bit arithmetic. But x is then cast to size_t before it is
> compared with y, and that cast is what causes the problem.
>
>> Maybe buflen (size_t) has the wrong type, but the result of the other
>> expression should be in-range by the time we come to doing the
>> comparison.
>>
>>> During our tests there are two problems caused by it:
>>> 1) read_vmcore will refuse to continue so makedumpfile fails.
>>> 2) mmap_vmcore will trigger BUG_ON() in remap_pfn_range().
>>>
>>> Use unsigned long long in min_t instead so that the variables are not
>>> truncated.
>>>
>>> Signed-off-by: Baoquan He <b...@redhat.com>
>>> Signed-off-by: Dave Young <dyo...@redhat.com>
>> I think we'll need a cc:stable here.
> Agreed. Do you think I need repost for this?
>
>>> --- linux-x86.orig/fs/proc/vmcore.c
>>> +++ linux-x86/fs/proc/vmcore.c
>>> @@ -231,7 +231,9 @@ static ssize_t __read_vmcore(char *buffe
>>>  
>>>     list_for_each_entry(m, &vmcore_list, list) {
>>>             if (*fpos < m->offset + m->size) {
>>> -                   tsz = min_t(size_t, m->offset + m->size - *fpos, buflen);
>>> +                   tsz = (size_t)min_t(unsigned long long,
>>> +                                       m->offset + m->size - *fpos,
>>> +                                       buflen);
>> This is rather a mess.  Can we please try to fix this bug by choosing
>> appropriate types rather than all the typecasting?
> For file read/mmap, buflen is size_t, so tsz always equals buflen unless
> m->offset + m->size - *fpos < buflen. The only problem is that we need to
> avoid a large value of m->offset + m->size - *fpos being cast down so that it
> mistakenly becomes smaller than buflen.
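
To illustrate the truncation concretely, here is a minimal userspace sketch
(my own example, not part of the patch; uint32_t stands in for the 32-bit
size_t on i686, and the macro copies the shape of the kernel's min_t()):

#include <stdio.h>
#include <stdint.h>

/* Same shape as the kernel's min_t(): cast both values to the given
 * type before comparing (GCC statement-expression extension). */
#define min_t(type, x, y) ({			\
	type __min1 = (x);			\
	type __min2 = (y);			\
	__min1 < __min2 ? __min1 : __min2; })

int main(void)
{
	/* Pretend m->offset + m->size - *fpos spans exactly 4GB ... */
	unsigned long long remaining = 0x100000000ULL;
	/* ... while buflen is a small, 32-bit request size. */
	uint32_t buflen = 4096;

	/* remaining is cast down to 32 bits and becomes 0, so min_t()
	 * wrongly returns 0 instead of buflen. */
	uint32_t tsz = min_t(uint32_t, remaining, buflen);

	printf("tsz = %u (expected %u)\n", (unsigned)tsz, (unsigned)buflen);
	return 0;
}

This prints "tsz = 0 (expected 4096)", which is exactly the bogus value that
makes read_vmcore() refuse to continue and trips the BUG_ON() in
remap_pfn_range().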

Can we use "tsz = min(m->offset + m->size - *fpos, buflen)" instead?
I think it's OK for this case (both values are positive), nothing will go wrong,
and it would also make the code cleaner.
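
As a rough sketch, the __read_vmcore() line would then read something like the
below (untested; the cast on buflen is my addition, since the kernel's min()
warns when the two sides have different types and buflen is a 32-bit size_t
on i686):

		if (*fpos < m->offset + m->size) {
			/* compare in 64 bits, narrow only the clamped result */
			tsz = min(m->offset + m->size - *fpos,
				  (unsigned long long)buflen);

The narrowing when assigning back to the size_t tsz is then harmless, because
the result is already bounded by buflen.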

Regards,
Xunlei

>>
>>>                     start = m->paddr + *fpos - m->offset;
>>>                     tmp = read_from_oldmem(buffer, tsz, &start, userbuf);
>>>                     if (tmp < 0)
>>> @@ -461,7 +463,8 @@ static int mmap_vmcore(struct file *file
>>>             if (start < m->offset + m->size) {
>>>                     u64 paddr = 0;
>>>  
>>> -                   tsz = min_t(size_t, m->offset + m->size - start, size);
>>> +                   tsz = (size_t)min_t(unsigned long long,
>>> +                                       m->offset + m->size - start, size);
>>>                     paddr = m->paddr + start - m->offset;
>>>                     if (vmcore_remap_oldmem_pfn(vma, vma->vm_start + len,
>>>                                                 paddr >> PAGE_SHIFT, tsz,
> Thanks
> Dave
