On 2017/6/2 9:45, Wei Yang wrote:
> On Fri, May 26, 2017 at 09:55:31AM +0800, zhong jiang wrote:
>> On 2017/5/26 9:36, Wei Yang wrote:
>>> On Thu, May 25, 2017 at 11:04:44AM +0800, zhong jiang wrote:
>>>> I hit the overlap issue, but it is hard to reproduced.
On 2017/5/26 9:36, Wei Yang wrote:
> On Thu, May 25, 2017 at 11:04:44AM +0800, zhong jiang wrote:
>> I hit the overlap issue, but it is hard to reproduced. if you think it is
>> safe. and the situation
>> is not happen. AFAIC, it is no need to add the code.
>>
I hit the overlap issue, but it is hard to reproduce. If you think it is
safe and the situation does not happen, AFAIC there is no need to add the code.
If you insist on the point, maybe VM_WARN_ON is a choice.
Regards
zhongjiang
On 2017/5/24 18:03, Wei Yang wrote:
> The vmap RB tree store the
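The overlap check being debated can be sketched in plain C. The trimmed-down `struct vmap_area`, the helper `va_overlaps`, and the stand-in `VM_WARN_ON` macro below are illustrative assumptions for this thread, not the kernel's actual implementation:

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel's vmap_area and VM_WARN_ON. */
struct vmap_area {
    unsigned long va_start;
    unsigned long va_end;   /* exclusive */
};

#define VM_WARN_ON(cond) \
    do { if (cond) fprintf(stderr, "WARN: %s\n", #cond); } while (0)

/* Two half-open ranges [start, end) overlap iff each starts before the
 * other ends. */
static int va_overlaps(const struct vmap_area *a, const struct vmap_area *b)
{
    return a->va_start < b->va_end && b->va_start < a->va_end;
}

/* What an insert path could do instead of silently corrupting the tree:
 * warn loudly and reject the overlapping area. */
static int insert_checked(const struct vmap_area *existing,
                          const struct vmap_area *cand)
{
    VM_WARN_ON(va_overlaps(existing, cand));
    return va_overlaps(existing, cand) ? -1 : 0;
}
```

The appeal of VM_WARN_ON here is that it makes a supposedly impossible overlap visible in debug builds without putting a hard failure on the common path.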
On 2017/5/23 17:33, Vlastimil Babka wrote:
> On 05/23/2017 11:21 AM, zhong jiang wrote:
>> On 2017/5/23 0:51, Vlastimil Babka wrote:
>>> On 05/20/2017 05:01 AM, zhong jiang wrote:
>>>> On 2017/5/20 10:40, Hugh Dickins wrote:
>>>>> On Sat, 20 May 2017,
On 2017/5/23 0:51, Vlastimil Babka wrote:
> On 05/20/2017 05:01 AM, zhong jiang wrote:
>> On 2017/5/20 10:40, Hugh Dickins wrote:
>>> On Sat, 20 May 2017, Xishi Qiu wrote:
>>>> Here is a bug report form redhat:
>>>> https://bugzilla.redhat.com/show_bug.
On 2017/5/20 10:40, Hugh Dickins wrote:
> On Sat, 20 May 2017, Xishi Qiu wrote:
>> Here is a bug report form redhat:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1305620
>> And I meet the bug too. However it is hard to reproduce, and
>> 624483f3ea82598("mm: rmap: fix use-after-free in
On 2017/5/17 21:44, Michal Hocko wrote:
> On Wed 17-05-17 20:53:57, zhong jiang wrote:
>> +to linux-mm maintainer for any suggestions
>>
>> Thanks
>> zhongjiang
>> On 2017/5/16 13:03, zhong jiang wrote:
>>> Hi
>>>
>>> I hit the fol
+to linux-mm maintainer for any suggestions
Thanks
zhongjiang
On 2017/5/16 13:03, zhong jiang wrote:
> Hi
>
> I hit the following issue by runing /proc/vmallocinfo. The kernel is 4.1
> stable and
> 32 bit to be used. after I expand the vamlloc area, the issue is not
Hi,
I hit the following issue by running /proc/vmallocinfo. The kernel is 4.1
stable, 32-bit. After I expanded the vmalloc area, the issue did not occur
again.
It seems related to the overflow, but I do not see any problem so far.
cat /proc/vmallocinfo
0xf158-0xf160 524288
On 2017/5/9 23:46, Rik van Riel wrote:
> On Thu, 2017-05-04 at 10:28 +0800, zhong jiang wrote:
>> On 2017/5/4 2:46, Rik van Riel wrote:
>>> However, it is not as easy as simply checking the
>>> end against __pa(high_memory). Some systems have
>>> non-contiguo
On 2017/5/4 2:46, Rik van Riel wrote:
> On Tue, 2017-05-02 at 13:54 -0700, David Rientjes wrote:
>
>>> diff --git a/drivers/char/mem.c b/drivers/char/mem.c
>>> index 7e4a9d1..3a765e02 100644
>>> --- a/drivers/char/mem.c
>>> +++ b/drivers/char/mem.c
>>> @@ -55,7 +55,7 @@ static inline int
>>
On 2017/5/3 4:54, David Rientjes wrote:
> On Thu, 27 Apr 2017, zhongjiang wrote:
>
>> From: zhong jiang <zhongji...@huawei.com>
>>
>> Recently, I found the following issue, it will result in the panic.
>>
>> [ 168.739152] mmap1: Corrupted page table at
Ping. Does anyone have any objections?
On 2017/4/27 19:49, zhongjiang wrote:
> From: zhong jiang <zhongji...@huawei.com>
>
> Recently, I found the following issue, it will result in the panic.
>
> [ 168.739152] mmap1: Corrupted page table at address 7f3e6275a002
> [ 1
Hi, Dashi,
The same issue has occurred for me every other week. Have you solved it?
I want to know how it was fixed, and whether the patch exists in mainline.
Thanks
zhongjiang
On 2016/12/23 10:38, Dashi DS1 Cao wrote:
> I'd expected that one or more tasks doing the free were the current task of
> other
Hi, Elena,
Has the use-after-free issue really occurred? If so, why was the patch
not accepted? Or is the situation possible at all?
Thanks
zhongjiang
On 2017/2/20 18:49, Elena Reshetova wrote:
> refcount_t type and corresponding API should be
> used instead of atomic_t when the variable is
On 2017/4/10 22:13, Willy Tarreau wrote:
> On Mon, Apr 10, 2017 at 10:06:59PM +0800, zhong jiang wrote:
>> On 2017/4/10 20:48, Michal Hocko wrote:
>>> On Mon 10-04-17 20:10:06, zhong jiang wrote:
>>>> On 2017/4/10 16:56, Mel Gorman wrote:
>>>>> On Sat,
On 2017/4/10 22:06, Mel Gorman wrote:
> On Mon, Apr 10, 2017 at 08:10:06PM +0800, zhong jiang wrote:
>> On 2017/4/10 16:56, Mel Gorman wrote:
>>> On Sat, Apr 08, 2017 at 09:39:42PM +0800, zhong jiang wrote:
>>>> when runing the stabile docker cases in the vm. The f
On 2017/4/10 20:48, Michal Hocko wrote:
> On Mon 10-04-17 20:10:06, zhong jiang wrote:
>> On 2017/4/10 16:56, Mel Gorman wrote:
>>> On Sat, Apr 08, 2017 at 09:39:42PM +0800, zhong jiang wrote:
>>>> when runing the stabile docker cases in the vm. The foll
On 2017/4/10 16:56, Mel Gorman wrote:
> On Sat, Apr 08, 2017 at 09:39:42PM +0800, zhong jiang wrote:
>> when runing the stabile docker cases in the vm. The following issue will
>> come up.
>>
>> #40 [8801b57ffb30] async_page_fault at 8165c9f8
>>
When running the stable docker cases in the VM, the following issue comes
up.
#40 [8801b57ffb30] async_page_fault at 8165c9f8
[exception RIP: down_read_trylock+5]
RIP: 810aca65 RSP: 8801b57ffbe8 RFLAGS: 00010202
RAX: RBX:
On 2017/1/23 9:30, John Hubbard wrote:
>
>
> On 01/22/2017 05:14 PM, zhong jiang wrote:
>> On 2017/1/22 20:58, zhongjiang wrote:
>>> From: zhong jiang <zhongji...@huawei.com>
>>>
>>> Recently, I find the ioremap_page_range had been abus
On 2017/1/22 20:58, zhongjiang wrote:
> From: zhong jiang <zhongji...@huawei.com>
>
> Recently, I find the ioremap_page_range had been abusing. The improper
> address mapping is a issue. it will result in the crash. so, remove
> the symbol. It can be replaced by the ioremap_c
On 2016/12/16 20:35, Will Deacon wrote:
> On Fri, Dec 16, 2016 at 05:10:05PM +0800, zhong jiang wrote:
>> On 2016/12/14 22:19, zhongjiang wrote:
>>> From: zhong jiang <zhongji...@huawei.com>
>>>
>>> when HUGETLB_PAGE is disable, WANT_HUGE_PMD_SHARE c
On 2016/12/14 22:19, zhongjiang wrote:
> From: zhong jiang <zhongji...@huawei.com>
>
> when HUGETLB_PAGE is disable, WANT_HUGE_PMD_SHARE contains the
> fuctions should not be use. therefore, we add the dependency.
>
> Signed-off-by: zhong jiang <zhongji...@huawei.com&g
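The dependency this patch describes can be illustrated as a Kconfig fragment. The symbol name follows mainline's ARCH_WANT_HUGE_PMD_SHARE convention; since the snippet is truncated, the exact hunk below is an assumption, not the submitted diff:

```kconfig
# Hypothetical shape of the fix: the PMD-sharing helpers only make
# sense when hugetlb itself is built in.
config ARCH_WANT_HUGE_PMD_SHARE
	def_bool y
	depends on HUGETLB_PAGE
```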
On 2016/12/14 22:45, Ard Biesheuvel wrote:
> On 14 December 2016 at 14:19, zhongjiang <zhongji...@huawei.com> wrote:
>> From: zhong jiang <zhongji...@huawei.com>
>>
>> I think that CONT_PTE_SHIFT is more reasonable even if they are some
>> value. and
On 2016/12/9 13:19, Eric W. Biederman wrote:
> zhong jiang <zhongji...@huawei.com> writes:
>
>> On 2016/12/8 17:41, Xunlei Pang wrote:
>>> On 12/08/2016 at 10:37 AM, zhongjiang wrote:
>>>> From: zhong jiang <zhongji...@huawei.com>
>>>>
On 2016/12/8 17:41, Xunlei Pang wrote:
> On 12/08/2016 at 10:37 AM, zhongjiang wrote:
>> From: zhong jiang <zhongji...@huawei.com>
>>
>> A soft lookup will occur when I run trinity in syscall kexec_load.
>> the corresponding stack information is as follows.
>
On 2016/12/8 9:50, Eric W. Biederman wrote:
> zhongjiang <zhongji...@huawei.com> writes:
>
>> From: zhong jiang <zhongji...@huawei.com>
>>
>> A soft lookup will occur when I run trinity in syscall kexec_load.
>> the corresponding stack information is a
On 2016/11/4 3:17, Andrew Morton wrote:
> On Sat, 29 Oct 2016 14:08:31 +0800 zhongjiang <zhongji...@huawei.com> wrote:
>
>> From: zhong jiang <zhongji...@huawei.com>
>>
>> Since 'commit 3e89e1c5ea84 ("hugetlb: make mm and fs code explicitly
>>
On 2016/10/29 14:08, zhongjiang wrote:
> From: zhong jiang <zhongji...@huawei.com>
>
> Since 'commit 3e89e1c5ea84 ("hugetlb: make mm and fs code explicitly
> non-modular")'
> bring in the mainline. mount hugetlbfs will result in the following issue.
>
> moun
On 2016/10/27 12:02, Gao Feng wrote:
> On Thu, Oct 27, 2016 at 11:56 AM, zhongjiang <zhongji...@huawei.com> wrote:
>> From: zhong jiang <zhongji...@huawei.com>
>>
>> when I compiler the newest kernel, I hit the following error with
>> Werror=may-uninit
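The class of warning being reported here, and the usual minimal fix, can be reproduced outside the kernel. The function below is a made-up example of the pattern, not the flow_dissector code:

```c
#include <assert.h>

/* Illustrative only: a variable assigned on some paths but read on all
 * paths triggers gcc's -Wmaybe-uninitialized; initializing it at the
 * declaration silences the warning without changing defined behavior. */
static int classify(int key)
{
    int proto = 0;          /* fix: initialize up front */

    if (key > 0)
        proto = 4;
    else if (key < 0)
        proto = 6;
    /* without the initializer, the key == 0 path would read an
     * indeterminate value */
    return proto;
}
```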
On 2016/10/17 23:30, Dan Streetman wrote:
> On Mon, Oct 17, 2016 at 8:48 AM, zhong jiang <zhongji...@huawei.com> wrote:
>> On 2016/10/17 20:03, Vitaly Wool wrote:
>>> Hi Zhong Jiang,
>>>
>>> On Mon, Oct 17, 2016 at 3:58 AM, zhong jiang <z
On 2016/10/17 20:03, Vitaly Wool wrote:
> Hi Zhong Jiang,
>
> On Mon, Oct 17, 2016 at 3:58 AM, zhong jiang <zhongji...@huawei.com> wrote:
>> Hi, Vitaly
>>
>> About the following patch, is it right?
>>
>> Thanks
>> zhongjiang
>> On
Hi, Vitaly,
Is the following patch correct?
Thanks
zhongjiang
On 2016/10/13 12:02, zhongjiang wrote:
> From: zhong jiang <zhongji...@huawei.com>
>
> At present, zhdr->first_num plus bud can exceed the BUDDY_MASK
> in encode_handle, it will lead to the the caller han
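The arithmetic in question can be modeled in userspace. The constants and helper names below mirror z3fold's, but this is a simplified model (the page-aligned header is represented by a page-multiple address), useful for checking whether first_num exceeding BUDDY_MASK actually breaks the round trip:

```c
#include <assert.h>

#define BUDDY_MASK 0x3UL          /* low two bits carry the buddy index */
#define PAGE_SIZE  4096UL

/* Simplified model of z3fold's encode_handle: the header address is
 * page-aligned, and (first_num + bud) is folded into the low bits
 * under the mask. */
static unsigned long encode_handle(unsigned long zhdr, unsigned long first_num,
                                   unsigned long bud)
{
    return zhdr + ((first_num + bud) & BUDDY_MASK);
}

/* Simplified handle_to_buddy: subtract first_num back out under the
 * same mask. */
static unsigned long handle_to_buddy(unsigned long handle,
                                     unsigned long first_num)
{
    return (handle - first_num) & BUDDY_MASK;
}
```

In this simplified model the masked round trip survives first_num > BUDDY_MASK, because the mask width is a power of two and the subtraction happens modulo BUDDY_MASK + 1; whether the real encode/decode pair preserves that property is exactly what the thread is asking.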
On 2016/10/15 3:25, Vitaly Wool wrote:
> On Fri, Oct 14, 2016 at 3:35 PM, zhongjiang <zhongji...@huawei.com> wrote:
>> From: zhong jiang <zhongji...@huawei.com>
>>
>> z3fold compact page has nothing with the last_chunks. even if
>> last_chunks
On 2016/10/13 11:33, zhongjiang wrote:
> From: zhong jiang <zhongji...@huawei.com>
>
> At present, zhdr->first_num plus bud can exceed the BUDDY_MASK
> in encode_handle, it will lead to the the caller handle_to_buddy
> return the error value.
>
> The patch fix the is
On 2016/9/25 8:06, Mike Kravetz wrote:
> On 09/23/2016 07:56 PM, zhong jiang wrote:
>> On 2016/9/24 1:19, Mike Kravetz wrote:
>>> On 09/22/2016 06:53 PM, zhong jiang wrote:
>>>> At present, we need to call hugetlb_fix_reserve_count when
>>>> hugetlb_
On 2016/9/24 1:19, Mike Kravetz wrote:
> On 09/22/2016 06:53 PM, zhong jiang wrote:
>> At present, we need to call hugetlb_fix_reserve_count when
>> hugetlb_unrserve_pages fails,
>> and PagePrivate will decide hugetlb reserves counts.
>>
>> we obtain the page f
At present, we need to call hugetlb_fix_reserve_counts when
hugetlb_unreserve_pages fails,
and PagePrivate decides the hugetlb reserve counts.
We obtain the page from the page cache, and the page is used under both
lock_page and mutex_lock.
alloc_huge_page adds the page to the page cache while holding the page
lock, then bail
On 2016/8/22 22:28, Catalin Marinas wrote:
> On Sat, Aug 20, 2016 at 05:38:59PM +0800, zhong jiang wrote:
>> On 2016/8/19 12:11, Ganapatrao Kulkarni wrote:
>>> On Fri, Aug 19, 2016 at 9:30 AM, Ganapatrao Kulkarni
>>> <gpkulka...@gmail.com> wrote:
>>>>
On 2016/8/19 12:11, Ganapatrao Kulkarni wrote:
> On Fri, Aug 19, 2016 at 9:30 AM, Ganapatrao Kulkarni
> <gpkulka...@gmail.com> wrote:
>> On Fri, Aug 19, 2016 at 7:28 AM, zhong jiang <zhongji...@huawei.com> wrote:
>>> On 2016/8/19 1:45, Ganapatrao Kulkarni wrote:
On 2016/8/19 1:45, Ganapatrao Kulkarni wrote:
> On Thu, Aug 18, 2016 at 9:34 PM, Catalin Marinas
> wrote:
>> On Thu, Aug 18, 2016 at 09:09:26PM +0800, zhongjiang wrote:
>>> At present, boot cpu will bound to a node from device tree when node_off
>>> enable.
>>> if the
On 2016/8/10 7:29, Andrew Morton wrote:
> On Fri, 5 Aug 2016 22:04:07 +0800 zhongjiang wrote:
>
>> when required_kernelcore decrease to zero, we should exit the loop in time.
>> because It will waste time to scan the remainder node.
> The patch is rather ugly and it only
On 2016/8/5 22:04, zhongjiang wrote:
> From: zhong jiang <zhongji...@huawei.com>
>
> when required_kernelcore decrease to zero, we should exit the loop in time.
> because It will waste time to scan the remainder node.
>
> Signed-off-by: zhong jiang <zhongji...@huawei.co
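The early exit the patch proposes can be sketched as a self-contained loop. distribute_kernelcore and its node array are illustrative stand-ins for the zone sizing code, not the mm/page_alloc.c implementation:

```c
#include <assert.h>

/* Illustrative model of the proposed early exit: stop scanning nodes
 * once the remaining kernelcore budget reaches zero, instead of
 * visiting the rest of the nodes for nothing. Returns how many nodes
 * were visited. */
static int distribute_kernelcore(const unsigned long *node_pages,
                                 int nr_nodes,
                                 unsigned long required_kernelcore)
{
    int nid, visited = 0;

    for (nid = 0; nid < nr_nodes; nid++) {
        unsigned long take;

        visited++;
        take = node_pages[nid] < required_kernelcore ?
               node_pages[nid] : required_kernelcore;
        required_kernelcore -= take;
        if (!required_kernelcore)
            break;      /* the proposed early exit */
    }
    return visited;
}
```

With four nodes of 100 pages each and a budget of 150, the loop stops after the second node rather than scanning all four, which is the time saving the patch is after.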
On 2016/8/9 1:14, Mike Kravetz wrote:
> On 08/07/2016 07:49 PM, zhongjiang wrote:
>> From: zhong jiang <zhongji...@huawei.com>
>>
>> when memory hotplug enable, free hugepages will be freed if movable node
>> offline.
>> therefore, /proc/sys/vm/nr_hugepages
According to total_mapcount, different processes can map any subpages of a
transparent hugepage. How can that happen?
On 2016/8/2 7:05, Andrew Morton wrote:
> On Sat, 30 Jul 2016 11:51:09 +0800 zhongjiang <zhongji...@huawei.com> wrote:
>
>> From: zhong jiang <zhongji...@huawei.com>
>>
>> when compile the kenrel code, I happens to the following warn.
>> fs/reiserfs/ibalan
On 2016/7/30 7:02, Andrew Morton wrote:
> On Fri, 29 Jul 2016 22:46:39 +0800 zhongjiang <zhongji...@huawei.com> wrote:
>
>> From: zhong jiang <zhongji...@huawei.com>
>>
>> when compile the kenrel code, I happens to the following warn.
>> fs/reiserfs/ibalan