From: zijun_hu <zijun...@htc.com>
it is an error to represent the max range max_distance spanned by all the
group areas as the offset of the highest group area plus the unit size in
pcpu_embed_first_chunk(); it should equal the offset plus the size
of the highest group area
in order
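The corrected computation can be sketched in self-contained userland C; the struct, field names, and example offsets below are hypothetical stand-ins for illustration, not taken from mm/percpu.c:

```c
#include <stddef.h>

/* Hypothetical per-group layout: offset of each group area from the
 * chunk base, and the total size of that area. */
struct group_area {
	size_t base_offset;
	size_t size;
};

/* max_distance is the span covered by all group areas: the end of the
 * group area that reaches furthest, i.e. max(offset + size), rather
 * than "offset of the highest group + unit_size". */
size_t max_distance(const struct group_area *g, int nr_groups)
{
	size_t max = 0;

	for (int i = 0; i < nr_groups; i++) {
		size_t end = g[i].base_offset + g[i].size;

		if (end > max)
			max = end;
	}
	return max;
}
```

For two groups at offsets 0x0 (size 0x10000) and 0x100000 (size 0x20000), this yields 0x120000; taking the highest offset plus a single unit size would undershoot whenever the highest group holds more than one unit.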
On 2016/9/24 3:23, Tejun Heo wrote:
> On Sat, Sep 24, 2016 at 02:20:24AM +0800, zijun_hu wrote:
>> From: zijun_hu <zijun...@htc.com>
>>
>> correct max_distance from (base of the highest group + ai->unit_size)
>> to (base of the highest group + the group size)
From: zijun_hu <zijun...@htc.com>
correct max_distance from (base of the highest group + ai->unit_size)
to (base of the highest group + the group size)
Signed-off-by: zijun_hu <zijun...@htc.com>
---
mm/percpu.c | 14 --
1 file changed, 8 insertions(+), 6 deletions
From: zijun_hu <zijun...@htc.com>
simplify the cpu-grouping logic in pcpu_build_alloc_info() to improve
readability and performance; it discards the goto statement too
for every possible cpu, decide whether it can share the group id of any
lower-index CPU; use that group id if so, otherwise a new group id is allocated
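The grouping scheme described above can be sketched as follows; same_group() is a hypothetical stand-in for the kernel's cpu_distance_fn test, and NR_CPUS and the node layout are assumptions for illustration:

```c
#define NR_CPUS 8

/* Hypothetical stand-in for the cpu_distance_fn check: here, cpus in
 * the same "node" (cpu / 4) are considered groupable. */
int same_group(int a, int b)
{
	return a / 4 == b / 4;
}

/* For every possible cpu, scan the lower-index cpus: share the first
 * matching cpu's group id, otherwise allocate a new one; no goto. */
int assign_groups(int group_map[NR_CPUS])
{
	int nr_groups = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		int g = nr_groups;	/* default: a new group id */

		for (int lo = 0; lo < cpu; lo++) {
			if (same_group(cpu, lo)) {
				g = group_map[lo];
				break;
			}
		}
		if (g == nr_groups)
			nr_groups++;
		group_map[cpu] = g;
	}
	return nr_groups;
}
```

With the layout above, cpus 0-3 end up in group 0 and cpus 4-7 in group 1, with group ids handed out in increasing cpu order.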
On 2016/9/23 22:42, Tejun Heo wrote:
> Hello,
>
> On Wed, Sep 21, 2016 at 12:19:53PM +0800, zijun_hu wrote:
>> From: zijun_hu <zijun...@htc.com>
>>
>> an endless loop may happen if either of the parameters addr and end is not
>> page aligned for the kernel API function ioremap_page_range()
On 2016/9/23 22:27, Michal Hocko wrote:
> On Fri 23-09-16 22:14:40, zijun_hu wrote:
>> On 2016/9/23 21:33, Michal Hocko wrote:
>>> On Fri 23-09-16 21:00:18, zijun_hu wrote:
>>>> On 09/23/2016 08:42 PM, Michal Hocko wrote:
>>>>>>>> no, it
On 2016/9/23 21:33, Michal Hocko wrote:
> On Fri 23-09-16 21:00:18, zijun_hu wrote:
>> On 09/23/2016 08:42 PM, Michal Hocko wrote:
>>>>>> no, it doesn't work for many special cases
>>>>>> for example, given PMD_SIZE=2M
>>>>>> mapping
On 09/23/2016 08:42 PM, Michal Hocko wrote:
no, it doesn't work for many special cases
for example, given PMD_SIZE=2M
mapping the [0x1f8800, 0x208800) virtual range will be split into two ranges
[0x1f8800, 0x20) and [0x20,0x208800), which are mapped separately
the first
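The splitting behavior under discussion mirrors the pmd_addr_end() idiom used by the kernel's page-table walkers; here is a userland sketch with an assumed 2M PMD_SIZE (the addresses in the quoted example are truncated in the archive, so round numbers are used instead):

```c
#define PMD_SIZE (2UL << 20)	/* 2M, as in the quoted example */
#define PMD_MASK (~(PMD_SIZE - 1))

/* End of the chunk a single pmd-level step may cover: the next 2M
 * boundary, or the range end if that comes first (this mirrors the
 * kernel's pmd_addr_end() macro). */
unsigned long pmd_addr_end(unsigned long addr, unsigned long end)
{
	unsigned long boundary = (addr + PMD_SIZE) & PMD_MASK;

	return boundary - 1 < end - 1 ? boundary : end;
}
```

A range such as [0x1f0000, 0x280000) is therefore walked in two pieces, split at the 0x200000 boundary and mapped separately.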
On 2016/9/23 16:45, Michal Hocko wrote:
> On Thu 22-09-16 23:13:17, zijun_hu wrote:
>> On 2016/9/22 20:47, Michal Hocko wrote:
>>> On Wed 21-09-16 12:19:53, zijun_hu wrote:
>>>> From: zijun_hu <zijun...@htc.com>
>>>>
>>>> e
On 09/21/2016 12:34 PM, zijun_hu wrote:
> From: zijun_hu <zijun...@htc.com>
>
> fix the following bug:
> - an endless loop may happen when v[un]mapping improper ranges
>   either of whose boundaries is not page aligned
>
> Signed-off-by: zijun_hu <zijun...@htc.
On 09/21/2016 12:19 PM, zijun_hu wrote:
> From: zijun_hu <zijun...@htc.com>
>
> an endless loop may happen if either of the parameters addr and end is not
> page aligned for the kernel API function ioremap_page_range()
>
> in order to fix this issue and alert improper range para
On 2016/9/23 11:30, Nicholas Piggin wrote:
> On Fri, 23 Sep 2016 00:30:20 +0800
> zijun_hu <zijun...@zoho.com> wrote:
>
>> On 2016/9/22 20:37, Michal Hocko wrote:
>>> On Thu 22-09-16 09:13:50, zijun_hu wrote:
>>>> On 09/22/2016 08:35 AM, David Rien
On 2016/9/22 20:37, Michal Hocko wrote:
> On Thu 22-09-16 09:13:50, zijun_hu wrote:
>> On 09/22/2016 08:35 AM, David Rientjes wrote:
> [...]
>>> The intent is as it is implemented; with your change, lazy_max_pages() is
>>> potentially increased dependin
On 2016/9/22 20:47, Michal Hocko wrote:
> On Wed 21-09-16 12:19:53, zijun_hu wrote:
>> From: zijun_hu <zijun...@htc.com>
>>
>> an endless loop may happen if either of the parameters addr and end is not
>> page aligned for the kernel API function ioremap_page_range()
On 09/21/2016 12:23 PM, zijun_hu wrote:
> From: zijun_hu <zijun...@htc.com>
>
> correct a few logic errors in __insert_vmap_area() since the else
> if condition is always true and meaningless
>
> in order to fix this issue, if vmap_area inserted is lower than one
>
On 09/22/2016 08:35 AM, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>> On 2016/9/22 5:21, David Rientjes wrote:
>>> On Wed, 21 Sep 2016, zijun_hu wrote:
>>>
>>>> From: zijun_hu <zijun...@htc.com>
>>>>
>>>>
On 2016/9/22 7:15, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>>> We don't support inserting when va->va_start == tmp_va->va_end, plain and
>>> simple. There's no reason to do so. NACK to the patch.
>>>
>> i am sorry i disagree
On 2016/9/22 5:21, David Rientjes wrote:
> On Wed, 21 Sep 2016, zijun_hu wrote:
>
>> From: zijun_hu <zijun...@htc.com>
>>
>> correct the lazy_max_pages() return value if the number of online
>> CPUs is a power of 2
>>
>> Signed-off-by: zijun_hu <zijun
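The disagreement here hinges on fls() semantics: fls(n) is floor(log2(n)) + 1, which equals ceil(log2(n)) for non-powers of two but is one step higher exactly when n is a power of two. A userland sketch (fls() re-implemented with a GCC builtin, an assumption of this sketch):

```c
/* Userland stand-in for the kernel's fls(): 1-based position of the
 * highest set bit; fls(0) == 0. */
int fls(unsigned int x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}
```

So with 8 online CPUs, fls() yields 4 while log2(8) is 3; that one-step difference for power-of-two CPU counts is exactly what the patch and the review argue about.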
On 2016/9/22 5:16, David Rientjes wrote:
> On Wed, 21 Sep 2016, zijun_hu wrote:
>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index cc6ecd6..a125ae8 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -2576,32 +2576,13 @@ void pcpu_free_vm_areas(st
On 2016/9/22 6:45, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>>>> correct a few logic errors in __insert_vmap_area() since the else
>>>> if condition is always true and meaningless
>>>>
>>>> in order to fix this issue,
On 2016/9/22 5:10, David Rientjes wrote:
> On Wed, 21 Sep 2016, zijun_hu wrote:
>
>> From: zijun_hu <zijun...@htc.com>
>>
>> correct a few logic errors in __insert_vmap_area() since the else
>> if condition is always true and meaningless
>>
>> i
On 09/20/2016 01:49 PM, zijun_hu wrote:
> From: zijun_hu <zijun...@htc.com>
>
> for ioremap_page_range(), an endless loop may happen if either of the parameters
> addr and end is not page aligned; in order to fix this issue and hint at the range
> parameter requirements, BUG_ON() checks are performed first
Hi All,
please ignore this patch
as advised by Nicholas Piggin, I split this patch into smaller patches
and resent them in another mail thread
On 09/20/2016 02:02 PM, zijun_hu wrote:
> From: zijun_hu <zijun...@htc.com>
>
> correct a few logic errors in __insert_vmap_area()
From: zijun_hu <zijun...@htc.com>
fix the following bug:
- an endless loop may happen when v[un]mapping improper ranges
  either of whose boundaries is not page aligned
Signed-off-by: zijun_hu <zijun...@htc.com>
---
mm/vmalloc.c | 9 +++--
1 file changed, 7 insertions(+),
From: zijun_hu <zijun...@htc.com>
improve performance of pcpu_get_vm_areas() in the following aspects
- halve the iteration count of the vmap_area overlap-check loop
- find the previous or next vmap_area via its list_head rather than the rbtree
Signed-off-by: zijun_hu <zijun...@htc.com>
---
i
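The second point above can be sketched with a minimal intrusive list; struct area and the helper names are hypothetical stand-ins for vmap_area and its list linkage:

```c
#include <stddef.h>

/* Minimal intrusive list, after include/linux/list.h. */
struct list_head {
	struct list_head *prev, *next;
};

/* Hypothetical vmap_area-like node kept on an address-sorted list in
 * addition to the rbtree. */
struct area {
	unsigned long va_start;
	struct list_head list;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Previous/next area in address order: one pointer chase on the list
 * instead of an O(log n) rbtree predecessor/successor walk. */
struct area *prev_area(struct area *a)
{
	return container_of(a->list.prev, struct area, list);
}

struct area *next_area(struct area *a)
{
	return container_of(a->list.next, struct area, list);
}
```

This works because the kernel already keeps vmap_areas on an address-sorted list alongside the rbtree, so neighbors are available in O(1).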
From: zijun_hu <zijun...@htc.com>
correct the lazy_max_pages() return value if the number of online
CPUs is a power of 2
Signed-off-by: zijun_hu <zijun...@htc.com>
---
mm/vmalloc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
i
From: zijun_hu <zijun...@htc.com>
simplify /proc/vmallocinfo implementation via seq_file helpers
for list_head
Signed-off-by: zijun_hu <zijun...@htc.com>
---
mm/vmalloc.c | 27 +--
1 file changed, 5 insertions(+), 22 deletions(-)
diff --git a/mm/vmalloc.c b/
From: zijun_hu <zijun...@htc.com>
correct a few logic errors in __insert_vmap_area(), since the else-if
condition is always true and meaningless
in order to fix this issue: if the vmap_area being inserted is lower than the one
on the rbtree then walk down the left branch; if higher, the right branch;
otherwise they intersect
From: zijun_hu <zijun...@htc.com>
an endless loop may happen if either of the parameters addr and end is not
page aligned for the kernel API function ioremap_page_range()
in order to fix this issue and alert users to improper range parameters,
a WARN_ON() check and rounding down of the range's lower boundary are performed first
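A self-contained sketch of the guard described above; the function name is hypothetical, fprintf stands in for WARN_ON(), and rounding the upper boundary up is this sketch's own assumption (the patch text only mentions rounding down the lower one):

```c
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* A loop that advances by PAGE_SIZE and tests "addr != end" never
 * terminates if end is misaligned; warn and repair the boundaries
 * up front, then the walk is guaranteed to finish. */
int map_range_checked(unsigned long addr, unsigned long end)
{
	int pages = 0;

	if ((addr | end) & ~PAGE_MASK) {
		fprintf(stderr, "improper range parameters\n"); /* WARN_ON() stand-in */
		addr &= PAGE_MASK;			 /* round down lower boundary */
		end = (end + PAGE_SIZE - 1) & PAGE_MASK; /* round up upper (assumed) */
	}
	for (; addr != end; addr += PAGE_SIZE)
		pages++;
	return pages;
}
```

With unaligned input such as [0x1001, 0x3001) the unguarded loop would step past end forever; the repaired range [0x1000, 0x4000) terminates after three pages.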
From: zijun_hu <zijun...@htc.com>
canonicalize macro PAGE_ALIGNED() definition
Signed-off-by: zijun_hu <zijun...@htc.com>
---
include/linux/mm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef815b9..ec
On 09/20/2016 02:54 PM, Nicholas Piggin wrote:
> On Tue, 20 Sep 2016 14:02:26 +0800
> zijun_hu <zijun...@zoho.com> wrote:
>
>> From: zijun_hu <zijun...@htc.com>
>>
>> correct a few logic errors in __insert_vmap_area() since the else if
>> conditi
From: zijun_hu <zijun...@htc.com>
correct a few logic errors in __insert_vmap_area(), since the else-if
condition is always true and meaningless
avoid an endless loop when [un]mapping improper ranges whose boundaries
are not page aligned
correct the lazy_max_pages() return value if the number of online CPUs is a power of 2
From: zijun_hu <zijun...@htc.com>
for ioremap_page_range(), an endless loop may happen if either of the parameters
addr and end is not page aligned; in order to fix this issue and hint at the range
parameter requirements, BUG_ON() checks are performed first
for ioremap_pte_range(), loop end con
From: zijun_hu <zijun...@htc.com>
canonicalize macro PAGE_ALIGNED() definition
Signed-off-by: zijun_hu <zijun...@htc.com>
---
include/linux/mm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef815b9..ec
On 09/03/2016 08:15 PM, Dmitry Vyukov wrote:
> Hello,
>
> While running syzkaller fuzzer I've got the following GPF:
>
> general protection fault: [#1] SMP DEBUG_PAGEALLOC KASAN
> Dumping ftrace buffer:
>(ftrace buffer empty)
> Modules linked in:
> CPU: 2 PID: 4268 Comm: syz-executor
On 09/01/2016 07:21 PM, Mark Rutland wrote:
> On Thu, Sep 01, 2016 at 06:58:29PM +0800, zijun_hu wrote:
>> From: zijun_hu <zijun...@htc.com>
>>
>> regard FDT_SW_MAGIC as good fdt magic during mapping fdt area
>> see fdt_check_header() for details
>
> It loo
From: zijun_hu <zijun...@htc.com>
regard FDT_SW_MAGIC as good fdt magic during mapping fdt area
see fdt_check_header() for details
Signed-off-by: zijun_hu <zijun...@htc.com>
---
arch/arm64/mm/mmu.c | 3 ++-
scripts/dtc/libfdt/fdt.h | 3 ++-
script
From: zijun_hu <zijun...@htc.com>
remove duplicate macro __KERNEL__ check
Signed-off-by: zijun_hu <zijun...@htc.com>
---
arch/arm64/include/asm/processor.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/arm64/include/asm/processor.h
b/arch/arm64/include/asm/proce
I am sorry, this patch has many bugs.
I resent it in another mail thread;
please ignore it.
On 2016/8/27 23:27, zijun_hu wrote:
> From: zijun_hu <zijun...@htc.com>
>
> this patch fixes the following bugs:
>
> - no bootmem is implemented by memblock current
I am sorry, this patch has many bugs.
I resent it in another mail thread;
please ignore it.
On 2016/8/27 23:35, zijun_hu wrote:
> From: zijun_hu <zijun...@htc.com>
>
> in ___alloc_bootmem_node_nopanic(), substitute kzalloc_node()
> for kzalloc() in order to allocate memory
I am sorry, this patch has many bugs.
I resent it in another mail thread;
please ignore it.
On 2016/8/28 15:48, kbuild test robot wrote:
> Hi zijun_hu,
>
> [auto build test ERROR on mmotm/master]
> [also build test ERROR on v4.8-rc3 next-20160825]
> [if your patch is applied to the
From: zijun_hu <zijun...@htc.com>
in ___alloc_bootmem_node_nopanic(), replace kzalloc() with
kzalloc_node() in order to preferentially allocate memory within
the given node when slab is available
Signed-off-by: zijun_hu <zijun...@htc.com>
---
mm/bootmem.c | 14 ++
1 fil
From: zijun_hu <zijun...@htc.com>
this patch fixes the following bugs:
- the same ARCH_LOW_ADDRESS_LIMIT statements are duplicated between the
  header and the relevant source file
- it is not ensured that an ARCH_LOW_ADDRESS_LIMIT possibly defined by the arch in
  asm/processor.h is preferred over the default in linux/bootmem.h
From: zijun_hu <zijun...@htc.com>
in ___alloc_bootmem_node_nopanic(), substitute kzalloc_node()
for kzalloc() in order to preferentially allocate memory within
the given node when slab is available
free_all_bootmem_core() is optimized to make the first two parameters
of __free_pages_bootmem() look consistent
From: zijun_hu <zijun...@htc.com>
this patch fixes the following bugs:
- no bootmem is implemented by memblock currently, but config option
CONFIG_NO_BOOTMEM doesn't depend on CONFIG_HAVE_MEMBLOCK
- the same ARCH_LOW_ADDRESS_LIMIT statements are duplicated between
header and re
From: zijun_hu <zijun...@htc.com>
it causes a doubled alignment requirement for __get_vm_area_node() if the size
parameter is a power of 2 and VM_IOREMAP is set in the flags parameter, for example
size=0x1 -> fls_long(0x1)=17 -> align=0x2
get_count_order_long() is implemented and used instead of fls_long()
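The doubling can be seen directly: for a power-of-two size, 1UL << fls_long(size) is 2 * size, while an exact-order helper gives the size back unchanged. Userland stand-ins for the two kernel helpers (a 64-bit long is assumed):

```c
/* Stand-in for the kernel's fls_long() on a 64-bit long: 1-based
 * position of the highest set bit. */
int fls_long(unsigned long x)
{
	return x ? 64 - __builtin_clzl(x) : 0;
}

/* Smallest n with (1UL << n) >= x, i.e. an exact order for powers of
 * two, in the spirit of get_count_order_long(). */
int get_count_order_long(unsigned long x)
{
	return x == 1 ? 0 : fls_long(x - 1);
}
```

For size 0x20000, fls_long() gives 18 (align 0x40000, doubled), while get_count_order_long() gives 17 (align 0x20000, exact).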
On 08/18/2016 05:01 PM, Peter Zijlstra wrote:
> On Thu, Aug 18, 2016 at 04:19:10PM +0800, zijun_hu wrote:
>> From: zijun_hu <zijun...@htc.com>
>>
>> for LP64 ABI, struct rb_node aligns at 8 bytes boundary due to
>> sizeof(long) == 8 normally, so 0x07 should be
From: zijun_hu <zijun...@htc.com>
for the LP64 ABI, struct rb_node is aligned on an 8-byte boundary since
sizeof(long) == 8 normally, so 0x07 should be used to extract a
node's parent rather than 0x03
the mask is corrected based on normal alignment of struct rb_node
macros are introduced to replac
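Whatever the right mask (mainline rbtree masks with 3, keeping the two low bits for the node color), the pointer-packing technique under discussion looks like this in self-contained userland C:

```c
#include <stdint.h>

/* Pointer-aligned node, after include/linux/rbtree.h: the parent
 * pointer and the node color share one word, because the low bits of
 * an aligned pointer are always zero. */
struct rb_node {
	uintptr_t __rb_parent_color;
	struct rb_node *rb_right;
	struct rb_node *rb_left;
};

enum { RB_RED = 0, RB_BLACK = 1 };

/* Mainline masks with 3: the two low bits hold the color. */
struct rb_node *rb_parent(const struct rb_node *n)
{
	return (struct rb_node *)(n->__rb_parent_color & ~(uintptr_t)3);
}

void rb_set_parent_color(struct rb_node *n, struct rb_node *p, int color)
{
	n->__rb_parent_color = (uintptr_t)p | (uintptr_t)color;
}
```

Packing works as long as the mask covers every bit the color can occupy and the pointer's alignment guarantees those bits are zero in the parent address.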
On 2016/8/18 8:28, Al Viro wrote:
> On Thu, Aug 18, 2016 at 08:10:19AM +0800, zijun_hu wrote:
>
>> Documentation/kbuild/makefiles.txt:
>> The kernel includes a set of headers that is exported to userspace.
>> Many headers can be exported as-is but other headers require a
&
On 2016/8/18 7:59, Al Viro wrote:
> On Thu, Aug 18, 2016 at 07:51:19AM +0800, zijun_hu wrote:
>>> What the hell is anything without __KERNEL__ doing with linux/bitops.h in
>>> the first place? IOW, why do we have those ifdefs at all?
>>>
>>
>> __KERNEL__