From: zijun_hu
canonicalize macro PAGE_ALIGNED() definition
Signed-off-by: zijun_hu
---
include/linux/mm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef815b9..ec68186 100644
--- a/include/linux/mm.h
+++ b/include/linux
From: zijun_hu
for ioremap_page_range(), an endless loop may happen if either of the parameters
addr and end is not page aligned; in order to fix this issue and hint at the range
parameter requirements, BUG_ON() checks are performed first
for ioremap_pte_range(), the loop end condition is optimized due to
From: zijun_hu
correct a few logic errors in __insert_vmap_area(), since the else-if
condition is always true and meaningless
avoid an endless loop when [un]mapping improper ranges whose boundaries
are not page aligned
correct the lazy_max_pages() return value when the number of online CPUs
is a power of
On 09/20/2016 02:54 PM, Nicholas Piggin wrote:
> On Tue, 20 Sep 2016 14:02:26 +0800
> zijun_hu wrote:
>
>> From: zijun_hu
>>
>> correct a few logic error in __insert_vmap_area() since the else if
>> condition is always true and meaningless
>>
>>
From: zijun_hu
an endless loop may happen if either of the parameters addr and end is not
page aligned for the kernel API function ioremap_page_range()
in order to fix this issue and alert the user to improper range parameters,
a WARN_ON() check and rounding down of the range lower boundary are performed
first
From: zijun_hu
correct a few logic errors in __insert_vmap_area(), since the else-if
condition is always true and meaningless
in order to fix this issue, if the vmap_area being inserted is lower than the
one on the rbtree, walk down the left branch; if higher, the right branch;
otherwise it intersects with the
From: zijun_hu
simplify /proc/vmallocinfo implementation via seq_file helpers
for list_head
Signed-off-by: zijun_hu
---
mm/vmalloc.c | 27 +--
1 file changed, 5 insertions(+), 22 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index cc6ecd6..a125ae8 100644
--- a
From: zijun_hu
correct the lazy_max_pages() return value when the number of online
CPUs is a power of 2
Signed-off-by: zijun_hu
---
mm/vmalloc.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a125ae8..2804224 100644
--- a/mm/vmalloc.c
+++ b/mm
From: zijun_hu
improve performance of pcpu_get_vm_areas() in the following aspects
- reduce the iteration count of the vmap_area overlap check loop by half
- find the previous or next vmap_area via list_head rather than the rbtree
Signed-off-by: zijun_hu
---
include/linux/list.h | 11 +++
mm/internal.h
From: zijun_hu
fix the following bug:
- an endless loop may happen when v[un]mapping improper ranges
where either boundary is not page aligned
Signed-off-by: zijun_hu
---
mm/vmalloc.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/vmalloc.c b/mm
Hi All,
please ignore this patch
as advised by Nicholas Piggin, I split this patch into smaller patches
and resent them in another mail thread
On 09/20/2016 02:02 PM, zijun_hu wrote:
> From: zijun_hu
>
> correct a few logic error in __insert_vmap_area() since the else if
> conditi
On 09/20/2016 01:49 PM, zijun_hu wrote:
> From: zijun_hu
>
> for ioremap_page_range(), endless loop maybe happen if either of parameter
> addr and end is not page aligned, in order to fix this issue and hint range
> parameter requirements BUG_ON() checkup are performed f
On 2016/9/22 5:10, David Rientjes wrote:
> On Wed, 21 Sep 2016, zijun_hu wrote:
>
>> From: zijun_hu
>>
>> correct a few logic error for __insert_vmap_area() since the else
>> if condition is always true and meaningless
>>
>> in order to fix this issu
On 2016/9/22 6:45, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>>>> correct a few logic error for __insert_vmap_area() since the else
>>>> if condition is always true and meaningless
>>>>
>>>> in order to fix this issue,
On 2016/9/22 5:16, David Rientjes wrote:
> On Wed, 21 Sep 2016, zijun_hu wrote:
>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index cc6ecd6..a125ae8 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -2576,32 +2576,13 @@ void pcpu_free_vm_areas(st
On 2016/9/22 5:21, David Rientjes wrote:
> On Wed, 21 Sep 2016, zijun_hu wrote:
>
>> From: zijun_hu
>>
>> correct lazy_max_pages() return value if the number of online
>> CPUs is power of 2
>>
>> Signed-off-by: zijun_hu
>> ---
>> mm/vma
On 2016/9/22 7:15, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>>> We don't support inserting when va->va_start == tmp_va->va_end, plain and
>>> simple. There's no reason to do so. NACK to the patch.
>>>
>> i am sorry
On 09/22/2016 08:35 AM, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>> On 2016/9/22 5:21, David Rientjes wrote:
>>> On Wed, 21 Sep 2016, zijun_hu wrote:
>>>
>>>> From: zijun_hu
>>>>
>>>> correct lazy_max_pa
On 09/21/2016 12:23 PM, zijun_hu wrote:
> From: zijun_hu
>
> correct a few logic error for __insert_vmap_area() since the else
> if condition is always true and meaningless
>
> in order to fix this issue, if vmap_area inserted is lower than one
> on rbtree then walk a
On 2016/9/22 20:47, Michal Hocko wrote:
> On Wed 21-09-16 12:19:53, zijun_hu wrote:
>> From: zijun_hu
>>
>> endless loop maybe happen if either of parameter addr and end is not
>> page aligned for kernel API function ioremap_page_range()
>
> Does this happen in
On 2016/9/22 20:37, Michal Hocko wrote:
> On Thu 22-09-16 09:13:50, zijun_hu wrote:
>> On 09/22/2016 08:35 AM, David Rientjes wrote:
> [...]
>>> The intent is as it is implemented; with your change, lazy_max_pages() is
>>> potentially increased depending on the n
On 2016/9/23 11:30, Nicholas Piggin wrote:
> On Fri, 23 Sep 2016 00:30:20 +0800
> zijun_hu wrote:
>
>> On 2016/9/22 20:37, Michal Hocko wrote:
>>> On Thu 22-09-16 09:13:50, zijun_hu wrote:
>>>> On 09/22/2016 08:35 AM, David Rientjes wrote:
>>> [..
On 09/21/2016 12:19 PM, zijun_hu wrote:
> From: zijun_hu
>
> endless loop maybe happen if either of parameter addr and end is not
> page aligned for kernel API function ioremap_page_range()
>
> in order to fix this issue and alert improper range parameters to user
>
On 09/21/2016 12:34 PM, zijun_hu wrote:
> From: zijun_hu
>
> fix the following bug:
> - endless loop maybe happen when v[un]mapping improper ranges
>whose either boundary is not aligned to page
>
> Signed-off-by: zijun_hu
> ---
> mm/vmalloc.c | 9 +++--
>
On 2016/9/23 16:45, Michal Hocko wrote:
> On Thu 22-09-16 23:13:17, zijun_hu wrote:
>> On 2016/9/22 20:47, Michal Hocko wrote:
>>> On Wed 21-09-16 12:19:53, zijun_hu wrote:
>>>> From: zijun_hu
>>>>
>>>> endless loop maybe happen if either
On 09/23/2016 08:42 PM, Michal Hocko wrote:
no, it doesn't work for many special cases
for example, provided PMD_SIZE=2M,
mapping the virtual range [0x1f8800, 0x208800) will be split into the two ranges
[0x1f8800, 0x20) and [0x20, 0x208800), which are mapped separately
the first range
On 2016/9/23 21:33, Michal Hocko wrote:
> On Fri 23-09-16 21:00:18, zijun_hu wrote:
>> On 09/23/2016 08:42 PM, Michal Hocko wrote:
>>>>>> no, it don't work for many special case
>>>>>> for example, provided PMD_SIZE=2M
>>>>>> ma
On 2016/9/23 22:27, Michal Hocko wrote:
> On Fri 23-09-16 22:14:40, zijun_hu wrote:
>> On 2016/9/23 21:33, Michal Hocko wrote:
>>> On Fri 23-09-16 21:00:18, zijun_hu wrote:
>>>> On 09/23/2016 08:42 PM, Michal Hocko wrote:
>>>>>>>> no, it
On 2016/9/23 22:42, Tejun Heo wrote:
> Hello,
>
> On Wed, Sep 21, 2016 at 12:19:53PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> endless loop maybe happen if either of parameter addr and end is not
>> page aligned for kernel API function ioremap_page_range()
From: zijun_hu
simplify the CPU grouping logic in pcpu_build_alloc_info() to improve
readability and performance; it discards the goto statement too
for every possible CPU, decide whether it can share the group id of any
lower-index CPU; use that group id if so, otherwise a new group id
is allocated to
From: zijun_hu
correct max_distance from (base of the highest group + ai->unit_size)
to (base of the highest group + the group size)
Signed-off-by: zijun_hu
---
mm/percpu.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/mm/percpu.c b/mm/percpu.c
in
On 2016/9/24 3:23, Tejun Heo wrote:
> On Sat, Sep 24, 2016 at 02:20:24AM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> correct max_distance from (base of the highest group + ai->unit_size)
>> to (base of the highest group + the group size)
>>
>> Signed-o
From: zijun_hu
it is an error to represent the max range max_distance spanned by all the
group areas as the offset of the highest group area plus the unit size in
pcpu_embed_first_chunk(); it should equal the offset plus the size
of the highest group area
in order to fix this issue, let us find the
On 09/22/2016 07:15 AM, David Rientjes wrote:
> On Thu, 22 Sep 2016, zijun_hu wrote:
>
>>> We don't support inserting when va->va_start == tmp_va->va_end, plain and
>>> simple. There's no reason to do so. NACK to the patch.
>>>
>> i a
From: zijun_hu
macro PAGE_ALIGNED() is prone to cause errors because it doesn't follow the
convention of parenthesizing the parameter @addr within the macro body, for example
unsigned long *ptr = kmalloc(...); PAGE_ALIGNED(ptr + 16);
for the left parameter of macro IS_ALIGNED(), (unsigned long)(ptr + 1
From: zijun_hu
__insert_vmap_area() has a few obvious logic errors, as shown by the comments
within the code segments below
static void __insert_vmap_area(struct vmap_area *va)
{
as an internal function parameter, we assume vmap_area @va has nonzero size
...
if (va->va_start < tmp->
From: zijun_hu
simplify /proc/vmallocinfo implementation via existing seq_file
helpers for list_head
Signed-off-by: zijun_hu
---
Changes in v2:
- more detailed commit message is provided
- the redundant type cast for list_entry() is removed as advised
by rient...@google.com
mm
From 07b9216ec3494515e7a6c41e0333eb8782427db3 Mon Sep 17 00:00:00 2001
From: zijun_hu
Date: Mon, 1 Aug 2016 17:04:59 +0800
Subject: [PATCH] arm64: fix address fault during mapping fdt region
fdt_check_header() accesses fields of the fdt header other than
the first 8 bytes, such as version;
On 08/01/2016 05:50 PM, Ard Biesheuvel wrote:
> On 1 August 2016 at 11:42, zijun_hu wrote:
>> From 07b9216ec3494515e7a6c41e0333eb8782427db3 Mon Sep 17 00:00:00 2001
>> From: zijun_hu
>> Date: Mon, 1 Aug 2016 17:04:59 +0800
>> Subject: [PATCH] arm64: fix address faul
On 08/01/2016 07:24 PM, Mark Rutland wrote:
> On Mon, Aug 01, 2016 at 06:59:50PM +0800, zijun_hu wrote:
>> On 08/01/2016 05:50 PM, Ard Biesheuvel wrote:
>>> On 1 August 2016 at 11:42, zijun_hu wrote:
>>> Couldn't we simply do this instead?
>> this solut
From: zijun_hu
remove duplicate macro __KERNEL__ check
Signed-off-by: zijun_hu
---
arch/arm64/include/asm/processor.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/arm64/include/asm/processor.h
b/arch/arm64/include/asm/processor.h
index ace0a96e7d6e..df2e53d3a969 100644
--- a
From: zijun_hu
regard FDT_SW_MAGIC as a good fdt magic when mapping the fdt area;
see fdt_check_header() for details
Signed-off-by: zijun_hu
---
arch/arm64/mm/mmu.c | 3 ++-
scripts/dtc/libfdt/fdt.h | 3 ++-
scripts/dtc/libfdt/libfdt.h | 2 ++
scripts/dtc
On 09/01/2016 07:21 PM, Mark Rutland wrote:
> On Thu, Sep 01, 2016 at 06:58:29PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> regard FDT_SW_MAGIC as good fdt magic during mapping fdt area
>> see fdt_check_header() for details
>
> It looks like we should only se
On 08/05/2016 05:24 AM, Andrew Morton wrote:
>>
>> it causes double align requirement for __get_vm_area_node() if parameter
>> size is power of 2 and VM_IOREMAP is set in parameter flags
>>
>> it is fixed by handling the specail case manually due to lack of
>> get_count_order() for long parameter
>
> If so, I'm struggling to see the sense in this. Shouldn't we be
> changing things so that
>
> size=0x1: alignment=0x1
> size=0x0f000: alignment=0x1
>
> ?
okay, that is the aim of my patch, as explained abov
From: zijun_hu
pcpu_build_alloc_info() groups CPUs by their relevant proximity
to allocate memory for each percpu unit on a per-group basis.
however, the grouping algorithm actually consists of three loops and a goto
statement, and is inefficient and difficult to understand
the original
On 2016/10/11 20:48, zijun_hu wrote:
> From: zijun_hu
> in order to verify the new algorithm, we enumerate many pairs of type
> @pcpu_fc_cpu_distance_fn_t function and the relevant CPU IDs array such
> below sample, then apply both algorithms to the same pair and print the
> g
From: zijun_hu
as shown by pcpu_build_alloc_info(), the number of units within a percpu
group is deduced by rounding up the number of CPUs within the group to
the @upa boundary; therefore, the number of CPUs isn't equal to the number
of units if it isn't aligned to @upa no
From: zijun_hu
the LSB of a chunk->map element is used as the free/in-use flag of an area
and the other bits for the offset; the necessary and sufficient condition for
this usage is that both the size and alignment of an area must be even numbers
however, pcpu_alloc() doesn't force its @align paramete
From: zijun_hu
as shown by pcpu_setup_first_chunk(), the first chunk is the same as the
reserved chunk if the reserved size is nonzero but the dynamic size is zero;
this special scenario is referred to as the special case in the content below
fix several trivial issues:
1) correct or fix several comments
the
From: zijun_hu
as shown by pcpu_build_alloc_info(), the number of units within a percpu
group is deduced by rounding up the number of CPUs within the group to
the @upa boundary; therefore, the number of CPUs isn't equal to the number
of units if it isn't aligned to @upa no
Hi all,
please ignore this patch since it includes a build error
I resent the fixed patch as v2
I am sorry for my carelessness
On 2016/10/11 21:03, zijun_hu wrote:
> From: zijun_hu
>
> as shown by pcpu_build_alloc_info(), the number of units within a percpu
> group is educed
On 2016/10/12 1:22, Michal Hocko wrote:
> On Tue 11-10-16 21:24:50, zijun_hu wrote:
>> From: zijun_hu
>>
>> the LSB of a chunk->map element is used for free/in-use flag of a area
>> and the other bits for offset, the sufficient and necessary condition of
>&
On 10/12/2016 02:53 PM, Michal Hocko wrote:
> On Wed 12-10-16 08:28:17, zijun_hu wrote:
>> On 2016/10/12 1:22, Michal Hocko wrote:
>>> On Tue 11-10-16 21:24:50, zijun_hu wrote:
>>>> From: zijun_hu
>>>>
>>>> the LSB of a chunk->map element
On 10/12/2016 04:25 PM, Michal Hocko wrote:
> On Wed 12-10-16 15:24:33, zijun_hu wrote:
>> On 10/12/2016 02:53 PM, Michal Hocko wrote:
>>> On Wed 12-10-16 08:28:17, zijun_hu wrote:
>>>> On 2016/10/12 1:22, Michal Hocko wrote:
>>>>> On Tue 11-10-16 21
On 10/12/2016 05:54 PM, Michal Hocko wrote:
> On Wed 12-10-16 16:44:31, zijun_hu wrote:
>> On 10/12/2016 04:25 PM, Michal Hocko wrote:
>>> On Wed 12-10-16 15:24:33, zijun_hu wrote:
> [...]
>>>> i found the following code segments in mm/vmalloc.c
>>>
From: zijun_hu
many seq_file helpers exist for simplifying the implementation of virtual
files, especially for /proc nodes. however, the helpers for iterating over a
list_head are available but aren't currently adopted to implement
/proc/vmallocinfo.
simplify the /proc/vmallocinfo implementati
From: zijun_hu
the KVA allocator organizes allocated vmap_areas in an rbtree. in order to
insert a new vmap_area @i_va into the rbtree, walk the rbtree from the
root and compare each vmap_area @t_va met on the rbtree against @i_va; walk
toward the left branch of @t_va if @i_va is lower than @t_va
On 2016/10/12 22:46, Michal Hocko wrote:
> [Let's CC Nick who has written this code]
>
> On Wed 12-10-16 22:30:13, zijun_hu wrote:
>> From: zijun_hu
>>
>> the KVA allocator organizes vmap_areas allocated by rbtree. in order to
>> insert a new vmap_area
On 10/13/2016 05:41 AM, Andrew Morton wrote:
> On Tue, 11 Oct 2016 22:00:28 +0800 zijun_hu wrote:
>
>> as shown by pcpu_build_alloc_info(), the number of units within a percpu
>> group is educed by rounding up the number of CPUs within the group to
>> @upa boundary, there
Hi Nicholas,
I find that __insert_vmap_area() was introduced by you
could you offer comments on this patch related to that function
thanks
On 10/12/2016 10:46 PM, Michal Hocko wrote:
> [Let's CC Nick who has written this code]
>
> On Wed 12-10-16 22:30:13, zijun_hu wrote:
>
On 08/10/2016 05:28 AM, Andrew Morton wrote:
> On Fri, 5 Aug 2016 23:48:21 +0800 zijun_hu wrote:
>
>> From: zijun_hu
>> Date: Fri, 5 Aug 2016 22:10:07 +0800
>> Subject: [PATCH 1/1] mm/vmalloc: fix align value calculation error
>>
>> it causes double align req
From 5a74cb46b7754a45428ff95f4653ad27025c3131 Mon Sep 17 00:00:00 2001
From: zijun_hu
Date: Tue, 2 Aug 2016 12:35:28 +0800
Subject: [PATCH] mm/memblock.c: fix NULL dereference error
it causes a NULL dereference error and a failure to get type_a->regions[0] info
if parameter type_b of __next_mem_range_rev() is NULL
the b
I am sorry, the second patch is only a test patch, please don't apply it
I will send another mail to correct this
On 08/02/2016 01:23 PM, kbuild test robot wrote:
> Hi zijun_hu,
>
> [auto build test WARNING on mmotm/master]
> [also build test WARNING on v4.7 next-20160801]
&g
this patch is against Linus's mainline
to fix the relevant bugs completely
From 5d79c31d755dc3f03ecc5b4134f21793258636cd Mon Sep 17 00:00:00 2001
From: zijun_hu
Date: Tue, 2 Aug 2016 12:35:28 +0800
Subject: [PATCH] mm/memblock.c: fix NULL dereference error
it causes a NULL dereference error and f
On 08/02/2016 01:03 PM, zijun_hu wrote:
> Hi Andrew,
>
> this patch is part of https://lkml.org/lkml/2016/7/26/347 and isn't merged in
> as you advised in another mail, i release this patch against linus's mainline
> for fixing relevant bugs completely, see test patch
From e40d1066f61394992e0167f259001ae9d2581dc1 Mon Sep 17 00:00:00 2001
From: zijun_hu
Date: Thu, 4 Aug 2016 14:22:52 +0800
Subject: [PATCH] mm/vmalloc: fix align value calculation error
it causes a double alignment requirement for __get_vm_area_node() if parameter
size is a power of 2 and VM_IOREMAP
On 08/04/2016 04:02 PM, zijun_hu wrote:
>>From e40d1066f61394992e0167f259001ae9d2581dc1 Mon Sep 17 00:00:00 2001
> From: zijun_hu
> Date: Thu, 4 Aug 2016 14:22:52 +0800
> Subject: [PATCH] mm/vmalloc: fix align value calculation error
>
> it causes double align requirement f
From: zijun_hu
this patch fixes the following bugs:
- no bootmem is implemented by memblock currently, but config option
CONFIG_NO_BOOTMEM doesn't depend on CONFIG_HAVE_MEMBLOCK
- the same ARCH_LOW_ADDRESS_LIMIT statements are duplicated between
header and relevant source
-
From: zijun_hu
in ___alloc_bootmem_node_nopanic(), substitute kzalloc_node()
for kzalloc() in order to allocate memory within the given node
preferentially when slab is available
free_all_bootmem_core() is optimized to make the first two parameters
of __free_pages_bootmem() look consistent with
From: zijun_hu
this patch fixes the following bugs:
- the same ARCH_LOW_ADDRESS_LIMIT statements are duplicated between
the header and the relevant source
- it doesn't ensure that ARCH_LOW_ADDRESS_LIMIT, perhaps defined by the
ARCH in asm/processor.h, is preferred over the default in linux/bootmem.h
compl
From: zijun_hu
in ___alloc_bootmem_node_nopanic(), replace kzalloc() with
kzalloc_node() in order to allocate memory within the given node
preferentially when slab is available
Signed-off-by: zijun_hu
---
mm/bootmem.c | 14 ++
1 file changed, 2 insertions(+), 12 deletions(-)
diff --git
I am sorry, this patch has many bugs
I resent it in another mail thread
please ignore it
On 2016/8/28 15:48, kbuild test robot wrote:
> Hi zijun_hu,
>
> [auto build test ERROR on mmotm/master]
> [also build test ERROR on v4.8-rc3 next-20160825]
> [if your patch is applied to the
I am sorry, this patch has many bugs
I resent it in another mail thread
please ignore it
On 2016/8/27 23:35, zijun_hu wrote:
> From: zijun_hu
>
> in ___alloc_bootmem_node_nopanic(), substitute kzalloc_node()
> for kzalloc() in order to allocate memory within given node
> pref
I am sorry, this patch has many bugs
I resent it in another mail thread
please ignore it
On 2016/8/27 23:27, zijun_hu wrote:
> From: zijun_hu
>
> this patch fixes the following bugs:
>
> - no bootmem is implemented by memblock currently, but config option
>CONFIG_NO_BOOT
On 2016/8/18 1:20, Al Viro wrote:
> On Tue, Aug 16, 2016 at 03:46:22PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> move out get_count_order[_long]() definitions from scope limited
>> by macro __KERNEL__
>>
>> it not only make both functions available i
On 2016/8/18 7:59, Al Viro wrote:
> On Thu, Aug 18, 2016 at 07:51:19AM +0800, zijun_hu wrote:
>>> What the hell is anything without __KERNEL__ doing with linux/bitops.h in
>>> the first place? IOW, why do we have those ifdefs at all?
>>>
>>
>> __KERNEL__
On 2016/8/18 8:28, Al Viro wrote:
> On Thu, Aug 18, 2016 at 08:10:19AM +0800, zijun_hu wrote:
>
>> Documentation/kbuild/makefiles.txt:
>> The kernel includes a set of headers that is exported to userspace.
>> Many headers can be exported as-is but other headers require a
&
From: zijun_hu
for the LP64 ABI, struct rb_node normally aligns on an 8-byte boundary since
sizeof(long) == 8, so 0x07 should be used to extract a
node's parent rather than 0x03
the mask is corrected based on the normal alignment of struct rb_node, and
macros are introduced to replace the hard-coded numbers
On 08/18/2016 05:01 PM, Peter Zijlstra wrote:
> On Thu, Aug 18, 2016 at 04:19:10PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> for LP64 ABI, struct rb_node aligns at 8 bytes boundary due to
>> sizeof(long) == 8 normally, so 0x07 should be used to extract
>&g
From: zijun_hu
it causes a double alignment requirement for __get_vm_area_node() if parameter
size is a power of 2 and VM_IOREMAP is set in parameter flags, for example
size=0x1 -> fls_long(0x1)=17 -> align=0x2
get_count_order_long() is implemented and used instead of fls_long() for
On 07/17/2017 04:07 PM, Zhaoyang Huang wrote:
> It is no need to find the very beginning of the area within
> alloc_vmap_area, which can be done by judging each node during the process
>
> For current approach, the worst case is that the starting node which be found
> for searching the 'vmap_area_
On 07/17/2017 04:45 PM, zijun_hu wrote:
> On 07/17/2017 04:07 PM, Zhaoyang Huang wrote:
>> It is no need to find the very beginning of the area within
>> alloc_vmap_area, which can be done by judging each node during the process
>>
>> For current approach, the worst cas
On 07/18/2017 04:31 PM, Zhaoyang Huang (黄朝阳) wrote:
>
> It is no need to find the very beginning of the area within
> alloc_vmap_area, which can be done by judging each node during the process
>
it seems the original code was written to achieve the following two purposes:
A, the resulting vmap_area ha
On 07/19/2017 06:44 PM, Zhaoyang Huang wrote:
> /proc/vmallocinfo will not show the area allocated by vm_map_ram, which
> will make confusion when debug. Add vm_struct for them and show them in
> proc.
>
> Signed-off-by: Zhaoyang Huang
> ---
another patch titled "vmalloc: show lazy-purged vma inf
From: zijun_hu
get_cpu_number() doesn't use the existing helper to iterate over possible
CPUs, so an error happens in the case of a discontinuous @cpu_possible_mask
such as 0b0001.
fixed by using the existing helper for_each_possible_cpu().
Signed-off-by: zijun_hu
---
drivers/irqchip/irq-gic-v3.
On 09/15/2017 03:20 AM, Marc Zyngier wrote:
> On Thu, Sep 14 2017 at 1:15:14 pm BST, zijun_hu wrote:
>> From: zijun_hu
>>
>> get_cpu_number() doesn't use existing helper to iterate over possible
>> CPUs, so error happens in case of discontinuous @cpu_possible_mas
From: zijun_hu
get_cpu_number() doesn't use the existing helper to iterate over possible
CPUs; it will cause an error in the case of a discontinuous @cpu_possible_mask
such as 0b0001; such a discontinuous @cpu_possible_mask likely results
because one core has failed to come up on an SMP ma
From: zijun_hu
type bool is used to index three arrays in alloc_and_link_pwqs();
it doesn't look conventional.
this is fixed by using type int to index the relevant arrays.
Signed-off-by: zijun_hu
---
kernel/workqueue.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
On 2017/9/6 22:33, Tejun Heo wrote:
> Hello,
>
> On Wed, Sep 06, 2017 at 11:34:14AM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> type bool is used to index three arrays in alloc_and_link_pwqs()
>> it doesn't look like conventional.
>>
>> i
On 2017/9/7 0:40, Tejun Heo wrote:
> On Thu, Sep 07, 2017 at 12:04:59AM +0800, zijun_hu wrote:
>> On 2017/9/6 22:33, Tejun Heo wrote:
>>> Hello,
>>>
>>> On Wed, Sep 06, 2017 at 11:34:14AM +0800, zijun_hu wrote:
>>>> From: zijun_hu
>
On 2016/10/14 7:37, Tejun Heo wrote:
> Hello, Zijun.
>
> On Tue, Oct 11, 2016 at 08:48:45PM +0800, zijun_hu wrote:
>> compared with the original algorithm theoretically and practically, the
>> new one educes the same grouping results, besides, it is more effective,
>
On 2016/10/14 7:29, Tejun Heo wrote:
> On Tue, Oct 11, 2016 at 10:00:28PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> as shown by pcpu_build_alloc_info(), the number of units within a percpu
>> group is educed by rounding up the number of CPUs within the group to
On 2016/10/14 7:31, Tejun Heo wrote:
> On Tue, Oct 11, 2016 at 09:24:50PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> the LSB of a chunk->map element is used for free/in-use flag of a area
>> and the other bits for offset, the sufficient and necessary condition
On 2016/10/14 8:34, Tejun Heo wrote:
> On Tue, Oct 11, 2016 at 09:29:27PM +0800, zijun_hu wrote:
>> From: zijun_hu
>>
>> as shown by pcpu_setup_first_chunk(), the first chunk is same as the
>> reserved chunk if the reserved size is nonzero but the dynamic is zero
On 2016/10/14 8:28, Tejun Heo wrote:
> Hello,
>
> On Fri, Oct 14, 2016 at 08:23:06AM +0800, zijun_hu wrote:
>> for the current code, only power of 2 alignment value can works well
>>
>> is it acceptable to performing a power of 2 checking and returning error code
>