[v2 1/3] makedumpfile: introduce struct cycle to store the cyclic region

2014-01-23 Thread Baoquan He
The cyclic mode uses two member variables in struct DumpInfo to store
the current cyclic region. Since there is a single global instance of
struct DumpInfo, those two members, cyclic_start_pfn and cyclic_end_pfn,
behave like global variables. Because of this, the code that updates the
cyclic region is coupled with several other functions; this happens
mainly in update_cyclic_region().

struct DumpInfo {
	...
	unsigned long long cyclic_start_pfn;
	unsigned long long cyclic_end_pfn;
	...
};

Now introduce struct cycle, several helper functions, and a macro, as
Hatayama suggested. With these, the pfn region contained in struct cycle
can be passed down to inner functions, and the several actions embedded
in update_cyclic_region() can be decoupled.

struct cycle {
	unsigned long long start_pfn;
	unsigned long long end_pfn;
};

Signed-off-by: Baoquan He b...@redhat.com
---
 makedumpfile.c | 27 +++
 makedumpfile.h |  5 +
 2 files changed, 32 insertions(+)

diff --git a/makedumpfile.c b/makedumpfile.c
index 73467ab..0932b2c 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -37,6 +37,33 @@ struct DumpInfo  *info = NULL;
 
 char filename_stdout[] = FILENAME_STDOUT;
 
+static void first_cycle(unsigned long long start, unsigned long long max,
+			struct cycle *cycle)
+{
+	cycle->start_pfn = round(start, info->pfn_cyclic);
+	cycle->end_pfn = cycle->start_pfn + info->pfn_cyclic;
+
+	if (cycle->end_pfn > max)
+		cycle->end_pfn = max;
+}
+
+static void update_cycle(unsigned long long max, struct cycle *cycle)
+{
+	cycle->start_pfn = cycle->end_pfn;
+	cycle->end_pfn = cycle->start_pfn + info->pfn_cyclic;
+
+	if (cycle->end_pfn > max)
+		cycle->end_pfn = max;
+}
+
+static int end_cycle(unsigned long long max, struct cycle *cycle)
+{
+	return (cycle->start_pfn >= max) ? TRUE : FALSE;
+}
+
+#define for_each_cycle(start, max, C) \
+	for (first_cycle(start, max, C); !end_cycle(max, C); \
+	     update_cycle(max, C))
+
 /*
  * The numbers of the excluded pages
  */
diff --git a/makedumpfile.h b/makedumpfile.h
index 3d270c6..4cf8102 100644
--- a/makedumpfile.h
+++ b/makedumpfile.h
@@ -1590,6 +1590,11 @@ int get_xen_info_ia64(void);
 #define get_xen_info_arch(X) FALSE
 #endif /* s390x */
 
+struct cycle {
+   unsigned long long start_pfn;
+   unsigned long long end_pfn;
+};
+
 static inline int
 is_on(char *bitmap, int i)
 {
-- 
1.8.3.1


___
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec


[v2 0/3] Introduce struct cycle to store cyclic region and clean up

2014-01-23 Thread Baoquan He
v1->v2:
In v1, the start_pfn assignment in first_cycle() was not aligned to
info->pfn_cyclic; v2 changes this. Accordingly, the pfn in
write_elf_header() and write_elf_pages_cyclic() needs to be adjusted to
the real beginning of a load segment, not to cycle->start_pfn, which may
be lower after alignment.


Baoquan He (3):
  makedumpfile: introduce struct cycle to store the cyclic region
  makedumpfile: use struct cycle to update cyclic region and clean up
  makedumpfile: remove member variables representing cyclic pfn region
in struct DumpInfo

 makedumpfile.c | 497 -
 makedumpfile.h |  21 +--
 sadump_info.c  |   4 +-
 3 files changed, 259 insertions(+), 263 deletions(-)

-- 
1.8.3.1




[v2 3/3] makedumpfile: remove member variables representing cyclic pfn region in struct DumpInfo

2014-01-23 Thread Baoquan He
The member variables that store the cyclic pfn region in struct DumpInfo
are no longer needed; remove them.

Signed-off-by: Baoquan He b...@redhat.com
---
 makedumpfile.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/makedumpfile.h b/makedumpfile.h
index 38df2b3..0d5b086 100644
--- a/makedumpfile.h
+++ b/makedumpfile.h
@@ -1056,8 +1056,6 @@ struct DumpInfo {
 */
char   *partial_bitmap1;
char   *partial_bitmap2;
-   unsigned long long cyclic_start_pfn;
-   unsigned long long cyclic_end_pfn;  
unsigned long long num_dumpable;
unsigned long  bufsize_cyclic;
unsigned long  pfn_cyclic;
-- 
1.8.3.1




[v2 2/3] makedumpfile: use struct cycle to update cyclic region and clean up

2014-01-23 Thread Baoquan He
With struct cycle and the relevant helper functions, it is simple to
update only the cyclic region. update_cyclic_region() is broken up and
no longer needed. The cycle containing the cyclic region is passed down
to callees instead of living in a global.

Related cleanup is also done in this patch.

Signed-off-by: Baoquan He b...@redhat.com
---
 makedumpfile.c | 470 ++---
 makedumpfile.h |  14 +-
 sadump_info.c  |   4 +-
 3 files changed, 227 insertions(+), 261 deletions(-)

diff --git a/makedumpfile.c b/makedumpfile.c
index 0932b2c..609e9e2 100644
--- a/makedumpfile.c
+++ b/makedumpfile.c
@@ -206,7 +206,7 @@ is_in_same_page(unsigned long vaddr1, unsigned long vaddr2)
 
 #define BITMAP_SECT_LEN 4096
 static inline int is_dumpable(struct dump_bitmap *, unsigned long long);
-static inline int is_dumpable_cyclic(char *bitmap, unsigned long long);
+static inline int is_dumpable_cyclic(char *bitmap, unsigned long long, struct cycle *cycle);
 unsigned long
 pfn_to_pos(unsigned long long pfn)
 {
@@ -3301,18 +3301,15 @@ set_bitmap(struct dump_bitmap *bitmap, unsigned long long pfn,
 }
 
 int
-set_bitmap_cyclic(char *bitmap, unsigned long long pfn, int val)
+set_bitmap_cyclic(char *bitmap, unsigned long long pfn, int val, struct cycle *cycle)
 {
int byte, bit;
 
-	if (pfn < info->cyclic_start_pfn || info->cyclic_end_pfn <= pfn)
-		return FALSE;
-
/*
 * If val is 0, clear bit on the bitmap.
 */
-	byte = (pfn - info->cyclic_start_pfn)>>3;
-	bit  = (pfn - info->cyclic_start_pfn) & 7;
+	byte = (pfn - cycle->start_pfn)>>3;
+	bit  = (pfn - cycle->start_pfn) & 7;
 	if (val)
 		bitmap[byte] |= 1<<bit;
 	else
 		bitmap[byte] &= ~(1<<bit);
@@ -3361,37 +3358,37 @@ sync_2nd_bitmap(void)
 }
 
 int
-set_bit_on_1st_bitmap(unsigned long long pfn)
+set_bit_on_1st_bitmap(unsigned long long pfn, struct cycle *cycle)
 {
 	if (info->flag_cyclic) {
-		return set_bitmap_cyclic(info->partial_bitmap1, pfn, 1);
+		return set_bitmap_cyclic(info->partial_bitmap1, pfn, 1, cycle);
 	} else {
 		return set_bitmap(info->bitmap1, pfn, 1);
 	}
 }
 
 int
-clear_bit_on_1st_bitmap(unsigned long long pfn)
+clear_bit_on_1st_bitmap(unsigned long long pfn, struct cycle *cycle)
 {
 	if (info->flag_cyclic) {
-		return set_bitmap_cyclic(info->partial_bitmap1, pfn, 0);
+		return set_bitmap_cyclic(info->partial_bitmap1, pfn, 0, cycle);
 	} else {
 		return set_bitmap(info->bitmap1, pfn, 0);
 	}
 }
 
 int
-clear_bit_on_2nd_bitmap(unsigned long long pfn)
+clear_bit_on_2nd_bitmap(unsigned long long pfn, struct cycle *cycle)
 {
 	if (info->flag_cyclic) {
-		return set_bitmap_cyclic(info->partial_bitmap2, pfn, 0);
+		return set_bitmap_cyclic(info->partial_bitmap2, pfn, 0, cycle);
 	} else {
 		return set_bitmap(info->bitmap2, pfn, 0);
 	}
 }
 
 int
-clear_bit_on_2nd_bitmap_for_kernel(unsigned long long pfn)
+clear_bit_on_2nd_bitmap_for_kernel(unsigned long long pfn, struct cycle *cycle)
 {
unsigned long long maddr;
 
@@ -3404,21 +3401,21 @@ clear_bit_on_2nd_bitmap_for_kernel(unsigned long long pfn)
}
pfn = paddr_to_pfn(maddr);
}
-   return clear_bit_on_2nd_bitmap(pfn);
+   return clear_bit_on_2nd_bitmap(pfn, cycle);
 }
 
 int
-set_bit_on_2nd_bitmap(unsigned long long pfn)
+set_bit_on_2nd_bitmap(unsigned long long pfn, struct cycle *cycle)
 {
 	if (info->flag_cyclic) {
-		return set_bitmap_cyclic(info->partial_bitmap2, pfn, 1);
+		return set_bitmap_cyclic(info->partial_bitmap2, pfn, 1, cycle);
 	} else {
 		return set_bitmap(info->bitmap2, pfn, 1);
 	}
 }
 
 int
-set_bit_on_2nd_bitmap_for_kernel(unsigned long long pfn)
+set_bit_on_2nd_bitmap_for_kernel(unsigned long long pfn, struct cycle *cycle)
 {
unsigned long long maddr;
 
@@ -3431,7 +3428,7 @@ set_bit_on_2nd_bitmap_for_kernel(unsigned long long pfn)
}
pfn = paddr_to_pfn(maddr);
}
-   return set_bit_on_2nd_bitmap(pfn);
+   return set_bit_on_2nd_bitmap(pfn, cycle);
 }
 
 static inline int
@@ -3757,7 +3754,7 @@ page_to_pfn(unsigned long page)
 }
 
 int
-reset_bitmap_of_free_pages(unsigned long node_zones)
+reset_bitmap_of_free_pages(unsigned long node_zones, struct cycle *cycle)
 {
 
int order, i, migrate_type, migrate_types;
@@ -3803,7 +3800,7 @@ reset_bitmap_of_free_pages(unsigned long node_zones)
}
 			for (i = 0; i < (1<<order); i++) {
 				pfn = start_pfn + i;
-				if (clear_bit_on_2nd_bitmap_for_kernel(pfn))
+				if (clear_bit_on_2nd_bitmap_for_kernel(pfn, cycle))
 					found_free_pages++;
 

Re: [PATCH 3/3] ARM: allow kernel to be loaded in middle of phymem

2014-01-23 Thread Nicolas Pitre
On Wed, 22 Jan 2014, Wang Nan wrote:

 This patch allows the kernel to be loaded in the middle of kernel-aware
 physical memory. Before this patch, users must use mem= or device tree
 to cheat the kernel about the start address of physical memory.
 
 This feature is useful in some special cases, for example, building a
 crash dump kernel. Without it, the kernel command line, atags and
 device tree must be adjusted carefully, and sometimes it is impossible.

With CONFIG_PATCH_PHYS_VIRT the value for PHYS_OFFSET is determined 
dynamically by rounding down the kernel image start address to the 
previous 16MB boundary.  In the case of a crash kernel, this might be 
cleaner to simply readjust __pv_phys_offset during early boot and call 
fixup_pv_table(), and then reserve away the memory from the previous 
kernel.  That will let you access that memory directly (with gdb for 
example) and no pointer address translation will be required.


 Signed-off-by: Wang Nan wangn...@huawei.com
 Cc: sta...@vger.kernel.org # 3.4+
 Cc: Eric Biederman ebied...@xmission.com
 Cc: Russell King rmk+ker...@arm.linux.org.uk
 Cc: Andrew Morton a...@linux-foundation.org
 Cc: Geng Hui hui.g...@huawei.com
 ---
  arch/arm/mm/init.c | 21 -
  arch/arm/mm/mmu.c  | 13 +
  mm/page_alloc.c|  7 +--
  3 files changed, 38 insertions(+), 3 deletions(-)
 
 diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
 index 3e8f106..4952726 100644
 --- a/arch/arm/mm/init.c
 +++ b/arch/arm/mm/init.c
 @@ -334,9 +334,28 @@ void __init arm_memblock_init(struct meminfo *mi,
  {
   int i;
  
 - for (i = 0; i < mi->nr_banks; i++)
 + for (i = 0; i < mi->nr_banks; i++) {
  	memblock_add(mi->bank[i].start, mi->bank[i].size);
  
 + /*
 +  * In some special cases, for example, building a crash dump
 +  * kernel, we want the kernel to be loaded in the middle of
 +  * physical memory. In such cases, the physical memory before
 +  * PHYS_OFFSET is awkward: it can't be directly mapped (because
 +  * its address would be smaller than PAGE_OFFSET and disturb
 +  * user address space), and it also can't be mapped as HighMem.
 +  * We reserve such pages here. The only way to access those
 +  * pages is ioremap.
 +  */
 + if (mi->bank[i].start < PHYS_OFFSET) {
 + 	unsigned long reserv_size = PHYS_OFFSET -
 + 		mi->bank[i].start;
 + 	if (reserv_size > mi->bank[i].size)
 + 		reserv_size = mi->bank[i].size;
 + 	memblock_reserve(mi->bank[i].start, reserv_size);
 + }
 + }
 +
   /* Register the kernel text, kernel data and initrd with memblock. */
  #ifdef CONFIG_XIP_KERNEL
   memblock_reserve(__pa(_sdata), _end - _sdata);
 diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
 index 580ef2d..2a17c24 100644
 --- a/arch/arm/mm/mmu.c
 +++ b/arch/arm/mm/mmu.c
 @@ -1308,6 +1308,19 @@ static void __init map_lowmem(void)
  	if (start >= end)
   break;
  
 + /*
 +  * If this memblock contains memory before PAGE_OFFSET, that
 +  * memory shouldn't get directly mapped; see the code in
 +  * create_mapping(). However, memory after PAGE_OFFSET is
 +  * occupied by the kernel and still needs to be mapped.
 +  */
 + if (__phys_to_virt(start) < PAGE_OFFSET) {
 + 	if (__phys_to_virt(end) > PAGE_OFFSET)
 + 		start = __virt_to_phys(PAGE_OFFSET);
 + 	else
 + 		break;
 + }
 +
   map.pfn = __phys_to_pfn(start);
   map.virtual = __phys_to_virt(start);
   map.length = end - start;
 diff --git a/mm/page_alloc.c b/mm/page_alloc.c
 index 5248fe0..d2959e3 100644
 --- a/mm/page_alloc.c
 +++ b/mm/page_alloc.c
 @@ -4840,10 +4840,13 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat)
*/
  	if (pgdat == NODE_DATA(0)) {
  		mem_map = NODE_DATA(0)->node_mem_map;
 -#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
 +	/*
 +	 * In case of CONFIG_HAVE_MEMBLOCK_NODE_MAP, or when the kernel
 +	 * is loaded in the middle of physical memory, mem_map should
 +	 * be adjusted.
 +	 */
  		if (page_to_pfn(mem_map) != pgdat->node_start_pfn)
  			mem_map -= (pgdat->node_start_pfn - ARCH_PFN_OFFSET);
 -#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
  	}
  #endif
  #endif /* CONFIG_FLAT_NODE_MEM_MAP */
  #endif
  #endif /* CONFIG_FLAT_NODE_MEM_MAP */
 -- 
 1.8.4
 
 --
 To unsubscribe from this list: send the line unsubscribe linux-kernel in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
 Please read the FAQ at  http://www.tux.org/lkml/
 


Re: [PATCH 3/3] ARM: allow kernel to be loaded in middle of phymem

2014-01-23 Thread Russell King - ARM Linux
On Thu, Jan 23, 2014 at 02:15:07PM -0500, Nicolas Pitre wrote:
 On Wed, 22 Jan 2014, Wang Nan wrote:
 
  This patch allows the kernel to be loaded in the middle of kernel-aware
  physical memory. Before this patch, users must use mem= or device tree
  to cheat the kernel about the start address of physical memory.
  
  This feature is useful in some special cases, for example, building a
  crash dump kernel. Without it, the kernel command line, atags and
  device tree must be adjusted carefully, and sometimes it is impossible.
 
 With CONFIG_PATCH_PHYS_VIRT the value for PHYS_OFFSET is determined 
 dynamically by rounding down the kernel image start address to the 
 previous 16MB boundary.  In the case of a crash kernel, this might be 
 cleaner to simply readjust __pv_phys_offset during early boot and call 
 fixup_pv_table(), and then reserve away the memory from the previous 
 kernel.  That will let you access that memory directly (with gdb for 
 example) and no pointer address translation will be required.

We already have support in the kernel to ignore memory below the calculated
PHYS_OFFSET.  See 571b14375019c3a66ef70d4d4a7083f4238aca30.

-- 
FTTC broadband for 0.8mile line: 5.8Mbps down 500kbps up.  Estimation
in database were 13.1 to 19Mbit for a good line, about 7.5+ for a bad.
Estimate before purchase was up to 13.2Mbit.



Re: [PATCH 3/3] ARM: allow kernel to be loaded in middle of phymem

2014-01-23 Thread Nicolas Pitre
On Thu, 23 Jan 2014, Russell King - ARM Linux wrote:

 On Thu, Jan 23, 2014 at 02:15:07PM -0500, Nicolas Pitre wrote:
  On Wed, 22 Jan 2014, Wang Nan wrote:
  
   This patch allows the kernel to be loaded in the middle of kernel-aware
   physical memory. Before this patch, users must use mem= or device tree
   to cheat the kernel about the start address of physical memory.
   
   This feature is useful in some special cases, for example, building a
   crash dump kernel. Without it, the kernel command line, atags and
   device tree must be adjusted carefully, and sometimes it is impossible.
  
  With CONFIG_PATCH_PHYS_VIRT the value for PHYS_OFFSET is determined 
  dynamically by rounding down the kernel image start address to the 
  previous 16MB boundary.  In the case of a crash kernel, this might be 
  cleaner to simply readjust __pv_phys_offset during early boot and call 
  fixup_pv_table(), and then reserve away the memory from the previous 
  kernel.  That will let you access that memory directly (with gdb for 
  example) and no pointer address translation will be required.
 
 We already have support in the kernel to ignore memory below the calculated
 PHYS_OFFSET.  See 571b14375019c3a66ef70d4d4a7083f4238aca30.

Sure.  Anyway, what I'm suggesting above would require that the crash 
kernel be linked at a different virtual address for that to work.  
That's probably more trouble than simply mapping the otherwise still 
unmapped memory from the crashed kernel.


Nicolas
