Re: [PATCH -v3 00/14] x86, mm: init_memory_mapping cleanup
On Thu, Sep 13, 2012 at 8:00 AM, Jacob Shin wrote:
> On Wed, Sep 05, 2012 at 03:08:15PM -0500, Jacob Shin wrote:
>> On Tue, Sep 04, 2012 at 10:46:17PM -0700, Yinghai Lu wrote:
>> > Only create mappings for E820_RAM and E820_RESERVED_KERN ranges.
>> >
>> > Separate calculate_table_space_size and find_early_page_table out from
>> > init_memory_mapping.
>> >
>> > Page tables are allocated once for all ranges, but mappings are
>> > initialized only for E820_RAM and E820_RESERVED_KERN.
>> >
>> > Could be found at:
>> > git://git.kernel.org/pub/scm/linux/kernel/git/yinghai/linux-yinghai.git
>> > for-x86-mm
>> >
>> > Thanks
>> > Yinghai
>> >
>> > Jacob Shin (4):
>> >   x86: if kernel .text .data .bss are not marked as E820_RAM, complain and fix
>> >   x86: Fixup code testing if a pfn is direct mapped
>> >   x86: Only direct map addresses that are marked as E820_RAM
>> >   x86/mm: calculate_table_space_size based on memory ranges that are being mapped
>> >
>> > Yinghai Lu (10):
>> >   x86, mm: Add global page_size_mask
>> >   x86, mm: Split out split_mem_range
>> >   x86, mm: Moving init_memory_mapping calling
>> >   x86, mm: Revert back good_end setting for 64bit
>> >   x86, mm: Find early page table only one time
>> >   x86, mm: Separate out calculate_table_space_size()
>> >   x86, mm: Move down two calculate_table_space_size down.
>> >   x86, mm: set memblock initial limit to 1M
>> >   x86, mm: Use func pointer to table size calculation and mapping
>> >   x86, mm: Map ISA area with connected ram range at the same time
>> >
>> >  arch/x86/include/asm/init.h       |    1 -
>> >  arch/x86/include/asm/page_types.h |    2 +
>> >  arch/x86/include/asm/pgtable.h    |    1 +
>> >  arch/x86/kernel/cpu/amd.c         |    8 +-
>> >  arch/x86/kernel/setup.c           |   36 +++--
>> >  arch/x86/mm/init.c                |  357 +
>> >  arch/x86/mm/init_64.c             |    6 +-
>> >  arch/x86/platform/efi/efi.c       |    8 +-
>> >  8 files changed, 280 insertions(+), 139 deletions(-)
>> >
>> > --
>> > 1.7.7
>>
>> Tested -v3 on our (AMD) machines and everything looks good.
>
> Hi, hpa, wondering if this version finally looks okay to you for 3.7?

Can you please put patches 1-13 into tip?

Thanks

Yinghai
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH -v3 00/14] x86, mm: init_memory_mapping cleanup
On Wed, Sep 05, 2012 at 03:08:15PM -0500, Jacob Shin wrote:
> On Tue, Sep 04, 2012 at 10:46:17PM -0700, Yinghai Lu wrote:
> > Only create mappings for E820_RAM and E820_RESERVED_KERN ranges.
> >
> > Separate calculate_table_space_size and find_early_page_table out from
> > init_memory_mapping.
> >
> > Page tables are allocated once for all ranges, but mappings are
> > initialized only for E820_RAM and E820_RESERVED_KERN.
> >
> > Could be found at:
> > git://git.kernel.org/pub/scm/linux/kernel/git/yinghai/linux-yinghai.git
> > for-x86-mm
> >
> > Thanks
> > Yinghai
> >
> > Jacob Shin (4):
> >   x86: if kernel .text .data .bss are not marked as E820_RAM, complain and fix
> >   x86: Fixup code testing if a pfn is direct mapped
> >   x86: Only direct map addresses that are marked as E820_RAM
> >   x86/mm: calculate_table_space_size based on memory ranges that are being mapped
> >
> > Yinghai Lu (10):
> >   x86, mm: Add global page_size_mask
> >   x86, mm: Split out split_mem_range
> >   x86, mm: Moving init_memory_mapping calling
> >   x86, mm: Revert back good_end setting for 64bit
> >   x86, mm: Find early page table only one time
> >   x86, mm: Separate out calculate_table_space_size()
> >   x86, mm: Move down two calculate_table_space_size down.
> >   x86, mm: set memblock initial limit to 1M
> >   x86, mm: Use func pointer to table size calculation and mapping
> >   x86, mm: Map ISA area with connected ram range at the same time
> >
> >  arch/x86/include/asm/init.h       |    1 -
> >  arch/x86/include/asm/page_types.h |    2 +
> >  arch/x86/include/asm/pgtable.h    |    1 +
> >  arch/x86/kernel/cpu/amd.c         |    8 +-
> >  arch/x86/kernel/setup.c           |   36 +++--
> >  arch/x86/mm/init.c                |  357 +
> >  arch/x86/mm/init_64.c             |    6 +-
> >  arch/x86/platform/efi/efi.c       |    8 +-
> >  8 files changed, 280 insertions(+), 139 deletions(-)
> >
> > --
> > 1.7.7
>
> Tested -v3 on our (AMD) machines and everything looks good.

Hi, hpa, wondering if this version finally looks okay to you for 3.7?

Thanks,
-Jacob
Re: [PATCH -v3 00/14] x86, mm: init_memory_mapping cleanup
On Tue, Sep 04, 2012 at 10:46:17PM -0700, Yinghai Lu wrote:
> Only create mappings for E820_RAM and E820_RESERVED_KERN ranges.
>
> Separate calculate_table_space_size and find_early_page_table out from
> init_memory_mapping.
>
> Page tables are allocated once for all ranges, but mappings are
> initialized only for E820_RAM and E820_RESERVED_KERN.
>
> Could be found at:
> git://git.kernel.org/pub/scm/linux/kernel/git/yinghai/linux-yinghai.git
> for-x86-mm
>
> Thanks
> Yinghai
>
> Jacob Shin (4):
>   x86: if kernel .text .data .bss are not marked as E820_RAM, complain and fix
>   x86: Fixup code testing if a pfn is direct mapped
>   x86: Only direct map addresses that are marked as E820_RAM
>   x86/mm: calculate_table_space_size based on memory ranges that are being mapped
>
> Yinghai Lu (10):
>   x86, mm: Add global page_size_mask
>   x86, mm: Split out split_mem_range
>   x86, mm: Moving init_memory_mapping calling
>   x86, mm: Revert back good_end setting for 64bit
>   x86, mm: Find early page table only one time
>   x86, mm: Separate out calculate_table_space_size()
>   x86, mm: Move down two calculate_table_space_size down.
>   x86, mm: set memblock initial limit to 1M
>   x86, mm: Use func pointer to table size calculation and mapping
>   x86, mm: Map ISA area with connected ram range at the same time
>
>  arch/x86/include/asm/init.h       |    1 -
>  arch/x86/include/asm/page_types.h |    2 +
>  arch/x86/include/asm/pgtable.h    |    1 +
>  arch/x86/kernel/cpu/amd.c         |    8 +-
>  arch/x86/kernel/setup.c           |   36 +++--
>  arch/x86/mm/init.c                |  357 +
>  arch/x86/mm/init_64.c             |    6 +-
>  arch/x86/platform/efi/efi.c       |    8 +-
>  8 files changed, 280 insertions(+), 139 deletions(-)
>
> --
> 1.7.7

Tested -v3 on our (AMD) machines and everything looks good.

Thanks,
-Jacob
[PATCH -v3 00/14] x86, mm: init_memory_mapping cleanup
Only create mappings for E820_RAM and E820_RESERVED_KERN ranges.

Separate calculate_table_space_size and find_early_page_table out from
init_memory_mapping.

Page tables are allocated once for all ranges, but mappings are
initialized only for E820_RAM and E820_RESERVED_KERN.

Could be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/yinghai/linux-yinghai.git
for-x86-mm

Thanks
Yinghai

Jacob Shin (4):
  x86: if kernel .text .data .bss are not marked as E820_RAM, complain and fix
  x86: Fixup code testing if a pfn is direct mapped
  x86: Only direct map addresses that are marked as E820_RAM
  x86/mm: calculate_table_space_size based on memory ranges that are being mapped

Yinghai Lu (10):
  x86, mm: Add global page_size_mask
  x86, mm: Split out split_mem_range
  x86, mm: Moving init_memory_mapping calling
  x86, mm: Revert back good_end setting for 64bit
  x86, mm: Find early page table only one time
  x86, mm: Separate out calculate_table_space_size()
  x86, mm: Move down two calculate_table_space_size down.
  x86, mm: set memblock initial limit to 1M
  x86, mm: Use func pointer to table size calculation and mapping
  x86, mm: Map ISA area with connected ram range at the same time

 arch/x86/include/asm/init.h       |    1 -
 arch/x86/include/asm/page_types.h |    2 +
 arch/x86/include/asm/pgtable.h    |    1 +
 arch/x86/kernel/cpu/amd.c         |    8 +-
 arch/x86/kernel/setup.c           |   36 +++--
 arch/x86/mm/init.c                |  357 +
 arch/x86/mm/init_64.c             |    6 +-
 arch/x86/platform/efi/efi.c       |    8 +-
 8 files changed, 280 insertions(+), 139 deletions(-)

--
1.7.7