Re: [PATCH v8 12/46] x86, mm: use pfn_range_is_mapped() with CPA

2012-11-28 Thread Yinghai Lu
On Wed, Nov 28, 2012 at 9:06 AM, Konrad Rzeszutek Wilk wrote:
> On Fri, Nov 16, 2012 at 07:38:49PM -0800, Yinghai Lu wrote:
>> We are going to map ram only, so under max_low_pfn_mapped,
>> between 4g and max_pfn_mapped does not mean mapped at all.
>
> I think I know what you are saying but I am having a hard
> time parsing it. Is this what you mean?
>
> "We check to see if the PFNs are under max_low_pfn_mapped or
> between 4G and max_pfn_mapped. If they are not: then we
> map them."  ?

No

---
We are going to map ram only in the patch:
x86, mm: Only direct map addresses that are marked as E820_RAM

so ranges under max_low_pfn_mapped and ranges between 4g and max_pfn_mapped
could have holes in them, and the holes will not be mapped.

Use pfn_range_is_mapped() to check if the range is mapped.
---
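
For reference, a minimal user-space sketch of the semantics relied on here,
assuming a table of direct-mapped pfn ranges along the lines of the
pfn_mapped[]/nr_pfn_mapped tracking added earlier in the series (the names
and the half-open [start_pfn, end_pfn) convention are assumptions here, not
the exact kernel code):

#include <stdbool.h>

struct pfn_range {
	unsigned long start;	/* first mapped pfn (inclusive) */
	unsigned long end;	/* one past the last mapped pfn */
};

/* Ranges are recorded as they get direct-mapped; holes are simply never
 * recorded, so a pfn below max_low_pfn_mapped that sits in a hole now
 * reports "not mapped". */
static struct pfn_range pfn_mapped[16];
static int nr_pfn_mapped;

/* True only if [start_pfn, end_pfn) lies entirely inside one of the
 * recorded mapped ranges. */
static bool pfn_range_is_mapped(unsigned long start_pfn,
				unsigned long end_pfn)
{
	int i;

	for (i = 0; i < nr_pfn_mapped; i++)
		if (start_pfn >= pfn_mapped[i].start &&
		    end_pfn <= pfn_mapped[i].end)
			return true;

	return false;
}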


Re: [PATCH v8 12/46] x86, mm: use pfn_range_is_mapped() with CPA

2012-11-28 Thread Konrad Rzeszutek Wilk
On Fri, Nov 16, 2012 at 07:38:49PM -0800, Yinghai Lu wrote:
> We are going to map ram only, so under max_low_pfn_mapped,
> between 4g and max_pfn_mapped does not mean mapped at all.

I think I know what you are saying but I am having a hard
time parsing it. Is this what you mean?

"We check to see if the PFNs are under max_low_pfn_mapped or
between 4G and max_pfn_mapped. If they are not: then we
map them."  ?


> 
> Use pfn_range_is_mapped() directly.
> 
> Signed-off-by: Yinghai Lu 
> ---
>  arch/x86/mm/pageattr.c |   16 +++-
>  1 files changed, 3 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
> index a718e0d..44acfcd 100644
> --- a/arch/x86/mm/pageattr.c
> +++ b/arch/x86/mm/pageattr.c
> @@ -551,16 +551,10 @@ static int split_large_page(pte_t *kpte, unsigned long address)
>   for (i = 0; i < PTRS_PER_PTE; i++, pfn += pfninc)
>   set_pte(&pbase[i], pfn_pte(pfn, ref_prot));
>  
> - if (address >= (unsigned long)__va(0) &&
> - address < (unsigned long)__va(max_low_pfn_mapped << PAGE_SHIFT))
> + if (pfn_range_is_mapped(PFN_DOWN(__pa(address)),
> + PFN_DOWN(__pa(address)) + 1))
>   split_page_count(level);
>  
> -#ifdef CONFIG_X86_64
> - if (address >= (unsigned long)__va(1UL<<32) &&
> - address < (unsigned long)__va(max_pfn_mapped << PAGE_SHIFT))
> - split_page_count(level);
> -#endif
> -
>   /*
>* Install the new, split up pagetable.
>*
> @@ -729,13 +723,9 @@ static int cpa_process_alias(struct cpa_data *cpa)
>   unsigned long vaddr;
>   int ret;
>  
> - if (cpa->pfn >= max_pfn_mapped)
> + if (!pfn_range_is_mapped(cpa->pfn, cpa->pfn + 1))
>   return 0;
>  
> -#ifdef CONFIG_X86_64
> - if (cpa->pfn >= max_low_pfn_mapped && cpa->pfn < (1UL<<(32-PAGE_SHIFT)))
> - return 0;
> -#endif
>   /*
>* No need to redo, when the primary call touched the direct
>* mapping already:
> -- 
> 1.7.7


[PATCH v8 12/46] x86, mm: use pfn_range_is_mapped() with CPA

2012-11-16 Thread Yinghai Lu
We are going to map ram only, so under max_low_pfn_mapped,
between 4g and max_pfn_mapped does not mean mapped at all.

Use pfn_range_is_mapped() directly.

Signed-off-by: Yinghai Lu 
---
 arch/x86/mm/pageattr.c |   16 +++-
 1 files changed, 3 insertions(+), 13 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index a718e0d..44acfcd 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -551,16 +551,10 @@ static int split_large_page(pte_t *kpte, unsigned long address)
for (i = 0; i < PTRS_PER_PTE; i++, pfn += pfninc)
        set_pte(&pbase[i], pfn_pte(pfn, ref_prot));
 
-   if (address >= (unsigned long)__va(0) &&
-   address < (unsigned long)__va(max_low_pfn_mapped << PAGE_SHIFT))
+   if (pfn_range_is_mapped(PFN_DOWN(__pa(address)),
+   PFN_DOWN(__pa(address)) + 1))
split_page_count(level);
 
-#ifdef CONFIG_X86_64
-   if (address >= (unsigned long)__va(1UL<<32) &&
-   address < (unsigned long)__va(max_pfn_mapped << PAGE_SHIFT))
-   split_page_count(level);
-#endif
-
/*
 * Install the new, split up pagetable.
 *
@@ -729,13 +723,9 @@ static int cpa_process_alias(struct cpa_data *cpa)
unsigned long vaddr;
int ret;
 
-   if (cpa->pfn >= max_pfn_mapped)
+   if (!pfn_range_is_mapped(cpa->pfn, cpa->pfn + 1))
return 0;
 
-#ifdef CONFIG_X86_64
-   if (cpa->pfn >= max_low_pfn_mapped && cpa->pfn < (1UL<<(32-PAGE_SHIFT)))
-   return 0;
-#endif
/*
 * No need to redo, when the primary call touched the direct
 * mapping already:
-- 
1.7.7
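
Before this patch, "mapped" was inferred purely from limits: everything below
max_low_pfn_mapped, plus (on 64-bit) everything between 4G and max_pfn_mapped,
was assumed to be in the direct mapping, so a hole inside those limits still
passed the check. An illustrative comparison, reusing the pfn_range_is_mapped()
sketch from earlier in the thread (old_check()/new_check() are made-up helper
names, and the declarations exist only so the fragment reads standalone):

#define PAGE_SHIFT 12				/* x86 page size: 4K */
extern unsigned long max_low_pfn_mapped;	/* kernel-provided limits */
extern unsigned long max_pfn_mapped;

/* Old heuristic (64-bit variant): limits only, so a hole below the
 * limit is still treated as mapped. */
static bool old_check(unsigned long pfn)
{
	return pfn < max_low_pfn_mapped ||
	       (pfn >= (1UL << (32 - PAGE_SHIFT)) && pfn < max_pfn_mapped);
}

/* New check: consult the recorded ranges, so holes are excluded. */
static bool new_check(unsigned long pfn)
{
	return pfn_range_is_mapped(pfn, pfn + 1);
}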
