On Wed, 2015-03-04 at 23:09 +0100, Ingo Molnar wrote:
> * Toshi Kani toshi.k...@hp.com wrote:
> 
> > ioremap_pud_range() and ioremap_pmd_range() are changed to create
> > huge I/O mappings when their capability is enabled, and a request
> > meets required conditions -- both virtual & physical addresses are
> > aligned by their huge page size, and a requested range fulfills
> > their huge page size.
> 
> Hm, so I don't see where you set the proper x86 PAT table attributes
> for the pmds.
> 
> MTRRs are basically a legacy mechanism; the proper way to set cache
> attributes is PAT, and I don't see where this generic code does that.