On Fri, Apr 05, 2013 at 05:17:16PM -0400, KOSAKI Motohiro wrote:
> (3/22/13 4:23 PM), Naoya Horiguchi wrote:
> > This patch extends check_range() to handle vma with VM_HUGETLB set.
> > We will be able to migrate hugepage with migrate_pages(2) after
> > applying the enablement patch which comes later in this series.
> >
> > Note that for larger hugepages (covered by pud entries, 1GB for
> > x86_64 for example), we simply skip
On Tue 26-03-13 01:13:10, Naoya Horiguchi wrote:
> On Mon, Mar 25, 2013 at 02:04:16PM +0100, Michal Hocko wrote:
> > On Fri 22-03-13 16:23:50, Naoya Horiguchi wrote:
[...]
> > > @@ -1012,14 +1040,8 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,
> > >
> > >  	check_range(mm,
On Mon, Mar 25, 2013 at 02:04:16PM +0100, Michal Hocko wrote:
> On Fri 22-03-13 16:23:50, Naoya Horiguchi wrote:
> [...]
> > @@ -523,6 +544,11 @@ static inline int check_pmd_range(struct vm_area_struct *vma, pud_t *pud,
> >  	pmd = pmd_offset(pud, addr);
> >  	do {
> >  		next = pmd_addr_end(addr, end);
> > +		if (pmd_huge(*pmd) &&
This patch extends check_range() to handle vma with VM_HUGETLB set.
We will be able to migrate hugepage with migrate_pages(2) after
applying the enablement patch which comes later in this series.
Note that for larger hugepages (covered by pud entries, 1GB for
x86_64 for example), we simply skip