Christoph Lameter wrote:
> The same can be done using the virtual->physical mappings that exist on
> many platforms for the kernel address space (ia64 dynamically calculates
> those, x86_64 uses a page table with 2M pages for mapping the kernel).
Yes, that's basically what Xen does - there's a n
On Thu, 7 Dec 2006, Jeremy Fitzhardinge wrote:
> You can also deal with memory hotplug by adding a Xen-style
> pseudo-physical vs machine address abstraction. This doesn't help with
> making space for contiguous allocations, but it does allow you to move
> "physical" pages from one machine page t
Christoph Lameter wrote:
> On Wed, 6 Dec 2006, Mel Gorman wrote:
>
>> Objective: Get contiguous block of free pages
>> Required: Pages that can move
>> Move means: Migrating them or reclaiming
>> How we do it for high-order allocations: Take a page from the LRU, move
>> the pages within that high-order block
On Wed, 6 Dec 2006, Mel Gorman wrote:
> Objective: Get contiguous block of free pages
> Required: Pages that can move
> Move means: Migrating them or reclaiming
> How we do it for high-order allocations: Take a page from the LRU, move
> the pages within that high-order block
> How we do it fo
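As a rough illustration of the "take a page from the LRU, then deal with the
rest of its high-order block" step (not the actual patches; MAX_ORDER_NR_PAGES
here and both helpers are stand-ins):

/*
 * Every page in the MAX_ORDER-aligned block around the target must be
 * movable or reclaimable, otherwise the block can never become a
 * contiguous free area.
 */
#define MAX_ORDER_NR_PAGES (1UL << 10)   /* illustrative block size */

struct page;                             /* opaque here */

/* hypothetical helpers; try_to_move_or_reclaim() returns 0 on success */
extern struct page *pfn_to_page_stub(unsigned long pfn);
extern int try_to_move_or_reclaim(struct page *page);

static int free_block_around(unsigned long target_pfn)
{
        unsigned long base = target_pfn & ~(MAX_ORDER_NR_PAGES - 1);
        unsigned long pfn;

        for (pfn = base; pfn < base + MAX_ORDER_NR_PAGES; pfn++) {
                if (try_to_move_or_reclaim(pfn_to_page_stub(pfn)))
                        return -1;       /* one pinned page spoils the block */
        }
        return 0;
}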
Peter Zijlstra wrote:
On Mon, 2006-12-04 at 11:30 -0800, Andrew Morton wrote:
I'd also like to pin down the situation with lumpy-reclaim versus
anti-fragmentation. No offence, but I would of course prefer to avoid
merging the anti-frag patches simply based on their stupendous size. It
seems to me that lumpy-reclaim is suitable for the e1000 problem, but perhaps
not for the hugetlbpage problem.
On Tue, 5 Dec 2006, Christoph Lameter wrote:
On Tue, 5 Dec 2006, Mel Gorman wrote:
There are times you want to reclaim just part of a zone - specifically
satisfying high-order allocations. See situations 1 and 2 from elsewhere
in this thread. On a similar vein, there will be times when you want to
migrate a PFN range for similar reasons.
On Tue, 5 Dec 2006, Mel Gorman wrote:
> There are times you want to reclaim just part of a zone - specifically
> satisfying high-order allocations. See situations 1 and 2 from elsewhere
> in this thread. On a similar vein, there will be times when you want to
> migrate a PFN range for similar reasons.
On (05/12/06 12:01), Christoph Lameter didst pronounce:
> On Tue, 5 Dec 2006, Andrew Morton wrote:
>
> > > We always run reclaim against the whole zone not against parts. Why
> > > would we start running reclaim against a portion of a zone?
> >
> > Oh for gawd's sake.
>
> Yes indeed. Another failure to answer a simple question.
On Tue, 5 Dec 2006, Andrew Morton wrote:
> On Tue, 5 Dec 2006 08:05:16 -0800 (PST)
> Christoph Lameter <[EMAIL PROTECTED]> wrote:
>
> > On Tue, 5 Dec 2006, Mel Gorman wrote:
> >
> > > That is one possibility. There are people working on fake nodes for
> > > containers
> > > at the moment. If that pans out, the infrastructure would be available to
> > > create one node per DIMM.
On Tue, 5 Dec 2006, Andrew Morton wrote:
> > We always run reclaim against the whole zone not against parts. Why
> > would we start running reclaim against a portion of a zone?
>
> Oh for gawd's sake.
Yes indeed. Another failure to answer a simple question.
> If you want to allocate a page fr
On Tue, 5 Dec 2006, Mel Gorman wrote:
> Portions of it sure, but to offline the DIMM, all pages must be removed from
> it. To guarantee the offlining, that means only __GFP_MOVABLE allocations
> are allowed within that area and a zone is the easiest way to do it.
We were talking about the memory
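A minimal sketch of the zone-based enforcement described in the quoted text,
assuming a dedicated zone covers the offlinable DIMM; the zone names, bit
values and pick_zone() are illustrative, not kernel code:

typedef unsigned int gfp_t;

#define __GFP_HIGHMEM  0x02u             /* illustrative bit values */
#define __GFP_MOVABLE  0x08u

enum example_zone {
        EXAMPLE_ZONE_NORMAL,
        EXAMPLE_ZONE_HIGHMEM,
        EXAMPLE_ZONE_MOVABLE,            /* backs the offlinable DIMM */
};

/*
 * Only allocations that promised to be movable may land in the
 * offlinable region, so the region can always be vacated later.
 */
static enum example_zone pick_zone(gfp_t flags)
{
        if (flags & __GFP_MOVABLE)
                return EXAMPLE_ZONE_MOVABLE;
        if (flags & __GFP_HIGHMEM)
                return EXAMPLE_ZONE_HIGHMEM;
        return EXAMPLE_ZONE_NORMAL;
}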
On Tue, 5 Dec 2006 08:00:39 -0800 (PST)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Mon, 4 Dec 2006, Andrew Morton wrote:
>
> > > > What happens when we need to run reclaim against just a section of a
> > > > zone?
> > > > Lumpy-reclaim could be used here; perhaps that's Mel's approach too
On Tue, 5 Dec 2006 08:05:16 -0800 (PST)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Tue, 5 Dec 2006, Mel Gorman wrote:
>
> > That is one possibility. There are people working on fake nodes for
> > containers
> > at the moment. If that pans out, the infrastructure would be available to
> > create one node per DIMM.
On (04/12/06 14:22), Andrew Morton didst pronounce:
> On Mon, 4 Dec 2006 13:43:44 -0800 (PST)
> Christoph Lameter <[EMAIL PROTECTED]> wrote:
>
> > On Mon, 4 Dec 2006, Andrew Morton wrote:
> >
> > > What happens when we need to run reclaim against just a section of a zone?
> > > Lumpy-reclaim could be used here; perhaps that's Mel's approach too?
On (05/12/06 08:14), Christoph Lameter didst pronounce:
> On Mon, 4 Dec 2006, Mel Gorman wrote:
>
> > 4. Offlining a DIMM
> > 5. Offlining a Node
> >
> > For Situation 4, a zone may be needed because MAX_ORDER_NR_PAGES would have
> > to be set too high for anti-frag to be effective. However, zones would
> > have to be tuned at boot-time
On Mon, 4 Dec 2006, Mel Gorman wrote:
> 4. Offlining a DIMM
> 5. Offlining a Node
>
> For Situation 4, a zone may be needed because MAX_ORDER_NR_PAGES would have
> to be set too high for anti-frag to be effective. However, zones would
> have to be tuned at boot-time and that would be an annoyi
On Tue, 5 Dec 2006, Mel Gorman wrote:
> That is one possibility. There are people working on fake nodes for containers
> at the moment. If that pans out, the infrastructure would be available to
> create one node per DIMM.
Right that is a hack in use for one project. We would be adding huge
amou
On Mon, 4 Dec 2006, Andrew Morton wrote:
> > > What happens when we need to run reclaim against just a section of a zone?
> > > Lumpy-reclaim could be used here; perhaps that's Mel's approach too?
> >
> > Why would we run reclaim against a section of a zone?
>
> Strange question. Because all th
Andrew Morton wrote:
On Mon, 4 Dec 2006 20:34:29 +0000 (GMT)
Mel Gorman <[EMAIL PROTECTED]> wrote:
IOW: big-picture where-do-we-go-from-here stuff.
Start with lumpy reclaim,
I had lumpy-reclaim in my todo-queue but it seems to have gone away. I
think I need a lumpy-reclaim resend, please.
Mel Gorman wrote:
On Mon, 4 Dec 2006, Andrew Morton wrote:
, but I would of course prefer to avoid
merging the anti-frag patches simply based on their stupendous size.
It seems to me that lumpy-reclaim is suitable for the e1000 problem
, but perhaps not for the hugetlbpage problem.
I belie
On Tue, 5 Dec 2006, KAMEZAWA Hiroyuki wrote:
Hi, your plan looks good to me.
Thanks.
some comments.
On Mon, 4 Dec 2006 23:45:32 +0000 (GMT)
Mel Gorman <[EMAIL PROTECTED]> wrote:
1. Use lumpy-reclaim to intelligently reclaim contiguous pages. The same
logic can be used to reclaim within a PFN range
Hi, your plan looks good to me.
some comments.
On Mon, 4 Dec 2006 23:45:32 +0000 (GMT)
Mel Gorman <[EMAIL PROTECTED]> wrote:
> 1. Use lumpy-reclaim to intelligently reclaim contiguous pages. The same
> logic can be used to reclaim within a PFN range
> 2. Merge anti-frag to help high-order allocations
On (04/12/06 14:34), Andrew Morton didst pronounce:
On Mon, 4 Dec 2006 20:34:29 +0000 (GMT)
Mel Gorman <[EMAIL PROTECTED]> wrote:
> > IOW: big-picture where-do-we-go-from-here stuff.
> >
>
> Start with lumpy reclaim,
I had lumpy-reclaim in my todo-queue but it seems to have gone away. I
think I need a lumpy-reclaim resend, please.
On Mon, 4 Dec 2006 20:34:29 +0000 (GMT)
Mel Gorman <[EMAIL PROTECTED]> wrote:
> > IOW: big-picture where-do-we-go-from-here stuff.
> >
>
> Start with lumpy reclaim,
I had lumpy-reclaim in my todo-queue but it seems to have gone away. I
think I need a lumpy-reclaim resend, please.
> then I'd li
On Mon, 4 Dec 2006 13:43:44 -0800 (PST)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Mon, 4 Dec 2006, Andrew Morton wrote:
>
> > What happens when we need to run reclaim against just a section of a zone?
> > Lumpy-reclaim could be used here; perhaps that's Mel's approach too?
>
> Why would we run reclaim against a section of a zone?
On Mon, 4 Dec 2006, Andrew Morton wrote:
> What happens when we need to run reclaim against just a section of a zone?
> Lumpy-reclaim could be used here; perhaps that's Mel's approach too?
Why would we run reclaim against a section of a zone?
> We'd need new infrastructure to perform the
> sect
On Mon, 4 Dec 2006 12:17:26 -0800 (PST)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > I suspect you'll have to live with that. I've yet to see a vaguely sane
> > proposal to otherwise prevent unreclaimable, unmoveable kernel allocations
> > from landing in a hot-unpluggable physical memory region.
On Mon, 2006-12-04 at 11:30 -0800, Andrew Morton wrote:
> I'd also like to pin down the situation with lumpy-reclaim versus
> anti-fragmentation. No offence, but I would of course prefer to avoid
> merging the anti-frag patches simply based on their stupendous size. It
> seems to me that lumpy-reclaim is suitable for the e1000 problem, but perhaps
> not for the hugetlbpage problem.
On Mon, 4 Dec 2006, Andrew Morton wrote:
On Mon, 4 Dec 2006 14:07:47 +0000
[EMAIL PROTECTED] (Mel Gorman) wrote:
o copy_strings() and variants are no longer setting the flag as the pages
are not obviously movable when I took a much closer look.
o The arch function alloc_zeroed_user_highpage() is now called __alloc_zeroed_user_highpage()
On Mon, 4 Dec 2006 11:41:42 -0800 (PST)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > That depends on how we do hot-unplug, if we do it. I continue to suspect
> > that it'll be done via memory zones: effectively by resurrecting
> GFP_HIGHMEM. In which case there's little overlap with anti-fragmentation.
On Mon, 4 Dec 2006, Andrew Morton wrote:
> > The multi zone approach does not work with NUMA. NUMA only supports a
> > single zone for memory policy control etc.
>
> Wot? memory policies are a per-vma thing?
They only apply to "policy_zone" of a node. policy_zone can only take a
single type o
On Mon, 4 Dec 2006, Andrew Morton wrote:
> My concern is that __GFP_MOVABLE is useful for fragmentation-avoidance, but
> useless for memory hot-unplug. So that if/when hot-unplug comes along
> we'll add more gunk which is a somewhat-superset of the GFP_MOVABLE
> infrastructure, hence we didn't ne
On Mon, 4 Dec 2006 14:07:47 +0000
[EMAIL PROTECTED] (Mel Gorman) wrote:
> o copy_strings() and variants are no longer setting the flag as the pages
> are not obviously movable when I took a much closer look.
>
> o The arch function alloc_zeroed_user_highpage() is now called
> __alloc_zeroed_user_highpage()
On (01/12/06 11:01), Andrew Morton didst pronounce:
> On Fri, 1 Dec 2006 09:54:11 +0000 (GMT)
> Mel Gorman <[EMAIL PROTECTED]> wrote:
>
> > >> @@ -65,7 +65,7 @@ static inline void clear_user_highpage(s
> > >> static inline struct page *
> > >> alloc_zeroed_user_highpage(struct vm_area_struct *vma, unsigned long vaddr)
On Fri, 1 Dec 2006 09:54:11 +0000 (GMT)
Mel Gorman <[EMAIL PROTECTED]> wrote:
> >> @@ -65,7 +65,7 @@ static inline void clear_user_highpage(s
> >> static inline struct page *
> >> alloc_zeroed_user_highpage(struct vm_area_struct *vma, unsigned long
> >> vaddr)
> >> {
> >> - struct page *page
On Thu, 30 Nov 2006, Andrew Morton wrote:
On Thu, 30 Nov 2006 17:07:46 +0000
[EMAIL PROTECTED] (Mel Gorman) wrote:
Am reposting this patch after there were no further comments on the last
version.
Am not sure what to do with it - nothing actually uses __GFP_MOVABLE.
Nothing yet. To begin
On Thu, 30 Nov 2006 17:07:46 +0000
[EMAIL PROTECTED] (Mel Gorman) wrote:
> Am reposting this patch after there were no further comments on the last
> version.
Am not sure what to do with it - nothing actually uses __GFP_MOVABLE.
> It is often known at allocation time when a page may be migrated
Am reposting this patch after there were no further comments on the last
version.
It is often known at allocation time when a page may be migrated or not. This
patch adds a flag called __GFP_MOVABLE and GFP_HIGH_MOVABLE. Allocations using
__GFP_MOVABLE can be either migrated using the page migration mechanism or
reclaimed by syncing with backing storage and discarding.
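For reference, a sketch of the interface the patch description implies; the
bit value and the GFP_HIGH_MOVABLE composition are assumptions, not the
posted patch:

typedef unsigned int gfp_t;

#define __GFP_HIGHMEM     0x02u          /* illustrative existing bit */
#define __GFP_MOVABLE     0x100000u      /* new hint: page may move later */

#define GFP_HIGHUSER      (__GFP_HIGHMEM)              /* simplified */
#define GFP_HIGH_MOVABLE  (GFP_HIGHUSER | __GFP_MOVABLE)

/*
 * Anti-fragmentation (or a hot-unpluggable zone) can then group
 * requests by mobility at allocation time.
 */
static const char *describe_mobility(gfp_t flags)
{
        return (flags & __GFP_MOVABLE) ? "movable" : "unmovable";
}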