()ed we'll now down_write().
Suggested-by: Linus Torvalds torva...@linux-foundation.org
Cc: Andrew Morton a...@linux-foundation.org
Cc: Peter Zijlstra a.p.zijls...@chello.nl
Cc: Andrea Arcangeli aarca...@redhat.com
Cc: Rik van Riel r...@redhat.com
Cc: Mel Gorman mgor...@suse.de
Cc: Hugh Dickins hu...@google.com
Signed-off-by: Ingo Molnar mi...@kernel.org
---
include/linux/rmap.h | 15 +--
mm/huge_memory.c     |  4 ++--
mm/memory-failure.c  |  4 ++--
mm/migrate.c
On Sun, Dec 02, 2012 at 07:43:10PM +0100, Ingo Molnar wrote:
From: Mel Gorman mgor...@suse.de
This is the simplest possible policy that still does something
of note. When a pte_numa is faulted, it is moved immediately.
Any replacement policy must at least do better than this and in
all
On Tue, Dec 04, 2012 at 02:54:08PM +0200, Tommi Rantala wrote:
2012/10/9 Mel Gorman mgor...@suse.de:
commit 00442ad04a5eac08a98255697c510e708f6082e2 upstream.
Commit cc9a6c877661 ("cpuset: mm: reduce large amounts of memory barrier
related damage v3") introduced a potential memory
On Tue, Dec 04, 2012 at 06:37:41AM -0800, Michel Lespinasse wrote:
On Mon, Dec 3, 2012 at 6:17 AM, Mel Gorman mgor...@suse.de wrote:
On Sat, Dec 01, 2012 at 09:15:38PM +0100, Ingo Molnar wrote:
@@ -732,7 +732,7 @@ static int page_referenced_anon(struct p
struct anon_vma_chain *avc
scalability patches help balancenuma a bit for some of the
tests although it increases system CPU usage a little.
--
Mel Gorman
SUSE Labs
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http
On Tue, Dec 04, 2012 at 11:24:30PM -0800, Hugh Dickins wrote:
From: Mel Gorman mgor...@suse.de
Commit 00442ad04a5e ("mempolicy: fix a memory corruption by refcount
imbalance in alloc_pages_vma()") changed get_vma_policy() to raise the
refcount on a shmem shared mempolicy; whereas
reproducible and now they are
running the backup program without accessing /proc/kcore so the patch has
not been validated but I think it makes sense. If reviewers agree then it
should also be included in -stable back as far as 3.0-stable.
Cc: sta...@vger.kernel.org
Signed-off-by: Mel Gorman mgor
above. This patch adds the necessary pud_large() check.
Cc: sta...@vger.kernel.org
Signed-off-by: Mel Gorman mgor...@suse.de
Reviewed-by: Rik van Riel r...@redhat.com
Reviewed-by: Michal Hocko mho...@suse.cz
Acked-by: Johannes Weiner han...@cmpxchg.org
---
arch/x86/include/asm/pgtable.h |5
On Wed, Feb 13, 2013 at 12:10:31PM +0100, Ingo Molnar wrote:
* Mel Gorman mgor...@suse.de wrote:
Andrew or Ingo, please pick up.
Already did - will push it out later today.
Whoops, thanks. Sorry for the noise.
in as a fix?
On a semi-related note; is there a plan for backporting highmem support for
the LTSI kernel considering it's aimed at embedded and CMA was highlighted
in their announcement for 3.4 support?
...
From: Andrew Morton a...@linux-foundation.org
Subject: include/linux/mmzone.h: cleanups
- implement zone_idx() in C to fix its references-args-twice macro bug
- use zone_idx() in is_highmem() to remove large amounts of silly fluff.
Cc: Lin Feng linf...@cn.fujitsu.com
Cc: Mel Gorman m
only from zone non-movable in memory.
It's a wrapper of get_user_pages() but it makes sure that all pages come from
non-movable zone via additional page migration.
Cc: Andrew Morton a...@linux-foundation.org
Cc: Mel Gorman mgor...@suse.de
Cc: KAMEZAWA Hiroyuki kamezawa.hir...@jp.fujitsu.com
On Tue, Feb 05, 2013 at 11:57:22AM +, Mel Gorman wrote:
+ migrate_pre_flag = 1;
+ }
+
+ if (!isolate_lru_page(pages[i])) {
+ inc_zone_page_state(pages[i], NR_ISOLATED_ANON
[nid] = RB_ROOT;
+
ksm_thread = kthread_run(ksm_scan_thread, NULL, "ksmd");
if (IS_ERR(ksm_thread)) {
printk(KERN_ERR "ksm: creating kthread failed\n");
) {
This is not your fault, the old code is wrong too. It is assuming that all
nodes are populated in numeric orders with no holes. It won't work if just
two nodes 0 and 4 are online. It should be using for_each_online_node().
);
page = NULL;
}
}
return page;
}
Up to you, I'm not going to make a big deal of it.
FWIW, I agree that removing rcu_read_lock() is fine.
effort to continue removing as many of the stable nodes anyway.
We're in trouble either way of course.
Otherwise I didn't spot a problem so, as weak as it is due to my familiarity
with KSM;
Acked-by: Mel Gorman mgor...@suse.de
);
ClearPagePrivate(page);
set_page_private(page, 0);
On Wed, Feb 06, 2013 at 09:42:34AM +0900, Minchan Kim wrote:
On Tue, Feb 05, 2013 at 12:01:37PM +, Mel Gorman wrote:
On Tue, Feb 05, 2013 at 05:21:52PM +0800, Lin Feng wrote:
get_user_pages() always tries to allocate pages from movable zone, which
is not
reliable to memory
On Tue, Feb 05, 2013 at 06:26:51PM -0800, Michel Lespinasse wrote:
Just nitpicking, but:
On Tue, Feb 5, 2013 at 3:57 AM, Mel Gorman mgor...@suse.de wrote:
+static inline bool zone_is_idx(struct zone *zone, enum zone_type idx)
+{
+ /* This mess avoids a potentially expensive pointer
On Mon, Jan 28, 2013 at 08:39:35PM -0800, Hugh Dickins wrote:
On Thu, 24 Jan 2013, Mel Gorman wrote:
The function names page_xchg_last_nid(), page_last_nid() and
reset_page_last_nid() were judged to be inconsistent so rename them
to a struct_field_op style pattern. As it looked jarring
to play catchup again when I get back. It's going to be close to 2
weeks before I can start figuring out what went wrong here but I plan to
start with 3.0 and work forward and see how I get on.
TESTDISK_FILESYSTEM=ext4
+export TESTDISK_MKFS_PARAM=
+export TESTDISK_MOUNT_ARGS=
#
# Test NFS disk to setup (optional)
#export TESTDISK_NFS_MOUNT=192.168.10.7:/exports/`hostname`
s/me/be/ and clarify the comment a bit when we're changing it anyway.
Suggested-by: Simon Jeons simon.je...@gmail.com
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/mm_types.h |6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/linux/mm_types.h b
also remove the warning if we grow
enough 64bit only page-flags to push the last-cpu out.
[mgor...@suse.de: Minor modifications]
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/mm.h| 33 -
include/linux/mm_types.h |2
but it
should be fixed. While we are there, migrate_balanced_pgdat() should treat
nr_migrate_pages as an unsigned long as it is treated as a watermark.
Suggested-by: Wanpeng Li liw...@linux.vnet.ibm.com
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/migrate.c |6 --
1 file changed, 4 insertions
The following series is a few follow-up patches left over from NUMA
balancing. The first three patches are tiny fixes. Patches 4 and 5 fold
page->_last_nid into page->flags and are entirely based on work from Peter
Zijlstra. The final patch is a cleanup by Hugh Dickins that he had marked
as a
()
always happens before put_page().
[mgor...@suse.de: changelog only]
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/huge_memory.c | 28 ++--
mm/migrate.c | 95 --
2 files changed, 52 insertions(+), 71 deletions(-)
diff --git a/mm
-by: Mel Gorman mgor...@suse.de
---
include/linux/mm.h| 40 -
include/linux/mm_types.h |1 +
include/linux/mmzone.h| 22 +---
include/linux/page-flags-layout.h | 71 +
4 files changed, 73
count_vm_numa_event() so that
the definitions look similar.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/vmstat.h |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index a13291f..5fd71a7 100644
--- a/include/linux
On Tue, Jan 22, 2013 at 02:40:24PM -0800, Andrew Morton wrote:
On Tue, 22 Jan 2013 17:12:39 +
Mel Gorman mgor...@suse.de wrote:
The current definitions for count_vm_numa_events() are wrong for
!CONFIG_NUMA_BALANCING as the following would miss the side-effect
On Tue, Jan 22, 2013 at 02:46:59PM -0800, Andrew Morton wrote:
On Tue, 22 Jan 2013 17:12:41 +
Mel Gorman mgor...@suse.de wrote:
From: Peter Zijlstra a.p.zijls...@chello.nl
page->_last_nid fits into page->flags on 64-bit. The unlikely 32-bit NUMA
configuration with NUMA Balancing
] mminit::pageflags_layout_pgshifts Section 0 Node 55 Zone 53
Lastnid 44
[0.00] mminit::pageflags_layout_nodezoneid Node/Zone ID: 64 -> 53
[0.00] mminit::pageflags_layout_usage location: 64 -> 44 layout 44 ->
25 unused 25 -> 0 page-flags
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm
-related fields start with page (page_count,
page_mapcount etc.) but the setters begin with set (set_page_section,
set_page_zone, set_page_links etc.). For mapcount, we also have
reset_page_mapcount() so to me reset_page_last_nid() is already
consistent.
128 bytes of text in the vmlinux file for the kernel
configuration I used for testing automatic NUMA balancing.
Signed-off-by: Mel Gorman mgor...@suse.de
---
include/linux/mm.h | 21 +
mm/mmzone.c| 20 +++-
2 files changed, 24 insertions(+), 17
renames reset_page_mapcount() to
page_mapcount_reset(). There are others like init_page_count() but as it
is used throughout the arch code a rename would likely cause more conflicts
than it is worth.
Suggested-by: Andrew Morton a...@linux-foundation.org
Signed-off-by: Mel Gorman mgor...@suse.de
On Thu, Feb 07, 2013 at 04:07:17PM -0800, Hugh Dickins wrote:
On Tue, 5 Feb 2013, Mel Gorman wrote:
On Fri, Jan 25, 2013 at 05:59:35PM -0800, Hugh Dickins wrote:
Memory hotremove's ksm_check_stable_tree() is pitifully inefficient
(restarting whenever it finds a stale node to remove
the nasty work and spares
everywhere else from having to worry about the difficulties.
Ok, I'm convinced. As you say, the case for having one function is a lot
stronger later in the series when this function becomes quite complex. Thanks.
, and an error
then will just prevent changing merge_across_nodes at that time. So
the mysteriously unremovable stable nodes remain the same kind of tree.
Ok.
(EXIT_FAILURE);
}
munmap(buf, expected);
close(fd);
free(vec);
exit(EXIT_SUCCESS);
}
Cc: sta...@vger.kernel.org
Reported-by: Rob van der Heij rvdh...@gmail.com
Signed-off-by: Mel Gorman mgor...@suse.de
---
mm/fadvise.c | 18 --
1 file changed
? Mel?
Looks correct to me and should cc sta...@vger.kernel.org
Acked-by: Mel Gorman mgor...@suse.de
allocated page will be grouped with
other movable pages.
On Tue, Feb 19, 2013 at 05:55:30PM +0800, Lin Feng wrote:
Hi Mel,
On 02/18/2013 11:17 PM, Mel Gorman wrote:
SNIP
result. It's a little clumsy but the memory hot-remove failure message
could list what applications have pinned the pages that cannot be
removed
so
On Thu, Feb 14, 2013 at 12:39:26PM -0800, Andrew Morton wrote:
On Thu, 14 Feb 2013 12:03:49 +
Mel Gorman mgor...@suse.de wrote:
Rob van der Heij reported the following (paraphrased) on private mail.
The scenario is that I want to avoid backups to fill up the page
cache
On Sat, Feb 23, 2013 at 03:34:17PM +0800, Hillf Danton wrote:
Hello all
On Mon, Dec 17, 2012 at 7:19 AM, Linus Torvalds
torva...@linux-foundation.org wrote:
On Wed, Dec 12, 2012 at 2:03 AM, Mel Gorman mgor...@suse.de wrote:
This is a pull request for Automatic NUMA Balancing V11. The list
On Mon, 8 Aug 2005, Andrew Morton wrote:
Mel Gorman [EMAIL PROTECTED] wrote:
Hi,
I am working on a direct reclaim strategy to free up large blocks of
contiguous pages. The part I have is working fine, but I am finding a
hundreds of pages that are being used for inodes that I need
On Wed, 10 Aug 2005, Andrew Morton wrote:
Mel Gorman [EMAIL PROTECTED] wrote:
On Mon, 8 Aug 2005, Andrew Morton wrote:
Mel Gorman [EMAIL PROTECTED] wrote:
Hi,
I am working on a direct reclaim strategy to free up large blocks of
contiguous pages. The part I have
On Wed, 10 Aug 2005, Andrew Morton wrote:
Mel Gorman [EMAIL PROTECTED] wrote:
On Wed, 10 Aug 2005, Andrew Morton wrote:
Mel Gorman [EMAIL PROTECTED] wrote:
On Mon, 8 Aug 2005, Andrew Morton wrote:
Mel Gorman [EMAIL PROTECTED] wrote:
Hi,
I am
On Wed, 10 Aug 2005, Dave Hansen wrote:
On Wed, 2005-08-10 at 18:27 +0100, Mel Gorman wrote:
I later linearly scan the mem_map looking for pages that can be freed up
(usually LRU pages). I was expecting any page with PG_inode set to have a
page-mapping but not all of them do
This patch adds the kernelcore= parameter for ppc and powerpc.
Signed-off-by: Mel Gorman [EMAIL PROTECTED]
---
powerpc/kernel/prom.c |1 +
ppc/mm/init.c |2 ++
2 files changed, 3 insertions(+)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff
linux-2.6.20-rc4-mm1
This patch adds the kernelcore= parameter for x86.
Signed-off-by: Mel Gorman [EMAIL PROTECTED]
---
setup.c |1 +
1 files changed, 1 insertion(+)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff
linux-2.6.20-rc4-mm1-003_mark_hugepages_movable/arch/i386/kernel/setup.c
linux-2.6.20-rc4-mm1
Once all patches are applied, a new command-line parameter and a new
sysctl exist. This patch adds the necessary documentation.
Signed-off-by: Mel Gorman [EMAIL PROTECTED]
---
filesystems/proc.txt | 15 +++
kernel-parameters.txt | 16
sysctl/vm.txt
This patch adds the kernelcore= parameter for x86_64.
Signed-off-by: Mel Gorman [EMAIL PROTECTED]
---
e820.c |1 +
1 files changed, 1 insertion(+)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff
linux-2.6.20-rc4-mm1-005_ppc64_set_kernelcore/arch/x86_64/kernel/e820.c
linux-2.6.20-rc4
This patch adds the kernelcore= parameter for ia64.
Signed-off-by: Mel Gorman [EMAIL PROTECTED]
---
efi.c |3 +++
1 files changed, 3 insertions(+)
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff
linux-2.6.20-rc4-mm1-006_x8664_set_kernelcore/arch/ia64/kernel/efi.c
linux-2.6.20-rc4-mm1
are not mlocked. Despite huge pages being
non-movable, we do not introduce additional external fragmentation of note
as huge pages are always the largest contiguous block we care about.
A lot of credit goes to Andy Whitcroft for catching a large variety of
problems during review of the patches.
function. This clean-up suggestion is courtesy of
Hugh Dickins.
Additional credit goes to Christoph Lameter and Linus Torvalds for shaping
the concept. Credit to Hugh Dickins for catching issues with shmem swap
vector and ramfs allocations.
Signed-off-by: Mel Gorman [EMAIL PROTECTED]
---
fs/inode.c
-by: Mel Gorman [EMAIL PROTECTED]
---
include/linux/gfp.h|3
include/linux/mm.h |1
include/linux/mmzone.h | 21 +++-
mm/highmem.c |5
mm/page_alloc.c| 224 +++-
5 files changed, 247 insertions(+), 7 deletions(-)
diff
external fragmentation of note as huge pages are always
the largest contiguous block we care about.
Signed-off-by: Mel Gorman [EMAIL PROTECTED]
---
include/linux/hugetlb.h |3 +++
include/linux/mempolicy.h |6 +++---
include/linux/sysctl.h|1 +
kernel/sysctl.c |8
On Fri, 26 Jan 2007, Nick Piggin wrote:
Mel Gorman wrote:
It is often known at allocation time when a page may be migrated or
not. This patch adds a flag called __GFP_MOVABLE and a new mask called
GFP_HIGH_MOVABLE.
Shouldn't that be HIGHUSER_MOVABLE?
I suppose, but it's a bit verbose. I
On Fri, 26 Jan 2007, Andrew Morton wrote:
On Thu, 25 Jan 2007 23:44:58 + (GMT)
Mel Gorman [EMAIL PROTECTED] wrote:
The following 8 patches against 2.6.20-rc4-mm1 create a zone called
ZONE_MOVABLE
Argh. These surely get all tangled up with the
make-zones-optional-by-adding-zillions
On Fri, 26 Jan 2007, Christoph Lameter wrote:
On Thu, 25 Jan 2007, Mel Gorman wrote:
The following 8 patches against 2.6.20-rc4-mm1 create a zone called
ZONE_MOVABLE that is only usable by allocations that specify both __GFP_HIGHMEM
and __GFP_MOVABLE. This has the effect of keeping all non
On Fri, 26 Jan 2007, Christoph Lameter wrote:
On Thu, 25 Jan 2007, Mel Gorman wrote:
@@ -166,6 +168,8 @@ enum zone_type {
#define ZONES_SHIFT 1
#elif __ZONE_COUNT <= 4
#define ZONES_SHIFT 2
+#elif __ZONE_COUNT <= 8
+#define ZONES_SHIFT 3
#else
You do not need a shift of 3. Even
, someone suggests I go
back the other way)
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
On Fri, 26 Jan 2007, Christoph Lameter wrote:
On Fri, 26 Jan 2007, Mel Gorman wrote:
For arches that do not have HIGHMEM other zones would be okay too it
seems.
It would, but it'd obscure the code to take advantage of that.
No MOVABLE memory for 64 bit platforms that do not have HIGHMEM
On Fri, 26 Jan 2007, Christoph Lameter wrote:
On Fri, 26 Jan 2007, Mel Gorman wrote:
Because Andrew has made it pretty clear he will not take those patches on the
grounds of complexity - at least until it can be shown that they fix the e1000
problem. Any improvement on the behavior of those
for FOR_ALL_ZONES(), what code in there uses special
awareness of the zone?
On Fri, 26 Jan 2007, Christoph Lameter wrote:
On Fri, 26 Jan 2007, Mel Gorman wrote:
What is the e1000 problem? Jumbo packet allocation via GFP_KERNEL?
Yes. Potentially the anti-fragmentation patches could address this by
clustering atomic allocations together as much as possible
On Fri, 26 Jan 2007, Christoph Lameter wrote:
On Fri, 26 Jan 2007, Mel Gorman wrote:
Other than adding some TEXT_FOR_MOVABLE, an addition to TEXTS_FOR_ZONES() and
similar updates for FOR_ALL_ZONES(), what code in there uses special awareness
of the zone?
Look for special handling
On Fri, 26 Jan 2007, Christoph Lameter wrote:
On Fri, 26 Jan 2007, Mel Gorman wrote:
It's come up a few times and the conversation is always fairly similar although
the thread http://lkml.org/lkml/2006/9/22/44 has interesting information on
the topic. There has been no serious discussion
On Fri, 26 Jan 2007, Christoph Lameter wrote:
On Fri, 26 Jan 2007, Mel Gorman wrote:
The zone-based approach does nothing to help jumbo frame allocations. It only
helps hugepage allocations at runtime and potentially memory hot-remove.
Sounds like the max order based approach is better
On Fri, 26 Jan 2007, Chris Friesen wrote:
Mel Gorman wrote:
Worse, the problem is to have high order contiguous blocks free at the time
of allocation without reclaim or migration. If the allocations were not
atomic, anti-fragmentation as it is today would be enough.
Has anyone looked
On Fri, 26 Jan 2007, Christoph Lameter wrote:
On Thu, 25 Jan 2007, Mel Gorman wrote:
@@ -166,6 +168,8 @@ enum zone_type {
#define ZONES_SHIFT 1
#elif __ZONE_COUNT <= 4
#define ZONES_SHIFT 2
+#elif __ZONE_COUNT <= 8
+#define ZONES_SHIFT 3
#else
You do not need a shift of 3. Even
On (26/01/07 09:16), Christoph Lameter didst pronounce:
I do not see any updates of vmstat.c and vmstat.h. This
means that VM statistics are not kept / considered for ZONE_MOVABLE.
Based on searching around for ZONE_DMA32, the following patch appears to be
all that is required;
diff -rup -X
On Mon, 8 Aug 2005, Jörn Engel wrote:
On Mon, 8 August 2005 16:52:52 +0100, Mel Gorman wrote:
I am working on a direct reclaim strategy to free up large blocks of
contiguous pages. The part I have is working fine, but I am finding a
hundreds of pages that are being used for inodes that I
is then only sorted once when required. Successfully boot tested on a
number of machines.
Signed-off-by: Mel Gorman [EMAIL PROTECTED]
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff
linux-2.6.19-mm1-clean/mm/page_alloc.c
linux-2.6.19-mm1-excessivesort/mm/page_alloc.c
--- linux-2.6.19-mm1-clean/mm
On Tue, 5 Dec 2006, Christoph Lameter wrote:
On Tue, 5 Dec 2006, Mel Gorman wrote:
There are times you want to reclaim just part of a zone - specifically
satisfying a high-order allocation. See situations 1 and 2 from elsewhere
in this thread. On a similar vein, there will be times when you
;
+ }
+ return ((end - start) - ram);
+}
+
+
+/*
* Mark e820 reserved areas as busy for the resource manager.
*/
void __init e820_reserve_resources(void)
for the SPARSEMEM memory model. This
only applies to FLATMEM and DISCONTIG configurations.
Signed-off-by: Mel Gorman [EMAIL PROTECTED]
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff
linux-2.6.19-rc6-mm1-clean/arch/ia64/mm/contig.c
linux-2.6.19-rc6-mm1-debug_bootmem_init_issues/arch/ia64/mm/contig.c
On Mon, 27 Nov 2006, Andi Kleen wrote:
On Monday 27 November 2006 15:08, Mel Gorman wrote:
A number of bug reports have been submitted related to memory initialisation
that would have been easier to debug if the PFN of page addresses were
available. The dmesg output is often insufficient
number of lines
like
Entering add_active_range(0, 1024, 30719) 0 entries of 256 used
I see your point. I'll look into doing something like apic_printk().
for its metadata allocations as well as its page_cache_alloc()s:
that's just a special case. Though the ramfs case is more telling
(its pagecache pages being not at present movable).
Hugh
to Hugh Dickins for catching issues with shmem swap
vector and ramfs allocations.
Signed-off-by: Mel Gorman [EMAIL PROTECTED]
diff -rup -X /usr/src/patchset-0.6/bin//dontdiff
linux-2.6.19-rc5-mm2-clean/fs/compat.c
linux-2.6.19-rc5-mm2-mark_highmovable/fs/compat.c
--- linux-2.6.19-rc5-mm2-clean/fs
On Mon, 27 Nov 2006, Rohit Seth wrote:
Hi Mel,
On Mon, 2006-11-27 at 13:18 +, Mel Gorman wrote:
On Wed, 22 Nov 2006, Rohit Seth wrote:
This patch provides an IO hole size in a given address range.
Hi,
This patch reintroduces a function that doubles up what
absent_pages_in_range
On Tue, 28 Nov 2006, Rohit Seth wrote:
On Tue, 2006-11-28 at 13:24 +, Mel Gorman wrote:
On Mon, 27 Nov 2006, Rohit Seth wrote:
Hi Mel,
On Mon, 2006-11-27 at 13:18 +, Mel Gorman wrote:
On Wed, 22 Nov 2006, Rohit Seth wrote:
This patch provides an IO hole size in a given address
On Wed, 29 Nov 2006, Andrew Morton wrote:
On Wed, 29 Nov 2006 18:00:47 +
[EMAIL PROTECTED] (Mel Gorman) wrote:
page_alloc.c contains a large amount of memory initialisation code which
obscures the purpose of the file. This patch breaks out the initialisation
code into a separate file
migration mechanism
or reclaimed by syncing with backing storage and discarding.
Additional credit goes to Christoph Lameter and Linus Torvalds for shaping
the concept. Credit to Hugh Dickins for catching issues with shmem swap
vector and ramfs allocations.
Signed-off-by: Mel Gorman [EMAIL PROTECTED
On Thu, 30 Nov 2006, Andrew Morton wrote:
On Thu, 30 Nov 2006 17:07:46 +
[EMAIL PROTECTED] (Mel Gorman) wrote:
Am reporting this patch after there were no further comments on the last
version.
Am not sure what to do with it - nothing actually uses __GFP_MOVABLE.
Nothing yet. To begin
On (01/12/06 11:01), Andrew Morton didst pronounce:
On Fri, 1 Dec 2006 09:54:11 + (GMT)
Mel Gorman [EMAIL PROTECTED] wrote:
@@ -65,7 +65,7 @@ static inline void clear_user_highpage(s
static inline struct page *
alloc_zeroed_user_highpage(struct vm_area_struct *vma, unsigned long
);
} else
page_cache_release(page);
}
- pagevec_lru_add(lru_pvec);
ret = 0;
out:
return ret;
On Mon, 4 Dec 2006, Andrew Morton wrote:
On Mon, 4 Dec 2006 14:07:47 +
[EMAIL PROTECTED] (Mel Gorman) wrote:
o copy_strings() and variants are no longer setting the flag as the pages
are not obviously movable when I took a much closer look.
o The arch function
On (04/12/06 14:34), Andrew Morton didst pronounce:
On Mon, 4 Dec 2006 20:34:29 + (GMT)
Mel Gorman [EMAIL PROTECTED] wrote:
IOW: big-picture where-do-we-go-from-here stuff.
Start with lumpy reclaim,
I had lumpy-reclaim in my todo-queue but it seems to have gone away. I
think I
On Tue, 5 Dec 2006, KAMEZAWA Hiroyuki wrote:
Hi, your plan looks good to me.
Thanks.
some comments.
On Mon, 4 Dec 2006 23:45:32 + (GMT)
Mel Gorman [EMAIL PROTECTED] wrote:
1. Use lumpy-reclaim to intelligently reclaim contiguous pages. The same
logic can be used to reclaim within
On (05/12/06 08:14), Christoph Lameter didst pronounce:
On Mon, 4 Dec 2006, Mel Gorman wrote:
4. Offlining a DIMM
5. Offlining a Node
For Situation 4, a zone may be needed because MAX_ORDER_NR_PAGES would have
to be set too high for anti-frag to be effective. However, zones would
to external fragmentation in the allowable zone
and kernel allocations might trigger OOM because all the free memory was
in ZONE_MOVABLE.
The options above should not require zone infrastructure other than the
LRU lists for scanning.
Is this sufficient detail?
blocks. Without it, you're probably wasting your time.
On Thu, Sep 06, 2012 at 02:31:12PM +0900, Minchan Kim wrote:
Hi Mel,
On Wed, Sep 05, 2012 at 11:56:11AM +0100, Mel Gorman wrote:
On Wed, Sep 05, 2012 at 05:11:13PM +0900, Minchan Kim wrote:
This patch introduces MIGRATE_DISCARD mode in migration.
It drops *clean cache pages* instead
On Thu, Sep 06, 2012 at 09:29:35AM +0100, Mel Gorman wrote:
On Thu, Sep 06, 2012 at 02:31:12PM +0900, Minchan Kim wrote:
Hi Mel,
On Wed, Sep 05, 2012 at 11:56:11AM +0100, Mel Gorman wrote:
On Wed, Sep 05, 2012 at 05:11:13PM +0900, Minchan Kim wrote:
This patch introduces
and was a suggestion on how it could be made
better. However, retrying hot-remove would be even better again. I'm not
suggesting it be done as part of this series.