Map profile events for the i386/core_i7 and add a sensible default for
i386 in general.
Signed-off-by: Mel Gorman
---
oprofile_map_events.pl |5 +
1 file changed, 5 insertions(+)
diff --git a/oprofile_map_events.pl b/oprofile_map_events.pl
index eb413e2..7af5c62 100755
--- a
On Thu, Sep 22, 2011 at 03:18:04PM -0400, Eric B Munson wrote:
> On Thu, 22 Sep 2011 17:33:16 +0100, Mel Gorman wrote:
> > On Thu, Sep 22, 2011 at 11:43:23AM +0100, Mel Gorman wrote:
> >> Distributions typically install vmlinux gzipped but oprofile_start
> >> assume
On Thu, Sep 22, 2011 at 11:43:23AM +0100, Mel Gorman wrote:
> Distributions typically install vmlinux gzipped but oprofile_start
> assumes it is not gzipped because that is what would occur for a
> manual kernel install. oprofile can work with a gzipped vmlinux
> so check for it.
Distributions typically install vmlinux gzipped but oprofile_start
assumes it is not gzipped because that is what would occur for a
manual kernel install. oprofile can work with a gzipped vmlinux
so check for it.
Signed-off-by: Mel Gorman
---
oprofile_start.sh |6 ++
1 files changed
On Tue, Nov 23, 2010 at 08:52:16AM -0700, Eric B Munson wrote:
> As with get_huge_pages it is appropriate to use MAP_HUGETLB for
> mappings that will hold the heap.
>
> Signed-off-by: Eric B Munson
Acked-by: Mel Gorman
--
Mel Gorman
Part-time Phd Student
On Tue, Nov 23, 2010 at 08:52:14AM -0700, Eric B Munson wrote:
> When the kernel supports MAP_HUGETLB use it for requesting
> a huge page backed area instead of creating a file descriptor.
>
> Signed-off-by: Eric B Munson
Acked-by: Mel Gorman
--
Mel Gorman
Part-time
reported MMU
> page size from /proc/self/smaps. get_mapping_page_size returns the page
> size that is being used for the specified mapping in bytes.
>
> Signed-off-by: Eric B Munson
Acked-by: Mel Gorman
--
Mel Gorman
Part-time Phd Student Linux Technology
nel supports MAP_HUGETLB.
>
> Signed-off-by: Eric B Munson
Acked-by: Mel Gorman
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM D
ates out the detection of valid page
> sizes from the detection of active mount points.
>
> Signed-off-by: Eric B Munson
Acked-by: Mel Gorman
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IB
On Tue, Nov 23, 2010 at 07:40:05AM -0700, Eric B Munson wrote:
> On Tue, 23 Nov 2010, Mel Gorman wrote:
>
>
>
> > > }
> >
> > I'm missing something. How does opening /dev/zero guarantee that we are
> > checking for huge page availability?
.map_hugetlb &&
> + hpage_size == kernel_default_hugepage_size()) {
> + heap_fd = -1;
> + } else {
> + if (!hugetlbfs_find_path_for_size(hpage_size)) {
> + WARNING("Hugepage size %li unavailable", hpage_size);
> +
iled (flags: 0x%lX):
> %s\n",
> flags, strerror(saved_error));
> @@ -116,7 +145,7 @@ void *get_huge_pages(size_t len, ghp_t flags)
> }
>
> /* Close the file so we do not have to track the descriptor */
> - if (close(buf_fd) != 0) {
getlbfs_check_priv_resv();
> #define hugetlbfs_check_safe_noreserve __lh_hugetlbfs_check_safe_noreserve
> extern void hugetlbfs_check_safe_noreserve();
> +#define hugetlbfs_check_map_hugetlb __lh_hugetlbfs_check_map_hugetlb
> +extern void hugetlbfs_check_map_hugetlb();
> #define __hu
RE_MAP_HUGETLB,
> +
make it mma
If the kernel has the ability to mmap(MAP_HUGETLB)
It was possible to map shared memory without a mount before and while I
know that's not mmap(), the current comment is fuzzy. Otherwise
Acked-by: Mel Gorman
> HUGETLB_FEATURE_NR,
> };
>
ates out the detection of valid page
> sizes from the detection of active mount points.
>
> Signed-off-by: Eric B Munson
Acked-by: Mel Gorman
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IB
n kB, we return B */
> + return page_size * 1024;
> + }
> + }
> +
> + /* We couldn't find an entry for this addr in smaps */
> + fclose(f);
> + return 0;
> +}
> +
> /* We define this function standalone, rather tha
While cpupcstat appeared to have most of the infrastructure necessary to
work out the time spent servicing TLB misses, it does not make the
actual calculation and print it. This patch should cover it.
Signed-off-by: Mel Gorman
---
cpupcstat | 34 +++---
1 files
Add an instructions retired event for i386/core.
Signed-off-by: Mel Gorman
---
oprofile_map_events.pl |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/oprofile_map_events.pl b/oprofile_map_events.pl
index 3c0fb89..eb413e2 100755
--- a/oprofile_map_events.pl
+++ b
insertions(+), 29 deletions(-)
Mel Gorman (9):
tlbmiss_cost.sh: Cache the value for tlb miss cost
tlbmiss_cost.sh: Allow monitoring of TLB miss events on a global
basis
tlbmiss_cost.sh: Suppress errors from opreport
oprofile_map_events.pl: Add an instructions retired event for
i386
tlbmiss_cost.sh now knows how to cache TLB miss cost information. Do not
duplicate the work in cpupcstat.
Signed-off-by: Mel Gorman
---
cpupcstat | 15 ---
1 files changed, 0 insertions(+), 15 deletions(-)
diff --git a/cpupcstat b/cpupcstat
index c4a1795..deca179 100755
--- a
oprofile at its maximum sampling rate causes a lot of interference.
There is no good automatic way of finding out a non-interfering level as
it basically depends on the workload. Still, we can reduce the impact
somewhat with this patch.
Signed-off-by: Mel Gorman
---
TLBC/OpCollect.pm |2
s are reused later unless -f or
--ignore-cache is specified.
Signed-off-by: Mel Gorman
---
contrib/tlbmiss_cost.sh | 39 ++-
1 files changed, 38 insertions(+), 1 deletions(-)
diff --git a/contrib/tlbmiss_cost.sh b/contrib/tlbmiss_cost.sh
index 3c750af..1f
global events.
Signed-off-by: Mel Gorman
---
TLBC/OpCollect.pm |7 +--
TLBC/PerfCollect.pm |2 +-
cpupcstat |8 +++-
man/cpupcstat.8 |5 +
4 files changed, 18 insertions(+), 4 deletions(-)
diff --git a/TLBC/OpCollect.pm b/TLBC/OpCollect.pm
index
If opreport throws up an error, it shows up in the tlbmiss_cost.sh
output. Stop that.
Signed-off-by: Mel Gorman
---
TLBC/OpCollect.pm |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/TLBC/OpCollect.pm b/TLBC/OpCollect.pm
index 2a1c006..c0944bb 100644
--- a/TLBC
Add usage and manual page information on --time-servicing.
Signed-off-by: Mel Gorman
---
cpupcstat |2 ++
man/tlbmiss_cost.sh.8 |6 ++
2 files changed, 8 insertions(+), 0 deletions(-)
diff --git a/cpupcstat b/cpupcstat
index a951f66..951ae05 100755
--- a/cpupcstat
+++ b
The current header is misleading. Fix it.
Signed-off-by: Mel Gorman
---
cpupcstat |5 -
1 files changed, 4 insertions(+), 1 deletions(-)
diff --git a/cpupcstat b/cpupcstat
index deca179..a951f66 100755
--- a/cpupcstat
+++ b/cpupcstat
@@ -120,9 +120,12 @@ sub run_profile()
if
*/
> + HUGEPAGES_OC, /* can be allocated on demand - maximum */
> HUGEPAGES_MAX_COUNTERS,
> };
> #define get_huge_page_counter __pu_get_huge_page_counter
> diff --git a/man/hugeadm.8 b/man/hugeadm.8
> index 05cdceb..13d6199 100644
> --- a/man/hugeadm.8
> ++
The patch that allows MAP_NORESERVE to be safely used was unexpectedly
merged for 2.6.34 instead of 2.6.35. Update the kernel features test
accordingly.
Signed-off-by: Mel Gorman
---
kernel-features.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel-features.c b
. Hence,
this patch also checks the kernel version and only allows use of
MAP_NORESERVE if it's safe to do so.
Signed-off-by: Mel Gorman
---
alloc.c |3 ++-
elflink.c| 12 +++-
hugectl.c| 12
hugeut
Rather than just flopping around uselessly when the helpers are not
available, suggest switches that solve the problem.
Signed-off-by: Mel Gorman
---
contrib/tlbmiss_cost.sh |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/contrib/tlbmiss_cost.sh b/contrib
FIX=tlbmiss-cost-results
diff --git a/cpumhz b/cpumhz
deleted file mode 100755
index da84543..000
--- a/cpumhz
+++ /dev/null
@@ -1,48 +0,0 @@
-#!/bin/bash
-# Simple script to print out the max MHz
-# Licensed under LGPL 2.1 as packaged with libhugetlbfs
-# (c) Mel Gorman 2009
-
-MAX_MHZ=0
-SYSFS_SCALIN
The manual page is there, but people (well me) expect a -h and --help
switch too. This patch adds the necessary support.
Signed-off-by: Mel Gorman
---
contrib/tlbmiss_cost.sh | 16 +++-
1 files changed, 15 insertions(+), 1 deletions(-)
diff --git a/contrib/tlbmiss_cost.sh b
The bold tags are not closed off properly in the manual page. It looks silly.
Signed-off-by: Mel Gorman
---
man/tlbmiss_cost.sh.8 | 24
1 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/man/tlbmiss_cost.sh.8 b/man/tlbmiss_cost.sh.8
index ee73683
Currently, hugeadm is passing in NULL as the user or group name to
create_mounts(). The impact is that when --create-group-mounts or
--create-user-mounts is used, the same mount point is used. This causes
problems when more than one user mount is created.
Signed-off-by: Mel Gorman
---
hugeadm.c
hugetlbfs can limit the size of a mount point by either the amount of
memory it uses or the number of inodes that can be created. This patch
gives hugeadm the necessary smarts and documentation to set the
limitations.
Signed-off-by: Mel Gorman
---
hugeadm.c | 48
The wrong index is being used when parsing the oprofile for the DTLB event. The
result is that on ppc970, the TLB miss cost is always 0. This patch fixes the
problem.
Signed-off-by: Mel Gorman
---
contrib/tlbmiss_cost.sh |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/contrib
) (((x) + (a) - 1) & ~((a) - 1))
Eric, could you check what the old and rounded values are and see are
the alignment macros screwing things up? If so and the problem is with
ALIGN_UP, try this version.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
Un
.
Signed-off-by: Mel Gorman
---
contrib/tlbmiss_cost.sh | 93 +++
1 files changed, 93 insertions(+), 0 deletions(-)
diff --git a/contrib/tlbmiss_cost.sh b/contrib/tlbmiss_cost.sh
index 81e2c79..a78b857 100755
--- a/contrib/tlbmiss_cost.sh
+++ b/contrib
On Wed, Nov 04, 2009 at 03:34:26PM +, Mel Gorman wrote:
> I had a large number of comments to make but the length of time required to
> describe each of the points and the time required to do it in patch-format
> was roughly comparable so here are a load of patches.
>
> Ther
Distros differ in the default bitness of a compiled binary, but the
size of N assumes 32 bit. This patch specifies -m32 to be sure.
Signed-off-by: Mel Gorman
---
tlbmiss_cost.sh |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/tlbmiss_cost.sh b
accepted or
rejected. The patch also renames the stream binary to STREAM as "stream" is an
unrelated binary packaged with ImageMagick that might be commonly installed.
Signed-off-by: Mel Gorman
---
tlbmiss_cost.sh | 76 +-
1 files c
+ 1359 = 517459
TLB_MISS_COST=523
So, on PPC970 at least, the cost of a TLB miss is approximately 523
cycles. The method still needs to be verified on different families of POWER.
Signed-off-by: Mel Gorman
---
tlbmiss_cost.sh | 164 ---
1 files
accepted or rejected.
Signed-off-by: Mel Gorman
---
tlbmiss_cost.sh | 47 ++-
1 files changed, 46 insertions(+), 1 deletions(-)
diff --git a/tlbmiss_cost.sh b/tlbmiss_cost.sh
index 3e4b260..dbdd023 100755
--- a/tlbmiss_cost.sh
+++ b/tlbmiss_cost.sh
The URL provided for Calibrator appears invalid. This is a current
link to the tool.
Signed-off-by: Mel Gorman
---
tlbmiss_cost.sh |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/tlbmiss_cost.sh b/tlbmiss_cost.sh
index 1717d9c..af3daa6 100755
--- a/tlbmiss_cost.sh
Rather than using echo to output logging messages, this patch adds a
loglevel-like interface to print errors and trace messages.
Signed-off-by: Mel Gorman
---
tlbmiss_cost.sh | 54 +-
1 files changed, 37 insertions(+), 17 deletions(-)
diff
rance. Matched 3/3
TLB_MISS_COST=19
Signed-off-by: Mel Gorman
---
tlbmiss_cost.sh | 14 +++---
1 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/tlbmiss_cost.sh b/tlbmiss_cost.sh
index 0a71ccd..3e4b260 100755
--- a/tlbmiss_cost.sh
+++ b/tlbmiss_cost.sh
@@ -85,6 +
I had a large number of comments to make but the length of time required to
describe each of the points and the time required to do it in patch-format
was roughly comparable so here are a load of patches.
There is still work that needs to be done. --help output is needed and a
manual page describi
. This patch catches when this situation occurs and truncates the
recommended shmmax based on an unsigned long.
Tested on a 32-bit X86 machine.
Signed-off-by: Mel Gorman
---
hugeadm.c | 17 +++--
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/hugeadm.c b/hugeadm.c
size_to_smaller_units() to unsigned long long to manage
the overflow. The existing callers of size_to_smaller_units() should be
all right as they are always talking about the context of a huge page size
which is never going to overflow the long type.
Tested on 32-bit X86.
Signed-off-by: Mel
On Thu, Oct 01, 2009 at 04:04:44PM -0400, Jarod Wilson wrote:
> On 10/01/2009 11:14 AM, Mel Gorman wrote:
>> On Thu, Oct 01, 2009 at 10:18:50AM -0400, Jarod Wilson wrote:
> ...
>>>>Is it really a good idea fix shmmax as the total of maximum
>>>>memory.
On Wed, Sep 30, 2009 at 12:22:34PM -0400, Jarod Wilson wrote:
> On 09/18/2009 03:43 AM, Mel Gorman wrote:
>> On Thu, Sep 17, 2009 at 04:59:15PM -0400, Jarod Wilson wrote:
>>>>> The attached python script has been used successfully on Red Hat
>>>>> Enterpris
On Thu, Oct 01, 2009 at 10:18:50AM -0400, Jarod Wilson wrote:
> On 10/01/2009 09:07 AM, Mel Gorman wrote:
>> On Wed, Sep 30, 2009 at 12:22:34PM -0400, Jarod Wilson wrote:
> ...
>>> So hopefully, I've not butchered anything *too* badly...
>>>
>>
>> Th
On Thu, Sep 17, 2009 at 04:59:15PM -0400, Jarod Wilson wrote:
> On 09/17/2009 06:46 AM, Mel Gorman wrote:
>> On Wed, Sep 09, 2009 at 11:04:57AM -0400, Jarod Wilson wrote:
>>> Hey folks,
>>>
>>> We (Red Hat) get the occasional complaint, particularly from jboss
try:
> limitsConfLines = open(limitsConf).readlines()
> os.rename(limitsConf, limitsConf + ".backup")
> print("Saved original %s as %s.backup" % (limitsConf, limitsConf))
> except:
> pass
>
> fd = op
On Thu, Aug 27, 2009 at 10:28:58PM +0200, Toon Moene wrote:
> Mel Gorman wrote:
>
>> On Mon, Aug 24, 2009 at 11:51:07PM +0200, Toon Moene wrote:
>
>>> [ I found this e-mail address while surfing http://linux-mm.org -
>>>hope it's relevant. ]
>>>
to 64K so those page sizes
can be used at least but on x86, a recompile is needed.
> Thanks in advance for any insight provided ... and happy hacking !
>
Hope this helped.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick
The sample period for timer can be scaled with --sample-cycle-factor.
However, on ppc64, there are multiple timer events depending on which group
is being used. This patch pattern matches for the timer events properly.
Signed-off-by: Mel Gorman
---
oprofile_map_events.pl |2 +-
1 file
supported.
Signed-off-by: Mel Gorman
---
oprofile_map_events.pl |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/oprofile_map_events.pl b/oprofile_map_events.pl
index 5c9e9b6..a172325 100755
--- a/oprofile_map_events.pl
+++ b/oprofile_map_events.pl
@@ -40,11 +40,11
DTLB miss.
This patch adds a mapping for the necessary event.
Signed-off-by: Mel Gorman
---
oprofile_map_events.pl |4
1 file changed, 4 insertions(+)
diff --git a/oprofile_map_events.pl b/oprofile_map_events.pl
index 2aff660..5c9e9b6 100755
--- a/oprofile_map_events.pl
+++ b
cator: Limit
the number of MIGRATE_RESERVE pageblocks per zone" applied. It has been
sent for consideration in -mm.
Signed-off-by: Mel Gorman
---
hugeadm.c | 11 ++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/hugeadm.c b/hugeadm.c
index 6ebe153..a793267 100644
--- a
the
system. This may turn out to be too conservative, particularly where
there are large variances between zone sizes but it is a reasonable
starting point.
Signed-off-by: Mel Gorman
---
hugeadm.c | 21 +
man/hugeadm.8 | 10 ++
2 files changed, 31 insertions
The following two patches are concerned with min_free_kbytes.
The first patch adds support to --explain that recommends a value for
min_free_kbytes that should help fragmentation avoidance. A higher value
for min_free_kbytes will be of use in situations where the static hugepage
pool is being resi
mixing is to increase the value
of min_free_kbytes. This patch compares the current value of min_free_kbytes
with a recommended value and warns if the current value is too low.
Signed-off-by: Mel Gorman
---
hugeadm.c | 46 ++
1 files changed, 46
oprofile_start.sh was lifted from VMRegress which while maintained is no
longer released. When moving to libhugetlbfs, a reference to VMRegress
was improperly left behind. This patch removes it.
Signed-off-by: Mel Gorman
---
oprofile_start.sh |4 +---
1 file changed, 1 insertion(+), 3
vents such as DTLB misses are significantly different. CPU
cycles may now need to be scaled by a factor of 4 and DTLB misses by a
factor of 16 to get acceptable performance overhead of profiling.
This patch allows CPU cycles and events to be scaled by separate values.
Signed-off-by: Mel G
1
> TMPLIB64 = lib64
> +CFLAGS += -DNO_ELFLINK
> +else
> +ifeq ($(ARCH),s390)
> +CC32 = gcc -m31
> TMPLIB32 = lib
> CFLAGS += -DNO_ELFLINK
> else
> @@ -84,6 +87,7 @@ endif
> endif
> endif
> endif
On Fri, May 29, 2009 at 06:24:41PM -0700, Avantika Mathur wrote:
> Mel Gorman wrote:
>
>>> +
>>> + /* swapsize is 5 hugepages (in KB) */
>>> + swap_size = gethugepagesize() * 5;
>>> + buf = malloc(swap_size);
>>> + memset(buf, 0, swap_si
==
> --- libhugetlbfs-tempswap.orig/man/hugeadm.8 2009-05-22 14:31:19.0
> -0700
> +++ libhugetlbfs-tempswap/man/hugeadm.8 2009-05-22 14:55:47.0
> -0700
> @@ -148,6 +148,13 @@
> to resize the pool up to 5 times and continues to try if
bug.
Note if this test fails, the system may no longer be usable for hugepage
testing as the system will always think it has insufficient pages.
A patch is currently being tested for this bug but no fix is merged
upstream yet.
Signed-off-by: Mel Gorman
---
tests/Makefile |3 -
tests
The upstream kernel now has one fix that covers the readahead, fadvise
and madvise bugs. Record what commit fixes the problem in the test in
case they need to be found for backporting later.
Signed-off-by: Mel Gorman
---
tests/fadvise_reserve.c |1 +
tests/madvise_reserve.c |2
size and
> ignores all pool resize requests after the first POOL_MAX.
>
> Signed-off-by: Eric B Munson
Acked-by: Mel Gorman
> ---
> Changes from V1:
> -Fix problem that skipped index 0 if adjust arrays.
>
> hugeadm.c | 17 +++--
> 1 files changed, 15 ins
less than 2.6.30.
>
The fixes for this will need to be backported and this check will need
to be more sophisticated, but sure, this is ok for the moment.
> Signed-off-by: Eric B Munson
Acked-by: Mel Gorman
> ---
> tests/Makefile |3 ++-
> tests/fad
tion
> that is called during hugetlb_setup.
>
> Signed-off-by: Eric B Munson
Looks good.
Acked-by: Mel Gorman
> ---
> hugeutils.c | 14 ++
> init.c |1 +
> libhugetlbfs_internal.h |2 ++
> morecore.c | 11
;
> + if (maxadj_count >= MAX_POOLS) {
> + WARNING("Attempting to adjust an invalid "
> + "pool or a pool multiple times, "
> + "ignor
min value again to actually configure overcommit.
The problem is that opt_min_adj[] is being manipulated when --pool-pages-max is
specified. This patch populates the opt_max_adj[] array for --pool-pages-max,
instead of opt_min_adj[].
Signed-off-by: Mel Gorman
---
hugeadm.c |2 +-
1 file
ZONE_MOVABLE for the allocation of huge pages.
Signed-off-by: Mel Gorman
---
hugeadm.c | 33 +
man/hugeadm.8 | 22 ++
2 files changed, 55 insertions(+)
diff --git a/hugeadm.c b/hugeadm.c
index 0ad69cc..cb2f1dd 100644
--- a/hugeadm.c
Use of fadvise() or readahead() on a hugetlbfs-backed memory region can
result in reservations being leaked and in some cases the kernel triggering
a BUG. This patch adds a regression tests for these conditions.
Signed-off-by: Mel Gorman
---
tests/Makefile|3 +
tests
commit fixes the bug
Signed-off-by: Mel Gorman
---
tests/Makefile |2 -
tests/madvise_reserve.c | 81
tests/run_tests.py |1
3 files changed, 83 insertions(+), 1 deletion(-)
diff --git a/tests/Makefile b/tests/Makefile
ff-by: Mel Gorman
---
tests/Makefile |2 -
tests/madvise_reserve.c | 85
tests/run_tests.py |1
3 files changed, 87 insertions(+), 1 deletion(-)
diff --git a/tests/Makefile b/tests/Makefile
index 31b1b3b..d3efe79 100644
--- a/
previous = list;
> + list = list->next;
> + free(previous);
> + }
> + return 0;
> + }
> +
> + while (list) {
> + previous = list;
> + list = list->next;
>
t;next;
> +void mounts_list_all(void)
> +{
> + struct mount_list *list, *previous;
> + int longest = MIN_COL;
> +
> + list = collect_active_mounts(&longest);
> +
> + if (!list) {
> + ERROR("No hugetlbfs mount points found\n"
> + if (opt_global_mounts) {
> + snprintf(base, PATH_MAX, "%s/global", MOUNT_DIR);
> + create_mounts(NULL, NULL, base, S_IRWXU | S_IRWXG | S_IRWXO);
> + }
> +
> + if (opt_pgsizes)
> + page_sizes(0);
> +
> + if (opt_pgs
= collect_mounts(&dummy);
> +
> + if (list && check_mount(list, path)) {
> + INFO("Directory %s is already mounted\n", path);
I think this should be a warning. Otherwise without increasing the
verbosity, this will be easily missed.
> +
ximum Default
>4194304 30 30 30*
>
> Huge page sizes with configured pools:
> 4194304
> hugeadm: WARNING: Swap is full or no swap space configured, resizing pool may
> fail.
> emun...@lappy-486:~$
>
> Signed-off-by: Eric B Munson
Acked-
WARNING("There is very little swap space free, resizing
> hugepage pool may fail\n");
> +}
> +
> enum {
> POOL_MIN,
> POOL_MAX,
> @@ -502,6 +527,8 @@ void pool_adjust(char *cmd, unsigned int counter)
> exit(EXIT_FAILURE);
>
7;d'},
> + {"explain", no_argument, NULL, LONG_EXPLAIN},
>
> {0},
> };
> @@ -721,6 +737,10 @@ int main(int argc, char** arg
oid pool_adjust(char *cmd, unsigned int counter)
> exit(EXIT_FAILURE);
> }
>
> + check_swap();
> +
> min = pools[pos].minimum;
> max = pools[pos].maximum;
>
> --
> 1.6.1.2
>
>
> ---------
libhugetlbfs linker script.
> +
It doesn't explain *why* you would use it. What about;
Force pre-loading of the \fBlibhugetlbfs\fP library. This option is used when
the segments of the binary are aligned
to be specified
anywhere. --pool-pages-min can be specified multiple times for different
pools. My bad, so here is a patch on top of yours.
From: Mel Gorman
Subject: [PATCH] Allow --pool-pages-min to be specified multiple times after
--hard implementation
--pool-pages-min can be specified multiple t
On Mon, Mar 23, 2009 at 11:12:47PM -0700, Avantika Mathur wrote:
> Mel Gorman wrote:
>
>> On Fri, Mar 20, 2009 at 05:48:25PM -0700, Avantika Mathur wrote:
>>
>>> Mel Gorman wrote:
>>>
>> Fix it up so that --hard can be specified anywhere and I'
On Fri, Mar 20, 2009 at 11:32:24AM +, Eric B Munson wrote:
> > > int opt_dry_run = 0;
> >
> > Total aside, opt_dry_run appears to be totally dead in this C file. Who
> > sets it?
> >
>
> It is set by specifying --dry-run or -d.
>
gack, I was
On Fri, Mar 20, 2009 at 05:48:25PM -0700, Avantika Mathur wrote:
> Mel Gorman wrote:
>
>> On Thu, Mar 19, 2009 at 09:53:39PM -0700, Avantika Mathur wrote:
>>
>>
>> Patch cleaniness issues. Not a massive deal here but a kernel
>> submission patch would attr
t, NULL, LONG_PAGE_SIZES},
> {"page-sizes-all", no_argument, NULL, LONG_PAGE_AVAIL},
> {"dry-run", no_argument, NULL, 'd'},
> + {"hard", no_argument, NULL, 'r'},
The 'r' of the switch appears to
s topic in the documentation as well but it's
not mandatory.
> +
> enum {
> POOL_MIN,
> POOL_MAX,
> @@ -495,6 +511,8 @@ void pool_adjust(char *cmd, unsigned int counter)
> exit(EXIT_FAILURE);
> }
>
> + check_swap();
> + S_IRGRP | S_IWGRP | S_IXGRP |
> + S_IROTH | S_IWOTH | S_IXOTH);
> + create_global_mounts();
> + break;
> +
> case LONG_PAGE_SIZES:
> page_sizes(0);
>
main(int argc, char** argv)
> {"pool-list", no_argument, NULL, LONG_POOL_LIST},
> {"pool-pages-min", required_argument, NULL, LONG_POOL_MIN_ADJ},
> {"pool-pages-max", required_argument, NULL, LONG_POOL_MAX_ADJ},
> + {"create-mo
>gr_gid);
> + break;
> +
> + case LONG_CREATE_USER_MOUNTS:
> + ensure_dir(MOUNT_DIR);
> + chmod(MOUNT_DIR, S_IRUSR | S_IWUSR | S_IXUSR |
> + S_IRGRP | S_IWGRP | S_IXGRP |
>
opt_library = optarg;
> break;
>
> + case LONG_SHARE:
> + opt_share = 1;
> + break;
> +
> case -1:
> break;
>
> @@ -414,6 +423,9 @@ int main(int argc, char** argv
The benchmark is readily modified to use malloc() or shared memory instead
so you can verify each aspect of libhugetlbfs and kernel hugepage support
is working as expected by monitoring /proc/meminfo for hugepage usage.
--
Mel Gorman
Part-time Phd Student