Joel Schopp wrote:
>> But if you don't require a lot of higher order allocations anyway, then
>> guest fragmentation caused by ballooning doesn't seem like much of a problem.
>
> If you only need allocations of one page and smaller then no, it's not a
> problem. As soon as you go above that it will be. You don't need to go
> all the way up to MAX_ORDER size to see an impact; it just gets
> increasingly severe as you move away from one page towards MAX_ORDER.
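Joel's claim is easy to demonstrate with a toy model: balloon out a random quarter of a guest's pages and count how many aligned free blocks of each order survive. This is a sketch only, with illustrative sizes, not kernel code:

```python
import random

def count_free_blocks(free, order, total_pages):
    """Count aligned, fully free blocks of 2**order pages."""
    size = 2 ** order
    return sum(
        all(free[p] for p in range(base, base + size))
        for base in range(0, total_pages, size)
    )

random.seed(42)
TOTAL = 4096                      # pages in a toy "guest"
free = [True] * TOTAL

# Balloon out 25% of the pages, chosen at random -- roughly what
# naive ballooning does to contiguity.
for page in random.sample(range(TOTAL), TOTAL // 4):
    free[page] = False

for order in (0, 2, 4, 10):
    blocks = count_free_blocks(free, order, TOTAL)
    print(f"order {order:2d}: {blocks} free blocks of {2 ** order} pages")
```

Order-0 availability is untouched (75% of pages are still free), but the supply of free order-4 blocks collapses and order-10 blocks disappear entirely with this seed: the effect gets more severe the higher the order, as described above.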
Linus Torvalds writes:
The point being that in the guests, hotunplug is almost useless (for
bigger ranges), and we're much better off just telling the virtualization
hosts on a per-page level whether we care about a page or not, than to
worry about fragmentation.
We don't have that luxury
On Fri, Mar 02, 2007 at 11:05:15AM -0600, Joel Schopp wrote:
> Linus Torvalds wrote:
> > On Thu, 1 Mar 2007, Andrew Morton wrote:
> > > So some urgent questions are: how are we going to do mem hotunplug and
> > > per-container RSS?
>
> The people who were trying to do memory hot-unplug basically all stopped
> waiting for these patches, or something similar, to solve the
Nick Piggin wrote:
Different issue, isn't it? Rik wants to be smarter in figuring out which
pages to throw away. More work per page == worse for you.
Being smarter about figuring out which pages to evict does
not equate to spending more work. One big component is
sorting the pages
On Sat, 3 Mar 2007, Martin J. Bligh wrote:
> That'd be nice. Unfortunately we're stuck in the real world with
> real hardware, and the situation is likely to remain thus for
> quite some time ...
Our real hardware does behave as described and therefore does not suffer
from the problem.
If you
On Fri, 2 Mar 2007 16:32:07 +
[EMAIL PROTECTED] (Mel Gorman) wrote:
> The zone-based patches for memory partitioning should be providing what is
> required for memory hot-remove of an entire DIMM or bank of memory (PPC64
> also cares about removing smaller blocks of memory but zones are
On Fri, 2 Mar 2007, William Lee Irwin III wrote:
>> AIUI that phenomenon is universal to NUMA. Maybe it's time we
>> reexamined our locking algorithms in the light of fairness
>> considerations.
On Fri, Mar 02, 2007 at 07:15:38PM -0800, Christoph Lameter wrote:
> This is a phenomenon that is
On Fri, 2 Mar 2007 17:40:04 -0800 William Lee Irwin III <[EMAIL PROTECTED]>
wrote:
>> My gut feeling is to agree, but I get nagging doubts when I try to
>> think of how to boil things like [major benchmarks whose names are
>> trademarked/copyrighted/etc. censored] down to simple testcases. Some
On Fri, 2 Mar 2007 17:40:04 -0800
William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> On Fri, Mar 02, 2007 at 02:59:06PM -0800, Andrew Morton wrote:
> > Somehow I don't believe that a person or organisation which is incapable of
> > preparing even a simple testcase will be capable of fixing
On Fri, Mar 02, 2007 at 02:59:06PM -0800, Andrew Morton wrote:
> What is it with vendors finding MM problems and either not fixing them or
> kludging around them and not telling the upstream maintainers about *any*
> of it?
I'm not in the business of defending vendors, but a lot of times the
base
On Fri, Mar 02, 2007 at 02:22:56PM -0800, Andrew Morton wrote:
> Opterons seem to be particularly prone to lock starvation where a cacheline
> gets captured in a single package for ever.
AIUI that phenomenon is universal to NUMA. Maybe it's time we
reexamined our locking algorithms in the light of fairness considerations.
.. and think about a realistic future.
EVERYBODY will do on-die memory controllers. Yes, Intel doesn't do it
today, but in the one- to two-year timeframe even Intel will.
What does that mean? It means that in bigger systems, you will no longer
even *have* 8 or 16 banks where turning off a
32GB is pretty much the minimum size to reproduce some of these
problems. Some workloads may need larger systems to easily trigger
them.
We can find a 32GB system here pretty easily to test things on if
need be. Setting up large commercial databases is much harder.
That's my problem, too.
Andrew Morton wrote:
Somehow I don't believe that a person or organisation which is incapable of
preparing even a simple testcase will be capable of fixing problems such as
this without breaking things.
I don't believe anybody who relies on one simple test case will
ever be capable of
On Fri, 02 Mar 2007 17:34:31 -0500
Rik van Riel <[EMAIL PROTECTED]> wrote:
> The main reason they end up pounding the LRU locks is the
> swappiness heuristic. They scan too much before deciding
> that it would be a good idea to actually swap something
> out, and with 32 CPUs
Martin Bligh wrote:
None of this is going anywhere, is it?
I will test my changes before I send them to you, but I cannot
promise you that you'll have the computers or software needed
to reproduce the problems. I doubt I'll have full time access
to such systems myself, either.
32GB is
Rik van Riel wrote:
> 32GB is pretty much the minimum size to reproduce some of these
> problems. Some workloads may need larger systems to easily trigger
> them.
>
Hundreds of disks all doing IO at once may also be needed, as
wli points out. Such systems are not readily available for testing.
At some point in the past, Mel Gorman wrote:
>> I can't think of a workload that totally makes a mess out of list-based.
>> However, list-based makes no guarantees on availability. If a system
>> administrator knows they need between 10,000 and 100,000 huge pages and
>> doesn't want to waste
On Fri, Mar 02, 2007 at 01:23:28PM -0500, Rik van Riel wrote:
> With 32 CPUs diving into the page reclaim simultaneously,
> each trying to scan a fraction of memory, this is disastrous
> for performance. A 256GB system should be even worse.
Thundering herds of a sort pounding the LRU locks from
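A common escape from a single global LRU lock is sharding, so each CPU mostly scans its own list; that is broadly the direction later per-zone and per-memcg LRU work took. A toy model only, with illustrative names and none of the kernel's actual locking:

```python
import threading
from collections import deque

def reclaim_all(lists, locks, nr_threads):
    """Each 'CPU' drains its own shard first, then falls back to others."""
    reclaimed = [0] * nr_threads

    def worker(cpu):
        for i in range(len(lists)):
            shard = (cpu + i) % len(lists)
            with locks[shard]:          # contention is per-shard, not global
                while lists[shard]:
                    lists[shard].pop()  # "evict" one page
                    reclaimed[cpu] += 1

    threads = [threading.Thread(target=worker, args=(c,))
               for c in range(nr_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(reclaimed)

NR_LISTS, NR_THREADS, PAGES = 8, 8, 1000
lists = [deque(range(PAGES)) for _ in range(NR_LISTS)]
locks = [threading.Lock() for _ in range(NR_LISTS)]
total = reclaim_all(lists, locks, NR_THREADS)
print(total)   # 8000: every page reclaimed exactly once
```

With one lock per shard, 32 CPUs diving into reclaim at once serialize only when two of them land on the same shard, rather than all herding onto one lock.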
On Fri, 02 Mar 2007 12:43:42 -0500 Rik van Riel <[EMAIL PROTECTED]> wrote:
>> I can't share all the details, since a lot of the problems are customer
>> workloads.
>> One particular case is a 32GB system with a database that takes most
>> of memory. The amount of actually freeable page cache
On Fri, 2 Mar 2007, Rik van Riel wrote:
> I would like to see separate pageout selection queues
> for anonymous/tmpfs and page cache backed pages. That
> way we can simply scan only that what we want to scan.
>
> There are several ways available to balance pressure
> between both sets of lists.
On Fri, 2 Mar 2007, Mark Gross wrote:
>
> I think there will be more than just 2 DIMMs per CPU socket on systems
> that care about this type of capability.
I agree. I think you'll have a nice mix of 2 and 4, although not likely a
lot more. You want to have independent channels, and then within
On Fri, 2 Mar 2007, Andrew Morton wrote:
> > One particular case is a 32GB system with a database that takes most
> > of memory. The amount of actually freeable page cache memory is in
> > the hundreds of MB.
>
> Where's the rest of the memory? tmpfs? mlocked? hugetlb?
The memory is likely
On Fri, 2 Mar 2007 09:35:27 -0800
Mark Gross <[EMAIL PROTECTED]> wrote:
> >
> > Will it be possible to just power the DIMMs off? I don't see much point in
> > some half-power non-destructive mode.
>
> I think so, but need to double check with the HW folks.
>
> Technically, the DIMMs could be
On Fri, 2 Mar 2007, Mel Gorman wrote:
> > I still think that the list based approach is sufficient for memory
> > hotplug if one restricts the location of the unmovable MAX_ORDER chunks
> > to not overlap the memory area where we would like to be able to remove
> > memory.
>
> Yes, true. In
On Fri, 2 Mar 2007, Andrew Morton wrote:
> > Linux is *not* happy on 256GB systems. Even on some 32GB systems
> > the swappiness setting *needs* to be tweaked before Linux will even
> > run in a reasonable way.
>
> Please send testcases.
It is not happy if you put 256GB into one zone. We are
On Fri, 2 Mar 2007, Mel Gorman wrote:
> However, if that is objectionable, I'd at least like to see zone-based patches
> go into -mm on the expectation that the memory hot-remove patches will be
> able to use the infrastructure. It's not ideal for hugepages and it is not my
> first preference,
On Fri, 2 Mar 2007, Mark Gross wrote:
> >
> > Yes, the same issues exist for other DRAM forms too, but to a *much*
> > smaller degree.
>
> DDR3-1333 may be better than FBDIMM's but don't count on it being much
> better.
Hey, fair enough. But it's not a problem (and it doesn't have a solution) today.
On Fri, 2 Mar 2007 08:20:23 -0800 Mark Gross <[EMAIL PROTECTED]> wrote:
> > The whole DRAM power story is a bedtime story for gullible children. Don't
> > fall for it. It's not realistic. The hardware support for it DOES NOT
> > EXIST today, and probably won't for several years. And the real
On Fri, 2 Mar 2007, Nick Piggin wrote:
> > Oh just run a 32GB SMP system with sparsely freeable pages and lots of
> > allocs and frees and you will see it too. F.e try Linus tree and mlock
> > a large portion of the memory and then see the fun starting. See also
> > Rik's list of pathological
On (02/03/07 15:15), Paul Mundt didst pronounce:
> On Fri, Mar 02, 2007 at 02:50:29PM +0900, KAMEZAWA Hiroyuki wrote:
> > On Thu, 1 Mar 2007 21:11:58 -0800 (PST)
> > Linus Torvalds <[EMAIL PROTECTED]> wrote:
> >
> > > The whole DRAM power story is a bedtime story for gullible children.
> > >
On (01/03/07 16:44), Linus Torvalds didst pronounce:
>
>
> On Thu, 1 Mar 2007, Andrew Morton wrote:
> >
> > So some urgent questions are: how are we going to do mem hotunplug and
> > per-container RSS?
>
> Also: how are we going to do this in virtualized environments? Usually the
> people who
On (01/03/07 16:09), Andrew Morton didst pronounce:
> On Thu, 1 Mar 2007 10:12:50 +
> [EMAIL PROTECTED] (Mel Gorman) wrote:
>
> > Any opinion on merging these patches into -mm
> > for wider testing?
>
> I'm a little reluctant to make changes to -mm's core mm unless those
> changes are
Exhibiting a workload where the list patch breaks down and the zone
patch rescues it might help if it's felt that the combination isn't as
good as lists in isolation. I'm sure one can be dredged up somewhere.
I can't think of a workload that totally makes a mess out of list-based.
However,
On Thu, Mar 01, 2007 at 09:11:58PM -0800, Linus Torvalds wrote:
>
> On Thu, 1 Mar 2007, Andrew Morton wrote:
> >
> > On Thu, 1 Mar 2007 19:44:27 -0800 (PST) Linus Torvalds <[EMAIL PROTECTED]>
> > wrote:
> >
> > > In other words, I really don't see a huge upside. I see *lots* of
> > > downsides, but upsides? Not
Andrew Morton wrote:
And I'd judge that per-container RSS limits are of considerably more value
than antifrag (in fact per-container RSS might be a superset of antifrag,
in the sense that per-container RSS and containers could be abused to fix
the i-cant-get-any-hugepages problem, dunno).
The
On Thu, 1 Mar 2007, Bill Irwin wrote:
On Thu, Mar 01, 2007 at 10:12:50AM +, Mel Gorman wrote:
These are figures based on kernels patched with Andy Whitcroft's reclaim
patches. You will see that the zone-based kernel is getting success rates
closer to 40% as one would expect although there
On Thu, Mar 01, 2007 at 11:44:05PM -0800, Christoph Lameter wrote:
> On Fri, 2 Mar 2007, Nick Piggin wrote:
>
> > > Sure we will. And you believe that the newer controllers will be able
> > > to magically shrink the SG lists somehow? We will offload the
> > > coalescing of the page
On Fri, 2 Mar 2007, Nick Piggin wrote:
If there are billions of pages in the system and we are allocating and
deallocating then pages need to be aged. If there are just few pages
freeable then we run into issues.
page writeout and vmscan don't work too badly. What are the issues?
Slow