We've re-evaluated the need for a patch to support some sort of
finer-grained control over THP, and, based on tests performed by our
benchmarking team, we'd definitely still like to implement some method
to support this. Here's an e-mail from John Baron (jba...@sgi.com),
On Thu, 20 Jun 2013, Mike Galbraith wrote:
> > I'm suspecting that you're referring to enlarged rss because of
> > khugepaged's max_ptes_none and because you're abusing the purpose of
> > cpusets for containerization.
>
> Why is containerization an abuse? What's wrong with renting out chunks of a
Cc: Tejun, and cgroup ML
>> Here are the entries in the cpuset:
>> cgroup.event_control  mem_exclusive  memory_pressure_enabled
>> notify_on_release  tasks
>> cgroup.procs  mem_hardwall  memory_spread_page  release_agent
>> cpu_exclusive  memory_migrate
On Wed, 2013-06-19 at 19:43 -0700, David Rientjes wrote:
>
> I'm suspecting that you're referring to enlarged rss because of
> khugepaged's max_ptes_none and because you're abusing the purpose of
> cpusets for containerization.
Why is containerization an abuse? What's wrong with renting out
On Wed, 19 Jun 2013, Robin Holt wrote:
> cpusets was not for NUMA. It has no preference for "nodes" or anything like
> that. It was for splitting a machine into layered smaller groups. Usually,
> we see one cpuset which contains the batch scheduler. The batch scheduler then
> creates cpusets for
On Wed, Jun 19, 2013 at 02:24:07PM -0700, David Rientjes wrote:
> On Wed, 19 Jun 2013, Robin Holt wrote:
>
> > The convenience being that many batch schedulers have added cpuset
> > support. They create the cpusets and configure them as appropriate
> > for the job as determined by a mixture of input
On Wed, 19 Jun 2013, Robin Holt wrote:
> The convenience being that many batch schedulers have added cpuset
> support. They create the cpusets and configure them as appropriate
> for the job as determined by a mixture of input from the submitting
> user but still under the control of the
On Tue, Jun 18, 2013 at 05:01:23PM -0700, David Rientjes wrote:
> On Tue, 18 Jun 2013, Alex Thorlton wrote:
>
> > Thanks for your input, however, I believe the method of using a malloc
> > hook falls apart when it comes to static binaries, since we won't have
> > any shared libraries to hook into. Although using a malloc hook is a
> > perfectly suitable solution for most
On Tue, Jun 11, 2013 at 03:20:09PM -0700, David Rientjes wrote:
> On Tue, 11 Jun 2013, Alex Thorlton wrote:
>
> > This patch adds the ability to control THPs on a per cpuset basis.
> > Please see the additions to Documentation/cgroups/cpusets.txt for
> > more information.
> >
>
> What's missing from both this changelog and the documentation you point to
> is why this change is
This patch adds the ability to control THPs on a per cpuset basis. Please see
the additions to Documentation/cgroups/cpusets.txt for more information.
Signed-off-by: Alex Thorlton athorl...@sgi.com
Reviewed-by: Robin Holt h...@sgi.com
Cc: Li Zefan lize...@huawei.com
Cc: Rob Landley
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Rik van Riel
Cc: