On Wed, Aug 15 2012, Christoph Lameter wrote:
> On Wed, 15 Aug 2012, Michal Hocko wrote:
>
>> > That is not what the kernel does, in general. We assume that if he wants
>> > that memory and we can serve it, we should. Also, not all kernel memory
>> > is unreclaimable. We can shrink the slabs, for
On Wed, Aug 15 2012, Glauber Costa wrote:
> On 08/14/2012 10:58 PM, Greg Thelen wrote:
>> On Mon, Aug 13 2012, Glauber Costa wrote:
>>
>>>>>> +WARN_ON(mem_cgroup_is_root(memcg));
>>>>>> +size = (1 << order) <<
On Wed, Aug 15 2012, Glauber Costa wrote:
> On 08/15/2012 08:38 PM, Greg Thelen wrote:
>> On Wed, Aug 15 2012, Glauber Costa wrote:
>>
>>> On 08/14/2012 10:58 PM, Greg Thelen wrote:
>>>> On Mon, Aug 13 2012, Glauber Costa wrote:
>>>>
Since 628f423553 "memcg: limit change shrink usage" both
res_counter_write() and write_strategy_fn have been unused. This
patch deletes them both.
Signed-off-by: Greg Thelen
---
include/linux/res_counter.h | 5 -
kernel/res_counter.c | 22 --
2 fil
We ran some netperf comparisons measuring the overhead of enabling
CONFIG_MEMCG_KMEM with a kmem limit. Short answer: no regression seen.
This is a multiple machine (client,server) netperf test. Both client
and server machines were running the same kernel with the same
configuration.
A
Move the cgroup_event_listener.c tool from Documentation into the new
tools/cgroup directory.
This change involves wiring cgroup_event_listener.c into the tools/
make system so that it can be built with:
$ make tools/cgroup
Signed-off-by: Greg Thelen
---
Documentation/cgroups/00-INDEX
this:
$ gcc -Wall -O2 cgroup_event_listener.c
cgroup_event_listener.c: In function ‘main’:
cgroup_event_listener.c:109:2: warning: ‘ret’ may be used uninitialized in
this function [-Wuninitialized]
Signed-off-by: Greg Thelen
---
tools/cgroup/cgroup_event_listener.c |2 +-
1 files changed, 1
On Tue, Dec 25 2012, Sha Zhengju wrote:
> From: Sha Zhengju
>
> Similar to dirty page, we add per cgroup writeback pages accounting. The lock
> rule still is:
> mem_cgroup_begin_update_page_stat()
> modify page WRITEBACK stat
> mem_cgroup_update_page_stat()
>
> incrementing (2):
> __set_page_dirty
> __set_page_dirty_nobuffers
> decrementing (2):
> clear_page_dirty_for_io
> cancel_dirty_page
>
> To prevent AB/BA deadlock mentioned by Greg Thelen in previous version
>
On Fri, Jul 27 2012, Sha Zhengju wrote:
> From: Sha Zhengju
>
> This patch adds memcg routines to count dirty pages, which allows memory
> controller
> to maintain an accurate view of the amount of its dirty memory and can
> provide some
> info for users while group's direct reclaim is
On Fri, Jul 27 2012, Sha Zhengju wrote:
> From: Sha Zhengju
>
> Similar to dirty page, we add per cgroup writeback pages accounting. The lock
> rule still is:
> mem_cgroup_begin_update_page_stat()
> modify page WRITEBACK stat
> mem_cgroup_update_page_stat()
>
On Wed, Aug 15 2012, Glauber Costa wrote:
> On 08/15/2012 09:12 PM, Greg Thelen wrote:
>> On Wed, Aug 15 2012, Glauber Costa wrote:
>>
>>> On 08/15/2012 08:38 PM, Greg Thelen wrote:
>>>> On Wed, Aug 15 2012, Glauber Costa wrote:
>>>>
>>>>
On Sun, Feb 10 2013, Anton Vorontsov wrote:
> With this patch userland applications that want to maintain the
> interactivity/memory allocation cost can use the new pressure level
> notifications. The levels are defined like this:
>
> The "low" level means that the system is reclaiming memory for
On Tue, Feb 12 2013, Anton Vorontsov wrote:
> Hi Greg,
>
> Thanks for taking a look!
>
> On Tue, Feb 12, 2013 at 10:42:51PM -0800, Greg Thelen wrote:
> [...]
>> > +static bool vmpressure_event(struct vmpressure *vmpr,
>> > +
rink_slab()")
Cc: # 4.19+
Signed-off-by: Greg Thelen
---
mm/shmem.c | 61 +++---
1 file changed, 35 insertions(+), 26 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index bd8840082c94..e11090f78cb5 100644
--- a/mm/shmem.c
+++ b/mm
Kirill Tkhai wrote:
> Hi, Greg,
>
> good finding. See comments below.
>
> On 01.06.2020 06:22, Greg Thelen wrote:
>> Since v4.19 commit b0dedc49a2da ("mm/vmscan.c: iterate only over charged
>> shrinkers during memcg shrink_slab()") a memcg aware shrinker is
Michal Hocko wrote:
> On Tue 24-10-17 14:58:54, Johannes Weiner wrote:
>> On Tue, Oct 24, 2017 at 07:55:58PM +0200, Michal Hocko wrote:
>> > On Tue 24-10-17 13:23:30, Johannes Weiner wrote:
>> > > On Tue, Oct 24, 2017 at 06:22:13PM +0200, Michal Hocko wrote:
>> > [...]
>> > > > What would
Johannes Weiner wrote:
> On Wed, Oct 25, 2017 at 09:00:57PM +0200, Michal Hocko wrote:
>> On Wed 25-10-17 14:11:06, Johannes Weiner wrote:
>> > "Safe" is a vague term, and it doesn't make much sense to me in this
>> > situation. The OOM behavior should be predictable and consistent.
>> >
>> >
sible compression formats.
Once patched usr/initramfs_data.cpio.gz and friends are deleted by
"make clean".
Fixes: 9e3596b0c653 ("kbuild: initramfs cleanup, set target from Kconfig")
Signed-off-by: Greg Thelen
---
usr/Makefile | 3 +++
1 file changed, 3 insertions(+)
SeongJae Park wrote:
> From: SeongJae Park
>
> This commit adds documents for DAMON under
> `Documentation/admin-guide/mm/damon/` and `Documentation/vm/damon/`.
>
> Signed-off-by: SeongJae Park
> ---
> Documentation/admin-guide/mm/damon/guide.rst | 157 ++
>
SeongJae Park wrote:
> From: SeongJae Park
>
> This commit introduces a reference implementation of the address space
> specific low level primitives for the virtual address space, so that
> users of DAMON can easily monitor the data accesses on virtual address
> spaces of specific processes by
e
direct side effect of "make -R". This enables arbitrary makefile
nesting.
Signed-off-by: Greg Thelen
---
tools/testing/selftests/Makefile | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
in
Oliver O'Halloran wrote:
> On Mon, Jun 15, 2020 at 9:33 AM Greg Thelen wrote:
>>
>> Commit dc3d8f85bb57 ("powerpc/powernv/pci: Re-work bus PE
>> configuration") removed a couple pnv_ioda_setup_bus_dma() calls. The
>> only remaining calls are behind C
Roman Gushchin wrote:
> # Why do we need this?
>
> We've noticed that the number of dying cgroups is steadily growing on most
> of our hosts in production. The following investigation revealed an issue
> in userspace memory reclaim code [1], accounting of kernel stacks [2],
> and also the
Dave Hansen wrote:
> From: Keith Busch
>
> Migrating pages had been allocating the new page before it was actually
> needed. Subsequent operations may still fail, which would have to handle
> cleaning up the newly allocated page when it was never used.
>
> Defer allocating the page until we are
Yang Shi wrote:
> On Sun, May 31, 2020 at 8:22 PM Greg Thelen wrote:
>>
>> Since v4.19 commit b0dedc49a2da ("mm/vmscan.c: iterate only over charged
>> shrinkers during memcg shrink_slab()") a memcg aware shrinker is only
>> called when the per-
:13: error:
'pnv_ioda_setup_bus_dma' defined but not used
Add CONFIG_IOMMU_API ifdef guard to avoid dead code.
Fixes: dc3d8f85bb57 ("powerpc/powernv/pci: Re-work bus PE configuration")
Signed-off-by: Greg Thelen
---
arch/powerpc/platforms/powernv/pci-ioda.c | 2 ++
1 file changed, 2 insertions(+)
diff
'
defined but not used [-Wunused-function]
Add CONFIG_PM_SLEEP ifdef guard to avoid dead code.
Fixes: e086ba2fccda ("e1000e: disable s0ix entry and exit flows for ME systems")
Signed-off-by: Greg Thelen
---
drivers/net/ethernet/intel/e1000e/netdev.c | 2 ++
1 file changed, 2 inser
:13: error:
'pnv_ioda_setup_bus_dma' defined but not used
Move pnv_ioda_setup_bus_dma() under CONFIG_IOMMU_API to avoid dead code.
Fixes: dc3d8f85bb57 ("powerpc/powernv/pci: Re-work bus PE configuration")
Signed-off-by: Greg Thelen
---
arch/powerpc/platforms/powernv/pci-ioda.c | 26 +++
On Tue, May 29, 2018 at 11:12 PM Greg Thelen wrote:
>
> Use smaller scan_control fields for order, priority, and reclaim_idx.
> Convert fields from int => s8. All easily fit within a byte:
> * allocation order range: 0..MAX_ORDER(64?)
> * priority range:
commit 93f78d882865 ("writeback: move backing_dev_info->bdi_stat[] into
bdi_writeback") replaced BDI_DIRTIED with WB_DIRTIED in
account_page_redirty(). Update comment to track that change.
BDI_DIRTIED => WB_DIRTIED
BDI_WRITTEN => WB_WRITTEN
Signed-off-by: Greg T
now
> - s@mem_cgroup_oom_enable@mem_cgroup_enter_user_fault@g
> s@mem_cgroup_oom_disable@mem_cgroup_exit_user_fault@g as per Johannes
> - make oom_kill_disable an exceptional case because it should be rare
> and the normal oom handling a core of the function - per Johannes
>
Michal Hocko wrote:
> On Thu 28-06-18 16:19:07, Greg Thelen wrote:
>> Michal Hocko wrote:
> [...]
>> > + if (mem_cgroup_out_of_memory(memcg, mask, order))
>> > + return OOM_SUCCESS;
>> > +
>> > + WARN(1,"Memory
Michal Hocko wrote:
> On Fri 29-06-18 11:59:04, Greg Thelen wrote:
>> Michal Hocko wrote:
>>
>> > On Thu 28-06-18 16:19:07, Greg Thelen wrote:
>> >> Michal Hocko wrote:
>> > [...]
>> >> > + if (mem_cgroup_out_of_memory(mem
es: 9533b292a7ac ("IB: remove redundant INFINIBAND kconfig
dependencies")
> Signed-off-by: Arnd Bergmann
Acked-by: Greg Thelen
Sorry for the 9533b292a7ac problem.
At this point in the release cycle, I think Arnd's revert is best.
If there is interest, I've put a little thought i
Jason Gunthorpe wrote:
On Fri, May 25, 2018 at 05:32:52PM -0700, Greg Thelen wrote:
On Fri, May 25, 2018 at 2:32 PM Arnd Bergmann wrote:
> Several subsystems depend on INFINIBAND_ADDR_TRANS, which in turn
depends
> on INFINIBAND. However, with CONFIG_INFINIBAND=m, this
te) rather than u8 to allow for loops like:
do {
...
} while (--sc.priority >= 0);
This reduces sizeof(struct scan_control) from 96 => 88 bytes (x86_64),
which saves some stack.
scan_control.priority field order is changed to occupy otherwise unused
padding.
Sig
On Fri, Mar 22, 2019 at 11:15 AM Roman Gushchin wrote:
>
> On Thu, Mar 07, 2019 at 08:56:32AM -0800, Greg Thelen wrote:
> > Since commit a983b5ebee57 ("mm: memcontrol: fix excessive complexity in
> > memory.stat reporting") memcg dirty and writeback counters are mana
weight than is required.
It probably also makes sense to use exact dirty and writeback counters
in memcg oom reports. But that is saved for later.
Cc: sta...@vger.kernel.org # v4.16+
Signed-off-by: Greg Thelen
---
Changelog since v1:
- Move memcg_exact_page_state() into memcontrol.c.
- Uncon
Andrew Morton wrote:
> On Thu, 7 Mar 2019 08:56:32 -0800 Greg Thelen wrote:
>
>> Since commit a983b5ebee57 ("mm: memcontrol: fix excessive complexity in
>> memory.stat reporting") memcg dirty and writeback counters are managed
>> as:
>> 1) per-memcg p
Johannes Weiner wrote:
> On Thu, Mar 07, 2019 at 08:56:32AM -0800, Greg Thelen wrote:
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -3880,6 +3880,7 @@ struct wb_domain *mem_cgroup_wb_domain(struct
>> bdi_writeback *wb)
>> * @pheadroom: out paramete
_wb transaction and
use it for stat updates")
Signed-off-by: Greg Thelen
---
include/linux/backing-dev.h | 2 +-
include/linux/fs.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index c28a47cbe355.
ercpu_counter. And the
percpu_counter spinlocks are more heavyweight than is required.
It probably also makes sense to include exact dirty and writeback
counters in memcg oom reports. But that is saved for later.
Signed-off-by: Greg Thelen
---
include/linux/memcontrol.h | 33 +
Yang Shi wrote:
> On 1/3/19 11:23 AM, Michal Hocko wrote:
>> On Thu 03-01-19 11:10:00, Yang Shi wrote:
>>>
>>> On 1/3/19 10:53 AM, Michal Hocko wrote:
On Thu 03-01-19 10:40:54, Yang Shi wrote:
> On 1/3/19 10:13 AM, Michal Hocko wrote:
>> [...]
>> Is there any reason for your scripts
at s8 is capable of storing max values.
This reduces sizeof(struct scan_control):
* 96 => 80 bytes (x86_64)
* 68 => 56 bytes (i386)
scan_control structure field order is changed to utilize padding.
After this patch there is 1 bit of scan_control padding.
Signed-off-by: Greg Thelen
Suggested-by: M
Matthew Wilcox wrote:
> On Mon, May 28, 2018 at 07:40:25PM -0700, Greg Thelen wrote:
>> Reclaim priorities range from 0..12(DEF_PRIORITY).
>> scan_control.priority is a 4 byte int, which is overkill.
>>
>> Since commit 6538b8ea886e ("x86_64: expand kerne
rdma_cm.ko?
That
> > >>> is not correct.
> > >> That seems like a reasonable thing to do..
> > > rdma_ucm.ko is for usermode users, rdma_cm.ko is for kernel users, and
> > > is required for iwarp drivers. It seems rdma_cm.ko is not being
> > > compiled if ADDR_TRANS is not set.
> I think the intention was to completely disable rdma-cm, including all
> support for rx'ing remote packets? Greg?
Yes. That's my goal when INFINIBAND_ADDR_TRANS is unset.
> If this is required for iwarp then Arnd's patch is probably the right
> way to go..
> Jason
Agreed.
Acked-by: Greg Thelen
Michal Hocko wrote:
> On Tue 03-07-18 00:08:05, Greg Thelen wrote:
>> Michal Hocko wrote:
>>
>> > On Fri 29-06-18 11:59:04, Greg Thelen wrote:
>> >> Michal Hocko wrote:
>> >>
>> >> > On Thu 28-
On Mon, Jun 4, 2018 at 4:07 PM Jason Gunthorpe wrote:
>
> On Thu, May 31, 2018 at 02:40:59PM -0400, Doug Ledford wrote:
> > On Wed, 2018-05-30 at 21:03 -0700, Greg Thelen wrote:
> > > On Wed, May 30, 2018 at 4:01 PM Jason Gunthorpe wrote:
> > >
> > > >
akenly
set.
Relocate endif to balance the newly added -record-mcount check.
Fixes: 96f60dfa5819 ("trace: Use -mcount-record for dynamic ftrace")
Signed-off-by: Greg Thelen
---
scripts/Makefile.build | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/Makefile.build b/sc
On Fri, Oct 31 2014, Junjie Mao wrote:
> When choosing a random address, the current implementation does not take into
> account the reversed space for .bss and .brk sections. Thus the relocated
> kernel
> may overlap other components in memory. Here is an example of the overlap
> from a
>
On Mon, Nov 17 2014, Greg Thelen wrote:
[...]
> Given that bss and brk are nobits (i.e. only ALLOC) sections, does
> file_offset make sense as a load address? This fails with gold:
>
> $ git checkout v3.18-rc5
> $ make # with gold
> [...]
> .bss and .brk lack commo
On Tue, Sep 16 2014, Vladimir Davydov wrote:
> Hi Suleiman,
>
> On Mon, Sep 15, 2014 at 12:13:33PM -0700, Suleiman Souhlal wrote:
>> On Mon, Sep 15, 2014 at 3:44 AM, Vladimir Davydov
>> wrote:
>> > Hi,
>> >
>> > I'd like to discuss downsides of the kmem accounting part of the memory
>> > cgroup
On Fri, Sep 19 2014, Johannes Weiner wrote:
> In a memcg with even just moderate cache pressure, success rates for
> transparent huge page allocations drop to zero, wasting a lot of
> effort that the allocator puts into assembling these pages.
>
> The reason for this is that the memcg reclaim
On Tue, Sep 23 2014, Johannes Weiner wrote:
> On Mon, Sep 22, 2014 at 10:52:50PM -0700, Greg Thelen wrote:
>>
>> On Fri, Sep 19 2014, Johannes Weiner wrote:
>>
>> > In a memcg with even just moderate cache pressure, success rates for
>> > transparent huge
mho...@kernel.org wrote:
> From: Michal Hocko
>
> Journal transaction might fail prematurely because the frozen_buffer
> is allocated by GFP_NOFS request:
> [ 72.440013] do_get_write_access: OOM for frozen_buffer
> [ 72.440014] EXT4-fs: ext4_reserve_inode_write:4729: aborting transaction:
Use BUILD_BUG_ON() to compile assert that memcg string tables are in
sync with corresponding enums. There aren't currently any issues with
these tables. This is just defensive.
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm
cnt 240649
Fixes: e61734c55c24 ("cgroup: remove cgroup->name")
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 851924fa5170..683b4782019b 100644
--- a/mm/memcontrol.c
+++
On Thu, Jan 08 2015, Johannes Weiner wrote:
> Introduce the basic control files to account, partition, and limit
> memory using cgroups in default hierarchy mode.
>
> This interface versioning allows us to address fundamental design
> issues in the existing memory cgroup interface, further
On Wed, Feb 04 2015, Tejun Heo wrote:
> Hello,
>
> On Tue, Feb 03, 2015 at 03:30:31PM -0800, Greg Thelen wrote:
>> If a machine has several top level memcg trying to get some form of
>> isolation (using low, min, soft limit) then a shared libc will be
>> moved t
On Thu, Feb 05 2015, Tejun Heo wrote:
> Hello, Greg.
>
> On Wed, Feb 04, 2015 at 03:51:01PM -0800, Greg Thelen wrote:
>> I think the linux-next low (and the TBD min) limits also have the
>> problem for more than just the root memcg. I'm thinking of a 2M file
>> s
On Thu, Feb 05 2015, Tejun Heo wrote:
> Hey,
>
> On Thu, Feb 05, 2015 at 02:05:19PM -0800, Greg Thelen wrote:
>> >A
>> >+-B(usage=2M lim=3M min=2M hosted_usage=2M)
>> > +-C (usage=0 lim=2M min=1M shared_usage=2M)
>> >
On Mon, Feb 2, 2015 at 11:46 AM, Tejun Heo wrote:
> Hey,
>
> On Mon, Feb 02, 2015 at 10:26:44PM +0300, Konstantin Khlebnikov wrote:
>
>> Keeping shared inodes in common ancestor is reasonable.
>> We could schedule asynchronous moving when somebody opens or mmaps
>> inode from outside of its
On Fri, Feb 6, 2015 at 6:17 AM, Tejun Heo wrote:
> Hello, Greg.
>
> On Thu, Feb 05, 2015 at 04:03:34PM -0800, Greg Thelen wrote:
>> So this is a system which charges all cgroups using a shared inode
>> (recharge on read) for all resident pages of that shared inode. The
On Thu, Jan 29 2015, Tejun Heo wrote:
> Hello,
>
> Since the cgroup writeback patchset[1] have been posted, several
> people brought up concerns about the complexity of allowing an inode
> to be dirtied against multiple cgroups is necessary for the purpose of
> writeback and it is true that a
On Mon, Mar 09 2015, David Rientjes wrote:
> If __get_user_pages() is faulting a significant number of hugetlb pages,
> usually as the result of mmap(MAP_LOCKED), it can potentially allocate a
> very large amount of memory.
>
> If the process has been oom killed, this will cause a lot of memory
> allocating user memory if TIF_MEMDIE is set"), hugetlb page faults now
> terminate when the process has been oom killed.
>
> Cc: Greg Thelen
> Cc: Naoya Horiguchi
> Cc: Davidlohr Bueso
> Acked-by: "Kirill A. Shutemov"
> Signed-off-by: David Rientjes
L
On Tue, Feb 10, 2015 at 6:19 PM, Tejun Heo wrote:
> Hello, again.
>
> On Sat, Feb 07, 2015 at 09:38:39AM -0500, Tejun Heo wrote:
>> If we can argue that memcg and blkcg having different views is
>> meaningful and characterize and justify the behaviors stemming from
>> the deviation, sure, that'd
On Wed, Feb 11, 2015 at 12:33 PM, Tejun Heo wrote:
[...]
>> page count to throttle based on blkcg's bandwidth. Note: memcg
>> doesn't yet have dirty page counts, but several of us have made
>> attempts at adding the counters. And it shouldn't be hard to get them
>> merged.
>
> Can you please
"cgroup.procs")
for i in range(n):
    os.rmdir(str(i))
patched: 1 loops: 1069 => 1170 (+101 ipis)
unpatched: 1 loops: 1192 => 48933 (+47741 ipis)
Signed-off-by: Greg Thelen
---
mm/slab.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
d
in this function [-Wmaybe-uninitialized]
mpt->mtt = mtt;
I think this warning is a false complaint. mpt is only used when
mr_res_start_move_to() return zero, and in all such cases it initializes
mpt. But apparently gcc cannot see that.
Initialize mpt to avoid the warning.
Signed-off-by: G
Leon Romanovsky wrote:
> [ Unknown signature status ]
> On Mon, Apr 17, 2017 at 11:21:35PM -0700, Greg Thelen wrote:
>> gcc 4.8.4 complains that mlx4_SW2HW_MPT_wrapper() uses an uninitialized
>> 'mpt' variable:
>> drivers/net/ethernet/mellanox/mlx4/resourc
().
This leak only affects destroyed SLAB_ACCOUNT kmem caches when kasan is
enabled. So I don't think it's worth patching stable kernels.
Signed-off-by: Greg Thelen
---
include/linux/kasan.h | 4 ++--
mm/kasan/kasan.c | 2 +-
mm/kasan/quarantine.c | 1 +
mm/slab_common.c | 4 +++-
ccounted
object
[ 124.456789] kmem_cache_destroy test_cache: Slab cache still has objects
Kernels with fix [1] don't have the "Slab cache still has objects"
warning or the underlying leak.
The new test runs and passes in the default (root) memcg, though in the
root memcg it won't uncover the pro
commit f61c42a7d911 ("memcg: remove tasks/children test from
mem_cgroup_force_empty()") removed memory reparenting from the function.
Fix the function's comment.
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/me
Theodore Ts'o wrote:
> The following changes since commit 243d50678583100855862bc084b8b307eea67f68:
>
> Merge branch 'overlayfs-linus' of
> git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs (2016-03-22
> 13:11:15 -0700)
>
n> are available in the git repository at:
>
>
Dave Hansen wrote:
> I've been seeing some strange behavior with 4.3-rc1 kernels on my Ubuntu
> 14.04.3 system. The system will run fine for a few hours, but suddenly
> start becoming horribly I/O bound. A compile of perf for instance takes
> 20-30 minutes and the compile seems entirely I/O
Greg Thelen wrote:
> Dave Hansen wrote:
>
>> I've been seeing some strange behavior with 4.3-rc1 kernels on my Ubuntu
>> 14.04.3 system. The system will run fine for a few hours, but suddenly
>> start becoming horribly I/O bound. A compile of perf for instanc
Dave Hansen wrote:
> On 09/17/2015 11:09 PM, Greg Thelen wrote:
>> I'm not denying the issue, bug the WARNING splat isn't necessarily
>> catching a problem. The corresponding code comes from your debug patch:
>> +
>> WARN_ONCE(__this_cpu_read(memcg->sta
Vladimir Davydov wrote:
> Currently, to charge a page to kmemcg one should use alloc_kmem_pages
> helper. When the page is not needed anymore it must be freed with
> free_kmem_pages helper, which will uncharge the page before freeing it.
> Such a design is acceptable for thread info pages and
shouldn't show confusing negative usage.
- tree_usage() already avoids negatives.
Avoid returning negative page counts from mem_cgroup_read_stat() and
convert it to unsigned.
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 30 ++
1 file changed, 18 insertions(+), 12
Andrew Morton wrote:
> On Tue, 22 Sep 2015 15:16:32 -0700 Greg Thelen wrote:
>
>> mem_cgroup_read_stat() returns a page count by summing per cpu page
>> counters. The summing is racy wrt. updates, so a transient negative sum
>> is possible. Callers do
Commit 733a572e66d2 ("memcg: make mem_cgroup_read_{stat|event}() iterate
possible cpus instead of online") removed the last use of the per memcg
pcp_counter_lock but forgot to remove the variable.
Kill the vestigial variable.
Signed-off-by: Greg Thelen
---
include/linux/memcontrol.h
Andrew Morton wrote:
> On Tue, 22 Sep 2015 17:42:13 -0700 Greg Thelen wrote:
>
>> Andrew Morton wrote:
>>
>> > On Tue, 22 Sep 2015 15:16:32 -0700 Greg Thelen wrote:
>> >
>> >> mem_cgroup_read_stat() returns a page count by summing per cpu page
&
but
larger files use the oom killer to avoid ENOMEM.
Memory overcommit requires use of the oom killer to select a victim
regardless of file size.
Enable oom killer for small seq_buf_alloc() allocations.
Signed-off-by: David Rientjes
Signed-off-by: Greg Thelen
---
fs/seq_file.c | 11 -
Michal Hocko wrote:
> On Tue 22-09-15 15:16:32, Greg Thelen wrote:
>> mem_cgroup_read_stat() returns a page count by summing per cpu page
>> counters. The summing is racy wrt. updates, so a transient negative sum
>> is possible. Callers don't want negative values:
>
On Mon, Apr 28 2014, Roman Gushchin wrote:
> 28.04.2014, 16:27, "Michal Hocko" :
>> The series is based on top of the current mmotm tree. Once the series
>> gets accepted I will post a patch which will mark the soft limit as
>> deprecated with a note that it will be eventually dropped. Let me
Alex Shi wrote:
> 在 2020/11/11 上午3:50, Andrew Morton 写道:
>> On Tue, 10 Nov 2020 08:39:24 +0530 Souptick Joarder
>> wrote:
>>
>>> On Fri, Nov 6, 2020 at 4:55 PM Alex Shi wrote:
Otherwise it causes a gcc warning:
^~~
../mm/filemap.c:830:14: warning: no
FO_BTF=y.
>
> Link:
> https://lkml.kernel.org/r/caadnvqj6tmzbxvtrobueh6qa0h+q7yaskxrvvvxhqr3kbzd...@mail.gmail.com
> Cc: Michal Kubecek
> Cc: Justin Forbes
> Cc: Alex Shi
> Cc: Souptick Joarder
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: Josef Bacik
>
> logically significant name, and check for the possibility of page
> demotion.
Reviewed-by: Greg Thelen
> Signed-off-by: Dave Hansen
> Cc: David Rientjes
> Cc: Huang Ying
> Cc: Dan Williams
> Cc: David Hildenbrand
> Cc: osalvador
> ---
>
> b/mm/vmscan.c | 2