This series of patches adds support to configure a cgroup to swap to a
particular file by using control file memory.swapfile.
A value of "default" in memory.swapfile indicates that this cgroup should
use the default, system-wide, swap files. A value of "none" indicates that
this cgroup should never
_enable_swap_info() to not insert
private swap files onto swap_list; this improves the performance of
get_swap_page() in such cases, at the cost of making
swap_store_swap_device() and swapoff() minutely slower (both of which
are non-critical).
Signed-off-by: Jamie Liu jamie...@google.com
Signed-off-by: Yu Zhao
available for swap
offsets in the PTE, it does not actually impose any new restrictions on
the maximum size of swap files, as that is currently limited by the use
of 32bit values in other parts of the swap code.
Signed-off-by: Suleiman Souhlal sulei...@google.com
Signed-off-by: Yu Zhao yuz
any swap files, they go up the
hierarchy until someone who has swap file set up is found).
The path of the swap file is set by writing to memory.swapfile. Details
of the API can be found in Documentation/cgroups/memory.txt.
Signed-off-by: Suleiman Souhlal sulei...@google.com
Signed-off-by: Yu Zhao
This series of patches adds support to configure a cgroup to swap to a
particular file by using control file memory.swapfile.
Originally, cgroups share system-wide swap space and limiting cgroup swapping
is not possible. This patchset solves the problem by adding a mechanism that
isolates cgroup
On Wed, Apr 02, 2014 at 04:54:33PM -0400, Johannes Weiner wrote:
On Wed, Apr 02, 2014 at 01:34:06PM -0700, Yu Zhao wrote:
This series of patches adds support to configure a cgroup to swap to a
particular file by using control file memory.swapfile.
Originally, cgroups share system-wide
On Wed, Oct 15, 2014 at 12:30:44PM -0700, Andrew Morton wrote:
On Wed, 15 Oct 2014 12:20:04 -0700 Yu Zhao yuz...@google.com wrote:
A compound page should be freed by put_page() or free_pages() with the
correct order. Not doing so will leak its tail pages.
The compound order can
...@linux.intel.com
Signed-off-by: Yu Zhao yuz...@google.com
---
mm/page_alloc.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 736d8e1..5bf44e4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -750,6 +750,9 @@ static bool free_pages_prepare(struct page
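The invariant described above, that a compound page must be freed with the same order it was allocated with, can be sketched outside the kernel. The struct and function names below are hypothetical, purely illustrative stand-ins, not the kernel's API:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative sketch, not kernel code: a "compound" allocation
 * remembers its order, and the free path verifies the caller
 * passed the same order back -- the invariant the patch enforces
 * in free_pages_prepare(). All names are hypothetical. */
struct fake_page {
    unsigned int compound_order; /* order recorded at allocation */
    void *mem;
};

static struct fake_page *fake_alloc_pages(unsigned int order)
{
    struct fake_page *p = malloc(sizeof(*p));
    p->compound_order = order;
    p->mem = malloc((size_t)4096 << order);
    return p;
}

/* Returns 1 on success, 0 if the order is wrong -- in the real
 * kernel, the tail pages beyond the claimed order would leak. */
static int fake_free_pages(struct fake_page *p, unsigned int order)
{
    if (order != p->compound_order)
        return 0;
    free(p->mem);
    free(p);
    return 1;
}
```

A mismatched order is detected at free time instead of silently leaking the tail pages.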
general.
Acked-by: Kirill A. Shutemov kirill.shute...@linux.intel.com
Fixes: 97ae17497e99 (thp: implement refcounting for huge zero page)
Cc: sta...@vger.kernel.org (v3.8+)
Signed-off-by: Yu Zhao yuz...@google.com
---
mm/huge_memory.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff
is more general.
Signed-off-by: Yu Zhao yuz...@google.com
---
mm/huge_memory.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 74c78aa..780d12c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -200,7 +200,7 @@ retry
This allows us to easily catch the bug fixed in the previous patch.
Here we also verify whether a page is a tail page or not -- tail
pages are supposed to be freed along with their head, not by
themselves.
Signed-off-by: Yu Zhao yuz...@google.com
---
mm/page_alloc.c | 3 +++
1 file changed, 3
-by: Yu Zhao yuz...@google.com
---
mm/shmem.c | 16
1 file changed, 16 insertions(+)
diff --git a/mm/shmem.c b/mm/shmem.c
index 4caf8ed..37e7933 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -542,6 +542,21 @@ void shmem_truncate_range(struct inode *inode, loff_t
lstart, loff_t lend
On Thu, Mar 31, 2016 at 05:46:39PM +0900, Sergey Senozhatsky wrote:
> On (03/30/16 08:59), Minchan Kim wrote:
> > On Tue, Mar 29, 2016 at 03:02:57PM -0700, Yu Zhao wrote:
> > > zs_destroy_pool() might sleep so it shouldn't be used in zpool
> > > destroy callback whic
On Mon, Apr 25, 2016 at 05:20:10PM -0400, Dan Streetman wrote:
> Add a work_struct to struct zpool, and change zpool_destroy_pool to
> defer calling the pool implementation destroy.
>
> The zsmalloc pool destroy function, which is one of the zpool
> implementations, may sleep during destruction
low in expression
[-Werror=overflow]
Signed-off-by: Yu Zhao <yuz...@google.com>
---
include/linux/page-flags.h | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f4ed4f1b..a5c 100644
--- a/incl
simply disable CPU notifier when CPU hotplug
is not configured (which is perfectly safe because the code in question
is called after all possible CPUs are online and will remain online
until power off).
Signed-off-by: Yu Zhao <yuz...@google.com>
---
mm/zswap.c | 12
1 file chang
simply disable CPU notifier when CPU hotplug
is not configured (which is perfectly safe because the code in question
is called after all possible CPUs are online and will remain online
until power off).
v2: #ifdef for cpu_notifier_register_done during cleanup.
Signed-off-by: Yu Zhao <yuz...@google.
Michal Hocko <mho...@kernel.org> wrote:
> > > On Fri 02-12-16 15:38:48, Michal Hocko wrote:
> > >> On Fri 02-12-16 09:24:35, Dan Streetman wrote:
> > >> > On Fri, Dec 2, 2016 at 8:46 AM, Michal Hocko <mho...@kernel.org> wrote:
> > >> > > O
On Fri, Dec 02, 2016 at 02:46:06PM +0100, Michal Hocko wrote:
> On Wed 30-11-16 13:15:16, Yu Zhao wrote:
> > __unregister_cpu_notifier() only removes registered notifier from its
> > linked list when CPU hotplug is configured. If we free registered CPU
> > notifier when HOTP
mem_cgroup_resize_limit() and mem_cgroup_resize_memsw_limit() have
identical logic. Refactor the code so we don't need to keep two pieces
of code that do the same thing.
Signed-off-by: Yu Zhao <yuz...@google.com>
---
mm/memcontrol.c | 71 +-
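The refactor pattern this commit describes, folding two near-identical functions into one helper selected by a flag, can be sketched in userspace. Everything below (struct, fields, return convention) is a hypothetical illustration, not the kernel's memcg API:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch of the refactor: instead of two copies of
 * the same resize loop (one for the plain limit, one for the
 * memsw limit), a single helper takes a flag selecting which
 * counter to resize. All names are hypothetical. */
struct fake_memcg {
    unsigned long limit;       /* memory limit */
    unsigned long memsw_limit; /* memory + swap limit */
};

static int fake_resize_limit(struct fake_memcg *memcg,
                             unsigned long val, bool memsw)
{
    unsigned long *target = memsw ? &memcg->memsw_limit
                                  : &memcg->limit;

    /* the memsw limit must never drop below the plain limit */
    if (memsw && val < memcg->limit)
        return -1;
    *target = val;
    return 0;
}
```

One code path, with the only genuine difference (which counter, and the ordering constraint between them) expressed as data rather than duplicated logic.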
On Sun, Jun 04, 2017 at 11:09:42PM +0300, Vladimir Davydov wrote:
> On Sun, Jun 04, 2017 at 01:04:37PM -0700, Yu Zhao wrote:
> > @@ -2498,22 +2449,24 @@ static int mem_cgroup_resize_memsw_limit(struct
> > mem_cgroup *memcg,
> > }
> >
> >
mem_cgroup_resize_limit() and mem_cgroup_resize_memsw_limit() have
identical logic. Refactor the code so we don't need to keep two pieces
of code that do the same thing.
Signed-off-by: Yu Zhao <yuz...@google.com>
Acked-by: Vladimir Davydov <vdavydov@gmail.com>
---
Changelog since v1:
*
On Fri, Jun 02, 2017 at 10:18:57AM +0200, Michal Hocko wrote:
> On Thu 01-06-17 12:56:35, Yu Zhao wrote:
> > Saw need_resched() warnings when swapping on large swapfile (TBs)
> > because page allocation in swap_cgroup_prepare() took too long.
>
> Hmm, but the page
On Fri, Jun 02, 2017 at 10:32:52AM +0300, Nikolay Borisov wrote:
>
>
> On 2.06.2017 02:02, Yu Zhao wrote:
> > mem_cgroup_resize_limit() and mem_cgroup_resize_memsw_limit() have
> > identical logics. Refactor code so we don't need to keep two pieces
> > o
Saw need_resched() warnings when swapping on large swapfile (TBs)
because continuously allocating many pages in swap_cgroup_prepare()
took too long.
We already cond_resched() when freeing a page in swap_cgroup_swapoff().
Do the same for the page allocation.
Signed-off-by: Yu Zhao <yuz...@google.
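The fix above is the classic pattern of yielding periodically inside a long loop. A userspace analog, with sched_yield() standing in for the kernel's cond_resched() and all names and the batch size chosen purely for illustration:

```c
#include <assert.h>
#include <sched.h>
#include <stdlib.h>

/* Userspace analog of the fix: a long allocation loop that
 * voluntarily yields the CPU every so often, the way the patch
 * adds cond_resched() to swap_cgroup_prepare(). The batch size
 * and function name are illustrative only. */
#define YIELD_EVERY 1024

static long fake_prepare(long npages)
{
    long done = 0;

    for (long i = 0; i < npages; i++) {
        void *page = malloc(4096);
        if (!page)
            break;
        free(page); /* a real caller would keep the page */
        done++;
        if (done % YIELD_EVERY == 0)
            sched_yield(); /* kernel: cond_resched() */
    }
    return done;
}
```

Without the yield, a multi-terabyte swapfile means millions of back-to-back allocations with no scheduling point, which is exactly what triggered the need_resched() warnings.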
mem_cgroup_resize_limit() and mem_cgroup_resize_memsw_limit() have
identical logic. Refactor the code so we don't need to keep two pieces
of code that do the same thing.
Signed-off-by: Yu Zhao <yuz...@google.com>
Acked-by: Vladimir Davydov <vdavydov@gmail.com>
Acked-by: Michal Hocko
On Tue, Jan 09, 2018 at 01:25:18PM -0500, Dan Streetman wrote:
> On Mon, Jan 8, 2018 at 5:51 PM, Yu Zhao <yuz...@google.com> wrote:
> > We waste sizeof(swp_entry_t) for zswap header when using zsmalloc
> > as zpool driver because zsmalloc doesn't support eviction.
> &g
We waste sizeof(swp_entry_t) for zswap header when using zsmalloc
as zpool driver because zsmalloc doesn't support eviction.
Add zpool_shrinkable() to detect whether a zpool is shrinkable, and use
it in zswap to avoid wasting memory on the zswap header.
Signed-off-by: Yu Zhao <yuz...@google.
by delaying set_pte_at() until page is ready.
Signed-off-by: Yu Zhao <yuz...@google.com>
---
mm/memory.c | 2 +-
mm/swapfile.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index ca5674cbaff2..b8be1a4adf93 100644
--- a/mm/memory.c
++
On Wed, Jan 10, 2018 at 03:06:47PM -0500, Dan Streetman wrote:
> On Tue, Jan 9, 2018 at 5:47 PM, Yu Zhao <yuz...@google.com> wrote:
> > On Tue, Jan 09, 2018 at 01:25:18PM -0500, Dan Streetman wrote:
> >> On Mon, Jan 8, 2018 at 5:51 PM, Yu Zhao <yuz...@google.com>
On Wed, Jan 10, 2018 at 02:47:41PM -0800, Yu Zhao wrote:
> We waste sizeof(swp_entry_t) for zswap header when using zsmalloc
> as zpool driver because zsmalloc doesn't support eviction.
>
> Add zpool_evictable() to detect if zpool is potentially evictable,
> and use it in zswap
We waste sizeof(swp_entry_t) for zswap header when using zsmalloc
as zpool driver because zsmalloc doesn't support eviction.
Add zpool_evictable() to detect whether a zpool is potentially evictable,
and use it in zswap to avoid wasting memory on the zswap header.
Signed-off-by: Yu Zhao <yuz...@google.
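The capability-query idea above, deciding at runtime whether a pool driver can evict and only reserving header space when it can, might look roughly like this. The structs and names are hypothetical stand-ins, not the real zpool API:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the capability check: whether a pool driver supports
 * eviction is derived from the presence of its evict callback,
 * and callers reserve header space only when it does. All names
 * are hypothetical. */
struct fake_driver {
    int (*evict)(void *handle); /* NULL if eviction unsupported */
};

struct fake_zpool {
    const struct fake_driver *driver;
};

static int fake_evict(void *handle)
{
    (void)handle;
    return 0; /* dummy callback for the sketch */
}

static int fake_zpool_evictable(const struct fake_zpool *pool)
{
    return pool->driver->evict != NULL;
}

/* zswap-style: only evictable pools need a swap-entry header */
static size_t fake_header_size(const struct fake_zpool *pool)
{
    return fake_zpool_evictable(pool) ? sizeof(unsigned long) : 0;
}
```

A zsmalloc-like driver leaves the callback NULL and pays no per-object header, which is the saving the commit message describes.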
On Tue, Jan 09, 2018 at 01:48:17PM +0900, Sergey Senozhatsky wrote:
> On (01/08/18 14:51), Yu Zhao wrote:
> [..]
> > int zpool_shrink(struct zpool *zpool, unsigned int pages,
> > unsigned int *reclaimed)
> > {
> > - return zpool->driver->
On Tue, Jan 09, 2018 at 09:46:22AM +0100, Michal Hocko wrote:
> On Mon 08-01-18 14:56:32, Yu Zhao wrote:
> > We don't want to expose page before it's properly setup. During
> > page setup, we may call page_add_new_anon_rmap() which uses non-
> > atomic bit op. If page is exp
ble to do
so; 2) we are not ready to handle interrupts yet, and the kernel crashes
when an interrupt comes in.
Rename azx_reset() to snd_hdac_bus_reset_link(), and use it to reset
device properly.
Fixes: 60767abcea3d ("ASoC: Intel: Skylake: Reset the controller in probe")
Signed-off-by: Yu Zhao
0xc80 from 0x8100 (relocation
range: 0x8000-0xbfff)
Signed-off-by: Yu Zhao
---
sound/soc/intel/skylake/skl.c | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
index cf09721ca13
once on null dma buffer pointer during the
initialization.
Signed-off-by: Yu Zhao
---
sound/hda/hdac_controller.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
index 560ec0986e1a..11057d9f84ec 100644
On Mon, Oct 15, 2018 at 08:41:52PM +0200, Jann Horn wrote:
> On Mon, Oct 15, 2018 at 8:38 PM Yu Zhao wrote:
> > There were mismatches between number of vmstat keys and number of
> > vmstat values. They were fixed recently by:
> > commit 58bc4c34d249 ("mm/vmstat.c
detect such a mismatch and hopefully prevent
it from happening again.
Signed-off-by: Yu Zhao
---
include/linux/vmstat.h | 4
mm/vmstat.c| 18 --
2 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index f25cef
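A guard of the kind described above, making a key/value count mismatch fail at build time rather than surface as corrupted output, can be sketched with a compile-time assertion. The enum and array below are hypothetical examples; the kernel itself would use BUILD_BUG_ON() for the same effect:

```c
#include <assert.h>

/* Illustrative userspace version of the guard: a compile-time
 * assertion that the number of stat names matches the number of
 * stat counters, so adding an item to one array without the
 * other fails the build. Names are hypothetical. */
enum fake_stat { STAT_FREE, STAT_ACTIVE, STAT_INACTIVE, NR_STATS };

static const char *const fake_stat_names[] = {
    "nr_free",
    "nr_active",
    "nr_inactive",
};

/* refuses to compile if the two definitions ever disagree */
_Static_assert(sizeof(fake_stat_names) / sizeof(fake_stat_names[0])
               == NR_STATS, "stat key/value count mismatch");
```

The point of doing this at compile time is that the mismatch the original patches fixed after the fact can never reach a running kernel.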
+0x184/0x1bb
[ 25.824804] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ 25.895502] RIP: rdev_get_name+0x29/0xa5 RSP: 8801d45779f0
[ 26.550863] ---[ end trace fb2a7bb4f63aeba5 ]---
Signed-off-by: Yu Zhao
---
drivers/regulator/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
On Thu, Sep 27, 2018 at 01:48:58AM +0200, Ulf Hansson wrote:
> On 23 September 2018 at 22:39, Yu Zhao wrote:
> > This device reports SDHCI_CLOCK_INT_STABLE even though it's not
> > ready to take SDHCI_CLOCK_CARD_EN. The symptom is that reading
> > SDHCI_CLOCK_CONTROL aft
ctl2: 0x0008
mmc1: sdhci: ADMA Err: 0x | ADMA Ptr: 0x
mmc1: sdhci:
The problem happens during wakeup from S3. Adding a delay quirk
after power up reliably fixes the problem.
Signed-off-by: Yu Zhao
---
drivers/mmc/host/sdhci-pci
reg_process_hint+0x31e/0x8aa [cfg80211]
reg_todo+0x204/0x5b9 [cfg80211]
process_one_work+0x55f/0x8d0
worker_thread+0x5dd/0x841
kthread+0x270/0x285
ret_from_fork+0x22/0x40
Signed-off-by: Yu Zhao
---
net/wireless/reg.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/net/wireless
option does not override iommu=pt
Fixes: aafd8ba0ca74 ("iommu/amd: Implement add_device and remove_device")
Signed-off-by: Yu Zhao
---
drivers/iommu/amd_iommu.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/a
different from CPU physical address when AMD
IOMMU is not in passthrough mode.
Signed-off-by: Yu Zhao
---
sound/soc/amd/acp-pcm-dma.c | 15 +--
sound/soc/amd/acp.h | 2 +-
2 files changed, 6 insertions(+), 11 deletions(-)
diff --git a/sound/soc/amd/acp-pcm-dma.c b/sound/soc/amd/acp
device or uses
the default dma_ops if struct device doesn't have it set.
Signed-off-by: Yu Zhao
---
sound/soc/amd/acp-pcm-dma.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/sound/soc/amd/acp-pcm-dma.c b/sound/soc/amd/acp-pcm-dma.c
index fd3db4c37882..f4011bebc7ec
We changed the key of the swap cache tree from swp_entry_t.val to
swp_offset. We need to do the same in shmem_replace_page().
Fixes: f6ab1f7f6b2d ("mm, swap: use offset of swap entry as key of swap cache")
Cc: sta...@vger.kernel.org # v4.9+
Signed-off-by: Yu Zhao
---
mm/shmem.c | 6
On Mon, Nov 19, 2018 at 02:11:27PM -0800, Hugh Dickins wrote:
> On Sun, 18 Nov 2018, Yu Zhao wrote:
>
> > We used to have a single swap address space with swp_entry_t.val
> > as its radix tree index. This is not the case anymore. Now Each
> > swp_type() has its own addr
A pagetable page doesn't touch page->mapping or have any used field
that overlaps with it. There is no need to clear mapping in the dtor. In fact,
doing so might mask problems that otherwise would be detected by
bad_page().
Signed-off-by: Yu Zhao
---
include/linux/mm.h | 11 ++-
1 file changed
We used to have a single swap address space with swp_entry_t.val
as its radix tree index. This is not the case anymore. Now each
swp_type() has its own address space and should use swp_offset()
as its radix tree index.
Signed-off-by: Yu Zhao
---
mm/shmem.c | 11 +++
1 file changed, 7
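The indexing change above can be sketched with a toy swap-entry encoding. A swap entry packs a type and an offset; once there is one address space per type, the tree index must be the offset alone, not the whole packed value. The bit layout here is illustrative, not the real kernel encoding:

```c
#include <assert.h>

/* Sketch of the indexing change. Names and the shift value are
 * hypothetical; the kernel's actual encoding differs. */
#define FAKE_TYPE_SHIFT 58UL

static unsigned long fake_swp_entry(unsigned long type,
                                    unsigned long offset)
{
    return (type << FAKE_TYPE_SHIFT) | offset;
}

static unsigned long fake_swp_type(unsigned long entry)
{
    return entry >> FAKE_TYPE_SHIFT;
}

static unsigned long fake_swp_offset(unsigned long entry)
{
    return entry & ((1UL << FAKE_TYPE_SHIFT) - 1);
}

/* correct per-type index: the offset, with the type selecting
 * which address space to look in */
static unsigned long fake_cache_index(unsigned long entry)
{
    return fake_swp_offset(entry);
}
```

Using the whole packed value as the index, the bug this series fixes, would make lookups miss entries stored under the offset-only key.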
On Wed, Sep 12, 2018 at 11:20:20AM +0100, Mark Brown wrote:
> On Tue, Sep 11, 2018 at 03:12:46PM -0600, Yu Zhao wrote:
> > This reverts commit 12eeeb4f4733bbc4481d01df35933fc15beb8b19.
> >
> > The patch doesn't fix accessing memory with null pointer in
> > skl_interrup
once on null dma buffer pointer during the
initialization.
Reviewed-by: Takashi Iwai
Signed-off-by: Yu Zhao
---
sound/hda/hdac_controller.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
index 560ec0986e1a
i
Signed-off-by: Yu Zhao
---
include/sound/hdaudio.h | 1 +
sound/hda/hdac_controller.c | 7 ---
sound/soc/intel/skylake/skl.c | 2 +-
3 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
index 6f1e1f3b3063..cd1773d0e08f 100644
eb4f4733b ("ASoC: Intel: Skylake: Acquire irq after RIRB allocation")
Signed-off-by: Yu Zhao
---
sound/soc/intel/skylake/skl.c | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
index e7fd14daeb4f..
On Tue, Sep 11, 2018 at 05:36:36PM +0100, Mark Brown wrote:
> On Tue, Sep 11, 2018 at 08:03:21AM +0200, Takashi Iwai wrote:
> > Yu Zhao wrote:
>
> > > Will fix the problems in the following patches. Also attaching the
> > > crash for future reference.
>
On Tue, Sep 11, 2018 at 08:06:49AM +0200, Takashi Iwai wrote:
> On Mon, 10 Sep 2018 23:21:50 +0200,
> Yu Zhao wrote:
> >
> > In snd_hdac_bus_init_chip(), we enable interrupt before
> > snd_hdac_bus_init_cmd_io() initializing dma buffers. If irq has
> > been acquire
On Sun, Jan 17, 2021 at 02:13:43AM -0800, Nadav Amit wrote:
> > On Jan 17, 2021, at 1:16 AM, Yu Zhao wrote:
> >
> > On Sat, Jan 16, 2021 at 11:32:22PM -0800, Nadav Amit wrote:
> >>> On Jan 16, 2021, at 8:41 PM, Yu Zhao wrote:
> >>>
> >>>
On Tue, Jan 12, 2021 at 09:43:38PM +, Will Deacon wrote:
> On Tue, Jan 12, 2021 at 12:38:34PM -0800, Nadav Amit wrote:
> > > On Jan 12, 2021, at 11:56 AM, Yu Zhao wrote:
> > > On Tue, Jan 12, 2021 at 11:15:43AM -0800, Nadav Amit wrote:
> > >> I will send an RF
:
dropped the last patch in this series based on the discussion here:
https://lore.kernel.org/patchwork/patch/1350552/#1550430
Yu Zhao (10):
mm: use add_page_to_lru_list()
mm: shuffle lru list addition and deletion functions
mm: don't pass "enum lru_list" to lru list addition funct
There is add_page_to_lru_list(), and move_pages_to_lru() should reuse
it, not duplicate it.
Link:
https://lore.kernel.org/linux-mm/20201207220949.830352-2-yuz...@google.com/
Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
mm/vmscan.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions
The parameter is redundant in the sense that it can be extracted
from the "struct page" parameter by page_lru() correctly.
Link:
https://lore.kernel.org/linux-mm/20201207220949.830352-5-yuz...@google.com/
Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
include/trace/events/page
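The redundancy argument above, that the lru list is fully determined by the page itself, so callers needn't pass it, can be sketched as follows. The enum, fields, and helpers are hypothetical illustrations, not the kernel's definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the "redundant parameter" argument: which lru list a
 * page belongs to follows from the page's own state, so the
 * addition helper can derive it instead of taking it as an
 * argument. All names are hypothetical. */
enum fake_lru {
    LRU_INACTIVE_ANON,
    LRU_ACTIVE_ANON,
    LRU_INACTIVE_FILE,
    LRU_ACTIVE_FILE,
};

struct fake_page {
    bool is_file;   /* file-backed vs anonymous */
    bool is_active; /* on the active list */
};

static enum fake_lru fake_page_lru(const struct fake_page *page)
{
    int lru = page->is_file ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;

    return lru + (page->is_active ? 1 : 0);
}

/* the addition helper derives the list rather than being told */
static enum fake_lru fake_add_to_lru(const struct fake_page *page)
{
    return fake_page_lru(page);
}
```

Dropping the parameter removes a class of caller bugs where the passed-in list disagrees with the page's actual state.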
change that
is meant to help debug.
Link:
https://lore.kernel.org/linux-mm/20201207220949.830352-7-yuz...@google.com/
Signed-off-by: Yu Zhao
---
include/linux/mm_inline.h | 28 ++--
mm/swap.c | 6 ++
mm/vmscan.c | 3 +--
3 files changed
These functions will call page_lru() in the following patches. Move
them below page_lru() to avoid the forward declaration.
Link:
https://lore.kernel.org/linux-mm/20201207220949.830352-3-yuz...@google.com/
Signed-off-by: Yu Zhao
---
include/linux/mm_inline.h | 42
We've removed all other references to this function.
Link:
https://lore.kernel.org/linux-mm/20201207220949.830352-9-yuz...@google.com/
Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
include/linux/mm_inline.h | 27 ++-
1 file changed, 6 insertions(+), 21 deletions
Move scattered VM_BUG_ONs to two essential places that cover all
lru list additions and deletions.
Link:
https://lore.kernel.org/linux-mm/20201207220949.830352-8-yuz...@google.com/
Signed-off-by: Yu Zhao
---
include/linux/mm_inline.h | 4
mm/swap.c | 2 --
mm/vmscan.c
All other references to the function were removed after
commit b910718a948a ("mm: vmscan: detect file thrashing at the reclaim
root").
Link:
https://lore.kernel.org/linux-mm/20201207220949.830352-11-yuz...@google.com/
Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
include/linux/mm
All other references to the function were removed after
commit a892cb6b977f ("mm/vmscan.c: use update_lru_size() in
update_lru_sizes()").
Link:
https://lore.kernel.org/linux-mm/20201207220949.830352-10-yuz...@google.com/
Signed-off-by: Yu Zhao
Reviewed-by: Alex Shi
---
inc
fixes them.
This patch may have left page_off_lru() seemingly odd, and we'll take
care of it in the next patch.
Link:
https://lore.kernel.org/linux-mm/20201207220949.830352-6-yuz...@google.com/
Signed-off-by: Yu Zhao
---
include/linux/mm_inline.h | 5 +++--
mm/compaction.c | 2 +-