Re: [PATCH 07/13] memcontrol: schedule throttling if we are congested

2018-06-11 Thread Johannes Weiner
ng. So instead check to see if our cgroup is congested, and if so
> schedule the throttling. Before we return to user space the throttling
> stuff will only throttle if we actually required it.
>
> Signed-off-by: Tejun Heo

Looks good to me now, thanks.

Acked-by: Johannes Weiner
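
To make the pattern above concrete: congestion is only recorded where it
is detected, and a single sleep is paid at the return-to-userspace
boundary, and only if congestion was actually seen. Below is a minimal
userspace sketch of that defer-and-settle idea; every name here
(group_congested, throttle_if_needed, the 1ms penalty) is invented for
illustration and is not the kernel implementation:

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool group_congested;       /* stand-in for per-cgroup state */
static unsigned int throttle_usec; /* penalty accrued this "syscall" */

/* called wherever congestion is noticed; records it, does not sleep */
static void maybe_mark_congested(bool congested)
{
	if (congested) {
		group_congested = true;
		throttle_usec += 1000;
	}
}

/* called once on the way back to "user space" */
static void throttle_if_needed(void)
{
	if (group_congested && throttle_usec) {
		printf("throttling for %u usec\n", throttle_usec);
		usleep(throttle_usec);
	}
	group_congested = false;
	throttle_usec = 0;
}

int main(void)
{
	maybe_mark_congested(true); /* several allocations hit congestion... */
	maybe_mark_congested(true);
	throttle_if_needed();       /* ...but we sleep once, at the boundary */
	return 0;
}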

Re: [PATCH 05/13] swap,blkcg: issue swap io with the appropriate context

2018-06-11 Thread Johannes Weiner
On Tue, Jun 05, 2018 at 09:29:40AM -0400, Josef Bacik wrote:
> From: Tejun Heo
>
> For backcharging we need to know who the page belongs to when swapping
> it out.
>
> Signed-off-by: Tejun Heo
> Signed-off-by: Josef Bacik

Acked-by: Johannes Weiner

Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

2018-05-23 Thread Johannes Weiner
On Wed, May 09, 2018 at 04:33:24PM +0530, Vinayak Menon wrote:
> On 5/8/2018 2:31 AM, Johannes Weiner wrote:
> > +	/* Kick the stats aggregation worker if it's gone to sleep */
> > +	if (!delayed_work_pending(&group->clock_work))
>
> This causes a crash when the work is sc

Re: [PATCH 0/7] psi: pressure stall information for CPU, memory, and IO

2018-05-14 Thread Johannes Weiner
On Mon, May 14, 2018 at 03:39:33PM +, Christopher Lameter wrote:
> On Mon, 7 May 2018, Johannes Weiner wrote:
>
> > What to make of this number? If CPU utilization is at 100% and CPU
> > pressure is 0, it means the system is perfectly utilized, with one
> > runnable t

Re: [PATCH 7/7] psi: cgroup support

2018-05-10 Thread Johannes Weiner
On Wed, May 09, 2018 at 01:07:36PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:35PM -0400, Johannes Weiner wrote:
> > --- a/kernel/sched/psi.c
> > +++ b/kernel/sched/psi.c
> > @@ -260,6 +260,18 @@ void psi_task_change(struct task_struct *task, u64

Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

2018-05-10 Thread Johannes Weiner
On Wed, May 09, 2018 at 12:21:00PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > +	local_irq_disable();
> > +	rq = this_rq();
> > +	raw_spin_lock(&rq->lock);
> > +	rq_pin_lock(rq, &rf);
>
> Given tha

Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

2018-05-10 Thread Johannes Weiner
On Wed, May 09, 2018 at 12:14:54PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 15750c222ca2..1658477466d5 100644
> > --- a/kernel/sched/sched.h
> >

Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

2018-05-10 Thread Johannes Weiner
On Wed, May 09, 2018 at 12:05:51PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > +	u64 some[NR_PSI_RESOURCES] = { 0, };
> > +	u64 full[NR_PSI_RESOURCES] = { 0, };
>
> > +	some[r] /= max(nonidle_total, 1UL)
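
The division in the last quoted line normalizes a sum in which each
CPU's stall time was weighted by that CPU's non-idle time. A toy
two-CPU illustration of why that weighting matters (plain userspace C;
the numbers are invented):

#include <stdint.h>
#include <stdio.h>

struct cpu_sample {
	uint64_t stall;   /* time stalled this period (usec) */
	uint64_t nonidle; /* time non-idle this period (usec) */
};

int main(void)
{
	struct cpu_sample cpus[] = {
		{ .stall = 500000, .nonidle = 1000000 }, /* busy, 50% stalled */
		{ .stall = 0,      .nonidle = 0       }, /* completely idle */
	};
	uint64_t some = 0, nonidle_total = 0;

	for (int i = 0; i < 2; i++) {
		/* weight each CPU's stall time by its non-idle time */
		some += cpus[i].stall * cpus[i].nonidle;
		nonidle_total += cpus[i].nonidle;
	}
	/* same effect as the max(nonidle_total, 1UL) guard quoted above */
	some /= nonidle_total ? nonidle_total : 1;

	/* prints 500000: the idle CPU adds no weight, so pressure stays
	 * at 50% instead of being diluted to 25% by a naive average */
	printf("aggregated stall: %llu usec\n", (unsigned long long)some);
	return 0;
}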

Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

2018-05-10 Thread Johannes Weiner
On Wed, May 09, 2018 at 12:04:55PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > +static void psi_clock(struct work_struct *work)
> > +{
> > +	u64 some[NR_PSI_RESOURCES] = { 0, };
> > +	u64 full[NR_PSI_RESOURCES]

Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

2018-05-10 Thread Johannes Weiner
On Wed, May 09, 2018 at 11:59:38AM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
> > new file mode 100644
> > index 000000000000..b22b0ffc729d

Re: [PATCH 5/7] sched: loadavg: make calc_load_n() public

2018-05-10 Thread Johannes Weiner
On Wed, May 09, 2018 at 11:49:06AM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:33PM -0400, Johannes Weiner wrote:
> > +static inline unsigned long
> > +fixed_power_int(unsigned long x, unsigned int frac_bits, unsigned int n)
> > +{
> > +	unsigned lon
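
For readers skimming the thread: fixed_power_int() computes x^n in
fixed-point arithmetic by square-and-multiply, so fast-forwarding a
load average across n missed periods costs O(log n) multiplies instead
of n. A self-contained userspace sketch of that technique; the constant
1884 and the 12-tick demo are illustrative values in the loadavg-style
format (11 fractional bits), not taken from this patch:

#include <stdio.h>

/* x^n in fixed point via square-and-multiply */
static unsigned long
fixed_power_int(unsigned long x, unsigned int frac_bits, unsigned int n)
{
	unsigned long result = 1UL << frac_bits; /* 1.0 */

	while (n) {
		if (n & 1) {
			result *= x;
			result += 1UL << (frac_bits - 1); /* round */
			result >>= frac_bits;
		}
		n >>= 1;
		if (!n)
			break;
		x *= x; /* square the base each round */
		x += 1UL << (frac_bits - 1);
		x >>= frac_bits;
	}
	return result;
}

int main(void)
{
	/* decay factor per 5s tick, ~1/exp(5s/60s), with 2048 == 1.0 */
	unsigned long e = 1884;
	/* total decay after 12 missed ticks, i.e. one idle minute */
	unsigned long d = fixed_power_int(e, 11, 12);

	printf("decay over 12 ticks: %lu/2048 (~%.3f)\n", d, d / 2048.0);
	return 0;
}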

Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

2018-05-10 Thread Johannes Weiner
On Wed, May 09, 2018 at 01:38:49PM +0200, Peter Zijlstra wrote:
> On Wed, May 09, 2018 at 12:46:18PM +0200, Peter Zijlstra wrote:
> > On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> >
> > > @@ -2038,6 +2038,7 @@ try_to_wake_up(struct task_struct *p, u

Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

2018-05-08 Thread Johannes Weiner
On Mon, May 07, 2018 at 05:42:36PM -0700, Randy Dunlap wrote:
> On 05/07/2018 02:01 PM, Johannes Weiner wrote:
> > + * The ratio is tracked in decaying time averages over 10s, 1m, 5m
> > + * windows. Cumluative stall times are tracked and exported as well to
> >
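
For context, those decaying averages work like the load average: every
sampling period the old value is scaled by a decay factor derived from
the window length, and the new sample contributes the remainder. A
floating-point toy version (the kernel uses fixed point; the 2s
sampling period is an assumption for illustration; link with -lm):

#include <math.h>
#include <stdio.h>

#define PERIOD 2.0 /* assumed sampling period, seconds */

static const double windows[] = { 10.0, 60.0, 300.0 };
static double avgs[3];

/* sample: fraction of the last period spent stalled, in [0, 1] */
static void update_avgs(double sample)
{
	for (int i = 0; i < 3; i++) {
		double e = exp(-PERIOD / windows[i]);
		avgs[i] = avgs[i] * e + sample * (1.0 - e);
	}
}

int main(void)
{
	/* 30 seconds of 50% stall, then 30 seconds of none */
	for (int t = 0; t < 15; t++)
		update_avgs(0.5);
	for (int t = 0; t < 15; t++)
		update_avgs(0.0);

	/* avg10 has mostly recovered; avg300 has barely moved */
	printf("avg10=%.2f%% avg60=%.2f%% avg300=%.2f%%\n",
	       avgs[0] * 100, avgs[1] * 100, avgs[2] * 100);
	return 0;
}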

Re: [PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

2018-05-08 Thread Johannes Weiner
On Tue, May 08, 2018 at 11:04:09AM +0800, kbuild test robot wrote:
>    118	#else /* CONFIG_PSI */
>    119	static inline void psi_enqueue(struct task_struct *p, u64 now)
>    120	{
>    121	}
>    122	static inline void psi_dequeue(struct task_struct *p, u64
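
The stubs the robot is quoting follow the usual pattern for compiling a
feature out: real hooks under the config option, empty static inlines
otherwise, so call sites need no #ifdefs and the compiler discards the
calls entirely. A generic self-contained sketch of that pattern
(FEATURE_FOO and all names are placeholders, not the psi code):

#include <stdio.h>

struct task { int id; };

#ifdef FEATURE_FOO
static inline void foo_enqueue(struct task *t)
{
	printf("accounting task %d\n", t->id);
}
#else /* !FEATURE_FOO */
/* empty stub: keeps callers unconditional, compiles to nothing */
static inline void foo_enqueue(struct task *t) { (void)t; }
#endif

int main(void)
{
	struct task t = { .id = 1 };
	foo_enqueue(&t); /* no #ifdef needed at the call site */
	return 0;
}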

[PATCH 1/7] mm: workingset: don't drop refault information prematurely

2018-05-07 Thread Johannes Weiner
From: Johannes Weiner <jwei...@fb.com>

If we just keep enough refault information to match the current page
cache during reclaim time, we could lose a lot of events when there is
only a temporary spike in non-cache memory consumption that pushes out
all the cache. Once cache comes back, we

[PATCH 2/7] mm: workingset: tell cache transitions from workingset thrashing

2018-05-07 Thread Johannes Weiner
r 7 sparsemem section bits.

Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
 include/linux/mmzone.h         | 1 +
 include/linux/page-flags.h     | 5 +-
 include/linux/swap.h           | 2 +-
 include/trace/events/mmflags.h | 1 +
 mm/filemap.c                   | 9 ++--
 mm/huge

[PATCH 7/7] psi: cgroup support

2018-05-07 Thread Johannes Weiner
pressure stall tracking for cgroups. In kernels with CONFIG_PSI=y,
cgroups will have cpu.pressure, memory.pressure, and io.pressure files
that track aggregate pressure stall times for only the tasks inside the
cgroup.

Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
 Documentation/cgr

[PATCH 3/7] delayacct: track delays from thrashing cache pages

2018-05-07 Thread Johannes Weiner
0ms

Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
 include/linux/delayacct.h      | 23 +++
 include/uapi/linux/taskstats.h |  6 +-
 kernel/delayacct.c             | 15 +++
 mm/filemap.c                   | 11 +++
 tools/acco

[PATCH 5/7] sched: loadavg: make calc_load_n() public

2018-05-07 Thread Johannes Weiner
It's going to be used in the following patch. Keep the churn separate.

Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
 include/linux/sched/loadavg.h | 69 +++
 kernel/sched/loadavg.c        | 69 ---
 2 files chang

[PATCH 6/7] psi: pressure stall information for CPU, memory, and IO

2018-05-07 Thread Johannes Weiner
into percentages of walltime. A running average of those percentages is
maintained over 10s, 1m, and 5m periods (similar to the load average).

Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
 Documentation/accounting/psi.txt | 73 ++
 include/linux/psi.h              | 27 ++
 i

[PATCH 0/7] psi: pressure stall information for CPU, memory, and IO

2018-05-07 Thread Johannes Weiner
Hi,

I previously submitted a version of this patch set called "memdelay",
which translated delays from reclaim, swap-in, and thrashing page cache
into a pressure percentage of lost walltime. I've since extended this
code to aggregate all delay states tracked by delayacct in order to
have generalized

[PATCH 4/7] sched: loadavg: consolidate LOAD_INT, LOAD_FRAC, CALC_LOAD

2018-05-07 Thread Johannes Weiner
There are several definitions of those functions/macros in places that
mess with fixed-point load averages. Provide an official version.

Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
 .../platforms/cell/cpufreq_spudemand.c | 2 +-
 arch/powerpc/platforms/cell/spufs/s
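
For reference, these are the classic fixed-point helpers being
consolidated, shown here as a self-contained demo (11 fractional bits,
EXP_1 = 1884 ~= 2048/exp(5s/60s); CALC_LOAD is wrapped in do/while in
this sketch so it behaves as a single statement):

#include <stdio.h>

#define FSHIFT	11			/* bits of precision */
#define FIXED_1	(1 << FSHIFT)		/* 1.0 in fixed point */
#define EXP_1	1884			/* 1/exp(5sec/1min) */

#define LOAD_INT(x)  ((x) >> FSHIFT)
#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1 - 1)) * 100)

#define CALC_LOAD(load, exp, n)				\
	do {						\
		(load) *= (exp);			\
		(load) += (n) * (FIXED_1 - (exp));	\
		(load) >>= FSHIFT;			\
	} while (0)

int main(void)
{
	unsigned long avenrun = 0;

	/* feed "2 runnable tasks" for a minute of 5-second ticks */
	for (int i = 0; i < 12; i++)
		CALC_LOAD(avenrun, EXP_1, 2UL * FIXED_1);

	/* LOAD_INT/LOAD_FRAC split the fixed-point value for printing */
	printf("load: %lu.%02lu\n", LOAD_INT(avenrun), LOAD_FRAC(avenrun));
	return 0;
}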

LSF/MM 2018: Call for Proposals

2018-01-15 Thread Johannes Weiner
you on behalf of the program committee:

Anna Schumaker (Filesystems)
Jens Axboe (Storage)
Josef Bacik (Filesystems)
Martin K. Petersen (Storage)
Michal Hocko (MM)
Rik van Riel (MM)
Johannes Weiner

Re: [4.10, panic, regression] iscsi: null pointer deref at iscsi_tcp_segment_done+0x20d/0x2e0

2017-01-09 Thread Johannes Weiner
On Mon, Jan 09, 2017 at 09:30:05PM +0100, Jan Kara wrote:
> On Sat 07-01-17 21:02:00, Johannes Weiner wrote:
> > On Tue, Jan 03, 2017 at 01:28:25PM +0100, Jan Kara wrote:
> > > On Mon 02-01-17 16:11:36, Johannes Weiner wrote:
> > > > On Fri, Dec 23, 2016 at 03:33:29A

Re: [4.10, panic, regression] iscsi: null pointer deref at iscsi_tcp_segment_done+0x20d/0x2e0

2017-01-02 Thread Johannes Weiner
On Fri, Dec 23, 2016 at 03:33:29AM -0500, Johannes Weiner wrote:
> On Fri, Dec 23, 2016 at 02:32:41AM -0500, Johannes Weiner wrote:
> > On Thu, Dec 22, 2016 at 12:22:27PM -0800, Hugh Dickins wrote:
> > > On Wed, 21 Dec 2016, Linus Torvalds wrote:
> > > > On Wed,

Re: [4.10, panic, regression] iscsi: null pointer deref at iscsi_tcp_segment_done+0x20d/0x2e0

2016-12-23 Thread Johannes Weiner
On Fri, Dec 23, 2016 at 02:32:41AM -0500, Johannes Weiner wrote:
> On Thu, Dec 22, 2016 at 12:22:27PM -0800, Hugh Dickins wrote:
> > On Wed, 21 Dec 2016, Linus Torvalds wrote:
> > > On Wed, Dec 21, 2016 at 9:13 PM, Dave Chinner <da...@fromorbit.com> wrote:
> > >

Re: [PATCH v4] mm: don't cap request size based on read-ahead setting

2016-11-18 Thread Johannes Weiner
> +	/*
> +	 * If the request exceeds the readahead window, allow the read to
> +	 * be up to the optimal hardware IO size
> +	 */
> +	if (req_size > max_pages && bdi->io_pages > max_pages)
> +		max_pages = min(req_size, bdi->io_pages);
>
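
To make the effect concrete, here is a toy recomputation of the quoted
condition with the numbers from this thread (4K pages and the
effective_pages() wrapper are assumptions for illustration, not kernel
code):

#include <stdio.h>

#define PAGE_KB 4UL /* assumed page size in KB */

/* mirrors the quoted hunk: stay at the readahead window unless the
 * request is bigger and the device's optimal IO size allows more */
static unsigned long effective_pages(unsigned long req_size,
				     unsigned long max_pages,
				     unsigned long io_pages)
{
	if (req_size > max_pages && io_pages > max_pages)
		max_pages = req_size < io_pages ? req_size : io_pages;
	return max_pages;
}

int main(void)
{
	unsigned long max_pages = 128 / PAGE_KB; /* 128K default ra window */
	unsigned long req_size  = 256 / PAGE_KB; /* the 256K buffered read */
	unsigned long io_pages  = 512 / PAGE_KB; /* optimal hardware IO size */

	printf("capped at window (old): %luK\n", max_pages * PAGE_KB);
	printf("allowed through (new):  %luK\n",
	       effective_pages(req_size, max_pages, io_pages) * PAGE_KB);
	return 0;
}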

Re: [PATCH/RFC] mm: don't cap request size based on read-ahead setting

2016-11-16 Thread Johannes Weiner
On Tue, Nov 15, 2016 at 03:41:58PM -0700, Jens Axboe wrote:
> On 11/15/2016 03:27 PM, Johannes Weiner wrote:
> > Hi Jens,
> >
> > On Thu, Nov 10, 2016 at 10:00:37AM -0700, Jens Axboe wrote:
> > > Hi,
> > >
> > > We ran into a funky issue, where

Re: [PATCH/RFC] mm: don't cap request size based on read-ahead setting

2016-11-15 Thread Johannes Weiner
Hi Jens,

On Thu, Nov 10, 2016 at 10:00:37AM -0700, Jens Axboe wrote:
> Hi,
>
> We ran into a funky issue, where someone doing 256K buffered reads saw
> 128K requests at the device level. Turns out it is read-ahead capping
> the request size, since we use 128K as the default setting. This doesn't