ng. So instead check to
> see if our cgroup is congested, and if so schedule the throttling.
> Before we return to user space the throttling stuff will only throttle
> if we actually required it.
>
> Signed-off-by: Tejun Heo
Looks good to me now, thanks.
Acked-by: Johannes Weiner
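The two-phase design described above can be sketched as follows. blk_cgroup_congested(), blkcg_schedule_throttle() and blkcg_maybe_throttle_current() are the names this machinery ended up with upstream; take the snippet as an illustration of the split between noticing congestion and actually sleeping, not as the literal patch:

    /*
     * Phase 1 -- in the allocation/reclaim path, where we may hold
     * locks: don't sleep, just record that a throttle is owed.
     */
    if (blk_cgroup_congested())
        blkcg_schedule_throttle(q, true);   /* q: the congested request_queue */

    /*
     * Phase 2 -- on return to user space, with no kernel resources
     * held: sleep, but only if phase 1 actually flagged us.
     */
    blkcg_maybe_throttle_current();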
On Tue, Jun 05, 2018 at 09:29:40AM -0400, Josef Bacik wrote:
> From: Tejun Heo
>
> For backcharging we need to know who the page belongs to when swapping
> it out.
>
> Signed-off-by: Tejun Heo
> Signed-off-by: Josef Bacik
Acked-by: Johannes Weiner
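A minimal sketch of what knowing the page's owner buys us at swap-out time. bio_associate_blkcg_from_page() is the helper name this landed under upstream; the surrounding bio setup is illustrative only:

    /*
     * Swap-out: charge the write to the cgroup that owns the page,
     * not to the task that happens to be running reclaim.
     */
    struct bio *bio = bio_alloc(GFP_NOIO, 1);

    bio_associate_blkcg_from_page(bio, page);  /* resolves page->mem_cgroup to its blkcg */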
On Wed, May 09, 2018 at 04:33:24PM +0530, Vinayak Menon wrote:
> On 5/8/2018 2:31 AM, Johannes Weiner wrote:
> > + /* Kick the stats aggregation worker if it's gone to sleep */
> > + if (!delayed_work_pending(&group->clock_work))
>
> This causes a crash when the work is sc
On Mon, May 14, 2018 at 03:39:33PM +0000, Christopher Lameter wrote:
> On Mon, 7 May 2018, Johannes Weiner wrote:
>
> > What to make of this number? If CPU utilization is at 100% and CPU
> > pressure is 0, it means the system is perfectly utilized, with one
> > runnable t
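To put numbers on that: a 4-CPU machine running exactly 4 runnable tasks is at 100% utilization with 0 pressure, since no task ever waits for a CPU. With 8 runnable tasks it is still at 100% utilization, but at any instant half the tasks are stalled, so CPU pressure rises even though the utilization metric cannot distinguish the two situations.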
On Wed, May 09, 2018 at 01:07:36PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:35PM -0400, Johannes Weiner wrote:
> > --- a/kernel/sched/psi.c
> > +++ b/kernel/sched/psi.c
> > @@ -260,6 +260,18 @@ void psi_task_change(struct task_struct *task, u64
> >
On Wed, May 09, 2018 at 12:21:00PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > + local_irq_disable();
> > + rq = this_rq();
> > + raw_spin_lock(&rq->lock);
> > + rq_pin_lock(rq, &rf);
>
> Given tha
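Peter's reply is truncated here; for reference, this open-coded sequence was later factored into a scheduler core helper along these lines (matching the upstream this_rq_lock_irq()):

    struct rq *this_rq_lock_irq(struct rq_flags *rf)
        __acquires(rq->lock)
    {
        struct rq *rq;

        /* IRQs must go off before this_rq() so the CPU can't change under us */
        local_irq_disable();
        rq = this_rq();
        rq_lock(rq, rf);

        return rq;
    }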
On Wed, May 09, 2018 at 12:14:54PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 15750c222ca2..1658477466d5 100644
> > --- a/kernel/sched/sched.h
> >
On Wed, May 09, 2018 at 12:05:51PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > + u64 some[NR_PSI_RESOURCES] = { 0, };
> > + u64 full[NR_PSI_RESOURCES] = { 0, };
>
> > + some[r] /= max(nonidle_total, 1UL)
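The division Peter is quoting implements the core normalization: stall time only counts against time in which the system was non-idle, and the max() guards a sampling window in which everything slept. A sketch, reusing r, some[], full[] and nonidle_total from the quoted patch:

    /*
     * Convert absolute stall time into a share of productive time;
     * max() avoids dividing by zero in an all-idle window.
     */
    for (r = 0; r < NR_PSI_RESOURCES; r++) {
        some[r] /= max(nonidle_total, 1UL);
        full[r] /= max(nonidle_total, 1UL);
    }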
On Wed, May 09, 2018 at 12:04:55PM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > +static void psi_clock(struct work_struct *work)
> > +{
> > + u64 some[NR_PSI_RESOURCES] = { 0, };
> > + u64 full[NR_PSI_RESOURCES]
On Wed, May 09, 2018 at 11:59:38AM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> > diff --git a/include/linux/psi_types.h b/include/linux/psi_types.h
> > new file mode 100644
> > index 000000000000..b22b0ffc729d
> >
On Wed, May 09, 2018 at 11:49:06AM +0200, Peter Zijlstra wrote:
> On Mon, May 07, 2018 at 05:01:33PM -0400, Johannes Weiner wrote:
> > +static inline unsigned long
> > +fixed_power_int(unsigned long x, unsigned int frac_bits, unsigned int n)
> > +{
> > + unsigned lon
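The quote is cut off; for reference, the version that ships in kernel/sched/loadavg.c computes x^n in fixed-point in O(log n) multiplications via exponentiation by squaring, rounding each intermediate product by adding half a unit before the shift:

    static unsigned long
    fixed_power_int(unsigned long x, unsigned int frac_bits, unsigned int n)
    {
        unsigned long result = 1UL << frac_bits;    /* 1.0 in fixed-point */

        if (n) {
            for (;;) {
                if (n & 1) {                        /* fold current square into result */
                    result *= x;
                    result += 1UL << (frac_bits - 1);
                    result >>= frac_bits;
                }
                n >>= 1;
                if (!n)
                    break;
                x *= x;                             /* square x for the next bit of n */
                x += 1UL << (frac_bits - 1);
                x >>= frac_bits;
            }
        }

        return result;
    }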
On Wed, May 09, 2018 at 01:38:49PM +0200, Peter Zijlstra wrote:
> On Wed, May 09, 2018 at 12:46:18PM +0200, Peter Zijlstra wrote:
> > On Mon, May 07, 2018 at 05:01:34PM -0400, Johannes Weiner wrote:
> >
> > > @@ -2038,6 +2038,7 @@ try_to_wake_up(struct task_struct *p, u
On Mon, May 07, 2018 at 05:42:36PM -0700, Randy Dunlap wrote:
> On 05/07/2018 02:01 PM, Johannes Weiner wrote:
> > + * The ratio is tracked in decaying time averages over 10s, 1m, 5m
> > + * windows. Cumluative stall times are tracked and exported as well to
>
>
On Tue, May 08, 2018 at 11:04:09AM +0800, kbuild test robot wrote:
>    118	#else /* CONFIG_PSI */
>    119	static inline void psi_enqueue(struct task_struct *p, u64 now)
>    120	{
>    121	}
>    122	static inline void psi_dequeue(struct task_struct *p, u64
From: Johannes Weiner <jwei...@fb.com>
If we just keep enough refault information to match the current page
cache during reclaim time, we could lose a lot of events when there is
only a temporary spike in non-cache memory consumption that pushes out
all the cache. Once cache comes back, we
r 7 sparsemem section bits.
Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
include/linux/mmzone.h | 1 +
include/linux/page-flags.h | 5 +-
include/linux/swap.h | 2 +-
include/trace/events/mmflags.h | 1 +
mm/filemap.c | 9 ++--
mm/huge
pressure stall tracking for cgroups. In kernels
with CONFIG_PSI=y, cgroups will have cpu.pressure, memory.pressure,
and io.pressure files that track aggregate pressure stall times for
only the tasks inside the cgroup.
Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
Documentation/cgr
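As a usage sketch of the per-cgroup files (the field layout shown is the one that eventually shipped upstream; early revisions of the series formatted this differently, and the numbers here are made up):

    # cat /sys/fs/cgroup/workload/memory.pressure
    some avg10=2.04 avg60=0.75 avg300=0.40 total=157656722
    full avg10=1.02 avg60=0.30 avg300=0.12 total=85154757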
0ms
Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
include/linux/delayacct.h | 23 +++
include/uapi/linux/taskstats.h | 6 +-
kernel/delayacct.c | 15 +++
mm/filemap.c | 11 +++
tools/acco
It's going to be used in the following patch. Keep the churn separate.
Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
include/linux/sched/loadavg.h | 69 +++
kernel/sched/loadavg.c| 69 ---
2 files chang
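For context, the function being made public lets a load average catch up over n missed update periods in one step, by raising the per-period decay factor to the n-th power (as in the upstream calc_load_n()):

    /* a_n = a_0 * e^n + a * (1 - e^n): apply n periods of decay at once */
    static unsigned long
    calc_load_n(unsigned long load, unsigned long exp,
                unsigned long active, unsigned int n)
    {
        return calc_load(load, fixed_power_int(exp, FSHIFT, n), active);
    }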
into
percentages of walltime. A running average of those percentages is
maintained over 10s, 1m, and 5m periods (similar to the load average).
Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
Documentation/accounting/psi.txt | 73 ++
include/linux/psi.h | 27 ++
i
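The per-window averaging follows the classic load average update: each sampling period, decay the old average and blend in the new sample. A sketch using the loadavg helper; the EXP_* constants match the upstream PSI implementation (1/exp(2s/window) in 11-bit fixed-point), while avg[] and sample are illustrative names:

    #define EXP_10s   1677    /* 1/exp(2s/10s) as fixed-point */
    #define EXP_60s   1981    /* 1/exp(2s/60s) */
    #define EXP_300s  2034    /* 1/exp(2s/300s) */

    /* avg = avg * e + sample * (1 - e), once per 2s sampling period */
    avg[0] = calc_load(avg[0], EXP_10s,  sample);
    avg[1] = calc_load(avg[1], EXP_60s,  sample);
    avg[2] = calc_load(avg[2], EXP_300s, sample);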
Hi,
I previously submitted a version of this patch set called "memdelay",
which translated delays from reclaim, swap-in, thrashing page cache
into a pressure percentage of lost walltime. I've since extended this
code to aggregate all delay states tracked by delayacct in order to
have generalized
There are several definitions of those functions/macros in places that
mess with fixed-point load averages. Provide an official version.
Signed-off-by: Johannes Weiner <han...@cmpxchg.org>
---
.../platforms/cell/cpufreq_spudemand.c| 2 +-
arch/powerpc/platforms/cell/spufs/s
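The consolidated definitions, as they read in include/linux/sched/loadavg.h after this change (shown for reference):

    #define FSHIFT      11              /* nr of bits of precision */
    #define FIXED_1     (1 << FSHIFT)   /* 1.0 as fixed-point */

    #define LOAD_INT(x)  ((x) >> FSHIFT)
    #define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1 - 1)) * 100)

    /* a1 = a0 * e + a * (1 - e) */
    static inline unsigned long
    calc_load(unsigned long load, unsigned long exp, unsigned long active)
    {
        unsigned long newload;

        newload = load * exp + active * (FIXED_1 - exp);
        if (newload > load)
            newload += FIXED_1 - 1;    /* round up while the average is rising */

        return newload / FIXED_1;
    }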
you on behalf of the program committee:
Anna Schumaker (Filesystems)
Jens Axboe (Storage)
Josef Bacik (Filesystems)
Martin K. Petersen (Storage)
Michal Hocko (MM)
Rik van Riel (MM)
Johannes Weiner
On Mon, Jan 09, 2017 at 09:30:05PM +0100, Jan Kara wrote:
> On Sat 07-01-17 21:02:00, Johannes Weiner wrote:
> > On Tue, Jan 03, 2017 at 01:28:25PM +0100, Jan Kara wrote:
> > > On Mon 02-01-17 16:11:36, Johannes Weiner wrote:
> > > > On Fri, Dec 23, 2016 at 03:33:29A
On Fri, Dec 23, 2016 at 03:33:29AM -0500, Johannes Weiner wrote:
> On Fri, Dec 23, 2016 at 02:32:41AM -0500, Johannes Weiner wrote:
> > On Thu, Dec 22, 2016 at 12:22:27PM -0800, Hugh Dickins wrote:
> > > On Wed, 21 Dec 2016, Linus Torvalds wrote:
> > > > On Wed,
On Fri, Dec 23, 2016 at 02:32:41AM -0500, Johannes Weiner wrote:
> On Thu, Dec 22, 2016 at 12:22:27PM -0800, Hugh Dickins wrote:
> > On Wed, 21 Dec 2016, Linus Torvalds wrote:
> > > On Wed, Dec 21, 2016 at 9:13 PM, Dave Chinner <da...@fromorbit.com> wrote:
> >
>
> /*
> + * If the request exceeds the readahead window, allow the read to
> + * be up to the optimal hardware IO size
> + */
> + if (req_size > max_pages && bdi->io_pages > max_pages)
> + max_pages = min(req_size, bdi->io_pages);
>
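Worked through with the numbers from the report below: a 256K read is 64 4K pages, while the default 128K readahead window caps max_pages at 32. If the device's optimal IO size (bdi->io_pages) is, say, 128 pages, the quoted check bumps max_pages to min(64, 128) = 64, and the read reaches the device as a single 256K request instead of two 128K ones.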
On Tue, Nov 15, 2016 at 03:41:58PM -0700, Jens Axboe wrote:
> On 11/15/2016 03:27 PM, Johannes Weiner wrote:
> > Hi Jens,
> >
> > On Thu, Nov 10, 2016 at 10:00:37AM -0700, Jens Axboe wrote:
> > > Hi,
> > >
> > > We ran into a funky issue, where
Hi Jens,
On Thu, Nov 10, 2016 at 10:00:37AM -0700, Jens Axboe wrote:
> Hi,
>
> We ran into a funky issue, where someone doing 256K buffered reads saw
> 128K requests at the device level. Turns out it is read-ahead capping
> the request size, since we use 128K as the default setting. This doesn't