On Fri, Sep 07, 2018 at 04:58:58PM +0200, Peter Zijlstra wrote:
> On Fri, Sep 07, 2018 at 10:44:22AM -0400, Johannes Weiner wrote:
>
> > > This does the whole seqcount thing 6x, which is a bit of a waste.
> >
> > [...]
> >
> > > It's a bit cumbersome, but that's because of C.
> >
> > I was actually debating exactly this with Suren before, but since
On Fri, Sep 07, 2018 at 10:44:22AM -0400, Johannes Weiner wrote:
> > This does the whole seqcount thing 6x, which is a bit of a waste.
>
> [...]
>
> > It's a bit cumbersome, but that's because of C.
>
> I was actually debating exactly this with Suren before, but since this
> is a super cold path
On Fri, Sep 07, 2018 at 12:24:58PM +0200, Peter Zijlstra wrote:
> On Tue, Aug 28, 2018 at 01:22:57PM -0400, Johannes Weiner wrote:
> > +static void psi_clock(struct work_struct *work)
> > +{
> > + struct delayed_work *dwork;
> > + struct psi_group *group;
> > + bool nonidle;
> > +
> > + dwork = to_delayed_work(work);
On Fri, Sep 07, 2018 at 12:16:34PM +0200, Peter Zijlstra wrote:
> On Tue, Aug 28, 2018 at 01:22:57PM -0400, Johannes Weiner wrote:
> > +enum psi_states {
> > + PSI_IO_SOME,
> > + PSI_IO_FULL,
> > + PSI_MEM_SOME,
> > + PSI_MEM_FULL,
> > + PSI_CPU_SOME,
> > + /* Only per-CPU, to weigh the CPU in the global average: */
On Tue, Aug 28, 2018 at 01:22:57PM -0400, Johannes Weiner wrote:
> +static void psi_clock(struct work_struct *work)
> +{
> + struct delayed_work *dwork;
> + struct psi_group *group;
> + bool nonidle;
> +
> + dwork = to_delayed_work(work);
> + group = container_of(dwork, struct p
On Fri, Sep 07, 2018 at 12:16:34PM +0200, Peter Zijlstra wrote:
> This does the whole seqcount thing 6x, which is a bit of a waste.
>
> struct snapshot {
> u32 times[NR_PSI_STATES];
> };
>
> static inline struct snapshot get_times_snapshot(struct psi_group *pg, int
> cpu)
> {
> struc
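Peter's snippet above cuts off, but the shape of the idea is clear: take ONE
seqcount read-side retry loop that snapshots all the per-CPU state times at
once, instead of doing the begin/retry dance once per state. Below is a
hedged userspace sketch of that pattern, with C11 atomics standing in for the
kernel's seqcount_t; the struct layout and names are illustrative, not the
actual kernel code.

```c
#include <stdatomic.h>
#include <stdint.h>

#define NR_PSI_STATES 6

struct psi_group_cpu {
	atomic_uint seq;               /* even = stable, odd = writer active */
	uint32_t times[NR_PSI_STATES];
};

struct snapshot {
	uint32_t times[NR_PSI_STATES];
};

/* Copy all six counters under a single seqcount retry loop. */
static struct snapshot get_times_snapshot(struct psi_group_cpu *groupc)
{
	struct snapshot snap;
	unsigned int seq;
	int s;

	do {
		/* acquire pairs with the writer's final release store */
		seq = atomic_load_explicit(&groupc->seq, memory_order_acquire);
		for (s = 0; s < NR_PSI_STATES; s++)
			snap.times[s] = groupc->times[s];
		/* order the data reads before re-checking the sequence */
		atomic_thread_fence(memory_order_acquire);
	} while ((seq & 1) ||
		 seq != atomic_load_explicit(&groupc->seq,
					     memory_order_relaxed));

	return snap;
}
```

The payoff is that the retry cost is paid once per CPU rather than once per
state, which is what the "6x" complaint above is about.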
On Tue, Aug 28, 2018 at 01:22:57PM -0400, Johannes Weiner wrote:
> +enum psi_states {
> + PSI_IO_SOME,
> + PSI_IO_FULL,
> + PSI_MEM_SOME,
> + PSI_MEM_FULL,
> + PSI_CPU_SOME,
> + /* Only per-CPU, to weigh the CPU in the global average: */
> + PSI_NONIDLE,
> > + NR_PSI_STATES,
On 08/28/2018 01:56 PM, Johannes Weiner wrote:
> On Tue, Aug 28, 2018 at 01:11:11PM -0700, Randy Dunlap wrote:
>> On 08/28/2018 10:22 AM, Johannes Weiner wrote:
>>> diff --git a/Documentation/accounting/psi.txt
>>> b/Documentation/accounting/psi.txt
>>> new file mode 100644
>>> index .
On Tue, Aug 28, 2018 at 01:11:11PM -0700, Randy Dunlap wrote:
> On 08/28/2018 10:22 AM, Johannes Weiner wrote:
> > diff --git a/Documentation/accounting/psi.txt
> > b/Documentation/accounting/psi.txt
> > new file mode 100644
> > index 000000000000..51e7ef14142e
> > --- /dev/null
> > +++ b/Documentation/accounting/psi.txt
On 08/28/2018 10:22 AM, Johannes Weiner wrote:
> diff --git a/Documentation/accounting/psi.txt
> b/Documentation/accounting/psi.txt
> new file mode 100644
> index 000000000000..51e7ef14142e
> --- /dev/null
> +++ b/Documentation/accounting/psi.txt
> @@ -0,0 +1,64 @@
> +=
When systems are overcommitted and resources become contended, it's
hard to tell exactly the impact this has on workload productivity, or
how close the system is to lockups and OOM kills. In particular, when
machines work multiple jobs concurrently, the impact of overcommit in
terms of latency and
On Wed, Aug 22, 2018 at 11:10:24AM +0200, Peter Zijlstra wrote:
> On Tue, Aug 21, 2018 at 04:11:15PM -0400, Johannes Weiner wrote:
> > On Fri, Aug 03, 2018 at 07:21:39PM +0200, Peter Zijlstra wrote:
> > > On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > > > +
On Tue, Aug 21, 2018 at 03:44:13PM -0400, Johannes Weiner wrote:
> > > + for (s = PSI_NONIDLE; s >= 0; s--) {
> > > + u32 time, delta;
> > > +
> > > + time = READ_ONCE(groupc->times[s]);
> > > + /*
> > > + * In addition to already concluded states, we
On Tue, Aug 21, 2018 at 04:11:15PM -0400, Johannes Weiner wrote:
> On Fri, Aug 03, 2018 at 07:21:39PM +0200, Peter Zijlstra wrote:
> > On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > > + time = READ_ONCE(groupc->times[s]);
> > > + /*
> > > +
On Fri, Aug 03, 2018 at 07:21:39PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > + time = READ_ONCE(groupc->times[s]);
> > + /*
> > + * In addition to already concluded states, we
> > +
Hi,
a quick update on that feedback before I send out v4:
On Fri, Aug 03, 2018 at 06:56:41PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > +static bool test_state(unsigned int *tasks, int cpu, enum psi_states state)
> > +{
> > + switch (state) {
On Mon, Aug 06, 2018 at 11:19:28AM -0400, Johannes Weiner wrote:
> On Fri, Aug 03, 2018 at 06:56:41PM +0200, Peter Zijlstra wrote:
> > On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > > + u32 uninitialized_var(nonidle);
> >
> > urgh.. I can see why the compiler got confused
On Mon, Aug 06, 2018 at 05:25:28PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 06, 2018 at 11:05:50AM -0400, Johannes Weiner wrote:
> > Argh, that's right. This needs an explicit count if we want to access
> > it locklessly. And you already said you didn't like that this is the
> > only state not derived purely from the task counters, so maybe this
On Mon, Aug 06, 2018 at 11:05:50AM -0400, Johannes Weiner wrote:
> Argh, that's right. This needs an explicit count if we want to access
> it locklessly. And you already said you didn't like that this is the
> only state not derived purely from the task counters, so maybe this is
> the way to go after all.
On Fri, Aug 03, 2018 at 07:07:33PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > +static bool psi_update_stats(struct psi_group *group)
> > +{
> > + u64 deltas[NR_PSI_STATES - 1] = { 0, };
> > + unsigned long missed_periods = 0;
> > + unsigned long nonidle_total = 0;
On Fri, Aug 03, 2018 at 06:56:41PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > +static bool psi_update_stats(struct psi_group *group)
> > +{
> > + u64 deltas[NR_PSI_STATES - 1] = { 0, };
> > + unsigned long missed_periods = 0;
> > + unsigned long nonidle_total = 0;
On Fri, Aug 03, 2018 at 06:56:41PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > +static bool test_state(unsigned int *tasks, int cpu, enum psi_states state)
> > +{
> > + switch (state) {
> > + case PSI_IO_SOME:
> > + return tasks[NR_IOWAIT];
On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> + time = READ_ONCE(groupc->times[s]);
> + /*
> + * In addition to already concluded states, we
> + * also incorporate currently active states on
> +
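The comment in the quoted code is truncated, but the sampling idea it
describes can be shown in isolation: `groupc->times[s]` holds only
*concluded* state time, so if state s is still active when we sample, the
time elapsed since it began has to be added on top. The helper below is a
sketch of that arithmetic only; the parameter names (state_mask,
state_start) are assumptions, not necessarily the patch's.

```c
#include <stdint.h>

/* recorded: concluded time for state s, as READ_ONCE'd from times[s]
 * state_mask: bitmask of currently active states on this CPU
 * state_start: timestamp when the currently active states began */
static uint32_t sample_state_time(uint32_t recorded, uint32_t state_mask,
				  int s, uint64_t state_start, uint64_t now)
{
	uint32_t time = recorded;

	if (state_mask & (1u << s))        /* state still active? */
		time += (uint32_t)(now - state_start);

	return time;
}
```

With 100 units concluded and a state active for the last 30 units, the
sample is 130; an inactive state samples the concluded time unchanged.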
On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> + /* total= */
> + for (s = 0; s < NR_PSI_STATES - 1; s++)
> + group->total[s] += div_u64(deltas[s], max(nonidle_total, 1UL));
Just a nit; probably not worth fixing.
This loses the remainder of that division.
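To make the nit concrete: `deltas[s] / nonidle_total` truncates, dropping up
to nonidle_total - 1 units every period. The standard fix, sketched below
purely for illustration (the thread leaves this as not worth fixing), is to
carry the remainder into the next period's dividend so nothing is lost over
time.

```c
#include <stdint.h>

struct accum {
	uint64_t total;
	uint64_t rem;      /* remainder carried between periods */
};

static void add_period(struct accum *a, uint64_t delta, uint64_t nonidle)
{
	uint64_t d = delta + a->rem;

	if (nonidle < 1)
		nonidle = 1;               /* mirrors max(nonidle_total, 1UL) */
	a->total += d / nonidle;
	a->rem = d % nonidle;
}
```

Two periods of delta=10 with nonidle=4 accumulate total=5 exactly, where the
truncating version would accumulate 4 and silently drop 2 units.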
On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> +static bool psi_update_stats(struct psi_group *group)
> +{
> + u64 deltas[NR_PSI_STATES - 1] = { 0, };
> + unsigned long missed_periods = 0;
> + unsigned long nonidle_total = 0;
> + u64 now, expires, period;
> +
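The `missed_periods` bookkeeping declared above can be sketched as follows:
if the averaging clock fires late, the number of whole periods skipped falls
out of how far `now` overshot `expires`. The 2s PSI_FREQ value and the exact
rounding here are assumptions for illustration, not necessarily what the
patch does.

```c
#include <stdint.h>

#define PSI_FREQ (2ULL * 1000000000ULL)   /* assumed 2s period, in ns */

static uint64_t calc_missed_periods(uint64_t now, uint64_t expires)
{
	/* on time (or early): no whole period was missed */
	if (now < expires + PSI_FREQ)
		return 0;
	/* otherwise count whole periods of lateness */
	return (now - expires) / PSI_FREQ;
}
```

A clock that fires 20s after its expiry missed ten 2s periods; one that
fires within the next period missed none.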
On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> +static bool test_state(unsigned int *tasks, int cpu, enum psi_states state)
> +{
> + switch (state) {
> + case PSI_IO_SOME:
> + return tasks[NR_IOWAIT];
> + case PSI_IO_FULL:
> + return tasks[NR_
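The quoted switch is cut off, but the general PSI rule it encodes is: SOME
means at least one task is stalled on the resource, FULL means tasks are
stalled and nothing is productively running. The sketch below is a hedged
completion along those lines; the FULL and CPU conditions follow that
definition and may differ in detail from the patch (the `cpu` argument is
dropped here for simplicity).

```c
#include <stdbool.h>

enum { NR_IOWAIT, NR_MEMSTALL, NR_RUNNING, NR_PSI_TASK_COUNTS };
enum psi_states { PSI_IO_SOME, PSI_IO_FULL, PSI_MEM_SOME, PSI_MEM_FULL,
		  PSI_CPU_SOME, PSI_NONIDLE, NR_PSI_STATES };

static bool test_state(unsigned int *tasks, enum psi_states state)
{
	switch (state) {
	case PSI_IO_SOME:
		return tasks[NR_IOWAIT];
	case PSI_IO_FULL:   /* stalled on IO with nothing running */
		return tasks[NR_IOWAIT] && !tasks[NR_RUNNING];
	case PSI_MEM_SOME:
		return tasks[NR_MEMSTALL];
	case PSI_MEM_FULL:  /* stalled on memory with nothing running */
		return tasks[NR_MEMSTALL] && !tasks[NR_RUNNING];
	case PSI_CPU_SOME:  /* more runnable tasks than this CPU can run */
		return tasks[NR_RUNNING] > 1;
	case PSI_NONIDLE:   /* CPU was doing anything at all */
		return tasks[NR_IOWAIT] || tasks[NR_MEMSTALL] ||
		       tasks[NR_RUNNING];
	default:
		return false;
	}
}
```

Note how every state is derived purely from the task counters, which is the
property the discussion above (about PSI_NONIDLE) keeps coming back to.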