On Fri, 07 Nov, at 01:38:36PM, Peter Zijlstra wrote:
>
> For optional goodness:
>
> if (nr_limbo > max_scan_size)
> break;
>
> Which will limit the number of RMIDs you'll scan from the IPI, and
> thereby limit the time taken there.
To limit the amount of magic
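A user-space sketch of the cap Peter suggests: stop scanning limbo RMIDs once a per-IPI budget is exhausted. `MAX_SCAN_SIZE` and `scan_limbo()` are illustrative names, not from the actual patch.

```c
#define MAX_SCAN_SIZE 8 /* hypothetical per-IPI budget */

/* Returns how many limbo RMIDs one pass touches before the budget
 * check fires; the real code would read occupancy for each one. */
static int scan_limbo(int nr_limbo_total)
{
	int nr_limbo = 0;

	for (int i = 0; i < nr_limbo_total; i++) {
		if (nr_limbo > MAX_SCAN_SIZE)
			break; /* bound the time spent inside the IPI */
		nr_limbo++;
	}
	return nr_limbo;
}
```

Note that with the quoted `>` comparison the pass touches `MAX_SCAN_SIZE + 1` entries before breaking, which is still a hard bound on IPI time.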
On Mon, Nov 10, 2014 at 09:31:40PM +, Matt Fleming wrote:
> Actually, yeah, that does look like it'd work. Are you OK with me adding
> an enum to the cqm_rmid_entry? You had concerns in the past about
> growing the size of the struct.
That was because I was thinking you did
On Fri, 07 Nov, at 01:34:31PM, Peter Zijlstra wrote:
> On Thu, Nov 06, 2014 at 12:23:21PM +, Matt Fleming wrote:
> > + min_queue_time = entry->queue_time +
> > + msecs_to_jiffies(__rotation_period);
> > +
> > + if (time_after(min_queue_time, now))
> > +
On Mon, Nov 10, 2014 at 08:43:53PM +, Matt Fleming wrote:
> On Fri, 07 Nov, at 01:06:12PM, Peter Zijlstra wrote:
> > On Thu, Nov 06, 2014 at 12:23:21PM +, Matt Fleming wrote:
> > > +/*
> > > + * Exchange the RMID of a group of events.
> > > + */
> > > +static unsigned int
> > > +intel_cqm_xchg_rmid(struct
On Fri, 07 Nov, at 01:20:52PM, Peter Zijlstra wrote:
> On Thu, Nov 06, 2014 at 12:23:21PM +, Matt Fleming wrote:
> > +/*
> > + * If we fail to assign a new RMID for intel_cqm_rotation_rmid because
> > + * cachelines are still tagged with RMIDs in limbo, we progressively
> > + * increment the threshold
On Fri, 07 Nov, at 01:18:03PM, Peter Zijlstra wrote:
> On Thu, Nov 06, 2014 at 12:23:21PM +, Matt Fleming wrote:
> > + /*
> > +	 * Test whether an RMID is free for each package.
> > +	 */
> > + preempt_disable();
> > + smp_call_function_many(&cqm_cpumask, intel_cqm_stable, NULL, true);
>
On Mon, Nov 10, 2014 at 08:56:53PM +, Matt Fleming wrote:
> > Should we initialize that to a finite value? Surely results are absolute
> > crap if we do indeed reach that max?
>
> I don't think we'll ever reach that max, it'll bottom out once it
> reaches the size of the LLC, since the pathological
On Fri, 07 Nov, at 01:06:12PM, Peter Zijlstra wrote:
> On Thu, Nov 06, 2014 at 12:23:21PM +, Matt Fleming wrote:
> > +/*
> > + * Exchange the RMID of a group of events.
> > + */
> > +static unsigned int
> > +intel_cqm_xchg_rmid(struct perf_event *group, unsigned int rmid)
> > +{
> > + struct perf_event
On Thu, Nov 06, 2014 at 12:23:21PM +, Matt Fleming wrote:
> +static unsigned int __rotation_period = 250; /* ms */
There's two things being conflated here I think, even though they're
related.
The one is how long we'll let RMIDs settle before trying to reuse them,
the second is how often we
On Fri, Nov 07, 2014 at 01:34:31PM +0100, Peter Zijlstra wrote:
>
> enum rmid_cycle_state {
> RMID_AVAILABLE = 0,
> RMID_LIMBO,
> RMID_YOUNG,
> };
>
> struct cqm_rmid_entry {
> ...
> enum rmid_cycle_state state;
> };
>
> static void __intel_sqm_stable(void *arg)
> {
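A user-space sketch of Peter's proposed per-RMID state tracking: each recycled RMID moves `RMID_YOUNG` → `RMID_LIMBO` → `RMID_AVAILABLE` as the stability scan observes low enough occupancy. The body of `rmid_stable_step()` paraphrases what `__intel_sqm_stable()` would do per entry; it is a sketch, not the patch's code.

```c
enum rmid_cycle_state {
	RMID_AVAILABLE = 0,
	RMID_LIMBO,
	RMID_YOUNG,
};

struct cqm_rmid_entry {
	unsigned int rmid;
	enum rmid_cycle_state state;
};

/* One scan step for one entry: promote it toward AVAILABLE once its
 * measured occupancy has dropped to the acceptable threshold. */
static void rmid_stable_step(struct cqm_rmid_entry *e,
			     unsigned long occupancy,
			     unsigned long threshold)
{
	switch (e->state) {
	case RMID_YOUNG:
		e->state = RMID_LIMBO; /* old enough to start checking */
		break;
	case RMID_LIMBO:
		if (occupancy <= threshold)
			e->state = RMID_AVAILABLE;
		break;
	case RMID_AVAILABLE:
		break;
	}
}
```

Keeping the state in the entry avoids growing a separate bitmap, which is what the thread's size concern was about.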
On Thu, Nov 06, 2014 at 12:23:21PM +, Matt Fleming wrote:
> +/*
> + * Test whether an RMID has a zero occupancy value on this cpu.
> + */
> +static void intel_cqm_stable(void *arg)
> +{
> + unsigned int nr_bits;
> + int i = -1;
> +
> + nr_bits = cqm_max_rmid + 1;
> +
> + for (;
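A sketch of the per-package stability test the truncated loop performs: walk every RMID in a limbo bitmap and clear the ones whose occupancy reads back as zero. `__rmid_read_occupancy()` and `fake_occupancy[]` are stand-ins for the real `MSR_IA32_QM_EVTSEL`/`QM_CTR` reads.

```c
#define MAX_RMID 32

static unsigned long limbo_bitmap;             /* bit n set: RMID n in limbo */
static unsigned long fake_occupancy[MAX_RMID]; /* stand-in for QM_CTR reads */

static unsigned long __rmid_read_occupancy(unsigned int rmid)
{
	return fake_occupancy[rmid]; /* an MSR read in the real driver */
}

/* Analogue of intel_cqm_stable(): meant to run on one cpu per package. */
static void cqm_stable_scan(void)
{
	for (unsigned int i = 0; i < MAX_RMID; i++) {
		if (!(limbo_bitmap & (1UL << i)))
			continue;
		if (__rmid_read_occupancy(i) == 0)
			limbo_bitmap &= ~(1UL << i); /* free on this package */
	}
}
```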
On Thu, Nov 06, 2014 at 12:23:21PM +, Matt Fleming wrote:
> +/*
> + * If we fail to assign a new RMID for intel_cqm_rotation_rmid because
> + * cachelines are still tagged with RMIDs in limbo, we progressively
> + * increment the threshold until we find an RMID in limbo with <=
> + *
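A sketch of the progressive-threshold idea described in that comment: when no limbo RMID ever reads zero occupancy, keep raising the acceptable-dirtiness threshold until one qualifies. Matt's follow-up notes this bottoms out at the LLC size; the function and its unit-step policy are illustrative, not the patch's.

```c
static unsigned long
pick_threshold(const unsigned long *occupancy, unsigned int nr,
	       unsigned long llc_size)
{
	for (unsigned long threshold = 0; threshold <= llc_size; threshold++) {
		for (unsigned int i = 0; i < nr; i++) {
			if (occupancy[i] <= threshold)
				return threshold; /* frees at least one RMID */
		}
	}
	return llc_size; /* worst case: accept anything up to the LLC size */
}
```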
On Thu, Nov 06, 2014 at 12:23:21PM +, Matt Fleming wrote:
> + /*
> + * Test whether an RMID is free for each package.
> + */
> + preempt_disable();
> + smp_call_function_many(&cqm_cpumask, intel_cqm_stable, NULL, true);
> + preempt_enable();
Same problem again, the current
On Thu, Nov 06, 2014 at 12:23:21PM +, Matt Fleming wrote:
> +/*
> + * Exchange the RMID of a group of events.
> + */
> +static unsigned int
> +intel_cqm_xchg_rmid(struct perf_event *group, unsigned int rmid)
> +{
> + struct perf_event *event;
> + unsigned int old_rmid = group->hw.cqm_rmid;
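A simplified model of what `intel_cqm_xchg_rmid()` does: every event in the group switches to the new RMID and the old one is returned so the caller can recycle it. The real code walks the perf sibling list; here the group is just an array, and this `struct perf_event` is a local stand-in, not the kernel's.

```c
struct perf_event {
	unsigned int cqm_rmid;
};

static unsigned int
cqm_xchg_rmid(struct perf_event *group, unsigned int nr_members,
	      unsigned int rmid)
{
	unsigned int old_rmid = group[0].cqm_rmid; /* leader's current RMID */

	for (unsigned int i = 0; i < nr_members; i++)
		group[i].cqm_rmid = rmid; /* the whole group shares one RMID */

	return old_rmid;
}
```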
From: Matt Fleming
There are many use cases where people will want to monitor more tasks
than there exist RMIDs in the hardware, meaning that we have to perform
some kind of multiplexing.
We do this by "rotating" the RMIDs in a workqueue, and assigning an RMID
to a waiting event when the RMID
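A toy model of the multiplexing the cover letter describes: more monitored tasks than hardware RMIDs, so a rotation step takes an RMID away from a current owner and hands it to a waiting task. The policy and names are illustrative; the real driver also parks a stolen RMID in limbo until its occupancy settles before reassigning it.

```c
#define NR_TASKS 4

static int task_rmid[NR_TASKS]; /* RMID owned by task i, or -1 if waiting */

static void rotate_once(void)
{
	int waiter = -1, victim = -1;

	for (int i = 0; i < NR_TASKS; i++) {
		if (task_rmid[i] < 0 && waiter < 0)
			waiter = i; /* first task with no RMID */
		if (task_rmid[i] >= 0 && victim < 0)
			victim = i; /* first task we can steal from */
	}
	if (waiter < 0 || victim < 0)
		return; /* nothing to multiplex */

	task_rmid[waiter] = task_rmid[victim];
	task_rmid[victim] = -1;
}
```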