On Mon, Aug 10, 2015 at 05:29:22PM +0200, Frederic Weisbecker wrote:
> Well, there could be a more proper way to do this without tying that
> to the scheduler tick. This could be some sort of
> hrtimer_cancel_soft() which more generally cancels a timer without
> cancelling the interrupt itself.
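The "soft cancel" idea above can be modelled in plain C: disarm only the callback and leave the hardware event programmed, so the interrupt still fires but does nothing. `hrtimer_cancel_soft()` is only a name proposed in this thread, not an existing kernel API; everything below is an illustrative stand-in, not kernel code.

```c
#include <stdbool.h>

/* Toy model of a timer whose hardware programming and callback can be
 * disarmed independently. */
struct soft_timer {
	bool enqueued;      /* armed in the (simulated) hardware */
	bool callback_live; /* callback should still run on expiry */
	int fired;          /* how many times the callback actually ran */
};

static void soft_timer_start(struct soft_timer *t)
{
	t->enqueued = true;
	t->callback_live = true;
}

/* Soft cancel: leave the interrupt programmed, only disarm the callback. */
static void soft_timer_cancel_soft(struct soft_timer *t)
{
	t->callback_live = false;
}

/* Interrupt fires: run the callback only if it was not soft-cancelled. */
static void soft_timer_expire(struct soft_timer *t)
{
	if (t->enqueued && t->callback_live)
		t->fired++;
	t->enqueued = false;
}
```

The point of the split is that a soft cancel avoids touching the clock event device on hot paths; the stale interrupt is absorbed cheaply by the expiry check.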

On Mon, 10 Aug 2015, Frederic Weisbecker wrote:
> I considered many times relying on hrtick btw but everyone seems to say it has
> a lot of overhead, especially due to clock reprogramming on schedule() calls.

Depends on how many ticks you can save I would think. It certainly is
worthwhile if you …
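The trade-off stated above is quantitative: hrtick pays a clock reprogramming cost on every schedule() call and in exchange saves periodic ticks. A toy break-even model (all numbers and the function name are invented for illustration, not measurements from the thread):

```c
/* Net cost of using hrtick over some interval: what reprogramming adds
 * on the schedule() path, minus what the skipped periodic ticks would
 * have cost. Negative means hrtick wins. */
static long hrtick_net_cost_ns(long reprogram_ns, long sched_calls,
			       long tick_ns, long ticks_saved)
{
	return reprogram_ns * sched_calls - tick_ns * ticks_saved;
}
```

With, say, a 1 µs reprogram on each of 10 schedule() calls against 5 saved ticks at 1 ms each, the balance is clearly in hrtick's favour; with no ticks saved it is pure overhead.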
On Mon, Aug 10, 2015 at 04:16:58PM +0200, Frederic Weisbecker wrote:
> I considered many times relying on hrtick btw but everyone seems to say it has
> a lot of overhead, especially due to clock reprogramming on schedule() calls.

Yeah, I have some vague ideas of how to take out much of that overhead …
On Mon, Aug 03, 2015 at 07:30:32PM +0200, Frederic Weisbecker wrote:
> > But you've forgotten about SCHED_DEADLINE, we count those in:
> > rq->dl.dl_nr_running.
>
> Indeed. Hmm, there is no preemption between SCHED_DEADLINE tasks, right?
> So I can treat it like SCHED_FIFO.
Sadly no. Even though E…
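The "sadly no" matters for the tick dependency: under EDF a waking SCHED_DEADLINE task preempts the running one whenever its absolute deadline is earlier, so a second runnable deadline task cannot simply be ignored the way SCHED_FIFO allows. A minimal sketch of that comparison, using the same wraparound-safe idea as the kernel's `dl_time_before()` (the `edf_should_preempt` helper is invented here for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Wraparound-safe "a is earlier than b" on unsigned time, the same
 * signed-difference trick the kernel uses for deadline comparisons. */
static bool dl_time_before(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) < 0;
}

/* Under EDF, preempt whenever the waking task's absolute deadline is
 * strictly earlier than the running task's. */
static bool edf_should_preempt(uint64_t waking_deadline,
			       uint64_t curr_deadline)
{
	return dl_time_before(waking_deadline, curr_deadline);
}
```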
On Mon, Aug 03, 2015 at 07:09:11PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 03, 2015 at 04:50:33PM +0200, Frederic Weisbecker wrote:
> > I think I could remove the context switch part. But then I need to find a
> > way to perform these checks on enqueue and dequeue task time:
>
> Uhm, but you al…
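Moving the check from the context switch to enqueue/dequeue time, as Frederic describes above, can be sketched in plain C. This is an illustration of the shape of the idea, not the posted patch; all names here are invented:

```c
#include <stdbool.h>

struct toy_rq {
	int nr_running;
	bool tick_needed;
};

/* A single runnable task can do without the periodic preemption tick;
 * two or more need it for timeslice/preemption accounting. */
static void toy_update_tick_dependency(struct toy_rq *rq)
{
	rq->tick_needed = rq->nr_running > 1;
}

/* Re-evaluate the dependency exactly where the runnable count changes,
 * instead of on every context switch. */
static void toy_enqueue_task(struct toy_rq *rq)
{
	rq->nr_running++;
	toy_update_tick_dependency(rq);
}

static void toy_dequeue_task(struct toy_rq *rq)
{
	rq->nr_running--;
	toy_update_tick_dependency(rq);
}
```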
On 07/23/2015 12:42 PM, Frederic Weisbecker wrote:
+static inline void sched_update_tick_dependency(struct rq *rq)
+{
+	int cpu;
+
+	if (!tick_nohz_full_enabled())
+		return;
+
+	cpu = cpu_of(rq);
+
+	if (!tick_nohz_full_cpu(rq->cpu))
+		return;
+
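The quoted function is cut off before its decision logic. A hedged guess at how the rest behaves, modelled as a standalone C sketch: once the CPU is known to be nohz_full, set or clear a per-CPU scheduler tick dependency bit depending on whether the tick can be stopped. The `tick_dep_*` names echo the mask API discussed in this thread, but the bodies and the `toy_` wrapper are stand-ins, not the patch:

```c
#include <stdbool.h>

#define TICK_DEP_BIT_SCHED 1u

/* One dependency word per (toy) CPU. */
static unsigned int tick_dep_mask[4];

static void tick_dep_set_cpu(int cpu, unsigned int bit)
{
	tick_dep_mask[cpu] |= bit;
}

static void tick_dep_clear_cpu(int cpu, unsigned int bit)
{
	tick_dep_mask[cpu] &= ~bit;
}

/* Publish the scheduler's tick requirement: more than one runnable
 * task, or a current task whose class needs periodic checks, keeps
 * the tick; otherwise the dependency is dropped. */
static void toy_sched_update_tick_dependency(int cpu, int nr_running,
					     bool curr_needs_tick)
{
	if (nr_running > 1 || curr_needs_tick)
		tick_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
	else
		tick_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
}
```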
Instead of providing asynchronous checks for the nohz subsystem to verify
sched tick dependency, migrate sched to the new mask.

The easiest approach is to recycle the current asynchronous tick dependency
check, which verifies the class of the current task and its requirements for
periodic preemption checks.
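The "new mask" model the changelog describes can be sketched as follows: instead of the nohz code asynchronously polling the scheduler, each subsystem publishes its tick requirement as a bit in a shared mask that the nohz path reads before stopping the tick. Plain C11 atomics stand in for the kernel's atomic ops, and all `toy_` names are invented for illustration:

```c
#include <stdatomic.h>
#include <stdbool.h>

enum {
	TOY_TICK_DEP_SCHED       = 1u << 0,
	TOY_TICK_DEP_POSIX_TIMER = 1u << 1,
};

static atomic_uint toy_tick_dep_mask;

/* A subsystem declares it needs the periodic tick. */
static void toy_tick_dep_set(unsigned int bit)
{
	atomic_fetch_or(&toy_tick_dep_mask, bit);
}

/* ...and withdraws the dependency when it no longer does. */
static void toy_tick_dep_clear(unsigned int bit)
{
	atomic_fetch_and(&toy_tick_dep_mask, ~bit);
}

/* The nohz path: the tick may be stopped only when no subsystem has
 * published a dependency. */
static bool toy_can_stop_tick(void)
{
	return atomic_load(&toy_tick_dep_mask) == 0;
}
```

The design point is that the check becomes a single read of the mask on the nohz side, while the bookkeeping cost moves to the (rarer) points where a dependency actually changes.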