Re: [Intel-gfx] [PATCH 18/19] drm/i915/execlists: Direct submission (avoid tasklet/ksoftirqd)

2018-05-18 Thread Chris Wilson
Quoting Tvrtko Ursulin (2018-05-17 14:13:00)
> 
> On 17/05/2018 08:40, Chris Wilson wrote:
> > Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
> > bottom half"), we came to the conclusion that running our CSB processing
> > and ELSP submission from inside the irq handler was a bad idea. A really
> > bad idea as we could impose nearly 1s latency on other users of the
> > system, on average! Deferring our work to a tasklet allowed us to do the
> > processing with irqs enabled, reducing the impact to an average of about
> > 50us.
> > 
> > We have since eradicated the use of forcewaked mmio from inside the CSB
> > processing and ELSP submission, bringing the impact down to around 5us
> > (on Kabylake); an order of magnitude better than our measurements 2
> > years ago on Broadwell and only about 2x worse on average than the
> > gem_syslatency on an unladen system.
> > 
> > Comparing the impact on the maximum latency observed over a 120s interval,
> > repeated several times (using gem_syslatency, similar to RT's cyclictest)
> > while the system is fully laden with i915 nops, we see that direct
> > submission definitely worsens the response but not to the same outlandish
> > degree as before.
> > 
> > x Unladen baseline
> > + Using tasklet
> > * Direct submission
> > 
> > +------------------------------------+
> > |xx x  +++++ +   *  * *   ** *** *  *|
> > ||A|  |__AM__|   |_A_M___|           |
> > +------------------------------------+
> 
> What are these headers? This one and below, I cannot decipher them at all.
> 
> >     N         Min         Max      Median         Avg      Stddev
> > x  10           5          18          10         9.3   3.6530049
> > +  10          72         120         108       102.9   15.758243
> > *  10         255         348         316       305.7    28.74814
> 
> In micro-seconds? so tasklet is 108us median? Direct submission 316us 
> median?

Perspective, ivb ringbuf:

x syslatency-ringbuf-mean.txt
+ syslatency-ringbuf-max.txt
+------------------------+
|   xx   +               |
|   xx   +               |
|   xx   +               |
|   xx  ++               |
|     +  +  +  ++ + ++   |
|x   + +  +  ++  +  ++  +|
|  |AM |__A__M___|       |
+------------------------+
    N         Min         Max      Median         Avg      Stddev
x  30       5.395      17.893      13.626   13.135367   2.2143809
+  30          33         169         143   135.86667   25.609445

Using execlists+tasklet is on par with the irqoff (i.e. max) latency of
ringbuf. (There should be very little irq disabling for ringbuf.) Normal
average latency of execlists+direct_submission is still better than ivb,
but that is almost entirely down to generational differences.
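
For anyone unfamiliar with the tool: gem_syslatency measures scheduling
latency in the spirit of RT's cyclictest, i.e. a thread repeatedly sleeps
to an absolute deadline and records how late each wakeup was, while the
rest of the system hammers i915. A minimal sketch of that kind of
measurement loop (this is not the igt source; the period, iteration count
and the lack of RT-priority setup are simplifications):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define PERIOD_NS 100000 /* 100us period, illustrative only */

static uint64_t ts_ns(const struct timespec *ts)
{
	return (uint64_t)ts->tv_sec * 1000000000ull + ts->tv_nsec;
}

int main(void)
{
	struct timespec target, now;
	uint64_t lat, max = 0, sum = 0, n;

	clock_gettime(CLOCK_MONOTONIC, &target);
	for (n = 0; n < 100000; n++) {
		/* Request a wakeup PERIOD_NS after the previous target. */
		target.tv_nsec += PERIOD_NS;
		while (target.tv_nsec >= 1000000000) {
			target.tv_nsec -= 1000000000;
			target.tv_sec++;
		}
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &target, NULL);
		clock_gettime(CLOCK_MONOTONIC, &now);

		/* Latency: how far past the requested deadline we woke up. */
		lat = ts_ns(&now) - ts_ns(&target);
		sum += lat;
		if (lat > max)
			max = lat;
	}

	printf("latency mean=%.3fus max=%.3fus\n",
	       sum / 1000.0 / n, max / 1000.0);
	return 0;
}

Each sample in the ministat tables is the maximum latency observed over
one such 120s run.
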
-Chris


Re: [Intel-gfx] [PATCH 18/19] drm/i915/execlists: Direct submission (avoid tasklet/ksoftirqd)

2018-05-18 Thread Chris Wilson
Quoting Tvrtko Ursulin (2018-05-18 09:06:03)
> 
> On 17/05/2018 18:07, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2018-05-17 14:13:00)
> >>
> >> On 17/05/2018 08:40, Chris Wilson wrote:
> >>> Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
> >>> bottom half"), we came to the conclusion that running our CSB processing
> >>> and ELSP submission from inside the irq handler was a bad idea. A really
> >>> bad idea as we could impose nearly 1s latency on other users of the
> >>> system, on average! Deferring our work to a tasklet allowed us to do the
> >>> processing with irqs enabled, reducing the impact to an average of about
> >>> 50us.
> >>>
> >>> We have since eradicated the use of forcewaked mmio from inside the CSB
> >>> processing and ELSP submission, bringing the impact down to around 5us
> >>> (on Kabylake); an order of magnitude better than our measurements 2
> >>> years ago on Broadwell and only about 2x worse on average than the
> >>> gem_syslatency on an unladen system.
> >>>
> >>> Comparing the impact on the maximum latency observed over a 120s interval,
> >>> repeated several times (using gem_syslatency, similar to RT's cyclictest)
> >>> while the system is fully laden with i915 nops, we see that direct
> >>> submission definitely worsens the response but not to the same outlandish
> >>> degree as before.
> >>>
> >>> x Unladen baseline
> >>> + Using tasklet
> >>> * Direct submission
> >>>
> >>> +------------------------------------+
> >>> |xx x  +++++ +   *  * *   ** *** *  *|
> >>> ||A|  |__AM__|   |_A_M___|           |
> >>> +------------------------------------+
> >>
> >> What are these headers? This one and below, I cannot decipher them at all.
> > 
> > Ministat histogram. The headers are the labels for the charts; it's a
> > bit flat, so it's hard to tell it's a histogram.
> > 
> >>>     N         Min         Max      Median         Avg      Stddev
> >>> x  10           5          18          10         9.3   3.6530049
> >>> +  10          72         120         108       102.9   15.758243
> >>> *  10         255         348         316       305.7    28.74814
> >>
> >> In micro-seconds? so tasklet is 108us median? Direct submission 316us
> >> median?
> > 
> > Yup, more runs required so you have prettier graphs and units.
> 
> The biggest problem for me is that in these tests it only shows as
> significantly worse than the current tasklet. So it is kind of difficult
> to sell the series. :)

As a reference point, on bdw from before we introduced tasklets:
gem_syslatency: cycles=20849458, latency mean=678.892us max=8308500us
gem_syslatency: cycles=22964600, latency mean=1583.312us max=5991894us
gem_syslatency: cycles=21404190, latency mean=1766.220us max=12423925us
gem_syslatency: cycles=22779405, latency mean=1698.282us max=9110117us
gem_syslatency: cycles=22021655, latency mean=1921.855us max=7193398us

Message from syslogd@broadwell at May 18 18:27:47 ...
 kernel:[ 8991.394488] NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [gem_syslatency:1800]
 gem_syslatency: cycles=21218381, latency mean=1711.088us max=13932087us
 gem_syslatency: cycles=22189636, latency mean=1724.968us max=15483237us
 gem_syslatency: cycles=21941275, latency mean=1697.738us max=6065099us
 gem_syslatency: cycles=21896529, latency mean=1748.536us max=8007063us
 gem_syslatency: cycles=22053588, latency mean=1677.465us max=6604678us
 gem_syslatency: cycles=21657859, latency mean=1570.085us max=10346811us
 gem_syslatency: cycles=21627270, latency mean=1227.782us max=10489759us
 gem_syslatency: cycles=22441025, latency mean=1635.776us max=5932223us
 gem_syslatency: cycles=21890354, latency mean=1790.042us max=7956373us
 gem_syslatency: cycles=20976620, latency mean=989.215us max=11631632us
 gem_syslatency: cycles=22087242, latency mean=1723.138us max=12337920us
 gem_syslatency: cycles=22800746, latency mean=1749.050us max=7080445us

* at this point, the sata driver ate itself and took out the fs.
-Chris


Re: [Intel-gfx] [PATCH 18/19] drm/i915/execlists: Direct submission (avoid tasklet/ksoftirqd)

2018-05-18 Thread Chris Wilson
Quoting Tvrtko Ursulin (2018-05-18 09:06:03)
> 
> On 17/05/2018 18:07, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2018-05-17 14:13:00)
> >>
> >> On 17/05/2018 08:40, Chris Wilson wrote:
> >>> Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
> >>> bottom half"), we came to the conclusion that running our CSB processing
> >>> and ELSP submission from inside the irq handler was a bad idea. A really
> >>> bad idea as we could impose nearly 1s latency on other users of the
> >>> system, on average! Deferring our work to a tasklet allowed us to do the
> >>> processing with irqs enabled, reducing the impact to an average of about
> >>> 50us.
> >>>
> >>> We have since eradicated the use of forcewaked mmio from inside the CSB
> >>> processing and ELSP submission, bringing the impact down to around 5us
> >>> (on Kabylake); an order of magnitude better than our measurements 2
> >>> years ago on Broadwell and only about 2x worse on average than the
> >>> gem_syslatency on an unladen system.
> >>>
> >>> Comparing the impact on the maximum latency observed over a 120s interval,
> >>> repeated several times (using gem_syslatency, similar to RT's cyclictest)
> >>> while the system is fully laden with i915 nops, we see that direct
> >>> submission definitely worsens the response but not to the same outlandish
> >>> degree as before.
> >>>
> >>> x Unladen baseline
> >>> + Using tasklet
> >>> * Direct submission
> >>>
> >>> +------------------------------------+
> >>> |xx x  +++++ +   *  * *   ** *** *  *|
> >>> ||A|  |__AM__|   |_A_M___|           |
> >>> +------------------------------------+
> >>
> >> What are these headers? This one and below, I cannot decipher them at all.
> > 
> > Ministat histogram. The headers are the labels for the charts; it's a
> > bit flat, so it's hard to tell it's a histogram.
> > 
> >>>     N         Min         Max      Median         Avg      Stddev
> >>> x  10           5          18          10         9.3   3.6530049
> >>> +  10          72         120         108       102.9   15.758243
> >>> *  10         255         348         316       305.7    28.74814
> >>
> >> In micro-seconds? so tasklet is 108us median? Direct submission 316us
> >> median?
> > 
> > Yup, more runs required so you have prettier graphs and units.
> 
> The biggest problem for me is that in these tests it only shows as
> significantly worse than the current tasklet. So it is kind of difficult
> to sell the series. :)

Ba! Compared to the previous best state 2 years ago, we're an order of
magnitude better. (Unbelievable!)

I think it's simply a lack of a reference point: we're about 100% faster
in some microbenchmark stress igts, about 10% faster in regular stress
igts, and will not suffer some of the 200ms timeout -> wedged!

Any latency is bad, but at what point do we worry about the trade-off
between our latency and the rest of the system?
 
> If it only solves issues when submitting from an RT task then it is not
> so interesting. RT tasks are rare, and ones which deal with 3d/media/ocl
> are probably even rarer. Or do you know of any?

No, it's not just an issue from RT. We see examples ranging from the driver
being so starved that we declare the system wedged, to much more mundane
artifacts of our throughput being diminished due to the latency in
submitting work.
 
> Or perhaps you need to add data from gem_latency as well if that shows 
> some wins purely on the i915 side?

The patch before this one (process CSB from the interrupt handler) is
justifiable just by the bugs it solves. This patch, I think, is justifiable
by our perf gains and the impact it has for low-latency requirements. Or we
can wait until the next patch, which tries to reduce the lock contention by
splitting the data structures, is mature.
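
For reference, the mechanical change this patch makes to the interrupt
path is small. Reconstructed from the posted diff (so read it as a sketch
of the end state rather than the authoritative source), the gen8 CS
interrupt handler ends up as:

static void
gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
{
	/* Process the CSB and submit to ELSP directly from the irq handler. */
	if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
		intel_engine_handle_execlists_irq(engine);

	if (iir & GT_RENDER_USER_INTERRUPT) {
		/* Only GuC submission still defers to the tasklet. */
		if (intel_engine_uses_guc(engine))
			tasklet_hi_schedule(&engine->execlists.tasklet);

		notify_ring(engine);
	}
}

i.e. the execlists tasklet is no longer scheduled for the context-switch
interrupt at all; that work is done inline.
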
-Chris


Re: [Intel-gfx] [PATCH 18/19] drm/i915/execlists: Direct submission (avoid tasklet/ksoftirqd)

2018-05-18 Thread Tvrtko Ursulin


On 17/05/2018 18:07, Chris Wilson wrote:

Quoting Tvrtko Ursulin (2018-05-17 14:13:00)


On 17/05/2018 08:40, Chris Wilson wrote:

Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
bottom half"), we came to the conclusion that running our CSB processing
and ELSP submission from inside the irq handler was a bad idea. A really
bad idea as we could impose nearly 1s latency on other users of the
system, on average! Deferring our work to a tasklet allowed us to do the
processing with irqs enabled, reducing the impact to an average of about
50us.

We have since eradicated the use of forcewaked mmio from inside the CSB
processing and ELSP submission, bringing the impact down to around 5us
(on Kabylake); an order of magnitude better than our measurements 2
years ago on Broadwell and only about 2x worse on average than the
gem_syslatency on an unladen system.

Comparing the impact on the maximum latency observed over a 120s interval,
repeated several times (using gem_syslatency, similar to RT's cyclictest)
while the system is fully laden with i915 nops, we see that direct
submission definitely worsens the response but not to the same outlandish
degree as before.

x Unladen baseline
+ Using tasklet
* Direct submission

+------------------------------------+
|xx x  +++++ +   *  * *   ** *** *  *|
||A|  |__AM__|   |_A_M___|           |
+------------------------------------+


What are these headers? This one and below, I cannot decipher them at all.


Ministat histogram. The headers are the labels for the charts; it's a
bit flat, so it's hard to tell it's a histogram.


    N         Min         Max      Median         Avg      Stddev
x  10           5          18          10         9.3   3.6530049
+  10          72         120         108       102.9   15.758243
*  10         255         348         316       305.7    28.74814


In micro-seconds? so tasklet is 108us median? Direct submission 316us
median?


Yup, more runs required so you have prettier graphs and units.


The biggest problem for me is that in these tests it only shows as
significantly worse than the current tasklet. So it is kind of difficult
to sell the series. :)


If it only solves issues when submitting from an RT task then it is not
so interesting. RT tasks are rare, and ones which deal with 3d/media/ocl
are probably even rarer. Or do you know of any?


Or perhaps you need to add data from gem_latency as well if that shows 
some wins purely on the i915 side?


Regards,

Tvrtko


And with a background load


This is IO background load?


Yes, background writeout.
-Chris




Re: [Intel-gfx] [PATCH 18/19] drm/i915/execlists: Direct submission (avoid tasklet/ksoftirqd)

2018-05-17 Thread Chris Wilson
Quoting Tvrtko Ursulin (2018-05-17 14:13:00)
> 
> On 17/05/2018 08:40, Chris Wilson wrote:
> > Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
> > bottom half"), we came to the conclusion that running our CSB processing
> > and ELSP submission from inside the irq handler was a bad idea. A really
> > bad idea as we could impose nearly 1s latency on other users of the
> > system, on average! Deferring our work to a tasklet allowed us to do the
> > processing with irqs enabled, reducing the impact to an average of about
> > 50us.
> > 
> > We have since eradicated the use of forcewaked mmio from inside the CSB
> > processing and ELSP submission, bringing the impact down to around 5us
> > (on Kabylake); an order of magnitude better than our measurements 2
> > years ago on Broadwell and only about 2x worse on average than the
> > gem_syslatency on an unladen system.
> > 
> > Comparing the impact on the maximum latency observed over a 120s interval,
> > repeated several times (using gem_syslatency, similar to RT's cyclictest)
> > while the system is fully laden with i915 nops, we see that direct
> > submission definitely worsens the response but not to the same outlandish
> > degree as before.
> > 
> > x Unladen baseline
> > + Using tasklet
> > * Direct submission
> > 
> > +------------------------------------+
> > |xx x  +++++ +   *  * *   ** *** *  *|
> > ||A|  |__AM__|   |_A_M___|           |
> > +------------------------------------+
> 
> What are these headers? This one and below, I cannot decipher them at all.

Ministat histogram. The headers are the labels for the charts; it's a
bit flat, so it's hard to tell it's a histogram.

> >     N         Min         Max      Median         Avg      Stddev
> > x  10           5          18          10         9.3   3.6530049
> > +  10          72         120         108       102.9   15.758243
> > *  10         255         348         316       305.7    28.74814
> 
> In micro-seconds? so tasklet is 108us median? Direct submission 316us 
> median?

Yup, more runs required so you have prettier graphs and units.

> > And with a background load
> 
> This is IO background load?

Yes, background writeout.
-Chris


Re: [Intel-gfx] [PATCH 18/19] drm/i915/execlists: Direct submission (avoid tasklet/ksoftirqd)

2018-05-17 Thread Tvrtko Ursulin


On 17/05/2018 08:40, Chris Wilson wrote:

Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
bottom half"), we came to the conclusion that running our CSB processing
and ELSP submission from inside the irq handler was a bad idea. A really
bad idea as we could impose nearly 1s latency on other users of the
system, on average! Deferring our work to a tasklet allowed us to do the
processing with irqs enabled, reducing the impact to an average of about
50us.

We have since eradicated the use of forcewaked mmio from inside the CSB
processing and ELSP submission, bringing the impact down to around 5us
(on Kabylake); an order of magnitude better than our measurements 2
years ago on Broadwell and only about 2x worse on average than the
gem_syslatency on an unladen system.

Comparing the impact on the maximum latency observed over a 120s interval,
repeated several times (using gem_syslatency, similar to RT's cyclictest)
while the system is fully laden with i915 nops, we see that direct
submission definitely worsens the response but not to the same outlandish
degree as before.

x Unladen baseline
+ Using tasklet
* Direct submission

+------------------------------------+
|xx x  +++++ +   *  * *   ** *** *  *|
||A|  |__AM__|   |_A_M___|           |
+------------------------------------+


What are these headers? This one and below, I cannot decipher them at all.


    N         Min         Max      Median         Avg      Stddev
x  10           5          18          10         9.3   3.6530049
+  10          72         120         108       102.9   15.758243
*  10         255         348         316       305.7    28.74814


In micro-seconds? so tasklet is 108us median? Direct submission 316us 
median?




And with a background load


This is IO background load?

Regards,

Tvrtko



+------------------------------------+
|x  +   *  *                         |
|x+ + + + +  + +* * ** ++  * *   *  *|
|A|___A_|__|___A___M__|              |
+------------------------------------+
    N         Min         Max      Median         Avg      Stddev
x  10           4          11           9         8.5   2.1730675
+  10         633        1388         972         993   243.33744
*  10        1152        2109        1608      1488.3   314.80719

References: 27af5eea54d1 ("drm/i915: Move execlists irq handler to a bottom half")
Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
  drivers/gpu/drm/i915/i915_irq.c  | 11 ++--
  drivers/gpu/drm/i915/intel_lrc.c | 44 +---
  2 files changed, 26 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 3f139ff64385..8b61ebf5cb4a 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1462,22 +1462,15 @@ static void snb_gt_irq_handler(struct drm_i915_private *dev_priv,
  static void
  gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
  {
-   bool tasklet = false;
-
-   if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
+   if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
intel_engine_handle_execlists_irq(engine);
-   tasklet = true;
-   }
  
  	if (iir & GT_RENDER_USER_INTERRUPT) {

if (intel_engine_uses_guc(engine))
-   tasklet = true;
+   tasklet_hi_schedule(&engine->execlists.tasklet);
  
  		notify_ring(engine);

}
-
-   if (tasklet)
-   tasklet_hi_schedule(&engine->execlists.tasklet);
  }
  
  static void gen8_gt_irq_ack(struct drm_i915_private *i915,

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 954eb3a71051..37839d89e03a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -575,7 +575,7 @@ static void complete_preempt_context(struct intel_engine_execlists *execlists)
execlists_clear_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
  }
  
-static void __execlists_dequeue(struct intel_engine_cs *engine)

+static void execlists_dequeue(struct intel_engine_cs *engine)
  {
struct intel_engine_execlists * const execlists = &engine->execlists;
struct execlist_port *port = execlists->port;
@@ -587,7 +587,11 @@ static void __execlists_dequeue(struct intel_engine_cs *engine)
  
	lockdep_assert_held(&engine->timeline.lock);
  
-	/* Hardware submission is through 2 ports. Conceptually each port

+   if (execlists_is_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT))
+   return;
+
+   /*
+* 

[Intel-gfx] [PATCH 18/19] drm/i915/execlists: Direct submission (avoid tasklet/ksoftirqd)

2018-05-17 Thread Chris Wilson
Back in commit 27af5eea54d1 ("drm/i915: Move execlists irq handler to a
bottom half"), we came to the conclusion that running our CSB processing
and ELSP submission from inside the irq handler was a bad idea. A really
bad idea as we could impose nearly 1s latency on other users of the
system, on average! Deferring our work to a tasklet allowed us to do the
processing with irqs enabled, reducing the impact to an average of about
50us.

We have since eradicated the use of forcewaked mmio from inside the CSB
processing and ELSP submission, bringing the impact down to around 5us
(on Kabylake); an order of magnitude better than our measurements 2
years ago on Broadwell and only about 2x worse on average than the
gem_syslatency on an unladen system.

Comparing the impact on the maximum latency observed over a 120s interval,
repeated several times (using gem_syslatency, similar to RT's cyclictest)
while the system is fully laden with i915 nops, we see that direct
submission definitely worsens the response but not to the same outlandish
degree as before.

x Unladen baseline
+ Using tasklet
* Direct submission

+------------------------------------+
|xx x  +++++ +   *  * *   ** *** *  *|
||A|  |__AM__|   |_A_M___|           |
+------------------------------------+
    N         Min         Max      Median         Avg      Stddev
x  10           5          18          10         9.3   3.6530049
+  10          72         120         108       102.9   15.758243
*  10         255         348         316       305.7    28.74814

And with a background load

+------------------------------------+
|x  +   *  *                         |
|x+ + + + +  + +* * ** ++  * *   *  *|
|A|___A_|__|___A___M__|              |
+------------------------------------+
    N         Min         Max      Median         Avg      Stddev
x  10           4          11           9         8.5   2.1730675
+  10         633        1388         972         993   243.33744
*  10        1152        2109        1608      1488.3   314.80719

References: 27af5eea54d1 ("drm/i915: Move execlists irq handler to a bottom half")
Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
---
 drivers/gpu/drm/i915/i915_irq.c  | 11 ++--
 drivers/gpu/drm/i915/intel_lrc.c | 44 +---
 2 files changed, 26 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 3f139ff64385..8b61ebf5cb4a 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1462,22 +1462,15 @@ static void snb_gt_irq_handler(struct drm_i915_private *dev_priv,
 static void
 gen8_cs_irq_handler(struct intel_engine_cs *engine, u32 iir)
 {
-   bool tasklet = false;
-
-   if (iir & GT_CONTEXT_SWITCH_INTERRUPT) {
+   if (iir & GT_CONTEXT_SWITCH_INTERRUPT)
intel_engine_handle_execlists_irq(engine);
-   tasklet = true;
-   }
 
if (iir & GT_RENDER_USER_INTERRUPT) {
if (intel_engine_uses_guc(engine))
-   tasklet = true;
+   tasklet_hi_schedule(&engine->execlists.tasklet);
 
notify_ring(engine);
}
-
-   if (tasklet)
-   tasklet_hi_schedule(&engine->execlists.tasklet);
 }
 
 static void gen8_gt_irq_ack(struct drm_i915_private *i915,
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 954eb3a71051..37839d89e03a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -575,7 +575,7 @@ static void complete_preempt_context(struct intel_engine_execlists *execlists)
execlists_clear_active(execlists, EXECLISTS_ACTIVE_PREEMPT);
 }
 
-static void __execlists_dequeue(struct intel_engine_cs *engine)
+static void execlists_dequeue(struct intel_engine_cs *engine)
 {
struct intel_engine_execlists * const execlists = &engine->execlists;
struct execlist_port *port = execlists->port;
@@ -587,7 +587,11 @@ static void __execlists_dequeue(struct intel_engine_cs *engine)
 
	lockdep_assert_held(&engine->timeline.lock);
 
-   /* Hardware submission is through 2 ports. Conceptually each port
+   if (execlists_is_active(&engine->execlists, EXECLISTS_ACTIVE_PREEMPT))
+   return;
+
+   /*
+* Hardware submission is through 2 ports. Conceptually each port
 * has a (RING_START, RING_HEAD, RING_TAIL) tuple. RING_START is
 * static for a context, and unique to each, so we only execute
 * requests belonging to a single