Re: [Mesa-dev] [PATCH] RFC: Extend IMG_context_priority with NV_context_priority_realtime

2018-04-06 Thread Ben Widawsky

On 18-03-31 12:00:16, Chris Wilson wrote:
> Quoting Kenneth Graunke (2018-03-30 19:20:57)
> > On Friday, March 30, 2018 7:40:13 AM PDT Chris Wilson wrote:
> > > For i915, we are proposing to use a quality-of-service parameter in
> > > addition to that of just a priority that usurps everyone. Due to our HW,
> > > preemption may not be immediate and will be forced to wait until an
> > > uncooperative process hits an arbitration point. To prevent that unduly
> > > impacting the privileged RealTime context, we back up the preemption
> > > request with a timeout to reset the GPU and forcibly evict the GPU hog
> > > in order to execute the new context.
> > 
> > I am strongly against exposing this in general.  Performing a GPU reset
> > in the middle of a batch can completely screw up whatever application
> > was running.  If the application is using robustness extensions, we may
> > be forced to return GL_DEVICE_LOST, causing the application to have to
> > recreate their entire GL context and start over.  If not, we may try to
> > let them limp on(*) - and hope they didn't get too badly damaged by some
> > of their commands not executing, or executing twice (if the kernel tries
> > to resubmit it).  But it may very well cause the app to misrender, or
> > even crash.
> 
> Yes, I think the revulsion has been universal. However, as a
> quality-of-service guarantee, I can understand the appeal. The
> difference is that instead of allowing a DoS for 6s or so as we
> currently allow, we allow that to be specified by the context. As it
> does allow one context to impact another, I want it locked down to
> privileged processes. I have been using CAP_SYS_ADMIN as the potential
> to do harm is even greater than exploiting the weak scheduler by
> changing priority.

I'm not terribly worried about this on our hardware for 3D. Today there is
exactly one case I can think of where this would happen: a sufficiently
long-running shader on a sufficiently large triangle.

The concern I have is about compute, where I think we don't do preemption
nearly as well.

> > This seems like a crazy plan to me.  Scheduling has never been allowed
> > to just kill random processes.
> 
> That's not strictly true: processes have limits which, if exceeded, get
> them killed. On the CPU, preemption is much better; the issue of
> unyielding processes is pretty much limited to the kernel, where we can
> run the NMI watchdog to kill broken code.
> 
> > If you ever hit that case, then your
> > customers will see random application crashes, glitches, GPU hangs,
> > and be pretty unhappy with the result.  And not because something was
> > broken, but because somebody was impatient and an app was a bit slow.
> 
> Yes, that is their decision. Kill random apps so that their
> uber-critical interface updates the clock.
> 
> > If you have work that is so mission critical, maybe you shouldn't run it
> > on the same machine as one that runs applications which you care so
> > little about that you're willing to watch them crash and burn.  Don't
> > run the entertainment system on the flight computer, so to speak.
> 
> You are not the first to say that ;)
> -Chris





Re: [Mesa-dev] [PATCH] RFC: Extend IMG_context_priority with NV_context_priority_realtime

2018-04-02 Thread Daniel Stone
On 30 March 2018 at 19:20, Kenneth Graunke  wrote:
> On Friday, March 30, 2018 7:40:13 AM PDT Chris Wilson wrote:
>> For i915, we are proposing to use a quality-of-service parameter in
>> addition to that of just a priority that usurps everyone. Due to our HW,
>> preemption may not be immediate and will be forced to wait until an
>> uncooperative process hits an arbitration point. To prevent that unduly
>> impacting the privileged RealTime context, we back up the preemption
>> request with a timeout to reset the GPU and forcibly evict the GPU hog
>> in order to execute the new context.
>
> I am strongly against exposing this in general.  Performing a GPU reset
> in the middle of a batch can completely screw up whatever application
> was running.  If the application is using robustness extensions, we may
> be forced to return GL_DEVICE_LOST, causing the application to have to
> recreate their entire GL context and start over.  If not, we may try to
> let them limp on(*) - and hope they didn't get too badly damaged by some
> of their commands not executing, or executing twice (if the kernel tries
> to resubmit it).  But it may very well cause the app to misrender, or
> even crash.
>
> This seems like a crazy plan to me.  Scheduling has never been allowed
> to just kill random processes.  If you ever hit that case, then your
> customers will see random application crashes, glitches, GPU hangs,
> and be pretty unhappy with the result.  And not because something was
> broken, but because somebody was impatient and an app was a bit slow.
>
> If you have work that is so mission critical, maybe you shouldn't run it
> on the same machine as one that runs applications which you care so
> little about that you're willing to watch them crash and burn.  Don't
> run the entertainment system on the flight computer, so to speak.

I don't know what the automotive equivalent of 'that boat has already
sailed' is, but that car has already driven (under the control of those
guys in Wired). For better or worse, having infotainment and cluster UI
run on a single piece of silicon is incredibly common nowadays.
Virtualisation platforms have been big business for a while now, and
GPU sharing is absolutely something which is happening as part of that.

Cheers,
Daniel


Re: [Mesa-dev] [PATCH] RFC: Extend IMG_context_priority with NV_context_priority_realtime

2018-03-31 Thread Chris Wilson
Quoting Kenneth Graunke (2018-03-31 20:29:28)
> On Saturday, March 31, 2018 5:56:57 AM PDT Chris Wilson wrote:
> > Quoting Chris Wilson (2018-03-31 12:00:16)
> > > Quoting Kenneth Graunke (2018-03-30 19:20:57)
> > > > On Friday, March 30, 2018 7:40:13 AM PDT Chris Wilson wrote:
> > > > > For i915, we are proposing to use a quality-of-service parameter in
> > > > > addition to that of just a priority that usurps everyone. Due to our HW,
> > > > > preemption may not be immediate and will be forced to wait until an
> > > > > uncooperative process hits an arbitration point. To prevent that unduly
> > > > > impacting the privileged RealTime context, we back up the preemption
> > > > > request with a timeout to reset the GPU and forcibly evict the GPU hog
> > > > > in order to execute the new context.
> > > > 
> > > > I am strongly against exposing this in general.  Performing a GPU reset
> > > > in the middle of a batch can completely screw up whatever application
> > > > was running.  If the application is using robustness extensions, we may
> > > > be forced to return GL_DEVICE_LOST, causing the application to have to
> > > > recreate their entire GL context and start over.  If not, we may try to
> > > > let them limp on(*) - and hope they didn't get too badly damaged by some
> > > > of their commands not executing, or executing twice (if the kernel tries
> > > > to resubmit it).  But it may very well cause the app to misrender, or
> > > > even crash.
> > > 
> > > Yes, I think the revulsion has been universal. However, as a
> > > quality-of-service guarantee, I can understand the appeal. The
> > > difference is that instead of allowing a DoS for 6s or so as we
> > > currently allow, we allow that to be specified by the context. As it
> > > does allow one context to impact another, I want it locked down to
> > > privileged processes. I have been using CAP_SYS_ADMIN as the potential
> > > to do harm is even greater than exploiting the weak scheduler by
> > > changing priority.
> 
> Right...I was thinking perhaps a tunable to reduce the 6s would do the
> trick, and be much less complicated...but perhaps you want to let it go
> longer when there isn't super-critical work to do.

If (mid-object) preemption worked properly, we wouldn't see many GPU
hangs at all, depending on how free the compositor is to inject work. Oh
boy, that suggests we need to rethink the current hangcheck.

Bring on timeslicing.

> > Also to add further insult to injury, we might want to force GPU clocks
> > to max for the RT context (so that the context starts executing at max
> > rather than wait for the system to upclock on load). Something like,
> 
> That makes some sense - but I wonder if it wouldn't cause more battery
> burn than is necessary.  The super-critical workload may also be
> relatively simple (redrawing a clock), and so up-clocking and
> down-clocking again might hurt us...it's hard to say. :(
> 
> I also don't know what I think of this plan to let userspace control
> (restrict) the frequency.  That's been restricted to root (via sysfs)
> in the past.  But I think you're allowing it more generally now, without
> CAP_SYS_ADMIN?  It seems like there's a lot of potential for abuse.
> (Hello, benchmark mode!  Zm!)  I know it solves a problem, but it
> seems like there's got to be a better way...

It's restricting the range the system can choose, but only within the
range the sysadmin defines. The expected use case for me is actually
HTPC more than benchmark mode (what benchmark that needs max clocks
doesn't already run at them?): you have a workload that you know needs
only a narrow band of frequencies, you want to conserve energy by not
overclocking, and you have a good idea of the minimum required to avoid
frame drops. Tricking the system into running at high clocks isn't that
hard today.

It just happens that historically RT processes force max CPU clocks, and
for something that demands a low latency QoS I expect to also have low
latency tolerance throughout the pipeline.
-Chris


Re: [Mesa-dev] [PATCH] RFC: Extend IMG_context_priority with NV_context_priority_realtime

2018-03-31 Thread Kenneth Graunke
On Saturday, March 31, 2018 5:56:57 AM PDT Chris Wilson wrote:
> Quoting Chris Wilson (2018-03-31 12:00:16)
> > Quoting Kenneth Graunke (2018-03-30 19:20:57)
> > > On Friday, March 30, 2018 7:40:13 AM PDT Chris Wilson wrote:
> > > > For i915, we are proposing to use a quality-of-service parameter in
> > > > addition to that of just a priority that usurps everyone. Due to our HW,
> > > > preemption may not be immediate and will be forced to wait until an
> > > > uncooperative process hits an arbitration point. To prevent that unduly
> > > > impacting the privileged RealTime context, we back up the preemption
> > > > request with a timeout to reset the GPU and forcibly evict the GPU hog
> > > > in order to execute the new context.
> > > 
> > > I am strongly against exposing this in general.  Performing a GPU reset
> > > in the middle of a batch can completely screw up whatever application
> > > was running.  If the application is using robustness extensions, we may
> > > be forced to return GL_DEVICE_LOST, causing the application to have to
> > > recreate their entire GL context and start over.  If not, we may try to
> > > let them limp on(*) - and hope they didn't get too badly damaged by some
> > > of their commands not executing, or executing twice (if the kernel tries
> > > to resubmit it).  But it may very well cause the app to misrender, or
> > > even crash.
> > 
> > Yes, I think the revulsion has been universal. However, as a
> > quality-of-service guarantee, I can understand the appeal. The
> > difference is that instead of allowing a DoS for 6s or so as we
> > currently allow, we allow that to be specified by the context. As it
> > does allow one context to impact another, I want it locked down to
> > privileged processes. I have been using CAP_SYS_ADMIN as the potential
> > to do harm is even greater than exploiting the weak scheduler by
> > changing priority.

Right...I was thinking perhaps a tunable to reduce the 6s would do the
trick, and be much less complicated...but perhaps you want to let it go
longer when there isn't super-critical work to do.

> Also to add further insult to injury, we might want to force GPU clocks
> to max for the RT context (so that the context starts executing at max
> rather than wait for the system to upclock on load). Something like,

That makes some sense - but I wonder if it wouldn't cause more battery
burn than is necessary.  The super-critical workload may also be
relatively simple (redrawing a clock), and so up-clocking and
down-clocking again might hurt us...it's hard to say. :(

I also don't know what I think of this plan to let userspace control
(restrict) the frequency.  That's been restricted to root (via sysfs)
in the past.  But I think you're allowing it more generally now, without
CAP_SYS_ADMIN?  It seems like there's a lot of potential for abuse.
(Hello, benchmark mode!  Zm!)  I know it solves a problem, but it
seems like there's got to be a better way...

--Ken




Re: [Mesa-dev] [PATCH] RFC: Extend IMG_context_priority with NV_context_priority_realtime

2018-03-31 Thread Chris Wilson
Quoting Chris Wilson (2018-03-31 12:00:16)
> Quoting Kenneth Graunke (2018-03-30 19:20:57)
> > On Friday, March 30, 2018 7:40:13 AM PDT Chris Wilson wrote:
> > > For i915, we are proposing to use a quality-of-service parameter in
> > > addition to that of just a priority that usurps everyone. Due to our HW,
> > > preemption may not be immediate and will be forced to wait until an
> > > uncooperative process hits an arbitration point. To prevent that unduly
> > > impacting the privileged RealTime context, we back up the preemption
> > > request with a timeout to reset the GPU and forcibly evict the GPU hog
> > > in order to execute the new context.
> > 
> > I am strongly against exposing this in general.  Performing a GPU reset
> > in the middle of a batch can completely screw up whatever application
> > was running.  If the application is using robustness extensions, we may
> > be forced to return GL_DEVICE_LOST, causing the application to have to
> > recreate their entire GL context and start over.  If not, we may try to
> > let them limp on(*) - and hope they didn't get too badly damaged by some
> > of their commands not executing, or executing twice (if the kernel tries
> > to resubmit it).  But it may very well cause the app to misrender, or
> > even crash.
> 
> Yes, I think the revulsion has been universal. However, as a
> quality-of-service guarantee, I can understand the appeal. The
> difference is that instead of allowing a DoS for 6s or so as we
> currently allow, we allow that to be specified by the context. As it
> does allow one context to impact another, I want it locked down to
> privileged processes. I have been using CAP_SYS_ADMIN as the potential
> to do harm is even greater than exploiting the weak scheduler by
> changing priority.

Also to add further insult to injury, we might want to force GPU clocks
to max for the RT context (so that the context starts executing at max
rather than wait for the system to upclock on load). Something like,

diff --git a/src/mesa/drivers/dri/i965/brw_bufmgr.c b/src/mesa/drivers/dri/i965/brw_bufmgr.c
index b080c4c58f1..461b76b64c9 100644
--- a/src/mesa/drivers/dri/i965/brw_bufmgr.c
+++ b/src/mesa/drivers/dri/i965/brw_bufmgr.c
@@ -1370,6 +1370,36 @@ brw_hw_context_set_preempt_timeout(struct brw_bufmgr *bufmgr,
    return err;
 }
 
+int
+brw_hw_context_force_maximum_frequency(struct brw_bufmgr *bufmgr,
+                                       uint32_t ctx_id)
+{
+#define I915_CONTEXT_PARAM_FREQUENCY 0x8
+#define   I915_CONTEXT_MIN_FREQUENCY(x) ((x) & 0xffffffff)
+#define   I915_CONTEXT_MAX_FREQUENCY(x) ((x) >> 32)
+#define   I915_CONTEXT_SET_FREQUENCY(min, max) ((uint64_t)(max) << 32 | (min))
+
+   struct drm_i915_gem_context_param p = {
+      .ctx_id = ctx_id,
+      .param = I915_CONTEXT_PARAM_FREQUENCY,
+   };
+
+   /* First find the HW limits */
+   if (drmIoctl(bufmgr->fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &p))
+      return -errno;
+
+   /* Then specify that the context's minimum frequency is the HW max,
+    * forcing the context to only run at the maximum frequency, as
+    * restricted by the global user limits.
+    */
+   p.value = I915_CONTEXT_SET_FREQUENCY(I915_CONTEXT_MAX_FREQUENCY(p.value),
+                                        I915_CONTEXT_MAX_FREQUENCY(p.value));
+   if (drmIoctl(bufmgr->fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p))
+      return -errno;
+
+   return 0;
+}
+
 void
 brw_destroy_hw_context(struct brw_bufmgr *bufmgr, uint32_t ctx_id)
 {
diff --git a/src/mesa/drivers/dri/i965/brw_bufmgr.h b/src/mesa/drivers/dri/i965/brw_bufmgr.h
index a493b7018af..07dc9ced57a 100644
--- a/src/mesa/drivers/dri/i965/brw_bufmgr.h
+++ b/src/mesa/drivers/dri/i965/brw_bufmgr.h
@@ -320,6 +320,9 @@ int brw_hw_context_set_preempt_timeout(struct brw_bufmgr *bufmgr,
                                        uint32_t ctx_id,
                                        uint64_t timeout_ns);
 
+int brw_hw_context_force_maximum_frequency(struct brw_bufmgr *bufmgr,
+                                           uint32_t ctx_id);
+
 void brw_destroy_hw_context(struct brw_bufmgr *bufmgr, uint32_t ctx_id);
 
 int brw_bo_gem_export_to_prime(struct brw_bo *bo, int *prime_fd);
diff --git a/src/mesa/drivers/dri/i965/brw_context.c b/src/mesa/drivers/dri/i965/brw_context.c
index 9b84a29d4a2..0bd965043c5 100644
--- a/src/mesa/drivers/dri/i965/brw_context.c
+++ b/src/mesa/drivers/dri/i965/brw_context.c
@@ -1026,13 +1026,17 @@ brwCreateContext(gl_api api,
          intelDestroyContext(driContextPriv);
          return false;
       }
-      if (hw_priority >= GEN_CONTEXT_REALTIME_PRIORITY &&
-          brw_hw_context_set_preempt_timeout(brw->bufmgr, brw->hw_ctx,
-                                             8 * 1000 * 1000 /* 8ms */)) {
-         fprintf(stderr,
-                 "Failed to set preempt timeout for RT hardware context.\n");
-         intelDestroyContext(driContextPriv);
-         return false;
+
+      if (hw_priority >= GEN_CONTEXT_REALTIME_PRIORITY) {
+

Re: [Mesa-dev] [PATCH] RFC: Extend IMG_context_priority with NV_context_priority_realtime

2018-03-31 Thread Chris Wilson
Quoting Kenneth Graunke (2018-03-30 19:20:57)
> On Friday, March 30, 2018 7:40:13 AM PDT Chris Wilson wrote:
> > For i915, we are proposing to use a quality-of-service parameter in
> > addition to that of just a priority that usurps everyone. Due to our HW,
> > preemption may not be immediate and will be forced to wait until an
> > uncooperative process hits an arbitration point. To prevent that unduly
> > impacting the privileged RealTime context, we back up the preemption
> > request with a timeout to reset the GPU and forcibly evict the GPU hog
> > in order to execute the new context.
> 
> I am strongly against exposing this in general.  Performing a GPU reset
> in the middle of a batch can completely screw up whatever application
> was running.  If the application is using robustness extensions, we may
> be forced to return GL_DEVICE_LOST, causing the application to have to
> recreate their entire GL context and start over.  If not, we may try to
> let them limp on(*) - and hope they didn't get too badly damaged by some
> of their commands not executing, or executing twice (if the kernel tries
> to resubmit it).  But it may very well cause the app to misrender, or
> even crash.

Yes, I think the revulsion has been universal. However, as a
quality-of-service guarantee, I can understand the appeal. The
difference is that instead of allowing a DoS for 6s or so as we
currently allow, we allow that to be specified by the context. As it
does allow one context to impact another, I want it locked down to
privileged processes. I have been using CAP_SYS_ADMIN as the potential
to do harm is even greater than exploiting the weak scheduler by
changing priority.
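
A sketch of the kernel-side gate being described here, not the actual
i915 patch; the helper name and the context field are hypothetical,
only the capable() check is the point:

   #include <linux/capability.h>

   static int ctx_set_preempt_timeout(struct i915_gem_context *ctx,
                                      u64 timeout_ns)
   {
           /* Tampering with QoS can harm other clients: privileged only. */
           if (!capable(CAP_SYS_ADMIN))
                   return -EPERM;

           ctx->preempt_timeout_ns = timeout_ns; /* hypothetical field */
           return 0;
   }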
 
> This seems like a crazy plan to me.  Scheduling has never been allowed
> to just kill random processes.

That's not strictly true: processes have limits which, if exceeded, get
them killed. On the CPU, preemption is much better; the issue of
unyielding processes is pretty much limited to the kernel, where we can
run the NMI watchdog to kill broken code.
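
The CPU-side precedent, as a standalone example: a process that exceeds
its hard RLIMIT_CPU is killed by the kernel (SIGXCPU at the soft limit,
SIGKILL at the hard limit).

   #include <stdio.h>
   #include <sys/resource.h>

   int main(void)
   {
      struct rlimit rl = { .rlim_cur = 1, .rlim_max = 2 }; /* CPU seconds */

      if (setrlimit(RLIMIT_CPU, &rl)) {
         perror("setrlimit");
         return 1;
      }

      for (;;) /* spin: SIGXCPU after ~1s, SIGKILL after ~2s */
         ;
   }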

> If you ever hit that case, then your
> customers will see random application crashes, glitches, GPU hangs,
> and be pretty unhappy with the result.  And not because something was
> broken, but because somebody was impatient and an app was a bit slow.

Yes, that is their decision. Kill random apps so that their
uber-critical interface updates the clock.
 
> If you have work that is so mission critical, maybe you shouldn't run it
> on the same machine as one that runs applications which you care so
> little about that you're willing to watch them crash and burn.  Don't
> run the entertainment system on the flight computer, so to speak.

You are not the first to say that ;)
-Chris


Re: [Mesa-dev] [PATCH] RFC: Extend IMG_context_priority with NV_context_priority_realtime

2018-03-30 Thread Kenneth Graunke
On Friday, March 30, 2018 7:40:13 AM PDT Chris Wilson wrote:
> NV_context_priority_realtime
> https://www.khronos.org/registry/EGL/extensions/NV/EGL_NV_context_priority_realtime.txt
> 
> "This extension allows an EGLContext to be created with one extra
> priority level in addition to three priority levels that are part of
> EGL_IMG_context_priority extension.
> 
> This new level has extra privileges that are not available to other three
> levels. Some of the privileges may include:
> - Allow realtime priority to only few contexts
> - Allow realtime priority only to trusted applications
> - Make sure realtime priority contexts are executed immediately
> - Preempt any current context running on GPU on submission of
>   commands for realtime context"
> 
> At its most basic, it just adds an extra enum and level into the existing
> context priority framework.
> 
> For i915, we are proposing to use a quality-of-service parameter in
> addition to that of just a priority that usurps everyone. Due to our HW,
> preemption may not be immediate and will be forced to wait until an
> uncooperative process hits an arbitration point. To prevent that unduly
> impacting the privileged RealTime context, we back up the preemption
> request with a timeout to reset the GPU and forcibly evict the GPU hog
> in order to execute the new context.

I am strongly against exposing this in general.  Performing a GPU reset
in the middle of a batch can completely screw up whatever application
was running.  If the application is using robustness extensions, we may
be forced to return GL_DEVICE_LOST, causing the application to have to
recreate their entire GL context and start over.  If not, we may try to
let them limp on(*) - and hope they didn't get too badly damaged by some
of their commands not executing, or executing twice (if the kernel tries
to resubmit it).  But it may very well cause the app to misrender, or
even crash.
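
For concreteness, the robustness path from the application's side looks
roughly like this, assuming an ARB_robustness context with reset
notification enabled; the two recovery helpers are hypothetical:

   GLenum status = glGetGraphicsResetStatusARB();
   if (status != GL_NO_ERROR) {
      /* GUILTY/INNOCENT/UNKNOWN_CONTEXT_RESET_ARB: the context is lost
       * and every GL object in it must be assumed gone. */
      destroy_all_gl_state();            /* hypothetical */
      recreate_context_and_resources();  /* hypothetical */
   }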

This seems like a crazy plan to me.  Scheduling has never been allowed
to just kill random processes.  If you ever hit that case, then your
customers will see random application crashes, glitches, GPU hangs,
and be pretty unhappy with the result.  And not because something was
broken, but because somebody was impatient and an app was a bit slow.

If you have work that is so mission critical, maybe you shouldn't run it
on the same machine as one that runs applications which you care so
little about that you're willing to watch them crash and burn.  Don't
run the entertainment system on the flight computer, so to speak.

At any rate, I suspect you wouldn't go down this path unless you
absolutely had to, and there was some incredible forcing function at
play.  Which is why I said "against exposing this in general".  Maybe
you have a customer that's willing to play with fire.  I just wanted
to make it very abundantly clear that this is hazardous.

--Ken

(*) We don't actually let things limp along after a bad hang today; if
execbuf fails, we just exit(1) and let it crash and burn.  We really
should fix that (but I need to fix some state tracking bugs first).




[Mesa-dev] [PATCH] RFC: Extend IMG_context_priority with NV_context_priority_realtime

2018-03-30 Thread Chris Wilson
NV_context_priority_realtime
https://www.khronos.org/registry/EGL/extensions/NV/EGL_NV_context_priority_realtime.txt

"This extension allows an EGLContext to be created with one extra
priority level in addition to three priority levels that are part of
EGL_IMG_context_priority extension.

This new level has extra privileges that are not available to other three
levels. Some of the privileges may include:
- Allow realtime priority to only few contexts
- Allow realtime priority only to trusted applications
- Make sure realtime priority contexts are executed immediately
- Preempt any current context running on GPU on submission of
  commands for realtime context"

At its most basic, it just adds an extra enum and level into the existing
context priority framework.
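
From the client's side, that is the existing IMG attribute with the new
enum (a minimal sketch, assuming dpy and config are already initialised
and the extension is advertised):

   static const EGLint attribs[] = {
      EGL_CONTEXT_PRIORITY_LEVEL_IMG, EGL_CONTEXT_PRIORITY_REALTIME_NV,
      EGL_NONE
   };
   EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, attribs);
   /* If the caller lacks the required privilege, creation may fail or
    * fall back to a lower priority level. */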

For i915, we are proposing to use a quality-of-service parameter in
addition to that of just a priority that usurps everyone. Due to our HW,
preemption may not be immediate and will be forced to wait until an
uncooperative process hits an arbitration point. To prevent that unduly
impacting the privileged RealTime context, we back up the preemption
request with a timeout to reset the GPU and forcibly evict the GPU hog
in order to execute the new context.
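
A sketch of the userspace half of that backstop, modelled on the
brw_bufmgr.c changes in the patch below; the context-param number is an
assumption from this proposal, not merged uAPI:

   #define I915_CONTEXT_PARAM_PREEMPT_TIMEOUT 0x7 /* assumed value */

   int
   brw_hw_context_set_preempt_timeout(struct brw_bufmgr *bufmgr,
                                      uint32_t ctx_id, uint64_t timeout_ns)
   {
      struct drm_i915_gem_context_param p = {
         .ctx_id = ctx_id,
         .param = I915_CONTEXT_PARAM_PREEMPT_TIMEOUT,
         /* Reset the hog if it hasn't yielded within timeout_ns. */
         .value = timeout_ns,
      };

      if (drmIoctl(bufmgr->fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p))
         return -errno;

      return 0;
   }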

Opens:

 - How is include/EGL/eglext.h meant to be updated to include the new
   extension?

 - screen->priority_mask in freedreno_screen.c (Rob?)

References: 95ecf3df6237 ("egl: Support IMG_context_priority")
Signed-off-by: Chris Wilson 
Cc: Rob Clark 
Cc: Ben Widawsky 
Cc: Emil Velikov 
Cc: Eric Engestrom 
Cc: Kenneth Graunke 
Cc: Joonas Lahtinen 
---
 docs/relnotes/18.1.0.html                    |  1 +
 include/EGL/eglext.h                         |  5 +
 include/GL/internal/dri_interface.h          |  2 ++
 src/egl/drivers/dri2/egl_dri2.c              |  3 +++
 src/egl/main/eglcontext.c                    |  3 +++
 src/egl/main/egldisplay.h                    |  7 ---
 src/gallium/include/pipe/p_defines.h         | 12 +---
 src/gallium/include/state_tracker/st_api.h   |  1 +
 src/gallium/state_trackers/dri/dri_context.c |  3 +++
 src/mesa/drivers/dri/i965/brw_bufmgr.c       | 19 +++
 src/mesa/drivers/dri/i965/brw_bufmgr.h       |  3 +++
 src/mesa/drivers/dri/i965/brw_context.c      | 11 +++
 src/mesa/drivers/dri/i965/intel_screen.c     |  6 ++
 src/mesa/state_tracker/st_manager.c          |  2 ++
 14 files changed, 72 insertions(+), 6 deletions(-)

diff --git a/docs/relnotes/18.1.0.html b/docs/relnotes/18.1.0.html
index 3e119078731..43f29932e39 100644
--- a/docs/relnotes/18.1.0.html
+++ b/docs/relnotes/18.1.0.html
@@ -51,6 +51,7 @@ Note: some of the new features are only available with certain drivers.
 GL_EXT_shader_framebuffer_fetch on i965 on desktop GL (GLES was already supported)
 GL_EXT_shader_framebuffer_fetch_non_coherent on i965
 Disk shader cache support for i965 enabled by default
+EGL_NV_context_priority_realtime on i965, freedreno
 
 
 Bug fixes
diff --git a/include/EGL/eglext.h b/include/EGL/eglext.h
index 2f990cc54d6..068dbb481c2 100644
--- a/include/EGL/eglext.h
+++ b/include/EGL/eglext.h
@@ -918,6 +918,11 @@ EGLAPI EGLSurface EGLAPIENTRY eglCreatePixmapSurfaceHI (EGLDisplay dpy, EGLConfi
 #define EGL_CONTEXT_PRIORITY_LOW_IMG      0x3103
 #endif /* EGL_IMG_context_priority */
 
+#ifndef EGL_NV_context_priority_realtime
+#define EGL_NV_context_priority_realtime 1
+#define EGL_CONTEXT_PRIORITY_REALTIME_NV  0x3357
+#endif /* EGL_NV_context_priority_realtime */
+
 #ifndef EGL_IMG_image_plane_attribs
 #define EGL_IMG_image_plane_attribs 1
 #define EGL_NATIVE_BUFFER_MULTIPLANE_SEPARATE_IMG 0x3105
diff --git a/include/GL/internal/dri_interface.h b/include/GL/internal/dri_interface.h
index 4f4795c7ae3..8be0d89e6a6 100644
--- a/include/GL/internal/dri_interface.h
+++ b/include/GL/internal/dri_interface.h
@@ -1129,6 +1129,7 @@ struct __DRIdri2LoaderExtensionRec {
 #define __DRI_CTX_PRIORITY_LOW      0
 #define __DRI_CTX_PRIORITY_MEDIUM   1
 #define __DRI_CTX_PRIORITY_HIGH     2
+#define __DRI_CTX_PRIORITY_REALTIME 3
 
 /**
  * \name Context release behaviors.
@@ -1855,6 +1856,7 @@ typedef struct __DRIDriverVtableExtensionRec {
 #define   __DRI2_RENDERER_HAS_CONTEXT_PRIORITY_LOW      (1 << 0)
 #define   __DRI2_RENDERER_HAS_CONTEXT_PRIORITY_MEDIUM   (1 << 1)
 #define   __DRI2_RENDERER_HAS_CONTEXT_PRIORITY_HIGH     (1 << 2)
+#define   __DRI2_RENDERER_HAS_CONTEXT_PRIORITY_REALTIME (1 << 3)
 
 typedef struct __DRI2rendererQueryExtensionRec __DRI2rendererQueryExtension;
 struct __DRI2rendererQueryExtensionRec {
diff --git a/src/egl/drivers/dri2/egl_dri2.c b/src/egl/drivers/dri2/egl_dri2.c
index 45d0c7275c5..4bed1aa2c6f 100644
---