>> Context of the question was whether you need to change slice configuration 
>> for a single GEM context between submitting batch buffers?

When we create a context we know the optimal slice count for it, and that 
optimal point does not change for the context in unperturbed conditions, i.e. 
when the context runs alone. However, there are critical use cases where we 
never run a single context; instead we run a few contexts in parallel, each 
with its own optimal slice count operating point. The major question then 
becomes: what is the cost of a context switch when it involves a slice 
configuration change, i.e. powering slices on or off? In the ideal situation 
where that cost is 0, we would not need any slice reconfiguration for a single 
GEM context between submitting batch buffers. The problem is that the cost is 
far from 0, and it is intolerable in the worst case, where we would switch at 
every batch buffer. As a result, we are forced to have some negotiation 
channel between the different contexts and make them agree on a single slice 
configuration which persists for a reasonably long period of time, so that the 
associated cost is negligible overall. During this period we submit a number 
of batch buffers before the next reconfiguration attempt. So that is the 
situation in which we need to reconfigure the slice configuration for a single 
GEM context between submitting batches.
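To make the negotiation idea concrete, here is a minimal userspace sketch (not i915 code; `negotiate_slices`, `slice_arbiter`, and `arbiter_submit` are hypothetical names): contexts vote for a slice count, the arbiter picks one shared configuration, and hysteresis ensures a reconfiguration only happens after enough batches have amortised the previous one.

```c
#include <assert.h>
#include <stddef.h>

/* Pick a single slice count all contexts agree to run with. The policy
 * here is simply the maximum request, so no context is starved of EUs;
 * a real policy might weight by context priority or utilisation. */
static int negotiate_slices(const int *requests, size_t n)
{
	int best = 1;
	size_t i;

	for (i = 0; i < n; i++)
		if (requests[i] > best)
			best = requests[i];
	return best;
}

/* Arbiter state: current configuration plus a hysteresis window measured
 * in batch submissions, so the cost of powering slices on/off stays
 * negligible relative to the work submitted in between. */
struct slice_arbiter {
	int cur_slices;
	unsigned int batches_since_change;
	unsigned int min_batches_between_changes;
};

/* Called once per batch submission; returns the slice count to program.
 * The configuration only changes once enough batches have run since the
 * last change. */
static int arbiter_submit(struct slice_arbiter *a,
			  const int *requests, size_t n)
{
	int wanted = negotiate_slices(requests, n);

	a->batches_since_change++;
	if (wanted != a->cur_slices &&
	    a->batches_since_change >= a->min_batches_between_changes) {
		a->cur_slices = wanted;
		a->batches_since_change = 0;
	}
	return a->cur_slices;
}
```

With a window of 3 batches, two contexts requesting 1 and 4 slices keep running on the current 2-slice configuration for the first two submissions and only switch to 4 slices on the third, so the power on/off cost is paid once per window rather than per batch.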

Dmitry.

-----Original Message-----
From: Tvrtko Ursulin [mailto:tvrtko.ursu...@linux.intel.com] 
Sent: Tuesday, May 8, 2018 1:25 AM
To: Rogozhkin, Dmitry V <dmitry.v.rogozh...@intel.com>; Landwerlin, Lionel G 
<lionel.g.landwer...@intel.com>; intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH 8/8] drm/i915: Expose RPCS (SSEU) configuration 
to userspace


On 08/05/2018 05:04, Rogozhkin, Dmitry V wrote:
>>>  I'm pretty sure Dmitry wants dynamic configurations.
> 
> Yes, I'm afraid we really need dynamic slice configurations for media.

Context of the question was whether you need to change slice configuration for 
a single GEM context between submitting batch buffers?

Regards,

Tvrtko


> *From:*Landwerlin, Lionel G
> *Sent:* Friday, May 4, 2018 9:25 AM
> *To:* Tvrtko Ursulin <tvrtko.ursu...@linux.intel.com>; 
> intel-gfx@lists.freedesktop.org; Rogozhkin, Dmitry V 
> <dmitry.v.rogozh...@intel.com>
> *Subject:* Re: [Intel-gfx] [PATCH 8/8] drm/i915: Expose RPCS (SSEU) 
> configuration to userspace
> 
> On 03/05/18 18:18, Tvrtko Ursulin wrote:
> 
>            +int intel_lr_context_set_sseu(struct i915_gem_context *ctx,
>         +                  struct intel_engine_cs *engine,
>         +                  struct i915_gem_context_sseu *sseu)
>         +{
>         +    struct drm_i915_private *dev_priv = ctx->i915;
>         +    struct intel_context *ce;
>         +    enum intel_engine_id id;
>         +    int ret;
>         +
>         +    lockdep_assert_held(&dev_priv->drm.struct_mutex);
>         +
>         +    if (memcmp(sseu, &ctx->engine[engine->id].sseu,
>         sizeof(*sseu)) == 0)
>         +        return 0;
>         +
>         +    /*
>         +     * We can only program this on render ring.
>         +     */
>         +    ce = &ctx->engine[RCS];
>         +
>         +    if (ce->pin_count) { /* Assume that the context is active! */
>         +        ret = i915_gem_switch_to_kernel_context(dev_priv);
>         +        if (ret)
>         +            return ret;
>         +
>         +        ret = i915_gem_wait_for_idle(dev_priv,
>         +                         I915_WAIT_INTERRUPTIBLE |
>         +                         I915_WAIT_LOCKED);
> 
> 
>     Could we consider the alternative of only allowing this to be
>     configured on context create? That way we would not need to idle the
>     GPU every time userspace decides to fiddle with it. It is
>     unprivileged so quite an easy way for random app to ruin GPU
>     performance for everyone.
> 
>     Regards,
> 
>     Tvrtko
> 
> I'm pretty sure Dmitry wants dynamic configurations.
> 
> Dmitry?
> 
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
